Using Docker Instead of Vagrant for Web Dev on a Windows PC

Warning: this entry is pretty old, probably obsolete in many ways, and I’m not updating it as I’m now running Docker on my Synology NAS.

I started using Vagrant to develop Linux-based websites on my Windows desktop in 2014, as an upgrade from running XAMPP. At the time there were significant practical differences between Vagrant and Docker, but since then they’ve been moving towards each other – functionally if not in strict architectural terms – as Docker Inc. has been gobbling up components with the acquisitions of Kitematic, Tutum (now Docker Cloud), and other projects.

I’ve been reading about Docker to wrap my head around this ecosystem, and now that I’ve started using it here are my notes, updated on an ongoing basis as I go through a bunch of research/trial/error/fix phases. You’ll find plenty of practical tips to address the kind of pesky issues you run into when you really try to get a whole stack working, as opposed to just kicking the tires with a couple containers that don’t do much.

1. Docker On a Windows Desktop

The product known as Docker for Windows (July 2016 launch) has become Docker’s recommended desktop offering since it got out of private beta. However, it requires at least Windows 10 Pro, because Hyper-V virtualization is not available in Windows 10 Home. Using Hyper-V rather than VirtualBox is advertised as “faster and more reliable”, and indeed in my experience it is much faster to install images and get them running than it is to reach the functional equivalent with the Vagrant/VirtualBox combo.

Still, if you don’t have Hyper-V or want to keep using virtual machines, Docker Engine, Compose, Machine, and Kitematic, as well as Oracle VM VirtualBox, are packaged as an installer dubbed Docker Toolbox (August 2015 launch) which includes the following components:

  • Docker Engine “is a lightweight runtime and robust tooling that builds and runs your Docker containers.” The core.
  • Docker Compose “is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.” With a Dockerfile you define your app’s environment, while docker-compose.yml defines the services that make up your app. In other words this handles orchestration, i.e. what you’d do with Vagrantfiles and Puphpet. Compose was introduced as a follow-up to Fig in February 2015.
  • The Kitematic desktop client does a lot of what you’d do with Vagrant to provision your VMs, with a GUI that integrates with Docker Hub, where images can be found to instantiate containers from. It’s pretty convenient if only to grab and copy those long container IDs.
  • Docker Machine “was the only way to run Docker on Mac or Windows previous to Docker v1.12. Starting with the beta program and Docker v1.12, Docker for Mac and Docker for Windows are available as native apps and the better choice for this use case on newer desktops and laptops.” This was formerly known as boot2docker.

You’re still going to use Compose and Kitematic with Docker for Windows, so these packages overlap to some extent.

2. Traps to Avoid and Tips to Get Started on a Windows Host

If you need to learn the Docker basics from scratch, you’ll find links and videos to tutorials by topic. In this section I assume you already researched Docker 101 and I’ll only list the stumbling blocks that I had to resolve despite consulting these sources. Think of it as a Docker 103 type of class.

I ran into a bunch of vexing issues while I was climbing the learning curve, and I bet you will too. In these cases it always pays to slow down, RTFM as opposed to skim/copy/paste, and understand more deeply what you’re doing. I’ll admit to resorting to Cargo Cult Programming to try and get things done, but you have to recognize when you can’t get away with it. Yes, it means you have to learn stuff you wish was plumbing that just works. You might even have to consult books. The horror!

Back to Docker, you’ll see that whenever I can I use Compose since I hate to type a bunch of arcane command line instructions when I don’t have to. That said there’s a time and place for building your own images.

Again, this entry is a follow up to my Vagrant for Windows guide, and if you read it you’ll see I ran into some of the same issues with Docker.

2.1. Drives, Folders, Files

First off if you want to locate your project files outside of C:/Users, you need to enable Shared Drives in the Docker for Windows settings. Having failed to do that, my first contact with the freshly installed Nginx container – impressively within just minutes of installing Docker! – returned a rather hostile “403 Forbidden” message.

The Kitematic GUI does have an option to configure volumes, but it doesn’t know how to handle volume directories outside the Users directory. This is IMHO a functional flaw, but not in Docker’s opinion.  Likewise, support for Docker Compose is absent from Kitematic, though there’s no lack of demand for it. The whole toolbox is clearly still a work in progress. For alternatives, see: A Comparison of Docker GUIs.

So instead refer to your volume wherever you need it under Services in your docker-compose.yml file. This will look something like this if you want your e:/somedir Windows directory to be mapped to /var/www/html for your PHP and Nginx services:

services:
  php:
    image: wodby/wordpress-php
    volumes:
      - e:/somedir:/var/www/html    # or a relative path off the root where docker-compose.yml is located: ./somedir:/var/www/html
  nginx:
    image: wodby/wordpress-nginx
    depends_on:
      - php

Start a PowerShell command line, run docker-compose up -d to launch your containers in the background, then docker volume ls should now show your newly created volume while docker ps lists containers. Just like I warned in my Vagrant on Windows tutorial, you will have to use the command line at some point, there is no end-to-end GUI experience in this world. Just be happy you don’t have to use a Mac. (Chill out Mac users, this is gentle sarcasm.) What Chris Fidao calls “Accidental Sysadmin Syndrome” in Servers for Hackers resonates with my own experience: you end up needing some level of familiarity with system administration to be able to function as a full-stack web developer.

You want to use PowerShell and not the regular Windows command prompt, otherwise some commands won’t be recognized – e.g. docker volume rm $(docker volume ls -q -f "dangling=true") to delete the orphaned volumes which you will generate in droves during your initial experiments with Docker.

I thought I could use a Named volume to manage my persistent data, since containers are temporary, but you apparently can’t mount named volumes. Present state: confused, I need to sleep on this.
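In case it helps anyone poking at the same thing, here is what the named volume syntax looks like in a Compose file; the service and volume names below are hypothetical, not from my actual setup:

```yaml
services:
  db:
    image: mariadb:10.1
    volumes:
      - dbdata:/var/lib/mysql   # named volume, managed by Docker rather than mapped to a host path
volumes:
  dbdata:                       # top-level declaration creates the named volume
```

docker volume inspect dbdata will then show where Docker actually stores the data inside its VM.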


File permissions can be an issue, whether with Drupal or WordPress. I ran into this while trying to update or install WP plugins/themes, with WP asking me for FTP connection information. Apparently in my case /wp-content/ should be owned by www-data. (I did get WordPress to work in its own container eventually, but after some back and forth I’ve switched to setting it up as a Composer dependency.)


Docker image and container files are stored in the Moby Linux VM, so default physical storage is at c:\users\public\documents\hyper-v\virtual hard disks\ and can be changed with Hyper-V Manager at Settings > SCSI Controller > Hard Drive. The virtual eventually needs a physical embodiment. You’ll have to stop Docker and turn off the VM before you relocate the virtual hard disk elsewhere. Source.


2.2. Networking

2.2.1. Host-Docker Networking

Networking between the Windows host and Docker is handled via a vEthernet (DockerNAT) network adapter that you will see listed under Control Panel\Network and Internet\Network Connections. Underneath it uses the Hyper-V Virtual Switch, so some info will also be visible from the Hyper-V Manager app that comes with Windows. The subnet address, displayed under Network in the Docker settings, defaults to, and typically a web server running in a Docker container will be accessible from the host at http://localhost on whatever port you published.

Here’s an overview of the network layers involved:

Device             | IP address     | Network                    | Typical/default IP       | How to find it
Windows PC (host)  | External IP    | internet                   | whatever your ISP gives  |
Windows PC (host)  | Internal IP    | Local Area Network         |              | [Windows command line] ipconfig /all
Docker             | Virtual switch | Local Area Network         |                | Docker for Windows UI > Settings > Network
Docker             | Gateway        | Docker's internal network  |               | [*] See below

[*] There are plenty of ways to get networking information about Docker containers

  • From Kitematic
  • From your Windows command line:
docker network ls

docker network inspect {name of network}
  • From within a container:
[Windows CLI] docker exec -it {containername} bash

[Docker container CLI] ifconfig

TCP ports used by your containers are typically assigned in your Docker Compose file. The local:docker port mappings can be seen in Kitematic or by typing docker ps -a from the command line.

All of this networking info may look somewhat scary but it’s mostly needed just for troubleshooting purposes.


2.2.2. Accessing Your Containers From Outside Your Host

Some people edit their hosts files to assign a domain name to their PC that makes it accessible across the local network, but that’s tedious to do let alone maintain, and on most smartphones and tablets you can’t even edit the hosts file unless you root your device. Instead, use a local DNS server. I have a Synology NAS that does this very well, it’s likely there are options for Windows. As a side benefit this will also speed up your regular web browsing. For reference, here’s more on DNS and Windows.

If this sounds like too much trouble – though it’s not that complicated really – try

If you want easy access from outside your LAN (i.e. the internet), then use a DDNS service such as DuckDNS with an auto-updater, or set up a secure tunnel to localhost with Ngrok.

2.3. Databases

Database access from host apps such as PhpStorm or SQLYog requires a binding setting change from MySQL defaults (see Configuring MariaDB for Remote Client Access and the “Using a custom MySQL configuration file” section here), as well as port mapping from the db container to the host. Here’s how to do it from the docker-compose file:

  volumes:
    - ./yourhostpath/my.cnf:/etc/mysql/conf.d/my.cnf      # Configuration overrides with a my.cnf file containing "bind-address =". Path on container will vary depending on the source image.
  ports:
    - "53306:3306"    # Exposes the container to the host network on the local port of your choice
Database container made available to host apps such as SQLyog
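Putting the pieces together, a database service with both overrides might look like this; the image name and host paths are assumptions rather than a drop-in config:

```yaml
services:
  db:
    image: mariadb:10.1
    volumes:
      - ./yourhostpath/my.cnf:/etc/mysql/conf.d/my.cnf   # my.cnf contains "bind-address ="
    ports:
      - "53306:3306"   # point SQLyog or PhpStorm at localhost:53306
```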

2.4. Overriding Configuration Files

2.4.1. Config Files to/from Containers

The configuration supplement/override technique shown above for the database works more broadly and lets you finetune app settings within a container without having to create your own image.

This comes in handy for PHP if you want to turn off the opcache – which is useless in a dev environment and can drive you crazy because changes to PHP files are not reflected instantly in the browser – or change settings such as the maximum upload size.
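For instance, assuming your PHP image loads extra .ini files from the usual conf.d directory (check your image, paths vary), you can mount an override file:

```yaml
services:
  php:
    image: wodby/wordpress-php
    volumes:
      - ./php/zz-overrides.ini:/usr/local/etc/php/conf.d/zz-overrides.ini
```

with zz-overrides.ini containing something like:

```ini
opcache.enable=0          ; no opcache in dev, so file changes show up immediately
upload_max_filesize=64M
post_max_size=64M
```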

On the other hand, while you can use several conf files for Nginx and load them with Docker volumes using the same method, you cannot override Nginx settings by redeclaring them, unlike with MariaDB or PHP. If you do, Nginx will complain about duplicate directives. So in this case you’d first copy the nginx.conf file from the container:

docker cp nginxcontainername:/etc/nginx/nginx.conf localpath

then edit it and load your local version. See “complex configuration” here for details.
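The corresponding Compose entry then mounts your edited copy over the container’s default; the paths below are assumptions, adjust them to your image:

```yaml
services:
  nginx:
    image: wodby/wordpress-nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf   # your edited copy, obtained with docker cp, replaces the default
```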

2.4.2. Environment Variables

Environment variables can also be set with Compose, which is another way to define some configuration values. I’ll expand on that in the future once I’ve looked into it with more attention.
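The two basic mechanisms, as a sketch – the variable names come from the official MariaDB image docs, the rest is hypothetical:

```yaml
services:
  db:
    image: mariadb:10.1
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: wordpress
    env_file:
      - .env        # or load a batch of KEY=value lines from a file
```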

2.4.3. Config for LEMP

2.5. Consulting Container Logs

Sooner rather than later you will need to see what’s going on within your containers, usually when something doesn’t seem to work. There are many ways to do that, from the quick and dirty to the elaborate.

2.5.1. UI and Command Line Basics

Docker Compose doesn’t work (yet) in interactive mode under Windows (source – possible workaround with Winpty that I didn’t test). That said, you can use Kitematic or PhpStorm to read logs and have a real-time view into what containers are up to.

WordPress container in Kitematic

This is the equivalent of typing docker logs <container name> from the command line. This displays logs collected by Docker, whose behavior can be changed with its logging drivers.

You can also bash into a container and have a look there:

[PowerShell prompt] docker exec -it webdock bash (where webdock is the name of the container)

[Container prompt] find / -name "logs" (where logs is the name of the file/directory you're looking for)

Then use cd, ls -l, vi to respectively change directories, see the content of the directory you moved to, and read the log file you’re interested in. See Learning the Shell if you need help with Linux and this vi cheat sheet.

Or you can mount the log directory/file of your choice to a directory on your host so that you can load said log file in the text editor of your choice, or even in a dedicated app such as LogFusion. Do so in your docker-compose.yml file:

- ./backend/tmp/xdebug.log:/tmp/xdebug.log
Xdebug log in sync between container and host, in Notepad++

2.5.2. Logging Infrastructure & Search

Beyond these ways to take a quick glimpse at the latest log entries, there are plenty of options to collect and process logs depending on your goals, what you are in charge of, and your architecture. See Top 5 Docker logging methods to fit your container deployment strategy.

At the moment I’m experimenting with running a Logspout container connected to Loggly, so that all logs end up in one place where I can use powerful querying and filtering a la Splunk. This took me maybe 10 minutes to set up, and I may well extend this to production environments in the future.


By default Docker handles logging by capturing data sent by containers to /dev/stdout and /dev/stderr, and turning it into JSON files. To verify that any given image is set up that way, check its container’s log path with docker inspect --format='{{.LogPath}}' $INSTANCE_ID. (More on the inspect command.)

You could do something similar with Papertrail (but their free plan is much more stingy than Loggly’s), SumoLogic, Logentries, and probably others.

If you want to stick to containers rather than rely on a cloud service, a popular alternative is the ELK – Elasticsearch, Logstash, Kibana – stack. However I can see this taking me a couple of hours to figure out, so I haven’t tinkered with it yet.

As often in the software world, there are so many options that overlap to a large extent. Do some research based on your needs, test a product or two, and move on to the next part of your project!

2.5.3. Logging for LEMP

2.6. Making Your Own Docker Images

Once I was comfortable using pre-configured images and Compose, it became time to graduate to making my own images rather than limiting myself to those listed on the Docker Hub registry. Still, for my first attempt I started from one of these rather than start from a blank Dockerfile.

2.6.1. Git Cloning & Hosting

I forked the Github repository of the image I was interested in into a private repo at Bitbucket (it’s free). Or you can just import directly from the source Github repo.  I then have source code that I can clone locally with SourceTree. You could of course make a public fork and/or stay within Github depending on your preferences.

At Docker Hub I then clicked Create > Create Automated build. To start the process I was asked to link my Bitbucket account to my DockerHub account. That took just a couple of clicks. I then selected my Bitbucket image, and asked Docker Hub to create a private repository out of it. Docker Hub then started to build the new image (just a clone at this point since I haven’t edited the forked source code yet).

If the link between Docker Hub and Bitbucket was properly established, the Docker deployment key should be automatically added to your Bitbucket account, as per the screenshot below. If that didn’t happen, get the key from your Build Settings at Docker Hub and add it manually in Bitbucket.


In other words it goes like this:

Docker Hub – public repository (base image)

-> Github – public repository (code for the base image)

-> Bitbucket – cloned or forked private repository (mess around without embarrassing yourself publicly)

-> Docker Hub – automated build (initial pull from Bitbucket)

-> Docker Hub private repository (your own cloned Docker image).

-> Bitbucket – pushed commits (whenever you commit and push edits to your work).

-> Docker Hub automated build (subsequent pulls from Bitbucket) -> Docker Hub private repository (your very own Docker image).

You can tie your git branch to your Docker tag name

If the automated build fails for some reason, try building the image locally to rule out an issue with Docker Hub, since it’s reportedly somewhat flaky. Also, Docker Hub’s error reporting is not exactly user friendly, you’ll understand better where things go wrong from your local command line.

2.6.2. Local Image Building

Open a command line in the local directory where your image’s repository was cloned, then type:

docker build .

Or better, tag the image with a user-friendly name, which you can do while or after you create the image. (Similarly, make your life easier and name your containers in Compose with container_name).

docker build -t userfriendlyname .
Docker image built locally and certified organic
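For what it’s worth, the Dockerfile behind such an image doesn’t have to be fancy. Here’s a minimal sketch extending a base image; the base image name and the conf.d path are assumptions (the path follows the official PHP image layout), not the actual files from my build:

```dockerfile
# start from an existing image rather than from scratch
FROM wodby/wordpress-php

# bake a PHP configuration override into the image
# (adjust the conf.d path to whatever your base image uses)
COPY zz-custom.ini /usr/local/etc/php/conf.d/zz-custom.ini
```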

2.6.3. Loading Your Image from Compose

Great, so I have a private image and changes to the underlying Git project are built automatically on Docker Hub. I feel very pro. Yet I can’t seem to retrieve it via Docker Compose. Which makes sense because I haven’t given Docker my Docker Hub credentials, and using docker-compose --verbose pull I can see there’s an authentication problem.

So I use docker login to feed my credentials in, which get duly recognized. This leads to the creation of a config.json file in C:\Users\{WinUser}\.docker, which mentions that my creds are stored in wincred. And… another verbose pull shows that no auth entry is found for This turns out to be an open bug. I told you this is still beta software… Fine, for now I’m sticking to loading the local image.

2.6.4. Other Considerations

Note that Windows line endings can mess with Docker. Consider using Notepad++ to edit files that you need to load in your Docker images since it lets you choose the end-of-line character. It’s an excellent editor.

Notepad++ takes just a few MBs of RAM, is highly customizable
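If you’d rather check from the command line, stray carriage returns are easy to detect and strip; a sketch using a hypothetical docker/ directory:

```shell
# create a demo file with Windows (CRLF) line endings
mkdir -p docker
printf '#!/bin/sh\r\necho hello\r\n' > docker/entrypoint.sh

# list files that contain a carriage return
grep -rl $'\r' docker

# strip the carriage returns in place (a poor man's dos2unix)
sed -i 's/\r$//' docker/entrypoint.sh

# no output from grep now means the file is clean
grep -l $'\r' docker/entrypoint.sh || echo "clean"
```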

3. Dev Automation Stack: Should I Stay or Should I Dock?

3.1. Docker Orchestration Inception (Up to a Point)

My stack to retrieve dependencies and build projects relies on Composer, Bower, and Grunt. I initially tried to get them to run in their own Docker containers, which I got “almost working”. I do have Composer working via docker-compose (here’s how to) which I followed with docker-compose run -d composer install (or update). But I found it more complicated to get Node/NPM/Grunt running from the root directory of my WP project so that directories created by these tools would end up in the right place.
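For reference, the Compose side of that Composer setup can be as small as a throwaway service; the mount point is an assumption based on the composer/composer image conventions:

```yaml
services:
  composer:
    image: composer/composer
    volumes:
      - .:/app          # the image expects your composer.json in /app
```

Then docker-compose run composer install does the retrieval inside the container while writing vendor/ to your project directory on the host.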

Instead of bumping my head for too long on this, I refocused on what I wanted to accomplish. It is intellectually enticing to run everything on Docker, but the truth is I don’t need to run these tools in Docker since they’re not going to be deployed to production anyway. Just because you can put it in Docker doesn’t mean you should. It’s easy to get carried away by the possibilities of the newest gizmo and forget that tools are just means to an end. That end is to develop productively and seamlessly between local and production environments, not to “sharpen my axe” at the expense of everything else.

A viable alternative is to install Node, Git, Grunt et al. locally, and get them recognized by your IDE of choice.


“Since front-end developers are already familiar with npm and bower, they can continue to use them to get the latest and great front-end code and to run the application. Meanwhile, grunt is running on host to watch for file updates.”

As Woah! I switched to Windows and it’s awesome for PHP development notes, PHP and many of its ancillary tools work on Windows, so what you can’t/won’t run in your containers, you probably can get to work on your host with little work, even if it may look less elegant than using containers that you can turn on/off on demand in any new environment (as opposed to installing Windows executables).

To that effect, as a test I installed PHP and Xdebug on my host. That turned out to be pretty easy; the only things that threw me for a loop were a couple of tweaks to php.ini: Installation of extensions on Windows, and making sure I was using the right DLL name and path for Xdebug. Type php --version from the Windows command line to check that everything is on the up and up, the result should look something like this:

PHP 5.6.26 (cli) (built: Sep 15 2016 18:11:35)
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
with Xdebug v2.4.1, Copyright (c) 2002-2016, by Derick Rethans

However, the built-in web server that comes with PHP (the one installed on Windows) has no idea what’s the deal with my Docker-contained database, so this setup doesn’t actually work, at least not out of the box. I guess I could fiddle with port settings and the like, but at this point it looks too complicated to bother. That turned out to be somewhat of a dead end. In the end I found a “zero config” setup with PhpStorm that doesn’t need to be aware of the PHP interpreter.

3.2. JetBrains WebStorm/PhpStorm

In case you want dependency tools to run in your host as opposed to containers, consider JetBrains’ IDEs; they’re pretty powerful and fast and integrate with a huge number of tools. I’ve come to like PhpStorm so much that I’ve written a separate post about it. See PhpStorm Integration & Productivity Tips.

If you’re using another IDE, see Docker meets the IDE!

4. Useful Docker Resources

A Note of Caution while Googling! If you land on official Docker help files from a Google query, make sure you’re not accidentally reading docs from an older version, whose number is reflected in the URL. Some of these obsolete reference pages still rank high. Better to search from within the official documentation itself.

There’s a fair amount of overlap between these links, browse around and see what clicks for you.

4.1. Getting Started

4.2. PHP

4.3. LAMP/LEMP Stack

You don’t have to run WP with the MySQL and Apache defaults. In brief, MariaDB (or Percona) is faster than Oracle’s MySQL thanks to the XtraDB storage engine (a fork of InnoDB), while Nginx scales much better than Apache.

4.4. MySQL (MariaDB, Percona)

Excellent explanation at MySQL Docker Containers: Understanding the basics.

4.5. WordPress

4.5.1. How to

4.5.2. Starter Projects

docker-wordpress-wp-cli simply adds WP-CLI to the official WordPress image (tagged latest), which at the moment of this writing uses Apache and PHP 5.6. Add a database container and you’ll be good to go. WP’s default choices make sense for maximum compatibility, but they’re far from optimized and I don’t like being tied to them (see LEMP stack earlier in this entry) so I’m not using this image.

Visible WordPress Starter packages WordPress, Apache, and MySQL. Docker beginners may start here as it’s pretty straightforward. In their own words: “Our goal is to make WordPress development slightly less frustrating.” Hah!

Docker4WordPress bundles PHP-FPM, Nginx, MariaDB, and Redis containers optimized for WordPress use, as well as a couple optional containers. It is a step up from the previous project and packs a punch without overwhelming you. Recommended to get your feet wet with LEMP.

Wp-project by Finnish agency Geniem is a significant step up in complexity from the previous projects, and the choice to put several services in one container goes against the grain of Docker convention (though they scaled it back from their earlier docker-alpine-wordpress).

Nonetheless there are lots of interesting choices and conventions at play here that make sense for bigger endeavors:

  • The project structure is very similar to Bedrock
  • Composer is used to install WP and plugins, as well as some extra tools
  • Whoops is used for error handling
  • Dotenv is used to load environment variables
  • WP-CLI is installed
  • Many opinionated choices to enforce good practices, limit discrepancies between development and production to a minimum, and tighten up security

Underneath you can use their docker-wordpress  package (Docker Hub image) which provides the PHP7/PHP-FPM7/Nginx parts, or the docker-wordpress-development (Docker Hub) variant that enables Xdebug and disables the Opcache. These are not just clones of default PHP/Nginx images either, look at how their nginx.conf file sets up constraints to help make WordPress secure.

This can be further complemented with docker-wordpress-project-builder (Docker Hub image) to add development and testing tooling. A lot of thought obviously went into the whole thing, but it’s not for the faint of heart.

Wocker is a local WordPress development environment that uses Vagrant, VirtualBox and Docker, while puphpet-docker lets you run a configuration file in Docker. But I’m not sure combining Docker and Vagrant is that useful for local development since the launch of the native Docker for Windows app.

4.6. Docker Compose

Caution: Compose uses YAML, which relies on indentation with spaces. Do not put tabs in your docker-compose.yml files. If your Compose file fails to execute, start by checking you didn’t mess up indentation, regardless of what the error message might say. It helps to set up your editor to display spaces and tabs:

white space matters
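A quick way to catch the problem before Compose does, sketched with a throwaway file:

```shell
# a deliberately broken compose file: the second line is indented with a tab
printf 'services:\n\tphp:\n' > bad-compose.yml

# any hit from this grep means the YAML parser will choke
grep -n $'\t' bad-compose.yml && echo "tabs found, replace them with spaces"
```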

Some tools:

  • Docker Compose UI and similar generators are the Docker equivalents of Puphpet, there to help you produce Compose files. Weirdly even the newer Docker for Windows doesn’t include a GUI or wizard for Compose. As I mentioned earlier, this would make most sense as a Kitematic feature.
  • Compose Registry is a search engine for Docker Compose stacks, made by the developer behind Docker Compose UI.
  • Panamax is a “containerized app creator” that does pretty much with a GUI what you do with Compose, but as of fall 2016 there is no Windows version (some people run it on Windows via Vagrant/VirtualBox, but I for one am moving away from these). Too bad, as Panamax can even create Compose files.
  • Docker and Visual Studio Code

4.7. Dockerfile

4.8. Deployment to Production

I haven’t deployed Docker containers to production yet, but that’s obviously a big part of the appeal.

In conclusion, the obligatory Downfall video:
