This post is outdated as of October 2016. Refer to the Docker Project
Boilerplates for updated versions.

I recently shared a post on Dockerizing Ruby applications for TDD &
Deployment. This post continues the theme by introducing
a similar structure for Node.js applications. The general approach
goes like this:

  1. Use docker to generate node_modules from package.json and
     commit it to source control.
  2. Use node_modules to create a docker image.
  3. Use the previously built docker image to run arbitrary code
     without rebuilding the image (mainly useful in TDD).

The structure is available as a cloneable boilerplate repo. The
rest of the post describes the process in-depth.

Start by creating a package.json. I will assume you need some
testing libraries.

{
	"name": "docker-node-boilerplate",
	"version": "1.0.0",
	"description": "Slashdeploy Docker & Node.js boilerplate",
	"main": "index.js",
	"dependencies": {
		"mocha": "~2.4.5",
		"jshint": "~2.9.2"
	},
	"scripts": {
		"test": "mocha -u tdd test/*_test.js",
		"lint": "jshint src/*.js test/*.js"
	},
	"author": "",
	"license": "ISC"
}

Next create the runtime environment from the dependency information.
This is done by running npm install inside a docker container. Then
node_modules is tar'd into an archive and committed to source
control¹. Here's an excerpt from the Makefile.

$(PACKAGE): package.json
	mkdir -p $(@D) tmp/cache
	docker run --rm \
		-v $(CURDIR):/data \
		-e NPM_CONFIG_CACHE=/data/tmp/cache \
		-u $(shell id -u) \
		-w /data \
		node:4-slim \
			npm install
	tar -czf $@ node_modules
	rm -rf node_modules tmp/cache
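The target references a few make variables that are defined elsewhere
in the Makefile. The exact definitions aren't shown in this post; a
plausible set (names and paths assumed for illustration) looks like
this:

```makefile
# Assumed variable definitions; adjust paths and the image name to taste.
IMAGE   := docker-node-boilerplate
PACKAGE := tmp/node_modules.tar.gz   # vendored dependency archive
DOCKER  := tmp/docker.built          # sentinel file touched after docker build
```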

There are a few things going on in this make target, and two specific
bits worth calling out. First, note the -u argument. This is
important because the current directory is a mounted volume in the
docker container. New files will be created with the user's ID instead
of root (the docker default user). Next, the -e NPM_CONFIG_CACHE
option. This is set to a directory under /data. It ensures that
npm writes package archives to a directory the user with id
(-u) has access to; npm may hit permission errors without it.
This may not be required depending on how you run docker, but it covers
all bases.
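To see the value that ends up in the -u flag, run id -u directly;
docker's -u flag accepts this numeric uid. A minimal sketch (the
standalone docker invocation in the comment mirrors the make target
above, with the image name assumed):

```shell
# `id -u` prints the current user's numeric uid; docker's -u flag accepts it.
uid=$(id -u)
echo "running the container as uid ${uid}"

# Equivalent standalone invocation (image name assumed to match the Dockerfile):
#   docker run --rm -v "$PWD":/data -e NPM_CONFIG_CACHE=/data/tmp/cache \
#     -u "${uid}" -w /data node:4-slim npm install
```

Files created inside the container are then owned by this uid on the
host, instead of by root.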

Now that dependencies are available it is time to build the docker
image. Here's the relevant Makefile excerpt.

$(NODE_MODULES): $(PACKAGE)
	tar xzf $<
	mkdir -p $(@D)
	touch $@

$(DOCKER): $(NODE_MODULES) Dockerfile
	docker build -t $(IMAGE) .
	mkdir -p $(@D)
	touch $@

Now for the Dockerfile:

FROM node:4-slim

RUN mkdir -p /app

WORKDIR /app

COPY . /app

CMD [ "npm", "test" ]

The Dockerfile is refreshingly small! Effectively the source is
copied into the Docker image and that's a wrap. Now that we have a
docker image, we can use a volume mount to run code changes quickly
without rebuilding the image.

.PHONY: test
test: $(DOCKER)
	docker run --rm -v $(CURDIR):/data -w /data $(IMAGE) \
		npm test

Everything is packaged up in a handy boilerplate repo for use in
your projects. You can build on this structure to add more tests, push
images to your docker registry, and finally to deploy.

  1. Application dependencies should be vendored (i.e. committed to
    source control). Committing dependencies puts you back in control.
    It ensures repeatability across all your pipeline stages and
    isolates you from changes in the upstream package repo. The
    node_modules folder is tarred here because committing the
    directory itself would create huge diffs of little value.
    Committing the tar file is a decent enough compromise.
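The round-trip behind this footnote can be sketched with plain tar
(the module name below is purely illustrative):

```shell
# Create a stand-in node_modules, archive it, and restore it.
mkdir -p node_modules/example-module
echo '{}' > node_modules/example-module/package.json

tar -czf node_modules.tar.gz node_modules    # this archive is what gets committed
rm -rf node_modules                          # the directory itself stays out of git

tar -xzf node_modules.tar.gz                 # restore before building the image
test -f node_modules/example-module/package.json && echo "node_modules restored"
```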