Benefits of Containers - Develop against your environment

The benefits of Docker's consistent deployment model are well known: given the same Docker configuration, you can expect identical results on every machine you deploy to, including production.

From a developer's perspective, we can leverage key features of Docker to develop code against the environment, complete with dependencies, that our project will be deployed to, rather than developing against our own machine and finding out later that the code runs differently elsewhere.

NB: Though there are other container technologies, I will be using Docker in all my examples and references, and hence the terms containerization and Docker may be used interchangeably.

Using volumes to develop with Docker:

Volumes offer a way to map a location on a container's file system to persistent storage outside the container.

From a developer's perspective, this is great: we can edit our code locally and use a volume to map it into the container it runs in.

For example, imagine we have a web API that will be deployed to an environment running Node.js 8.11.1. We can serve that API using Node 8.11.1 inside a container, based on the code we are developing in a folder on our local machine.

Changes that we make to our code are reflected immediately, and we can use Docker to map ports used by applications running inside our containers to ports on our local machine.

This means that whilst the app feels like any other we would develop without containerization, we can be sure our code works with Node 8.11.1, even if a different version of Node.js is installed on our machine. In fact, if we run our code using Docker, we don't need the Node.js version our application targets installed on our development machine at all.

The following docker-compose.yml file instructs Docker to build the codebase in the current folder using a Dockerfile:

version: '3'
services:
  web:
    build: .
    ports:
      - "3000:8000"
    volumes:
      - ./server:/server

The file has a volume declaration that maps the server folder in the current directory to /server within the container.
Once started using the command:

 docker-compose up --build

the web application is then accessible via port 3000 on our development machine. Due to the use of a volume, changes in our code will result in changes to the web application running within the container.
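
If you want to see what Compose is doing under the hood, roughly the same setup can be achieved with plain docker commands (the image tag here is just illustrative):

# Build the image from the Dockerfile in the current folder
docker build -t web-api-dev .

# Run it with the same port and volume mappings as the compose file
docker run --rm -p 3000:8000 -v "$(pwd)/server:/server" web-api-dev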

For reference, the Dockerfile that is used is:

FROM node:8.11.1  
COPY package.json .  
RUN npm install  
EXPOSE 8000  
CMD ["npm","run","start:dev"]  

This Dockerfile only copies package.json into the container. That means the app won't start unless it is used in conjunction with the docker-compose.yml file, since it is the compose file that attaches the volume containing the code that actually runs. We'd probably want a different Dockerfile for a version of our container intended for deployment.
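
As a rough sketch (not part of the sample code), a deployment-oriented Dockerfile might bake the code into the image instead of relying on a volume; the start script here is assumed:

FROM node:8.11.1
COPY package.json .
RUN npm install --production
# Copy the application code into the image rather than mounting it at runtime
COPY ./server ./server
EXPOSE 8000
CMD ["npm", "run", "start"]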

The code for the application can be found here. The npm script set up in the package.json file runs the application using Nodemon (a sketch of that script follows the steps below), so you can:

  1. Run the application on your machine using docker-compose up --build.
  2. Make changes to the running application.
  3. Refresh your browser at http://localhost:3000 to see your latest changes.
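
For reference, if you are recreating the setup yourself, the start:dev script in package.json might look something like this (the entry-point path is illustrative; the sample repository has the actual value):

"scripts": {
  "start:dev": "nodemon ./server/app.js"
}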

Using Docker to run tests:

Whilst developing our code against the environment it will run in is ideal, we may be building an application that needs to run in multiple possible environments.

Imagine a scenario with the following requirements:

  1. Develop a CLI application that hashes a string.
  2. It needs to run in environments that support Node 8.1.3 or 4.2.1.
  3. This should be tested as part of our CI process.

The first step is to create an image that can take a build argument (NODE_VER in this example):

ARG NODE_VER
FROM node:${NODE_VER}

COPY package.json .
COPY .babelrc .

RUN npm install

COPY /src ./src
COPY /test ./test

CMD [ "npm", "run", "test" ]

This will allow us to pass in the Node.js runtime version that we wish our code to execute against.
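
To see what that gives us in isolation, the same Dockerfile could also be built directly with docker build, passing the version as a build argument (the paths and tags here assume the layout used in the compose file below):

docker build -f test/Dockerfile --build-arg NODE_VER=8.1.3 -t hash-cli-test:8.1.3 .
docker build -f test/Dockerfile --build-arg NODE_VER=4.2.1 -t hash-cli-test:4.2.1 .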

Next, we can use a docker-compose.yml to declare all the runtimes that we wish to test against.

version: '3'  
services:  
  node_env_8.1.3:
    build:
      context: ../
      dockerfile: ./test/Dockerfile
      args:
        NODE_VER: 8.1.3
  node_env_4.2.1:
    build:
      context: ../
      dockerfile: ./test/Dockerfile
      args:
        NODE_VER: 4.2.1

Using the build instruction, we can use the same code and Dockerfile for each service while passing the Node.js version, as a build argument, to each configuration/container that is to be run.
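
From the folder containing this compose file, something like the following builds and runs both test containers; note that getting a failing suite to fail your CI job may need extra handling (for example docker-compose's --exit-code-from flag, which targets a single service):

docker-compose up --build --abort-on-container-exit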

I have created a sample application here.

This has the following integration test:

const { expect } = require('chai'),
    spawn = require('spawn-command'),
    { Buffer } = require('safe-buffer');

describe('CLI tool that encodes', function () {  
    const text = 'Test Data';

    describe('when converting', () => {
        it('converted text can be unencoded', async () => {
            const result = await runCLI(`-t "${text}"`);

            const normalString = Buffer.from(result, 'base64').toString('utf-8');

            expect(normalString).to.equal(text);
        });
    });
});

function runCLI(argsAsString) {  
    const cwd = process.cwd();
    return new Promise((resolve, reject) => {
        let stdout = '';
        let stderr = '';
        const command = `node ./lib/main.js ${argsAsString}`;
        const child = spawn(command, { cwd });

        child.on('error', error => {
            reject(error);
        });

        child.stdout.on('data', data => {
            stdout += data;
        });

        child.stderr.on('data', data => {
            stderr += data;
        });

        child.on('close', () => {
            if (stderr) {
                reject(stderr);
            } else {
                resolve(stdout);
            }
        });
    });
}

This runs the sample application by spawning a child process, as if it were being invoked from the command line.

This test is then run inside each container, which lets us exercise the same code and tests across multiple runtimes.

The application has the following code to encode a string as base64:

const convertToBase64 = (inputString) => {
    const buffer = Buffer.from(inputString);

    return buffer.toString('base64');
};

module.exports = convertToBase64;

If we run our tests against both runtimes, the test fails in the container running Node.js 4.2.1.

Whilst our code uses Babel to transpile our syntax into something Node.js 4.2.1 supports, the interface of the Node Buffer class differs between the runtimes we are testing against.
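
To make the difference concrete, the Buffer.from(string) factory used above only arrived in later Node releases (around the 4.5/5.10 timeframe), so on 4.2.1 the older constructor form was the way to build a buffer from a string:

// On older runtimes (pre Buffer.from), buffers were created with the now-deprecated constructor
const legacyBuffer = new Buffer('Test Data');

// On newer runtimes, Buffer.from is the recommended factory (and what our code uses)
const modernBuffer = Buffer.from('Test Data');

// Both produce the same base64 output where both APIs are available
console.log(legacyBuffer.toString('base64') === modernBuffer.toString('base64')); // true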

This is a nasty issue to be stung by, but fortunately Docker helps us find issues of this nature and safeguard against them.

We can add the safe-buffer npm package to our application, which allows the code to work across both runtimes.

const { Buffer } = require('safe-buffer');

const convertToBase64 = (inputString) => {
    const buffer = Buffer.from(inputString);

    return buffer.toString('base64');
};

module.exports = convertToBase64;

This guarantees that our tool will run in both runtimes that we are testing against.

Using git hooks with Docker

Just as you might run unit tests or linting as part of your pre-commit / pre-push checks (using something like Husky), these containerised integration test suites could be run locally on a developer's machine before committing or pushing to a remote branch. This helps ensure that anyone working on our code contributes changes that will run in the environments the code will actually be used in, not just on their machine. Of course, this check can also be performed in CI, so it comes down to whether your team prefers to be slowed down slightly when committing or pushing for the sake of potentially avoiding broken builds later.
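
As a rough sketch (assuming a Husky 4-style configuration in package.json, and that the test compose file lives in a test folder), the hook could simply delegate to docker-compose; depending on your compose version you may need extra handling to make a failing suite actually block the push:

"husky": {
  "hooks": {
    "pre-push": "docker-compose -f test/docker-compose.yml up --build --abort-on-container-exit"
  }
}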

Final thoughts

Before containers, developing against your runtime environment without reconfiguring your development machine or resorting to virtual machines (which are heavy and laborious) was simply too hard to make the norm.

Proponents of Docker often evangelize the fact that it clears up many of the "it works on my machine" issues one might see, which vastly reduces hard-to-diagnose problems further along in the development cycle, where getting enough information to find the root cause is much harder.

Volumes allow us to do this whilst editing code and maintaining the feedback loop that one desires as a developer.

What's more, docker-compose files, in conjunction with the ability to parameterize configurations, open up some interesting testing scenarios.

Containers really can stop hard-to-diagnose issues from becoming a problem by letting us deal with them where they are easiest to diagnose and solve.

Andrew de Rozario

Self- and community-educated developer who loves all things web, JavaScript and .NET related. Passionate about sharing knowledge, teaching and eating, though maybe not in that order.
