FullStack Labs


Docker: How to Set Up a Development Environment to Closely Match Production

Written by Jeison Berdugo, Senior Software Engineer

Long-time developers often need different versions of database services or programming languages over the course of their careers. You’ve probably used version managers like nvm or rvm for language runtimes and packages, but they generally don’t help much with setting up databases. After deployment, a project’s database version rarely changes, and on your development machine you can find yourself running a more recent one, which can lead you to use functionality or APIs that are unavailable or deprecated in production. There is a way to avoid these issues, set up the project with the correct versions, and not worry about breaking your development machine.

Most development environments in projects can be divided into a few components. Generally, these will be all the services that need to be running --- for example, a database, a backend, and a frontend. In some cases, you might also have separate microservices, a Redis instance, and so on. We will explore different setup possibilities, from having a Docker instance for each service to creating a Dockerfile that allows you to set up everything in a single instance.

Set up Docker by following the official installation guides for Mac, Windows, or other operating systems.

Identify the services

This will depend on the project --- as an example, consider a simple Node.js project with a Postgres database: just an API and a React.js client. It’s okay if you are using nvm; in that case, you only need the database instance. Assume the following versions:

  • PostgreSQL v10.14
  • Node v8.1.4 (for both the API and client)

Let’s try the first approach

First, let’s get our database set up in one instance. Search for Postgres in Docker Hub and go to the Tags tab. Filter tags by version 10.14, and you should see the following tags:

  • 10.14-alpine
  • 10.14

The -alpine tag is built from an official Docker image based on Alpine Linux. It’s very lightweight, which is why we’ll use it here, but either tag works. Copy and run the following command:

	
docker pull postgres:10.14-alpine
	

And now run the following to create an instance:

	
docker run --name postgres1014 -e POSTGRES_PASSWORD=yoursecretpassword -e POSTGRES_USER=user -d -p 5432:5432 postgres:10.14-alpine
	

I’ve used postgres1014 for the instance name, but you can use whatever you like. I recommend a combination of the project name and postgres. We’ll use the name to log in and run commands.

If you currently have Postgres installed and running on your machine, you can either stop (or disable) it, or bind the container’s port to a different local port using -p 5433:5432. The format for this option is local_port:instance_port.
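As a sketch, the remapped-port variant of the run command can be made explicit with shell variables (5433 is an arbitrary free port on the host; the docker invocation is shown commented because it needs a running Docker daemon):

```shell
# The -p format is local_port:instance_port; 5433 avoids clashing with a
# Postgres already listening on the host's 5432.
LOCAL_PORT=5433
CONTAINER_PORT=5432

# With a Docker daemon available, the run command becomes:
# docker run --name postgres1014 \
#   -e POSTGRES_PASSWORD=yoursecretpassword -e POSTGRES_USER=user \
#   -d -p "${LOCAL_PORT}:${CONTAINER_PORT}" postgres:10.14-alpine

# Host clients then connect to 5433, e.g.: psql -h localhost -p 5433 -U user
echo "${LOCAL_PORT}:${CONTAINER_PORT}"
```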

POSTGRES_PASSWORD and POSTGRES_USER are required environment variables used to initialize the database. The description tab of the Postgres image page on Docker Hub lists more environment variables, along with instructions on how to use them.
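As a variation (a sketch, not from the article), the same variables can live in an env file passed with --env-file; POSTGRES_DB is one of the optional variables documented for the image, and project_dev is a placeholder name:

```shell
# Write the init variables to a file; names match the postgres image docs.
# POSTGRES_DB (optional) names the default database created on first run.
cat > pg.env <<'EOF'
POSTGRES_USER=user
POSTGRES_PASSWORD=yoursecretpassword
POSTGRES_DB=project_dev
EOF

# With a Docker daemon available:
# docker run --name postgres1014 --env-file pg.env -d -p 5432:5432 postgres:10.14-alpine
grep -c '=' pg.env   # three KEY=value lines
```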

Run docker ps to see the instance. Now, to get a shell inside the container:

	
docker exec -ti postgres1014 sh
	

You should now be able to use the Postgres utilities and run any other commands.
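For example, instead of opening a shell first, you can invoke psql directly through docker exec (this assumes the postgres1014 container from the earlier run command; "user" matches the POSTGRES_USER set there):

```shell
# Run a one-off query through the running container.
docker exec -ti postgres1014 psql -U user -c 'SELECT version();'
```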

We can use a single instance for both the API and the client, or one instance for each if you prefer to keep their setups separate.

Now let’s look for our Node version in Docker Hub:

	
docker pull node:8.1.4-alpine
docker run --name node814 -it -p 8080:8080 -p 3000:3000 node:8.1.4-alpine sh
	

With the -it options and the sh at the end, we are opening a shell in the new instance. If you need to install a package, use

	
apk add --no-cache package_name_a package_name_b
	

or

	
docker exec -ti node814 apk add --no-cache package_names
	

The --no-cache option avoids bloating the image with cache files.

Install git and any other needed packages and clone the project:

	
apk update
apk add --no-cache git openssh
git clone project_url
cd project
npm install
	

Pay attention to any errors that come up, as you may need to install some packages like gcc, g++, make, or python (some things that node-gyp needs).
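If npm install fails while compiling native addons, installing node-gyp’s usual toolchain inside the container typically resolves it (these are the packages named above, as found in Alpine’s repositories at the time):

```shell
# Inside the Node container (Alpine): build tools that node-gyp needs.
apk add --no-cache gcc g++ make python2
```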

Since the database is in another container and the Node container needs to connect to it, you should update any environment variables that hold the database host. To find the Postgres container’s IP, run:

	
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgres1014
	
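The inspect command prints an address such as 172.17.0.2. As a sketch (the IP, database name, and DATABASE_URL variable are placeholders for illustration; use whatever variable your project actually reads), that host can be wired into a connection string:

```shell
# In a real session, capture the IP directly:
# DB_HOST=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgres1014)
DB_HOST=172.17.0.2      # placeholder value for illustration
DB_NAME=project_dev     # placeholder database name
export DATABASE_URL="postgres://user:yoursecretpassword@${DB_HOST}:5432/${DB_NAME}"
echo "$DATABASE_URL"
```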

Perform any additional setup that your project requires --- creating the database, running migrations, seeds, etc.

You should now be able to start your API process.

For the client process, you can either open a new shell by running:

	
docker exec -ti node814 sh
	

and then clone the client project and install packages, or do so in a new Node instance by running:

	
docker run --name node814-client -it -p 3000:3000 node:8.1.4-alpine sh
	

This is probably only useful if you are using a different Node version.

Your application should now be running. Test it out by going to http://localhost:3000!

Now, how do you do development? You could use vim from within the container, or, if you are used to editors like VS Code or Atom, you can use those as well, since they provide extensions for remote development. Note that if you use the remote-containers extension in VS Code, you’ll get an error when trying to connect to the Node container: “Alpine Linux 3.9 or later required”. This is because the node:8.1.4-alpine image is based on Alpine 3.6, so in this case you’ll have to create a new container from a plain Alpine image:
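You can confirm which Alpine release an image is based on before deciding; for example:

```shell
# Prints the Alpine release the image ships, e.g. 3.6.x for node:8.1.4-alpine.
docker run --rm node:8.1.4-alpine cat /etc/alpine-release
```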

	
docker pull alpine:latest
docker rm -f node814   # the name is taken by the earlier Node container
docker run --name node814 -it -p 3000:3000 -p 8080:8080 alpine:latest sh
	

Then do a normal Node installation and finally set up your API and client as before.

Using a single Dockerfile

Since we want to create an image that has both the Node and Postgres versions we require, we need to check whether any Alpine Linux release has them in its repositories. Go to the Alpine Linux packages page and search for nodejs and postgresql across the different branches. If you find a branch that has the packages at the versions you need, then it’s just a matter of having this at the top of your Dockerfile:

	
FROM alpine:VERSION

RUN apk add --no-cache nodejs
RUN apk add --no-cache --update npm
RUN apk add --no-cache postgresql
	

You can add RUN apk update before installing any package to make sure the latest version available in that branch is installed.

It’s not necessary to have clone instructions in the Dockerfile. You can set up a folder that contains the Dockerfile and the already-cloned API and client projects, but without Node packages installed. You may also need to remove the package-lock files. To include all files in the current directory, add the following:

	
WORKDIR /home/app
COPY . .
	

The WORKDIR instruction changes the current working directory to the specified path, creating it if it doesn’t exist.

You can install packages for each project by using:

	
WORKDIR /home/app/project
RUN npm install
	

Next, expose some ports. EXPOSE is mostly documentation and doesn’t actually publish the ports (publishing always requires the -p option):

	
EXPOSE 5432
EXPOSE 8080
EXPOSE 3000
	

Finally, we can add a CMD or ENTRYPOINT for the image:

	
CMD [ "/bin/sh" ]
	

We could omit this, but then we would have to pass a command every time we run the image, or container creation will fail.

At this point, the Dockerfile should look something like:

	
FROM alpine:VERSION

RUN apk add --no-cache nodejs npm
RUN apk add --no-cache postgresql
# Run any database setup commands e.g.
# RUN createdb db_dev

WORKDIR /home/app
COPY . .

WORKDIR /home/app/project_api
RUN npm install
# Run any setup commands for api e.g.
# RUN npm run script

WORKDIR /home/app/project_client
RUN npm install

EXPOSE 5432
EXPOSE 8080
EXPOSE 3000

CMD [ "/bin/sh" ]
	

To create the image run:

	
docker build -t image_tag_name .
	

image_tag_name can be whatever you like. Make sure you run the command above in the same directory as the Dockerfile.

To use it, run:

	
docker run --name project_container -it -p 3000:3000 -p 8080:8080 -p 5432:5432 image_tag_name
	

Likely, the Node and Postgres versions available for each Alpine version are not the ones you require. In that case, you have two options:

  • Add RUN instructions to the Dockerfile that manually install Node and Postgres.
  • Create an Alpine base image that has Node and Postgres at the required versions.

The first option can be somewhat complicated, so let’s explore the second option. First, let’s create a running container with the latest Alpine version:

	
docker run --name alpine-latest -it alpine:latest sh
	

Within the container install some packages that might be required for the installations:

	
apk update
apk add --no-cache git openssh gcc g++ make python2 readline-dev zlib-dev linux-headers
	

Then install Node:

	
wget https://nodejs.org/dist/v8.1.4/node-v8.1.4.tar.gz
tar -xzvf node-v8.1.4.tar.gz
cd node-v8.1.4
./configure
make -j4
make install
cd ..
	

Then install Postgres:

	
wget https://ftp.postgresql.org/pub/source/v10.14/postgresql-10.14.tar.gz
tar -xzvf postgresql-10.14.tar.gz
cd postgresql-10.14
./configure
make
make install
adduser -D postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &
exit
	

If you are installing more services, check their installation instructions and make sure any required packages are installed.

Log out of the container and create the image:

	
docker commit alpine-latest image_name:version_tag
	

version_tag can be whatever helps you better identify the image.

To make it easier to track down the source of any issues you hit while installing the services, consider committing the container to an image before you start; if something breaks, you can create a fresh container from that snapshot and compare the “before” and “after” states while troubleshooting.
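A sketch of that snapshot workflow (the image name and tags here are arbitrary):

```shell
# Checkpoint the container before installing anything.
docker commit alpine-latest alpine-base:pre-install

# ...install Node, Postgres, etc. inside alpine-latest...

# If an install goes wrong, start a fresh container from the checkpoint
# and compare the two environments.
docker run --name checkpoint-debug -it alpine-base:pre-install sh
```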

The Dockerfile should now look like:

	
FROM image_name:version_tag

WORKDIR /home/app
COPY . .

WORKDIR /home/app/project_api
RUN npm install

WORKDIR /home/app/project_client
RUN npm install

EXPOSE 5432
EXPOSE 8080
EXPOSE 3000

CMD [ "/bin/sh" ]
	

Build the new image and run a new container with it:

	
docker build -t image_tag_name .
docker run --name project_container -it -p 3000:3000 -p 8080:8080 -p 5432:5432 image_tag_name
	

And that’s it. You can now start your API and client services.
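To avoid typing the start commands each time, you could bake a small helper script into the image. This is a sketch that assumes the paths from the Dockerfile above and that both projects define an npm start script:

```shell
#!/bin/sh
# start.sh - hypothetical helper for the single-container setup.

# Start Postgres in the background as the postgres user.
su postgres -c '/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &'

# Start the API in the background, then the client in the foreground.
(cd /home/app/project_api && npm start) &
cd /home/app/project_client && npm start
```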

Conclusion

Using techniques like those described above, we have been able to address our clients’ concerns --- and they love it! If you are interested in joining our team, please visit our Careers page.

