Long-time developers usually need to work with different versions of database services or programming languages over the course of their careers. You’ve probably used version managers like nvm or rvm to juggle language runtimes, but they generally don’t help much with setting up databases. After deployment, a project usually never changes its database version, so if your development machine runs a more recent database than production, you can end up relying on functionality that isn’t available there or on APIs that have since been deprecated. There is a way to avoid these issues, set up the project with the correct versions, and not worry about breaking your development machine.
Most development environments in projects can be divided into a few components. Generally, these will be all the services that need to be running --- for example, a database, a backend, and a frontend. In some cases, you might also have separate microservices, a Redis instance, and so on. We will explore different setup possibilities, from having a Docker instance for each service to creating a Dockerfile that allows you to set up everything in a single instance.
The details will depend on the project --- as an example, consider a simple Node.js project with a Postgres database, consisting of just an API and a React.js client. It’s okay if you are using nvm; in that case, you only need the database instance. Assume the following versions:
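These are the versions used throughout this article:

- Node.js: 8.1.4
- Postgres: 10.14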
First, let’s get our database set up in one instance. Search for Postgres in Docker Hub and go to the Tags tab. Filter tags by version 10.14, and you should see the following tags:
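At the time of writing, the matching tags were these (the exact list on Docker Hub may vary over time):

- `10.14`
- `10.14-alpine`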
The -alpine variant is an official Docker image built on Alpine Linux. It’s very lightweight, and we’ll be using it, though either version is okay to use. Copy and run the following command:
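Pulling the alpine tag looks like this:

```shell
docker pull postgres:10.14-alpine
```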
And now run the following to create an instance:
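A command along these lines does it; the user and password values here are examples, so pick whatever credentials your project expects:

```shell
docker run -d --name postgres1014 \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -p 5432:5432 \
  postgres:10.14-alpine
```

The `-d` flag runs the container in the background so the database stays up while you work.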
I’ve used postgres1014 for the instance name, but you can use whatever you like. I recommend a combination of the project name and postgres. We’ll use the name to log in and run commands.
If you currently have Postgres installed and running on your machine, you can either permanently disable it or bind the instance to a different local port using -p 5433:5432. The format for this is host_port:container_port.
POSTGRES_PASSWORD and POSTGRES_USER are required environment variables used to initially set up the database. The Description tab of the Postgres image page on Docker Hub lists more environment variables, including instructions on how to use them.
Run docker ps to see the instance. Now to get onto the container:
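Open a shell inside the running container with docker exec (using the instance name chosen earlier):

```shell
docker exec -it postgres1014 sh

# then, inside the container:
psql -U postgres
```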
You should now be able to use the Postgres utils and any other command.
We can use a single instance for both the API and the client, or one instance for each if you prefer to keep them isolated.
Now let’s look for our Node version in Docker Hub:
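Pull the matching tag and start a container from it. The container name node814 is just a suggestion:

```shell
docker pull node:8.1.4-alpine
docker run -it --name node814 node:8.1.4-alpine sh
```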
With the -it options and the sh at the end, we are opening a shell in the new instance. If you need to install a package, use:
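Alpine's package manager is apk; replace the placeholder with the package you need:

```shell
apk add --no-cache <package-name>
```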
The --no-cache option is to avoid bloating up the image with cache files.
Install git and any other needed packages and clone the project:
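For example (the repository URL here is a placeholder for your own project):

```shell
apk add --no-cache git
git clone https://github.com/your-org/your-api.git
cd your-api
npm install
```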
Pay attention to any errors that come up, as you may need to install packages like gcc, g++, make, or python (dependencies that node-gyp needs).
Since the database is in another container and the Node container needs to connect to it, you should update any environment variable that holds the database host. To find the Postgres container’s IP, run:
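docker inspect can extract the container's IP on the default bridge network:

```shell
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgres1014
```

Point your API's database host variable at the address this prints.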
Perform any additional setup that your project requires --- creating the database, running migrations, seeds, etc.
You should now be able to start your API process.
For the client process, you can either open a new shell by running:
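Assuming the Node container from before is named node814, a second shell into it looks like:

```shell
docker exec -it node814 sh
```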
and then clone the client project and install packages, or do so in a new Node instance by running:
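Starting a separate Node container for the client would look like this (the name node814_client is just a suggestion):

```shell
docker run -it --name node814_client node:8.1.4-alpine sh
```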
This is probably only useful if you are using a different Node version.
Your application should now be running. Test it out by going to http://localhost:3000!
Now, how do you do development? You could use vim from within the container, or, if you are used to editors like VS Code or Atom, you can use those as well, since they provide extensions for remote development. Note that if you use the Remote - Containers extension in VS Code, you’ll get an error when trying to connect to the Node container: “Alpine Linux 3.9 or later required”. This is because the node:8.1.4-alpine image is based on Alpine 3.6, so in this case you’ll have to create a new container from a plain Alpine image:
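Any Alpine release from 3.9 onward works; 3.12 is used here as an example:

```shell
docker run -it --name alpine_dev alpine:3.12 sh
```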
Then do a normal Node installation and finally set up your app API and client like before.
Since we want to create an image that has both the Node and Postgres versions we require, we need to check whether any Alpine Linux branch provides them in its package repositories. Go to the Alpine Linux packages page and search for nodejs and postgresql across the different branches. If you find a branch that has the packages at the versions you need, then it’s just a matter of putting this at the top of your Dockerfile:
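A sketch, assuming the branch you found was 3.9 (substitute whichever branch carries your versions):

```dockerfile
FROM alpine:3.9

RUN apk add --no-cache nodejs postgresql
```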
You can add RUN apk update before installing any package to make sure the corresponding latest version in that branch is installed.
It’s not necessary to have the clone instructions in the Dockerfile. You can set up a folder that contains the Dockerfile and the API and client projects already cloned, but without the Node packages installed. You may also need to remove the package-lock files. To include all files from the current directory, add the following:
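The /app path here is arbitrary; any directory will do:

```dockerfile
WORKDIR /app
COPY . .
```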
The WORKDIR instruction sets the working directory for subsequent instructions to the specified path, creating it if it doesn’t exist.
You can install packages for each project by using:
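Assuming the two projects live in folders named api and client:

```dockerfile
RUN cd api && npm install
RUN cd client && npm install
```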
Next, expose some ports. This is mostly for documentation and doesn’t actually publish the ports (doing so still requires the -p option when running the container):
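For this example, the client's port 3000 and Postgres's default 5432:

```dockerfile
EXPOSE 3000 5432
```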
Finally, we can add a CMD or ENTRYPOINT for the image:
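The simplest choice is to drop into a shell:

```dockerfile
CMD ["sh"]
```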
We could omit this, but then we would have to provide a command every time we run the image, or container creation will fail.
At this point, the Dockerfile should look something like:
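Putting the pieces together (the Alpine branch and folder names are the example values used above; adjust them for your project):

```dockerfile
FROM alpine:3.9

RUN apk add --no-cache nodejs postgresql

WORKDIR /app
COPY . .

RUN cd api && npm install
RUN cd client && npm install

EXPOSE 3000 5432

CMD ["sh"]
```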
To create the image run:
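The build command, run from the folder containing the Dockerfile:

```shell
docker build -t image_tag_name .
```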
image_tag_name can be whatever you like. Make sure you run the above command in the same directory as the Dockerfile.
To use it, run:
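Remember to publish any ports you need, since EXPOSE alone doesn't:

```shell
docker run -it -p 3000:3000 image_tag_name
```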
Likely, the Node and Postgres versions available for each Alpine version are not the ones you require. In that case, you have two options: build the exact versions yourself from a Dockerfile, or install them manually inside a running container and commit the result as a new image.
The first option can be somewhat complicated, so let’s explore the second option. First, let’s create a running container with the latest Alpine version:
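The container name custom_env is just a suggestion:

```shell
docker run -it --name custom_env alpine sh
```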
Within the container install some packages that might be required for the installations:
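A plausible starting set when building from source on Alpine (your services may need more or fewer; note that older Node releases expect Python 2 for their build):

```shell
apk add --no-cache curl gcc g++ make linux-headers python2
```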
Then install Node:
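One possible approach is compiling from source, since the official prebuilt Node binaries target glibc rather than Alpine's musl. A sketch for version 8.1.4:

```shell
# Download and unpack the Node 8.1.4 source
curl -O https://nodejs.org/dist/v8.1.4/node-v8.1.4.tar.gz
tar -xzf node-v8.1.4.tar.gz
cd node-v8.1.4

# Configure, build, and install (the build can take a while)
./configure
make -j"$(nproc)"
make install
```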
Then install Postgres:
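One option is installing from an older Alpine branch's repository. For example, the v3.8 branch carried the PostgreSQL 10 series (the exact minor version available there may differ from 10.14, so verify it on the Alpine packages page first):

```shell
apk add postgresql \
  --repository=http://dl-cdn.alpinelinux.org/alpine/v3.8/main
```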
If you are installing more services, check their installation instructions and make sure any required packages are installed.
Log out of the container and create the image:
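Assuming your container is named custom_env, docker commit turns it into an image:

```shell
docker commit custom_env my_project:version_tag
```

The my_project name is an example; use whatever fits your project.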
version_tag can be whatever helps you better identify the image.
To help you identify the source of any issues you may encounter while installing the services, commit the container to an image before you start, and then work in a new container created from that image. Having a separate image allows you to compare the “before” and “after” while troubleshooting.
The Dockerfile now should look like:
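A sketch, assuming the committed image was tagged my_project:version_tag and the folder layout from earlier (api and client subfolders next to the Dockerfile):

```dockerfile
FROM my_project:version_tag

WORKDIR /app
COPY . .

RUN cd api && npm install
RUN cd client && npm install

EXPOSE 3000 5432

CMD ["sh"]
```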
Build the new image and run a new container with it:
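The tag my_project_dev is again just an example:

```shell
docker build -t my_project_dev .
docker run -it -p 3000:3000 my_project_dev
```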
And that’s it. You can now start your API and client services.