Deploying your application is one of the last steps you’ll take when building an app, but that doesn’t make it any less important a part of the software development process. There are many technologies and tools we can use to deploy applications, but Kubernetes (k8s) is one of the few (if not the only one) that offers full container orchestration. With Kubernetes, we can easily manage multiple instances of the same application, and even multiple applications, from a single place.
In this tutorial, we will be using Ruby 3.2.1 with Rails 7, but other versions work as well; just keep the version numbers consistent across the configuration files.
To install Ruby and Rails, we can run the following commands in our shell using rbenv:
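A sketch of those commands, assuming rbenv and its ruby-build plugin are already installed and loaded in your shell:

```shell
# Install the Ruby version used throughout this tutorial and make it the default.
rbenv install 3.2.1
rbenv global 3.2.1

# Install Rails 7 (the version constraint is an assumption; pin it as you prefer).
gem install rails -v '~> 7.0'
```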
Once we have installed Ruby and Rails, we will be able to create a new Rails app by running this command:
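Something along these lines; the app name `rails-k8s-demo` is a placeholder used throughout this tutorial, so substitute your own:

```shell
# Generate a new Rails app and move into its root folder.
rails new rails-k8s-demo
cd rails-k8s-demo
```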
For this tutorial we won’t implement any CRUD operations, so there’s no need to configure a Postgres database connection. By default, Rails sets up the application to use SQLite, which is enough for our case, so we can run the application right away.
Now that we have our basic application setup, let’s try and add some code. But first, let’s run our app with the following command on our shell:
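From the app’s root folder:

```shell
# Start the Rails development server (listens on port 3000 by default).
bin/rails server
```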
Now, let’s go to the ApplicationController and add a dummy action that returns a plain text “OK”.
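A minimal sketch of that action; the action name `health_check` is an assumption carried through the rest of the tutorial:

```ruby
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # Dummy endpoint that returns a plain-text "OK".
  def health_check
    render plain: "OK"
  end
end
```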
Now we should map this endpoint in the routes file, otherwise we won’t be able to make requests to it. Go to the routes file and add:
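Assuming the action is named `health_check` and lives in `ApplicationController`, the route would look like:

```ruby
# config/routes.rb
Rails.application.routes.draw do
  # Map GET /health_check to ApplicationController#health_check.
  get "health_check", to: "application#health_check"
end
```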
And that’s it: open “http://localhost:3000/health_check” in your browser, and the app should respond with the plain-text “OK”.
And that’s all the code we will need in our dummy Rails application.
All the previous steps were just a basic example of how to create a Rails app. Now, let’s dive into the topic at hand. To deploy an RoR application to Kubernetes, we first need to build a Docker image of the application we just implemented.
Usually, when building RoR Docker images, developers use alpine-based images, which already come with a lot of dependencies installed, but mainly the Ruby version our app is built with. However, for this example, we will be creating an Ubuntu-based Docker image of our application, as it gives us more flexibility in installing dependencies and setting up the Docker image the way we want it. Moreover, most users are familiar with Ubuntu OS, so most of the commands used for building the image will likely be known.
Let’s get hands-on. We’ll break down the Dockerfile, explaining each instruction, and then review the final version in its entirety at the end.
The very first thing we need in a Dockerfile is the FROM clause. It tells Docker which image we are going to use to build ours. In this case, we will use a built-in Ubuntu image, which only has the necessary packages to run a very basic Ubuntu OS, from which we can start building our RoR image by adding the necessary packages.
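For example (the Ubuntu version is an assumption; any recent LTS release should work):

```dockerfile
# Start from a minimal official Ubuntu base image.
FROM ubuntu:22.04
```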
Now, to install Ruby 3.2.1 we will use the Ruby version manager rbenv, which is one of the most straightforward methods to install Ruby. (You can check other methods here.) To use rbenv, we need to load it every time we start a new terminal, and we can simulate this action by adding the following to the Dockerfile:
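The instruction in question is `SHELL`, along these lines:

```dockerfile
# Run all subsequent RUN commands through a login bash shell (-l),
# so ~/.bash_profile (and therefore rbenv) is loaded each time.
SHELL ["/bin/bash", "-l", "-c"]
```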
Here we are telling Docker to use bash to run all our commands. The -l option is used to load and run all the bash profile files. In this case, we want to run the ~/.bash_profile, in which we have the command to load rbenv. The -c option allows us to pass several commands as a string.
Now we need to install all the packages and dependencies, so we can install and run Ruby. This is a very common command for Ubuntu users:
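A sketch of that step; the exact package list is an assumption covering the build tools and libraries typically needed to compile Ruby and native gems:

```dockerfile
# Build tools and libraries commonly required by ruby-build and native gems.
RUN apt-get update && apt-get install -y \
    git curl build-essential libssl-dev libreadline-dev \
    zlib1g-dev libyaml-dev libffi-dev libsqlite3-dev
```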
The following command specifies the folder in which our Rails app code will reside:
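The directory name `/app` is an arbitrary choice:

```dockerfile
# All subsequent commands run from here; the app code will live here too.
WORKDIR /app
```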
At this point we haven’t installed rbenv yet, but we can already add the location where the rbenv binaries will live to the PATH environment variable:
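Assuming rbenv will be cloned into `/root/.rbenv` later in the build (we run as root inside the image), the line would look like:

```dockerfile
# rbenv's own binary plus its shims directory, where the installed
# ruby/gem/bundle executables will be exposed.
ENV PATH="/root/.rbenv/bin:/root/.rbenv/shims:${PATH}"
```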
Notice that we are not using the Docker RUN command to set an environment variable; ENV is the correct instruction for setting environment variables in a Docker image.
Here we just set the bundle path:
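The path itself is a conventional choice, not a requirement:

```dockerfile
# Install the project gems into a known location inside the image.
ENV BUNDLE_PATH=/usr/local/bundle
```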
As our Dockerfile is located in the root folder of our Rails app, the COPY command simply copies all the app’s files into the Docker WORKDIR we specified above:
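That is:

```dockerfile
# Copy the whole Rails app from the build context into /app (the WORKDIR).
COPY . .
```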
We are now only missing installing Ruby, so let’s install rbenv first:
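A sketch of that installation step, cloning rbenv and its ruby-build plugin from GitHub:

```dockerfile
# Install rbenv and ruby-build from their repositories, then make every
# login shell load rbenv by appending the init line to ~/.bash_profile.
RUN git clone https://github.com/rbenv/rbenv.git ~/.rbenv && \
    git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build && \
    echo 'eval "$(~/.rbenv/bin/rbenv init - bash)"' >> ~/.bash_profile
```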
We could have just installed rbenv using the Ubuntu apt package manager, but it’s not updated with the most recent Ruby versions. So in the previous command, we are just installing an up-to-date rbenv version using its GitHub repository. Notice that the last command echo adds to the ~/.bash_profile file the command needed to load rbenv in the terminal.
With rbenv already installed, we can go ahead and easily install Ruby and our project gems with the following commands:
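Something along these lines (thanks to the `SHELL` instruction, each RUN runs in a login shell with rbenv loaded):

```dockerfile
# Install the Ruby version the app targets, then the project gems,
# skipping the development and test gem groups.
RUN rbenv install 3.2.1 && \
    rbenv global 3.2.1 && \
    gem install bundler && \
    bundle config set --local without 'development test' && \
    bundle install
```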
Let’s pay a little more attention to the bundle config set --local without 'development test' command. It is important when building Rails apps for production, since it excludes the gems used only for development and testing.
Now, let’s expose the port in which our app is going to be running. By default Rails apps run in port 3000, so let’s go ahead and specify it:
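That is:

```dockerfile
# Document that the app listens on the default Rails port.
EXPOSE 3000
```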
Finally, we only need to tell our Docker image how it will be executed. Let’s first create a bash script file in our ./bin folder:
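A sketch of that script; the file name `bin/start` is an assumption (remember to make it executable with `chmod +x bin/start`):

```shell
#!/bin/bash --login
# bin/start
# The --login flag makes bash read the bash profile files,
# which loads rbenv before the server starts.
bin/rails s -b 0.0.0.0
```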
Notice that with -b 0.0.0.0, we are binding the app to all network interfaces rather than only localhost. Inside the container, localhost refers to the container itself, so if the server listened only on localhost, we wouldn’t be able to reach our application from the host machine at http://localhost:3000.
And then, we add to our Dockerfile this line, which tells Docker how to execute our app:
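Assuming the startup script created above is named `bin/start`:

```dockerfile
# Run the startup script when the container starts.
CMD ["bin/start"]
```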
Notice that we could have put CMD ["bin/rails", "s", "-b", "0.0.0.0"] directly in our Dockerfile, but as mentioned previously, we need to load rbenv for the system to detect the installed Ruby versions. The #!/bin/bash --login shebang in the bash script makes bash run as a login shell and read the bash profile files, so rbenv gets loaded.
And this is how our final Dockerfile should look:
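Putting the pieces above together (base image version, paths, and the `bin/start` script name are all assumptions from the previous steps):

```dockerfile
FROM ubuntu:22.04

# Run every RUN command through a login bash shell so rbenv is loaded.
SHELL ["/bin/bash", "-l", "-c"]

# Build tools and libraries needed to compile Ruby and native gems.
RUN apt-get update && apt-get install -y \
    git curl build-essential libssl-dev libreadline-dev \
    zlib1g-dev libyaml-dev libffi-dev libsqlite3-dev

WORKDIR /app

ENV PATH="/root/.rbenv/bin:/root/.rbenv/shims:${PATH}"
ENV BUNDLE_PATH=/usr/local/bundle

COPY . .

# Install rbenv and ruby-build, and load rbenv in every login shell.
RUN git clone https://github.com/rbenv/rbenv.git ~/.rbenv && \
    git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build && \
    echo 'eval "$(~/.rbenv/bin/rbenv init - bash)"' >> ~/.bash_profile

# Install Ruby and the project gems (without development/test groups).
RUN rbenv install 3.2.1 && \
    rbenv global 3.2.1 && \
    gem install bundler && \
    bundle config set --local without 'development test' && \
    bundle install

EXPOSE 3000

CMD ["bin/start"]
```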
All we need to do now is build our image. We can do so with the following command:
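Run this from the app’s root folder; the tag `rails-k8s-demo` is a placeholder:

```shell
# Build the image from the Dockerfile in the current directory.
docker build -t rails-k8s-demo .
```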
(I am assuming you already have Docker installed on your machine; if not, you can install it by following the instructions here.)
Notice that by default, the docker build command looks for a file named ‘Dockerfile’ in the specified directory; in this case, the current directory (.).
First things first, for production deployments, we would usually use a cloud service that provides us with a Kubernetes engine, like Google Cloud, AWS, etc. But, for this tutorial, we will be using Minikube, a tool for running Kubernetes locally.
If you are using a cloud service, for the Kubernetes engine to be able to pull our Rails image, you need to push the image to your cloud provider’s registry (a registry is like a repository for Docker images). Each cloud provider has its own registry (Google Cloud, AWS, etc.). To push images to your registry, you first need to authenticate with the docker login command; after that, you only need to push your image using docker push. That way, your cloud Kubernetes engine will be able to retrieve and run your image.
With Minikube, there is no registry to configure; we can load the image into the cluster directly.
(See here for instructions on how to install Minikube)
First, we start our Minikube Kubernetes engine:
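That is:

```shell
# Start a local single-node Kubernetes cluster.
minikube start
```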
This will run a Kubernetes engine in the background.
Now, we just push our image to Minikube. There are several ways to do this (see here), but we will use the following command:
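Assuming the image was tagged `rails-k8s-demo` when it was built:

```shell
# Load the locally built Docker image into the Minikube cluster.
minikube image load rails-k8s-demo
```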
To deploy an image to Kubernetes you need a configuration file like the one below. It basically tells Kubernetes how to orchestrate your application. In this case, this is the configuration file we will be using, and I will break down the key properties.
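A minimal configuration along these lines; all names and labels are placeholders, and `replicas: 2` gives us the two instances of the application:

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-k8s-demo
spec:
  replicas: 2                      # run two instances of the app
  selector:
    matchLabels:
      app: rails-k8s-demo          # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: rails-k8s-demo
    spec:
      containers:
        - name: rails-k8s-demo
          image: rails-k8s-demo
          imagePullPolicy: Never   # use the image loaded into Minikube,
                                   # don't try to pull from a registry
          ports:
            - containerPort: 3000  # the port the Rails server listens on
```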
All that’s left now is to create our deployment using the kubectl tool (see here for installation instructions).
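Assuming the configuration above was saved as `deployment.yaml`:

```shell
# Create (or update) the Deployment described in the YAML file.
kubectl apply -f deployment.yaml
```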
By default, Kubernetes resources are not available outside the Kubernetes cluster. To make them available you need to route incoming cluster requests to your Deployments or Pods resources. There are several types of services Kubernetes offers to achieve this, like LoadBalancer, NodePort, Ingress, etc. (see here).
For our case, as we have two instances of our application running, we need a LoadBalancer so that requests are routed to both containers. We can expose our deployment outside the cluster with the following command:
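Assuming the deployment is named `rails-k8s-demo` as in the YAML above:

```shell
# Create a LoadBalancer Service that routes external traffic
# on port 3000 to the deployment's replicas.
kubectl expose deployment rails-k8s-demo --type=LoadBalancer --port=3000
```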
This command creates a Kubernetes Service resource that exposes our app externally. You can choose any port number you want; Kubernetes will route requests to the ports specified in the deployment YAML file.
Finally, our cluster needs a public IP to start receiving requests. You can read more about it on your cloud service documentation, e.g., Configuring Domain Name Static IP for Kubernetes on Google Cloud. With Minikube, things are a lot simpler, we only need to run the following command:
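That is:

```shell
# Create a network route so LoadBalancer services get a reachable IP
# on the host machine (keep this running in its own terminal).
minikube tunnel
```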
It basically exposes the Kubernetes cluster on the localhost (127.0.0.1) IP address. Make sure you are not running bin/rails s on your machine and that port 3000 is free before running the minikube tunnel command.
And that’s it, you should be able to go to http://localhost:3000/health_check on your browser and see the same response we saw at the beginning of this tutorial:
Kubernetes is a great option for deploying applications to production environments. It eases and speeds up the delivery process, saving us a lot of time on software releases. With the proper CI/CD integration, we could deploy applications just by pushing a commit to our Git repository. We can also easily increase the number of replicas of our load-balanced applications during traffic peaks, and decrease them again when concurrency is low. Most robust software systems have this capability to scale on demand.