FullStack Labs

Small is Beautiful: How Container Size Impacts Deployment and Resource Usage

Written by Ogo Ozotta, Senior DevOps Engineer

Containers have become a popular choice for deploying applications due to their flexibility and scalability, but larger containers can impact deployment time, resource usage, scalability, and cost. By using minimal base images, reducing dependencies, removing unused files, using multi-stage builds, and dividing applications into smaller microservices, developers can optimize container size and minimize its impact on deployment and resource usage.

Optimizing Container Sizes 

Containerization has become a popular way to deploy and manage applications in software development. As businesses look for ways to optimize their operations and reduce their environmental footprint, one area that's come under scrutiny is the size of the containers used for deployment. Containers package applications so they can run in a lightweight, portable manner across a variety of settings, from development to production. One aspect of containerization that is frequently overlooked, however, is the size of the containers themselves. In this blog post, we will discuss why container size matters for fast deployment and resource optimization.

What is an Image, Container, and Dockerfile?

An image is a lightweight, standalone, executable package that contains all the necessary code, libraries, system tools, and runtime to execute an application. An image is created by assembling a set of instructions, usually through a Dockerfile, that specifies how to build the image. Images are static, and they can be shared and run on any system that supports Docker.

A container is a lightweight, portable, and self-contained executable package that is created from an image. A container is a running instance of an image that can be launched and managed on a host system. Containers are designed to be highly portable, and they can be run on any system that supports containerization.

A Dockerfile is a script used to build a Docker image. It contains a set of instructions for building an image, including the base image to use, the packages to install, and the commands to run. The Dockerfile is used to create a reproducible image that can be shared and run on any system that supports Docker.
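To make these three concepts concrete, here is a minimal, hypothetical Dockerfile for a Node.js application (the file names and commands are illustrative):

```dockerfile
# Minimal, hypothetical Dockerfile for a Node.js application.
# Base image: the starting point for every layer that follows.
FROM node:14-alpine
# Working directory inside the image.
WORKDIR /app
# Copy the dependency manifests first so this layer stays cached
# until package.json changes.
COPY package*.json ./
# Install dependencies into the image.
RUN npm install
# Copy the application source code.
COPY . ./
# Default command executed when a container starts from this image.
CMD ["npm", "start"]
```

Running docker build -t my-app . assembles these instructions into an image; docker run my-app then launches a container, which is a running instance of that image.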

Container size has a significant impact on resource utilization and deployment in a number of areas, including: 

  1. Time to Deploy: Larger images take longer to pull and start, so updating or rolling back an application takes longer. 
  2. Resource Usage: Larger containers consume more CPU (central processing unit) and memory, which can affect the performance of the host system. 
  3. Scalability: Containers are designed to scale horizontally, so many instances of the same container can run at once to handle increased traffic. Larger containers, however, need more resources per instance, which makes scaling more difficult. 
  4. Cost: Because larger containers may need more resources, running the application can become more expensive.

What is “Container size”? 

First, let's explain what "container size" means. The container size describes how much memory and disk space a container needs to function. Smaller containers demand less memory and disk space, which can significantly affect deployment speed and resource utilization. 

Managing the resources that containers need is one of the main challenges of deploying applications in containers. Smaller images are quicker to download and transfer, which means they can be deployed faster. Oversized containers may take longer to deploy, use more memory, and require more CPU power, which can hurt performance and raise costs. On the other hand, containers that are too small might not have enough resources to run the application, leading to unstable and slow performance. For example, if you're deploying an application to hundreds or thousands of servers, shaving a few seconds off the deployment time can add up to hours or even days of saved time.

It's crucial to optimize container sizes for the application being deployed in order to achieve a balance between these two extremes. This entails examining the resource needs of the application and identifying the ideal balance between container size and performance.

Best Practices for Container Size Optimization 

Consider the following best practices to optimize container size and reduce its effect on deployment and resource usage:

  1. Use Small Base Images: Instead of starting with a full Linux distribution when developing a Python web application, for instance, use a minimal base image like Alpine. This can reduce the container's size from about 1GB to 400MB or less, as shown in the example below. 
  2. Reduce Dependencies: Only include the libraries your program actually needs. For example, if you're building a Node.js application that only needs Express and Mongoose, include just those dependencies. This can reduce the container's size from about 200MB to 50MB or less.
  3. Remove Unused Files: If your container contains files that aren't required for the program to run, remove them. For instance, temporary files or log files are often unnecessary and can add several megabytes to the container's size. 
  4. Use Multi-Stage Builds: Keep the build environment and the runtime environment separate with multi-stage builds. By including only the files required at runtime, the final image can be much smaller. For instance, after compiling a binary in a build stage, you can copy that binary into a smaller base image for the runtime environment, reducing the container's size from several hundred megabytes to less than 50MB.
  5. Use Microservices: Divide your program into smaller, easier-to-manage microservices, each with its own container. This can improve scalability and optimize resource utilization. For instance, a monolithic program consisting of a web server, a database, and a cache can be split into smaller microservices, each in its own container, decreasing the size of the individual containers and improving scalability.
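As a sketch of that last point, a hypothetical docker-compose.yml could split such a monolith into three services, each running in its own container (the service names and image tags are illustrative):

```yaml
# Hypothetical docker-compose.yml: the monolith's web server, database,
# and cache each become a separate service in its own container.
version: "3.8"
services:
  web:
    build: ./web           # the application's web server, built locally
    ports:
      - "8080:8080"
    depends_on:
      - db
      - cache
  db:
    image: postgres:15-alpine   # small Alpine-based database image
  cache:
    image: redis:7-alpine       # small Alpine-based cache image
```

Each service can now be sized, scaled, and updated independently of the others.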

Sample Dockerfile and corresponding size:

In the first example, the resulting image size is 1.13GB. To reduce the image size, the base image will be changed.


FROM node:14 
WORKDIR /app
COPY . ./
ENV CI=true 
ENV PORT=8080 
RUN npm install
RUN npm run lint
RUN npm install --save-dev --save-exact prettier 
RUN npm run prettier
RUN npm run test
RUN npm run build
CMD ["npm", "start"]

In the example below, the base image has been changed to Alpine, reducing the image size by roughly 70%. The resulting image size is 334MB.


FROM node:14-alpine 
WORKDIR /app
COPY . ./
ENV CI=true 
ENV PORT=8080 
RUN npm install
RUN npm run lint
RUN npm install --save-dev --save-exact prettier 
RUN npm run prettier
RUN npm run test
RUN npm run build
CMD ["npm", "start"]

To further optimize the container, the number of layers in the image can be reduced by combining the RUN commands in the Dockerfile, as shown below:


FROM node:14-alpine 
WORKDIR /app
COPY . ./
ENV CI=true 
ENV PORT=8080 
RUN npm install \
    && npm run lint \
    && npm install --save-dev --save-exact prettier \
    && npm run prettier \
    && npm run test \
    && npm run build
CMD ["npm", "start"]

The image layers can affect container performance and runtime in the following ways:

  1. Startup time: The more layers an image has, the longer the container can take to start, because each layer must be pulled from the registry and extracted. This can be noticeable if you have many small layers, as the overhead of handling each layer adds up.
  2. Disk usage: Each layer in an image adds to the size of the final image, which in turn affects the disk usage of the container host. This can become a problem if you have many images or limited disk space.
  3. Caching: Docker caches intermediate layers to speed up image building, but this can cause issues if you're not careful. A cached layer can become stale: for instance, a cached RUN apt-get update layer will not re-run even though the package index has since changed, which can lead to unexpected behavior in your container.
  4. Security: Each layer in an image represents a potential attack surface, so minimizing the number of layers can improve the security of your container. Each layer can contain vulnerabilities or outdated packages that could be exploited.

Overall, it's important to carefully consider the number and size of layers in your Docker images to optimize performance and security. By minimizing the number of layers and keeping them as small as possible, you can improve the performance and security of your containers.
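To see how the layers of a local image contribute to its size, you can inspect it with the Docker CLI. This assumes a Docker daemon is running and an image tagged my-app exists locally; the image name is illustrative:

```
# List every layer of the image with the size each layer adds.
docker history my-app:latest

# Show the total size of each matching local image.
docker image ls my-app
```

Comparing the output before and after combining RUN commands makes the layer savings visible.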

One way to optimize container size is to use a technique like Docker's multi-stage builds. Starting from a lightweight base image and adding only the dependencies and packages the application needs, developers build the container in phases. With this method, developers can minimize the size of the finished container while guaranteeing that it contains everything required to run the application. 
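As a sketch of this approach, the following hypothetical multi-stage Dockerfile builds a Node.js front end in one stage and copies only the build output into a small nginx runtime image (the paths and image tags are assumptions):

```dockerfile
# Stage 1: build environment with the full Node.js toolchain.
FROM node:14-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
RUN npm run build

# Stage 2: runtime environment. Only the compiled output from the
# builder stage is copied in; the build toolchain is left behind.
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
```

Because the final image is based on nginx:alpine rather than the full Node.js image, it ships without npm, the source tree, or node_modules.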

Utilizing container orchestration solutions like Kubernetes is another way to optimize container resource usage. Kubernetes offers auto-scaling, which lets workloads scale up or down dynamically depending on load. This keeps resource usage efficient and prevents containers from consuming more resources than they should.
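As an illustration, a hypothetical Kubernetes HorizontalPodAutoscaler manifest like the following would scale a Deployment named my-app between 2 and 10 replicas based on average CPU utilization (the names and thresholds here are assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app        # the Deployment whose replica count is managed
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target average CPU across replicas
```

Smaller containers make this kind of scaling cheaper, since each added replica pulls and starts faster and consumes fewer resources.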

Small container sizes have advantages beyond quick deployment and resource optimization. Smaller containers, for instance, are simpler to handle and move between environments. Additionally, they facilitate the implementation of a microservices architecture, in which larger applications are divided into more manageable and independently deployable services.

Conclusion

In conclusion, small container sizes are crucial for deploying contemporary applications. By optimizing container sizes, developers can speed up deployment, use fewer resources, and keep applications running reliably and efficiently. Taking the time to optimize container sizes is well worth the effort, whether you're deploying a complex business system or a small application.


Written by Ogo Ozotta

Ogo has a Master’s Degree in Engineering with a track record of success in code and script generation. She enjoys working with different technologies like DevOps, Linux, and AWS. Ogo has also developed automated CI/CD pipelines that maximize application efficiency.
