Getting The Most Out Of Docker
Docker is the industry leader in the world of containerization. While the concept of containers is not new, creating them used to be hard: the operating-system-level complexities involved were very challenging. Docker changed that by handling those complexities for you when launching a container. It is now super easy for developers to build, manage, scale, and deploy their applications securely using Docker.
In this article, we will go through best practices for using Docker containers. Following them will ensure you get the most out of Docker, including better security, faster application startup, faster deployments, fault tolerance, and more.
Let's take an in-depth look at best practices for Docker containers:
#Container Images
- Make sure you use an official and verified base image for your particular programming environment.
For example, if you are building a React application, then instead of taking a base OS image and installing Node.js, NPM, etc. yourself, you should use the official Node.js image as your base. An official and verified image is built with best practices, resulting in a cleaner Dockerfile.
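As a sketch of the difference (the image names and versions below are examples, not a prescription): the commented-out lines install the runtime by hand on a generic OS image, while the uncommented lines start from the official image instead.

```
# Avoid: start from a bare OS image and install the runtime yourself
# FROM ubuntu:22.04
# RUN apt-get update && apt-get install -y nodejs npm

# Prefer: start from the official, verified Node.js image
FROM node:18
WORKDIR /app
COPY . .
RUN npm ci && npm run build
```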
- Use a specific version of the Docker image.
Instead of always using the image with the latest tag, use the specific version that suits your application. If you do not specify a tag, Docker defaults to latest, so you must spell out the exact version you want.
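A minimal illustration (the version number is only an example):

```
# Avoid: no tag means "latest", which can point to a different image tomorrow
# FROM nginx

# Prefer: pin the exact version you have tested against
FROM nginx:1.25.3
```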
- Use smaller images.
When selecting a base image, you will see many variants of it, each packaging its own operating system distribution and set of tools. Try to use a smaller image instead of a full-blown operating system distribution. That reduces the time needed to transfer the image and improves security as well, since a smaller image means a reduced attack surface.
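For instance, most official images publish slimmer variants of the same release; which one fits depends on what your application needs at run time (tags below are examples):

```
# Full Debian-based image: convenient, but large
# FROM node:18

# Slimmer variants of the same release: smaller to transfer, smaller attack surface
FROM node:18-alpine
# or: FROM node:18-slim
```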
- Use caching intelligently for Docker image layers when building images.
By default, Docker reuses cached image layers as much as possible, which makes builds much faster. Because the cache is evaluated over the Dockerfile instructions from top to bottom, write the instructions that are least likely to change near the top, and the instructions that change frequently at the bottom, as in the sketch below.
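A sketch of that ordering for a Node.js application (file names follow the usual npm conventions): the dependency manifests change rarely, so they are copied and installed first; the source code changes often, so it is copied last. As long as package.json is unchanged, the install layer is served from the cache.

```
FROM node:18-alpine
WORKDIR /app

# Changes rarely: copy only the dependency manifests and install first,
# so this expensive layer can be reused from the build cache
COPY package*.json ./
RUN npm ci

# Changes often: copy the application source last
COPY . .

CMD ["node", "server.js"]
```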
- Use .dockerignore for sensitive files and configuration.
You do not want any API keys, secrets, etc. to end up in the built container image, and you do not want to include unnecessary folders either, e.g. the build folder, readme files, etc. The best way to achieve this is a .dockerignore file: just list the files and folders in it, and Docker will do the rest.
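A typical .dockerignore might look like this (the exact entries depend on your project):

```
# .dockerignore
.git
node_modules
build/
*.md
.env
secrets/
```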
- Make use of multi-stage builds.
Multi-stage builds are helpful when some libraries are only needed while building the image but do not need to be part of the final image. A multi-stage build lets you use temporary build stages for compilation and still produce a final, lean image without those build dependencies included. It also improves security, as the final image exposes a smaller attack surface.
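A sketch of a two-stage build for a front-end application: the first stage needs Node.js and every build dependency, while the final image only needs a web server and the static files (image tags and paths are examples).

```
# Stage 1: build the application with all build-time dependencies
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: final lean image containing only the built artifacts
FROM nginx:1.25-alpine
COPY --from=build /app/build /usr/share/nginx/html
```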
- Tag your container images.
Use “stable” and “unique” tags to manage your images: stable tags for the base images your containers are built from, and unique tags (for example, a build number or commit hash) for the images you deploy.
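For example, the image you deploy could carry a unique tag derived from the commit that produced it (the registry name is a placeholder):

```
# Unique, traceable tag for the image you deploy
docker build -t registry.example.com/myapp:$(git rev-parse --short HEAD) .
docker push registry.example.com/myapp:$(git rev-parse --short HEAD)
```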
- Build one image for all environments.
Suppose you have multiple environments such as staging, QA, UAT, and production. Instead of creating a separate image for each environment, create one image and use it for all of them. The environment-specific configuration is picked up from .env files at run time and is not part of the image itself. That results in consistency and improved testing of your application.
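One way to do this is to run the same image everywhere and only vary the env file you pass at start-up (file names, tag, and registry are illustrative):

```
# Same image in every environment; only the configuration differs
docker run --env-file ./staging.env    registry.example.com/myapp:1.4.2
docker run --env-file ./production.env registry.example.com/myapp:1.4.2
```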
- Use fixed labels for immutability.
Do not push newer versions to the same image tag; that results in inconsistent images from one build to the next and makes bugs and fixes hard to track. Use an immutable (static) tag or label in production environments to ensure your deployment does not change automatically if someone pushes a different image under the same tag.
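To make a production deployment fully immutable, you can go one step further and reference the image by its digest, which can never be repointed. A sketch, assuming the image has been pushed to or pulled from a registry (the digest is a placeholder):

```
# Resolve the digest of the tag you validated
docker inspect --format '{{index .RepoDigests 0}}' registry.example.com/myapp:1.4.2

# Deploy by digest so the running image can never change under the same reference
docker run registry.example.com/myapp@sha256:<digest-from-the-command-above>
```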
#General
- Separate configuration from code.
Make sure that settings and environment-specific configuration are not part of the code itself. The container picks its configuration up from an external source (a .env file, a configuration store, etc.) at run time, so the application configuration must not be part of the image itself.
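In practice, that means injecting configuration when the container starts rather than hard-coding it or baking it in at build time (the variable names and values are examples):

```
# Configuration is supplied at run time, not baked into the image
docker run \
  -e DATABASE_URL="postgres://db.example.com:5432/app" \
  -e LOG_LEVEL="info" \
  registry.example.com/myapp:1.4.2
```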
- Run stateless containers.
Docker containers are designed to execute immutable code, which means they should not be used to store any data themselves. Make sure you use external, portable storage such as Amazon Elastic Block Store (EBS) volumes or a managed database like Amazon RDS instead. That is what enables you to scale your containers out.
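For example, the application container stays stateless by pointing at external storage, and anything that must persist locally lives on a named volume that outlives the container (endpoints and names are placeholders):

```
# The application container holds no state; data lives in an external database
docker run -e DATABASE_URL="postgres://<your-rds-endpoint>:5432/app" \
  registry.example.com/myapp:1.4.2

# If you self-host the database, keep its data on a named volume, not inside the container
docker run -v pgdata:/var/lib/postgresql/data postgres:15
```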
- Use one application per container.
Compared to VMs, containers are lightweight, and multiple applications should not run in a single container. For example, if you have a MERN stack application, you should have one container for MongoDB, one for the Node.js/Express API, and one for the React front end.
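With the MERN example above, that could look like three separate containers talking to each other over a user-defined network (image names and versions are placeholders):

```
docker network create mern-net

# One process per container
docker run -d --name mongo    --network mern-net mongo:6
docker run -d --name api      --network mern-net -e MONGO_URL="mongodb://mongo:27017/app" myorg/api:1.0.0
docker run -d --name frontend --network mern-net -p 80:80 myorg/frontend:1.0.0
```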
- Use a linter to ensure Docker best practices.
A linter is a static code analysis tool that checks your Dockerfile against known best practices. Using one ensures that the images you produce are built the right way.
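Hadolint is one widely used Dockerfile linter, and it can be run without installing anything locally by using its own container image (invocation as documented by the Hadolint project):

```
# Lint a Dockerfile with Hadolint from its official image
docker run --rm -i hadolint/hadolint < Dockerfile
```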
#Security
Container security is a vast area. You can find our detailed post on container security here. Let’s summarize the best practices related to Docker security below:
- Always use signed images or a Docker image with a valid checksum.
Always verify images before pulling them, as sketched below.
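One way to enforce this on the Docker CLI is Docker Content Trust, which makes docker pull refuse images that are not signed; alternatively, pull by digest so the content is pinned to a known checksum (the digest below is a placeholder):

```
# Refuse unsigned images for this shell session
export DOCKER_CONTENT_TRUST=1
docker pull nginx:1.25.3

# Or pin the image content explicitly by digest
docker pull nginx@sha256:<expected-digest>
```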
- Do not use a third-party repository if its URL does not support HTTPS.
- Use the least privileged user for containers.
By default, the container application is started as the root user. Instead, you should run the application as a less privileged user explicitly created for that container.
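A minimal sketch of creating and switching to a dedicated user in the Dockerfile (the user and group names are arbitrary; the addgroup/adduser flags shown are the BusyBox/Alpine variants):

```
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm ci --omit=dev

# Create a dedicated unprivileged user and run the application as that user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

CMD ["node", "server.js"]
```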
- Scan your container images, code repository, and container hosts for any vulnerabilities.
It is best to automate this scanning and integrate it into the CI/CD pipeline that builds your Docker images.
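For example, with an open-source scanner such as Trivy (assuming it is installed, or run via its container image), a CI step can fail the build when serious vulnerabilities are found:

```
# Scan a built image and fail the pipeline on high/critical findings
trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/myapp:1.4.2
```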
- By default, new containers are allowed to acquire new privileges, so explicitly set the configuration to disallow containers from gaining new privileges.
- Do not run a Docker container with the --privileged flag; otherwise, the container will have elevated access to the underlying host, compromising security.
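A sketch of both points on the docker run command line (the image reference is a placeholder):

```
# Prevent the container from gaining additional privileges at run time
docker run --security-opt=no-new-privileges registry.example.com/myapp:1.4.2

# Avoid this: --privileged gives the container almost unrestricted access to the host
# docker run --privileged registry.example.com/myapp:1.4.2
```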
#Conclusion
Docker has changed the way applications are developed and deployed, and it has made it very simple to containerize your application by encapsulating all the complexities of running containers. In this article, we discussed best practices for Docker container images, security, and general guidelines to build and run your containers efficiently. Adopting these measures will let you take advantage of all the benefits of Docker.
Qovery can help you get all the benefits of Docker containers without having to deal with the complexities mentioned above. Qovery has great support for Docker containers and strong integration with AWS EKS, which lets you get the most out of Docker containers on your AWS account with simplicity and ease. Start deploying containerized applications with Qovery!
Your Favorite DevOps Automation Platform
Qovery is a DevOps Automation Platform Helping 200+ Organizations To Ship Faster and Eliminate DevOps Hiring Needs
Try it out now!