Basics of Docker
In the world of DevOps, moving fast while keeping code quality high is paramount. Traditionally, many organizations subscribed to a monolithic approach in which all components were tightly coupled and carried many internal dependencies. Because of this tight integration, every change required extensive testing and coordination across multiple teams. It could take months before a single change deployed to production. To move fast while staying efficient, we need a better way.
One way to speed up development is to break up that monolithic application and rely on a distributed model built from microservices. Under a microservices architecture, each piece of software is a standalone, independently deployable component, isolated down to the operating-system level. Containers are a technology that greatly assists both microservices architectures and DevOps practices.
Although containers have been around for many years, they were traditionally hard to use until a company called Docker came along. Docker simplified container management and provided a much easier way to work with containers.
Docker, for DevOps purposes, is a tool that encapsulates a microservice (or any application, for that matter) and bundles it with all of its infrastructure dependencies. This packaged application runs in a container.
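To make this concrete, here is a minimal sketch of what such a package definition could look like. This is a hypothetical Dockerfile for a small Python application; the file names `app.py` and `requirements.txt` and the `python:3.12-slim` base image are assumptions chosen for illustration, not something prescribed by Docker itself.

```dockerfile
# Hypothetical example: bundle a small Python app with its dependencies.
# Assumes app.py and requirements.txt exist in the build context.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code.
COPY app.py .

CMD ["python", "app.py"]
```

Built with `docker build -t myapp .` and started with `docker run myapp`, the resulting container carries the same Python version and libraries on every machine that runs it, which is exactly what makes the package portable.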
Using Docker, a DevOps pro can quickly create and deploy an entire application along with its operating-system dependencies, eliminating the classic "It works on my machine" excuse. A container can start, run its code, get tested, and get deleted in a matter of minutes. Think of how efficient your software delivery process could be with that much time saved!
There are three major components of Docker you need to understand: the Docker Engine, Docker images, and Dockerfiles. Let's first cover these main components and then dig into the project for this chapter.