Let’s say you’ve played the container game for a while now and you get the gist of how Docker works. You’ve brought up a few containers and have since found yourself with dozens or even hundreds of containers to manage in production. You now have a new problem: the challenge is no longer learning how to build images and run containers, it’s orchestrating them.
Containers, by themselves, have limited scalability in any environment. The Docker engine, for example, can create containers, but it can’t automatically maintain a set of identical replicas of a container. Once you learn more about containers and attempt to manage them at scale, you’ll run into issue after issue like this one.
You need a way to orchestrate containers at scale by incorporating redundancy, automation, and resiliency. For example, when you create a Docker container and it fails, every service that container provides fails with it unless you have orchestration of some kind. Nothing accounts for the failure or takes appropriate remediation steps; the container stays down until you restart it or re-create it.
Kubernetes, often abbreviated as K8s, is a popular solution to the container orchestration problem.
Kubernetes is an orchestration system that takes care of scaling containers and provides a capability called self-healing. When a container managed by Kubernetes fails, or the application the container serves needs more resources, Kubernetes can automatically start a replacement container or bring up additional containers to handle the load. Kubernetes takes the manual work out of managing containers and redundancy.
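As a small sketch of what that looks like in practice, the Deployment manifest below asks Kubernetes to keep three identical replicas of a container running; if one fails, Kubernetes automatically starts a replacement. The names (`web`, `nginx:1.25`) are illustrative, not prescribed by this chapter.

```yaml
# Minimal Deployment manifest (illustrative names).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # Kubernetes keeps three identical Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # any container image works here
```

You don’t need to understand every field yet; the key idea is that you declare the desired state (three replicas) and Kubernetes continuously works to maintain it.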
In this chapter, you will take a hands-on approach to learn what Kubernetes is and why it’s useful. You’ll get a solid overview of Kubernetes and its components and learn how to build a Kubernetes cluster in the chapter project.
In this chapter’s project, you will install and configure the Kubernetes command-line tool (kubectl) to interact with the Kubernetes API. You will then create a local Kubernetes cluster using Minikube, a tool that runs a small Kubernetes cluster on your local machine for development and testing. Following that, you will learn how to create and manage containers inside Kubernetes.
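As a rough preview of that workflow, the commands below assume kubectl and Minikube are already installed (the project walks through installation step by step); the deployment name `hello` is illustrative.

```shell
# Start a local cluster (downloads supporting images on the first run)
minikube start

# Verify kubectl can reach the cluster's API
kubectl get nodes

# Run a container inside the cluster
kubectl create deployment hello --image=nginx
kubectl get pods
```

Don’t worry about memorizing these yet; you’ll run each one in context during the chapter project.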
Let’s get going!
Even though Kubernetes is a wildly popular container orchestration product, its complexity is infamous. There’s no possible way to cover the entire Kubernetes ecosystem in a single chapter, so we’ll hit the basics, covering the main concepts here to prime you for the hands-on learning you’ll do in the chapter’s project.
Kubernetes clusters run on one or more servers of nearly any kind, from a Windows 10 laptop to a Linux or Windows Server virtual machine to a Raspberry Pi. Regardless of where Kubernetes is running, a cluster typically has at least two nodes: a single master node and at least one worker node. (Test environments such as Minikube can run everything on a single node that plays both roles.) Nodes are Kubernetes resources that either control container orchestration (the master node) or run the containers (worker nodes).
The single master node is the core of a Kubernetes cluster. Master nodes control and manage all Kubernetes resources including networking, scheduling, container locations, and container self-healing procedures.
Think of the master node like the brain. The brain plays a crucial part in movement, awareness, thought, sensation, and more. It controls almost everything. This is much like the master node in a Kubernetes cluster.
The master node is a server hosting an API that you, the administrator, and the other Kubernetes components call upon to manage resources.
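You can see that API directly once you have a running cluster. The commands below are a hedged sketch: `kubectl proxy` opens a local tunnel to the master node’s API server, and a plain HTTP request then returns information straight from the API.

```shell
# Open a local tunnel to the cluster's API server (runs until stopped)
kubectl proxy --port=8001 &

# Ask the API server for its version information over plain HTTP
curl http://127.0.0.1:8001/version
```

Everything kubectl does, it does through requests like this one; kubectl is simply a friendly client for the master node’s API.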
The API hosted on the master node varies with the Kubernetes version. You’ll see later, and in the real world, that managed solutions often offer only specific versions of Kubernetes. These vendors pin their offerings to particular versions because they haven’t yet updated their own capabilities to the newest Kubernetes API version.
You’ll see in this chapter that Kubernetes is heavily dependent on APIs.
Worker nodes, sometimes just called nodes, are where Kubernetes hosts nearly all of its resources (networking, containers, etc.). Worker nodes are where the work (no pun intended) gets done.
Worker nodes contain the various resources that the master node controls; the master node serves as the conductor of the worker node symphony. Each worker node stores resources like containers, networking configuration, Pods, and many others. The worker nodes are also the primary communication path between you and the applications running in Kubernetes.
Below you can see an example of a basic Kubernetes cluster architecture.