Scaling Applications Using Containers (II)

Learn how to deploy and scale a simple Node.js web server on Kubernetes (using minikube).

What is Kubernetes?

We just ran a Node.js application using containers, hooray! Even though this seems like an exciting achievement, we have only scratched the surface here. The real power of containers emerges when we build more complex applications; for instance, applications composed of multiple independent services that need to be deployed and coordinated across multiple cloud servers. In this situation, Docker alone is no longer sufficient. We need a more sophisticated system that allows us to orchestrate all the running container instances over the available machines in our cloud cluster: we need a container orchestration tool.

A container orchestration tool has a number of responsibilities:

  • It allows us to join multiple cloud servers (nodes) into one logical cluster, where nodes can be added and removed dynamically without affecting the availability of the services running on the cluster.

  • It makes sure that there’s no downtime. If a container instance stops or becomes unresponsive to health checks, it’ll be automatically restarted. Also, if a node in the cluster fails, the workload running in that node will be automatically migrated to another node.

  • It provides functionalities to implement service discovery and load balancing.

  • It provides orchestrated access to durable storage so that data can be persisted as needed.

  • It provides automatic rollouts and rollbacks of applications with zero downtime.

  • It provides secret storage for sensitive data and configuration management systems.

One of the most popular container orchestration systems is Kubernetes, originally open sourced by Google in 2014. The name Kubernetes originates from the Greek “κυβερνήτης” meaning “helmsman” or “pilot,” but also “governor” or more generically, “the one in command.” Kubernetes incorporates years of experience from Google engineers running workloads in the cloud at scale.

One of its peculiarities is the declarative configuration system that allows us to define an “end state” and let the orchestrator figure out the sequence of steps necessary to reach the desired state, without disrupting the stability of the services running on the cluster.

The whole idea of Kubernetes configuration revolves around the concept of “objects.” An object is an element of the cloud deployment that can be added or removed, and whose configuration can change over time. Some good examples of Kubernetes objects are:

  • Containerized applications

  • Resources for the containers (CPU and memory allocations, persistent storage, access to devices such as network interfaces or GPU, and so on)

  • Policies for the application behavior (restart policies, upgrades, fault tolerance)

A Kubernetes object is a sort of “record of intent,” which means that once we create one in a cluster, Kubernetes will constantly monitor (and change, if needed) the state of the object to make sure it stays compliant with the defined expectation.
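As a concrete illustration, a containerized application is usually described as a Deployment object in a YAML manifest. The sketch below shows the general shape of such a record of intent; the application name, image tag, and port are hypothetical placeholders, not values from this course:

```yaml
# deployment.yaml — a minimal Deployment object (a "record of intent"):
# "keep 3 replicas of this container running at all times"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web              # hypothetical application name
spec:
  replicas: 3                  # desired state: three identical pods
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: hello-web:1.0.0   # hypothetical image name and tag
          ports:
            - containerPort: 8080  # assumed port of the Node.js server
```

Once this object is created in the cluster, Kubernetes continuously reconciles the observed state against it: if a pod dies, a replacement is scheduled so that three replicas keep running.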

A Kubernetes cluster is generally managed through a command-line tool called kubectl.
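For a feel of what that interaction looks like, here are a few common read-only kubectl commands (the pod name in the last command is a hypothetical example):

```shell
# list the nodes that make up the cluster
kubectl get nodes

# list the pods running in the current namespace
kubectl get pods

# show detailed state and recent events for one pod (hypothetical name)
kubectl describe pod hello-web-pod
```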

There are several ways to create a Kubernetes cluster for development, testing, and production purposes. The easiest way to start experimenting with Kubernetes is a local single-node cluster, which can be created with a tool called minikube.

Deploying and scaling an application on Kubernetes

We’ll be running our simple web server application on a local cluster, so it’s important to make sure that kubectl and minikube are correctly installed and started.
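A quick sanity check might look like the following sketch (output details vary with your minikube and Kubernetes versions):

```shell
# start (or resume) the local single-node cluster;
# the first run downloads the cluster's base image
minikube start

# confirm that the cluster components are up
minikube status

# confirm that kubectl can reach the cluster and the node is Ready
kubectl get nodes
```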

The first thing that we want to do is build our Docker image and give it a meaningful name.
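For instance, assuming the project has a Dockerfile in the current directory, the build step might look like this; the image name and tag are placeholders, not the names used later in this course:

```shell
# build the image and give it a meaningful name and version tag
docker build -t hello-web:1.0.0 .

# make the locally built image available inside the minikube cluster
# (minikube runs its own container runtime, so images built on the
# host must be loaded into it explicitly)
minikube image load hello-web:1.0.0
```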
