What Problem Does Helm Solve?

Learn about the basics of Kubernetes and its application.

A brief introduction to Kubernetes

Kubernetes is one of the most popular tools used to build cloud solutions. Much of its popularity comes from the fact that it was released by Google as an open-source project, based on Google’s own battle-tested internal system called Borg.

In short, Kubernetes is a container orchestration platform that helps deploy and manage software in the cloud. It manages containerized applications (built with tools like Docker) by scaling them up and making sure they remain available. It also takes over the burden of managing server infrastructure by providing elegant abstractions.

These elegant abstractions are called Kubernetes objects and are represented in .yaml files. Here is an example of the smallest but most important object, called a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80

A Pod represents a running application and usually consists of a single container (e.g., a Docker container). In the snippet above, the Pod contains a container running a plain NGINX server.

Although it’s possible to operate on Pods directly, it’s not recommended. Instead, we can use a Deployment, which not only creates Pods but also manages multiple application instances. With a Deployment, we specify how many Pods we would like to have. If a container stops (for example, due to an error in the program), Kubernetes automatically starts a new one to make sure the number of running Pods matches what was set.

Here is an example of a Deployment manifest, again with a plain NGINX server, but this time with replicas set to 2, which means that we would like to have two instances:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80

A minimal application setup requires defining one more object: a Service. The Kubernetes API is built around the Unix philosophy that each object is designed to do one thing. As already mentioned, Pods wrap one or more containers, and Deployments manage their lifecycle. Services, on the other hand, have a different purpose.

They are designed to expose applications to other software, either within or outside the Kubernetes cluster. The reason is that when a Pod is created, it gets its own IP address. If there are two Pods of the same application, each of them receives a different IP address. If any other application wants to reach one of them, it needs to know the IP address of one or both Pods, which is very inconvenient. Another issue is that Pods can be destroyed and recreated at any time, and each time this happens they get a new IP address, forcing every application that was connected to the killed Pod to update the address it uses.

To address this problem, Services were introduced. They provide an abstraction that defines access to Pods, so the address of an application stays static and does not change when a Pod is killed.

An example of a Service is given below:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30001
  selector:
    app: nginx

The above manifest defines a Service of type NodePort. Its selector matches the app: nginx label on the Pods created by the Deployment above. There are a few other Service types, and each one was designed with a different purpose.
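
For comparison, here is a minimal sketch of a Service using the default ClusterIP type, which exposes the application only inside the cluster. The name nginx-internal is purely illustrative and not part of the examples above; the selector reuses the app: nginx label from the Deployment:

apiVersion: v1
kind: Service
metadata:
  name: nginx-internal   # illustrative name, not used elsewhere in this lesson
spec:
  type: ClusterIP        # the default type; reachable only from within the cluster
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: nginx           # matches the Pods created by the Deployment above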

Apart from Deployments and Services, applications installed on Kubernetes require other types of objects, such as the following:

  • Ingress: These are for managing external access to applications in a cluster.

  • Volumes: These represent a directory in which files and data can persist.

  • ConfigMap: This is used to store configuration and inject it into Pods, for example as environment variables (see the sketch after this list).

  • Secret: This is similar to a ConfigMap, but it’s used for more sensitive information like passwords.

  • Custom resource: This is used to extend the Kubernetes API and define an unlimited variety of new Kubernetes objects.
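
To illustrate the ConfigMap item above, here is a minimal sketch of a ConfigMap together with a Pod that consumes it as environment variables. The names nginx-config and nginx-with-config and the LOG_LEVEL key are hypothetical, chosen only for this sketch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config            # hypothetical name, used only in this sketch
data:
  LOG_LEVEL: "info"             # plain key-value configuration data
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-config
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    envFrom:
    - configMapRef:
        name: nginx-config      # injects every key of the ConfigMap as an environment variable

A Secret is consumed in much the same way, but its values are stored base64-encoded and are intended for sensitive data such as passwords.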

We don’t need to know what each of these objects does right now. Throughout the course, a number of them will be introduced.

Scaling Kubernetes applications

In real-world projects, teams tend to start with a small number of Kubernetes objects, but as the project grows, the number of YAML files grows with it. Even a small task can require touching many files, and the project becomes complex and hard to maintain.

The burden is even more significant in distributed systems, where the number of applications keeps growing.

Maintaining all of these YAML files by hand is a challenge that doesn’t scale well.

And this is where (though not the only place) Helm comes to the rescue!