Trusted answers to developer questions

Related Tags

kubernetes
communitycreator

What is Kubernetes core architecture?

Arsh Sharma


Why Kubernetes?

To understand why K8s exists, you first need to know a bit about how deployment with containers works.

Containers in Kubernetes

The simplest way to understand this is to imagine your containers running on computers located “somewhere”. This “somewhere” is generally referred to as “the cloud”, and services like the ones below simply provide you access to these computers.

  • AWS (Amazon Web Services)
  • Azure
  • Google Cloud

These computers can be best thought of as our remote hosting machines, where we can install Docker and run containers.

Problems with containerization

  1. The containers you run might shut down and will need to be replaced.

  2. If there is greater traffic, you might need to spin up more containers.

  3. You might also want to ensure that no single container is doing all the heavy lifting and that the load is distributed equally among all running instances.

K8s aims to solve all these problems. Those familiar with services like AWS ECS might argue that they play a similar role, so why bother with K8s?

While these services can solve our problem, you would have to learn that particular service, and if you wanted to switch to something else in the future, then you would have to learn that new service as well.

So, why not familiarize yourself with a standardized way that will work regardless of the provider you choose? That is simply why one would prefer K8s over these services.


You need some provider-specific setup with Kubernetes, too, but it is a lot less than what you would need when not using K8s.


What is Kubernetes?

The official K8s website describes Kubernetes as:


Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.


This explanation should make a lot of sense now after our discussion on why we should use K8s.

The gist is that it will make our lives easier by helping us with deploying containers, scaling them based on the traffic we receive, and overall management of our containerized application.
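As a concrete sketch of what “deploying and scaling” looks like in practice, a minimal Deployment manifest such as the one below asks K8s to keep three copies of a container running at all times (names like `my-app` and the `nginx` image are placeholders, not from this shot):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app             # placeholder name
spec:
  replicas: 3              # K8s keeps 3 pods running, replacing any that die
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25  # placeholder container image
```

If a pod crashes or a node goes down, K8s notices the actual count has dropped below 3 and creates a replacement, which is exactly the self-healing and scaling behavior described above.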

Kubernetes core architecture

When starting with Kubernetes, a lot of people get overwhelmed or confused by the way its architecture works. I will try to simplify this as much as possible in this shot.

I do recommend you check out the official documentation after reading this shot since, instead of technical accuracy, my aim is to simplify things so that you can understand the big picture.

[Diagram: Kubernetes core architecture]

Let’s analyze this chart from right to left.

Pods

The rightmost unit in the diagram is a pod. It can be described as the smallest unit in the world of K8s.

K8s doesn’t run containers directly. Instead, it uses these “pods” to wrap one or more containers. The containers in a pod share the same resources, and the pods themselves are created and managed by K8s.


In short: Imagine a pod as a wrapper for our container(s).
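A minimal Pod manifest, sketched below with placeholder names and images, shows the wrapper idea: two containers declared inside one pod share the pod’s network namespace and can share volumes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod           # placeholder name
spec:
  containers:
  - name: web              # both containers share the pod's network,
    image: nginx:1.25      # so they can reach each other via localhost
  - name: sidecar
    image: busybox:1.36
    command: ["sleep", "3600"]
```

In everyday use you rarely create bare pods like this; higher-level objects such as Deployments create and manage them for you.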

Worker nodes

A K8s cluster is nothing but a network of computers. The term “node” can be interpreted as a single computer in this network.

There are two kinds of nodes:

  1. Worker
  2. Master

Worker nodes host the pods that run the containers, as discussed above. Multiple pods running different containers can be present on the same Worker node.

A node is just a computer somewhere on the internet (offered by a cloud provider).

These nodes hold a certain amount of CPU and memory. Therefore, we can run totally different containers and tasks on them.

Apart from pods, three important things are present in Worker nodes:

  1. Docker: We need Docker to run the application containers.

  2. kubelet: This can be understood as an application that is responsible for communication between the Master and Worker nodes.

  3. kube-proxy: This handles network communications between the pods and network sessions inside or outside the entire K8s cluster.


In short: Just imagine a Worker node as a computer that has the required tools and pods.
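To make kube-proxy’s role more concrete, here is a hedged sketch of a Service manifest (names and ports are placeholders): the Service gives a stable address, and traffic sent to it is routed across all pods matching the label selector.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app             # placeholder name
spec:
  selector:
    app: my-app            # routes to every pod carrying this label
  ports:
  - port: 80               # port the Service exposes
    targetPort: 8080       # port the containers listen on
```

kube-proxy on each Worker node maintains the networking rules that make this routing work, which is the load-distribution problem mentioned earlier.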

The Master node

The final thing we need to talk about is the Master node. The Master node hosts the “Control Plane,” which can be understood as the brain of our K8s cluster. The control plane basically ensures that our K8s cluster is working as we configured it.

A few important things running in the Master node are:

  1. API Server: The most important service running on the Master node, and the counterpart for the kubelet introduced above. It is responsible for communication with the Worker nodes.

  2. Scheduler: It is responsible for watching our pods and choosing the Worker node on which each new pod should be created.

We need new pods in case a pod gets unhealthy and goes down, or because of scaling.

The scheduler is responsible for telling the API Server what to tell the Worker nodes.
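One input the scheduler uses when picking a Worker node is the resources a pod asks for. The fragment below (a hypothetical piece of a container spec, not from this shot) shows such a request; the scheduler compares it against the free CPU and memory on each node.

```yaml
# Fragment of a container spec inside a pod template:
resources:
  requests:
    cpu: "250m"      # a quarter of a CPU core
    memory: "128Mi"  # the scheduler only places the pod on a node
                     # with at least this much capacity available
```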

There are some other things present that you can look at in the official docs, but for now, this will suffice.


In short: The Master node is the brain of our K8s cluster.

Copyright ©2022 Educative, Inc. All rights reserved