
Deploying a web app on Kubernetes

8 min read
Oct 15, 2024
Contents
What’s the difference between Kubernetes and Docker?
Kubernetes vs. Swarm mode in Docker
What’s a Kubernetes cluster?
What’s fault tolerance in Kubernetes?
How to deploy a basic web app on Kubernetes
1. Creating Docker images
Explanation:
Building the Docker image
2. Setting up a Kubernetes cluster
3. Deploying the web app
Explanation:
4. Exposing the application to external traffic
Explanation:
Verifying the cluster
What to learn next?
Single deployment with Helm
Multiple deployments with Helm
Continue reading about Kubernetes

Key takeaways:

  • Kubernetes for app management: Kubernetes helps manage and scale web apps effectively, ensuring they can handle increasing user traffic and recover from unexpected issues.

  • Kubernetes vs. Docker: While Docker is focused on containerizing apps, Kubernetes goes further by orchestrating those containers across several machines, which is ideal for more complex setups.

  • Kubernetes cluster overview: A cluster is made up of multiple worker nodes, coordinated by a control plane, allowing for efficient scaling and resilience to system failures.

  • Key deployment steps: Deploying a web app on Kubernetes involves building Docker images, setting up a Kubernetes cluster, deploying the app, and then exposing it to users via services like a load balancer.

  • Resilience and fault tolerance: Kubernetes provides robust fault tolerance, ensuring applications continue running even if parts of the infrastructure fail.

  • Helm simplifies deployments: Helm, the package manager for Kubernetes, streamlines the deployment process with templates (charts) and supports managing both single and multiservice deployments.

In DevOps, Kubernetes is the term you'll hear most often right after Docker. To understand why, picture this: you've built and deployed a web app, but as your user base grows, you need to scale it up or down to keep it running smoothly and to recover quickly from unexpected failures.

This is where Kubernetes comes into play. It automates how containerized applications are deployed, scaled, and managed, so you can focus on your application rather than the infrastructure underneath it. In this blog, we'll uncover the magic behind Kubernetes.

Note: To see Kubernetes in action, check out this hands-on project on Deploying a Web Application Over Kubernetes to understand its deployment advantages better.

What’s the difference between Kubernetes and Docker?#

We should be very clear about the difference between Kubernetes and Docker. While Docker provides a way to containerize an application, Kubernetes takes things a step further by orchestrating those containers across multiple hosts, known as a Kubernetes cluster. Kubernetes is designed to handle complex, multi-container applications.

Kubernetes is not limited to Docker; it supports a range of container runtimes, including containerd, CRI-O, and others that are CRI-compliant, providing flexibility depending on the needs of the infrastructure.

A Practical Guide to Kubernetes

Kubernetes is a powerful container management tool that's taking the world by storm. This detailed course will help you master it. In this course, you'll start with the fundamentals of Kubernetes and learn what the main components of a cluster look like. You'll then learn how to use those components to build, test, deploy, and upgrade applications, as well as how to achieve state persistence once your application is deployed. Moreover, you'll also understand how to secure your deployments and manage resources, which are crucial DevOps skills. By the time you're done, you'll have a firm grasp of Kubernetes and the skills to deploy your own clusters and applications with confidence.

20hrs
Intermediate
3 Cloud Labs
72 Playgrounds

Kubernetes vs. Swarm mode in Docker#

Swarm mode in Docker offers a more straightforward option compared to Kubernetes. It's ideal for smaller-scale projects or teams that don't need Kubernetes' more advanced capabilities. While both Swarm and Kubernetes enable developers to deploy and manage containers across several nodes, Swarm's features for self-healing, load balancing, and automatic scaling are more limited.

What’s a Kubernetes cluster?#

Think of a Kubernetes cluster as the core of Kubernetes, made up of several worker machines, or nodes, all managed by a central system called the control plane. These nodes work together to run applications in containers. Each node hosts one or more pods, and these pods hold the containers where your applications live. The control plane is like the cluster's manager—it keeps everything running smoothly, decides where apps should run, scales them up or down as needed, and handles any failures.

The biggest advantage of using a Kubernetes cluster for deploying web apps is that it can easily handle changes in demand. By spreading applications across multiple nodes, Kubernetes ensures that your apps stay online, even if some parts of the system fail, keeping downtime to a minimum.
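
If you already have kubectl pointed at a cluster, a couple of standard commands make these pieces visible (the exact output depends on your setup):

# List the control-plane and worker nodes in the cluster
kubectl get nodes -o wide
# List the pods running across all namespaces
kubectl get pods --all-namespaces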

A Kubernetes cluster

What’s fault tolerance in Kubernetes?#

If we want to build a truly resilient application, Kubernetes offers fault tolerance capabilities that traditional Docker deployments simply can't match. These capabilities keep applications running smoothly even when individual nodes crash.

Note: You can learn more about how Kubernetes achieves this in the Fault-Tolerant Web Hosting on Kubernetes hands-on project. In this project, you'll explore how to maintain web services availability even when system components fail by leveraging Kubernetes' powerful features to deploy, scale, and manage containerized applications.
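
As a quick illustration of this self-healing behavior, assuming the my-flask-app Deployment from the walkthrough below is running with multiple replicas, you can delete its pods and watch Kubernetes immediately recreate them to restore the desired count:

# Delete the pods behind the Deployment; the controller notices the missing replicas
kubectl delete pod -l app=my-flask-app
# Watch replacement pods come up to restore the desired replica count
kubectl get pods -l app=my-flask-app -w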


How to deploy a basic web app on Kubernetes#

Deploying a web app on Kubernetes can be broken down into a series of key steps:

  1. Creating Docker images

  2. Setting up a Kubernetes cluster

  3. Deploying and managing the app within the cluster

  4. Exposing the application to external traffic

Let’s look at each step in detail with code examples:

1. Creating Docker images#

To deploy a web app on Kubernetes, we first need to containerize it by creating a Docker image. Here’s a basic example of a Dockerfile for a Python web app (using Flask):

# Use the official Python image
FROM python:3.9-slim
# Set the working directory
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install the necessary dependencies
RUN pip install -r requirements.txt
# Make port 5000 available to the world outside this container
EXPOSE 5000
# Define the command to run your app using gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]

Explanation:#

    • FROM specifies the base image.

    • WORKDIR sets the working directory inside the container.

    • COPY copies the application files to the container.

    • RUN installs the required Python dependencies.

    • EXPOSE opens the port that the app will run on.

    • CMD defines the command to run the Flask app.
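
The Dockerfile above assumes the project directory contains a requirements.txt listing flask and gunicorn, plus an app.py exposing a Flask object named app (that's what the app:app argument to gunicorn refers to). A minimal sketch of such a file, purely as an assumption for this example, might look like:

# app.py -- minimal Flask app matching the "app:app" target in the Dockerfile
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Kubernetes!"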

Building the Docker image#

Once the Dockerfile is ready, we build the Docker image:

docker build -t my-flask-app .

This command creates a Docker image named my-flask-app. After building the image, we can test it locally using Docker:

docker run -p 5000:5000 my-flask-app
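
With the container running, a quick request from another terminal confirms the app responds before we move it into a cluster:

# The app should answer on the published port
curl http://localhost:5000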

2. Setting up a Kubernetes cluster#

To run our Dockerized app on Kubernetes, we need a cluster. We can set up a local cluster using minikube or use a managed service like Google Kubernetes Engine (GKE).

For example, to create a Kubernetes cluster on minikube, we need to do the following:

minikube start --cpus=4 --memory=8192

This command starts a minikube cluster with 4 CPUs and 8GB of memory. We adjust the parameters based on our system’s resources.
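
One practical detail when using minikube: the image we built lives in the local Docker daemon, not inside the cluster, so the deployment below won't find it unless we load it in (or push it to a registry). With recent minikube versions, this does the trick:

# Copy the locally built image into the minikube cluster
minikube image load my-flask-app:latest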

Note: For a more detailed hands-on experience on deploying to Google Kubernetes Engine, check out Deploy a Flask Application to a Google Kubernetes Engine project available on Educative. In this project, you'll learn to use a Helm chart to deploy a Flask application on GKE. You'll set up a GCloud project, create a Kubernetes cluster, configure the Helm chart, and deploy it over Kubernetes step by step.

3. Deploying the web app#

With our cluster set up, it’s time to deploy the Dockerized app to Kubernetes.

First, we create a Kubernetes deployment file (e.g., deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-flask-app
  template:
    metadata:
      labels:
        app: my-flask-app
    spec:
      containers:
      - name: my-flask-app
        image: my-flask-app:latest
        ports:
        - containerPort: 5000

Explanation:#

  • kind: Deployment specifies a deployment resource, which manages a group of replicated pods to ensure that a desired number of them are running at any given time.

  • replicas defines the number of pod replicas to create and maintain. In this example, 3 replicas of the pod will be deployed.

  • selector specifies the label selector used to identify the pods managed by this deployment. In this case, it matches pods with the label app: my-flask-app.

  • template defines the pod configuration, including metadata and the specification for the containers. It includes the container image (my-flask-app:latest) and the port (5000) that the application will expose.

  • labels under the pod template's metadata define the labels applied to each pod. They are what the selector uses to identify which pods this deployment manages.
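
One caveat: because the image tag is :latest, Kubernetes defaults imagePullPolicy to Always and will try to pull the image from a registry. If you're using a locally built image (as in the minikube setup above), adding imagePullPolicy: IfNotPresent to the container spec avoids that pull, for example:

      containers:
      - name: my-flask-app
        image: my-flask-app:latest
        imagePullPolicy: IfNotPresent   # use the locally loaded image instead of pulling
        ports:
        - containerPort: 5000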

Next, we apply the deployment to the cluster:

kubectl apply -f deployment.yaml

This command tells Kubernetes to create and manage pods based on the specifications in the deployment file.
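
To check that the rollout succeeded, a few standard kubectl commands are enough:

# Wait for the rollout to complete
kubectl rollout status deployment/my-flask-app-deployment
# Confirm that 3 replicas are running
kubectl get pods -l app=my-flask-app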

4. Exposing the application to external traffic#

To expose the deployment to external traffic, we create a service (e.g., service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: my-flask-app-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: my-flask-app

Explanation:#

    • kind: Service defines a service resource to expose the deployment.

    • type: LoadBalancer creates an external load balancer to distribute traffic.

    • ports specifies the mapping between external and internal ports.

Then, we apply the service configuration:

kubectl apply -f service.yaml

Kubernetes will provision a load balancer and route external traffic to our web app running inside the cluster.
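
How you reach the service depends on where the cluster runs. On a cloud provider, the load balancer gets a public IP; on minikube, a helper command opens a local URL to the service:

# On a cloud cluster: watch for the EXTERNAL-IP column to be populated
kubectl get service my-flask-app-service
# On minikube: open a local tunnel/URL to the LoadBalancer service
minikube service my-flask-app-service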

Verifying the cluster#

To verify that our local cluster is up and running, we use the following command:

kubectl get nodes

We should see a list of nodes in our local cluster, confirming that the setup is successful.

Note: For a comprehensive guide to deploying a complete application, check out this project: Deploy a Full-Stack Web Application Over Kubernetes. In this project, you'll deploy your app to Kubernetes and leverage the full power of a cluster to handle scaling, load balancing, and fault tolerance, ensuring your web app is robust and ready for production.

Programming with Kubernetes

Kubernetes is a popular open-source container orchestration system that automates the deployment, scaling, and management of containerized applications. This course is designed to provide a comprehensive understanding of Kubernetes and its programming concepts. You’ll dive deep into advanced topics of Kubernetes. This course will cover topics such as Kubernetes architecture, frameworks, plugins, and interfaces. You’ll also learn the powerful extensibilities of Kubernetes and make full use of these built-in capabilities to build your customized Kubernetes. This course is ideal for developers, DevOps engineers, and system administrators who want to learn how to master Kubernetes. By the end of the course, you’ll have a solid understanding of Kubernetes, its programming concepts, and be able to deploy, scale, and manage containerized customizations on Kubernetes. You will also get hands-on experience extending Kubernetes to meet your requirements.

15hrs
Intermediate
27 Playgrounds
10 Quizzes

What to learn next?#

After learning the basics of deploying the web app to Kubernetes, the next step is to explore tools that simplify and automate the process even further. One such tool is Helm, the powerful package manager for Kubernetes that streamlines the deployment and management of applications by using "Helm charts"—preconfigured templates that define Kubernetes resources.

When using Helm, we can choose between single deployments and multiple deployments depending on our needs:

Single deployment with Helm#

A single deployment is ideal for smaller applications or when we need to deploy a single instance of our app to a Kubernetes cluster. This approach is straightforward, making it easier to manage and maintain. To understand how to set up a single deployment using Helm, check out Create Single Deployment Using Helm and K8s.
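
As a rough sketch of that workflow (the chart and release names here are placeholders): the default chart generated by helm create exposes image.repository and image.tag values, which we can point at our image; you'd still edit the generated values and templates so the service and container ports match the app.

# Scaffold a chart and install it as a single release, pointing it at our image
helm create my-flask-app
helm install my-flask-app ./my-flask-app \
  --set image.repository=my-flask-app --set image.tag=latest
# Later, roll out changes to the same release in place
helm upgrade my-flask-app ./my-flask-app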

Multiple deployments with Helm#

Multiple deployments, on the other hand, are suitable for complex applications that consist of various microservices or require different configurations across environments (e.g., development, staging, and production). Helm makes it easy to manage these deployments by defining multiple releases from a single Helm chart, ensuring that each service is deployed and scaled correctly. To learn more about managing multiple deployments with Helm, try this project: Create Multiple Deployments Using Helm.
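
A minimal sketch of that pattern, assuming per-environment values files such as values-dev.yaml and values-prod.yaml exist alongside the chart (hypothetical names for illustration):

# Install the same chart as separate releases with different configurations
helm install my-flask-app-dev ./my-flask-app -f values-dev.yaml
helm install my-flask-app-prod ./my-flask-app -f values-prod.yaml
# List everything Helm is managing in the cluster
helm list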

Continue reading about Kubernetes#


Frequently Asked Questions

Is Kubernetes easy to learn?

Kubernetes has a steep learning curve due to its complexity and vast feature set. However, with structured learning, practical experience, and familiarity with containerization concepts, Kubernetes becomes manageable to learn.

How do I deploy a web API in Kubernetes?

How many containers are inside a pod?

Why does Kubernetes use pods instead of containers?


Written By:
Syed Jawad Bukhari