
Creating Kubernetes Manifests

Explore how to create Kubernetes manifests, including deployments and services, to manage cloud-native applications on a Kubernetes cluster. Understand required manifest fields, resource specifications, and how to deploy and test workloads using kubectl. Gain practical skills for managing Kubernetes workloads with declarative configurations.


Managing Kubernetes workloads is one of the most heavily used applications of GitOps, so it's important to understand the basics of declaring some of the most commonly used Kubernetes resources such as deployments and services. Declarative configurations for Kubernetes resources are commonly referred to as manifests.

A manifest is a plain text configuration file that uses the JSON or YAML formats to describe resources that should run on the cluster. The configuration in these files must adhere to the specifications for different resources found in the Kubernetes API reference. In most cases, YAML is the preferred format for manifests, so we’ll use it throughout our lessons.

Required fields

Every manifest must provide four required fields when describing a resource we want to deploy on a Kubernetes cluster.

  1. The first field is apiVersion, which defines the version of the Kubernetes API being used to create the resource. Some resources are available in multiple API versions, so it's important to identify the correct API version for the object.

  2. The second field is kind, which indicates the type of resource to be created. Its value will be the name of a Kubernetes resource kind, such as Deployment, Service, or another valid kind.

  3. The third required field is metadata, which allows us to provide a name for the object in the Kubernetes cluster and optionally a namespace in which it should reside. A namespace is used to partition or isolate groups of resources within the cluster.

  4. The final and most complicated field is spec. This is where the configuration of the resource that will be deployed on the cluster is defined. The spec field will be different for each type of resource because each resource has a unique specification defined by the API.

Here's a shortened example of the required fields for a Kubernetes resource, which demonstrates the structure and example values for the fields:

YAML
apiVersion: v1
kind: Pod
metadata:
  name: pricing-app
spec:
  # Actual spec will vary per resource.
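
Since manifests can also be written in JSON, here's the same minimal Pod expressed in that format. The two formats are interchangeable; YAML is simply more concise:

JSON
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "pricing-app"
  },
  "spec": {}
}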

Note: Sometimes people are surprised to find out that namespace isn’t required for every Kubernetes manifest. However, it is helpful for organizing resources on the cluster.
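
A namespace is itself a Kubernetes resource, so it can be declared with a manifest of its own. As a minimal sketch, this manifest would create the production namespace used by the deployment later in this lesson:

YAML
apiVersion: v1
kind: Namespace
metadata:
  name: production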

Deployments

A deployment is used to provision and update applications that run as workloads on a Kubernetes cluster. Descriptions of the pods that should run on the Kubernetes cluster are placed in the deployment configuration. A pod is the smallest deployable unit in Kubernetes and contains one or more containers that run our applications. In most cases, the pods within a deployment run only a single container. A deployment and its enclosed pod specifications describe the container to run, the ports the container exposes, and any other configuration necessary to run the workload on the cluster.

Here's an example of the configuration for a deployment resource that deploys two pods that run the myregistry.azurecr.io/pricing-app:1.0 container image:

YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pricing-deployment
  namespace: production
  labels:
    app: pricing-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pricing-app
  template:
    metadata:
      labels:
        app: pricing-app
    spec:
      containers:
      - name: pricing-app
        image: myregistry.azurecr.io/pricing-app:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 2

Within this configuration, we can see the layout of a deployment specification beginning at the spec field. Inside this configuration, we specify how many instances of the pod should run on the cluster by defining the number of replicas. We also specify a selector, which determines the pods managed by this deployment. In this case, it's all the pods with the label app: pricing-app.

The template field is another type of specification, a PodTemplateSpec. This specification describes the pods that should be created for this deployment. Notice that the metadata for the specified pod matches the selector for the deployment, which indicates that this pod should be part of the specified deployment.

Inside the pod specification, we define the containers that belong to the pod. The container that runs inside the pod is determined by the image field, in which we have referenced a container image located within an Azure Container Registry. The configuration for the container also defines how the image should be pulled (imagePullPolicy), which ports it exposes (containerPort), and a livenessProbe that Kubernetes uses to observe the health of the running container.

Services

Once a deployment has been deployed to the cluster, and its pods are running, we’ll probably want them to be accessible to other clients that may run inside or outside the Kubernetes cluster. For example, if our pods contain a pricing service, a single-page application running in the browser (outside the cluster) may need to send requests to them to retrieve pricing information. Alternatively, perhaps another service running inside the cluster needs to send requests to the pricing service to get the price of a particular good or service.

This poses a problem because pods are ephemeral: the container orchestrator can create or destroy them at any point as it manages their lifecycle. Relying on the IP address assigned to a pod is therefore risky, because a pod running one minute can be destroyed the next.

To handle this situation, Kubernetes provides the service resource. A service refers to a logical set of pods and provides a means to access them that accounts for their dynamic lifecycles. This provides a form of service discovery for the pods, which are non-persistent resources that may be created or destroyed sporadically. Using a service, a client can reliably send traffic to the logical group of pods running on the cluster while avoiding these addressing issues.

Here's an example of a service to review:

YAML
apiVersion: v1
kind: Service
metadata:
  name: pricing-service
  namespace: production
spec:
  type: NodePort
  selector:
    app: pricing-app
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30000

This service is configured to route traffic to the pods in the deployment above. Notice the service and deployment are placed within the same namespace, which is necessary for the traffic to route between them. The service is of type NodePort, which is one of several types of services provided by Kubernetes. A NodePort service exposes a static port on each node in the cluster, where traffic can be sent from outside the cluster. The service uses the selector field to determine which pods to route the traffic to; it matches the label we placed on our pod within the specification for the deployment.


There are other types of services provided by Kubernetes as well, including ClusterIP, ExternalName, and LoadBalancer. When working with cloud service providers, we’ll most likely be using a LoadBalancer to expose services from outside of the cluster.
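
For instance, switching the type field is all it takes to request a cloud load balancer instead of a node port. Here's a sketch, assuming the cluster runs on a cloud platform that can provision one:

YAML
apiVersion: v1
kind: Service
metadata:
  name: pricing-service
  namespace: production
spec:
  type: LoadBalancer
  selector:
    app: pricing-app
  ports:
  - port: 80         # Port exposed by the cloud load balancer
    targetPort: 8080 # Port on the pod to send traffic to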

Note: For those interested in learning more about the different types of resources that Kubernetes provides, the API reference provides additional details about their specification.

Custom resource definitions

For our purposes in this course, these are all of the Kubernetes resources we need to understand. However, external tools like those used for GitOps can provide custom resource definitions that specify declarative configurations for workloads these tools will run on the cluster. In this way, Kubernetes is an amazingly extensible platform that can be adapted to a nearly unlimited range of scenarios. We'll be working with the custom resource definitions provided by GitOps tools in other lessons within the course.

Declarative configuration

Let's practice creating Kubernetes manifests by working through a small exercise that will run a sample Python application on a k3d Kubernetes cluster. The sample application has already been packaged into a Docker image, which can be pulled from Docker Hub as kmbeducative/python-sample:1.0.

We'll need to complete the following tasks to run the image on the cluster:

  • Create a deployment resource

  • Create a service resource

  • Apply the resource to the cluster

  • Test traffic to the service

Within the interactive widget below, we'll find the source code for the Python application and its Dockerfile inside the app directory. These files are provided only for your reference. The Kubernetes manifests that you will need to create can be found in the infrastructure directory.

Create a deployment resource

  1. Add the required fields into the deployment.yaml file for the Kubernetes resource.

YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-sample-deployment
  2. Next, we add the spec for the deployment and the pod it will run on the cluster. In this configuration, the container image for the pod is set to kmbeducative/python-sample:1.0. This container will run on the cluster and will receive traffic on port 5000, as defined by the containerPort field. This port was selected because it's the default port on which web applications that use Flask listen.

YAML
spec:
  selector:
    matchLabels:
      app: python-sample
  template:
    metadata:
      labels:
        app: python-sample
    spec:
      containers:
      - name: python-sample
        image: kmbeducative/python-sample:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
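
Notice that this spec omits the replicas field, so Kubernetes falls back to its default of a single replica. To run more instances of the pod, we could add the field explicitly:

YAML
spec:
  replicas: 3 # Defaults to 1 when omitted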

Create a service resource

  1. Add the required fields into the service.yaml file for the Kubernetes resource.

YAML
apiVersion: v1
kind: Service
metadata:
  name: python-sample-service
  2. Next, we add the spec for the service and configure the appropriate ports. In this configuration, we set the selector to match the python-sample label defined for the pods in our deployment. This establishes routing from the service to the pods in the deployment. In the port configuration for this service, the nodePort is configured to port 30000, which means the cluster's nodes will listen on this port for external traffic entering the cluster. Upon receiving traffic on the nodePort, a node will route it to the service via port 8080, which is specified as the port the service will listen to for incoming traffic using the port field.

    Once the traffic reaches the service, it will route the traffic to pods matching the established selector on port 5000, which is specified as the targetPort. For pods to receive this traffic, it’s important that the containerPort be configured to match the targetPort.

YAML
spec:
  type: NodePort
  selector:
    app: python-sample # Pods must have this label to receive service traffic
  ports:
  - port: 8080       # Port the service listens on for incoming traffic
    targetPort: 5000 # Port on the pod to send traffic to
    nodePort: 30000  # Port exposed on each cluster node
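
The path that external traffic takes through these three ports can be summarized as:

client -> node:30000 (nodePort) -> service:8080 (port) -> pod:5000 (targetPort)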

Applying the resource to the cluster

Once the declarative configurations have been defined for these resources, they must be applied to the cluster using kubectl, a command-line tool for interacting with a Kubernetes cluster. While this tool should not be used for GitOps workflows, it's valuable for experimentation, as in this exercise.

To continue the exercise, launch the interactive widget and execute the following commands:

  1. In the terminal, we'll inspect the number of pods currently running inside the cluster using the following command:

Shell
kubectl get pods

The output from the command will indicate that no pods are running on the cluster.

No resources found in default namespace.
Output from running the command to inspect pods on the cluster
  2. To apply the manifests to the cluster and launch the pods, navigate to the infrastructure directory and then use kubectl to apply the resources with the following commands:

Shell
cd usercode/infrastructure
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

The Kubernetes cluster will respond with the following output that indicates the manifests have been applied:

deployment.apps/python-sample-deployment created
service/python-sample-service created
Output from the command to apply the manifests
  3. Now we can repeat the command to inspect the number of pods currently running inside the cluster.

Shell
kubectl get pods

The output from the command will indicate there's a single pod running on the cluster.
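
The output will resemble the following, although the generated suffix in the pod's name will differ (shown here as a placeholder):

NAME                                      READY   STATUS    RESTARTS   AGE
python-sample-deployment-<generated-id>   1/1     Running   0          10s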

Test traffic to the service

Once the deployment and service have been applied to the cluster, traffic can be routed to the sample Python application using a tool like curl from the terminal of the interactive widget. However, we'll first need to obtain the IP address of a cluster node where we can route traffic from outside of the cluster. Once the IP address is obtained, a successful call can be made to the service with curl.

Execute the following commands to obtain the IP address and perform the example call with curl:

Shell
IP=$(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
curl http://$IP:30000/

Note: When troubleshooting failed calls, remember that it takes Kubernetes around 15 seconds to deploy the service. So calls made before the service is running will fail. If an issue occurs when sending the request with curl, wait a few seconds and then try again.

We’ll know that the lesson has been completed successfully when the curl command above returns the message, Hello, Educative Learner! This is version 1.0.

Try it yourself

from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello_world():
    return 'Hello, Educative Learner! This is version 1.0'


if __name__ == "__main__":
    # Listen on all interfaces so the container can receive traffic on port 5000.
    app.run(host="0.0.0.0", port=5000)
Declarative configuration

Within the deployment.yaml file, a deployment is described that runs a pod containing the kmbeducative/python-sample:1.0 container. This container runs the sample Python application.

Within the service.yaml file, a service is described that routes traffic entering the cluster to the pods running the sample Python application.