Create an EKS Cluster and Deploy an Application

Takes 120 mins

Amazon Elastic Kubernetes Service (EKS) is a managed service that simplifies running Kubernetes on AWS. You don’t need to install and operate Kubernetes and its core components yourself; EKS handles these tasks and keeps up-to-date Kubernetes versions available to you. EKS manages the Kubernetes leader nodes and lets you control the follower nodes, so you can focus on deploying your application.

In this Cloud Lab, you’ll create a custom VPC with public and private subnets and a NAT gateway so that resources in the private subnets can reach the internet without being directly exposed to it. You’ll create an EKS cluster and a node group that provisions nodes (EC2 instances) at the size you define, install the AWS Load Balancer Controller, and create a load balancer using an Ingress resource. Lastly, you’ll deploy an application and access it using the DNS address of the load balancer.

After completing this Cloud Lab, you’ll have a good understanding of working with EKS clusters, deploying applications on follower nodes, and working with the AWS Load Balancer Controller using the eksctl and kubectl command-line tools.
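
To make this concrete, here is a minimal sketch of creating a cluster and a managed node group with eksctl and then checking the nodes with kubectl. The cluster name, region, instance type, and node count are placeholder values rather than the lab’s exact settings, and by default eksctl creates its own VPC unless you point it at an existing one (as this lab does with its custom VPC).

```
# Create a cluster with a managed node group (all names and sizes are illustrative)
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodegroup-name demo-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --managed

# Confirm the worker (follower) nodes have joined the cluster
kubectl get nodes
```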

The following is the high-level architecture diagram of the infrastructure you’ll create in this Cloud Lab:

Visualize the EKS architecture for deploying and managing containerized applications at scale

Why Amazon EKS matters for running Kubernetes on AWS

Kubernetes is the standard for running containerized applications at scale, but managing Kubernetes yourself can be operationally heavy. Amazon EKS reduces that burden by providing a managed Kubernetes control plane, so you can focus more on deploying and operating workloads than on maintaining cluster internals.

If you’re learning cloud-native deployment, EKS is valuable because it sits at the intersection of three practical skill sets:

  • Kubernetes fundamentals (Pods, Deployments, Services, Ingress).

  • AWS networking (VPCs, subnets, security groups, load balancers).

  • Operational basics (upgrades, scaling, observability, access control).

What “deploying an application to EKS” usually includes

Most EKS deployments follow a repeatable flow:

  • Create a cluster and worker capacity: You need a cluster plus compute to run your pods (often managed node groups). In real teams, this step also includes choosing networking, selecting cluster add-ons, and setting up access control.

  • Configure kubectl access and namespaces: Once the cluster exists, you configure access and organize workloads. Namespaces, labels, and resource requests/limits become important quickly as you grow beyond a single demo app (see the command sketch after this list).

  • Apply Kubernetes manifests (or Helm charts): You deploy workloads by applying manifests (Deployments, Services, ConfigMaps, Secrets) or using Helm charts. This is where container images, environment variables, and scaling settings come together.

  • Expose the app to users: Inside Kubernetes, a Service can expose a workload within the cluster. For external access, you typically use an Ingress with an Ingress controller or another load-balancing approach. On AWS, this often maps to AWS-managed load balancers, which is why understanding the “Kubernetes to AWS” integration layer is so useful.

  • Validate, observe, and iterate: After deployment, you validate connectivity, monitor logs, review events, and confirm that health checks, readiness probes, and scaling behavior work as expected (a validation sketch follows below).
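
As a minimal sketch of the access, namespace, and manifest steps above, assuming a cluster named demo-cluster in us-east-1 and an illustrative nginx container image (all of these names are placeholders, not the lab’s actual values):

```
# Point kubectl at the cluster
aws eks update-kubeconfig --name demo-cluster --region us-east-1

# Keep the workload in its own namespace
kubectl create namespace demo

# Apply a minimal Deployment and Service from an inline manifest
kubectl apply -n demo -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative image
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
EOF
```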

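For the validation step, a few read-only kubectl commands usually go a long way; these assume the hypothetical demo namespace and web Deployment from the sketch above:

```
# Check that the pods are running and ready
kubectl get pods -n demo

# Dig into rollout status, events, and logs if something is not ready
kubectl rollout status deployment/web -n demo
kubectl describe deployment web -n demo
kubectl logs deployment/web -n demo
kubectl get events -n demo --sort-by=.lastTimestamp
```
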
How to choose the right exposure method for EKS

One common confusion for beginners is “Service vs. Ingress.” A simple way to think about it:

  • Service is how Kubernetes exposes a set of pods (internally or externally, depending on type).

  • Ingress is a routing layer for HTTP/HTTPS that sits in front of Services (often used for multiple routes/domains).

As your deployments grow, an Ingress-based approach becomes more valuable (see the manifest sketch after this list) because it gives you:

  • Centralized routing rules

  • More consistent TLS handling

  • Better control over multiple services behind one entry point
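
As a rough illustration of the split, the sketch below reuses the hypothetical web Service and demo namespace from earlier and assumes the AWS Load Balancer Controller is installed; the Ingress asks that controller to provision an internet-facing load balancer and route HTTP traffic to the Service:

```
# An Ingress that routes all HTTP paths to the existing "web" Service
kubectl apply -n demo -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
EOF

# The load balancer's DNS name appears in the ADDRESS column once provisioned
kubectl get ingress web -n demo
```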

Where this skill shows up in real projects

Once you can confidently handle the basics of cluster setup, node groups, deployment, and exposure, you’re ready for the patterns teams actually use:

  • Blue/green or canary rollouts

  • GitOps-based delivery (Argo CD / Flux)

  • Autoscaling (HPA, cluster autoscaler, Karpenter), sketched briefly after this list

  • Observability (metrics, logs, traces)

  • Security hardening (IAM roles, network policies, secrets management)
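
As one small example of these patterns, a HorizontalPodAutoscaler can scale the hypothetical web Deployment from earlier based on CPU utilization; this sketch assumes the Kubernetes Metrics Server is installed in the cluster:

```
# Scale the Deployment between 2 and 6 replicas, targeting ~70% CPU utilization
kubectl apply -n demo -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
EOF
```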

EKS becomes much easier when you treat it as a repeatable life cycle rather than a one-off cluster setup.