EKS Blueprints Will Cure Your Kubernetes Headache

Discover how to simplify Kubernetes with reusable, production-grade EKS Blueprints.
11 mins read
Jul 03, 2025

There's no doubt that Kubernetes is a powerful deployment platform: It offers container orchestration, declarative infrastructure, and flexible scaling.

But spinning up a secure, production-ready environment on AWS can often be a time-consuming (and downright painful) experience. VPCs, IAM roles, autoscaling groups, ingress controllers, and observability stacks pile up quickly.

Too often, teams reinvent the wheel, writing fragile Terraform from scratch or manually configuring YAML files to meet baseline requirements.

But what if you didn’t have to?

EKS Blueprints are AWS’s answer to this complexity. They’re reusable, modular infrastructure as code (IaC) frameworks that help you launch reliable, production-grade Kubernetes environments with minimal effort.

In this guide, we’ll discuss:

  • Why Kubernetes is still hard — even with managed services

  • What EKS Blueprints truly offer

  • The real-world problems EKS Blueprints solve

  • What’s included in a production-grade blueprint

  • How to get started with Terraform today

Let's begin!

The Kubernetes headache#

Kubernetes has revolutionized container orchestration, but managing it in the real world is still complex and error-prone. Teams often face a steep learning curve trying to configure every aspect — from networking to IAM roles, observability, security, auto scaling, and CI/CD pipelines.

Each piece typically requires custom scripts or YAML configurations, slowing teams down and introducing inconsistencies across environments. Even small misconfigurations can create security gaps or operational issues in production.

So, AWS introduced EKS Blueprints to relieve developers’ burdens. They offer preconfigured, modular, and extensible infrastructure patterns that work out of the box.

What are EKS Blueprints?#

Think of EKS Blueprints as your Kubernetes starter kit for AWS. They’re collections of reusable infrastructure code — built using Terraform or AWS CDK — that provision a complete EKS environment aligned with AWS best practices. With them, developers no longer need deep Kubernetes or AWS expertise to build a secure, scalable, and production-ready cluster.

A typical blueprint includes the following:

  • A secure and scalable EKS cluster

  • Monitoring and logging via tools like Prometheus, Grafana, and Fluent Bit

  • Kubernetes add-ons such as CoreDNS, Karpenter, and AWS Load Balancer Controller

  • Properly configured VPC networking, IAM roles, and ServiceAccounts

EKS Blueprints are designed to be modular — you can start small and scale up, swapping components as needed. They’re also opinionated, with many architectural decisions already made for you, reducing the need for trial and error.

Whether you're new to Kubernetes or managing multiple workloads, EKS Blueprints help enforce consistency, security, and efficiency.

Solving real-world challenges with EKS Blueprints#

Let's consider some common challenges Kubernetes teams face and how EKS Blueprints address them:

  • Bootstrapping clusters: Provisioning an EKS cluster isn’t just running eksctl and calling it a day. Behind the scenes, you’re expected to wire up a VPC, define subnet layouts, configure node groups, and enable auto scaling. Blueprints automate all of this, bundling infrastructure, IAM, and provisioning logic into reusable Terraform modules.

  • Observability: With EKS Blueprints, tools like Prometheus and Fluent Bit come pre-integrated, so dashboards, logs, and alerts are up and running before your first workload hits production.

  • Security: EKS Blueprints implement IAM best practices using roles for service accounts (IRSA), enforce encrypted traffic between components, and scaffold your cluster with pod-level permissions. The result: fewer mistakes, easier audits, and stronger compliance with frameworks like HIPAA or SOC 2.

  • Multi-tenancy: Even advanced scenarios like multi-tenancy are baked in. Need to run workloads for different teams or customers on the same cluster? Namespaces provide logical isolation, IAM roles are tightly scoped, and network policies can be easily applied. It’s multi-tenancy done right — without the overhead.

Using EKS Blueprints, you can avoid weeks of manually stitching these parts together and get a production-grade setup within hours.
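To make the IRSA idea from the security point above concrete, here is a minimal sketch of what the blueprint scaffolds behind the scenes: a Kubernetes ServiceAccount annotated with an IAM role ARN, so pods using it receive scoped AWS credentials instead of inheriting the node's permissions. The role name, account ID, and namespace below are hypothetical placeholders.

```yaml
# Hypothetical example: a ServiceAccount wired to an IAM role via IRSA.
# EKS Blueprints generates equivalents of this (plus the IAM role and
# OIDC trust policy) automatically.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader
  namespace: app-demo
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/app-reader-role
```

Pods that reference this ServiceAccount receive temporary credentials scoped to that one role, which is what makes audits and least-privilege policies tractable.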

5 layers of a typical EKS blueprint#

A production-grade blueprint isn’t simply a “create cluster” script — it’s a layered architecture designed for growth.

It begins with the infrastructure layer, which defines your VPCs, subnets, and NAT gateways. This layer is your foundation, enabling secure, scalable communication between internal and external systems.

Next comes the EKS cluster layer, which provisions the control plane and worker nodes. Whether using managed node groups or Karpenter for auto scaling, this part ensures your workloads run efficiently and reliably.

Then there’s the add-ons layer, where Helm charts or manifests are used to install ingress controllers, monitoring tools, and storage drivers. Prometheus, Grafana, and the AWS Load Balancer Controller are wired up automatically.

The security layer handles IAM Roles for ServiceAccounts, configures OpenID Connect (OIDC), and enforces Secrets encryption. These integrations aren’t optional—they’re essential, and EKS Blueprints make them seamless.

Finally, some EKS Blueprints include an application layer with sample workloads, GitOps pipelines, or CI/CD integrations. This lets you deploy and test immediately or plug in your real-world app pipeline from day one.

The framework is modular, so you can swap out components, customize configurations, and extend functionality as your needs grow.

Building a production-ready EKS cluster with EKS Blueprints#

In this hands-on tutorial, you’ll learn how to deploy an Amazon EKS cluster using AWS EKS Blueprints for Terraform. You’ll use a prebuilt blueprint pattern to quickly provision a highly available cluster with Fargate nodes, essential add-ons, and a new, isolated VPC.

Prerequisites#

Before you begin, ensure you have the following installed and configured on your local machine:

  • AWS CLI: Configured with credentials with sufficient permissions (you can use an account with AdministratorAccess permissions for this learning exercise).

  • Terraform: Version 1.0.0 or higher.

  • kubectl: Latest stable version.

  • Git: For cloning the EKS Blueprints repository.

  • Helm: For inspecting Kubernetes releases.

For this basic demonstration, Terraform will store its state file (terraform.tfstate) locally on your machine. In any real-world or production environment, using a remote backend (like an S3 bucket with DynamoDB for state locking) for durability, collaboration, and to prevent accidental state loss is crucial. This tutorial skips that setup to keep things simple for a first run, but remember this best practice for actual projects.
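For reference, a minimal remote-backend configuration might look like the following sketch; the bucket name, key, and table name are placeholders you would replace with resources you have already created:

```hcl
# Hypothetical remote backend: state stored in S3, locking via DynamoDB.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"        # pre-created S3 bucket
    key            = "eks-blueprints/terraform.tfstate" # path within the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                  # table with a LockID hash key
    encrypt        = true
  }
}
```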

Step 1: Set up your project#

The EKS Blueprints for Terraform project provides a variety of prebuilt “patterns” that serve as complete, working examples for different EKS cluster configurations. Choosing the right pattern is the first step toward tailoring your deployment to your needs.

Clone the EKS Blueprints for Terraform repository#

To get started, clone the EKS Blueprints repository to your local machine.

Shell
git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git

This provides access to all the predefined patterns locally.

Explore available patterns#

Navigate to the patterns directory within the cloned repository. This is where all the example blueprints reside.

Shell
cd terraform-aws-eks-blueprints/patterns
ls -F # The -F flag adds a slash to directory names

You will see a list of directories, each representing a distinct EKS Blueprint pattern. Some common ones include:

  • fargate-serverless/: Deploys an EKS cluster where all workloads run on Fargate, requiring no EC2 worker nodes.

  • private-public-ingress/: Demonstrates patterns where public and private ingress coexist, useful for internal and external service separation.

  • gitops/: Demonstrates integration with Argo CD for GitOps-based deployments.

  • stateful/: Focuses on deploying stateful workloads, such as databases and storage-backed applications, on EKS.

  • multi-tenancy-with-teams/: Illustrates how to enable multi-team access using isolated namespaces, RBAC boundaries, and team-specific configurations.

Choose the best-fit pattern for your scenario#

Recall our scenario requirements:

  • Core EKS cluster (v1.30)

  • Managed Fargate nodes

  • New isolated VPC

  • Networking (ALB), security (IRSA), observability (Logs)

The fargate-serverless/ pattern is the most suitable starting point given these requirements. It provisions a fully serverless cluster that runs all workloads on Fargate profiles, with no EC2 worker nodes to manage.

cd fargate-serverless
Navigate to the selected blueprint directory

You are now in the directory that contains the Terraform files for this specific EKS Blueprint pattern.

Step 2: Review and understand the blueprint’s configuration#

Before deploying, it’s essential to understand what resources and configurations the chosen blueprint will create. Open the main.tf file in your current directory (fargate-serverless/main.tf), using your favorite text editor.

You’ll see a structure similar to this:

HCL
# main.tf excerpt - Fargate-Only Serverless Pattern using AWS EKS Blueprints
provider "aws" {
  region = local.region
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.11"

  cluster_name                   = local.name
  cluster_version                = "1.30"
  cluster_endpoint_public_access = true

  enable_cluster_creator_admin_permissions = true

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  fargate_profiles = {
    app_wildcard = {
      selectors = [
        { namespace = "app-*" }
      ]
    }
    kube_system = {
      name = "kube-system"
      selectors = [
        { namespace = "kube-system" }
      ]
    }
  }

  fargate_profile_defaults = {
    iam_role_additional_policies = {
      additional = module.eks_blueprints_addons.fargate_fluentbit.iam_policy[0].arn
    }
  }

  tags = local.tags
}

module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.16"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  create_delay_dependencies = [for prof in module.eks.fargate_profiles : prof.fargate_profile_arn]

  eks_addons = {
    coredns = {
      configuration_values = jsonencode({
        computeType = "Fargate"
        resources = {
          limits = {
            cpu    = "0.25"
            memory = "256M"
          }
          requests = {
            cpu    = "0.25"
            memory = "256M"
          }
        }
      })
    }
    vpc-cni    = {}
    kube-proxy = {}
  }

  enable_fargate_fluentbit = true
  fargate_fluentbit = {
    flb_log_cw = true
  }

  enable_aws_load_balancer_controller = true
  aws_load_balancer_controller = {
    set = [
      {
        name  = "vpcId"
        value = module.vpc.vpc_id
      },
      {
        name  = "podDisruptionBudget.maxUnavailable"
        value = 1
      }
    ]
  }

  tags = local.tags
}

In this file, the following are the key areas to observe:

  • Cluster and VPC configuration: The core cluster is deployed using the popular terraform-aws-modules/eks/aws module. A new isolated VPC is provisioned via the terraform-aws-modules/vpc/aws module, featuring private and public subnets across availability zones. This setup ensures a fully isolated and well-structured network infrastructure.

  • AWS Fargate profiles: Unlike hybrid configurations that mix managed node groups with Fargate, this pattern relies entirely on AWS Fargate. Two Fargate profiles are created:

    • One is scoped to kube-system, for essential Kubernetes components.

    • Another with a wildcard selector for all namespaces starting with app-.

This fully serverless setup eliminates the need for EC2 worker nodes while providing the flexibility to run multiple application environments.

  • The power of add-ons: As with other EKS Blueprints, operational complexity is dramatically reduced by enabling pre-integrated add-ons with simple configuration.

    • For networking, enable_aws_load_balancer_controller = true provisions all necessary resources for exposing services using ALBs. This fulfills our networking requirements.

    • For observability, enabling Fluent Bit (enable_fargate_fluentbit = true) allows log forwarding from Fargate pods to CloudWatch Logs. Metrics server is not explicitly enabled, but Fluent Bit provides basic logging capabilities.

    • Essential networking components, such as vpc-cni, kube-proxy, and coredns, are configured with sensible defaults. The coredns block, in particular, is optimized for running within Fargate’s resource constraints.

EKS Blueprints also configures IAM roles for service accounts (IRSA) and other dependencies, like OIDC provider setup. This ensures that your workloads follow AWS security best practices out of the box with minimal manual effort.
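To see the app-* wildcard profile in action once the cluster is up, you could create a namespace whose name matches the selector; the name below is a hypothetical example:

```yaml
# Hypothetical namespace: its name matches the app-* Fargate profile
# selector, so any pods created here are scheduled onto Fargate
# automatically, with no node management required.
apiVersion: v1
kind: Namespace
metadata:
  name: app-demo
```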

No changes are required for this basic tutorial, but feel free to adjust things like cluster_name if you wish.
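If you do want to customize, the pattern reads its name and region from a locals block in the same file. A sketch of the kind of values you might adjust follows; the exact variable names can differ between pattern versions, so check your copy of main.tf:

```hcl
# Sketch of the pattern's locals block; adjust values to taste.
locals {
  name   = "my-ecommerce-eks-blueprint" # becomes the EKS cluster name
  region = "us-east-1"                  # AWS region for all resources

  tags = {
    Blueprint = local.name
  }
}
```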

Step 3: Initialize Terraform#

Once you’ve reviewed the blueprint’s configuration and are in the pattern directory, initialize Terraform to download the necessary providers and modules.

terraform init
Initialize Terraform

This command will do the following:

  • Download the necessary AWS, Kubernetes, and Helm provider plug-ins.

  • Download the EKS Blueprints module.

  • Create a .terraform directory and a .terraform.lock.hcl dependency lock file in your current directory. (The terraform.tfstate file itself is created later, when you run terraform apply.)

Step 4: Plan the deployment#

Before making any changes to your AWS account, always generate and review a Terraform execution plan. This will show you exactly what resources Terraform intends to create, modify, or destroy.

terraform plan
Plan the deployment

Carefully review the output. You will see many resources for the VPC, subnets, EKS cluster, IAM roles, security groups, Fargate profiles, and all the enabled Kubernetes add-ons.

Step 5: Apply the configuration#

If the terraform plan output looks correct and you’re ready to proceed, apply the configuration to provision the resources in your AWS account.

terraform apply
Apply the configuration

You will be prompted to confirm by typing yes.

This step will take significant time (typically 15-40 minutes) as AWS provisions the entire EKS cluster, VPC, and Fargate profiles, and deploys all the Kubernetes add-ons. Do not close your terminal or interrupt the process until it is complete.

Step 6: Connect to your EKS cluster#

Once terraform apply completes, Terraform will output useful information—including the command to configure kubectl.

Update your kubeconfig file#

Look for the kubeconfig_command output in your terminal and execute it:

Shell
# Example output: (Copy the exact command from your terminal's output)
aws eks update-kubeconfig --region us-east-1 --name my-ecommerce-eks-blueprint

This command integrates your new EKS cluster’s credentials into your local kubectl configuration.

Verify your worker nodes#

Now that your EKS cluster’s credentials are in your local kubectl configuration, execute this command to verify the cluster’s nodes:

Shell
kubectl get nodes

Because this pattern is Fargate-only, there are no EC2 worker nodes. Instead, you should see Fargate-backed nodes (named fargate-ip-...) in a Ready state, one for each running pod, such as CoreDNS.

Verify the running add-on pods#

Run this command to see the workloads running on your cluster.

Shell
kubectl get pods -A

You will see numerous pods running across the cluster’s namespaces (e.g., CoreDNS and the AWS Load Balancer Controller in kube-system). These are the essential add-ons that EKS Blueprints automatically deploys and configures for you, fulfilling your observability and networking requirements.

Thanks to EKS Blueprints for Terraform, you have successfully deployed a robust, production-ready EKS cluster with its own VPC, Fargate profiles, and crucial add-ons, all with minimal manual configuration.

Step 7: Clean up#

You must destroy your EKS cluster and associated resources once you are finished experimenting to avoid incurring ongoing AWS charges. 

Run this command to delete all the resources you created using Terraform.

terraform destroy
Delete the provisioned infrastructure

You will be prompted to confirm by typing yes.

The terraform destroy process also takes considerable time (often 10-20 minutes) as it systematically removes all the provisioned AWS resources.

Now you know the basics of deploying a secure, scalable EKS cluster using blueprints—without writing everything from scratch. From here on, you can start customizing your environment, deploying your workloads, and exploring GitOps workflows to manage applications like a pro.

When to use EKS Blueprints#

EKS Blueprints are fantastic for most teams, but they’re not a cure-all. The table below can help you decide when to opt for EKS Blueprints and when to stick with a conventional EKS deployment.

| Use EKS Blueprints | Skip EKS Blueprints |
| --- | --- |
| You want a fast, opinionated path to production. | Your infrastructure is entirely bespoke. |
| You’re building on AWS using native services. | You’re targeting a multi-cloud or hybrid setup. |
| Your team values convention over configuration. | You require deep customization from day one. |
| You need to enforce security and GitOps best practices. | You already have a mature Kubernetes deployment pipeline. |
| You want to onboard new teams or projects quickly. | You’re experimenting and want minimal abstractions. |

Wrapping up#

Congrats! You’ve created a production-grade Kubernetes environment with Terraform, powered by AWS EKS Blueprints. But provisioning the infrastructure is only the beginning. Now you can shift your attention from cluster setup to platform maturity.

Start by operationalizing your workloads through GitOps. The built-in Argo CD integration in EKS Blueprints gives you a head start: use it to define your services declaratively and automate your deployment pipeline. From there, begin layering in your workloads, isolating teams using namespaces and IRSA. Then tailor observability and security tooling as your environment grows. You might integrate secrets management, policy enforcement, or cost-optimization solutions, all modular and on your terms.
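As a starting point for that GitOps workflow, an Argo CD Application manifest can point the cluster at a Git repository and keep it in sync. The repository URL, paths, and names below are placeholders for illustration:

```yaml
# Hypothetical Argo CD Application: continuously syncs manifests
# from a Git repository into the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/my-app-manifests.git
    targetRevision: main
    path: deploy/
  destination:
    server: https://kubernetes.default.svc
    namespace: app-demo
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual drift back to the Git state
```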

Written By:
Fahim ul Haq