Why (and when) you should use Kubernetes

10 mins read
Oct 31, 2025
Content
Container orchestration
Great for multi-cloud adoption
Deploy and update applications at scale for faster time-to-market
Better management of your applications
Overview/additional benefits
When you should use Kubernetes
If your application uses a microservice architecture
If you’re suffering from slow development and deployment
Lower infrastructure costs
Rethinking networking: Gateway API is the future
Security isn’t optional anymore
Cost optimization and FinOps best practices
Handling stateful workloads with confidence
Observability is a must-have, not a nice-to-have
Smarter deployments: Progressive delivery
When you shouldn’t use Kubernetes
Simple, lightweight applications
Culture doesn’t reflect the changes ahead
What to learn next?
Continue reading about Kubernetes and DevOps

Kubernetes is a powerful orchestration tool that automates the deployment, scaling, and management of containers. Kubernetes (K8s) is the next big wave in cloud computing, and it’s easy to see why as businesses migrate their infrastructure and architecture to reflect a cloud-native, data-driven era.

Whether you’re a developer, data scientist, product manager, or something else, it won’t hurt to have a little Kubernetes knowledge in your back pocket. It’s one of the most sought-after skills at companies of all sizes, so if you’re looking to gain a new skill that will stay with you throughout your career, then learning Kubernetes is a great option.



Container orchestration#

Containers are great. They give you an easy way to package and deploy services, provide process isolation and immutability, use resources efficiently, and are lightweight to create.

But when it comes to actually running containers in production, you can end up with dozens, even thousands of containers over time. These containers need to be deployed, managed, connected, and updated; if you were to do this manually, you’d need an entire team dedicated to the task.

It’s not enough to run containers; you need to be able to:

  • Integrate and orchestrate these modular parts
  • Scale up and scale down based on the demand
  • Make them fault tolerant
  • Provide communication across a cluster

You might ask: aren’t containers supposed to do all that? The answer is that containers are only a low-level piece of the puzzle. The real benefits come from the tools that sit on top of containers — like Kubernetes. These tools are known today as container orchestrators, or schedulers.


Great for multi-cloud adoption#

With many of today’s businesses gearing towards microservice architecture, it’s no surprise that containers and the tools used to manage them have become so popular.

Microservice architecture makes it easy to split your application into smaller components with containers that can then be run on different cloud environments, giving you the option to choose the best host for your needs.

What’s great about Kubernetes is that it’s built to be used anywhere, so you can deploy to public, private, or hybrid clouds, enabling you to reach users where they are, with greater availability and security. You can see how Kubernetes helps you avoid the potential hazards of vendor lock-in.


Deploy and update applications at scale for faster time-to-market#

Kubernetes allows teams to keep pace with the requirements of modern software development. Without Kubernetes, large teams would have to manually script their own deployment workflows.

Containers, combined with an orchestration tool, provide management of machines and services for you — improving the reliability of your application while reducing the amount of time and resources spent on DevOps.

Kubernetes has some great features that allow you to deploy applications faster with scalability in mind:

  • Horizontal infrastructure scaling: New servers can be added or removed easily.
  • Auto-scaling: Automatically change the number of running containers, based on CPU utilization or other application-provided metrics.
  • Manual scaling: Manually scale the number of running containers through a command or the interface.
  • Replication controller: The replication controller makes sure your cluster runs the desired number of pod replicas. If there are too many pods, it terminates the extras; if there are too few, it starts more.
  • Health checks and self-healing: Kubernetes checks the health of nodes and containers, helping ensure your application doesn’t run into failures. It also offers self-healing and auto-replacement, so you don’t need to worry if a container or pod fails.
  • Traffic routing and load balancing: Traffic routing sends requests to the appropriate containers. Kubernetes also comes with built-in load balancers so you can balance resources in order to respond to outages or periods of high traffic.
  • Automated rollouts and rollbacks: Kubernetes handles rollouts for new versions or updates without downtime while monitoring the containers’ health. In case the rollout doesn’t go well, it automatically rolls back.
  • Canary deployments: Canary deployments enable you to test a new version in production, in parallel with the previous version.
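
Several of the features above are configured declaratively. As a minimal sketch of auto-scaling (the resource and Deployment names here are illustrative), a HorizontalPodAutoscaler can scale a Deployment between 2 and 10 replicas based on average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # the Deployment to scale (assumed to exist)
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target average CPU across pods
```

Manual scaling from the list above is a one-liner against the same Deployment: `kubectl scale deployment web --replicas=5`.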

“Before Kubernetes, our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Today, a new microservice takes less than five days to deploy. And we’re working on getting it to an hour.”

— Box


Better management of your applications#

Containers allow applications to be broken down into smaller parts which can then be managed through an orchestration tool like Kubernetes. This makes it easy to manage codebases and test specific inputs and outputs.

As mentioned earlier, Kubernetes has built-in features like self-healing and automated rollouts/rollbacks, effectively managing the containers for you.

To go even further, Kubernetes allows for declarative expressions of the desired state as opposed to an execution of a deployment script, meaning that a scheduler can monitor a cluster and perform actions whenever the actual state does not match the desired. You can think of schedulers as operators who are continually monitoring the system and fixing discrepancies between the desired and actual state.
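
The declarative model is easiest to see in a manifest. A minimal Deployment sketch (names and image are illustrative) states a desired state of three replicas, which the scheduler then continuously reconciles:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3               # desired state: three running pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # illustrative image
          ports:
            - containerPort: 80
```

If a pod dies, the actual state (two pods) no longer matches the desired state (three), so the controller starts a replacement — no deployment script needs to be re-run.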


Overview/additional benefits#

  • You can use it to deploy your services, to roll out new releases without downtime, and to scale (or de-scale) those services.
  • It is portable.
  • It can run on a public or private cloud.
  • It can run on-premise or in a hybrid environment.
  • You can move a Kubernetes cluster from one hosting vendor to another without changing (almost) any of the deployment and management processes.
  • Kubernetes can be easily extended to serve nearly any needs. You can choose which modules you’ll use, and you can develop additional features yourself and plug them in.
  • Kubernetes will decide where to run something and how to maintain the state you specify.
  • Kubernetes can place replicas of service on the most appropriate server, restart them when needed, replicate them, and scale them.
  • Self-healing is a feature included in its design from the start.
  • Zero-downtime deployments, fault tolerance, high availability, scaling, scheduling, and self-healing add significant value in Kubernetes.
  • You can use it to mount volumes for stateful applications.
  • It allows you to store confidential information as secrets.
  • You can use it to validate the health of your services.
  • It can load balance requests and monitor resources.
  • It provides service discovery and easy access to logs.
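
Two of the benefits above — health validation and secrets — show up as just a few lines of pod spec. A hedged sketch (names, ports, and paths are illustrative, and the referenced Secret is assumed to exist):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: example/api:1.0        # illustrative image
      livenessProbe:                # restart the container if this check fails
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:               # withhold traffic until this check passes
        httpGet:
          path: /ready
          port: 8080
      env:
        - name: DB_PASSWORD         # confidential value injected from a Secret
          valueFrom:
            secretKeyRef:
              name: db-credentials  # assumed to exist in the namespace
              key: password
```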


When you should use Kubernetes#


If your application uses a microservice architecture#

If you have transitioned, or are looking to transition, to a microservice architecture, then Kubernetes will suit you well; it’s likely you’re already using software like Docker to containerize your application.


If you’re suffering from slow development and deployment#

If you’re unable to meet customer demands due to slow development time, then Kubernetes might help. Rather than a team of developers spending their time wrapping their heads around the development and deployment lifecycle, Kubernetes (along with Docker) can effectively manage it for you so the team can spend their time on more meaningful work that gets products out the door.

“Our internal teams have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify.”

— Spotify


Lower infrastructure costs#

Kubernetes uses an efficient resource management model at the container, pod, and cluster level, helping you lower cloud infrastructure costs by ensuring your clusters always have available resources for running applications.


Rethinking networking: Gateway API is the future#

When the original Kubernetes networking model was created, Ingress was the standard way to manage external traffic into your cluster. That’s still widely used today, but a newer and more powerful option called the Gateway API is quickly becoming the default.

Gateway API gives you more flexibility, more granular control over traffic routing, and a more expressive configuration syntax. It also makes it easier for platform teams and application teams to work independently — a big win if you’re running multiple services owned by different teams.

If you’re starting a new project in 2025, it’s worth looking into Gateway API instead of relying only on Ingress.
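
A sketch of the split, assuming a `GatewayClass` named `example-gc` supplied by your cluster’s gateway controller and a backend Service named `api-service` (both names are illustrative). The platform team typically owns the Gateway; application teams own their HTTPRoutes:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: example-gc   # assumed; provided by your gateway controller
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
    - name: web-gateway          # attach this route to the Gateway above
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api-service      # illustrative backend Service
          port: 8080
```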

Security isn’t optional anymore#

Kubernetes has matured into a critical part of many production environments, which means security is now a first-class concern — not an afterthought.

A few best practices have become industry standards since Kubernetes’ early days:

  • Use Pod Security Admission (PSA) to enforce baseline or restricted security profiles — it replaced the now-removed PodSecurityPolicy.

  • Implement NetworkPolicies to limit which services and namespaces can talk to each other.

  • Consider sandboxed runtimes like gVisor or Kata Containers for stronger isolation in multi-tenant environments.

If your workloads handle sensitive data or run in regulated industries, these features are no longer optional — they’re table stakes.
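
The first two practices are each a few lines of YAML. In this sketch (the namespace name is illustrative), PSA is enabled by labeling a namespace, and a default-deny NetworkPolicy blocks all ingress until other policies explicitly allow it:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments                                    # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted  # reject non-compliant pods
    pod-security.kubernetes.io/warn: restricted     # also warn on dry runs
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}        # select every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress is denied
```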

Cost optimization and FinOps best practices#

It’s true that Kubernetes can help you reduce infrastructure costs — but only if you manage it carefully. Many organizations discover that costs actually go up after migrating to Kubernetes because of inefficient resource usage.

Here’s how teams are optimizing costs today:

  • Use Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) to match resource usage to demand.

  • Consider tools like KEDA for event-driven scaling and Karpenter (on AWS) to optimize node provisioning.

  • Set resource requests and limits wisely to avoid overprovisioning.

  • Implement resource quotas and monitoring tools to track and control spending.

Kubernetes gives you the levers — but you still need to pull them strategically.
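
Requests, limits, and quotas from the list above can be sketched as follows (all names and values are illustrative; tune them to observed usage):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a          # illustrative namespace
spec:
  hard:
    requests.cpu: "10"       # total CPU the namespace may request
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: worker
  namespace: team-a
spec:
  containers:
    - name: worker
      image: example/worker:1.0   # illustrative image
      resources:
        requests:                 # what the scheduler reserves for the pod
          cpu: 250m
          memory: 256Mi
        limits:                   # hard cap; exceeding memory is OOM-killed
          cpu: 500m
          memory: 512Mi
```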

Handling stateful workloads with confidence#

In its early days, Kubernetes was primarily about stateless microservices. Today, it’s increasingly common to run databases, message brokers, and other stateful services on clusters too.

Key tools and patterns include:

  • StatefulSets for managing stateful Pods with stable network identities and storage.

  • CSI (Container Storage Interface) drivers for reliable persistent storage.

  • Operators for automating the lifecycle of complex stateful applications like PostgreSQL, Kafka, or Redis.

Running data services in Kubernetes isn’t trivial — but it’s now mainstream, and knowing how to do it safely is part of being production-ready.
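
A StatefulSet sketch showing the stable identity and per-pod storage mentioned above (image, storage size, and names are illustrative, and the headless Service is assumed to exist):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless       # headless Service giving pods stable DNS names
  replicas: 3                    # pods are named db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16     # illustrative image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # each pod gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```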

Observability is a must-have, not a nice-to-have#

Modern Kubernetes environments are too complex to run without strong observability. That means metrics, logs, and traces — and ideally, all three correlated together.

The current best practice stack looks like this:

  • Prometheus for metrics and alerting.

  • OpenTelemetry for tracing and instrumentation.

  • Grafana or similar tools for visualization and dashboards.

Without these, troubleshooting will be painful, scaling decisions will be blind, and you’ll miss the full value of Kubernetes.
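
As one concrete piece of that stack, Prometheus can discover pods to scrape through the Kubernetes API. A sketch of a scrape job (the `prometheus.io/scrape` annotation shown is a common convention, not a built-in; many teams use the Prometheus Operator’s ServiceMonitor instead):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                # discover every pod via the Kubernetes API
    relabel_configs:
      # Only scrape pods that opt in with the prometheus.io/scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```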

Smarter deployments: Progressive delivery#

Standard rolling updates are fine, but teams running large-scale production systems increasingly rely on progressive delivery techniques to reduce risk. This includes:

  • Canary deployments to test new versions with a small percentage of users first.

  • Blue-green deployments to enable instant rollback.

  • Tools like Argo Rollouts or Flagger to automate and manage these rollout strategies.

Progressive delivery helps you ship faster and more confidently — one of the key promises of cloud-native infrastructure.
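
With Argo Rollouts, for example, a canary strategy is declared on a Rollout resource, a drop-in replacement for a Deployment. A sketch, assuming the Argo Rollouts controller is installed and the image tag is illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:2.0   # illustrative new version
  strategy:
    canary:
      steps:
        - setWeight: 20            # shift 20% of traffic to the new version
        - pause: {duration: 10m}   # watch metrics before continuing
        - setWeight: 50
        - pause: {duration: 10m}   # then promote to 100%
```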

When you shouldn’t use Kubernetes#

Simple, lightweight applications#

If your application uses a monolithic architecture, it may be tough to see the real benefits of containers and a tool to orchestrate them.

That’s because the very nature of a monolithic architecture is to have every piece of the application intertwined — from IO to data processing to rendering — whereas containers are used to separate your application into individual components.


Culture doesn’t reflect the changes ahead#

Kubernetes notoriously has a steep learning curve, meaning you’ll spend a good amount of time educating teams and addressing the challenges of adopting a new solution. If you don’t have a team that’s willing to experiment and take risks, then it’s probably not the right choice for you.


What to learn next?#

Overall, Kubernetes boasts some pretty great features that can have a positive impact on your development and DevOps teams, and on the business as a whole.

If you’re looking to get started with Kubernetes, you can check out A Practical Guide to Kubernetes, written by Viktor Farcic, a Developer Advocate at CloudBees and a member of the Google Developer Experts program.

This course will help you get familiar with all the basics of Kubernetes through hands-on practice. You’ll start with the fundamentals of Kubernetes and what the main components of a cluster look like. You’ll then learn how to use those components to build, test, deploy, upgrade applications, and secure your deployments.


Continue reading about Kubernetes and DevOps#

Frequently Asked Questions

What is the key benefit of Kubernetes?

The key benefit of Kubernetes is its ability to automate the deployment, scaling, and management of containerized applications. Kubernetes provides a platform that abstracts the underlying infrastructure, enabling developers and operators to deploy services without being tied to specific hardware or cloud providers.

Why is Kubernetes better than Docker?

Docker and Kubernetes solve different problems — Docker builds and runs containers, while Kubernetes orchestrates them — so the comparison usually refers to Docker Swarm, Docker’s own orchestrator. Kubernetes is often considered better than Docker Swarm in terms of orchestration and scaling: it offers more extensive and granular control over workloads, a broader range of application support, and a larger, more active community. Its orchestration capabilities allow for efficient scaling, self-healing, load balancing, and rolling updates.


Written By:
Educative