Deploying Applications

Learn how to deploy an application to our cluster hosted on AWS.

Remoteness

Deploying resources to a Kubernetes cluster running in AWS is no different from deploying them anywhere else, including minikube. That’s one of the big advantages of Kubernetes and other container schedulers: they give us a layer of abstraction between hosting providers and our applications. As a result, we can deploy (almost) any YAML definition to any Kubernetes cluster, no matter where it is.
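As a quick illustration, the commands below apply the same definition to two different clusters simply by switching kubectl contexts. The context names (minikube and my-aws-cluster) and the file name are placeholders rather than values from this course; substitute whatever your own setup uses.

    # List the contexts kubectl knows about (e.g., a local minikube and an AWS-hosted cluster).
    kubectl config get-contexts

    # Apply a definition to the local minikube cluster...
    kubectl --context minikube apply -f go-demo-2.yml

    # ...and apply the very same definition to the cluster running in AWS, without touching the YAML.
    kubectl --context my-aws-cluster apply -f go-demo-2.yml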

This gives us a very high level of freedom and allows us to avoid vendor lock-in. Sure, we cannot effortlessly switch from one scheduler to another, meaning we are “locked” into the scheduler we chose. Still, it’s better to depend on an open-source project than on a commercial hosting vendor like AWS, GCE, or Azure.

Note: We need to spend time setting up a Kubernetes cluster, and the steps will differ from one hosting provider to another. However, once a cluster is up and running, we can create any Kubernetes resource and (almost) entirely ignore what’s underneath it. The result is the same whether our cluster is in AWS, GCE, Azure, on-prem, or anywhere else.

Deploying resources

Let’s get back to the task at hand and create the go-demo-2 resources.
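The exact commands depend on where the definitions from the previous lessons are stored, so treat the snippet below as a sketch: it assumes a local file named go-demo-2.yml containing the Deployments and Services we defined earlier.

    # Create the go-demo-2 resources (file name assumed from earlier lessons).
    kubectl create -f go-demo-2.yml

    # Confirm that the resources were created and are becoming available.
    kubectl get -f go-demo-2.yml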
