
Exploring and Verifying the Output

Explore the process of verifying cluster upgrades in Kubernetes by examining rolling update outputs and node draining with kOps. Understand how to manage primary and worker nodes during updates and learn best practices for testing and validating upgrades in a separate test cluster to ensure stability and minimize risks during production upgrades.

Exploring the Sequence of Events

The rolling update has finally finished. Its output starts with the same information we got when we asked for a preview, so there is not much to comment on there.
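As a quick reminder, a rolling update like this is usually previewed first and then applied with the `--yes` flag. A minimal sketch, assuming the cluster name is exported in the `NAME` variable and the state store in `KOPS_STATE_STORE`:

Shell
# Preview the rolling update; no nodes are touched without --yes
kops rolling-update cluster --name $NAME --state $KOPS_STATE_STORE

# Apply the update; kOps drains and replaces the instances one at a time
kops rolling-update cluster --name $NAME --state $KOPS_STATE_STORE --yes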

Shell
I0225 23:03:03.993068 1 instancegroups.go:130] Draining the node: "ip-172-20-40-167...".
node "ip-172-20-40-167..." cordoned
node "ip-172-20-40-167..." cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: etcd-server-events-ip-172-20-40-167..., etcd-server-ip-172-20-40-167..., kube-apiserver-ip-172-20-40-167..., kube-controller-manager-ip-172-20-40-167..., kube-proxy-ip-172-20-40-167..., kube-scheduler-ip-172-20-40-167...
node "ip-172-20-40-167..." drained

Instead of destroying the node right away, kOps picks one of the primary nodes and drains it first, so the applications running on it can shut down gracefully. We can see that it drains the following (a manual equivalent with kubectl drain is sketched after the list):

  • etcd-server-events
  • etcd-server
  • kube-apiserver
  • kube-controller-manager
  • kube-proxy
  • kube-scheduler
All of those Pods were running on the node ip-172-20-40-167...
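The drain step kOps performs is conceptually the same as running kubectl drain against the node ourselves. A minimal sketch, with `<node-name>` standing in for the full node name from the output above:

Shell
# Cordon the node and evict its Pods gracefully;
# --ignore-daemonsets skips DaemonSet-managed Pods and
# --force allows removing Pods not managed by a controller
# (the ones listed in the WARNING above)
kubectl drain <node-name> --ignore-daemonsets --force

# Once maintenance is done, allow scheduling on the node again
kubectl uncordon <node-name>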
...