1.2 Cluster Upgrade
This lab guides you through upgrading a Kubernetes cluster from v1.32.0 to the latest available 1.32.x patch version, with minimal or even zero downtime.
Prerequisites
Before starting the upgrade, make sure:
- You have a working Kubernetes cluster running v1.32.0.
- You can connect to all nodes via SSH.
- kubectl and kubeadm are installed and working on the control plane.
- Your application manifests are saved in Git or YAML files, so you can recover them if needed.
It’s also a good idea to take an etcd backup before starting the upgrade, so you can restore the cluster state if something goes wrong.
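For example, with etcdctl installed on the control plane, a snapshot can be taken like this (a sketch assuming kubeadm’s default etcd certificate paths; the backup destination is just an example, so adjust both to your cluster):
sudo ETCDCTL_API=3 etcdctl snapshot save /opt/backup/etcd-pre-upgrade.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key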
1. Verify current state
Let’s check the current version of the cluster and its components.
kubectl get nodes
kubectl version
kubeadm version
You should confirm that all nodes are running version 1.32.0.
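If you have several nodes, a jsonpath query prints a compact name/version list (an optional convenience, not required for the lab):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'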
Also check that system pods such as the API server and controller-manager are running:
kubectl get pods -n kube-system -o wide
This helps confirm which components are running and that they are healthy before starting the upgrade.
2. Prepare for upgrade
First, we make sure the system is ready to install new versions of Kubernetes components.
sudo apt update && sudo apt install -y apt-transport-https curl
sudo apt-mark unhold kubelet kubeadm kubectl
The unhold command releases the version lock on these packages. While held (locked), apt will not upgrade them.
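You can verify which packages are currently held with:
apt-mark showhold
After unholding, kubelet, kubeadm, and kubectl should no longer appear in the output.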
Next, check what versions are available:
sudo apt update
apt-cache policy kubeadm
Pick the latest 1.32.x patch version shown. Note that the full package version includes a Debian revision suffix. For example, if 1.32.7-1.1 is listed:
sudo apt install -y kubeadm=1.32.7-1.1
kubeadm version
3. Drain control plane node
Before upgrading the control plane, we need to safely remove all workloads from it.
kubectl drain <control-plane-node> --ignore-daemonsets --delete-emptydir-data
- This command cordons the node (no new pods will be scheduled on it).
- It also evicts the pods currently running there (DaemonSet pods are skipped, and emptyDir data is deleted, as the flags indicate).
Don’t worry: workloads managed by controllers such as Deployments will be rescheduled onto the other nodes.
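You can confirm the node is cordoned; its STATUS column should include SchedulingDisabled:
kubectl get node <control-plane-node>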
4. Upgrade kubeadm and apply upgrade
Now use kubeadm to plan and apply the upgrade.
sudo kubeadm upgrade plan
This runs preflight checks and shows the versions you can upgrade to, component by component.
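If you want to preview the changes without modifying anything, kubeadm also supports a dry run:
sudo kubeadm upgrade apply v1.32.7 --dry-run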
When ready, apply the upgrade:
sudo kubeadm upgrade apply v1.32.7
This step upgrades the control plane components (API server, controller manager, scheduler, etc.). It takes a few minutes.
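To confirm the control plane is on the new version, you can check the API server’s image tag (this assumes a kubeadm cluster, where static pods carry the component=kube-apiserver label):
kubectl get pods -n kube-system -l component=kube-apiserver -o jsonpath='{.items[*].spec.containers[*].image}'
The image tag should now read v1.32.7.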
5. Upgrade kubelet and kubectl
Now upgrade the kubelet and kubectl tools on the same node:
sudo apt install -y kubelet=1.32.7-1.1 kubectl=1.32.7-1.1
Reload systemd and restart the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
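To confirm the kubelet came back up on the new version and is healthy:
kubelet --version
sudo systemctl is-active kubelet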
Hold the versions again to prevent auto-upgrades:
sudo apt-mark hold kubelet kubeadm kubectl
Finally, bring the control plane node back into the cluster:
kubectl uncordon <control-plane-node>
This allows new pods to be scheduled on the node again.
HINT
Verify the node is Ready and no longer marked SchedulingDisabled:
kubectl get nodes
6. Upgrade worker nodes
Now repeat the upgrade on each worker node. These steps are very similar:
Step 1 (from the control plane): Drain the worker node:
# On the control plane
kubectl drain <worker-node> --ignore-daemonsets --delete-emptydir-data
Step 2 (on the worker node):
Unhold packages so they can be upgraded:
# On the node:
sudo apt-mark unhold kubelet kubeadm kubectl
Install new version of kubeadm:
sudo apt install -y kubeadm=1.32.7-1.1
Run the node upgrade, which updates the local kubelet configuration:
sudo kubeadm upgrade node
Then update the kubelet and kubectl:
sudo apt install -y kubelet=1.32.7-1.1 kubectl=1.32.7-1.1
Restart the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Hold the versions again to prevent auto-upgrades:
sudo apt-mark hold kubelet kubeadm kubectl
Finally, return the node to the cluster:
kubectl uncordon <worker-node>
Repeat these steps for each remaining worker node, draining and upgrading one node at a time so the cluster keeps serving traffic throughout.
7. Post-upgrade verification
Let’s make sure everything is running smoothly.
Check the node versions:
kubectl get nodes
Check if pods are running normally:
kubectl get pods -A
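To spot problems at a glance, you can filter out Running pods (pods from completed Jobs will also appear here, which is expected):
kubectl get pods -A --field-selector=status.phase!=Running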
Check control plane health. Note that kubectl get cs (componentstatuses) is deprecated, so use the API server’s readiness endpoint instead:
kubectl get --raw='/readyz?verbose'
This gives a detailed view of internal readiness checks.
End of Lab
You’ve successfully upgraded your cluster! 🎉 Take a moment to verify your apps are still running as expected, and notify your team that the cluster is now running the latest version.