Upgrading a Kubernetes cluster with a single control plane node may seem simple, but skipping critical steps like backups or version checks can lead to downtime or data loss. In this step-by-step guide, you’ll learn how to properly back up etcd, upgrade cluster components using kubeadm, and verify your cluster’s health post-upgrade. Whether you’re running Kubernetes on-prem or in the cloud, this guide will help you perform a smooth, secure upgrade with confidence.
Version Skew Policy (Quick Ref)
| Component | Version Rule |
|---|---|
| kubeadm | Same or +1 minor version of control plane |
| kubelet | Same as the control plane or older, never newer (up to three minor versions older is supported) |
| kubectl | Can be ±1 minor version of control plane |
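Before planning the upgrade, take stock of where each component currently sits. A quick way to check on the control plane node (kubeadm-managed cluster assumed):
# Server and client versions as seen by kubectl
kubectl version
# kubeadm binary version on this node
kubeadm version -o short
# kubelet version on this node
kubelet --version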
Step 1: Back Up the Cluster
1. etcd Backup
# Certificate paths below assume a kubeadm-managed (stacked) etcd on this control plane node
export ETCDCTL_API=3
ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key
ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cert=$ETCDCTL_CERT --key=$ETCDCTL_KEY --cacert=$ETCDCTL_CACERT \
  snapshot save /root/etcd-backup-$(date +%F).db
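Verify the snapshot right away rather than discovering a bad backup mid-rollback. A minimal check (recent etcd releases prefer etcdutl for this, but the etcdctl subcommand still works):
etcdctl --write-out=table snapshot status /root/etcd-backup-$(date +%F).db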
2. Application Resource Backup
# Dump core resources from every namespace to per-namespace YAML files
mkdir -p k8s-backup
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  mkdir -p k8s-backup/$ns
  for res in deploy svc cm secret ingress pvc; do
    kubectl get $res -n $ns -o yaml > k8s-backup/$ns/${res}.yaml 2>/dev/null
  done
done
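Optionally, bundle the dump into a single dated archive so it can be copied off the cluster:
tar czf k8s-backup-$(date +%F).tar.gz k8s-backup/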
3. Persistent Volume Backup
How you back up persistent volumes depends on your storage backend:
- Cloud: use volume snapshots (EBS, Azure Disk, GCP Persistent Disk)
- Local / on-prem: use rsync, tar, or a tool such as Velero
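As an illustration, if Velero is already installed and wired up to an object-storage backend (an assumption, not something this guide sets up), a pre-upgrade backup could look like this:
# Hypothetical example; requires a working Velero installation and backup location
velero backup create pre-upgrade-$(date +%F)
velero backup describe pre-upgrade-$(date +%F)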
Step 2: Upgrade kubeadm
# Debian/Ubuntu shown throughout; on RHEL-based systems use e.g. yum install -y kubeadm-'1.33.x-*'
apt-get update && apt-get install -y kubeadm=1.33.x-00
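If you pinned the package with apt-mark hold during installation (as the official install instructions recommend), unhold it for the upgrade and pin it again afterwards:
apt-mark unhold kubeadm
apt-get install -y kubeadm=1.33.x-00
apt-mark hold kubeadm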
Check available upgrade paths:
kubeadm upgrade plan
Step 3: Upgrade the Control Plane
# Drain the node (even though it's the master, it's good practice)
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# Upgrade cluster
kubeadm upgrade apply v1.33.x
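Once the apply completes, it is worth spot-checking that the control plane static pods are running the target images. One way, relying on the component labels kubeadm puts on its static pods:
# List core control plane pods with their image versions
kubectl -n kube-system get pods \
  -l 'component in (kube-apiserver,kube-controller-manager,kube-scheduler,etcd)' \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'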
Step 4: Upgrade kubelet and kubectl
apt-get install -y kubelet=1.33.x-00 kubectl=1.33.x-00
systemctl daemon-reload && systemctl restart kubelet
# Uncordon the master node
kubectl uncordon <node-name>
Step 5: Upgrade Worker Nodes
Repeat these steps on each worker, one node at a time:
# Run drain/uncordon from a machine with kubectl access; run the remaining commands on the worker itself
kubectl drain <worker-node> --ignore-daemonsets --delete-emptydir-data
apt-get install -y kubeadm=1.33.x-00 kubelet=1.33.x-00 kubectl=1.33.x-00
kubeadm upgrade node
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon <worker-node>
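Before draining the next worker, confirm the one you just upgraded is back in service:
# Wait up to 5 minutes for the node to report Ready again
kubectl wait --for=condition=Ready node/<worker-node> --timeout=5m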
Step 6: Post-Upgrade Checks
Validate Cluster State
kubectl get nodes
kubectl get pods -A
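To confirm every node actually reports the new kubelet version, a custom-columns view helps:
kubectl get nodes -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion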
Certificate Expiry
kubeadm certs check-expiration
# Optional renewal
kubeadm certs renew all
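Note that kubeadm certs renew all only rewrites the certificate files; the control plane components pick the new certificates up after a restart. On a kubeadm cluster the usual approach is to briefly move the static pod manifests out of the manifests directory and back (a rough sketch; the scratch directory name is arbitrary):
# Restart control plane static pods so they load the renewed certificates
mkdir -p /etc/kubernetes/manifests-stopped
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests-stopped/
sleep 20   # kubelet notices the manifests are gone and stops the static pods
mv /etc/kubernetes/manifests-stopped/*.yaml /etc/kubernetes/manifests/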
Check Deprecated APIs
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
Step 7: Upgrade CNI, CoreDNS, and kube-proxy
kubeadm upgrade apply automatically upgrades the add-ons kubeadm manages, CoreDNS and kube-proxy, as part of the control plane upgrade. Your CNI plugin (Calico, Flannel, Cilium, etc.) is not managed by kubeadm and must be upgraded separately, following the provider's documentation. Confirm everything in kube-system is healthy:
kubectl get pods -n kube-system
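To see which add-on versions the upgrade left you with (coredns and kube-proxy are the default kubeadm names):
kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n kube-system get daemonset kube-proxy -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'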
Key Differences from HA Upgrade
| Area | Single Master | HA Cluster |
|---|---|---|
| Control Plane Nodes | 1 | 2+ (upgraded sequentially) |
| etcd | Local only | Stacked or external |
| Failover | Not available | Built-in redundancy |
| Simplicity | Easier | Complex coordination |
Conclusion
Upgrading a single-master Kubernetes cluster is a manageable task — but it’s not one to take lightly. With the right preparation, including proper backups, version checks, and careful sequencing of upgrades, you can avoid common pitfalls and keep your workloads running smoothly.