Upgrading Kubernetes on Talos

Introduction

I pulled the short straw: the builtin talosctl upgrade-k8s did not work for me, and after spending some time investigating without finding the cause, I decided to go the long way and upgrade things manually.

For Reference: Official Docs v1.10

Upgrade steps

Set environment to the correct cluster

set -gx KUBECONFIG ~/.config/kubeconfig_hcloud
set -gx TALOSCONFIG ~/.config/talosconfig
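The set -gx lines above are fish shell; for bash/zsh users, the equivalent (same paths, just export) would be:

```shell
# bash/zsh equivalent of the fish `set -gx` lines above
export KUBECONFIG=~/.config/kubeconfig_hcloud
export TALOSCONFIG=~/.config/talosconfig
```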

Ensure everything is running fine.

In case someone wonders about the commands: I’m using oc as a kubectl replacement because my brain is hardwired through years of working with this thing.

oc get nodes
oc get pods -n kube-system
oc get pods -A | grep -Ev "Running|Completed"
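To make that health check scriptable, a small sketch (the not_healthy helper name is my own) that flags anything in the oc get pods -A output that is neither Running nor Completed:

```shell
# Sketch: detect pods that are neither Running nor Completed.
# Reads `oc get pods -A` output on stdin; with -A the STATUS column is field 4.
not_healthy() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { bad = 1; print } END { exit !bad }'
}

# Usage against the live cluster:
#   oc get pods -A | not_healthy && echo "unhealthy pods found"
```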

Upgrade the API Server

talosctl -n 10.0.1.101 patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/apiServer/image", "value": "registry.k8s.io/kube-apiserver:v1.33.0"}]'
watch oc get pods -n kube-system

Wait until the API server pod is running and the talos-cloud-controller pod is running as well.

Repeat for every control plane
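Since the same patch has to be applied on each control plane, a small loop helps. This is only a sketch: the second and third IPs are hypothetical placeholders, and DRY_RUN=1 just prints the commands instead of running them.

```shell
# Hypothetical control plane IPs -- only 10.0.1.101 appears in my notes,
# replace the rest with your own nodes.
CONTROL_PLANES="10.0.1.101 10.0.1.102 10.0.1.103"

patch_all() {
  # $1 = JSON patch document, e.g. the apiServer image replace from above
  for node in $CONTROL_PLANES; do
    ${DRY_RUN:+echo} talosctl -n "$node" patch mc --mode=no-reboot -p "$1"
  done
}

# Print what would run, without touching the cluster:
DRY_RUN=1 patch_all '[{"op": "replace", "path": "/cluster/apiServer/image", "value": "registry.k8s.io/kube-apiserver:v1.33.0"}]'
```

The same helper works for the controllerManager and scheduler patches below.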

Upgrade Kube Controller Manager

talosctl -n 10.0.1.101 patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/controllerManager/image", "value": "registry.k8s.io/kube-controller-manager:v1.33.0"}]'
watch oc get pods -n kube-system

Repeat for every control plane

Upgrade Scheduler

talosctl -n 10.0.1.101 patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/scheduler/image", "value": "registry.k8s.io/kube-scheduler:v1.33.0"}]'
watch oc get pods -n kube-system

Repeat…

The kube-proxy update is omitted because kube-proxy is disabled in my cluster; updating the bootstrap resources is omitted because they are managed by fluxcd in my case.

Upgrade Kubelet

talosctl -n 10.0.1.101 patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/machine/kubelet/image", "value": "ghcr.io/siderolabs/kubelet:v1.33.0"}]'
watch oc get nodes -o wide

Check the pods and continue once everything is running again.

watch oc get pods -A

Repeat for every node; if you want to be careful, drain worker nodes before restarting the kubelet.
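The careful variant could look like this sketch: the worker name and IP are hypothetical, --ignore-daemonsets and --delete-emptydir-data are the usual drain flags, and DRY_RUN=1 only prints the commands.

```shell
# Sketch: drain a worker, patch its kubelet image, wait for Ready, uncordon.
upgrade_worker() {
  node_name=$1
  node_ip=$2
  ${DRY_RUN:+echo} oc drain "$node_name" --ignore-daemonsets --delete-emptydir-data
  ${DRY_RUN:+echo} talosctl -n "$node_ip" patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/machine/kubelet/image", "value": "ghcr.io/siderolabs/kubelet:v1.33.0"}]'
  ${DRY_RUN:+echo} oc wait --for=condition=Ready "node/$node_name" --timeout=5m
  ${DRY_RUN:+echo} oc uncordon "$node_name"
}

# Dry run for a hypothetical worker:
DRY_RUN=1 upgrade_worker worker-1 10.0.1.201
```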