This article is deprecated in favor of the scripts and instructions at: https://github.com/coreos/tls_rotate
Preparation:
- On an etcd node, take a backup of the current state.
- Run bootkube recover to extract the existing state from etcd. Copy this back to your working machine as a precaution, in case the control plane goes down.
- Download the current kubeconfig and assets.zip.
Take a look at the TLS rotation guide and download the scripts and openssl config. Then, before generating the assets, generate a new CA. This could be done via a script:
#!/bin/bash -e
# Directory that will hold the newly generated CA material.
export DIR=~/cert-regen
mkdir -p $DIR
# Generate a 4096-bit CA key and a self-signed CA cert valid for 10 years.
openssl genrsa -out $DIR/ca.key 4096
openssl req -config openssl.conf \
  -new -x509 -days 3650 -sha256 \
  -key $DIR/ca.key -extensions v3_ca \
  -out $DIR/ca.crt -subj "/CN=kube-ca/O=bootkube"
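Before continuing, it is worth sanity-checking the new CA (this assumes $DIR from the script above):
openssl x509 -in $DIR/ca.crt -noout -subject -issuer -dates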
- Pass the ca.key and ca.crt files via environment variables for the rest of the certificate generation steps:
export CA_CERT=$HOME/cert-regen/ca.crt
export CA_KEY=$HOME/cert-regen/ca.key
export ETCD_CA_CERT=$HOME/cert-regen/ca.crt
- Follow the remaining steps in the Generating new certs section of the TLS Certificate Rotation in Tectonic document. Ensure the environment variables BASE_DOMAIN, CLUSTER_NAME, and APISERVER_INTERNAL_IP are set before running the gencerts.sh script.
- Base64 encode the contents of the ca.crt file:
base64 -w 0 ca.crt
- In the generated/patches directory, add the following fields, setting each to the base64-encoded CA generated above (see the sketch after this list):
  - identity-grpc-client.patch: ca-cert
  - identity-grpc-server.patch: ca-cert
  - kube-apiserver-secret.patch: ca.crt and oidc-ca.crt
  - A new patch file for kube-system/kube-controller-manager: ca.crt
  - A new patch file for tectonic-system/tectonic-ca-cert: ca-cert
Rotating the etcd CA
First, introduce the new etcd CA bundle and restart the API server. This bundle includes both the new and old CA certificates.
kubectl patch -f generated/etcd/patches/etcd-ca.patch \
-p "$( cat generated/etcd/patches/etcd-ca.patch )"
kubectl delete pods -n kube-system -l k8s-app=kube-apiserver
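Before touching etcd, wait for the replacement API server pods to report Running, for example:
kubectl get pods -n kube-system -l k8s-app=kube-apiserver -w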
For each etcd node, copy the new CA bundle over and restart etcd.
for ADDR in $ETCD_IPS; do
echo "etcd on $ADDR restarting"
scp -o StrictHostKeyChecking=no generated/etcd/ca_bundle.pem core@$ADDR:/home/core/ca.crt
ssh -A -o StrictHostKeyChecking=no \
core@$ADDR "sudo chown etcd:etcd /home/core/ca.crt; \
sudo cp -r /etc/ssl/etcd /etc/ssl/etcd.bak; \
sudo mv /home/core/ca.crt /etc/ssl/etcd/ca.crt; \
sudo systemctl restart etcd-member"
echo "etcd on $ADDR restarted"
sleep 10
done
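Before moving on, confirm each member is healthy with the new bundle in place. A sketch using etcdctl v3, assuming the existing client certs still live under /etc/ssl/etcd as in the loop above:
for ADDR in $ETCD_IPS; do
  ssh -o StrictHostKeyChecking=no core@$ADDR \
    "ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
       --cacert=/etc/ssl/etcd/ca.crt \
       --cert=/etc/ssl/etcd/client.crt --key=/etc/ssl/etcd/client.key \
       endpoint health"
done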
Once all etcd instances are seeded with the new CA certificate, rotate the API server’s client certs:
kubectl patch -f generated/etcd/patches/etcd-client-cert.patch \
-p "$( cat generated/etcd/patches/etcd-client-cert.patch )"
kubectl delete pods -n kube-system -l k8s-app=kube-apiserver
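Any API read that goes through etcd will confirm the new client certs work, for example:
kubectl get nodes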
Finally, for each etcd instance, rotate the peer and serving certs:
for ADDR in $ETCD_IPS; do
echo "etcd on $ADDR restarting"
scp -o StrictHostKeyChecking=no \
generated/etcd/tls/{peer.crt,peer.key,server.crt,server.key,client.crt,client.key} \
core@$ADDR:/home/core
ssh -A -o StrictHostKeyChecking=no core@$ADDR \
"sudo chown etcd:etcd /home/core/{peer.crt,peer.key,server.crt,server.key,client.crt,client.key}; \
sudo chmod 0400 /home/core/{peer.crt,peer.key,server.crt,server.key,client.crt,client.key}; \
sudo mv /home/core/{peer.crt,peer.key,server.crt,server.key,client.crt,client.key} /etc/ssl/etcd/; \
sudo systemctl restart etcd-member"
echo "etcd on $ADDR restarted"
sleep 10
done
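To verify each member now presents a certificate signed by the new CA, the serving cert can be inspected remotely (assuming client traffic on port 2379):
for ADDR in $ETCD_IPS; do
  echo "== $ADDR =="
  openssl s_client -connect $ADDR:2379 -showcerts </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -enddate
done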
Rotating the control plane certificates
TLS assets for components running on top of Kubernetes can be updated using kubectl, including self-hosted control plane components such as the API server. To rotate those certificates, patch the manifests and roll the deployments.
WARNING: The following commands MUST use kubectl patch and NOT other kubectl creation subcommands, because the patch files are partial manifests; creating or replacing objects from them would drop the keys they do not set.
kubectl patch -f generated/patches/identity-grpc-client.patch \
-p "$( cat generated/patches/identity-grpc-client.patch )"
kubectl patch -f generated/patches/identity-grpc-server.patch \
-p "$( cat generated/patches/identity-grpc-server.patch )"
kubectl patch -f generated/patches/ingress-tls.patch \
-p "$( cat generated/patches/ingress-tls.patch )"
kubectl patch -f generated/patches/tectonic-ca-cert-secret.patch \
-p "$( cat generated/patches/tectonic-ca-cert-secret.patch )"
kubectl patch -f generated/patches/kube-controller-manager-secret.patch \
-p "$( cat generated/patches/kube-controller-manager-secret.patch )"
kubectl patch -f generated/patches/kube-apiserver-secret.patch \
-p "$( cat generated/patches/kube-apiserver-secret.patch )"
Rotate the control plane pods
Scale the controller-manager and scheduler deployments down to 2 replicas, then update their pod templates. With only 2 replicas running, the rolling update can place a new pod on the third node without violating the anti-affinity rules.
kubectl scale deployments -n kube-system kube-controller-manager --replicas 2
kubectl scale deployments -n kube-system kube-scheduler --replicas 2
kubectl patch deployments -n kube-system kube-controller-manager \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n kube-system kube-scheduler \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
At this point, there should be three pods for each of those deployments, with one of them stuck in CrashLoopBackOff because it cannot connect to the API server.
Proceed to delete the API server pods. Note that the API server may become temporarily unavailable after this action.
kubectl delete pods -n kube-system -l k8s-app=kube-apiserver
Watch the replacement pods come back up:
kubectl get pods -n kube-system -l k8s-app=kube-apiserver
Once the control plane has stabilized, scale back to 3 replicas on the deployments:
kubectl scale deployments -n kube-system kube-controller-manager --replicas 3
kubectl scale deployments -n kube-system kube-scheduler --replicas 3
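Both deployments should report all 3 replicas available before continuing:
kubectl get deployments -n kube-system kube-controller-manager kube-scheduler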
Rotate all certs to use the new CA
Restart the kubelet on each node to pick up the new CA:
for IP in $MASTER_IPS $WORKER_IPS; do
echo "Kubelet on $IP restarting"
ssh -A -o StrictHostKeyChecking=no core@$IP "sudo systemctl restart kubelet"
echo "Kubelet on $IP restarted"
sleep 5
done
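After the restarts, every node should report Ready again:
kubectl get nodes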
Patch the workloads in the kube-system namespace so their pods are recreated with the new CA:
kubectl patch statefulsets -n kube-system prometheus-etcd \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n kube-system heapster \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n kube-system kube-dns \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch daemonsets -n kube-system kube-flannel \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch daemonsets -n kube-system fluentd-agent \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch daemonsets -n kube-system kube-proxy \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch daemonsets -n kube-system pod-checkpointer \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
Patch all workloads in the tectonic-system namespace:
kubectl patch statefulsets -n tectonic-system alertmanager-main \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch statefulsets -n tectonic-system prometheus-k8s \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch daemonsets -n tectonic-system node-agent \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system alm-operator \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system catalog-operator \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system container-linux-update-operator \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system grafana \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system kube-state-metrics \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system kube-version-operator \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system prometheus-operator \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-alm-operator \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-channel-operator \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-cluo-operator \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-console \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-identity \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-ingress-controller \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-monitoring-auth-alertmanager \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-monitoring-auth-grafana \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-monitoring-auth-prometheus \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-prometheus-operator \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
kubectl patch deployments -n tectonic-system tectonic-stats-emitter \
-p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
Notes
- Patch application components: anything that uses the Kubernetes API will need to be bounced so it picks up the new CA.
- Generate new kubeconfigs, since existing ones still reference the previous CA.