I want to scale the number of machines up and down to increase or decrease the number of nodes in my Kubernetes cluster. When I add one machine, I’m able to successfully register it with Kubernetes, so a new node is created as expected. However, it is not clear to me how to smoothly shut down the machine later. A good workflow would be to mark the node as unschedulable, start the pods that are running on it on other nodes, and only then delete those pods and the node.
If I understood correctly, even kubectl drain (discussion) doesn't do what I expect, since it doesn't start the pods before deleting them (it relies on a replication controller to start the pods afterwards, which may cause downtime). Am I missing something?
How should I properly shutdown a machine?
17 Answers
List the nodes and get the <node-name> of the node you want to drain or remove from the cluster:
kubectl get nodes
1) First, drain the node:
kubectl drain <node-name>
You might have to ignore DaemonSets and local data on the machine:
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
2) Edit the instance group for the nodes (only if you are using kops):
kops edit ig nodes
Set the MIN and MAX size to one less than their current values, then just save the file (nothing extra to be done).
You might still see some pods on the drained node that belong to DaemonSets, such as the networking plugin, fluentd for logs, kube-dns/CoreDNS, etc. (see the verification sketch after these steps).
3) Finally delete the node
kubectl delete node <node-name>
4) Commit the state for kops in S3 (only if you are using kops):
kops update cluster --yes
OR, if you are using kubeadm and would like to reset the machine to the state it was in before running kubeadm join, then run (on the node being removed):
kubeadm reset
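As a quick verification before deleting the node, you can check what is still running on it. A minimal sketch (<node-name> is a placeholder):
# Pods still scheduled on the drained node (typically only DaemonSet pods remain)
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>
# The node should show SchedulingDisabled while it is cordoned/drained
kubectl get node <node-name>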
kubectl get nodes
We’ll assume the name of the node to be removed is “mynode”; replace that with the actual node name in the commands below.
kubectl drain mynode
kubectl delete node mynode
kubeadm reset
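Note that, per the kubeadm documentation, kubeadm reset does not clean up iptables or IPVS rules. If you need a fully clean machine, you can flush iptables manually; a minimal sketch (run on the removed machine, and review before running since it is destructive):
# Flush iptables rules left behind by kube-proxy / the CNI plugin
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X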
Rafael, kubectl drain does work as you describe. There is some downtime, just as if the machine had crashed.
Can you describe your setup? How many replicas do you have, and are you provisioned such that you can't handle any downtime of a single replica?
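If you cannot tolerate any downtime during a drain, the usual pattern is to run more than one replica and add a PodDisruptionBudget so that evictions always keep a minimum number of pods available. A minimal sketch, assuming a Deployment labelled app=my-app (the names here are hypothetical, not from the question):
# Run two replicas of the (hypothetical) deployment so one can be evicted safely
kubectl scale deployment my-app --replicas=2
# Keep at least one replica available during voluntary disruptions such as a drain
cat <<EOF | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
EOF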
If the cluster is created by kops:
1. kubectl drain <node-name>
Now all the pods will be evicted. To ignore DaemonSets and local data:
2. kubectl drain <node-name> --ignore-daemonsets --delete-local-data
3. kops edit ig nodes-3 --state=s3://bucketname
Set the max and min values of the instance group to 0.
4. kubectl delete node <node-name>
5. kops update cluster --state=s3://bucketname --yes
Rolling update if required:
6. kops rolling-update cluster --state=s3://bucketname --yes
Validate the cluster:
7. kops validate cluster --state=s3://bucketname
Now the instance will be terminated.
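To double-check the instance group sizes before and after the edit, something like this can help (a sketch, assuming the same s3://bucketname state store as above):
# Show the instance groups with their minSize/maxSize values
kops get ig --state=s3://bucketname -o yaml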
Remove worker node from Kubernetes
When draining a node, we run the risk that the remaining nodes become unbalanced and that some processes suffer downtime. The purpose of this method is to maintain the load balance between nodes as much as possible, in addition to avoiding downtime.
# Mark the node as unschedulable.
echo Mark the node as unschedulable $NODENAME
kubectl cordon $NODENAME

# Get the list of namespaces running on the node.
NAMESPACES=$(kubectl get pods --all-namespaces -o custom-columns=:metadata.namespace --field-selector spec.nodeName=$NODENAME | sort -u | sed -e "/^ *$/d")

# Force a rollout on each of its deployments.
# Since the node is unschedulable, Kubernetes allocates
# the pods in other nodes automatically.
for NAMESPACE in $NAMESPACES
do
    echo deployment restart for $NAMESPACE
    kubectl rollout restart deployment/name -n $NAMESPACE
done

# Wait for the deployment rollouts to finish.
for NAMESPACE in $NAMESPACES
do
    echo deployment status for $NAMESPACE
    kubectl rollout status deployment/name -n $NAMESPACE
done

# Drain the node to be removed.
kubectl drain $NODENAME
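To use the snippet, set NODENAME first; note that deployment/name above is a placeholder for the actual deployment(s) to restart. A hypothetical usage sketch:
# Hypothetical node name; substitute your own
NODENAME=mynode
# List the deployments in a namespace to find the names to substitute for deployment/name
kubectl get deployments -n <namespace>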
I ran into some strange behavior with kubectl drain. Here are my extra steps; otherwise DATA WILL BE LOST in my case!
Short answer: CHECK THAT no PersistentVolume is mounted to this node. If have some PV, see the following descriptions to remove it.
When executing kubectl drain, I noticed that some Pods were not evicted (they just did not appear in the logs, like evicting pod xxx).
In my case, some were pods with soft anti-affinity (so they did not want to go to the remaining nodes), and some were pods of a StatefulSet of size 1 that wanted to keep at least 1 pod.
If I directly delete that node (using the commands mentioned in other answers), data will get lost because those pods have some PersistentVolumes, and deleting a Node will also delete PersistentVolumes (if using some cloud providers).
Thus, please delete those pods manually, one by one. After they are deleted, Kubernetes will re-schedule the pods to other nodes (because this node is SchedulingDisabled).
After deleting all pods (excluding DaemonSets), please CHECK THAT no PersistentVolume is mounted to this node.
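A minimal sketch for checking this (<node-name> is a placeholder):
# Pods still running on the node, and the volumes still attached to it
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>
kubectl get volumeattachments | grep <node-name>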
Then you can safely delete the node itself :)