When I provision a Kubernetes cluster using kubeadm, my worker nodes show a ROLES value of <none>. It's a known issue in Kubernetes, and a PR to fix it is in progress. In the meantime, is there a way to set a role name for a node manually?
root@ip-172-31-14-133:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-14-133 Ready master 19m v1.9.3
ip-172-31-6-147 Ready <none> 16m v1.9.3
This worked for me. kubectl derives the ROLES column from node labels with the node-role.kubernetes.io/ prefix, so adding such a label sets the role:
kubectl label node cb2.4xyz.couchbase.com node-role.kubernetes.io/worker=worker
NAME STATUS ROLES AGE VERSION
cb2.4xyz.couchbase.com Ready custom,worker 35m v1.11.1
cb3.5xyz.couchbase.com Ready worker 29m v1.11.1
I could not delete or update the old "custom" label, but I can live with it.
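For what it's worth, stale role labels can usually be removed or changed: kubectl label deletes a label when you append a trailing dash to its key, and --overwrite replaces an existing value. A sketch using the node and role names from the output above (adjust them for your cluster):

```shell
# Delete the stale "custom" role label (the trailing "-" removes a label)
kubectl label node cb2.4xyz.couchbase.com node-role.kubernetes.io/custom-

# Replace the value of an existing label instead of deleting it
kubectl label node cb2.4xyz.couchbase.com node-role.kubernetes.io/worker=worker --overwrite
```

After this, kubectl get nodes should show only the roles you want.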