Two of my cluster nodes sometimes show Kubelet stopped posting node status
in kubectl describe node. In the logs of those nodes I see this:
Dec 11 12:01:03 alma-kube1 kubelet[946]: E1211 06:01:03.166998 946 controller.go:115] failed to ensure node lease exists, will retry in 6.4s, error: Get https://192.168.151.52:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/alma-kube1?timeout=10s: read tcp 192.168.170.7:46824->192.168.151.52:6443: use of closed network connection
Dec 11 12:01:03 alma-kube1 kubelet[946]: W1211 06:01:03.167045 946 reflector.go:289] object-"kube-public"/"myregistrykey": watch of *v1.Secret ended with: very short watch: object-"kube-public"/"myregistrykey": Unexpected watch close - watch lasted less than a second and no items received
Dec 11 12:01:03 alma-kube1 kubelet[946]: W1211 06:01:03.167356 946 reflector.go:289] object-"kube-system"/"kube-router-token-bfzkn": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"kube-router-token-bfzkn": Unexpected watch close - watch lasted less than a second and no items received
Dec 11 12:01:03 alma-kube1 kubelet[946]: W1211 06:01:03.167418 946 reflector.go:289] object-"kube-public"/"default-token-kcnfl": watch of *v1.Secret ended with: very short watch: object-"kube-public"/"default-token-kcnfl": Unexpected watch close - watch lasted less than a second and no items received
Dec 11 12:01:13 alma-kube1 kubelet[946]: E1211 06:01:13.329262 946 kubelet_node_status.go:385] Error updating node status, will retry: failed to patch status "{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"MemoryPressure\"},{\"type\":\"DiskPressure\"},{\"type\":\"PIDPressure\"},{\"type\":\"Ready\"}],\"conditions\":[{\"lastHeartbeatTime\":\"2019-12-11T06:01:03Z\",\"type\":\"MemoryPressure\"},{\"lastHeartbeatTime\":\"2019-12-11T06:01:03Z\",\"type\":\"DiskPressure\"},{\"lastHeartbeatTime\":\"2019-12-11T06:01:03Z\",\"type\":\"PIDPressure\"},{\"lastHeartbeatTime\":\"2019-12-11T06:01:03Z\",\"type\":\"Ready\"}]}}" for node "alma-kube1": Patch https://192.168.151.52:6443/api/v1/nodes/alma-kube1/status?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
The problem is that the kubelet sometimes cannot patch its node status. More than 250 resources stay on the node, and the kubelet cannot watch more than 250 streams to the kube-apiserver over a single connection at the same time (250 is the Go HTTP/2 default stream limit). Raise the kube-apiserver --http2-max-streams-per-connection flag
to 1000 to relieve the pain.
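A minimal sketch of where to set the flag, assuming a kubeadm-based cluster where the API server runs as a static pod (the manifest path is an assumption; adjust it for your setup):

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --http2-max-streams-per-connection=1000
    # ...the rest of the existing flags stay unchanged...

The kubelet watches the manifests directory and restarts the kube-apiserver pod automatically once the file is saved. Apply this on every control-plane node.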
Take a look at: kubernetes-patch.
EDIT:
Kubernetes uses client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth to authenticate API requests through authentication plugins. As HTTP requests are made to the API server, plugins attempt to associate the following attributes with the request: a username, a UID, groups, and extra fields.
You can enable multiple authentication methods at once. You should usually use at least two methods: service account tokens for service accounts, and at least one other method for user authentication.
When multiple authenticator modules are enabled, the first module to successfully authenticate the request short-circuits evaluation. The API server does not guarantee the order authenticators run in.
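As a rough illustration (an excerpt only, not a complete command line; the file paths follow kubeadm defaults and are assumptions), two methods can be enabled together with kube-apiserver flags:

kube-apiserver \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \
  ...

Here --client-ca-file turns on X509 client certificate authentication, while --service-account-key-file is used to verify service account bearer tokens.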
You can find information about tokens here: tokens.
You can also use a service account, which is an automatically enabled authenticator that uses signed bearer tokens to verify requests.
Service accounts are usually created automatically by the API server and associated with pods running in the cluster through the ServiceAccount Admission Controller. Bearer tokens are mounted into pods at well-known locations, and allow in-cluster processes to talk to the API server.
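That well-known location is /var/run/secrets/kubernetes.io/serviceaccount. As a quick check (pod-name is a placeholder for any running pod), you can read the mounted token and CA bundle like this:

$ kubectl exec pod-name -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
$ kubectl exec pod-name -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt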
Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long-standing jobs that wish to talk to the Kubernetes API. To manually create a service account, simply use the kubectl create serviceaccount (NAME) command. This creates a service account in the current namespace and an associated secret.
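For example (myuser is just a placeholder name):

$ kubectl create serviceaccount myuser
$ kubectl get serviceaccount myuser -o yaml

The name of the associated token secret shows up in the secrets: field of the output.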
Secrets often hold values that span a spectrum of importance, many of which can cause escalations within Kubernetes (e.g. service account tokens) and to external systems. Even if an individual app can reason about the power of the secrets it expects to interact with, other apps within the same namespace can render those assumptions invalid.
To check tokens, first list the secrets and then describe the one you are interested in.

To list secrets, execute the command below:

$ kubectl get secret

Then describe a specific secret:

$ kubectl describe secret secret-name
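Once you have the token, a minimal sketch of calling the API server from outside the cluster (secret-name is a placeholder; the address is the one from the logs above; -k disables TLS verification and is for testing only):

$ TOKEN=$(kubectl get secret secret-name -o jsonpath='{.data.token}' | base64 --decode)
$ curl -k -H "Authorization: Bearer $TOKEN" https://192.168.151.52:6443/api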
You can find more information here: secret.