Even after granting cluster roles to the user, I get `Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)`.

I have the following set for the user in the `~/.kube/config` file:
```yaml
- context:
    cluster: kubernetes
    user: [email protected]
  name: user@kubernetes
```
and the below added to `admin.yaml` to create the ClusterRole and ClusterRoleBinding:
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
  - kind: User
    name: [email protected]
roleRef:
  kind: ClusterRole
  name: admin-role
```
When I try the command, I still get the error:

```shell
$ kubectl --username=[email protected] get nodes
Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
```
Can someone please suggest how to proceed?
Your problem is not with your ClusterRoleBinding but rather with user authentication. Kubernetes tells you that it identified you as `system:anonymous` (which is similar to *NIX's `nobody`) and not `[email protected]` (the user to which you applied your binding).
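As a quick sanity check, you can inspect which credentials kubectl is actually configured to send; a minimal sketch, assuming your active context is `user@kubernetes`:

```shell
# Show which context kubectl currently uses
kubectl config current-context

# Show only the active context's cluster, user, and credential fields
kubectl config view --minify
```

If the user entry for `[email protected]` carries no usable credentials (no client certificate, token, or password), the API server treats the request as anonymous.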
In your specific case, the reason for that is that the `--username` flag uses HTTP Basic authentication and needs the `--password` flag to actually do anything. But even if you did supply the password, you'd still need to tell the API server to accept that specific user.
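For illustration, a Basic-auth request would look like this (the password value here is hypothetical):

```shell
# Both flags are required; --username alone sends no credentials at all
kubectl --username=[email protected] --password=<your-password> get nodes
```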
Have a look at [this part of the Kubernetes documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/), which deals with the different methods of authentication. For the `--username` and `--password` authentication to work, you'd want to look at the Static Password File section, but I would actually recommend you go with X509 Client Certs since they are more secure and are operationally much simpler (no secrets on the server, no state to replicate between API servers).
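If you do want the Static Password File route, a minimal sketch (file path and values are hypothetical, and note this mechanism has since been removed in newer Kubernetes releases):

```shell
# Hypothetical /etc/kubernetes/users.csv -- one line per user:
#   password,username,uid[,"group1,group2"]
# e.g.:
#   secretpassword,[email protected],1001,"admins"
# Then start the API server with:
#   kube-apiserver ... --basic-auth-file=/etc/kubernetes/users.csv
```

For X509 client certificates, the rough workflow is: generate a key and a CSR whose Common Name becomes the Kubernetes username, sign it with the cluster CA, and point your kubeconfig user entry at the result. A sketch, assuming a kubeadm-style cluster where the CA lives under `/etc/kubernetes/pki/`:

```shell
# 1. Key + CSR; the CN is what RBAC will see as the user name
openssl genrsa -out user.key 2048
openssl req -new -key user.key -out user.csr -subj "/CN=[email protected]"

# 2. Sign the CSR with the cluster CA (paths assume kubeadm defaults)
openssl x509 -req -in user.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -out user.crt -days 365

# 3. Wire the cert into your kubeconfig and use it
kubectl config set-credentials [email protected] \
  --client-certificate=user.crt --client-key=user.key --embed-certs=true
kubectl config set-context user@kubernetes --cluster=kubernetes [email protected]
kubectl config use-context user@kubernetes
kubectl get nodes
```

Once the request authenticates as `[email protected]`, your existing ClusterRoleBinding should grant the `get nodes` call.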