I have been trying to follow the getting started guide for EKS.
When I try to run kubectl get service, I get the message: error: You must be logged in to the server (Unauthorized)
Here is what I did:
1. Created the EKS cluster.
2. Created the config file as follows:
apiVersion: v1
clusters:
- cluster:
    server: https://*********.yl4.us-west-2.eks.amazonaws.com
    certificate-authority-data: *********
  name: *********
contexts:
- context:
    cluster: *********
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "*********"
        - "-r"
        - "arn:aws:iam::*****:role/******"
I can get a token when I run heptio-authenticator-aws token -r arn:aws:iam::**********:role/********* -i my-cluster-name. However, when I try to access the cluster, I keep receiving: error: You must be logged in to the server (Unauthorized)
Any idea how to fix this issue?
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator. Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
So, to grant access to other AWS users or roles, you must first edit the aws-auth ConfigMap to add the IAM user or role to the Amazon EKS cluster.
You can edit the ConfigMap file by executing:
kubectl edit -n kube-system configmap/aws-auth
after which you will be presented with an editor in which you can map new users.
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
  mapAccounts: |
    - "111122223333"
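As a sketch, if you would rather not grant full admin rights via system:masters, a mapUsers entry can map the IAM user to a custom Kubernetes group instead (the group name eks-viewers here is hypothetical, and a Role/RoleBinding granting that group permissions would still have to be created separately):

```yaml
# Hypothetical mapUsers entry: maps the IAM user to a custom
# group instead of system:masters. The group has no permissions
# until you bind a Role/ClusterRole to it.
mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/ops-user
    username: ops-user
    groups:
      - eks-viewers
```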
Mind the mapUsers section, where you're adding ops-user, together with the mapAccounts field, which maps the AWS account to usernames on the Kubernetes cluster.
However, no permissions are provided in RBAC by this action alone; you must still create role bindings in your cluster to provide these entities permissions.
As the Amazon documentation (iam-docs) states, you need to create a role binding on the Kubernetes cluster for the user specified in the ConfigMap. You can do that by executing the following command (kub-docs):
kubectl create clusterrolebinding ops-user-cluster-admin-binding --clusterrole=cluster-admin --user=ops-user
which grants the cluster-admin ClusterRole
to a user named ops-user across the entire cluster.
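If you prefer a declarative approach, the same binding can be expressed as a manifest and applied with kubectl apply -f (this is the standard RBAC equivalent of the create command above):

```yaml
# ClusterRoleBinding equivalent to:
# kubectl create clusterrolebinding ops-user-cluster-admin-binding \
#   --clusterrole=cluster-admin --user=ops-user
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ops-user-cluster-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ops-user
```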