40

Without any changes to the infrastructure, if I execute any kubectl command, e.g.:

kubectl get nodes

I get the error

You must be logged in to the server (Unauthorized) 

The Kubernetes cluster was operating fine and I made no changes to it... Any ideas how to debug this? kubectl has no -vv or debug flag to provide more information.

If I try

kubectl version

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", (...)
error: You must be logged in to the server (the server has asked for the client to provide credentials)


18 Answers

20

In my case the issue started after renewing the Kubernetes certificates, which left the existing ~/.kube/config with outdated key and certificate values.

The solution was to replace the client-certificate-data and client-key-data values in ~/.kube/config with the values of the same name from the updated /etc/kubernetes/kubelet.conf.
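To see what changed, a minimal sketch comparing the two files (this assumes a kubeadm layout where /etc/kubernetes/kubelet.conf embeds the data inline; some versions reference cert files on disk instead):

# Fresh base64-encoded values written by the renewal
sudo grep -E 'client-(certificate|key)-data' /etc/kubernetes/kubelet.conf

# Stale values still sitting in your kubeconfig
grep -E 'client-(certificate|key)-data' ~/.kube/config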

19

As of more recent versions of Kubernetes, you can copy the client-certificate-data and client-key-data from /etc/kubernetes/admin.conf into your ~/.kube/config file. See this answer for determining when your certificates expire.
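On kubeadm clusters you can also list every certificate's expiry directly; recent versions expose this as certs check-expiration (older releases used kubeadm alpha certs check-expiration):

sudo kubeadm certs check-expiration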

10

I got the same error after renewing the certificates with:

kubeadm alpha certs renew all

And then I had to do the following:

$ cd ~/.kube

Archive the old config file containing the out-of-date certificates:

$ mv config conf.archive.2021

Copy the new configuration file created by kubeadm (sudo is needed because admin.conf is readable only by root):

$ sudo cp /etc/kubernetes/admin.conf config

Change ownership to your current admin user and group:

$ sudo chown $(id -u):$(id -g) config
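A quick sanity check afterwards; any read-only call should now succeed with the refreshed credentials:

$ kubectl get nodes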

Kubernetes version: 1.19

Reference

9

I faced a similar issue today and the comments above helped me fix it. I am adding more details on my scenario because it might help people with similar setups.

I have a separate user for connecting to my k8s cluster (a normal self-managed cluster on EC2 instances). I had created the user arunlal with limited access by adding ClusterRoleBindings.

If you get the following error while calling the cluster API (in my case via kubectl):

error: You must be logged in to the server (Unauthorized)

Go through the following steps in order.

- First, check the cert used in your config file (local client).

I had a separate configuration on my local system, because the main config held the credentials for other k8s and OKD clusters. So I had created a second configuration on my laptop (/Users/arunlal/.kube/config_two). In this case I have the following alias:

alias kctl="kubectl --kubeconfig=/Users/arunlal/.kube/config_two"

- From this file you can find the cert that is in use:

[arunlal.as@crybit.com ~] cat /Users/arunlal/.kube/config_two | grep -A 5 users
users:
- name: arunlal
  user:
    client-certificate: /Users/arunlal/.arunlal-keys/arunlal.crt
    client-key: /Users/arunlal/.arunlal-keys/arunlal.key

- Once you have the cert from your client configuration, you can check its validity using the openssl command:

    [arunlal.as@crybit.com ~] openssl x509 -noout -dates -in /Users/arunlal/.arunlal-keys/arunlal.crt
    notBefore=Jun 22 23:43:22 2021 GMT
    notAfter=Sep 30 23:43:22 2021 GMT

- Validate the expiry.

While creating the user I had passed -days 5 to openssl, and that was the issue. Here is how I created the user:

openssl genrsa -out arunlal.key 2048
openssl req -new -key arunlal.key -out arunlal.csr -subj "/CN=arunlal/O=crybit"
openssl x509 -req -in arunlal.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out arunlal.crt -days 5
kubectl config set-credentials arunlal --client-certificate=/root/arunlal-keys/arunlal.crt  --client-key=/root/arunlal-keys/arunlal.key

- To fix it, I recreated the cert with a larger number of days:

openssl x509 -req -in arunlal.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out arunlal.crt -days 100

- This needs to be run on the k8s cluster (where the CA key is available).

- Replace the cert locally.

I overwrote /Users/arunlal/.arunlal-keys/arunlal.crt with the new cert.
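Before replacing it, a quick check that the new cert actually chains to the cluster CA (run on the control-plane node, assuming the default kubeadm CA path):

openssl verify -CAfile /etc/kubernetes/pki/ca.crt arunlal.crt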

Hope this will help someone. Thanks!

~ arun

3

The kubeconfig certificate may have changed. If you deployed your cluster using Terraform, run terraform apply to generate a new kubeconfig file.
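A hedged sketch, assuming your Terraform module exposes the kubeconfig as an output (the output name kubeconfig here is hypothetical; list yours with terraform output):

terraform apply
# "kubeconfig" is a hypothetical output name
terraform output -raw kubeconfig > ~/.kube/config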

2

In my case I was using a cluster created by kOps, with cluster admin credentials (~/.kube/config) generated by: kops export kubeconfig --admin

By default the credentials expire after 18 hours. So another:

kops export kubeconfig --admin

Did it for me.
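Newer kOps releases also accept a validity period on the --admin flag, so the exported credentials outlive the 18-hour default (check which flags your kOps version supports):

kops export kubeconfig --admin=87600h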

1

I'm getting "error: You must be logged in to the server (the server has asked for the client to provide credentials)" in Azure Kubernetes service.

I fixed this by reconnecting: I ran the commands shown under Connect to ClusterName in the Azure portal.

az account set --subscription de81a6e3-1784-4732-9282-XXXXXXXX7
az aks get-credentials --resource-group resourceGroupName --name clusterName
1

You can also use this command to troubleshoot issues with your current certificate. This command reads the client-certificate-data from your kubeconfig file, decodes it, and then uses OpenSSL to display the details:

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d | openssl x509 -text
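If you only want the expiry date, a variant that swaps -text for -noout -enddate prints just that:

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d | openssl x509 -noout -enddate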

Credit goes to this blog post by Scott Lowe for the original information.

1

Fix for microk8s

Back up first:

cp ~/.kube/config ~/.kube/config-backup

or, if running as root:

cp /root/.kube/config /root/.kube/config-backup

Fix if running as non-root user:

microk8s config > ~/.kube/config

Fix if running as root:

microk8s config > /root/.kube/config

Symptoms were:

kubectl <whatever>
E0617 11:17:58.215313    8464 memcache.go:265] couldn't get current server API group list: Get "https://10.0.2.15:16443/api?timeout=32s": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "10.152.183.1")
...
Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "10.152.183.1")

With --insecure-skip-tls-verify:

kubectl <whatever> --insecure-skip-tls-verify
E0617 11:07:10.405468  141846 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
...
error: You must be logged in to the server (the server has asked for the client to provide credentials)
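If regenerating the kubeconfig alone does not help, newer microk8s releases also ship a refresh-certs command for renewing the cluster-side certificates (subcommands and flags vary by release, so check the help first):

sudo microk8s refresh-certs --help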
0

Fixed: it turned out the cert had indeed changed. Check your ~/.kube/config for the old values if you hit this.

0

I fixed the issue by deleting the minikube cluster and recreating it:

minikube delete 
minikube start --vm-driver=none
0

After renewing your certs you need to apply the new certs to the admin config for kubectl to work.

The following takes a backup of your existing config and applies the new admin config:

cp /root/.kube/config /root/.kube/.old-$(date --iso)-config
cp /etc/kubernetes/admin.conf /root/.kube/config
0

In my case, using AWS EKS, what solved the problem was:

aws eks --region <region code> update-kubeconfig --name <cluster name>
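For example, with hypothetical placeholder values filled in:

aws eks --region us-east-1 update-kubeconfig --name my-cluster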
0

I had the same problem in AWS EKS. I created the cluster via CloudFormation and then tried to connect from an instance with a different role.

By default, only the user/role that was used to create the EKS cluster is permitted to connect to it. This is controlled by a Kubernetes ConfigMap named aws-auth (an IAM role that gives an instance permission to connect to Kubernetes needs to be configured not only in AWS IAM but also in the EKS ConfigMap).
For example, since I created the cluster from CloudFormation and not via Jenkins, I needed to add the Jenkins role to the aws-auth ConfigMap to be able to connect Jenkins to the cluster:

mapRoles: |
  - rolearn: arn:aws:iam::XXXXXXXXXXXX:role/testrole
    username: testrole
    groups:
      - system:masters

This helped me: https://repost.aws/knowledge-center/eks-api-server-unauthorized-error
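The aws-auth ConfigMap lives in the kube-system namespace, so with credentials that still work you can also edit it directly:

kubectl edit configmap aws-auth -n kube-system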

0

It means the role you are logged in with is missing from the aws-auth ConfigMap. In my case, someone else created the cluster, and when I created a new bastion host it threw the same error. When I edited the role in the aws-auth ConfigMap, it worked. Now, how can you access the aws-auth ConfigMap? Since it was a public cluster, you can access it through AWS CloudShell: type the command eksctl edit configmap, edit your role, and then you should be able to access the cluster.

0

I encountered this issue. I added the cluster creator's AWS credentials to my ~/.bashrc file, and then it worked:

export AWS_ACCESS_KEY_ID=XXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXX
export AWS_DEFAULT_REGION=ap-south-1

0

I had the exact same problem after a Docker update (v4.31.0). If you're using Azure Kubernetes Service (AKS), the solution for me was to run the following command in the terminal:

az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup
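If your kubeconfig already has an entry for the cluster, az will prompt before changing it; the --overwrite-existing flag replaces it non-interactively:

az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup --overwrite-existing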

0

Another possible cause (at least when using OCI) is an incorrect time/clock on the machine running kubectl. Since kubectl uses the oci CLI behind the scenes, oci fails when the clock is off.
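A minimal sketch of checking and correcting the clock on a systemd-based Linux host (the one-shot resync assumes chrony is installed):

# Check whether the local clock is synchronized
timedatectl status

# Force an immediate clock step; assumes chrony is the NTP client
sudo chronyc makestep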
