invalid bearer token, service account token has expired #6555
Comments
I've never seen this on a cluster that doesn't have something wrong with the datastore. Are you using etcd or external SQL? Are there any errors on the server nodes? Are all nodes in the cluster running the same version of k3s? Can you replicate this on the latest 1.23 patch release?
Yes, using embedded etcd.
I can only see the following errors in the k3s journal.
Yes, all server and agent nodes are running the same version, v1.23.6+k3s1.
Is time set correctly on all the nodes in the cluster? Are you using ntp or something else to keep them in sync?
Yes, the time on all the nodes is in sync.
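To double-check the maintainer's question about clock sync, a minimal sketch for comparing timestamps gathered from two nodes (e.g. the output of `date +%s` run on each). The 30-second default threshold is an arbitrary assumption, not a k3s requirement:

```shell
#!/bin/sh
# Sketch: flag clock skew between two Unix timestamps, e.g. collected
# from two nodes with `date +%s`. The default 30s threshold is an
# arbitrary assumption.
clock_skew_ok() {
  a=$1; b=$2; max=${3:-30}
  d=$(( a - b ))
  [ "${d#-}" -le "$max" ]   # ${d#-} strips a leading minus: absolute value
}
```

In practice you would collect the timestamps over ssh, e.g. `ssh <node> date +%s` for each node (node names are yours to fill in), and compare each pair with the function above.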
This sounds a lot like rancher/rke2#3425 (comment), but that is in rke2 which packages Canal (which includes Calico) as a supported CNI. Since you've disabled the packaged k3s default Flannel CNI and deployed Calico in its place, it would be on you to verify its configuration and ensure that the token is being renewed.
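Following the comment above about verifying that Calico's token is being renewed, a hedged sketch of commands one might run. The namespace and workload names assume a standard Calico install in kube-system (newer installs may use calico-system); adjust for your deployment:

```shell
# Restarting the Calico workloads forces their pods to remount a fresh
# projected service-account token. This is a workaround to confirm the
# token-expiry diagnosis, not a root-cause fix.
kubectl -n kube-system rollout restart daemonset calico-node
kubectl -n kube-system rollout restart deployment calico-kube-controllers

# Then check whether the stuck pods recover:
kubectl get pods -A | grep ContainerCreating
```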
Environmental Info:
K3s Version: v1.23.6+k3s1
Node(s) CPU architecture, OS, and Version: Linux dev3-kv-02 5.13.0-52-generic #59~20.04.1-Ubuntu SMP Fri Jun 17 21:11:05 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
Cluster Configuration: 3 servers, 16 agents
Describe the bug:
Multiple pods from different namespaces are in the ContainerCreating state, including calico-kube-controllers and coredns in the kube-system namespace.
kubectl describe pod throws the following error for all the pods stuck in the ContainerCreating state:
Warning FailedCreatePodSandBox 55s (x1755 over 6h24m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "986af2c3af9e173a6f4084fcf73795bccf3b996c98f5a80b9f0a04a554cb8a21": plugin type="multus" name="multus-cni-network" failed (add): [cdi/cdi-apiserver-cdb4566f6-vq2zx/d82b198a-de67-4463-8e76-884e022fdc99:k8s-pod-network]: error adding container to network "k8s-pod-network": plugin type="calico" failed (add): error getting ClusterInformation: connection is unauthorized: Unauthorized
Journal logs are flooded with the following error:
Nov 24 13:00:43 dev3-kv-02 k3s[2648839]: E1124 13:00:43.020907 2648839 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]"
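The "service account token has expired" error suggests a pod is still presenting a stale token. A minimal sketch for reading the `exp` claim out of a token, assuming it is a standard three-part JWT; inside a pod the token is typically mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, but verify the path in your pod spec:

```shell
#!/bin/sh
# Sketch: print the `exp` (expiry, Unix seconds) claim of a JWT
# service-account token. Assumes a standard three-part JWT.
token_expiry() {
  payload=$(printf '%s' "$1" | cut -d. -f2)
  pad=$(( (4 - ${#payload} % 4) % 4 ))                 # base64url padding
  payload="$payload$(printf '%*s' "$pad" '' | tr ' ' '=')"
  printf '%s' "$payload" | tr '_-' '/+' | base64 -d 2>/dev/null \
    | grep -o '"exp":[0-9]*' | cut -d: -f2
}
```

Comparing the printed value against `date +%s` tells you whether the mounted token has expired; if it has, the kubelet is not refreshing the projected token for that pod.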
Steps To Reproduce:
Started seeing this all of a sudden; I think the certificates were renewed.
Expected behavior:
kube-system pods should be up and running.
Actual behavior:
Some of them, like the Calico and DNS pods, are stuck in the ContainerCreating state.
Additional context / logs:
Tried restarting the k3s service on the servers using
systemctl restart k3s
and on the agents using
systemctl restart k3s-node
but it didn't help.