none: The connection to the server x:8443 was refused due to evicted apiserver #3611
The apiserver almost certainly failed here. Do you mind running:
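(For reference, a minimal sketch of generic ways to inspect the apiserver under the none driver; these are standard commands, not necessarily the exact one requested above:)

  sudo minikube logs                           # aggregated logs minikube collects from the cluster components
  sudo docker ps -a | grep kube-apiserver      # check whether the apiserver container has exited
  sudo docker logs <apiserver-container-id>    # its last output before stopping (placeholder id)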
Okay, I think I'm hitting this again. It did go away there for a while, but now it's back with (again) no explanation. Running
Running without
Running
and the API server shows as stopped in
If I try to start the API server again, I get the following output:
and then if I run
The log ends after this, presumably because the Docker container exited.
This line here in the above output:
is where everything starts to shut down, but there are no logs immediately before it that would indicate why. If I try to run minikube again after this:
If I wait a bit and try
Watching the kubelet logs after the cluster has started up, what I'm seeing is:
Then a little bit later, the API server starts to shutdown, coinciding with these logs in kubelet:
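(With the none driver the kubelet runs directly on the host, typically as a systemd unit, so its logs can be followed like this; a sketch:)

  sudo journalctl -u kubelet -f                        # stream kubelet logs as they happen
  sudo journalctl -u kubelet --since "10 minutes ago"  # or just the recent window around the failure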
Okay, I think I've figured out the issue. The API server is being evicted due to low ephemeral storage. The full trace of events looks like this:
then it tears down a bunch of volumes and other things ...
it then tries to clean up more pods immediately after ...
we get a bunch of messages like this ...
and then finally, here's the kicker ...
and later ...
From here on out, everything is broken because the API server pod got evicted.
But it should be noted that this machine does have plenty of space available:
so there's really no reason for the ephemeral storage to be causing this eviction.
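(For reference, a sketch of how host disk usage and the node's disk-pressure condition can be checked; the node name is omitted since the none driver registers the host itself:)

  df -h /                                    # free space on the filesystem backing the kubelet's nodefs
  df -ih /                                   # inode usage, which can also trip nodefs eviction
  kubectl describe node | grep -i pressure   # DiskPressure condition reported by the kubelet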
This is what the default eviction configuration looks like:
So based on this, it sounds like the Kubernetes default is a hard-eviction threshold of 10% free disk space (nodefs.available<10%). That default makes sense for dedicated cluster nodes, but is far too aggressive for development machines with large disks, where 10% of a 500GB SSD is 50GB. So the things that are surprising here and should be addressed:
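(As a possible workaround until the defaults change, the kubelet's hard-eviction threshold can be relaxed at start time; a sketch using minikube's --extra-config passthrough, with illustrative values. Note that setting eviction-hard replaces the entire default list:)

  # illustrative values: memory default kept, disk thresholds lowered from the 10%/15% defaults to a fixed 1Gi
  sudo minikube start --vm-driver=none \
    --extra-config=kubelet.eviction-hard="memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi"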
The disk eviction was addressed in #3671 - please try the new minikube version.
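(A sketch of upgrading the minikube binary in place on Linux; the URL is the standard release download location:)

  curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
  chmod +x minikube && sudo mv minikube /usr/local/bin/minikube
  minikube version    # confirm the new version is in use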
I've suddenly encountered an issue where a cluster running minikube with the none vm-driver will no longer start up. I didn't do anything specific to cause this; I was just using skaffold on it as usual, and the API server started to fall over.

Environment: minikube on Ubuntu bionic w/ Docker 18.09.1
Minikube version (use minikube version): v0.33.1
VM driver (use cat ~/.minikube/machines/minikube/config.json | grep DriverName): "DriverName": "none",
ISO version (use cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): N/A

What happened:
Kubernetes cluster now stops running after starting up.
What you expected to happen:
Kubernetes cluster should not randomly stop working after a minute or so.
How to reproduce it (as minimally and precisely as possible):
I'm not really sure how things got into this state, so I'm attaching as many logs as I can. On my system, just running sudo minikube start --vm-driver=none will reproduce the issue.

Output of the following script (to delete and recreate cluster):
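(The essence of that script is a delete-and-recreate sequence; a minimal sketch under the none driver, though the actual script may do more:)

  sudo minikube delete                   # tear down the existing cluster
  sudo minikube start --vm-driver=none   # recreate it from scratch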
Then I can run kubectl get pods and get this:

but if I wait about 30 seconds and run the command again, I get this:
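(One way to catch the moment it starts refusing connections; a sketch:)

  watch -n 5 kubectl get pods    # poll every 5 seconds until the connection-refused error appears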
Logs from API server:
Logs from kube-controller-manager:
Logs from kube-scheduler: