none: The connection to the server x:8443 was refused due to evicted apiserver #3611

Closed
hach-que opened this issue Jan 31, 2019 · 9 comments

Labels: co/none-driver, priority/awaiting-more-evidence, triage/needs-information, triage/obsolete

Comments

@hach-que

I've suddenly encountered an issue where a cluster running minikube with the none VM driver will no longer stay up. I didn't do anything specific to cause this; I was just using Skaffold on it as usual, and the API server started falling over.

Environment: minikube on Ubuntu bionic with Docker 18.09.1

  • Minikube version (use minikube version): v0.33.1

  • OS (e.g. from /etc/os-release):
    NAME="Ubuntu"
    VERSION="18.04.1 LTS (Bionic Beaver)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 18.04.1 LTS"
    VERSION_ID="18.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=bionic
    UBUNTU_CODENAME=bionic
    
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): "DriverName": "none",
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): N/A
  • Install tools: N/A
  • Others: N/A

What happened:
Kubernetes cluster now stops running after starting up.

What you expected to happen:
Kubernetes cluster should not randomly stop working after a minute or so.

How to reproduce it (as minimally and precisely as possible):
I'm not really sure how things got into this state, so I'm attaching as many logs as I can. On my system, just running sudo minikube start --vm-driver=none reproduces the issue.

Output of the following script (which deletes and recreates the cluster):

    export CHANGE_MINIKUBE_NONE_USER=true
    sudo minikube config set WantReportErrorPrompt false
    sudo minikube stop || true
    sudo minikube delete || true
    docker stop $(docker ps | awk '{print $1}') || true
    docker container prune -f || true
    docker image prune -f || true
    sudo kill -KILL $(sudo pidof kube-scheduler) $(sudo pidof kube-controller) $(sudo pidof kube-controller-manager) $(sudo pidof kube-addons.sh) || true
    sudo rm -Rf /etc/kubernetes || true
    sudo rm -Rf /data/minikube || true
    sudo rm -Rf /var/lib/minikube || true
    sudo rm -Rf /var/lib/kubelet || true
    sleep 10
    sudo minikube start --vm-driver=none
    sudo mv /root/.kube $HOME/.kube || true
    sudo chown -R $USER $HOME/.kube || true
    sudo chgrp -R $USER $HOME/.kube || true
    sudo mv /root/.minikube $HOME/.minikube || true
    sudo chown -R $USER $HOME/.minikube || true
    sudo chgrp -R $USER $HOME/.minikube || true
Stopping local Kubernetes cluster...
Error stopping machine:  Load: minikube: Error loading host from store: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.


minikube failed :( exiting with error code 1
Deleting local Kubernetes cluster...
Errors occurred deleting machine:  Load: minikube: Error loading host from store: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
Error response from daemon: No such container: CONTAINER
Total reclaimed space: 0B
Total reclaimed space: 0B

Usage:
 kill [options] <pid> [...]

Options:
 <pid> [...]            send signal to every <pid> listed
 -<signal>, -s, --signal <signal>
                        specify the <signal> to be sent
 -l, --list=[<signal>]  list all signal names, or convert one to a name
 -L, --table            list all signal names in a nice table

 -h, --help     display this help and exit
 -V, --version  output version information and exit

For more details see kill(1).
Starting local Kubernetes v1.13.2 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Starting cluster components...
Verifying kubelet health ...
Verifying apiserver health ...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
        The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks

When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions.  An example of this is below:

        sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
        sudo chown -R $USER $HOME/.kube
        sudo chgrp -R $USER $HOME/.kube

        sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
        sudo chown -R $USER $HOME/.minikube
        sudo chgrp -R $USER $HOME/.minikube

This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.


Everything looks great. Please enjoy minikube!
mv: cannot stat '/root/.kube': No such file or directory
mv: cannot stat '/root/.minikube': No such file or directory

Then I can run kubectl get pods and get this:

june@june-Virtual-Machine:~/next$ kubectl get pods
No resources found.

But if I wait about 30 seconds and run the command again, I get this:

june@june-Virtual-Machine:~/next$ kubectl get pods
The connection to the server 192.168.68.13:8443 was refused - did you specify the right host or port?
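
Since the failure only shows up after roughly 30 seconds, a small watch loop makes it easier to catch the exact moment the apiserver stops answering and to grab the container for docker logs. This is just a debugging sketch; the 192.168.68.13:8443 address and the k8s_kube-apiserver name filter are assumptions based on the output above:

    # Poll the apiserver health endpoint once a second and note when it stops responding,
    # and show whether the apiserver container is still running at that point.
    while true; do
        date
        curl -sk --max-time 2 https://192.168.68.13:8443/healthz || echo "apiserver unreachable"
        docker ps --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'
        sleep 1
    done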

Logs from API server:

june@june-Virtual-Machine:~/next$ docker logs -f d69c76fe5ab7
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0131 02:23:23.477596       1 server.go:557] external host was not specified, using 192.168.68.13
I0131 02:23:23.477660       1 server.go:146] Version: v1.13.2
I0131 02:23:23.739255       1 initialization.go:91] enabled Initializers feature as part of admission plugin setup
I0131 02:23:23.739474       1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers.
I0131 02:23:23.739492       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0131 02:23:23.739915       1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers.
I0131 02:23:23.739922       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0131 02:23:24.231282       1 master.go:228] Using reconciler: lease
W0131 02:23:24.846368       1 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0131 02:23:24.925151       1 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0131 02:23:24.932387       1 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0131 02:23:24.942101       1 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0131 02:23:25.001643       1 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2019/01/31 02:23:25 log.go:33: [restful/swagger] listing is available at https://192.168.68.13:8443/swaggerapi
[restful] 2019/01/31 02:23:25 log.go:33: [restful/swagger] https://192.168.68.13:8443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2019/01/31 02:23:26 log.go:33: [restful/swagger] listing is available at https://192.168.68.13:8443/swaggerapi
[restful] 2019/01/31 02:23:26 log.go:33: [restful/swagger] https://192.168.68.13:8443/swaggerui/ is mapped to folder /swagger-ui/
I0131 02:23:26.110088       1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers.
I0131 02:23:26.110101       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0131 02:23:28.268575       1 secure_serving.go:116] Serving securely on [::]:8443
I0131 02:23:28.268647       1 autoregister_controller.go:136] Starting autoregister controller
I0131 02:23:28.268652       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0131 02:23:28.268668       1 crd_finalizer.go:242] Starting CRDFinalizer
I0131 02:23:28.268705       1 controller.go:84] Starting OpenAPI AggregationController
I0131 02:23:28.269062       1 apiservice_controller.go:90] Starting APIServiceRegistrationController
I0131 02:23:28.269091       1 naming_controller.go:284] Starting NamingConditionController
I0131 02:23:28.269091       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0131 02:23:28.269099       1 establishing_controller.go:73] Starting EstablishingController
I0131 02:23:28.269072       1 available_controller.go:283] Starting AvailableConditionController
I0131 02:23:28.269109       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0131 02:23:28.269079       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0131 02:23:28.269120       1 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller
I0131 02:23:28.269086       1 customresource_discovery_controller.go:203] Starting DiscoveryController
I0131 02:23:28.368832       1 cache.go:39] Caches are synced for autoregister controller
I0131 02:23:28.369216       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0131 02:23:28.369240       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0131 02:23:28.369251       1 controller_utils.go:1034] Caches are synced for crd-autoregister controller
I0131 02:23:32.342294       1 trace.go:76] Trace[177034971]: "Create /api/v1/namespaces" (started: 2019-01-31 02:23:28.340120008 +0000 UTC m=+4.902530826) (total time: 4.002157187s):
Trace[177034971]: [4.000883795s] [4.000854996s] About to store object in database
I0131 02:23:32.371852       1 trace.go:76] Trace[2053959310]: "Create /apis/apiregistration.k8s.io/v1/apiservices" (started: 2019-01-31 02:23:28.369253005 +0000 UTC m=+4.931663823) (total time: 4.002579084s):
Trace[2053959310]: [4.000695897s] [4.000663997s] About to store object in database
I0131 02:23:32.374229       1 trace.go:76] Trace[578462960]: "Create /apis/apiregistration.k8s.io/v1/apiservices" (started: 2019-01-31 02:23:28.369325705 +0000 UTC m=+4.931736523) (total time: 4.004888868s):
Trace[578462960]: [4.000648397s] [4.000526698s] About to store object in database
I0131 02:23:32.374481       1 trace.go:76] Trace[64880682]: "Create /apis/apiregistration.k8s.io/v1/apiservices" (started: 2019-01-31 02:23:28.369438204 +0000 UTC m=+4.931849122) (total time: 4.005032667s):
Trace[64880682]: [4.000673497s] [4.000576798s] About to store object in database
I0131 02:23:32.375449       1 trace.go:76] Trace[271942094]: "Create /apis/apiregistration.k8s.io/v1/apiservices" (started: 2019-01-31 02:23:28.369171306 +0000 UTC m=+4.931582224) (total time: 4.006266758s):
Trace[271942094]: [4.000777496s] [4.000598098s] About to store object in database
I0131 02:23:32.376482       1 trace.go:76] Trace[1528798397]: "Create /apis/apiregistration.k8s.io/v1/apiservices" (started: 2019-01-31 02:23:28.369364704 +0000 UTC m=+4.931775522) (total time: 4.007106653s):
Trace[1528798397]: [4.000584198s] [4.000496598s] About to store object in database
I0131 02:23:32.399805       1 trace.go:76] Trace[2118605659]: "Create /api/v1/nodes" (started: 2019-01-31 02:23:28.297498904 +0000 UTC m=+4.859909722) (total time: 4.102288891s):
Trace[2118605659]: [4.101059999s] [4.1009962s] About to store object in database
I0131 02:23:32.478641       1 trace.go:76] Trace[101230384]: "Create /api/v1/namespaces" (started: 2019-01-31 02:23:28.476259561 +0000 UTC m=+5.038670379) (total time: 4.002348686s):
Trace[101230384]: [4.001272994s] [4.000778297s] About to store object in database
I0131 02:23:33.272027       1 trace.go:76] Trace[298124158]: "Create /apis/scheduling.k8s.io/v1beta1/priorityclasses" (started: 2019-01-31 02:23:29.270239843 +0000 UTC m=+5.832650661) (total time: 4.001762991s):
Trace[298124158]: [4.000721698s] [4.000702998s] About to store object in database
I0131 02:23:33.272307       1 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0131 02:23:33.275929       1 trace.go:76] Trace[321260434]: "Create /api/v1/namespaces" (started: 2019-01-31 02:23:29.270563341 +0000 UTC m=+5.832974159) (total time: 4.005354266s):
Trace[321260434]: [4.000639999s] [4.000621499s] About to store object in database
I0131 02:23:33.278591       1 trace.go:76] Trace[254642712]: "Create /apis/rbac.authorization.k8s.io/v1/clusterroles" (started: 2019-01-31 02:23:29.276993296 +0000 UTC m=+5.839404214) (total time: 4.001583592s):
Trace[254642712]: [4.000743898s] [4.000722498s] About to store object in database
I0131 02:23:33.279016       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0131 02:23:36.355432       1 trace.go:76] Trace[947826961]: "Create /api/v1/namespaces/default/services" (started: 2019-01-31 02:23:32.345436373 +0000 UTC m=+8.907847191) (total time: 4.009979637s):
Trace[947826961]: [4.0007769s] [4.000730701s] About to store object in database
W0131 02:23:36.369212       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.68.13]
I0131 02:23:36.369757       1 controller.go:608] quota admission added evaluator for: endpoints
I0131 02:23:36.377176       1 trace.go:76] Trace[789502426]: "Create /apis/apiregistration.k8s.io/v1/apiservices" (started: 2019-01-31 02:23:32.372471985 +0000 UTC m=+8.934882903) (total time: 4.004690173s):
Trace[789502426]: [4.000784701s] [4.000707402s] About to store object in database
I0131 02:23:36.377220       1 trace.go:76] Trace[970609456]: "Create /apis/apiregistration.k8s.io/v1/apiservices" (started: 2019-01-31 02:23:32.374815368 +0000 UTC m=+8.937226286) (total time: 4.00239089s):
Trace[970609456]: [4.000730002s] [4.000642502s] About to store object in database
I0131 02:23:36.377336       1 trace.go:76] Trace[160959663]: "Create /apis/apiregistration.k8s.io/v1/apiservices" (started: 2019-01-31 02:23:32.37453927 +0000 UTC m=+8.936950088) (total time: 4.002789387s):
Trace[160959663]: [4.000769001s] [4.000742201s] About to store object in database
I0131 02:23:36.378863       1 trace.go:76] Trace[213871301]: "Create /apis/apiregistration.k8s.io/v1/apiservices" (started: 2019-01-31 02:23:32.37599546 +0000 UTC m=+8.938406378) (total time: 4.002854887s):
Trace[213871301]: [4.000736801s] [4.000691201s] About to store object in database
I0131 02:23:36.392568       1 trace.go:76] Trace[2089668678]: "Create /apis/apiregistration.k8s.io/v1/apiservices" (started: 2019-01-31 02:23:32.376860154 +0000 UTC m=+8.939270972) (total time: 4.015596498s):
Trace[2089668678]: [4.00240029s] [4.00237529s] About to store object in database
I0131 02:23:36.673444       1 trace.go:76] Trace[1853767381]: "Create /api/v1/namespaces/default/events" (started: 2019-01-31 02:23:32.668483228 +0000 UTC m=+9.230894046) (total time: 4.004943672s):
Trace[1853767381]: [4.000745501s] [4.000691402s] About to store object in database
I0131 02:23:37.283339       1 trace.go:76] Trace[1529670412]: "Create /apis/scheduling.k8s.io/v1beta1/priorityclasses" (started: 2019-01-31 02:23:33.274448917 +0000 UTC m=+9.836859835) (total time: 4.008875846s):
Trace[1529670412]: [4.000644503s] [4.000606103s] About to store object in database
I0131 02:23:37.283487       1 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0131 02:23:37.283495       1 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0131 02:23:37.286626       1 trace.go:76] Trace[1634766825]: "Create /api/v1/namespaces/kube-system/configmaps" (started: 2019-01-31 02:23:33.277369997 +0000 UTC m=+9.839780815) (total time: 4.009241243s):
Trace[1634766825]: [4.002950687s] [4.002902988s] About to store object in database
I0131 02:23:37.286838       1 trace.go:76] Trace[1896298252]: "Create /apis/rbac.authorization.k8s.io/v1/clusterroles" (started: 2019-01-31 02:23:33.281919865 +0000 UTC m=+9.844330683) (total time: 4.004905973s):
Trace[1896298252]: [4.000714102s] [4.000696402s] About to store object in database
I0131 02:23:37.286967       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0131 02:23:37.293625       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0131 02:23:37.297114       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0131 02:23:37.317360       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0131 02:23:37.319756       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0131 02:23:37.323433       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0131 02:23:37.328339       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0131 02:23:37.335261       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0131 02:23:37.339942       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0131 02:23:37.343393       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0131 02:23:37.350725       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0131 02:23:37.353919       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0131 02:23:37.358925       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0131 02:23:37.362437       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0131 02:23:37.367677       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0131 02:23:37.370664       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0131 02:23:37.375296       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0131 02:23:37.381705       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0131 02:23:37.385081       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0131 02:23:37.389639       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0131 02:23:37.392780       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0131 02:23:37.395822       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0131 02:23:37.402697       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0131 02:23:37.406073       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0131 02:23:37.409047       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0131 02:23:37.412404       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0131 02:23:37.428755       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0131 02:23:37.432120       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0131 02:23:37.438785       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0131 02:23:37.442225       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0131 02:23:37.446879       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0131 02:23:37.451854       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0131 02:23:37.455279       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0131 02:23:37.464688       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0131 02:23:37.472495       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0131 02:23:37.477806       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0131 02:23:37.480960       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0131 02:23:37.483966       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0131 02:23:37.497238       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0131 02:23:37.499592       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0131 02:23:37.502776       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0131 02:23:37.506118       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0131 02:23:37.514977       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0131 02:23:37.520340       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0131 02:23:37.544636       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0131 02:23:37.547495       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0131 02:23:37.551486       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0131 02:23:37.555344       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0131 02:23:37.560024       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0131 02:23:37.564932       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0131 02:23:37.575909       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0131 02:23:37.578874       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0131 02:23:37.582031       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0131 02:23:37.585357       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0131 02:23:37.588073       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0131 02:23:37.593200       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0131 02:23:37.596114       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0131 02:23:37.602791       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0131 02:23:37.608140       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0131 02:23:37.656977       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0131 02:23:37.688659       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0131 02:23:37.728670       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0131 02:23:37.768869       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0131 02:23:37.808822       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0131 02:23:37.848605       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0131 02:23:37.888567       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0131 02:23:37.928640       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0131 02:23:37.968539       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0131 02:23:38.008473       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0131 02:23:38.048878       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0131 02:23:38.088670       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0131 02:23:38.128537       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0131 02:23:38.168540       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0131 02:23:38.208742       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0131 02:23:38.248815       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0131 02:23:38.288393       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0131 02:23:38.329077       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0131 02:23:38.368714       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0131 02:23:38.408511       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0131 02:23:38.448608       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0131 02:23:38.488670       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0131 02:23:38.536012       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0131 02:23:38.568759       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0131 02:23:38.608709       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0131 02:23:38.648894       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0131 02:23:38.689104       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0131 02:23:38.728810       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0131 02:23:38.768648       1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0131 02:23:38.807612       1 controller.go:608] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0131 02:23:38.808886       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0131 02:23:38.848750       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0131 02:23:38.888652       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0131 02:23:38.928817       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0131 02:23:38.968793       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0131 02:23:39.009192       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0131 02:23:39.050079       1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0131 02:23:39.087558       1 controller.go:608] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0131 02:23:39.088737       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0131 02:23:39.128515       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0131 02:23:39.168458       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0131 02:23:39.208766       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0131 02:23:39.248771       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0131 02:23:39.288601       1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0131 02:23:39.303594       1 controller.go:608] quota admission added evaluator for: serviceaccounts
I0131 02:23:40.514965       1 controller.go:608] quota admission added evaluator for: deployments.apps
I0131 02:23:40.543230       1 controller.go:608] quota admission added evaluator for: daemonsets.apps
I0131 02:24:11.329121       1 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0131 02:24:11.329294       1 naming_controller.go:295] Shutting down NamingConditionController
I0131 02:24:11.329306       1 establishing_controller.go:84] Shutting down EstablishingController
I0131 02:24:11.329312       1 crdregistration_controller.go:143] Shutting down crd-autoregister controller
I0131 02:24:11.329319       1 available_controller.go:295] Shutting down AvailableConditionController
I0131 02:24:11.329325       1 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
I0131 02:24:11.329331       1 customresource_discovery_controller.go:214] Shutting down DiscoveryController
I0131 02:24:11.329337       1 autoregister_controller.go:160] Shutting down autoregister controller
I0131 02:24:11.329343       1 crd_finalizer.go:254] Shutting down CRDFinalizer
I0131 02:24:11.329474       1 controller.go:90] Shutting down OpenAPI AggregationController
I0131 02:24:11.329638       1 secure_serving.go:156] Stopped listening on [::]:8443
E0131 02:24:11.330989       1 controller.go:172] Get https://[::1]:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp [::1]:8443: connect: connection refused
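
The apiserver log above ends with an orderly shutdown rather than a crash, which fits the "evicted apiserver" theory in the title: the kubelet terminated the pod, possibly because of disk or memory pressure on the host (the usual eviction triggers). With the none driver the kubelet runs directly on the host, so its journal and the node conditions should say why. A rough check, assuming a systemd-managed kubelet and run while the apiserver is still reachable:

    # Look for eviction / pressure / OOM messages from the kubelet on the host
    sudo journalctl -u kubelet --since "10 minutes ago" | grep -iE 'evict|pressure|oom'

    # While the apiserver still answers, check node conditions and free disk space
    kubectl describe node | grep -A 8 Conditions
    df -h / /var/lib/docker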

Logs from kube-controller-manager:

Flag --address has been deprecated, see --bind-address instead.
I0131 02:26:16.463741       1 serving.go:318] Generated self-signed cert in-memory
I0131 02:26:16.720251       1 controllermanager.go:151] Version: v1.13.2
I0131 02:26:16.720478       1 secure_serving.go:116] Serving securely on [::]:10257
I0131 02:26:16.720714       1 deprecated_insecure_serving.go:51] Serving insecurely on 127.0.0.1:10252
I0131 02:26:16.721036       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...
E0131 02:26:20.051350       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
E0131 02:26:22.381142       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
E0131 02:26:25.247581       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
E0131 02:26:27.262556       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
I0131 02:26:31.345249       1 leaderelection.go:214] successfully acquired lease kube-system/kube-controller-manager
I0131 02:26:31.345357       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"9ef22331-24ff-11e9-ae51-00155d4b0144", APIVersion:"v1", ResourceVersion:"177", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' june-Virtual-Machine_963ad91b-24ff-11e9-a8d0-00155d4b0144 became leader
I0131 02:26:31.355949       1 plugins.go:103] No cloud provider specified.
I0131 02:26:31.356730       1 controller_utils.go:1027] Waiting for caches to sync for tokens controller
I0131 02:26:31.456938       1 controller_utils.go:1034] Caches are synced for tokens controller
I0131 02:26:31.486142       1 controllermanager.go:516] Started "disruption"
I0131 02:26:31.486156       1 disruption.go:288] Starting disruption controller
I0131 02:26:31.486166       1 controller_utils.go:1027] Waiting for caches to sync for disruption controller
I0131 02:26:31.504194       1 controllermanager.go:516] Started "csrsigning"
I0131 02:26:31.504224       1 core.go:151] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0131 02:26:31.504228       1 controllermanager.go:508] Skipping "route"
I0131 02:26:31.504294       1 certificate_controller.go:113] Starting certificate controller
I0131 02:26:31.504300       1 controller_utils.go:1027] Waiting for caches to sync for certificate controller
I0131 02:26:31.527527       1 controllermanager.go:516] Started "deployment"
I0131 02:26:31.527650       1 deployment_controller.go:152] Starting deployment controller
I0131 02:26:31.527655       1 controller_utils.go:1027] Waiting for caches to sync for deployment controller
W0131 02:26:31.557057       1 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
I0131 02:26:31.557370       1 controllermanager.go:516] Started "garbagecollector"
I0131 02:26:31.557506       1 garbagecollector.go:133] Starting garbage collector controller
I0131 02:26:31.557542       1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
I0131 02:26:31.557563       1 graph_builder.go:308] GraphBuilder running
I0131 02:26:31.746474       1 controllermanager.go:516] Started "horizontalpodautoscaling"
I0131 02:26:31.746489       1 horizontal.go:156] Starting HPA controller
I0131 02:26:31.746561       1 controller_utils.go:1027] Waiting for caches to sync for HPA controller
I0131 02:26:31.998848       1 controllermanager.go:516] Started "ttl"
I0131 02:26:31.998904       1 ttl_controller.go:116] Starting TTL controller
I0131 02:26:31.998911       1 controller_utils.go:1027] Waiting for caches to sync for TTL controller
I0131 02:26:32.256836       1 node_lifecycle_controller.go:272] Sending events to api server.
I0131 02:26:32.256903       1 node_lifecycle_controller.go:312] Controller is using taint based evictions.
I0131 02:26:32.256961       1 taint_manager.go:175] Sending events to api server.
I0131 02:26:32.257022       1 node_lifecycle_controller.go:378] Controller will taint node by condition.
I0131 02:26:32.257062       1 controllermanager.go:516] Started "nodelifecycle"
I0131 02:26:32.257089       1 node_lifecycle_controller.go:423] Starting node controller
I0131 02:26:32.257092       1 controller_utils.go:1027] Waiting for caches to sync for taint controller
I0131 02:26:32.647234       1 controllermanager.go:516] Started "attachdetach"
I0131 02:26:32.647299       1 attach_detach_controller.go:315] Starting attach detach controller
I0131 02:26:32.647304       1 controller_utils.go:1027] Waiting for caches to sync for attach detach controller
I0131 02:26:32.898767       1 controllermanager.go:516] Started "persistentvolume-expander"
I0131 02:26:32.898809       1 expand_controller.go:153] Starting expand controller
I0131 02:26:32.898813       1 controller_utils.go:1027] Waiting for caches to sync for expand controller
I0131 02:26:33.149950       1 controllermanager.go:516] Started "replicationcontroller"
I0131 02:26:33.149997       1 replica_set.go:182] Starting replicationcontroller controller
I0131 02:26:33.150002       1 controller_utils.go:1027] Waiting for caches to sync for ReplicationController controller
I0131 02:26:33.401632       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0131 02:26:33.401694       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0131 02:26:33.401708       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0131 02:26:33.401715       1 shared_informer.go:311] resyncPeriod 57572904252917 is smaller than resyncCheckPeriod 74604462299735 and the informer has already started. Changing it to 74604462299735
I0131 02:26:33.401769       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0131 02:26:33.401802       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
I0131 02:26:33.401812       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I0131 02:26:33.401829       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0131 02:26:33.401841       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0131 02:26:33.401852       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0131 02:26:33.401883       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
I0131 02:26:33.401896       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
I0131 02:26:33.401908       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0131 02:26:33.401940       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
W0131 02:26:33.401948       1 shared_informer.go:311] resyncPeriod 56248474349224 is smaller than resyncCheckPeriod 74604462299735 and the informer has already started. Changing it to 74604462299735
I0131 02:26:33.401985       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I0131 02:26:33.402005       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0131 02:26:33.402017       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0131 02:26:33.402028       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0131 02:26:33.402038       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0131 02:26:33.402071       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0131 02:26:33.402084       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0131 02:26:33.402109       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0131 02:26:33.402121       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
E0131 02:26:33.402128       1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0131 02:26:33.402154       1 controllermanager.go:516] Started "resourcequota"
I0131 02:26:33.402171       1 resource_quota_controller.go:276] Starting resource quota controller
I0131 02:26:33.402178       1 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
I0131 02:26:33.402187       1 resource_quota_monitor.go:301] QuotaMonitor running
I0131 02:26:33.546400       1 controllermanager.go:516] Started "csrcleaner"
W0131 02:26:33.546422       1 controllermanager.go:508] Skipping "nodeipam"
I0131 02:26:33.546433       1 cleaner.go:81] Starting CSR cleaner controller
I0131 02:26:33.799388       1 controllermanager.go:516] Started "clusterrole-aggregation"
I0131 02:26:33.799448       1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I0131 02:26:33.799454       1 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller
E0131 02:26:33.896485       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Secret: Get https://localhost:8443/api/v1/secrets?resourceVersion=242&timeout=6m42s&timeoutSeconds=402&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:33.896656       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ServiceAccount: Get https://localhost:8443/api/v1/serviceaccounts?resourceVersion=243&timeout=9m14s&timeoutSeconds=554&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
F0131 02:26:33.995834       1 client_builder.go:258] Post https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts: dial tcp 127.0.0.1:8443: connect: connection refused
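
The controller-manager itself only dies because it can no longer reach the apiserver on localhost:8443, so the apiserver container is the one to inspect. A quick way to see how it exited (the <container-id> placeholder is whatever ID docker ps -a reports on your machine):

    # Find the most recent apiserver container, including exited ones
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'

    # Check the exit code and whether the kernel OOM killer was involved
    docker inspect --format 'exit={{.State.ExitCode}} oom={{.State.OOMKilled}}' <container-id>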

Logs from kube-scheduler:

june@june-Virtual-Machine:~/next$ docker logs -f 0d5600fcd9d8
I0131 02:26:16.499756       1 serving.go:318] Generated self-signed cert in-memory
W0131 02:26:16.904040       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0131 02:26:16.904077       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0131 02:26:16.904086       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0131 02:26:16.905904       1 server.go:150] Version: v1.13.2
I0131 02:26:16.905956       1 defaults.go:210] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0131 02:26:16.906830       1 authorization.go:47] Authorization is disabled
W0131 02:26:16.906857       1 authentication.go:55] Authentication is disabled
I0131 02:26:16.906865       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on 127.0.0.1:10251
I0131 02:26:16.907227       1 secure_serving.go:116] Serving securely on [::]:10259
E0131 02:26:20.052641       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 02:26:20.054740       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 02:26:20.054824       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 02:26:20.054848       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 02:26:20.082448       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 02:26:20.083462       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 02:26:20.083506       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 02:26:20.083762       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 02:26:20.085058       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 02:26:20.086586       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 02:26:21.053698       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 02:26:21.055804       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 02:26:21.056679       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 02:26:21.057856       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 02:26:21.085935       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 02:26:21.087051       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 02:26:21.088211       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 02:26:21.089344       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 02:26:21.090447       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 02:26:21.091551       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 02:26:22.054689       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 02:26:22.056757       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 02:26:22.057812       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 02:26:22.058881       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 02:26:22.086712       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 02:26:22.087789       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 02:26:22.088946       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 02:26:22.090046       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 02:26:22.091177       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 02:26:22.092228       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 02:26:23.055661       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 02:26:23.057483       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 02:26:23.058844       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 02:26:23.060148       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 02:26:23.087498       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 02:26:23.088561       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 02:26:23.089686       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 02:26:23.090803       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 02:26:23.091903       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 02:26:23.092950       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 02:26:24.056691       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 02:26:24.058322       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 02:26:24.059573       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 02:26:24.060830       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 02:26:24.088423       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 02:26:24.089328       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 02:26:24.090413       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 02:26:24.091580       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 02:26:24.092640       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 02:26:24.093704       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 02:26:25.057701       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 02:26:25.059131       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 02:26:25.060338       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 02:26:25.061641       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 02:26:25.089231       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 02:26:25.090273       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 02:26:25.091477       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 02:26:25.092530       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 02:26:25.093732       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 02:26:25.094801       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 02:26:26.058582       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 02:26:26.060042       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 02:26:26.061093       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 02:26:26.062401       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 02:26:26.090037       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 02:26:26.091004       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 02:26:26.092232       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 02:26:26.093306       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 02:26:26.094479       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 02:26:26.095489       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 02:26:27.059532       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 02:26:27.060807       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 02:26:27.061863       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 02:26:27.062991       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 02:26:27.090850       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 02:26:27.091898       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 02:26:27.092960       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 02:26:27.094045       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 02:26:27.095275       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 02:26:27.096254       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 02:26:28.060494       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 02:26:28.061464       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 02:26:28.062515       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 02:26:28.063744       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 02:26:28.091658       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 02:26:28.092631       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 02:26:28.093758       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 02:26:28.094732       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 02:26:28.095973       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 02:26:28.097182       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 02:26:29.061228       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 02:26:29.062349       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 02:26:29.063519       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 02:26:29.064465       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 02:26:29.092528       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 02:26:29.093453       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 02:26:29.094658       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 02:26:29.095870       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 02:26:29.096892       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 02:26:29.097808       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0131 02:26:30.908944       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0131 02:26:31.009161       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0131 02:26:31.009201       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
I0131 02:26:31.012638       1 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
E0131 02:26:33.896941       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=1&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:33.896971       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?resourceVersion=1&timeout=6m49s&timeoutSeconds=409&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:33.896982       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?resourceVersion=1&timeout=8m55s&timeoutSeconds=535&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:33.896998       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?resourceVersion=1&timeout=5m48s&timeoutSeconds=348&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:33.896994       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?resourceVersion=201&timeout=8m37s&timeoutSeconds=517&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:33.896951       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?resourceVersion=1&timeout=8m40s&timeoutSeconds=520&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:33.897001       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?resourceVersion=1&timeout=6m42s&timeoutSeconds=402&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:33.897016       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=1&timeout=5m8s&timeoutSeconds=308&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:33.896948       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?resourceVersion=217&timeout=9m37s&timeoutSeconds=577&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:33.897021       1 reflector.go:251] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=1&timeoutSeconds=317&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:34.897436       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:34.898531       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:34.899707       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:34.900710       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:34.901962       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:34.903073       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:34.904187       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:34.905982       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:34.906409       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:34.907569       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.023677       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.898330       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.899062       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.900458       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.901386       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.902418       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.903522       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.904696       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.906490       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.907582       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:35.908686       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:36.898884       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:36.899974       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:36.901175       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:36.902180       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:36.903311       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:36.904450       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:36.905588       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:36.906889       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:36.908044       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:36.909118       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.024224       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.899316       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.900383       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.901536       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.902567       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.903669       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.904973       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.905962       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.907309       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.908378       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:37.909651       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:38.899852       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:38.900759       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:38.902023       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:38.902943       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:38.903992       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:38.905383       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:38.906393       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:38.907555       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:38.908681       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:38.909979       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.024172       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.900376       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.901399       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.902494       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.903525       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.904725       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.905788       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.906777       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.907905       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.909034       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:39.910340       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:40.900875       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:40.901983       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:40.902944       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:40.903905       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:40.905136       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:40.906168       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:40.907159       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:40.908268       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:40.909448       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:40.910765       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.024176       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.901564       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.902368       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.903341       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.904524       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.905576       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.906609       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.907700       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.908846       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.909829       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:41.911264       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:42.902005       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:42.903166       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:42.904457       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:42.905377       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:42.906487       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:42.907732       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:42.908731       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:42.909904       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:42.910972       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:42.912110       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.024186       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.902487       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.903548       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.904999       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.906027       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.907134       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.908162       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.909354       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.910584       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.911696       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:43.912814       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:44.903016       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:44.903919       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:44.905418       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:44.906472       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:44.907515       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:44.908524       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:44.909799       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:44.910919       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:44.912093       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:44.913218       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0131 02:26:45.023620       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'june-Virtual-Machine_96570dce-24ff-11e9-83a0-00155d4b0144 stopped leading'
I0131 02:26:45.023678       1 leaderelection.go:249] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
E0131 02:26:45.023688       1 server.go:261] lost master
lost lease
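
The scheduler gives up here because the apiserver on 127.0.0.1:8443 stopped answering, so it could not renew its leader lease. A quick way to check whether anything is still listening on that port and whether the apiserver container still exists (a minimal sketch for the none driver, assuming the control-plane containers run on the host's Docker daemon):

# is anything still serving on the apiserver port?
sudo ss -ltnp | grep 8443

# list apiserver containers, including exited ones (the name filter matches by substring)
docker ps -a --filter name=k8s_kube-api --format '{{.ID}} {{.Status}} {{.Names}}'
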
@tstromberg
Contributor

The apiserver almost certainly failed here. Do you mind running:

minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-api --format={{.ID}})'
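
If that container turns out to have exited, docker inspect can also show how it died, which helps tell a plain crash apart from an OOM kill (a minimal sketch, assuming the none driver so the container lives on the host's Docker daemon):

# exit code, OOM-killed flag, and time of death for the apiserver container
docker inspect -f '{{.State.ExitCode}} {{.State.OOMKilled}} {{.State.FinishedAt}}' $(docker ps -a -f name=k8s_kube-api --format={{.ID}})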

@tstromberg tstromberg changed the title from "Cluster will no longer start" to "none: The connection to the server x:8443 was refused" on Feb 12, 2019
@tstromberg tstromberg added the co/none-driver, triage/needs-information, and priority/awaiting-more-evidence labels on Feb 12, 2019
@hach-que
Author

Okay, I think I'm hitting this again. It went away for a while, but now it's back, again with no explanation.

Running minikube ssh gives me:

june@june-Virtual-Machine:~/next$ minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-api --format={{.ID}})'
'none' driver does not support 'minikube ssh' command

Running without minikube ssh gives me:

june@june-Virtual-Machine:~/next$ docker logs $(docker ps -a -f name=k8s_kube-api --format={{.ID}})
"docker logs" requires exactly 1 argument.
See 'docker logs --help'.

Usage:  docker logs [OPTIONS] CONTAINER

Fetch the logs of a container

Running docker ps -a -f name=k8s_kube-api --format={{.ID}} gives me an empty list:

june@june-Virtual-Machine:~/next$ docker ps -a -f name=k8s_kube-api --format={{.ID}}
june@june-Virtual-Machine:~/next$

and the API server shows as stopped in minikube status:

june@june-Virtual-Machine:~/next$ minikube status
host: Running
kubelet: Running
apiserver: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.69.249
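
Since docker ps -a matches nothing, the apiserver container is gone entirely rather than merely stopped (a stopped container would still show up with -a). On a none-driver host, the full container list and the kubelet journal are usually the next places to look, since kubelet logs why it restarted or evicted control-plane pods (a sketch, assuming kubelet runs as a systemd unit, which is how minikube's none driver sets it up):

# any control-plane containers left at all, running or exited?
docker ps -a --filter name=k8s_ --format '{{.ID}} {{.Status}} {{.Names}}'

# recent kubelet activity, including pod restarts and evictions
sudo journalctl -u kubelet --no-pager | tail -n 100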

@hach-que
Author

If I try to start the API server again, I get the following output:

june@june-Virtual-Machine:~/next$ sudo minikube start --vm-driver=none --extra-config=kubelet.resolv-conf=/var/run/systemd/resolve/resolv.conf
Starting local Kubernetes v1.13.2 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Starting cluster components...
Verifying kubelet health ...
Verifying apiserver health ...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
        The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks

When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions.  An example of this is below:

        sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
        sudo chown -R $USER $HOME/.kube
        sudo chgrp -R $USER $HOME/.kube

        sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
        sudo chown -R $USER $HOME/.minikube
        sudo chgrp -R $USER $HOME/.minikube

This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.


Everything looks great. Please enjoy minikube!

and then if I run docker logs -f $(docker ps -a -f name=k8s_kube-api --format={{.ID}}) while this is happening, I get the following logs before everything dies:

june@june-Virtual-Machine:~$ docker logs -f $(docker ps -a -f name=k8s_kube-api --format={{.ID}})
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0218 22:46:56.825438       1 server.go:557] external host was not specified, using 192.168.69.249
I0218 22:46:56.825505       1 server.go:146] Version: v1.13.2
I0218 22:46:57.049230       1 initialization.go:91] enabled Initializers feature as part of admission plugin setup
I0218 22:46:57.049438       1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers.
I0218 22:46:57.049464       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0218 22:46:57.050053       1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers.
I0218 22:46:57.050067       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0218 22:46:58.701365       1 master.go:228] Using reconciler: lease
W0218 22:46:59.318808       1 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0218 22:46:59.404340       1 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0218 22:46:59.411778       1 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0218 22:46:59.422959       1 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0218 22:46:59.474151       1 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2019/02/18 22:46:59 log.go:33: [restful/swagger] listing is available at https://192.168.69.249:8443/swaggerapi
[restful] 2019/02/18 22:46:59 log.go:33: [restful/swagger] https://192.168.69.249:8443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2019/02/18 22:47:00 log.go:33: [restful/swagger] listing is available at https://192.168.69.249:8443/swaggerapi
[restful] 2019/02/18 22:47:00 log.go:33: [restful/swagger] https://192.168.69.249:8443/swaggerui/ is mapped to folder /swagger-ui/
I0218 22:47:00.621589       1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers.
I0218 22:47:00.621625       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0218 22:47:02.797084       1 secure_serving.go:116] Serving securely on [::]:8443
I0218 22:47:02.797236       1 controller.go:84] Starting OpenAPI AggregationController
I0218 22:47:02.797256       1 available_controller.go:283] Starting AvailableConditionController
I0218 22:47:02.797261       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0218 22:47:02.797277       1 autoregister_controller.go:136] Starting autoregister controller
I0218 22:47:02.797281       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0218 22:47:02.797685       1 apiservice_controller.go:90] Starting APIServiceRegistrationController
I0218 22:47:02.797714       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0218 22:47:02.797997       1 crd_finalizer.go:242] Starting CRDFinalizer
I0218 22:47:02.798033       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0218 22:47:02.798042       1 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller
I0218 22:47:02.826250       1 customresource_discovery_controller.go:203] Starting DiscoveryController
I0218 22:47:02.826278       1 naming_controller.go:284] Starting NamingConditionController
I0218 22:47:02.826285       1 establishing_controller.go:73] Starting EstablishingController
I0218 22:47:02.908955       1 cache.go:39] Caches are synced for autoregister controller
I0218 22:47:02.908982       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0218 22:47:02.908963       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0218 22:47:02.909445       1 controller_utils.go:1034] Caches are synced for crd-autoregister controller
I0218 22:47:03.408441       1 controller.go:608] quota admission added evaluator for: namespaces
I0218 22:47:03.800410       1 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0218 22:47:06.852467       1 trace.go:76] Trace[1084459071]: "Create /api/v1/nodes" (started: 2019-02-18 22:47:02.839823837 +0000 UTC m=+6.062414929) (total time: 4.01262433s):
Trace[1084459071]: [4.000828795s] [4.000787195s] About to store object in database
I0218 22:47:08.053833       1 trace.go:76] Trace[1097783628]: "Create /api/v1/namespaces/kube-system/configmaps" (started: 2019-02-18 22:47:04.043819065 +0000 UTC m=+7.266410057) (total time: 4.009996517s):
Trace[1097783628]: [4.001137766s] [4.000688069s] About to store object in database
I0218 22:47:08.057421       1 controller.go:608] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0218 22:47:08.065086       1 controller.go:608] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0218 22:47:09.196430       1 controller.go:608] quota admission added evaluator for: serviceaccounts
I0218 22:47:09.207097       1 controller.go:608] quota admission added evaluator for: deployments.apps
I0218 22:47:09.241704       1 controller.go:608] quota admission added evaluator for: daemonsets.apps
I0218 22:47:10.342592       1 trace.go:76] Trace[2083740770]: "Create /api/v1/namespaces/default/events" (started: 2019-02-18 22:47:06.336766634 +0000 UTC m=+9.559357626) (total time: 4.005808489s):
Trace[2083740770]: [4.000864616s] [4.000808317s] About to store object in database
I0218 22:47:18.349758       1 controller.go:608] quota admission added evaluator for: endpoints
I0218 22:47:23.358131       1 controller.go:608] quota admission added evaluator for: controllerrevisions.apps
I0218 22:47:23.903365       1 controller.go:608] quota admission added evaluator for: replicasets.apps
I0218 22:47:56.523121       1 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0218 22:47:56.523165       1 crd_finalizer.go:254] Shutting down CRDFinalizer
I0218 22:47:56.523168       1 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
I0218 22:47:56.523181       1 autoregister_controller.go:160] Shutting down autoregister controller
I0218 22:47:56.523207       1 available_controller.go:295] Shutting down AvailableConditionController
I0218 22:47:56.523294       1 naming_controller.go:295] Shutting down NamingConditionController
I0218 22:47:56.523303       1 customresource_discovery_controller.go:214] Shutting down DiscoveryController
I0218 22:47:56.523309       1 establishing_controller.go:84] Shutting down EstablishingController
I0218 22:47:56.523316       1 crdregistration_controller.go:143] Shutting down crd-autoregister controller
I0218 22:47:56.523414       1 controller.go:90] Shutting down OpenAPI AggregationController
I0218 22:47:56.523631       1 secure_serving.go:156] Stopped listening on [::]:8443
E0218 22:47:56.524683       1 controller.go:172] Get https://[::1]:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp [::1]:8443: connect: connection refused

The log ends after this, presumably because the Docker container exited.
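
To see how the container actually went away (clean exit, crash, or OOM kill), a quick check is the exit state Docker recorded for it, assuming the exited container hasn't been pruned yet:

    # find the most recent kube-apiserver container, even after it has exited
    APISERVER=$(docker ps -a -f name=k8s_kube-api --format '{{.ID}}' | head -n1)
    # show the exit code, whether it was OOM-killed, and when it finished
    docker inspect -f '{{.State.ExitCode}} {{.State.OOMKilled}} {{.State.FinishedAt}}' "$APISERVER"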

@hach-que (Author)

This line in the output above:

I0218 22:47:56.523121       1 controller.go:170] Shutting down kubernetes service endpoint reconciler

is where everything starts to shut down, but there are no logs immediately before it that indicate why it's shutting down.
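
Since the apiserver log gives no reason for the exit, the kubelet journal around the same timestamp is the next place to look (this assumes kubelet is running under systemd, as it is with the none driver here):

    # look for eviction or kill activity from kubelet around the time the apiserver died
    sudo journalctl -u kubelet --since "30 minutes ago" | grep -iE 'evict|oom|kill'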

If I try to run minikube again after this:

june@june-Virtual-Machine:~/next$ sudo minikube start --vm-driver=none --extra-config=kubelet.resolv-conf=/var/run/systemd/resolve/resolv.conf
Starting local Kubernetes v1.13.2 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Starting cluster components...
E0219 09:48:55.441031   47259 start.go:376] Error starting cluster: kubeadm init:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI


: running command:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI

 output: [init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING FileExisting-ebtables]: ebtables not found in system path
        [WARNING FileExisting-ethtool]: ethtool not found in system path
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
        [WARNING Hostname]: hostname "minikube" could not be reached
        [WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.0.53:53: server misbehaving
        [WARNING Port-10250]: Port 10250 is in use
        [WARNING DirAvailable--data-minikube]: /data/minikube is not empty
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-10251]: Port 10251 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
: running command:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI

.: exit status 1


minikube failed :( exiting with error code 1
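
The fatal preflight error is port 10251, the kube-scheduler's default port, presumably still held by a scheduler left over from the previous run. A way to confirm and clear it before retrying (ss, lsof and pkill are general host tools, not something minikube runs for you):

    # see which process is still listening on the scheduler port
    sudo ss -ltnp 'sport = :10251'
    # or: sudo lsof -iTCP:10251 -sTCP:LISTEN
    # if it is a leftover kube-scheduler, stop it and re-run minikube start
    sudo pkill -f kube-scheduler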

If I wait a bit and try minikube start again, then I get the normal output:

june@june-Virtual-Machine:~/next$ sudo minikube start --vm-driver=none --extra-config=kubelet.resolv-conf=/var/run/systemd/resolve/resolv.conf
Starting local Kubernetes v1.13.2 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Starting cluster components...
Verifying kubelet health ...
Verifying apiserver health ...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
        The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks

When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions.  An example of this is below:

        sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
        sudo chown -R $USER $HOME/.kube
        sudo chgrp -R $USER $HOME/.kube

        sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
        sudo chown -R $USER $HOME/.minikube
        sudo chgrp -R $USER $HOME/.minikube

This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.


Everything looks great. Please enjoy minikube!

Watching the kubelet logs after the cluster has started up, what I'm seeing is:

Feb 19 09:51:37 june-Virtual-Machine kubelet[48264]: I0219 09:51:37.957111   48264 eviction_manager.go:340] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Feb 19 09:51:37 june-Virtual-Machine kubelet[48264]: I0219 09:51:37.957217   48264 eviction_manager.go:358] eviction manager: pods ranked for eviction: kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144), kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a), kube-apiserver-minikube_kube-system(bb894e67e861174e73018877d23cb6b5), etcd-minikube_kube-system(f86606b9d2272ed4e4c8796a376034c2), kube-addon-manager-minikube_kube-system(5c72fb06dcdda608211b70d63c0ca488), kube-scheduler-minikube_kube-system(9729a196c4723b60ab401eaff722982d)
Feb 19 09:51:47 june-Virtual-Machine kubelet[48264]: E0219 09:51:47.957470   48264 eviction_manager.go:561] eviction manager: pod kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144) failed to evict timeout waiting to kill pod
Feb 19 09:51:47 june-Virtual-Machine kubelet[48264]: I0219 09:51:47.957506   48264 eviction_manager.go:187] eviction manager: pods kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144) evicted, waiting for pod to be cleaned up
Feb 19 09:51:59 june-Virtual-Machine kubelet[48264]: E0219 09:51:59.051453   48264 configmap.go:244] Error creating atomic writer: stat /var/lib/kubelet/pods/27fd6b4b-33cf-11e9-ae1d-00155d4b0144/volumes/kubernetes.io~configmap/kube-proxy: no such file or directory
Feb 19 09:51:59 june-Virtual-Machine kubelet[48264]: W0219 09:51:59.051491   48264 empty_dir.go:373] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/27fd6b4b-33cf-11e9-ae1d-00155d4b0144/volumes/kubernetes.io~configmap/kube-proxy
Feb 19 09:51:59 june-Virtual-Machine kubelet[48264]: E0219 09:51:59.051537   48264 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/configmap/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-kube-proxy\" (\"27fd6b4b-33cf-11e9-ae1d-00155d4b0144\")" failed. No retries permitted until 2019-02-19 09:53:03.051521508 +1100 AEDT m=+135.997955120 (durationBeforeRetry 1m4s). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-kube-proxy\") pod \"kube-proxy-kfs8p\" (UID: \"27fd6b4b-33cf-11e9-ae1d-00155d4b0144\") : stat /var/lib/kubelet/pods/27fd6b4b-33cf-11e9-ae1d-00155d4b0144/volumes/kubernetes.io~configmap/kube-proxy: no such file or directory"
Feb 19 09:52:17 june-Virtual-Machine kubelet[48264]: W0219 09:52:17.957743   48264 eviction_manager.go:392] eviction manager: timed out waiting for pods kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144) to be cleaned up
Feb 19 09:52:18 june-Virtual-Machine kubelet[48264]: W0219 09:52:18.069640   48264 eviction_manager.go:329] eviction manager: attempting to reclaim ephemeral-storage
Feb 19 09:52:18 june-Virtual-Machine kubelet[48264]: I0219 09:52:18.069703   48264 container_gc.go:85] attempting to delete unused containers
Feb 19 09:52:18 june-Virtual-Machine kubelet[48264]: I0219 09:52:18.075185   48264 image_gc_manager.go:317] attempting to delete unused images
Feb 19 09:52:18 june-Virtual-Machine kubelet[48264]: I0219 09:52:18.182082   48264 eviction_manager.go:340] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Feb 19 09:52:18 june-Virtual-Machine kubelet[48264]: I0219 09:52:18.182177   48264 eviction_manager.go:358] eviction manager: pods ranked for eviction: kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144), kube-addon-manager-minikube_kube-system(5c72fb06dcdda608211b70d63c0ca488), kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a), kube-apiserver-minikube_kube-system(bb894e67e861174e73018877d23cb6b5), etcd-minikube_kube-system(f86606b9d2272ed4e4c8796a376034c2), kube-scheduler-minikube_kube-system(9729a196c4723b60ab401eaff722982d)
Feb 19 09:52:28 june-Virtual-Machine kubelet[48264]: E0219 09:52:28.182421   48264 eviction_manager.go:561] eviction manager: pod kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144) failed to evict timeout waiting to kill pod
Feb 19 09:52:28 june-Virtual-Machine kubelet[48264]: I0219 09:52:28.182471   48264 eviction_manager.go:187] eviction manager: pods kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144) evicted, waiting for pod to be cleaned up
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: E0219 09:52:58.065917   48264 kubelet.go:1680] Unable to mount volumes for pod "kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"kube-proxy-kfs8p". list of unmounted volumes=[kube-proxy]. list of unattached volumes=[kube-proxy xtables-lock lib-modules kube-proxy-token-5xgsj]; skipping pod
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: E0219 09:52:58.065972   48264 pod_workers.go:190] Error syncing pod 27fd6b4b-33cf-11e9-ae1d-00155d4b0144 ("kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144)"), skipping: timeout expired waiting for volumes to attach or mount for pod "kube-system"/"kube-proxy-kfs8p". list of unmounted volumes=[kube-proxy]. list of unattached volumes=[kube-proxy xtables-lock lib-modules kube-proxy-token-5xgsj]
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: W0219 09:52:58.182694   48264 eviction_manager.go:392] eviction manager: timed out waiting for pods kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144) to be cleaned up
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: W0219 09:52:58.294700   48264 eviction_manager.go:329] eviction manager: attempting to reclaim ephemeral-storage
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.294763   48264 container_gc.go:85] attempting to delete unused containers
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.299970   48264 image_gc_manager.go:317] attempting to delete unused images
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.394855   48264 eviction_manager.go:340] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.394961   48264 eviction_manager.go:358] eviction manager: pods ranked for eviction: kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144), kube-addon-manager-minikube_kube-system(5c72fb06dcdda608211b70d63c0ca488), kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a), kube-apiserver-minikube_kube-system(bb894e67e861174e73018877d23cb6b5), etcd-minikube_kube-system(f86606b9d2272ed4e4c8796a376034c2), kube-scheduler-minikube_kube-system(9729a196c4723b60ab401eaff722982d)
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: W0219 09:52:58.829324   48264 status_manager.go:501] Failed to update status for pod "kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144)": failed to patch status "{}" for pod "kube-system"/"kube-proxy-kfs8p": pods "kube-proxy-kfs8p" not found
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903744   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "27fd6b4b-33cf-11e9-ae1d-00155d4b0144" (UID: "27fd6b4b-33cf-11e9-ae1d-00155d4b0144"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903714   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-lib-modules") pod "27fd6b4b-33cf-11e9-ae1d-00155d4b0144" (UID: "27fd6b4b-33cf-11e9-ae1d-00155d4b0144")
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903819   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "kube-proxy-token-5xgsj" (UniqueName: "kubernetes.io/secret/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-kube-proxy-token-5xgsj") pod "27fd6b4b-33cf-11e9-ae1d-00155d4b0144" (UID: "27fd6b4b-33cf-11e9-ae1d-00155d4b0144")
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903849   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-xtables-lock") pod "27fd6b4b-33cf-11e9-ae1d-00155d4b0144" (UID: "27fd6b4b-33cf-11e9-ae1d-00155d4b0144")
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903876   48264 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/eff0e399-33cf-11e9-ada4-00155d4b0144-xtables-lock") pod "kube-proxy-wm6cj" (UID: "eff0e399-33cf-11e9-ada4-00155d4b0144")
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903913   48264 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-5xgsj" (UniqueName: "kubernetes.io/secret/eff0e399-33cf-11e9-ada4-00155d4b0144-kube-proxy-token-5xgsj") pod "kube-proxy-wm6cj" (UID: "eff0e399-33cf-11e9-ada4-00155d4b0144")
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903906   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "27fd6b4b-33cf-11e9-ae1d-00155d4b0144" (UID: "27fd6b4b-33cf-11e9-ae1d-00155d4b0144"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903928   48264 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/eff0e399-33cf-11e9-ada4-00155d4b0144-lib-modules") pod "kube-proxy-wm6cj" (UID: "eff0e399-33cf-11e9-ada4-00155d4b0144")
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903971   48264 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/eff0e399-33cf-11e9-ada4-00155d4b0144-kube-proxy") pod "kube-proxy-wm6cj" (UID: "eff0e399-33cf-11e9-ada4-00155d4b0144")
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903988   48264 reconciler.go:301] Volume detached for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-kube-proxy") on node "minikube" DevicePath ""
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.903997   48264 reconciler.go:301] Volume detached for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-lib-modules") on node "minikube" DevicePath ""
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.904005   48264 reconciler.go:301] Volume detached for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-xtables-lock") on node "minikube" DevicePath ""
Feb 19 09:52:58 june-Virtual-Machine kubelet[48264]: I0219 09:52:58.924893   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-kube-proxy-token-5xgsj" (OuterVolumeSpecName: "kube-proxy-token-5xgsj") pod "27fd6b4b-33cf-11e9-ae1d-00155d4b0144" (UID: "27fd6b4b-33cf-11e9-ae1d-00155d4b0144"). InnerVolumeSpecName "kube-proxy-token-5xgsj". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 09:52:59 june-Virtual-Machine kubelet[48264]: I0219 09:52:59.004313   48264 reconciler.go:301] Volume detached for volume "kube-proxy-token-5xgsj" (UniqueName: "kubernetes.io/secret/27fd6b4b-33cf-11e9-ae1d-00155d4b0144-kube-proxy-token-5xgsj") on node "minikube" DevicePath ""
Feb 19 09:52:59 june-Virtual-Machine kubelet[48264]: W0219 09:52:59.217556   48264 kubelet_getters.go:284] Path "/var/lib/kubelet/pods/27fd6b4b-33cf-11e9-ae1d-00155d4b0144/volumes" does not exist
Feb 19 09:52:59 june-Virtual-Machine kubelet[48264]: I0219 09:52:59.803088   48264 eviction_manager.go:563] eviction manager: pod kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144) is evicted successfully
Feb 19 09:52:59 june-Virtual-Machine kubelet[48264]: I0219 09:52:59.803120   48264 eviction_manager.go:187] eviction manager: pods kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144) evicted, waiting for pod to be cleaned up
Feb 19 09:53:00 june-Virtual-Machine kubelet[48264]: W0219 09:53:00.803304   48264 kubelet_getters.go:284] Path "/var/lib/kubelet/pods/27fd6b4b-33cf-11e9-ae1d-00155d4b0144/volumes" does not exist
Feb 19 09:53:00 june-Virtual-Machine kubelet[48264]: I0219 09:53:00.803364   48264 eviction_manager.go:400] eviction manager: pods kube-proxy-kfs8p_kube-system(27fd6b4b-33cf-11e9-ae1d-00155d4b0144) successfully cleaned up
Feb 19 09:53:00 june-Virtual-Machine kubelet[48264]: W0219 09:53:00.894391   48264 eviction_manager.go:329] eviction manager: attempting to reclaim ephemeral-storage
Feb 19 09:53:00 june-Virtual-Machine kubelet[48264]: I0219 09:53:00.894435   48264 container_gc.go:85] attempting to delete unused containers
Feb 19 09:53:00 june-Virtual-Machine kubelet[48264]: I0219 09:53:00.901800   48264 image_gc_manager.go:317] attempting to delete unused images
Feb 19 09:53:00 june-Virtual-Machine kubelet[48264]: I0219 09:53:00.997793   48264 eviction_manager.go:340] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Feb 19 09:53:00 june-Virtual-Machine kubelet[48264]: I0219 09:53:00.997890   48264 eviction_manager.go:358] eviction manager: pods ranked for eviction: kube-addon-manager-minikube_kube-system(5c72fb06dcdda608211b70d63c0ca488), kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a), kube-apiserver-minikube_kube-system(bb894e67e861174e73018877d23cb6b5), etcd-minikube_kube-system(f86606b9d2272ed4e4c8796a376034c2), kube-scheduler-minikube_kube-system(9729a196c4723b60ab401eaff722982d), kube-proxy-wm6cj_kube-system(eff0e399-33cf-11e9-ada4-00155d4b0144)
Feb 19 09:53:01 june-Virtual-Machine kubelet[48264]: E0219 09:53:01.216028   48264 kuberuntime_container.go:71] Can't make a ref to pod "kube-addon-manager-minikube_kube-system(5c72fb06dcdda608211b70d63c0ca488)", container kube-addon-manager: selfLink was empty, can't make reference
Feb 19 09:53:01 june-Virtual-Machine kubelet[48264]: I0219 09:53:01.383485   48264 eviction_manager.go:563] eviction manager: pod kube-addon-manager-minikube_kube-system(5c72fb06dcdda608211b70d63c0ca488) is evicted successfully
Feb 19 09:53:01 june-Virtual-Machine kubelet[48264]: I0219 09:53:01.383518   48264 eviction_manager.go:187] eviction manager: pods kube-addon-manager-minikube_kube-system(5c72fb06dcdda608211b70d63c0ca488) evicted, waiting for pod to be cleaned up
Feb 19 09:53:01 june-Virtual-Machine kubelet[48264]: W0219 09:53:01.814554   48264 pod_container_deletor.go:75] Container "fda9bc0d73181676da42fcf5b555516fabc006e26d12a1b8e0f0e4ace08b7bdd" not found in pod's containers
Feb 19 09:53:03 june-Virtual-Machine kubelet[48264]: I0219 09:53:03.112499   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/5c72fb06dcdda608211b70d63c0ca488-kubeconfig") pod "5c72fb06dcdda608211b70d63c0ca488" (UID: "5c72fb06dcdda608211b70d63c0ca488")
Feb 19 09:53:03 june-Virtual-Machine kubelet[48264]: I0219 09:53:03.112545   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/5c72fb06dcdda608211b70d63c0ca488-addons") pod "5c72fb06dcdda608211b70d63c0ca488" (UID: "5c72fb06dcdda608211b70d63c0ca488")
Feb 19 09:53:03 june-Virtual-Machine kubelet[48264]: I0219 09:53:03.112609   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c72fb06dcdda608211b70d63c0ca488-addons" (OuterVolumeSpecName: "addons") pod "5c72fb06dcdda608211b70d63c0ca488" (UID: "5c72fb06dcdda608211b70d63c0ca488"). InnerVolumeSpecName "addons". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:03 june-Virtual-Machine kubelet[48264]: I0219 09:53:03.112639   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c72fb06dcdda608211b70d63c0ca488-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "5c72fb06dcdda608211b70d63c0ca488" (UID: "5c72fb06dcdda608211b70d63c0ca488"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:03 june-Virtual-Machine kubelet[48264]: I0219 09:53:03.212826   48264 reconciler.go:301] Volume detached for volume "addons" (UniqueName: "kubernetes.io/host-path/5c72fb06dcdda608211b70d63c0ca488-addons") on node "minikube" DevicePath ""
Feb 19 09:53:03 june-Virtual-Machine kubelet[48264]: I0219 09:53:03.212873   48264 reconciler.go:301] Volume detached for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/5c72fb06dcdda608211b70d63c0ca488-kubeconfig") on node "minikube" DevicePath ""

Then a little bit later, the API server starts to shut down, coinciding with these logs in kubelet:

Feb 19 09:53:31 june-Virtual-Machine kubelet[48264]: W0219 09:53:31.385067   48264 eviction_manager.go:392] eviction manager: timed out waiting for pods kube-addon-manager-minikube_kube-system(5c72fb06dcdda608211b70d63c0ca488) to be cleaned up
Feb 19 09:53:31 june-Virtual-Machine kubelet[48264]: W0219 09:53:31.485985   48264 eviction_manager.go:329] eviction manager: attempting to reclaim ephemeral-storage
Feb 19 09:53:31 june-Virtual-Machine kubelet[48264]: I0219 09:53:31.486047   48264 container_gc.go:85] attempting to delete unused containers
Feb 19 09:53:31 june-Virtual-Machine kubelet[48264]: I0219 09:53:31.526017   48264 image_gc_manager.go:317] attempting to delete unused images
Feb 19 09:53:31 june-Virtual-Machine kubelet[48264]: I0219 09:53:31.530670   48264 image_gc_manager.go:371] [imageGCManager]: Removing image "sha256:9c16409588eb19394b90703bdb5bcfb7c08fe75308a5db30b95ca8f6bd6bdc85" to free 78384272 bytes
Feb 19 09:53:31 june-Virtual-Machine kubelet[48264]: I0219 09:53:31.678614   48264 eviction_manager.go:340] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Feb 19 09:53:31 june-Virtual-Machine kubelet[48264]: I0219 09:53:31.678693   48264 eviction_manager.go:358] eviction manager: pods ranked for eviction: kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a), kube-apiserver-minikube_kube-system(bb894e67e861174e73018877d23cb6b5), etcd-minikube_kube-system(f86606b9d2272ed4e4c8796a376034c2), kube-scheduler-minikube_kube-system(9729a196c4723b60ab401eaff722982d), kube-proxy-wm6cj_kube-system(eff0e399-33cf-11e9-ada4-00155d4b0144)
Feb 19 09:53:32 june-Virtual-Machine kubelet[48264]: I0219 09:53:32.012270   48264 eviction_manager.go:563] eviction manager: pod kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a) is evicted successfully
Feb 19 09:53:32 june-Virtual-Machine kubelet[48264]: I0219 09:53:32.012305   48264 eviction_manager.go:187] eviction manager: pods kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a) evicted, waiting for pod to be cleaned up
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: W0219 09:53:33.021736   48264 pod_container_deletor.go:75] Container "3258e7c4143d4d75478c9b12c31a956086f072947b16a679107c0811c66196f7" not found in pod's containers
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: E0219 09:53:33.214109   48264 kuberuntime_container.go:71] Can't make a ref to pod "kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a)", container kube-controller-manager: selfLink was empty, can't make reference
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.874874   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-etc-pki") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a")
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.874913   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-usr-local-share-ca-certificates") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a")
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.874928   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-ca-certs") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a")
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.874984   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-flexvolume-dir" (OuterVolumeSpecName: "flexvolume-dir") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a"). InnerVolumeSpecName "flexvolume-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.874995   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-flexvolume-dir") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a")
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875009   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875012   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-etc-pki" (OuterVolumeSpecName: "etc-pki") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a"). InnerVolumeSpecName "etc-pki". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875038   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-etc-ca-certificates") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a")
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875039   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-usr-local-share-ca-certificates" (OuterVolumeSpecName: "usr-local-share-ca-certificates") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a"). InnerVolumeSpecName "usr-local-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875054   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-k8s-certs") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a")
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875058   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-etc-ca-certificates" (OuterVolumeSpecName: "etc-ca-certificates") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a"). InnerVolumeSpecName "etc-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875069   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-usr-share-ca-certificates") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a")
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875074   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-k8s-certs" (OuterVolumeSpecName: "k8s-certs") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a"). InnerVolumeSpecName "k8s-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875082   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-kubeconfig") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a")
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875099   48264 reconciler.go:301] Volume detached for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-etc-pki") on node "minikube" DevicePath ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875105   48264 reconciler.go:301] Volume detached for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-k8s-certs") on node "minikube" DevicePath ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875112   48264 reconciler.go:301] Volume detached for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-usr-local-share-ca-certificates") on node "minikube" DevicePath ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875117   48264 reconciler.go:301] Volume detached for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-ca-certs") on node "minikube" DevicePath ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875115   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-usr-share-ca-certificates" (OuterVolumeSpecName: "usr-share-ca-certificates") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a"). InnerVolumeSpecName "usr-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875124   48264 reconciler.go:301] Volume detached for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-flexvolume-dir") on node "minikube" DevicePath ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875130   48264 reconciler.go:301] Volume detached for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-etc-ca-certificates") on node "minikube" DevicePath ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.875134   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "fc2c9369f315dd926a74d8623dbe3f3a" (UID: "fc2c9369f315dd926a74d8623dbe3f3a"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.975333   48264 reconciler.go:301] Volume detached for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-kubeconfig") on node "minikube" DevicePath ""
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: I0219 09:53:33.975387   48264 reconciler.go:301] Volume detached for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/fc2c9369f315dd926a74d8623dbe3f3a-usr-share-ca-certificates") on node "minikube" DevicePath ""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: I0219 09:53:49.012568   48264 eviction_manager.go:400] eviction manager: pods kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a) successfully cleaned up
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: W0219 09:53:49.125690   48264 eviction_manager.go:329] eviction manager: attempting to reclaim ephemeral-storage
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: I0219 09:53:49.125753   48264 container_gc.go:85] attempting to delete unused containers
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: I0219 09:53:49.129327   48264 image_gc_manager.go:317] attempting to delete unused images
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: I0219 09:53:49.134025   48264 image_gc_manager.go:371] [imageGCManager]: Removing image "sha256:b9027a78d94c15a4aba54d45476c6f295c0db8f9dcb6fca34c8beff67d90a374" to free 146227986 bytes
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: I0219 09:53:49.245791   48264 eviction_manager.go:340] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: I0219 09:53:49.245857   48264 eviction_manager.go:358] eviction manager: pods ranked for eviction: kube-apiserver-minikube_kube-system(bb894e67e861174e73018877d23cb6b5), etcd-minikube_kube-system(f86606b9d2272ed4e4c8796a376034c2), kube-scheduler-minikube_kube-system(9729a196c4723b60ab401eaff722982d), kube-proxy-wm6cj_kube-system(eff0e399-33cf-11e9-ada4-00155d4b0144)
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.258581   48264 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.258794   48264 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.258911   48264 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.259028   48264 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: W0219 09:53:49.259164   48264 status_manager.go:501] Failed to update status for pod "kube-apiserver-minikube_kube-system(cd008142-33cf-11e9-ada4-00155d4b0144)": failed to patch status "{\"status\":{\"conditions\":null,\"containerStatuses\":null,\"hostIP\":null,\"message\":\"The node was low on resource: ephemeral-storage. Container kube-apiserver was using 116Ki, which exceeds its request of 0. \",\"phase\":\"Failed\",\"podIP\":null,\"qosClass\":null,\"reason\":\"Evicted\"}}" for pod "kube-system"/"kube-apiserver-minikube": Patch https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube/status: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.259183   48264 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.259302   48264 reflector.go:251] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&resourceVersion=743&timeoutSeconds=357&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.259325   48264 reflector.go:251] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?resourceVersion=453&timeoutSeconds=489&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.259343   48264 reflector.go:251] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=533&timeout=6m51s&timeoutSeconds=411&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.259363   48264 reflector.go:251] object-"kube-system"/"kube-proxy-token-5xgsj": Failed to watch *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-5xgsj&resourceVersion=523&timeout=5m38s&timeoutSeconds=338&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.259383   48264 reflector.go:251] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&resourceVersion=753&timeoutSeconds=435&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.390435   48264 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/kube-system/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.418945   48264 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?resourceVersion=0&timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.419208   48264 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.419369   48264 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.419495   48264 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.419595   48264 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.419621   48264 kubelet_node_status.go:367] Unable to update node status: update node status exceeds retry count
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: I0219 09:53:49.601532   48264 eviction_manager.go:563] eviction manager: pod kube-apiserver-minikube_kube-system(bb894e67e861174e73018877d23cb6b5) is evicted successfully
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: I0219 09:53:49.601566   48264 eviction_manager.go:187] eviction manager: pods kube-apiserver-minikube_kube-system(bb894e67e861174e73018877d23cb6b5) evicted, waiting for pod to be cleaned up
Feb 19 09:53:50 june-Virtual-Machine kubelet[48264]: W0219 09:53:50.078091   48264 pod_container_deletor.go:75] Container "e15435962cb44d5dad5b8cc76e881e235e78715cc3514bfb03b7a42e39a23e22" not found in pod's containers
Feb 19 09:53:50 june-Virtual-Machine kubelet[48264]: E0219 09:53:50.259792   48264 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:50 june-Virtual-Machine kubelet[48264]: E0219 09:53:50.261067   48264 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:50 june-Virtual-Machine kubelet[48264]: E0219 09:53:50.262156   48264 reflector.go:134] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:50 june-Virtual-Machine kubelet[48264]: E0219 09:53:50.263235   48264 reflector.go:134] object-"kube-system"/"kube-proxy-token-5xgsj": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-5xgsj&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:50 june-Virtual-Machine kubelet[48264]: E0219 09:53:50.264387   48264 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: E0219 09:53:51.260431   48264 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: E0219 09:53:51.261653   48264 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: E0219 09:53:51.262689   48264 reflector.go:134] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: E0219 09:53:51.263695   48264 reflector.go:134] object-"kube-system"/"kube-proxy-token-5xgsj": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-5xgsj&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: E0219 09:53:51.264934   48264 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.411890   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-etc-pki") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5")
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.411927   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-ca-certs") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5")
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.411945   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-usr-share-ca-certificates") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5")
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.411982   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-usr-share-ca-certificates" (OuterVolumeSpecName: "usr-share-ca-certificates") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5"). InnerVolumeSpecName "usr-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412005   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412011   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-etc-pki" (OuterVolumeSpecName: "etc-pki") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5"). InnerVolumeSpecName "etc-pki". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412020   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-k8s-certs") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5")
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412070   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-usr-local-share-ca-certificates") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5")
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412086   48264 reconciler.go:181] operationExecutor.UnmountVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-etc-ca-certificates") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5")
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412092   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-usr-local-share-ca-certificates" (OuterVolumeSpecName: "usr-local-share-ca-certificates") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5"). InnerVolumeSpecName "usr-local-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412096   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-k8s-certs" (OuterVolumeSpecName: "k8s-certs") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5"). InnerVolumeSpecName "k8s-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412102   48264 reconciler.go:301] Volume detached for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-ca-certs") on node "minikube" DevicePath ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412116   48264 reconciler.go:301] Volume detached for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-usr-share-ca-certificates") on node "minikube" DevicePath ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412122   48264 reconciler.go:301] Volume detached for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-etc-pki") on node "minikube" DevicePath ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.412117   48264 operation_generator.go:687] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-etc-ca-certificates" (OuterVolumeSpecName: "etc-ca-certificates") pod "bb894e67e861174e73018877d23cb6b5" (UID: "bb894e67e861174e73018877d23cb6b5"). InnerVolumeSpecName "etc-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.512317   48264 reconciler.go:301] Volume detached for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-k8s-certs") on node "minikube" DevicePath ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.512372   48264 reconciler.go:301] Volume detached for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-usr-local-share-ca-certificates") on node "minikube" DevicePath ""
Feb 19 09:53:51 june-Virtual-Machine kubelet[48264]: I0219 09:53:51.512380   48264 reconciler.go:301] Volume detached for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/bb894e67e861174e73018877d23cb6b5-etc-ca-certificates") on node "minikube" DevicePath ""

@hach-que
Author

Okay, I think I've figured out the issue: the API server is being evicted because the kubelet thinks the node is low on ephemeral storage. The full trace of events looks like this:

Feb 19 09:53:31 june-Virtual-Machine kubelet[48264]: I0219 09:53:31.678614   48264 eviction_manager.go:340] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Feb 19 09:53:32 june-Virtual-Machine kubelet[48264]: I0219 09:53:32.012270   48264 eviction_manager.go:563] eviction manager: pod kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a) is evicted successfully
Feb 19 09:53:33 june-Virtual-Machine kubelet[48264]: E0219 09:53:33.214109   48264 kuberuntime_container.go:71] Can't make a ref to pod "kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a)", container kube-controller-manager: selfLink was empty, can't make reference

then it tears down a bunch of volumes and other things ...

eviction_manager.go:400] eviction manager: pods kube-controller-manager-minikube_kube-system(fc2c9369f315dd926a74d8623dbe3f3a) successfully cleaned up

it then tries to clean up more pods immediately after ....

Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: W0219 09:53:49.125690   48264 eviction_manager.go:329] eviction manager: attempting to reclaim ephemeral-storage
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: I0219 09:53:49.245857   48264 eviction_manager.go:358] eviction manager: pods ranked for eviction: kube-apiserver-minikube_kube-system(bb894e67e861174e73018877d23cb6b5), etcd-minikube_kube-system(f86606b9d2272ed4e4c8796a376034c2), kube-scheduler-minikube_kube-system(9729a196c4723b60ab401eaff722982d), kube-proxy-wm6cj_kube-system(eff0e399-33cf-11e9-ada4-00155d4b0144)

we get a bunch of messages like this ...

Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.258581   48264 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.258794   48264 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.258911   48264 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: E0219 09:53:49.259028   48264 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""
Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: W0219 09:53:49.259164   48264

and then finally, here's the kicker ...

Feb 19 09:53:49 june-Virtual-Machine kubelet[48264]: W0219 09:53:49.259164   48264 status_manager.go:501] Failed to update status for pod "kube-apiserver-minikube_kube-system(cd008142-33cf-11e9-ada4-00155d4b0144)": failed to patch status "{\"status\":{\"conditions\":null,\"containerStatuses\":null,\"hostIP\":null,\"message\":\"The node was low on resource: ephemeral-storage. Container kube-apiserver was using 116Ki, which exceeds its request of 0. \",\"phase\":\"Failed\",\"podIP\":null,\"qosClass\":null,\"reason\":\"Evicted\"}}" for pod "kube-system"/"kube-apiserver-minikube": Patch https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube/status: http2: server sent GOAWAY and closed the connection; LastStreamID=303, ErrCode=NO_ERROR, debug=""

and later ...

eviction_manager.go:563] eviction manager: pod kube-apiserver-minikube_kube-system(bb894e67e861174e73018877d23cb6b5) is evicted successfully

From here on out, everything is broken because the API server pod got evicted.
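For anyone hitting the same symptom: since the API server itself is unreachable at this point, one way to confirm it was evicted is to read the kubelet's own journal and the container runtime directly on the host. This is only a sketch, assuming the none driver's kubelet runs as a systemd unit named kubelet (as the log prefixes above suggest):

# Look for eviction-manager activity in the kubelet journal
sudo journalctl -u kubelet --no-pager | grep -i "eviction manager"

# The runtime still shows the dead control-plane containers
sudo docker ps -a | grep kube-apiserver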

@hach-que
Author

But it should be noted that this machine does have plenty of space available:

june@june-Virtual-Machine:~/next$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            912M     0  912M   0% /dev
tmpfs           189M  1.3M  188M   1% /run
/dev/sda1       117G  108G  8.3G  93% /
tmpfs           942M     0  942M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           942M     0  942M   0% /sys/fs/cgroup
/dev/loop0       91M   91M     0 100% /snap/core/6350
/dev/loop1      141M  141M     0 100% /snap/gnome-3-26-1604/70
/dev/loop2       13M   13M     0 100% /snap/gnome-characters/139
/dev/loop3       15M   15M     0 100% /snap/gnome-logs/37
/dev/loop6      3.8M  3.8M     0 100% /snap/gnome-system-monitor/51
/dev/loop5       15M   15M     0 100% /snap/gnome-logs/45
/dev/loop7      2.3M  2.3M     0 100% /snap/gnome-calculator/260
/dev/loop8       13M   13M     0 100% /snap/gnome-characters/103
/dev/loop9       35M   35M     0 100% /snap/gtk-common-themes/818
/dev/loop11      92M   92M     0 100% /snap/core/6259
/dev/loop13     2.3M  2.3M     0 100% /snap/gnome-calculator/238
/dev/loop12     3.8M  3.8M     0 100% /snap/gnome-system-monitor/57
/dev/loop14      35M   35M     0 100% /snap/gtk-common-themes/808
/dev/loop15     141M  141M     0 100% /snap/gnome-3-26-1604/74
/dev/loop16     2.4M  2.4M     0 100% /snap/gnome-calculator/180
/dev/sda15      105M  3.4M  102M   4% /boot/efi
tmpfs           189M   48K  189M   1% /run/user/1000
/dev/loop17      35M   35M     0 100% /snap/gtk-common-themes/1122
/dev/loop18     141M  141M     0 100% /snap/gnome-3-26-1604/78
/dev/loop4       91M   91M     0 100% /snap/core/6405

so at first glance there's no obvious reason for ephemeral storage to be causing this eviction.

@hach-que
Author

This is what kubectl describe nodes has to say about the machine:

june@june-Virtual-Machine:~/next$ kubectl describe nodes
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=minikube
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 19 Feb 2019 09:41:53 +1100
Taints:             node.kubernetes.io/disk-pressure:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 19 Feb 2019 10:08:54 +1100   Tue, 19 Feb 2019 09:41:45 +1100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     True    Tue, 19 Feb 2019 10:08:54 +1100   Tue, 19 Feb 2019 09:42:03 +1100   KubeletHasDiskPressure       kubelet has disk pressure
  PIDPressure      False   Tue, 19 Feb 2019 10:08:54 +1100   Tue, 19 Feb 2019 09:41:45 +1100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 19 Feb 2019 10:08:54 +1100   Tue, 19 Feb 2019 09:41:45 +1100   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.69.249
  Hostname:    minikube
Capacity:
 cpu:                4
 ephemeral-storage:  121746496Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             15988700Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  112201570528
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             15886300Ki
 pods:               110
System Info:
 Machine ID:                 4adc0f28c3354b5cabf3360b8d9351ad
 System UUID:                CE41C452-6A53-4081-906A-D9B0FAB8AB95
 Boot ID:                    ae63809d-1481-4f91-a301-987d4e846e3a
 Kernel Version:             4.15.0-45-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.1
 Kubelet Version:            v1.13.2
 Kube-Proxy Version:         v1.13.2
Non-terminated Pods:         (6 in total)
  Namespace                  Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
  kube-system                kube-addon-manager-minikube         5m (0%)       0 (0%)      50Mi (0%)        0 (0%)         18m
  kube-system                kube-apiserver-minikube             250m (6%)     0 (0%)      0 (0%)           0 (0%)         17m
  kube-system                kube-controller-manager-minikube    200m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
  kube-system                kube-proxy-wm6cj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
  kube-system                kube-scheduler-minikube             100m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                555m (13%)  0 (0%)
  memory             50Mi (0%)   0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                  From                  Message
  ----    ------                   ----                 ----                  -------
  Normal  NodeHasSufficientMemory  27m (x8 over 27m)    kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)    kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     27m (x7 over 27m)    kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 22m                  kubelet, minikube     Starting kubelet.
  Normal  NodeHasSufficientMemory  22m (x8 over 22m)    kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)    kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     22m (x7 over 22m)    kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  22m                  kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  Starting                 21m                  kube-proxy, minikube  Starting kube-proxy.
  Normal  NodeAllocatableEnforced  18m                  kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  Starting                 18m                  kubelet, minikube     Starting kubelet.
  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)    kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     18m (x7 over 18m)    kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  18m (x8 over 18m)    kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  Starting                 15m                  kube-proxy, minikube  Starting kube-proxy.
  Normal  Starting                 111s                 kubelet, minikube     Starting kubelet.
  Normal  NodeAllocatableEnforced  111s                 kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  110s (x8 over 111s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    110s (x8 over 111s)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     110s (x7 over 111s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID

@hach-que
Author

So based on this, it sounds like the kubelet's default hard-eviction threshold is 10% free disk (nodefs.available<10%).

This default makes sense for production Kubernetes clusters running dedicated nodes, but it is excessive for development machines with large hard disks (10% of a 500GB SSD is 50GB).
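For reference, the numbers above line up with that threshold, and the threshold can be overridden per kubelet. This is only a rough sketch, not an official recommendation; the eviction-hard values below are illustrative and assume minikube's --extra-config passthrough to the kubelet:

# Default kubelet hard eviction: nodefs.available < 10%.
# Root filesystem here: 117G total, 8.3G available => roughly 7% free,
# which is below 10%, so DiskPressure fires even with 8.3G to spare.

# Possible workaround: lower the hard-eviction threshold for this kubelet.
sudo minikube start --vm-driver=none \
  --extra-config=kubelet.eviction-hard="nodefs.available<1Gi,nodefs.inodesFree<1%"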

So the things that are surprising here and should be addressed:

  • That Kubernetes can evict the essential API server pod without any other replica running and with nowhere to place a new one. This critical pod should always have a mandatory minimum of one replica, and eviction should not be able to kill an API server pod unless another one is already running successfully elsewhere.
  • The default eviction thresholds Minikube inherits from the kubelet are far more aggressive than expected for a development environment.
  • This issue was excessively difficult to track down. It seems reasonably practical for Minikube to check whether the disk-space requirement is met before the cluster is launched, and to error out if the machine in its current state would trigger the DiskPressure taint (a rough sketch of such a pre-flight check follows this list).
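As an interim measure, a pre-flight check along these lines could be run by hand before minikube start. This is a sketch only, with the kubelet's default 10% threshold hard-coded as an assumption:

#!/bin/sh
# Hypothetical pre-flight check: warn if the root filesystem is within the
# kubelet's default nodefs.available<10% hard-eviction threshold.
used_pct=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$used_pct" -ge 90 ]; then
  echo "WARNING: / is ${used_pct}% full; the kubelet will report DiskPressure and may evict control-plane pods." >&2
  exit 1
fi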

@afbjorklund
Collaborator

The disk eviction was addressed in #3671 - please try the new minikube version.
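For anyone else landing here, a typical way to pick up a newer release on Linux (assuming the standard latest-release URL) is:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo install minikube /usr/local/bin/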

@tstromberg tstromberg added the triage/obsolete Bugs that no longer occur in the latest stable release label Mar 8, 2019
@tstromberg tstromberg changed the title none: The connection to the server x:8443 was refused none: The connection to the server x:8443 was refused due to evicted apiserver Mar 8, 2019