diff --git a/v1.20/openstack-magnum/PRODUCT.yaml b/v1.20/openstack-magnum/PRODUCT.yaml
new file mode 100644
index 0000000000..5977bc1af9
--- /dev/null
+++ b/v1.20/openstack-magnum/PRODUCT.yaml
@@ -0,0 +1,9 @@
+vendor: OpenStack Foundation
+name: Magnum
+version: 11.0.0
+website_url: https://docs.openstack.org/magnum/latest/
+repo_url: https://opendev.org/openstack/magnum/
+documentation_url: https://docs.openstack.org/magnum/latest/user/#kubernetes
+type: installer
+description: Magnum is an OpenStack API service developed by the OpenStack Containers Team that makes container orchestration engines such as Kubernetes available as first-class resources in OpenStack.
+product_logo_url: https://www.openstack.org/themes/openstack/images/project-mascots/Magnum/OpenStack_Project_Magnum_vertical.eps
diff --git a/v1.20/openstack-magnum/README.md b/v1.20/openstack-magnum/README.md
new file mode 100644
index 0000000000..9dd3280313
--- /dev/null
+++ b/v1.20/openstack-magnum/README.md
@@ -0,0 +1,55 @@
+# OpenStack Magnum (Kubernetes) Conformance
+
+## Create Kubernetes Cluster
+
+Set up an OpenStack environment with devstack on the master branch. Then create your personal keypair with the following command:
+
+    openstack keypair create my-key --public-key ~/.ssh/id_rsa.pub
+
+Now you can create a Kubernetes cluster template:
+
+    openstack coe cluster template create k8s --network-driver calico --flavor ds4G --master-flavor ds4G --coe kubernetes --external-network public --image fedora-coreos-33.20210117.3.2-openstack.x86_64
+
+Then create a Kubernetes cluster with the command below:
+
+    openstack coe cluster create k8s-calico-coreos --cluster-template k8s --labels=kube_tag=v1.20.2-rancher1 --node-count=2
+
+After the cluster is created, run the following command to obtain the configuration/certificate files:
+
+    eval $(openstack coe cluster config k8s-calico-coreos)
+
+## Conformance Test
+
+Install Sonobuoy:
+
+    VERSION=0.20.0 && \
+    curl -L "https://github.com/vmware-tanzu/sonobuoy/releases/download/v${VERSION}/sonobuoy_${VERSION}_linux_amd64.tar.gz" --output sonobuoy.tar.gz && \
+    mkdir -p tmp && \
+    tar -xzf sonobuoy.tar.gz -C tmp/ && \
+    chmod +x tmp/sonobuoy && \
+    sudo mv tmp/sonobuoy /usr/local/bin/sonobuoy && \
+    rm -rf sonobuoy.tar.gz tmp
+
+Now run the test:
+
+    sonobuoy run --mode=certified-conformance
+
+Check status:
+
+    sonobuoy status
+
+Follow logs:
+
+    sonobuoy logs -f | grep PASSED
+
+Retrieve results:
+
+    outfile=$(sonobuoy retrieve); mkdir ./results; tar xzf $outfile -C ./results; cp results/plugins/e2e/results/global/* ./
+
+Clean up:
+
+    sonobuoy delete --all; rm -rf results/ *.tar.gz
+
+## Testing
+
+Once the configuration has been created, you can follow the conformance suite [instructions](https://github.com/cncf/k8s-conformance/blob/master/instructions.md#running) to run the conformance test.
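The README above leaves two things to the operator: waiting for Magnum to finish building the cluster before fetching its kubeconfig, and eyeballing the retrieved results. The snippet below is a minimal sketch of how both steps could be scripted; the cluster name `k8s-calico-coreos` and the `e2e.log` file copied by the "Retrieve results" step come from the commands above, while the polling interval and the summary logic are illustrative assumptions rather than part of the submission.

    #!/usr/bin/env bash
    # Sketch only: wait for the Magnum cluster, then summarize the conformance run.
    # Assumes the cluster name and the e2e.log copied by the README steps above.
    set -euo pipefail

    CLUSTER=k8s-calico-coreos

    # Poll Magnum until Heat reports the cluster as fully created.
    until [ "$(openstack coe cluster show "$CLUSTER" -f value -c status)" = "CREATE_COMPLETE" ]; do
        echo "Waiting for cluster ${CLUSTER} to reach CREATE_COMPLETE..."
        sleep 60
    done

    # Load the kubeconfig for the new cluster into this shell.
    eval "$(openstack coe cluster config "$CLUSTER")"

    # After the "Retrieve results" step, each spec outcome appears in e2e.log
    # as a JSON line; count them for a quick pass/fail summary.
    passed=$(grep -c '"msg":"PASSED' e2e.log || true)
    failed=$(grep -c '"msg":"FAILED' e2e.log || true)
    echo "Conformance specs passed: ${passed}, failed: ${failed}"

This is only a convenience check; the tarball retrieved by Sonobuoy remains the authoritative record of the run.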
diff --git a/v1.20/openstack-magnum/e2e.log b/v1.20/openstack-magnum/e2e.log new file mode 100644 index 0000000000..87b64dd65d --- /dev/null +++ b/v1.20/openstack-magnum/e2e.log @@ -0,0 +1,13991 @@ +I0212 09:27:31.317909 22 test_context.go:436] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-585161109 +I0212 09:27:31.317958 22 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready +I0212 09:27:31.318261 22 e2e.go:129] Starting e2e run "0d107ceb-b101-441e-983d-10a9dcbe166d" on Ginkgo node 1 +{"msg":"Test Suite starting","total":311,"completed":0,"skipped":0,"failed":0} +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1613122049 - Will randomize all specs +Will run 311 of 5667 specs + +Feb 12 09:27:31.342: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 09:27:31.353: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Feb 12 09:27:31.425: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready +Feb 12 09:27:31.483: INFO: 17 / 17 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Feb 12 09:27:31.483: INFO: expected 7 pod replicas in namespace 'kube-system', 7 are Running and Ready. +Feb 12 09:27:31.483: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Feb 12 09:27:31.504: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Feb 12 09:27:31.505: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-cinder-nodeplugin' (0 seconds elapsed) +Feb 12 09:27:31.505: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'k8s-keystone-auth' (0 seconds elapsed) +Feb 12 09:27:31.505: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'npd' (0 seconds elapsed) +Feb 12 09:27:31.505: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'openstack-cloud-controller-manager' (0 seconds elapsed) +Feb 12 09:27:31.505: INFO: e2e test version: v1.20.2 +Feb 12 09:27:31.510: INFO: kube-apiserver version: v1.20.2 +Feb 12 09:27:31.510: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 09:27:31.521: INFO: Cluster IP family: ipv4 +SSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:27:31.521: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename dns +Feb 12 09:27:31.562: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled +Feb 12 09:27:31.575: INFO: PSP annotation exists on dry run pod: "e2e-test-privileged-psp"; assuming PodSecurityPolicy is enabled +Feb 12 09:27:31.590: INFO: Found ClusterRoles; assuming RBAC is enabled. 
+STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-6828 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6828.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6828.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6828.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6828.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6828.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6828.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6828.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 10.17.254.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.254.17.10_udp@PTR;check="$$(dig +tcp +noall +answer +search 10.17.254.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.254.17.10_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6828.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6828.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6828.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6828.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6828.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6828.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6828.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 10.17.254.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.254.17.10_udp@PTR;check="$$(dig +tcp +noall +answer +search 10.17.254.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.254.17.10_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Feb 12 09:27:35.815: INFO: Unable to read wheezy_udp@dns-test-service.dns-6828.svc.cluster.local from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.821: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6828.svc.cluster.local from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.825: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.830: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.841: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.845: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.856: INFO: Unable to read jessie_udp@dns-test-service.dns-6828.svc.cluster.local from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.860: INFO: Unable to read jessie_tcp@dns-test-service.dns-6828.svc.cluster.local from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.863: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.875: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.887: INFO: Unable to read jessie_udp@PodARecord from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.891: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe: the server could not find the requested resource (get pods dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe) +Feb 12 09:27:35.924: INFO: Lookups using dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe failed for: 
[wheezy_udp@dns-test-service.dns-6828.svc.cluster.local wheezy_tcp@dns-test-service.dns-6828.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@dns-test-service.dns-6828.svc.cluster.local jessie_tcp@dns-test-service.dns-6828.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6828.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] + +Feb 12 09:27:40.993: INFO: DNS probes using dns-6828/dns-test-3a9321d3-b887-4e42-8a29-8704f9dc6cfe succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:27:41.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-6828" for this suite. + +• [SLOW TEST:9.564 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":311,"completed":1,"skipped":7,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:27:41.090: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-8547 +STEP: Waiting for a default service account to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 09:27:41.248: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:27:42.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-8547" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":311,"completed":2,"skipped":67,"failed":0} + +------------------------------ +[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:27:42.596: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename taint-multiple-pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-multiple-pods-7776 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 +Feb 12 09:27:42.784: INFO: Waiting up to 1m0s for all nodes to be ready +Feb 12 09:28:42.850: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 09:28:42.858: INFO: Starting informer... +STEP: Starting pods... +Feb 12 09:28:43.099: INFO: Pod1 is running on k8s-calico-coreos-yo5lpoxhpdlk-node-1. Tainting Node +Feb 12 09:28:45.337: INFO: Pod2 is running on k8s-calico-coreos-yo5lpoxhpdlk-node-1. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Feb 12 09:28:53.714: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Feb 12 09:29:23.711: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:29:23.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-7776" for this suite. + +• [SLOW TEST:101.159 seconds] +[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":311,"completed":3,"skipped":67,"failed":0} +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:29:23.755: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4805 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Ensuring resource quota status captures service creation +STEP: Deleting a Service +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:29:35.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4805" for this suite. + +• [SLOW TEST:11.275 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":311,"completed":4,"skipped":67,"failed":0} +SS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:29:35.031: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3457 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating service in namespace services-3457 +STEP: creating service affinity-nodeport in namespace services-3457 +STEP: creating replication controller affinity-nodeport in namespace services-3457 +I0212 09:29:35.220235 22 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-3457, replica count: 3 +I0212 09:29:38.276500 22 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 09:29:38.364: INFO: Creating new exec pod +Feb 12 09:29:43.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-3457 exec execpod-affinitycbrhg -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' +Feb 12 09:29:44.108: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Feb 12 09:29:44.108: INFO: stdout: "" +Feb 12 09:29:44.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-3457 exec execpod-affinitycbrhg -- /bin/sh -x -c nc -zv -t -w 2 10.254.185.91 80' +Feb 12 09:29:44.530: INFO: stderr: "+ nc -zv -t -w 2 10.254.185.91 80\nConnection to 10.254.185.91 80 port [tcp/http] succeeded!\n" +Feb 12 09:29:44.530: INFO: stdout: "" +Feb 12 09:29:44.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-3457 exec execpod-affinitycbrhg -- /bin/sh -x -c nc -zv -t -w 2 10.0.0.115 32696' +Feb 12 09:29:44.982: INFO: stderr: "+ nc -zv -t -w 2 10.0.0.115 32696\nConnection to 10.0.0.115 32696 port [tcp/32696] succeeded!\n" +Feb 12 09:29:44.982: INFO: stdout: "" +Feb 12 09:29:44.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-3457 exec execpod-affinitycbrhg -- /bin/sh -x -c nc -zv -t -w 2 10.0.0.234 32696' +Feb 12 09:29:45.404: INFO: stderr: "+ nc -zv -t -w 2 10.0.0.234 32696\nConnection to 10.0.0.234 32696 port [tcp/32696] succeeded!\n" +Feb 12 09:29:45.404: INFO: stdout: "" +Feb 12 09:29:45.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-3457 exec execpod-affinitycbrhg -- /bin/sh -x -c nc -zv -t -w 2 172.24.4.246 32696' +Feb 12 09:29:45.813: INFO: stderr: "+ nc -zv -t -w 2 172.24.4.246 32696\nConnection to 
172.24.4.246 32696 port [tcp/32696] succeeded!\n" +Feb 12 09:29:45.813: INFO: stdout: "" +Feb 12 09:29:45.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-3457 exec execpod-affinitycbrhg -- /bin/sh -x -c nc -zv -t -w 2 172.24.4.157 32696' +Feb 12 09:29:46.235: INFO: stderr: "+ nc -zv -t -w 2 172.24.4.157 32696\nConnection to 172.24.4.157 32696 port [tcp/32696] succeeded!\n" +Feb 12 09:29:46.235: INFO: stdout: "" +Feb 12 09:29:46.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-3457 exec execpod-affinitycbrhg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.0.0.115:32696/ ; done' +Feb 12 09:29:46.810: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32696/\n" +Feb 12 09:29:46.810: INFO: stdout: "\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2\naffinity-nodeport-j5zr2" +Feb 12 09:29:46.810: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.810: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.810: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Received 
response from host: affinity-nodeport-j5zr2 +Feb 12 09:29:46.811: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-3457, will wait for the garbage collector to delete the pods +Feb 12 09:29:46.897: INFO: Deleting ReplicationController affinity-nodeport took: 9.590664ms +Feb 12 09:29:47.897: INFO: Terminating ReplicationController affinity-nodeport pods took: 1.000334945s +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:30:02.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3457" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:27.618 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":311,"completed":5,"skipped":69,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:30:02.664: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-994 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating projection with secret that has name projected-secret-test-map-7185f526-dac5-46f1-8d10-11305c2091fc +STEP: Creating a pod to test consume secrets +Feb 12 09:30:02.869: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e8ce7647-20ce-44c1-be5a-7142a314e077" in namespace "projected-994" to be "Succeeded or Failed" +Feb 12 09:30:02.876: INFO: Pod "pod-projected-secrets-e8ce7647-20ce-44c1-be5a-7142a314e077": Phase="Pending", Reason="", readiness=false. Elapsed: 7.066651ms +Feb 12 09:30:04.882: INFO: Pod "pod-projected-secrets-e8ce7647-20ce-44c1-be5a-7142a314e077": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012669061s +Feb 12 09:30:06.895: INFO: Pod "pod-projected-secrets-e8ce7647-20ce-44c1-be5a-7142a314e077": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02641204s +STEP: Saw pod success +Feb 12 09:30:06.895: INFO: Pod "pod-projected-secrets-e8ce7647-20ce-44c1-be5a-7142a314e077" satisfied condition "Succeeded or Failed" +Feb 12 09:30:06.900: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-projected-secrets-e8ce7647-20ce-44c1-be5a-7142a314e077 container projected-secret-volume-test: +STEP: delete the pod +Feb 12 09:30:06.997: INFO: Waiting for pod pod-projected-secrets-e8ce7647-20ce-44c1-be5a-7142a314e077 to disappear +Feb 12 09:30:07.003: INFO: Pod pod-projected-secrets-e8ce7647-20ce-44c1-be5a-7142a314e077 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:30:07.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-994" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":6,"skipped":111,"failed":0} +SS +------------------------------ +[k8s.io] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:30:07.023: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2952 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test env composition +Feb 12 09:30:07.271: INFO: Waiting up to 5m0s for pod "var-expansion-04da2790-29ff-4099-bc34-0e27e53aef4b" in namespace "var-expansion-2952" to be "Succeeded or Failed" +Feb 12 09:30:07.277: INFO: Pod "var-expansion-04da2790-29ff-4099-bc34-0e27e53aef4b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.924456ms +Feb 12 09:30:09.284: INFO: Pod "var-expansion-04da2790-29ff-4099-bc34-0e27e53aef4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012757185s +Feb 12 09:30:11.298: INFO: Pod "var-expansion-04da2790-29ff-4099-bc34-0e27e53aef4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026295593s +STEP: Saw pod success +Feb 12 09:30:11.298: INFO: Pod "var-expansion-04da2790-29ff-4099-bc34-0e27e53aef4b" satisfied condition "Succeeded or Failed" +Feb 12 09:30:11.302: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod var-expansion-04da2790-29ff-4099-bc34-0e27e53aef4b container dapi-container: +STEP: delete the pod +Feb 12 09:30:11.332: INFO: Waiting for pod var-expansion-04da2790-29ff-4099-bc34-0e27e53aef4b to disappear +Feb 12 09:30:11.342: INFO: Pod var-expansion-04da2790-29ff-4099-bc34-0e27e53aef4b no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:30:11.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-2952" for this suite. +•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":311,"completed":7,"skipped":113,"failed":0} +S +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:30:11.353: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-4107 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 09:30:11.544: INFO: Create a RollingUpdate DaemonSet +Feb 12 09:30:11.551: INFO: Check that daemon pods launch on every node of the cluster +Feb 12 09:30:11.557: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:11.561: INFO: Number of nodes with available pods: 0 +Feb 12 09:30:11.561: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 09:30:12.571: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:12.575: INFO: Number of nodes with available pods: 0 +Feb 12 09:30:12.575: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 09:30:13.576: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:13.581: INFO: Number of nodes with available pods: 0 +Feb 12 09:30:13.581: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 09:30:14.597: INFO: DaemonSet pods can't tolerate node 
k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:14.613: INFO: Number of nodes with available pods: 2 +Feb 12 09:30:14.613: INFO: Number of running nodes: 2, number of available pods: 2 +Feb 12 09:30:14.614: INFO: Update the DaemonSet to trigger a rollout +Feb 12 09:30:14.656: INFO: Updating DaemonSet daemon-set +Feb 12 09:30:22.687: INFO: Roll back the DaemonSet before rollout is complete +Feb 12 09:30:22.699: INFO: Updating DaemonSet daemon-set +Feb 12 09:30:22.700: INFO: Make sure DaemonSet rollback is complete +Feb 12 09:30:22.706: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:22.706: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:22.712: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:23.720: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:23.720: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:23.730: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:24.727: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:24.727: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:24.738: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:25.721: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:25.721: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:25.726: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:26.720: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:26.721: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:26.725: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:27.720: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:27.720: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:27.726: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:28.718: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. 
+Feb 12 09:30:28.718: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:28.725: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:29.724: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:29.724: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:29.729: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:30.723: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:30.723: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:30.729: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:31.724: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:31.724: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:31.731: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:32.726: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:32.726: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:32.733: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:33.725: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:33.725: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:33.731: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:34.728: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:34.728: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:34.733: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:35.726: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:35.726: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:35.733: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:36.725: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. 
+Feb 12 09:30:36.725: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:36.730: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:37.719: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:37.719: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:37.725: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:38.753: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:38.753: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:38.758: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:39.723: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:39.724: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:39.729: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:40.739: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:40.739: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:40.769: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:41.722: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:41.723: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:41.729: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:42.728: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:42.728: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:42.734: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:43.727: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:43.727: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:43.734: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:44.721: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. 
+Feb 12 09:30:44.721: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:44.728: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:45.724: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:45.724: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:45.731: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:46.726: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:46.726: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:46.733: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:47.723: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:47.723: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:47.728: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:48.727: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:48.727: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:48.733: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:49.723: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:49.723: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:49.729: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:50.724: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:50.724: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:50.730: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:51.722: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:51.722: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:51.728: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:52.726: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. 
+Feb 12 09:30:52.727: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:52.734: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:53.723: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:53.723: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:53.728: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:54.725: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:54.725: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:54.731: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:55.724: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:55.724: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:55.731: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:56.789: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:56.789: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:56.817: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:57.722: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:57.722: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:57.728: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:58.728: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:58.729: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:58.735: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:30:59.723: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:30:59.724: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:30:59.730: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:31:00.723: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. 
+Feb 12 09:31:00.723: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:31:00.729: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:31:01.722: INFO: Wrong image for pod: daemon-set-2m4x7. Expected: 10.60.253.37/magnum/httpd:2.4.38-alpine, got: foo:non-existent. +Feb 12 09:31:01.722: INFO: Pod daemon-set-2m4x7 is not available +Feb 12 09:31:01.728: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:31:02.724: INFO: Pod daemon-set-6lbmr is not available +Feb 12 09:31:02.730: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4107, will wait for the garbage collector to delete the pods +Feb 12 09:31:02.807: INFO: Deleting DaemonSet.extensions daemon-set took: 12.379355ms +Feb 12 09:31:03.808: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000557255s +Feb 12 09:31:33.817: INFO: Number of nodes with available pods: 0 +Feb 12 09:31:33.817: INFO: Number of running nodes: 0, number of available pods: 0 +Feb 12 09:31:33.828: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"564031"},"items":null} + +Feb 12 09:31:33.832: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"564031"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:31:33.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4107" for this suite. 
+ +• [SLOW TEST:82.509 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":311,"completed":8,"skipped":114,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:31:33.865: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-5971 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test override all +Feb 12 09:31:34.043: INFO: Waiting up to 5m0s for pod "client-containers-3e1d6258-4ea0-4eae-9b8c-787f01da2ae0" in namespace "containers-5971" to be "Succeeded or Failed" +Feb 12 09:31:34.048: INFO: Pod "client-containers-3e1d6258-4ea0-4eae-9b8c-787f01da2ae0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.449738ms +Feb 12 09:31:36.060: INFO: Pod "client-containers-3e1d6258-4ea0-4eae-9b8c-787f01da2ae0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017252614s +STEP: Saw pod success +Feb 12 09:31:36.060: INFO: Pod "client-containers-3e1d6258-4ea0-4eae-9b8c-787f01da2ae0" satisfied condition "Succeeded or Failed" +Feb 12 09:31:36.067: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod client-containers-3e1d6258-4ea0-4eae-9b8c-787f01da2ae0 container agnhost-container: +STEP: delete the pod +Feb 12 09:31:36.106: INFO: Waiting for pod client-containers-3e1d6258-4ea0-4eae-9b8c-787f01da2ae0 to disappear +Feb 12 09:31:36.112: INFO: Pod client-containers-3e1d6258-4ea0-4eae-9b8c-787f01da2ae0 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:31:36.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-5971" for this suite. 
+•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":311,"completed":9,"skipped":131,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:31:36.130: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3103 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name configmap-test-upd-3c76550f-aada-45a2-8423-578ad78f8f7a +STEP: Creating the pod +STEP: Updating configmap configmap-test-upd-3c76550f-aada-45a2-8423-578ad78f8f7a +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:31:40.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3103" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":10,"skipped":144,"failed":0} +SSSSSSSSSSSS +------------------------------ +[k8s.io] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:31:40.410: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-4774 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 09:31:40.583: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-387dd6d1-dfe2-4e17-9de5-acc3dcfc242d" in namespace "security-context-test-4774" to be "Succeeded or Failed" +Feb 12 09:31:40.597: INFO: Pod "busybox-privileged-false-387dd6d1-dfe2-4e17-9de5-acc3dcfc242d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.750308ms +Feb 12 09:31:42.611: INFO: Pod "busybox-privileged-false-387dd6d1-dfe2-4e17-9de5-acc3dcfc242d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027114182s +Feb 12 09:31:42.611: INFO: Pod "busybox-privileged-false-387dd6d1-dfe2-4e17-9de5-acc3dcfc242d" satisfied condition "Succeeded or Failed" +Feb 12 09:31:42.626: INFO: Got logs for pod "busybox-privileged-false-387dd6d1-dfe2-4e17-9de5-acc3dcfc242d": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:31:42.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-4774" for this suite. +•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":11,"skipped":156,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:31:42.641: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-8329 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Performing setup for networking test in namespace pod-network-test-8329 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Feb 12 09:31:42.797: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Feb 12 09:31:42.852: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Feb 12 09:31:44.883: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Feb 12 09:31:46.868: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 09:31:48.862: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 09:31:50.868: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 09:31:52.869: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 09:31:54.866: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 09:31:56.869: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 09:31:58.861: INFO: The status of Pod netserver-0 is Running (Ready = true) +Feb 12 09:31:58.870: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Feb 12 09:32:00.897: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Feb 12 09:32:00.897: INFO: Breadth first check of 10.100.92.241 on host 10.0.0.115... 
+Feb 12 09:32:00.899: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.100.45.30:9080/dial?request=hostname&protocol=udp&host=10.100.92.241&port=8081&tries=1'] Namespace:pod-network-test-8329 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 09:32:00.900: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 09:32:01.151: INFO: Waiting for responses: map[] +Feb 12 09:32:01.151: INFO: reached 10.100.92.241 after 0/1 tries +Feb 12 09:32:01.151: INFO: Breadth first check of 10.100.45.49 on host 10.0.0.234... +Feb 12 09:32:01.158: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.100.45.30:9080/dial?request=hostname&protocol=udp&host=10.100.45.49&port=8081&tries=1'] Namespace:pod-network-test-8329 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 09:32:01.158: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 09:32:01.401: INFO: Waiting for responses: map[] +Feb 12 09:32:01.401: INFO: reached 10.100.45.49 after 0/1 tries +Feb 12 09:32:01.401: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:32:01.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-8329" for this suite. + +• [SLOW TEST:18.779 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":311,"completed":12,"skipped":201,"failed":0} +SSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:32:01.421: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6789 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name projected-configmap-test-volume-5b5d8a62-0eae-4464-8ffe-2fa325a00b8a +STEP: Creating a pod to test consume configMaps +Feb 12 09:32:01.618: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-711ca435-1cf1-4cca-a459-f0f0b857d28b" in namespace "projected-6789" 
to be "Succeeded or Failed" +Feb 12 09:32:01.637: INFO: Pod "pod-projected-configmaps-711ca435-1cf1-4cca-a459-f0f0b857d28b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.003434ms +Feb 12 09:32:03.655: INFO: Pod "pod-projected-configmaps-711ca435-1cf1-4cca-a459-f0f0b857d28b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035953278s +Feb 12 09:32:05.671: INFO: Pod "pod-projected-configmaps-711ca435-1cf1-4cca-a459-f0f0b857d28b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052675886s +STEP: Saw pod success +Feb 12 09:32:05.671: INFO: Pod "pod-projected-configmaps-711ca435-1cf1-4cca-a459-f0f0b857d28b" satisfied condition "Succeeded or Failed" +Feb 12 09:32:05.675: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-projected-configmaps-711ca435-1cf1-4cca-a459-f0f0b857d28b container agnhost-container: +STEP: delete the pod +Feb 12 09:32:05.705: INFO: Waiting for pod pod-projected-configmaps-711ca435-1cf1-4cca-a459-f0f0b857d28b to disappear +Feb 12 09:32:05.710: INFO: Pod pod-projected-configmaps-711ca435-1cf1-4cca-a459-f0f0b857d28b no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:32:05.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6789" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":311,"completed":13,"skipped":204,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:32:05.723: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8551 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir volume type on tmpfs +Feb 12 09:32:05.954: INFO: Waiting up to 5m0s for pod "pod-5344af12-9429-447e-9ceb-95338c3b966d" in namespace "emptydir-8551" to be "Succeeded or Failed" +Feb 12 09:32:05.959: INFO: Pod "pod-5344af12-9429-447e-9ceb-95338c3b966d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272748ms +Feb 12 09:32:07.970: INFO: Pod "pod-5344af12-9429-447e-9ceb-95338c3b966d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015572906s +Feb 12 09:32:09.979: INFO: Pod "pod-5344af12-9429-447e-9ceb-95338c3b966d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02496148s +STEP: Saw pod success +Feb 12 09:32:09.980: INFO: Pod "pod-5344af12-9429-447e-9ceb-95338c3b966d" satisfied condition "Succeeded or Failed" +Feb 12 09:32:09.984: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-5344af12-9429-447e-9ceb-95338c3b966d container test-container: +STEP: delete the pod +Feb 12 09:32:10.018: INFO: Waiting for pod pod-5344af12-9429-447e-9ceb-95338c3b966d to disappear +Feb 12 09:32:10.022: INFO: Pod pod-5344af12-9429-447e-9ceb-95338c3b966d no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:32:10.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8551" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":14,"skipped":222,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Events + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:32:10.038: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-3176 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Create set of events +Feb 12 09:32:10.257: INFO: created test-event-1 +Feb 12 09:32:10.260: INFO: created test-event-2 +Feb 12 09:32:10.265: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace +STEP: delete collection of events +Feb 12 09:32:10.269: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +Feb 12 09:32:10.281: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-api-machinery] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:32:10.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-3176" for this suite. +•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":311,"completed":15,"skipped":255,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:32:10.298: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename limitrange +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in limitrange-1436 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a LimitRange +STEP: Setting up watch +STEP: Submitting a LimitRange +Feb 12 09:32:10.460: INFO: observed the limitRanges list +STEP: Verifying LimitRange creation was observed +STEP: Fetching the LimitRange to ensure it has proper values +Feb 12 09:32:10.472: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Feb 12 09:32:10.472: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements +STEP: Ensuring Pod has resource requirements applied from LimitRange +Feb 12 09:32:10.489: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Feb 12 09:32:10.489: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements +STEP: Ensuring Pod has merged resource requirements applied from LimitRange +Feb 12 09:32:10.502: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Feb 12 09:32:10.502: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources +STEP: Failing to create a Pod with more than max resources +STEP: Updating a LimitRange +STEP: Verifying LimitRange updating is effective +STEP: Creating a 
Pod with less than former min resources +STEP: Failing to create a Pod with more than max resources +STEP: Deleting a LimitRange +STEP: Verifying the LimitRange was deleted +Feb 12 09:32:17.575: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources +[AfterEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:32:17.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-1436" for this suite. + +• [SLOW TEST:7.303 seconds] +[sig-scheduling] LimitRange +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":311,"completed":16,"skipped":278,"failed":0} +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:32:17.602: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-8482 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Given a Pod with a 'name' label pod-adoption is created +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:32:20.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-8482" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":311,"completed":17,"skipped":278,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:32:20.834: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4126 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:32:28.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4126" for this suite. + +• [SLOW TEST:7.205 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":311,"completed":18,"skipped":293,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:32:28.039: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3248 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[BeforeEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554 +[It] should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: running the image 10.60.253.37/magnum/httpd:2.4.38-alpine +Feb 12 09:32:28.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-3248 run e2e-test-httpd-pod --image=10.60.253.37/magnum/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' +Feb 12 09:32:28.419: INFO: stderr: "" +Feb 12 09:32:28.419: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running +STEP: verifying the pod e2e-test-httpd-pod was created +Feb 12 09:32:33.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-3248 get pod e2e-test-httpd-pod -o json' +Feb 12 09:32:33.653: INFO: stderr: "" +Feb 12 09:32:33.653: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/podIP\": \"10.100.92.246/32\",\n \"cni.projectcalico.org/podIPs\": \"10.100.92.246/32\",\n \"kubernetes.io/psp\": \"e2e-test-privileged-psp\"\n },\n \"creationTimestamp\": \"2021-02-12T09:32:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2021-02-12T09:32:28Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \"f:cni.projectcalico.org/podIP\": {},\n \"f:cni.projectcalico.org/podIPs\": {}\n }\n }\n },\n \"manager\": \"calico\",\n \"operation\": \"Update\",\n \"time\": \"2021-02-12T09:32:29Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": 
{\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.100.92.246\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-02-12T09:32:30Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3248\",\n \"resourceVersion\": \"564564\",\n \"uid\": \"755964a8-84ea-4cc3-9220-a2f759563abf\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"10.60.253.37/magnum/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-b4q6s\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-calico-coreos-yo5lpoxhpdlk-node-0\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-b4q6s\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-b4q6s\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-12T09:32:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-12T09:32:30Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-12T09:32:30Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-12T09:32:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://caff52c9bc2e6ecd38dfe386e7b7c61af5fb6b2d1e408905b944c0b90991854f\",\n \"image\": \"10.60.253.37/magnum/httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://10.60.253.37/magnum/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-02-12T09:32:29Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.0.0.115\",\n \"phase\": \"Running\",\n 
\"podIP\": \"10.100.92.246\",\n \"podIPs\": [\n {\n \"ip\": \"10.100.92.246\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-02-12T09:32:28Z\"\n }\n}\n" +STEP: replace the image in the pod +Feb 12 09:32:33.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-3248 replace -f -' +Feb 12 09:32:34.214: INFO: stderr: "" +Feb 12 09:32:34.214: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image 10.60.253.37/magnum/busybox:1.29 +[AfterEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 +Feb 12 09:32:34.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-3248 delete pods e2e-test-httpd-pod' +Feb 12 09:32:42.544: INFO: stderr: "" +Feb 12 09:32:42.544: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:32:42.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3248" for this suite. + +• [SLOW TEST:14.523 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1551 + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":311,"completed":19,"skipped":296,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:32:42.563: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-695 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 09:32:43.377: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 09:32:45.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748719163, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748719163, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748719163, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748719163, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 09:32:48.415: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a validating webhook configuration +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Updating a validating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Patching a validating webhook configuration's rules to include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:32:48.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-695" for this suite. +STEP: Destroying namespace "webhook-695-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:6.080 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":311,"completed":20,"skipped":311,"failed":0} +SSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:32:48.646: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-4284 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod busybox-ac9a035c-2776-4702-b854-d7004a0099cc in namespace container-probe-4284 +Feb 12 09:32:50.840: INFO: Started pod busybox-ac9a035c-2776-4702-b854-d7004a0099cc in namespace container-probe-4284 +STEP: checking the pod's current state and verifying that restartCount is present +Feb 12 09:32:50.844: INFO: Initial restart count of pod busybox-ac9a035c-2776-4702-b854-d7004a0099cc is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:36:52.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-4284" for this suite. 
+ +• [SLOW TEST:243.806 seconds] +[k8s.io] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":311,"completed":21,"skipped":322,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:36:52.456: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-9574 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:37:52.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9574" for this suite. 
+ +• [SLOW TEST:60.198 seconds] +[k8s.io] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":311,"completed":22,"skipped":353,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:37:52.656: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3279 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3279.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3279.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Feb 12 09:37:56.895: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3279/dns-test-8017bce4-b6ba-4524-81d3-78457c702a8d: the server could not find the requested resource (get pods dns-test-8017bce4-b6ba-4524-81d3-78457c702a8d) +Feb 12 09:37:56.899: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3279/dns-test-8017bce4-b6ba-4524-81d3-78457c702a8d: the server could not find the requested resource (get pods dns-test-8017bce4-b6ba-4524-81d3-78457c702a8d) +Feb 12 09:37:56.910: INFO: Unable to read jessie_udp@PodARecord from pod dns-3279/dns-test-8017bce4-b6ba-4524-81d3-78457c702a8d: the server could not find the requested resource (get pods dns-test-8017bce4-b6ba-4524-81d3-78457c702a8d) +Feb 12 09:37:56.914: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3279/dns-test-8017bce4-b6ba-4524-81d3-78457c702a8d: the server could not find the requested resource (get pods dns-test-8017bce4-b6ba-4524-81d3-78457c702a8d) +Feb 12 09:37:56.914: INFO: Lookups using dns-3279/dns-test-8017bce4-b6ba-4524-81d3-78457c702a8d failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord] + +Feb 12 09:38:01.956: INFO: DNS probes using dns-3279/dns-test-8017bce4-b6ba-4524-81d3-78457c702a8d succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:38:01.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3279" for this suite. 
+ +• [SLOW TEST:9.348 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":311,"completed":23,"skipped":369,"failed":0} +SSS +------------------------------ +[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:38:02.005: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4493 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods Set QOS Class + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [k8s.io] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:38:02.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4493" for this suite. 
+•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":311,"completed":24,"skipped":372,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:38:02.205: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-4712 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Create set of pod templates +Feb 12 09:38:02.429: INFO: created test-podtemplate-1 +Feb 12 09:38:02.434: INFO: created test-podtemplate-2 +Feb 12 09:38:02.437: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace +STEP: delete collection of pod templates +Feb 12 09:38:02.442: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity +Feb 12 09:38:02.460: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:38:02.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-4712" for this suite. 
+•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":311,"completed":25,"skipped":380,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:38:02.476: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1466 +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating Pod +STEP: Reading file content from the nginx-container +Feb 12 09:38:06.670: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1466 PodName:pod-sharedvolume-ad18fb1c-28b3-442a-940e-41dccdb896bd ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 09:38:06.670: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 09:38:06.944: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:38:06.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1466" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":311,"completed":26,"skipped":399,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:38:06.962: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7675 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward api env vars +Feb 12 09:38:07.134: INFO: Waiting up to 5m0s for pod "downward-api-a73ed7f0-7987-4cc4-a28d-a5e25f2f95de" in namespace "downward-api-7675" to be "Succeeded or Failed" +Feb 12 09:38:07.144: INFO: Pod "downward-api-a73ed7f0-7987-4cc4-a28d-a5e25f2f95de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.037547ms +Feb 12 09:38:09.154: INFO: Pod "downward-api-a73ed7f0-7987-4cc4-a28d-a5e25f2f95de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020000136s +Feb 12 09:38:11.166: INFO: Pod "downward-api-a73ed7f0-7987-4cc4-a28d-a5e25f2f95de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031699873s +STEP: Saw pod success +Feb 12 09:38:11.167: INFO: Pod "downward-api-a73ed7f0-7987-4cc4-a28d-a5e25f2f95de" satisfied condition "Succeeded or Failed" +Feb 12 09:38:11.170: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod downward-api-a73ed7f0-7987-4cc4-a28d-a5e25f2f95de container dapi-container: +STEP: delete the pod +Feb 12 09:38:11.317: INFO: Waiting for pod downward-api-a73ed7f0-7987-4cc4-a28d-a5e25f2f95de to disappear +Feb 12 09:38:11.327: INFO: Pod downward-api-a73ed7f0-7987-4cc4-a28d-a5e25f2f95de no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:38:11.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7675" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":311,"completed":27,"skipped":420,"failed":0} + +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:38:11.342: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-7304 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation +Feb 12 09:38:11.510: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation +Feb 12 09:38:31.877: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 09:38:37.327: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:38:57.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7304" for this suite. 
+ +• [SLOW TEST:45.986 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":311,"completed":28,"skipped":420,"failed":0} +S +------------------------------ +[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:38:57.330: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename taint-single-pod +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-single-pod-3875 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 +Feb 12 09:38:57.496: INFO: Waiting up to 1m0s for all nodes to be ready +Feb 12 09:39:57.636: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 09:39:57.660: INFO: Starting informer... +STEP: Starting pod... +Feb 12 09:39:57.885: INFO: Pod is running on k8s-calico-coreos-yo5lpoxhpdlk-node-1. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting short time to make sure Pod is queued for deletion +Feb 12 09:39:57.916: INFO: Pod wasn't evicted. Proceeding +Feb 12 09:39:57.916: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting some time to make sure that toleration time passed. +Feb 12 09:41:12.964: INFO: Pod wasn't evicted. Test successful +[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:41:12.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-3875" for this suite. 
+ +• [SLOW TEST:135.674 seconds] +[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":311,"completed":29,"skipped":421,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:41:13.007: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-5107 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Feb 12 09:41:19.372: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Feb 12 09:41:19.378: INFO: Pod pod-with-poststart-exec-hook still exists +Feb 12 09:41:21.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Feb 12 09:41:21.392: INFO: Pod pod-with-poststart-exec-hook still exists +Feb 12 09:41:23.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Feb 12 09:41:23.388: INFO: Pod pod-with-poststart-exec-hook still exists +Feb 12 09:41:25.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Feb 12 09:41:25.391: INFO: Pod pod-with-poststart-exec-hook still exists +Feb 12 09:41:27.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Feb 12 09:41:27.386: INFO: Pod pod-with-poststart-exec-hook still exists +Feb 12 09:41:29.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Feb 12 09:41:29.396: INFO: Pod pod-with-poststart-exec-hook still exists +Feb 12 09:41:31.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Feb 12 09:41:31.392: INFO: Pod pod-with-poststart-exec-hook still exists +Feb 12 09:41:33.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Feb 12 09:41:33.394: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:41:33.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: 
Destroying namespace "container-lifecycle-hook-5107" for this suite. + +• [SLOW TEST:20.405 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":311,"completed":30,"skipped":445,"failed":0} +SSSSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:41:33.414: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-822 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod +Feb 12 09:41:33.568: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:41:37.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-822" for this suite. 
+•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":311,"completed":31,"skipped":455,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:41:37.205: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-5275 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod pod-subpath-test-downwardapi-bblj +STEP: Creating a pod to test atomic-volume-subpath +Feb 12 09:41:37.383: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bblj" in namespace "subpath-5275" to be "Succeeded or Failed" +Feb 12 09:41:37.393: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Pending", Reason="", readiness=false. Elapsed: 9.006662ms +Feb 12 09:41:39.404: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 2.020455156s +Feb 12 09:41:41.421: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 4.037136969s +Feb 12 09:41:43.434: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 6.050854475s +Feb 12 09:41:45.446: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 8.062892309s +Feb 12 09:41:47.460: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 10.07649388s +Feb 12 09:41:49.472: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 12.08799373s +Feb 12 09:41:51.477: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 14.093499522s +Feb 12 09:41:53.494: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 16.110712621s +Feb 12 09:41:55.508: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 18.123966116s +Feb 12 09:41:57.525: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 20.140983489s +Feb 12 09:41:59.536: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Running", Reason="", readiness=true. Elapsed: 22.152219265s +Feb 12 09:42:01.549: INFO: Pod "pod-subpath-test-downwardapi-bblj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.165099491s +STEP: Saw pod success +Feb 12 09:42:01.549: INFO: Pod "pod-subpath-test-downwardapi-bblj" satisfied condition "Succeeded or Failed" +Feb 12 09:42:01.553: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-subpath-test-downwardapi-bblj container test-container-subpath-downwardapi-bblj: +STEP: delete the pod +Feb 12 09:42:01.597: INFO: Waiting for pod pod-subpath-test-downwardapi-bblj to disappear +Feb 12 09:42:01.601: INFO: Pod pod-subpath-test-downwardapi-bblj no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-bblj +Feb 12 09:42:01.602: INFO: Deleting pod "pod-subpath-test-downwardapi-bblj" in namespace "subpath-5275" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:42:01.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-5275" for this suite. + +• [SLOW TEST:24.418 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":311,"completed":32,"skipped":502,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:42:01.624: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1406 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 09:42:02.872: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 09:42:04.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748719722, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748719722, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748719722, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748719722, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 09:42:07.901: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 09:42:07.913: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:42:09.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1406" for this suite. +STEP: Destroying namespace "webhook-1406-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:7.599 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":311,"completed":33,"skipped":511,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:42:09.224: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-5894 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 +STEP: Creating service test in namespace statefulset-5894 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a new StatefulSet +Feb 12 09:42:09.422: INFO: Found 0 stateful pods, waiting for 3 +Feb 12 09:42:19.436: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 09:42:19.436: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 09:42:19.436: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from 10.60.253.37/magnum/httpd:2.4.38-alpine to 10.60.253.37/magnum/httpd:2.4.39-alpine +Feb 12 09:42:19.475: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Feb 12 09:42:29.522: INFO: Updating stateful set ss2 +Feb 12 09:42:29.534: INFO: Waiting for Pod statefulset-5894/ss2-2 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:42:39.553: INFO: Waiting for Pod statefulset-5894/ss2-2 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:42:49.552: INFO: Waiting for Pod statefulset-5894/ss2-2 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df 
+Feb 12 09:42:59.554: INFO: Waiting for Pod statefulset-5894/ss2-2 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:43:09.550: INFO: Waiting for Pod statefulset-5894/ss2-2 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:43:19.555: INFO: Waiting for Pod statefulset-5894/ss2-2 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:43:29.552: INFO: Waiting for Pod statefulset-5894/ss2-2 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +STEP: Restoring Pods to the correct revision when they are deleted +Feb 12 09:43:39.616: INFO: Found 2 stateful pods, waiting for 3 +Feb 12 09:43:49.632: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 09:43:49.632: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 09:43:49.632: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Feb 12 09:43:49.671: INFO: Updating stateful set ss2 +Feb 12 09:43:49.680: INFO: Waiting for Pod statefulset-5894/ss2-1 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:43:59.699: INFO: Waiting for Pod statefulset-5894/ss2-1 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:44:09.723: INFO: Updating stateful set ss2 +Feb 12 09:44:09.736: INFO: Waiting for StatefulSet statefulset-5894/ss2 to complete update +Feb 12 09:44:09.737: INFO: Waiting for Pod statefulset-5894/ss2-0 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:44:19.750: INFO: Waiting for StatefulSet statefulset-5894/ss2 to complete update +Feb 12 09:44:19.750: INFO: Waiting for Pod statefulset-5894/ss2-0 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:44:29.755: INFO: Waiting for StatefulSet statefulset-5894/ss2 to complete update +Feb 12 09:44:29.755: INFO: Waiting for Pod statefulset-5894/ss2-0 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 +Feb 12 09:44:39.750: INFO: Deleting all statefulset in ns statefulset-5894 +Feb 12 09:44:39.756: INFO: Scaling statefulset ss2 to 0 +Feb 12 09:45:39.790: INFO: Waiting for statefulset status.replicas updated to 0 +Feb 12 09:45:39.797: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:45:39.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5894" for this suite. 
+ +• [SLOW TEST:210.610 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":311,"completed":34,"skipped":541,"failed":0} +SSSSSSS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:45:39.835: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename runtimeclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-9365 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: getting /apis +STEP: getting /apis/node.k8s.io +STEP: getting /apis/node.k8s.io/v1 +STEP: creating +STEP: watching +Feb 12 09:45:40.033: INFO: starting watch +STEP: getting +STEP: listing +STEP: patching +STEP: updating +Feb 12 09:45:40.060: INFO: waiting for watch events with expected annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:45:40.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-9365" for this suite. 
+•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":311,"completed":35,"skipped":548,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:45:40.103: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-5860 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod test-webserver-f3ca72a6-210b-4b49-bcfc-f6b515e7c52a in namespace container-probe-5860 +Feb 12 09:45:42.293: INFO: Started pod test-webserver-f3ca72a6-210b-4b49-bcfc-f6b515e7c52a in namespace container-probe-5860 +STEP: checking the pod's current state and verifying that restartCount is present +Feb 12 09:45:42.297: INFO: Initial restart count of pod test-webserver-f3ca72a6-210b-4b49-bcfc-f6b515e7c52a is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:49:44.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5860" for this suite. 
+ +• [SLOW TEST:244.258 seconds] +[k8s.io] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":311,"completed":36,"skipped":569,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:49:44.364: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3026 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 09:49:45.018: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 09:49:47.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748720185, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748720185, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748720185, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748720185, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 09:49:50.072: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: fetching the /apis discovery document +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document +STEP: 
fetching the /apis/admissionregistration.k8s.io discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:49:50.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3026" for this suite. +STEP: Destroying namespace "webhook-3026-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:5.812 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":311,"completed":37,"skipped":602,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:49:50.184: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-7712 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 09:49:50.373: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. 
+Feb 12 09:49:50.389: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:49:50.400: INFO: Number of nodes with available pods: 0 +Feb 12 09:49:50.400: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 09:49:51.411: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:49:51.417: INFO: Number of nodes with available pods: 0 +Feb 12 09:49:51.417: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 09:49:52.411: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:49:52.416: INFO: Number of nodes with available pods: 0 +Feb 12 09:49:52.416: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 09:49:53.412: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:49:53.416: INFO: Number of nodes with available pods: 2 +Feb 12 09:49:53.416: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Feb 12 09:49:53.456: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:53.456: INFO: Wrong image for pod: daemon-set-pjjz6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:53.462: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:49:54.472: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:54.472: INFO: Wrong image for pod: daemon-set-pjjz6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:54.478: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:49:55.470: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:55.470: INFO: Wrong image for pod: daemon-set-pjjz6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:55.475: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:49:56.470: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:56.470: INFO: Wrong image for pod: daemon-set-pjjz6. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:56.470: INFO: Pod daemon-set-pjjz6 is not available +Feb 12 09:49:56.475: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:49:57.475: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:57.475: INFO: Wrong image for pod: daemon-set-pjjz6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:57.475: INFO: Pod daemon-set-pjjz6 is not available +Feb 12 09:49:57.481: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:49:58.468: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:58.468: INFO: Wrong image for pod: daemon-set-pjjz6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:58.468: INFO: Pod daemon-set-pjjz6 is not available +Feb 12 09:49:58.473: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:49:59.472: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:59.472: INFO: Wrong image for pod: daemon-set-pjjz6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:49:59.472: INFO: Pod daemon-set-pjjz6 is not available +Feb 12 09:49:59.478: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:00.477: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:00.477: INFO: Wrong image for pod: daemon-set-pjjz6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:00.477: INFO: Pod daemon-set-pjjz6 is not available +Feb 12 09:50:00.482: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:01.471: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:01.471: INFO: Wrong image for pod: daemon-set-pjjz6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:01.471: INFO: Pod daemon-set-pjjz6 is not available +Feb 12 09:50:01.477: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:02.470: INFO: Wrong image for pod: daemon-set-9gdfs. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:02.471: INFO: Wrong image for pod: daemon-set-pjjz6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:02.471: INFO: Pod daemon-set-pjjz6 is not available +Feb 12 09:50:02.477: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:03.474: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:03.474: INFO: Pod daemon-set-p666q is not available +Feb 12 09:50:03.480: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:04.473: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:04.474: INFO: Pod daemon-set-p666q is not available +Feb 12 09:50:04.480: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:05.472: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:05.478: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:06.470: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:06.475: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:07.471: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:07.471: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:07.477: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:08.468: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:08.469: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:08.473: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:09.471: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. 
+Feb 12 09:50:09.471: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:09.477: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:10.474: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:10.474: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:10.479: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:11.471: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:11.471: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:11.477: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:12.474: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:12.475: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:12.480: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:13.475: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:13.475: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:13.481: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:14.474: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:14.474: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:14.481: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:15.471: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:15.471: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:15.475: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:16.474: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:16.474: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:16.481: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:17.475: INFO: Wrong image for pod: daemon-set-9gdfs. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:17.475: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:17.481: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:18.518: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:18.518: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:18.525: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:19.473: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:19.473: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:19.479: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:20.472: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:20.472: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:20.477: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:21.470: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:21.470: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:21.476: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:22.473: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:22.473: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:22.478: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:23.474: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:23.474: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:23.480: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:24.470: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. 
+Feb 12 09:50:24.471: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:24.476: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:25.475: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:25.475: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:25.480: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:26.490: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:26.491: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:26.501: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:27.476: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:27.476: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:27.481: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:28.477: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:28.477: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:28.483: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:29.474: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:29.474: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:29.481: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:30.476: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:30.476: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:30.482: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:31.491: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:31.494: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:31.507: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:32.468: INFO: Wrong image for pod: daemon-set-9gdfs. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:32.468: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:32.474: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:33.474: INFO: Wrong image for pod: daemon-set-9gdfs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: 10.60.253.37/magnum/httpd:2.4.38-alpine. +Feb 12 09:50:33.474: INFO: Pod daemon-set-9gdfs is not available +Feb 12 09:50:33.479: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:34.473: INFO: Pod daemon-set-7lr8s is not available +Feb 12 09:50:34.477: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +STEP: Check that daemon pods are still running on every node of the cluster. +Feb 12 09:50:34.481: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:34.484: INFO: Number of nodes with available pods: 1 +Feb 12 09:50:34.484: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 09:50:35.497: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:35.502: INFO: Number of nodes with available pods: 1 +Feb 12 09:50:35.502: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 09:50:36.497: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 09:50:36.501: INFO: Number of nodes with available pods: 2 +Feb 12 09:50:36.501: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7712, will wait for the garbage collector to delete the pods +Feb 12 09:50:36.592: INFO: Deleting DaemonSet.extensions daemon-set took: 12.54294ms +Feb 12 09:50:37.592: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000394188s +Feb 12 09:50:43.810: INFO: Number of nodes with available pods: 0 +Feb 12 09:50:43.810: INFO: Number of running nodes: 0, number of available pods: 0 +Feb 12 09:50:43.814: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"568032"},"items":null} + +Feb 12 09:50:43.817: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"568032"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:50:43.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-7712" for this suite. 
+ +• [SLOW TEST:53.673 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":311,"completed":38,"skipped":615,"failed":0} +SS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:50:43.858: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7404 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 +STEP: Creating service test in namespace statefulset-7404 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a new StatefulSet +Feb 12 09:50:44.050: INFO: Found 0 stateful pods, waiting for 3 +Feb 12 09:50:54.072: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 09:50:54.073: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 09:50:54.073: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 09:50:54.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7404 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Feb 12 09:50:54.787: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Feb 12 09:50:54.787: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Feb 12 09:50:54.787: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from 10.60.253.37/magnum/httpd:2.4.38-alpine to 10.60.253.37/magnum/httpd:2.4.39-alpine +Feb 12 09:51:04.859: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Feb 12 09:51:14.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7404 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 09:51:15.323: INFO: 
stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Feb 12 09:51:15.323: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Feb 12 09:51:15.323: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Feb 12 09:51:25.364: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:51:25.364: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:51:25.364: INFO: Waiting for Pod statefulset-7404/ss2-1 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:51:35.379: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:51:35.380: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:51:35.380: INFO: Waiting for Pod statefulset-7404/ss2-1 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:51:45.397: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:51:45.398: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:51:45.398: INFO: Waiting for Pod statefulset-7404/ss2-1 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:51:55.394: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:51:55.394: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:51:55.394: INFO: Waiting for Pod statefulset-7404/ss2-1 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:52:05.381: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:52:05.381: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:52:15.388: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:52:15.388: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:52:25.391: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:52:25.391: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-6dd45f5f74 update revision ss2-94fd986df +Feb 12 09:52:35.391: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +STEP: Rolling back to a previous revision +Feb 12 09:52:45.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7404 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Feb 12 09:52:45.860: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Feb 12 09:52:45.860: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Feb 12 09:52:45.860: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Feb 12 09:52:55.917: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Feb 12 09:53:05.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7404 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 09:53:06.385: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Feb 12 09:53:06.385: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Feb 12 09:53:06.385: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Feb 12 09:53:06.432: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:53:06.432: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:06.432: INFO: Waiting for Pod statefulset-7404/ss2-1 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:06.432: INFO: Waiting for Pod statefulset-7404/ss2-2 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:16.453: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:53:16.454: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:16.454: INFO: Waiting for Pod statefulset-7404/ss2-1 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:16.454: INFO: Waiting for Pod statefulset-7404/ss2-2 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:26.467: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:53:26.467: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:26.467: INFO: Waiting for Pod statefulset-7404/ss2-1 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:26.467: INFO: Waiting for Pod statefulset-7404/ss2-2 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:36.468: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:53:36.468: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:36.468: INFO: Waiting for Pod statefulset-7404/ss2-1 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:46.458: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:53:46.458: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:53:56.454: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:53:56.455: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:54:06.457: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:54:06.457: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:54:16.461: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:54:16.461: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:54:26.467: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +Feb 12 09:54:26.467: INFO: Waiting for Pod statefulset-7404/ss2-0 to have revision ss2-94fd986df update revision ss2-6dd45f5f74 +Feb 12 09:54:36.463: INFO: Waiting for StatefulSet statefulset-7404/ss2 to complete update +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 +Feb 12 09:54:46.462: INFO: Deleting all statefulset in ns statefulset-7404 +Feb 12 09:54:46.466: INFO: Scaling statefulset ss2 to 0 +Feb 12 09:56:36.505: 
INFO: Waiting for statefulset status.replicas updated to 0 +Feb 12 09:56:36.511: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:56:36.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7404" for this suite. + +• [SLOW TEST:352.689 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":311,"completed":39,"skipped":617,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:56:36.549: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1735 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod busybox-fd0fae7d-3f0e-4ebf-aee9-45f48da06a11 in namespace container-probe-1735 +Feb 12 09:56:38.755: INFO: Started pod busybox-fd0fae7d-3f0e-4ebf-aee9-45f48da06a11 in namespace container-probe-1735 +STEP: checking the pod's current state and verifying that restartCount is present +Feb 12 09:56:38.757: INFO: Initial restart count of pod busybox-fd0fae7d-3f0e-4ebf-aee9-45f48da06a11 is 0 +Feb 12 09:57:29.068: INFO: Restart count of pod container-probe-1735/busybox-fd0fae7d-3f0e-4ebf-aee9-45f48da06a11 is now 1 (50.310712561s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:57:29.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1735" for this suite. 
+ +• [SLOW TEST:52.550 seconds] +[k8s.io] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":311,"completed":40,"skipped":647,"failed":0} +SSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:57:29.102: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3150 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:57:29.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3150" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":311,"completed":41,"skipped":650,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:57:29.319: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-548 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Feb 12 09:57:29.493: INFO: Waiting up to 1m0s for all nodes to be ready +Feb 12 09:58:29.563: INFO: Waiting for terminating namespaces to be deleted... 
+[It] validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Create pods that use 2/3 of node resources. +Feb 12 09:58:29.602: INFO: Created pod: pod0-sched-preemption-low-priority +Feb 12 09:58:29.641: INFO: Created pod: pod1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a high priority pod that has same requirements as that of lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:58:45.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-548" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:76.490 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":311,"completed":42,"skipped":677,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:58:45.810: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2210 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Discovering how many secrets are in namespace by default +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Secret +STEP: Ensuring resource quota status captures secret creation +STEP: Deleting a secret +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:59:03.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2210" for this suite. + +• [SLOW TEST:17.296 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a secret. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":311,"completed":43,"skipped":681,"failed":0} +SSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:59:03.109: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-3957 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating secret with name secret-test-map-8ad75811-8f47-42ae-a67a-0390f2899823 +STEP: Creating a pod to test consume secrets +Feb 12 09:59:03.311: INFO: Waiting up to 5m0s for pod "pod-secrets-4607fc03-c76d-44dc-b504-ddf34b31e7ac" in namespace "secrets-3957" to be "Succeeded or Failed" +Feb 12 09:59:03.317: INFO: Pod "pod-secrets-4607fc03-c76d-44dc-b504-ddf34b31e7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094232ms +Feb 12 09:59:05.327: INFO: Pod "pod-secrets-4607fc03-c76d-44dc-b504-ddf34b31e7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015799511s +Feb 12 09:59:07.345: INFO: Pod "pod-secrets-4607fc03-c76d-44dc-b504-ddf34b31e7ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033670706s +STEP: Saw pod success +Feb 12 09:59:07.345: INFO: Pod "pod-secrets-4607fc03-c76d-44dc-b504-ddf34b31e7ac" satisfied condition "Succeeded or Failed" +Feb 12 09:59:07.349: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-secrets-4607fc03-c76d-44dc-b504-ddf34b31e7ac container secret-volume-test: +STEP: delete the pod +Feb 12 09:59:07.492: INFO: Waiting for pod pod-secrets-4607fc03-c76d-44dc-b504-ddf34b31e7ac to disappear +Feb 12 09:59:07.496: INFO: Pod pod-secrets-4607fc03-c76d-44dc-b504-ddf34b31e7ac no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:59:07.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3957" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":44,"skipped":684,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:59:07.516: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7817 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 09:59:07.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc87ffa2-3189-400d-b57f-be7954cbce1a" in namespace "projected-7817" to be "Succeeded or Failed" +Feb 12 09:59:07.729: INFO: Pod "downwardapi-volume-cc87ffa2-3189-400d-b57f-be7954cbce1a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.863352ms +Feb 12 09:59:09.740: INFO: Pod "downwardapi-volume-cc87ffa2-3189-400d-b57f-be7954cbce1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.035112474s +STEP: Saw pod success +Feb 12 09:59:09.741: INFO: Pod "downwardapi-volume-cc87ffa2-3189-400d-b57f-be7954cbce1a" satisfied condition "Succeeded or Failed" +Feb 12 09:59:09.747: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-cc87ffa2-3189-400d-b57f-be7954cbce1a container client-container: +STEP: delete the pod +Feb 12 09:59:09.771: INFO: Waiting for pod downwardapi-volume-cc87ffa2-3189-400d-b57f-be7954cbce1a to disappear +Feb 12 09:59:09.775: INFO: Pod downwardapi-volume-cc87ffa2-3189-400d-b57f-be7954cbce1a no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 09:59:09.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7817" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":45,"skipped":705,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 09:59:09.785: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-9054 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Feb 12 09:59:10.303: INFO: Pod name wrapped-volume-race-bbd5218a-af74-4c69-aaec-76d00bc0751f: Found 0 pods out of 5 +Feb 12 09:59:15.331: INFO: Pod name wrapped-volume-race-bbd5218a-af74-4c69-aaec-76d00bc0751f: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-bbd5218a-af74-4c69-aaec-76d00bc0751f in namespace emptydir-wrapper-9054, will wait for the garbage collector to delete the pods +Feb 12 09:59:27.456: INFO: Deleting ReplicationController wrapped-volume-race-bbd5218a-af74-4c69-aaec-76d00bc0751f took: 9.267779ms +Feb 12 09:59:28.557: INFO: Terminating ReplicationController wrapped-volume-race-bbd5218a-af74-4c69-aaec-76d00bc0751f pods took: 1.100651697s +STEP: Creating RC which spawns configmap-volume pods +Feb 12 09:59:42.788: INFO: Pod name wrapped-volume-race-84ccedd6-d48a-4d49-bcfa-1f494a13c57e: Found 0 pods out of 5 +Feb 12 09:59:47.818: INFO: Pod name wrapped-volume-race-84ccedd6-d48a-4d49-bcfa-1f494a13c57e: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-84ccedd6-d48a-4d49-bcfa-1f494a13c57e in namespace emptydir-wrapper-9054, will wait for the garbage collector to delete the pods +Feb 12 09:59:59.951: INFO: Deleting ReplicationController wrapped-volume-race-84ccedd6-d48a-4d49-bcfa-1f494a13c57e took: 9.099903ms +Feb 12 10:00:01.052: INFO: Terminating ReplicationController wrapped-volume-race-84ccedd6-d48a-4d49-bcfa-1f494a13c57e pods took: 1.100577521s +STEP: Creating RC which spawns configmap-volume pods +Feb 12 10:01:02.719: INFO: Pod name wrapped-volume-race-671127c8-98a0-47df-81b9-9471d46b93f8: Found 0 pods out of 5 +Feb 12 10:01:07.740: INFO: Pod name wrapped-volume-race-671127c8-98a0-47df-81b9-9471d46b93f8: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-671127c8-98a0-47df-81b9-9471d46b93f8 in namespace emptydir-wrapper-9054, will wait for the garbage collector to delete the pods +Feb 12 10:01:17.865: INFO: Deleting ReplicationController wrapped-volume-race-671127c8-98a0-47df-81b9-9471d46b93f8 took: 10.950321ms +Feb 12 10:01:18.966: INFO: Terminating ReplicationController wrapped-volume-race-671127c8-98a0-47df-81b9-9471d46b93f8 
pods took: 1.100522814s +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:02:02.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-9054" for this suite. + +• [SLOW TEST:173.124 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":311,"completed":46,"skipped":707,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:02:02.915: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4064 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Feb 12 10:02:03.088: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying pod deletion was observed +[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:02:13.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4064" for this suite. 
+ +• [SLOW TEST:10.825 seconds] +[k8s.io] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":311,"completed":47,"skipped":725,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:02:13.747: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-6319 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Feb 12 10:02:16.980: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:02:18.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-6319" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":311,"completed":48,"skipped":738,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:02:18.029: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-6041 +STEP: Waiting for a default service account to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:02:18.193: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:02:18.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-6041" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":311,"completed":49,"skipped":753,"failed":0} +SSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:02:18.818: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-2024 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: getting the auto-created API token +STEP: reading a file in the container +Feb 12 10:02:23.543: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2024 pod-service-account-9422b18e-4760-41b9-a030-f38f990bdcb9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Feb 12 10:02:24.478: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2024 pod-service-account-9422b18e-4760-41b9-a030-f38f990bdcb9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Feb 12 10:02:24.880: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2024 pod-service-account-9422b18e-4760-41b9-a030-f38f990bdcb9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:02:25.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-2024" for this suite. 
+ +• [SLOW TEST:6.511 seconds] +[sig-auth] ServiceAccounts +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":311,"completed":50,"skipped":758,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:02:25.331: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-296 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: fetching the /apis discovery document +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/apiextensions.k8s.io discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:02:25.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-296" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":311,"completed":51,"skipped":805,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:02:25.620: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-1763 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-9483 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-628 +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:03:36.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-1763" for this suite. +STEP: Destroying namespace "nsdeletetest-9483" for this suite. +Feb 12 10:03:36.166: INFO: Namespace nsdeletetest-9483 was already deleted +STEP: Destroying namespace "nsdeletetest-628" for this suite. 
+ +• [SLOW TEST:70.553 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":311,"completed":52,"skipped":808,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:03:36.176: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-9317 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Feb 12 10:03:36.379: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9317 0e98adc2-9b3a-4f9b-9670-3654016365eb 571538 0 2021-02-12 10:03:36 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-02-12 10:03:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 10:03:36.379: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9317 0e98adc2-9b3a-4f9b-9670-3654016365eb 571539 0 2021-02-12 10:03:36 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-02-12 10:03:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:03:36.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-9317" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":311,"completed":53,"skipped":822,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:03:36.392: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1358 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 +[It] should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a replication controller +Feb 12 10:03:36.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 create -f -' +Feb 12 10:03:37.163: INFO: stderr: "" +Feb 12 10:03:37.163: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Feb 12 10:03:37.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:03:37.313: INFO: stderr: "" +Feb 12 10:03:37.313: INFO: stdout: "update-demo-nautilus-7t544 update-demo-nautilus-nhsn4 " +Feb 12 10:03:37.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get pods update-demo-nautilus-7t544 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:03:37.485: INFO: stderr: "" +Feb 12 10:03:37.485: INFO: stdout: "" +Feb 12 10:03:37.485: INFO: update-demo-nautilus-7t544 is created but not running +Feb 12 10:03:42.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:03:42.664: INFO: stderr: "" +Feb 12 10:03:42.664: INFO: stdout: "update-demo-nautilus-7t544 update-demo-nautilus-nhsn4 " +Feb 12 10:03:42.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get pods update-demo-nautilus-7t544 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:03:42.826: INFO: stderr: "" +Feb 12 10:03:42.826: INFO: stdout: "true" +Feb 12 10:03:42.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get pods update-demo-nautilus-7t544 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Feb 12 10:03:42.974: INFO: stderr: "" +Feb 12 10:03:42.974: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Feb 12 10:03:42.974: INFO: validating pod update-demo-nautilus-7t544 +Feb 12 10:03:42.985: INFO: got data: { + "image": "nautilus.jpg" +} + +Feb 12 10:03:42.985: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Feb 12 10:03:42.985: INFO: update-demo-nautilus-7t544 is verified up and running +Feb 12 10:03:42.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get pods update-demo-nautilus-nhsn4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:03:43.140: INFO: stderr: "" +Feb 12 10:03:43.140: INFO: stdout: "true" +Feb 12 10:03:43.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get pods update-demo-nautilus-nhsn4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Feb 12 10:03:43.299: INFO: stderr: "" +Feb 12 10:03:43.299: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Feb 12 10:03:43.299: INFO: validating pod update-demo-nautilus-nhsn4 +Feb 12 10:03:43.309: INFO: got data: { + "image": "nautilus.jpg" +} + +Feb 12 10:03:43.309: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Feb 12 10:03:43.309: INFO: update-demo-nautilus-nhsn4 is verified up and running +STEP: using delete to clean up resources +Feb 12 10:03:43.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 delete --grace-period=0 --force -f -' +Feb 12 10:03:43.472: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Feb 12 10:03:43.472: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Feb 12 10:03:43.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get rc,svc -l name=update-demo --no-headers' +Feb 12 10:03:43.644: INFO: stderr: "No resources found in kubectl-1358 namespace.\n" +Feb 12 10:03:43.644: INFO: stdout: "" +Feb 12 10:03:43.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Feb 12 10:03:43.799: INFO: stderr: "" +Feb 12 10:03:43.799: INFO: stdout: "update-demo-nautilus-7t544\nupdate-demo-nautilus-nhsn4\n" +Feb 12 10:03:44.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get rc,svc -l name=update-demo --no-headers' +Feb 12 10:03:44.468: INFO: stderr: "No resources found in kubectl-1358 namespace.\n" +Feb 12 10:03:44.468: INFO: stdout: "" +Feb 12 10:03:44.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-1358 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Feb 12 10:03:44.657: INFO: stderr: "" +Feb 12 10:03:44.657: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:03:44.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1358" for this suite. 
+ +• [SLOW TEST:8.282 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":311,"completed":54,"skipped":825,"failed":0} +SS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:03:44.675: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-5965 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:03:44.942: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0fad51f-c660-4368-9d5f-9efd0a2c8b6b" in namespace "downward-api-5965" to be "Succeeded or Failed" +Feb 12 10:03:44.948: INFO: Pod "downwardapi-volume-e0fad51f-c660-4368-9d5f-9efd0a2c8b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011082ms +Feb 12 10:03:46.960: INFO: Pod "downwardapi-volume-e0fad51f-c660-4368-9d5f-9efd0a2c8b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018753554s +Feb 12 10:03:48.973: INFO: Pod "downwardapi-volume-e0fad51f-c660-4368-9d5f-9efd0a2c8b6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031786891s +STEP: Saw pod success +Feb 12 10:03:48.974: INFO: Pod "downwardapi-volume-e0fad51f-c660-4368-9d5f-9efd0a2c8b6b" satisfied condition "Succeeded or Failed" +Feb 12 10:03:48.978: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-e0fad51f-c660-4368-9d5f-9efd0a2c8b6b container client-container: +STEP: delete the pod +Feb 12 10:03:49.070: INFO: Waiting for pod downwardapi-volume-e0fad51f-c660-4368-9d5f-9efd0a2c8b6b to disappear +Feb 12 10:03:49.079: INFO: Pod downwardapi-volume-e0fad51f-c660-4368-9d5f-9efd0a2c8b6b no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:03:49.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5965" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":55,"skipped":827,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:03:49.096: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8840 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating the pod +Feb 12 10:03:53.863: INFO: Successfully updated pod "labelsupdate7b814d8d-2e65-4451-a48c-f241a83a92f0" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:03:55.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8840" for this suite. 
+ +• [SLOW TEST:6.825 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":311,"completed":56,"skipped":853,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:03:55.929: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-2412 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Feb 12 10:03:59.132: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:03:59.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-2412" for this suite. 
+•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":311,"completed":57,"skipped":901,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:03:59.168: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2986 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:03:59.862: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 10:04:01.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721039, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721039, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721039, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721039, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:04:04.912: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a mutating webhook configuration +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:04:05.003: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2986" for this suite. +STEP: Destroying namespace "webhook-2986-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:5.895 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":311,"completed":58,"skipped":936,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:04:05.064: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-752 +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir volume type on node default medium +Feb 12 10:04:05.280: INFO: Waiting up to 5m0s for pod "pod-d546b6c6-46bb-4888-b21b-4d1edfacbd69" in namespace "emptydir-752" to be "Succeeded or Failed" +Feb 12 10:04:05.290: INFO: Pod "pod-d546b6c6-46bb-4888-b21b-4d1edfacbd69": Phase="Pending", Reason="", readiness=false. Elapsed: 9.406952ms +Feb 12 10:04:07.299: INFO: Pod "pod-d546b6c6-46bb-4888-b21b-4d1edfacbd69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018401343s +Feb 12 10:04:09.310: INFO: Pod "pod-d546b6c6-46bb-4888-b21b-4d1edfacbd69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029896308s +STEP: Saw pod success +Feb 12 10:04:09.311: INFO: Pod "pod-d546b6c6-46bb-4888-b21b-4d1edfacbd69" satisfied condition "Succeeded or Failed" +Feb 12 10:04:09.315: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-d546b6c6-46bb-4888-b21b-4d1edfacbd69 container test-container: +STEP: delete the pod +Feb 12 10:04:09.459: INFO: Waiting for pod pod-d546b6c6-46bb-4888-b21b-4d1edfacbd69 to disappear +Feb 12 10:04:09.467: INFO: Pod pod-d546b6c6-46bb-4888-b21b-4d1edfacbd69 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:04:09.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-752" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":59,"skipped":970,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:04:09.491: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-2865 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating pod +Feb 12 10:04:13.698: INFO: Pod pod-hostip-03833e45-eebb-45b5-b659-c2a529b18a07 has hostIP: 10.0.0.234 +[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:04:13.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2865" for this suite. +•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":311,"completed":60,"skipped":986,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:04:13.711: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-3350 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:04:17.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-3350" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":311,"completed":61,"skipped":1014,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:04:18.017: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4956 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating service nodeport-test with type=NodePort in namespace services-4956 +STEP: creating replication controller nodeport-test in namespace services-4956 +I0212 10:04:18.203799 22 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4956, replica count: 2 +I0212 10:04:21.255607 22 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 10:04:21.256: INFO: Creating new exec pod +Feb 12 10:04:24.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-4956 exec execpodfrhqs -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' +Feb 12 10:04:24.789: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Feb 12 10:04:24.789: INFO: stdout: "" +Feb 12 10:04:24.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-4956 exec execpodfrhqs -- /bin/sh -x -c nc -zv -t -w 2 10.254.157.112 80' +Feb 12 10:04:25.288: INFO: stderr: "+ nc -zv -t -w 2 10.254.157.112 80\nConnection to 10.254.157.112 80 port [tcp/http] succeeded!\n" +Feb 12 10:04:25.288: INFO: stdout: "" +Feb 12 10:04:25.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-4956 exec execpodfrhqs -- /bin/sh -x -c nc -zv -t -w 2 10.0.0.115 32026' +Feb 12 10:04:25.703: INFO: stderr: "+ nc -zv -t -w 2 10.0.0.115 32026\nConnection to 10.0.0.115 32026 port [tcp/32026] succeeded!\n" +Feb 12 10:04:25.703: INFO: stdout: "" +Feb 12 10:04:25.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-4956 exec execpodfrhqs -- /bin/sh -x -c nc -zv -t -w 2 10.0.0.234 32026' +Feb 12 10:04:26.109: INFO: stderr: "+ nc -zv -t -w 2 10.0.0.234 32026\nConnection to 10.0.0.234 32026 port [tcp/32026] succeeded!\n" +Feb 12 10:04:26.109: INFO: stdout: "" +Feb 12 10:04:26.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-4956 exec execpodfrhqs -- /bin/sh -x -c nc -zv -t -w 2 172.24.4.246 32026' +Feb 12 10:04:26.514: INFO: stderr: "+ nc -zv -t -w 2 172.24.4.246 32026\nConnection to 172.24.4.246 32026 port [tcp/32026] 
succeeded!\n" +Feb 12 10:04:26.515: INFO: stdout: "" +Feb 12 10:04:26.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-4956 exec execpodfrhqs -- /bin/sh -x -c nc -zv -t -w 2 172.24.4.157 32026' +Feb 12 10:04:26.937: INFO: stderr: "+ nc -zv -t -w 2 172.24.4.157 32026\nConnection to 172.24.4.157 32026 port [tcp/32026] succeeded!\n" +Feb 12 10:04:26.937: INFO: stdout: "" +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:04:26.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4956" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:8.938 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":311,"completed":62,"skipped":1023,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:04:26.958: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1305 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:04:27.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f928aedd-5984-418e-a911-dc002bdba8fb" in namespace "downward-api-1305" to be "Succeeded or Failed" +Feb 12 10:04:27.154: INFO: Pod "downwardapi-volume-f928aedd-5984-418e-a911-dc002bdba8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.755512ms +Feb 12 10:04:29.166: INFO: Pod "downwardapi-volume-f928aedd-5984-418e-a911-dc002bdba8fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.025019921s +STEP: Saw pod success +Feb 12 10:04:29.166: INFO: Pod "downwardapi-volume-f928aedd-5984-418e-a911-dc002bdba8fb" satisfied condition "Succeeded or Failed" +Feb 12 10:04:29.176: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-f928aedd-5984-418e-a911-dc002bdba8fb container client-container: +STEP: delete the pod +Feb 12 10:04:29.298: INFO: Waiting for pod downwardapi-volume-f928aedd-5984-418e-a911-dc002bdba8fb to disappear +Feb 12 10:04:29.303: INFO: Pod downwardapi-volume-f928aedd-5984-418e-a911-dc002bdba8fb no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:04:29.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1305" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":311,"completed":63,"skipped":1081,"failed":0} +SSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:04:29.320: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-1389 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Feb 12 10:04:29.491: INFO: Waiting up to 1m0s for all nodes to be ready +Feb 12 10:05:29.564: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Create pods that use 2/3 of node resources. +Feb 12 10:05:29.618: INFO: Created pod: pod0-sched-preemption-low-priority +Feb 12 10:05:29.671: INFO: Created pod: pod1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a critical pod that use same resources as that of a lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:05:55.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-1389" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:86.558 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":311,"completed":64,"skipped":1089,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:05:55.878: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7750 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:05:56.062: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a518a33-9f42-47f0-89cd-5f47b3397fcd" in namespace "projected-7750" to be "Succeeded or Failed" +Feb 12 10:05:56.069: INFO: Pod "downwardapi-volume-8a518a33-9f42-47f0-89cd-5f47b3397fcd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.531722ms +Feb 12 10:05:58.083: INFO: Pod "downwardapi-volume-8a518a33-9f42-47f0-89cd-5f47b3397fcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020988238s +STEP: Saw pod success +Feb 12 10:05:58.083: INFO: Pod "downwardapi-volume-8a518a33-9f42-47f0-89cd-5f47b3397fcd" satisfied condition "Succeeded or Failed" +Feb 12 10:05:58.087: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-8a518a33-9f42-47f0-89cd-5f47b3397fcd container client-container: +STEP: delete the pod +Feb 12 10:05:58.120: INFO: Waiting for pod downwardapi-volume-8a518a33-9f42-47f0-89cd-5f47b3397fcd to disappear +Feb 12 10:05:58.126: INFO: Pod downwardapi-volume-8a518a33-9f42-47f0-89cd-5f47b3397fcd no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:05:58.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7750" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":311,"completed":65,"skipped":1096,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:05:58.142: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7660 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:05:58.415: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e8ec818-5960-42b7-86db-0aeee925e83d" in namespace "downward-api-7660" to be "Succeeded or Failed" +Feb 12 10:05:58.422: INFO: Pod "downwardapi-volume-6e8ec818-5960-42b7-86db-0aeee925e83d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.586947ms +Feb 12 10:06:00.447: INFO: Pod "downwardapi-volume-6e8ec818-5960-42b7-86db-0aeee925e83d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.031686551s +STEP: Saw pod success +Feb 12 10:06:00.447: INFO: Pod "downwardapi-volume-6e8ec818-5960-42b7-86db-0aeee925e83d" satisfied condition "Succeeded or Failed" +Feb 12 10:06:00.451: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-6e8ec818-5960-42b7-86db-0aeee925e83d container client-container: +STEP: delete the pod +Feb 12 10:06:00.484: INFO: Waiting for pod downwardapi-volume-6e8ec818-5960-42b7-86db-0aeee925e83d to disappear +Feb 12 10:06:00.489: INFO: Pod downwardapi-volume-6e8ec818-5960-42b7-86db-0aeee925e83d no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:06:00.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7660" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":311,"completed":66,"skipped":1111,"failed":0} +SSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:06:00.501: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-1739 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test override arguments +Feb 12 10:06:00.673: INFO: Waiting up to 5m0s for pod "client-containers-cce4416a-dd47-4e62-a266-5fb64140aaab" in namespace "containers-1739" to be "Succeeded or Failed" +Feb 12 10:06:00.683: INFO: Pod "client-containers-cce4416a-dd47-4e62-a266-5fb64140aaab": Phase="Pending", Reason="", readiness=false. Elapsed: 9.772498ms +Feb 12 10:06:02.700: INFO: Pod "client-containers-cce4416a-dd47-4e62-a266-5fb64140aaab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026082631s +STEP: Saw pod success +Feb 12 10:06:02.700: INFO: Pod "client-containers-cce4416a-dd47-4e62-a266-5fb64140aaab" satisfied condition "Succeeded or Failed" +Feb 12 10:06:02.706: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod client-containers-cce4416a-dd47-4e62-a266-5fb64140aaab container agnhost-container: +STEP: delete the pod +Feb 12 10:06:02.735: INFO: Waiting for pod client-containers-cce4416a-dd47-4e62-a266-5fb64140aaab to disappear +Feb 12 10:06:02.741: INFO: Pod client-containers-cce4416a-dd47-4e62-a266-5fb64140aaab no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:06:02.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-1739" for this suite. 
+•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":311,"completed":67,"skipped":1115,"failed":0} +SSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:06:02.756: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-4119 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +STEP: listing events with field selection filtering on source +STEP: listing events with field selection filtering on reportingController +STEP: getting the test event +STEP: patching the test event +STEP: getting the test event +STEP: updating the test event +STEP: getting the test event +STEP: deleting the test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:06:03.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-4119" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":311,"completed":68,"skipped":1120,"failed":0} +SS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:06:03.016: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-8675 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 +[It] deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:06:03.190: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Feb 12 10:06:08.203: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Feb 12 10:06:08.203: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Feb 12 10:06:10.220: INFO: Creating deployment "test-rollover-deployment" +Feb 12 10:06:10.240: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Feb 12 10:06:12.259: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Feb 12 10:06:12.269: INFO: Ensure that both replica sets have 1 created replica +Feb 12 10:06:12.282: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Feb 12 10:06:12.292: INFO: Updating deployment test-rollover-deployment +Feb 12 10:06:12.292: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Feb 12 10:06:14.343: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Feb 12 10:06:14.384: INFO: Make sure deployment "test-rollover-deployment" is complete +Feb 12 10:06:14.391: INFO: all replica sets need to contain the pod-template-hash label +Feb 12 10:06:14.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721174, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} +Feb 12 10:06:16.406: INFO: all replica sets need to contain the pod-template-hash label +Feb 12 10:06:16.406: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721174, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} +Feb 12 10:06:18.407: INFO: all replica sets need to contain the pod-template-hash label +Feb 12 10:06:18.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721174, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} +Feb 12 10:06:20.411: INFO: all replica sets need to contain the pod-template-hash label +Feb 12 10:06:20.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721174, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} +Feb 12 10:06:22.408: INFO: all replica sets need to contain the pod-template-hash label +Feb 12 10:06:22.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63748721170, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721174, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721170, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} +Feb 12 10:06:24.407: INFO: +Feb 12 10:06:24.407: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 +Feb 12 10:06:24.420: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-8675 d9be9b2a-8c80-460d-aaf7-04c5470ccff1 572705 2 2021-02-12 10:06:10 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-02-12 10:06:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-12 10:06:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003d30358 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-02-12 10:06:10 +0000 UTC,LastTransitionTime:2021-02-12 10:06:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668db69979" has successfully progressed.,LastUpdateTime:2021-02-12 10:06:24 +0000 UTC,LastTransitionTime:2021-02-12 10:06:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Feb 12 10:06:24.424: INFO: New ReplicaSet "test-rollover-deployment-668db69979" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-668db69979 deployment-8675 dc02a83f-9c73-46b9-ab25-d0767b5bc640 572694 2 2021-02-12 10:06:12 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment d9be9b2a-8c80-460d-aaf7-04c5470ccff1 0xc003af72e7 0xc003af72e8}] [] [{kube-controller-manager Update apps/v1 2021-02-12 10:06:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9be9b2a-8c80-460d-aaf7-04c5470ccff1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668db69979,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003af7378 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] 
nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Feb 12 10:06:24.425: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Feb 12 10:06:24.425: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8675 dfa33030-b384-40dd-9c6b-832d39e5592a 572704 2 2021-02-12 10:06:03 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d9be9b2a-8c80-460d-aaf7-04c5470ccff1 0xc003af71d7 0xc003af71d8}] [] [{e2e.test Update apps/v1 2021-02-12 10:06:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-12 10:06:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9be9b2a-8c80-460d-aaf7-04c5470ccff1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd 10.60.253.37/magnum/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003af7278 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Feb 12 10:06:24.425: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-8675 ee0accd7-dff3-40ec-9169-983077034d62 572650 2 2021-02-12 10:06:10 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d9be9b2a-8c80-460d-aaf7-04c5470ccff1 0xc003af73e7 0xc003af73e8}] [] [{kube-controller-manager Update apps/v1 2021-02-12 10:06:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9be9b2a-8c80-460d-aaf7-04c5470ccff1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003af7478 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Feb 12 10:06:24.429: INFO: Pod "test-rollover-deployment-668db69979-xbl4r" is available: +&Pod{ObjectMeta:{test-rollover-deployment-668db69979-xbl4r test-rollover-deployment-668db69979- deployment-8675 9df3b0bd-3801-4a0f-9c33-0fd584259963 572672 0 2021-02-12 10:06:12 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[cni.projectcalico.org/podIP:10.100.92.214/32 cni.projectcalico.org/podIPs:10.100.92.214/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 dc02a83f-9c73-46b9-ab25-d0767b5bc640 0xc003af7977 0xc003af7978}] [] [{kube-controller-manager Update v1 2021-02-12 10:06:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dc02a83f-9c73-46b9-ab25-d0767b5bc640\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 10:06:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:06:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.92.214\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl9kw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl9kw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl9kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:06:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:06:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:06:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:06:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.115,PodIP:10.100.92.214,StartTime:2021-02-12 10:06:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:06:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:docker://b9c652836f33e9ee0ed1991c84b5712a408d989078e3d2efd98fdf98014ca34e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.92.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:06:24.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-8675" for this suite. + +• [SLOW TEST:21.427 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":311,"completed":69,"skipped":1122,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:06:24.455: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3288 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:06:24.656: INFO: 
Waiting up to 5m0s for pod "downwardapi-volume-5e2eec4e-7ba1-4eb7-82e4-c19b14cd8389" in namespace "downward-api-3288" to be "Succeeded or Failed" +Feb 12 10:06:24.662: INFO: Pod "downwardapi-volume-5e2eec4e-7ba1-4eb7-82e4-c19b14cd8389": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010707ms +Feb 12 10:06:26.669: INFO: Pod "downwardapi-volume-5e2eec4e-7ba1-4eb7-82e4-c19b14cd8389": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013387463s +STEP: Saw pod success +Feb 12 10:06:26.669: INFO: Pod "downwardapi-volume-5e2eec4e-7ba1-4eb7-82e4-c19b14cd8389" satisfied condition "Succeeded or Failed" +Feb 12 10:06:26.673: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-5e2eec4e-7ba1-4eb7-82e4-c19b14cd8389 container client-container: +STEP: delete the pod +Feb 12 10:06:26.695: INFO: Waiting for pod downwardapi-volume-5e2eec4e-7ba1-4eb7-82e4-c19b14cd8389 to disappear +Feb 12 10:06:26.701: INFO: Pod downwardapi-volume-5e2eec4e-7ba1-4eb7-82e4-c19b14cd8389 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:06:26.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3288" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":311,"completed":70,"skipped":1173,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:06:26.714: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5765 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name projected-configmap-test-volume-f6864ef4-c8de-4264-9f1d-a9c09e5f73bb +STEP: Creating a pod to test consume configMaps +Feb 12 10:06:26.911: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-36cc3274-fddb-4ba6-8a80-40c107cd38c7" in namespace "projected-5765" to be "Succeeded or Failed" +Feb 12 10:06:26.918: INFO: Pod "pod-projected-configmaps-36cc3274-fddb-4ba6-8a80-40c107cd38c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26461ms +Feb 12 10:06:28.939: INFO: Pod "pod-projected-configmaps-36cc3274-fddb-4ba6-8a80-40c107cd38c7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.027688908s +STEP: Saw pod success +Feb 12 10:06:28.939: INFO: Pod "pod-projected-configmaps-36cc3274-fddb-4ba6-8a80-40c107cd38c7" satisfied condition "Succeeded or Failed" +Feb 12 10:06:28.944: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-projected-configmaps-36cc3274-fddb-4ba6-8a80-40c107cd38c7 container agnhost-container: +STEP: delete the pod +Feb 12 10:06:28.968: INFO: Waiting for pod pod-projected-configmaps-36cc3274-fddb-4ba6-8a80-40c107cd38c7 to disappear +Feb 12 10:06:28.973: INFO: Pod pod-projected-configmaps-36cc3274-fddb-4ba6-8a80-40c107cd38c7 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:06:28.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5765" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":71,"skipped":1212,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:06:28.992: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-695 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:06:30.268: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:06:33.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:06:33.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-695" for this suite. +STEP: Destroying namespace "webhook-695-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":311,"completed":72,"skipped":1236,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:06:33.585: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-8194 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-1771 +STEP: Creating secret with name secret-test-76d0dab1-1586-4703-b8ae-4366b5ae0b93 +STEP: Creating a pod to test consume secrets +Feb 12 10:06:33.990: INFO: Waiting up to 5m0s for pod "pod-secrets-b9936f90-1dde-41a4-b098-735658ab0b37" in namespace "secrets-8194" to be "Succeeded or Failed" +Feb 12 10:06:33.998: INFO: Pod "pod-secrets-b9936f90-1dde-41a4-b098-735658ab0b37": Phase="Pending", Reason="", readiness=false. Elapsed: 7.360813ms +Feb 12 10:06:36.002: INFO: Pod "pod-secrets-b9936f90-1dde-41a4-b098-735658ab0b37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011557677s +Feb 12 10:06:38.012: INFO: Pod "pod-secrets-b9936f90-1dde-41a4-b098-735658ab0b37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021101546s +STEP: Saw pod success +Feb 12 10:06:38.012: INFO: Pod "pod-secrets-b9936f90-1dde-41a4-b098-735658ab0b37" satisfied condition "Succeeded or Failed" +Feb 12 10:06:38.015: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-secrets-b9936f90-1dde-41a4-b098-735658ab0b37 container secret-volume-test: +STEP: delete the pod +Feb 12 10:06:38.047: INFO: Waiting for pod pod-secrets-b9936f90-1dde-41a4-b098-735658ab0b37 to disappear +Feb 12 10:06:38.051: INFO: Pod pod-secrets-b9936f90-1dde-41a4-b098-735658ab0b37 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:06:38.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-8194" for this suite. +STEP: Destroying namespace "secret-namespace-1771" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":311,"completed":73,"skipped":1243,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:06:38.073: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1958 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating projection with secret that has name projected-secret-test-8eddf9c8-99b2-4bf4-8b52-e4bb5fcfdb3b +STEP: Creating a pod to test consume secrets +Feb 12 10:06:38.321: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8d915431-8f6f-466e-a5a3-eb0f0801e876" in namespace "projected-1958" to be "Succeeded or Failed" +Feb 12 10:06:38.326: INFO: Pod "pod-projected-secrets-8d915431-8f6f-466e-a5a3-eb0f0801e876": Phase="Pending", Reason="", readiness=false. Elapsed: 5.264705ms +Feb 12 10:06:40.340: INFO: Pod "pod-projected-secrets-8d915431-8f6f-466e-a5a3-eb0f0801e876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019177005s +Feb 12 10:06:42.358: INFO: Pod "pod-projected-secrets-8d915431-8f6f-466e-a5a3-eb0f0801e876": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037127938s +STEP: Saw pod success +Feb 12 10:06:42.358: INFO: Pod "pod-projected-secrets-8d915431-8f6f-466e-a5a3-eb0f0801e876" satisfied condition "Succeeded or Failed" +Feb 12 10:06:42.361: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-projected-secrets-8d915431-8f6f-466e-a5a3-eb0f0801e876 container projected-secret-volume-test: +STEP: delete the pod +Feb 12 10:06:42.393: INFO: Waiting for pod pod-projected-secrets-8d915431-8f6f-466e-a5a3-eb0f0801e876 to disappear +Feb 12 10:06:42.398: INFO: Pod pod-projected-secrets-8d915431-8f6f-466e-a5a3-eb0f0801e876 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:06:42.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1958" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":74,"skipped":1254,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:06:42.411: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7965 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 +[It] should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a replication controller +Feb 12 10:06:42.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 create -f -' +Feb 12 10:06:43.024: INFO: stderr: "" +Feb 12 10:06:43.024: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Feb 12 10:06:43.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:06:43.202: INFO: stderr: "" +Feb 12 10:06:43.202: INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +Feb 12 10:06:43.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-7n94m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:06:43.379: INFO: stderr: "" +Feb 12 10:06:43.379: INFO: stdout: "" +Feb 12 10:06:43.379: INFO: update-demo-nautilus-7n94m is created but not running +Feb 12 10:06:48.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:06:48.544: INFO: stderr: "" +Feb 12 10:06:48.545: INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +Feb 12 10:06:48.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-7n94m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:06:48.730: INFO: stderr: "" +Feb 12 10:06:48.730: INFO: stdout: "true" +Feb 12 10:06:48.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-7n94m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Feb 12 10:06:48.882: INFO: stderr: "" +Feb 12 10:06:48.882: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Feb 12 10:06:48.882: INFO: validating pod update-demo-nautilus-7n94m +Feb 12 10:06:48.895: INFO: got data: { + "image": "nautilus.jpg" +} + +Feb 12 10:06:48.896: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Feb 12 10:06:48.896: INFO: update-demo-nautilus-7n94m is verified up and running +Feb 12 10:06:48.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-bn8j7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:06:49.073: INFO: stderr: "" +Feb 12 10:06:49.073: INFO: stdout: "true" +Feb 12 10:06:49.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-bn8j7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Feb 12 10:06:49.226: INFO: stderr: "" +Feb 12 10:06:49.226: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Feb 12 10:06:49.226: INFO: validating pod update-demo-nautilus-bn8j7 +Feb 12 10:06:49.237: INFO: got data: { + "image": "nautilus.jpg" +} + +Feb 12 10:06:49.237: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Feb 12 10:06:49.237: INFO: update-demo-nautilus-bn8j7 is verified up and running +STEP: scaling down the replication controller +Feb 12 10:06:49.245: INFO: scanned /root for discovery docs: +Feb 12 10:06:49.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Feb 12 10:06:50.449: INFO: stderr: "" +Feb 12 10:06:50.449: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Feb 12 10:06:50.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:06:50.611: INFO: stderr: "" +Feb 12 10:06:50.611: INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Feb 12 10:06:55.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:06:55.788: INFO: stderr: "" +Feb 12 10:06:55.788: INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Feb 12 10:07:00.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:07:00.965: INFO: stderr: "" +Feb 12 10:07:00.965: INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Feb 12 10:07:05.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:07:06.232: INFO: stderr: "" +Feb 12 10:07:06.232: INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Feb 12 10:07:11.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:07:11.401: INFO: stderr: "" +Feb 12 10:07:11.401: INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Feb 12 10:07:16.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:07:16.566: INFO: stderr: "" +Feb 12 10:07:16.567: INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Feb 12 10:07:21.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:07:21.777: INFO: stderr: "" +Feb 12 10:07:21.778: INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Feb 12 10:07:26.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:07:26.957: INFO: stderr: "" +Feb 12 10:07:26.957: INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Feb 12 10:07:31.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:07:32.140: INFO: stderr: "" +Feb 12 10:07:32.140: 
INFO: stdout: "update-demo-nautilus-7n94m update-demo-nautilus-bn8j7 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Feb 12 10:07:37.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:07:37.305: INFO: stderr: "" +Feb 12 10:07:37.305: INFO: stdout: "update-demo-nautilus-bn8j7 " +Feb 12 10:07:37.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-bn8j7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:07:37.450: INFO: stderr: "" +Feb 12 10:07:37.450: INFO: stdout: "true" +Feb 12 10:07:37.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-bn8j7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Feb 12 10:07:37.601: INFO: stderr: "" +Feb 12 10:07:37.601: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Feb 12 10:07:37.601: INFO: validating pod update-demo-nautilus-bn8j7 +Feb 12 10:07:37.613: INFO: got data: { + "image": "nautilus.jpg" +} + +Feb 12 10:07:37.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Feb 12 10:07:37.613: INFO: update-demo-nautilus-bn8j7 is verified up and running +STEP: scaling up the replication controller +Feb 12 10:07:37.619: INFO: scanned /root for discovery docs: +Feb 12 10:07:37.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Feb 12 10:07:38.830: INFO: stderr: "" +Feb 12 10:07:38.831: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Feb 12 10:07:38.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:07:38.980: INFO: stderr: "" +Feb 12 10:07:38.980: INFO: stdout: "update-demo-nautilus-bn8j7 update-demo-nautilus-tm5mf " +Feb 12 10:07:38.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-bn8j7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:07:39.144: INFO: stderr: "" +Feb 12 10:07:39.144: INFO: stdout: "true" +Feb 12 10:07:39.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-bn8j7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Feb 12 10:07:39.301: INFO: stderr: "" +Feb 12 10:07:39.301: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Feb 12 10:07:39.301: INFO: validating pod update-demo-nautilus-bn8j7 +Feb 12 10:07:39.307: INFO: got data: { + "image": "nautilus.jpg" +} + +Feb 12 10:07:39.307: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Feb 12 10:07:39.307: INFO: update-demo-nautilus-bn8j7 is verified up and running +Feb 12 10:07:39.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-tm5mf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:07:39.461: INFO: stderr: "" +Feb 12 10:07:39.461: INFO: stdout: "" +Feb 12 10:07:39.461: INFO: update-demo-nautilus-tm5mf is created but not running +Feb 12 10:07:44.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Feb 12 10:07:44.638: INFO: stderr: "" +Feb 12 10:07:44.638: INFO: stdout: "update-demo-nautilus-bn8j7 update-demo-nautilus-tm5mf " +Feb 12 10:07:44.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-bn8j7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:07:44.786: INFO: stderr: "" +Feb 12 10:07:44.786: INFO: stdout: "true" +Feb 12 10:07:44.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-bn8j7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Feb 12 10:07:45.017: INFO: stderr: "" +Feb 12 10:07:45.017: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Feb 12 10:07:45.017: INFO: validating pod update-demo-nautilus-bn8j7 +Feb 12 10:07:45.032: INFO: got data: { + "image": "nautilus.jpg" +} + +Feb 12 10:07:45.032: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Feb 12 10:07:45.032: INFO: update-demo-nautilus-bn8j7 is verified up and running +Feb 12 10:07:45.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-tm5mf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Feb 12 10:07:45.229: INFO: stderr: "" +Feb 12 10:07:45.229: INFO: stdout: "true" +Feb 12 10:07:45.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods update-demo-nautilus-tm5mf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Feb 12 10:07:45.373: INFO: stderr: "" +Feb 12 10:07:45.373: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Feb 12 10:07:45.373: INFO: validating pod update-demo-nautilus-tm5mf +Feb 12 10:07:45.383: INFO: got data: { + "image": "nautilus.jpg" +} + +Feb 12 10:07:45.383: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Feb 12 10:07:45.383: INFO: update-demo-nautilus-tm5mf is verified up and running +STEP: using delete to clean up resources +Feb 12 10:07:45.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 delete --grace-period=0 --force -f -' +Feb 12 10:07:45.531: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Feb 12 10:07:45.531: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Feb 12 10:07:45.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get rc,svc -l name=update-demo --no-headers' +Feb 12 10:07:45.688: INFO: stderr: "No resources found in kubectl-7965 namespace.\n" +Feb 12 10:07:45.689: INFO: stdout: "" +Feb 12 10:07:45.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Feb 12 10:07:45.847: INFO: stderr: "" +Feb 12 10:07:45.847: INFO: stdout: "update-demo-nautilus-bn8j7\nupdate-demo-nautilus-tm5mf\n" +Feb 12 10:07:46.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get rc,svc -l name=update-demo --no-headers' +Feb 12 10:07:46.552: INFO: stderr: "No resources found in kubectl-7965 namespace.\n" +Feb 12 10:07:46.552: INFO: stdout: "" +Feb 12 10:07:46.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Feb 12 10:07:46.737: INFO: stderr: "" +Feb 12 10:07:46.737: INFO: stdout: "update-demo-nautilus-bn8j7\nupdate-demo-nautilus-tm5mf\n" +Feb 12 10:07:46.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get rc,svc -l name=update-demo --no-headers' +Feb 12 10:07:47.020: INFO: stderr: "No resources found in kubectl-7965 namespace.\n" +Feb 12 10:07:47.020: INFO: stdout: "" +Feb 12 10:07:47.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Feb 12 10:07:47.184: INFO: stderr: "" +Feb 12 10:07:47.184: INFO: stdout: "update-demo-nautilus-bn8j7\nupdate-demo-nautilus-tm5mf\n" +Feb 12 10:07:47.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get rc,svc -l name=update-demo --no-headers' +Feb 12 10:07:47.503: INFO: stderr: "No resources found in kubectl-7965 namespace.\n" +Feb 12 10:07:47.503: INFO: stdout: "" +Feb 12 10:07:47.503: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Feb 12 10:07:47.666: INFO: stderr: "" +Feb 12 10:07:47.666: INFO: stdout: "update-demo-nautilus-bn8j7\nupdate-demo-nautilus-tm5mf\n" +Feb 12 10:07:47.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get rc,svc -l name=update-demo --no-headers' +Feb 12 10:07:48.004: INFO: stderr: "No resources found in kubectl-7965 namespace.\n" +Feb 12 10:07:48.004: INFO: stdout: "" +Feb 12 10:07:48.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Feb 12 10:07:48.157: INFO: stderr: "" +Feb 12 10:07:48.157: INFO: stdout: "update-demo-nautilus-bn8j7\nupdate-demo-nautilus-tm5mf\n" +Feb 12 10:07:48.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get rc,svc -l name=update-demo --no-headers' +Feb 12 10:07:48.502: INFO: stderr: "No resources found in kubectl-7965 namespace.\n" +Feb 12 10:07:48.502: INFO: stdout: "" +Feb 12 10:07:48.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Feb 12 10:07:48.676: INFO: stderr: "" +Feb 12 10:07:48.676: INFO: stdout: "update-demo-nautilus-bn8j7\nupdate-demo-nautilus-tm5mf\n" +Feb 12 10:07:48.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get rc,svc -l name=update-demo --no-headers' +Feb 12 10:07:49.030: INFO: stderr: "No resources found in kubectl-7965 namespace.\n" +Feb 12 10:07:49.031: INFO: stdout: "" +Feb 12 10:07:49.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7965 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Feb 12 10:07:49.239: INFO: stderr: "" +Feb 12 10:07:49.239: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:07:49.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7965" for this suite. 
+ +• [SLOW TEST:66.847 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":311,"completed":75,"skipped":1257,"failed":0} +SSSSSSSS +------------------------------ +[k8s.io] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:07:49.259: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-828 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Feb 12 10:07:51.455: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:07:51.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-828" for this suite. 
+•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":311,"completed":76,"skipped":1265,"failed":0} +SSSS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a volume subpath [sig-storage] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:07:51.480: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6308 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a volume subpath [sig-storage] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test substitution in volume subpath +Feb 12 10:07:51.660: INFO: Waiting up to 5m0s for pod "var-expansion-96fcf060-2354-441f-bc84-88e1e1c08c66" in namespace "var-expansion-6308" to be "Succeeded or Failed" +Feb 12 10:07:51.664: INFO: Pod "var-expansion-96fcf060-2354-441f-bc84-88e1e1c08c66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024046ms +Feb 12 10:07:53.674: INFO: Pod "var-expansion-96fcf060-2354-441f-bc84-88e1e1c08c66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01391431s +STEP: Saw pod success +Feb 12 10:07:53.674: INFO: Pod "var-expansion-96fcf060-2354-441f-bc84-88e1e1c08c66" satisfied condition "Succeeded or Failed" +Feb 12 10:07:53.678: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod var-expansion-96fcf060-2354-441f-bc84-88e1e1c08c66 container dapi-container: +STEP: delete the pod +Feb 12 10:07:53.709: INFO: Waiting for pod var-expansion-96fcf060-2354-441f-bc84-88e1e1c08c66 to disappear +Feb 12 10:07:53.717: INFO: Pod var-expansion-96fcf060-2354-441f-bc84-88e1e1c08c66 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:07:53.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6308" for this suite. 
+•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":311,"completed":77,"skipped":1269,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:07:53.731: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5420 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:07:55.891: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 10:07:57.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721275, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721275, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721275, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721275, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:08:00.942: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:01.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5420" for this suite. +STEP: Destroying namespace "webhook-5420-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:7.389 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":311,"completed":78,"skipped":1283,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] [sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:01.122: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-8786 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Feb 12 10:08:05.364: INFO: &Pod{ObjectMeta:{send-events-5d0455f4-d672-433e-a072-d44e4140b515 events-8786 ca4119f1-b763-4a9e-8173-4c8cc0fbddbd 573454 0 2021-02-12 10:08:01 +0000 UTC map[name:foo time:307934563] map[cni.projectcalico.org/podIP:10.100.45.30/32 cni.projectcalico.org/podIPs:10.100.45.30/32 kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-02-12 10:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 10:08:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:08:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.45.30\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7tzkq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7tzkq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7tzkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:08:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:08:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:08:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:08:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.234,PodIP:10.100.45.30,StartTime:2021-02-12 10:08:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:08:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:docker://d3c553f1f83a1ba5bca380c02fc9095cce516fd532157977cc4c29a33bef86af,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.45.30,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +STEP: checking for scheduler event about the pod +Feb 12 10:08:07.378: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Feb 12 10:08:09.395: INFO: Saw kubelet event for our pod. +STEP: deleting the pod +[AfterEach] [k8s.io] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:09.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-8786" for this suite. + +• [SLOW TEST:8.299 seconds] +[k8s.io] [sig-node] Events +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":311,"completed":79,"skipped":1308,"failed":0} +S +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:09.424: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-1099 +STEP: Waiting for a default service account to be provisioned in namespace +[It] listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:08:09.594: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +[AfterEach] [sig-api-machinery] CustomResourceDefinition 
resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:15.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-1099" for this suite. + +• [SLOW TEST:6.512 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":311,"completed":80,"skipped":1309,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:15.938: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5207 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 +[It] should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Feb 12 10:08:20.665: INFO: Successfully updated pod "pod-update-ff7ef0c8-4e2a-400c-b9fe-f845ed56921c" +STEP: verifying the updated pod is in kubernetes +Feb 12 10:08:20.672: INFO: Pod update OK +[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:20.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5207" for this suite. 
+•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":311,"completed":81,"skipped":1338,"failed":0} + +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:20.685: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-7334 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7334 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7334;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7334 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7334;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7334.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7334.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7334.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7334.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7334.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7334.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7334.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7334.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7334.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7334.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7334.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7334.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7334.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 202.134.254.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.254.134.202_udp@PTR;check="$$(dig +tcp +noall +answer +search 202.134.254.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.254.134.202_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7334 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7334;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7334 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7334;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7334.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7334.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7334.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7334.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7334.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7334.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7334.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7334.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7334.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7334.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7334.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7334.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7334.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 202.134.254.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.254.134.202_udp@PTR;check="$$(dig +tcp +noall +answer +search 202.134.254.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.254.134.202_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Feb 12 10:08:25.016: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.021: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.027: INFO: Unable to read wheezy_udp@dns-test-service.dns-7334 from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.033: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7334 from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.037: INFO: Unable to read wheezy_udp@dns-test-service.dns-7334.svc from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.041: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7334.svc from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.046: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7334.svc from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.052: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7334.svc from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.065: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.069: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.081: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.085: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.089: INFO: Unable to read jessie_udp@dns-test-service.dns-7334 from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.095: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-7334 from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.100: INFO: Unable to read jessie_udp@dns-test-service.dns-7334.svc from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.103: INFO: Unable to read jessie_tcp@dns-test-service.dns-7334.svc from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.107: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7334.svc from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.122: INFO: Unable to read jessie_udp@PodARecord from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.126: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788: the server could not find the requested resource (get pods dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788) +Feb 12 10:08:25.134: INFO: Lookups using dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7334 wheezy_tcp@dns-test-service.dns-7334 wheezy_udp@dns-test-service.dns-7334.svc wheezy_tcp@dns-test-service.dns-7334.svc wheezy_udp@_http._tcp.dns-test-service.dns-7334.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7334.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7334 jessie_tcp@dns-test-service.dns-7334 jessie_udp@dns-test-service.dns-7334.svc jessie_tcp@dns-test-service.dns-7334.svc jessie_udp@_http._tcp.dns-test-service.dns-7334.svc jessie_udp@PodARecord jessie_tcp@PodARecord] + +Feb 12 10:08:30.254: INFO: DNS probes using dns-7334/dns-test-dffc96b0-265e-4877-ac6f-cb5d86450788 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:30.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-7334" for this suite. 
+ +• [SLOW TEST:9.682 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":311,"completed":82,"skipped":1338,"failed":0} +SS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:30.367: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-9353 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test substitution in container's args +Feb 12 10:08:30.613: INFO: Waiting up to 5m0s for pod "var-expansion-f42b5a39-618e-4725-9dd3-3a596e2bdb1b" in namespace "var-expansion-9353" to be "Succeeded or Failed" +Feb 12 10:08:30.619: INFO: Pod "var-expansion-f42b5a39-618e-4725-9dd3-3a596e2bdb1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330722ms +Feb 12 10:08:32.627: INFO: Pod "var-expansion-f42b5a39-618e-4725-9dd3-3a596e2bdb1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013715759s +STEP: Saw pod success +Feb 12 10:08:32.627: INFO: Pod "var-expansion-f42b5a39-618e-4725-9dd3-3a596e2bdb1b" satisfied condition "Succeeded or Failed" +Feb 12 10:08:32.647: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod var-expansion-f42b5a39-618e-4725-9dd3-3a596e2bdb1b container dapi-container: +STEP: delete the pod +Feb 12 10:08:32.741: INFO: Waiting for pod var-expansion-f42b5a39-618e-4725-9dd3-3a596e2bdb1b to disappear +Feb 12 10:08:32.745: INFO: Pod var-expansion-f42b5a39-618e-4725-9dd3-3a596e2bdb1b no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:32.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9353" for this suite. 
+•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":311,"completed":83,"skipped":1340,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:32.760: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1303 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Feb 12 10:08:42.990: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:42.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W0212 10:08:42.989868 22 metrics_grabber.go:98] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled. +W0212 10:08:42.990009 22 metrics_grabber.go:102] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +W0212 10:08:42.990035 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. +STEP: Destroying namespace "gc-1303" for this suite. 
+ +• [SLOW TEST:10.243 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":311,"completed":84,"skipped":1369,"failed":0} +S +------------------------------ +[sig-api-machinery] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:43.004: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-191 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating secret with name secret-test-b35ef24b-8e45-4173-8980-181c5bec6da1 +STEP: Creating a pod to test consume secrets +Feb 12 10:08:43.189: INFO: Waiting up to 5m0s for pod "pod-secrets-dc198de7-f9c4-4c32-9d91-bb72fbdb759a" in namespace "secrets-191" to be "Succeeded or Failed" +Feb 12 10:08:43.196: INFO: Pod "pod-secrets-dc198de7-f9c4-4c32-9d91-bb72fbdb759a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.140336ms +Feb 12 10:08:45.205: INFO: Pod "pod-secrets-dc198de7-f9c4-4c32-9d91-bb72fbdb759a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015528717s +STEP: Saw pod success +Feb 12 10:08:45.205: INFO: Pod "pod-secrets-dc198de7-f9c4-4c32-9d91-bb72fbdb759a" satisfied condition "Succeeded or Failed" +Feb 12 10:08:45.207: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-secrets-dc198de7-f9c4-4c32-9d91-bb72fbdb759a container secret-env-test: +STEP: delete the pod +Feb 12 10:08:45.255: INFO: Waiting for pod pod-secrets-dc198de7-f9c4-4c32-9d91-bb72fbdb759a to disappear +Feb 12 10:08:45.260: INFO: Pod pod-secrets-dc198de7-f9c4-4c32-9d91-bb72fbdb759a no longer exists +[AfterEach] [sig-api-machinery] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:45.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-191" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":311,"completed":85,"skipped":1370,"failed":0} +S +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:45.272: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6521 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: starting the proxy server +Feb 12 10:08:45.431: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-6521 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:45.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6521" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":311,"completed":86,"skipped":1371,"failed":0} +SSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:45.618: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2569 +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name configmap-test-upd-f72fe708-9b4f-4d51-9fe1-41f666187a3b +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:49.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2569" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":87,"skipped":1375,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:49.883: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-4715 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: getting the auto-created API token +Feb 12 10:08:50.607: INFO: created pod pod-service-account-defaultsa +Feb 12 10:08:50.607: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Feb 12 10:08:50.623: INFO: created pod pod-service-account-mountsa +Feb 12 10:08:50.623: INFO: pod pod-service-account-mountsa service account token volume mount: true +Feb 12 10:08:50.651: INFO: created pod pod-service-account-nomountsa +Feb 12 10:08:50.651: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Feb 12 10:08:50.662: INFO: created pod pod-service-account-defaultsa-mountspec +Feb 12 10:08:50.662: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Feb 12 10:08:50.672: INFO: created pod pod-service-account-mountsa-mountspec +Feb 12 10:08:50.672: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Feb 12 10:08:50.678: INFO: created pod pod-service-account-nomountsa-mountspec +Feb 12 10:08:50.678: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Feb 12 10:08:50.695: INFO: created pod pod-service-account-defaultsa-nomountspec +Feb 12 10:08:50.695: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Feb 12 10:08:50.704: INFO: created pod pod-service-account-mountsa-nomountspec +Feb 12 10:08:50.704: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Feb 12 10:08:50.716: INFO: created pod pod-service-account-nomountsa-nomountspec +Feb 12 10:08:50.716: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:08:50.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-4715" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":311,"completed":88,"skipped":1407,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:08:50.738: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-9521 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the pod with lifecycle hook +STEP: delete the pod with lifecycle hook +Feb 12 10:08:57.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Feb 12 10:08:57.023: INFO: Pod pod-with-prestop-exec-hook still exists +Feb 12 10:08:59.023: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Feb 12 10:08:59.036: INFO: Pod pod-with-prestop-exec-hook still exists +Feb 12 10:09:01.023: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Feb 12 10:09:01.035: INFO: Pod pod-with-prestop-exec-hook still exists +Feb 12 10:09:03.023: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Feb 12 10:09:03.032: INFO: Pod pod-with-prestop-exec-hook still exists +Feb 12 10:09:05.023: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Feb 12 10:09:05.029: INFO: Pod pod-with-prestop-exec-hook still exists +Feb 12 10:09:07.023: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Feb 12 10:09:07.027: INFO: Pod pod-with-prestop-exec-hook still exists +Feb 12 10:09:09.023: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Feb 12 10:09:09.036: INFO: Pod pod-with-prestop-exec-hook still exists +Feb 12 10:09:11.023: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Feb 12 10:09:11.039: INFO: Pod pod-with-prestop-exec-hook still exists +Feb 12 10:09:13.023: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Feb 12 10:09:13.035: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:09:13.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-9521" for this suite. 
+ +• [SLOW TEST:22.393 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":311,"completed":89,"skipped":1422,"failed":0} +SSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:09:13.132: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1351 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name projected-configmap-test-volume-d724b588-df43-49d1-a625-f46ddaa38bca +STEP: Creating a pod to test consume configMaps +Feb 12 10:09:13.316: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b77e5add-5acb-424c-b5b6-8ce9b9582bef" in namespace "projected-1351" to be "Succeeded or Failed" +Feb 12 10:09:13.325: INFO: Pod "pod-projected-configmaps-b77e5add-5acb-424c-b5b6-8ce9b9582bef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.999345ms +Feb 12 10:09:15.336: INFO: Pod "pod-projected-configmaps-b77e5add-5acb-424c-b5b6-8ce9b9582bef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019899333s +Feb 12 10:09:17.342: INFO: Pod "pod-projected-configmaps-b77e5add-5acb-424c-b5b6-8ce9b9582bef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026503685s +STEP: Saw pod success +Feb 12 10:09:17.343: INFO: Pod "pod-projected-configmaps-b77e5add-5acb-424c-b5b6-8ce9b9582bef" satisfied condition "Succeeded or Failed" +Feb 12 10:09:17.347: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-projected-configmaps-b77e5add-5acb-424c-b5b6-8ce9b9582bef container projected-configmap-volume-test: +STEP: delete the pod +Feb 12 10:09:17.379: INFO: Waiting for pod pod-projected-configmaps-b77e5add-5acb-424c-b5b6-8ce9b9582bef to disappear +Feb 12 10:09:17.386: INFO: Pod pod-projected-configmaps-b77e5add-5acb-424c-b5b6-8ce9b9582bef no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:09:17.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1351" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":311,"completed":90,"skipped":1425,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:09:17.399: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3221 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating service in namespace services-3221 +STEP: creating service affinity-clusterip in namespace services-3221 +STEP: creating replication controller affinity-clusterip in namespace services-3221 +I0212 10:09:17.584074 22 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-3221, replica count: 3 +I0212 10:09:20.636085 22 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0212 10:09:23.637072 22 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 10:09:23.662: INFO: Creating new exec pod +Feb 12 10:09:28.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-3221 exec execpod-affinityt7bzp -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' +Feb 12 10:09:29.240: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Feb 12 10:09:29.240: INFO: stdout: "" +Feb 12 10:09:29.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-3221 exec execpod-affinityt7bzp -- /bin/sh -x -c nc -zv -t -w 2 10.254.186.172 80' +Feb 12 10:09:29.647: INFO: stderr: "+ nc -zv -t -w 2 10.254.186.172 80\nConnection to 10.254.186.172 80 port [tcp/http] succeeded!\n" +Feb 12 10:09:29.647: INFO: stdout: "" +Feb 12 10:09:29.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-3221 exec execpod-affinityt7bzp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.254.186.172:80/ ; done' +Feb 12 10:09:30.210: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 
2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.186.172:80/\n" +Feb 12 10:09:30.211: INFO: stdout: "\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm\naffinity-clusterip-8r7lm" +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Received response from host: affinity-clusterip-8r7lm +Feb 12 10:09:30.211: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-3221, will wait for the garbage collector to delete the pods +Feb 12 10:09:30.301: INFO: Deleting ReplicationController affinity-clusterip took: 6.90453ms +Feb 12 10:09:31.301: INFO: Terminating ReplicationController affinity-clusterip pods took: 1.000274897s +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:10:33.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3221" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:76.460 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":311,"completed":91,"skipped":1435,"failed":0} +SSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:10:33.860: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2046 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:10:36.056: INFO: Deleting pod "var-expansion-99058b1f-b9fa-4944-843c-37e2864cc713" in namespace "var-expansion-2046" +Feb 12 10:10:36.069: INFO: Wait up to 5m0s for pod "var-expansion-99058b1f-b9fa-4944-843c-37e2864cc713" to be fully deleted +[AfterEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:11:34.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-2046" for this suite. 
+ +• [SLOW TEST:60.255 seconds] +[k8s.io] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":311,"completed":92,"skipped":1443,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:11:34.122: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename tables +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-1025 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:11:34.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-1025" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":311,"completed":93,"skipped":1457,"failed":0} +SSSSSSSSS +------------------------------ +[k8s.io] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:11:34.312: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8440 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod liveness-25803fbb-a533-4b75-83d2-83b358a932b7 in namespace container-probe-8440 +Feb 12 10:11:36.498: INFO: Started pod liveness-25803fbb-a533-4b75-83d2-83b358a932b7 in namespace container-probe-8440 +STEP: checking the pod's current state and verifying that restartCount is present +Feb 12 10:11:36.504: INFO: Initial restart count of pod liveness-25803fbb-a533-4b75-83d2-83b358a932b7 is 0 +Feb 12 10:11:48.573: INFO: Restart count of pod container-probe-8440/liveness-25803fbb-a533-4b75-83d2-83b358a932b7 is now 1 (12.069552104s elapsed) +Feb 12 10:12:08.698: INFO: Restart count of pod container-probe-8440/liveness-25803fbb-a533-4b75-83d2-83b358a932b7 is now 2 (32.194177666s elapsed) +Feb 12 10:12:28.825: INFO: Restart count of pod container-probe-8440/liveness-25803fbb-a533-4b75-83d2-83b358a932b7 is now 3 (52.321543183s elapsed) +Feb 12 10:12:48.978: INFO: Restart count of pod container-probe-8440/liveness-25803fbb-a533-4b75-83d2-83b358a932b7 is now 4 (1m12.474368872s elapsed) +Feb 12 10:13:09.097: INFO: Restart count of pod container-probe-8440/liveness-25803fbb-a533-4b75-83d2-83b358a932b7 is now 5 (1m32.592631594s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:13:09.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-8440" for this suite. 
+ +• [SLOW TEST:94.818 seconds] +[k8s.io] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":311,"completed":94,"skipped":1466,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:13:09.131: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4191 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:13:09.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-4191 version' +Feb 12 10:13:09.479: INFO: stderr: "" +Feb 12 10:13:09.479: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.2\", GitCommit:\"faecb196815e248d3ecfb03c680a4507229c2a56\", GitTreeState:\"clean\", BuildDate:\"2021-01-13T13:28:09Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.2\", GitCommit:\"faecb196815e248d3ecfb03c680a4507229c2a56\", GitTreeState:\"clean\", BuildDate:\"2021-01-13T13:20:00Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:13:09.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4191" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":311,"completed":95,"skipped":1482,"failed":0} +SSSSSSSSS +------------------------------ +[k8s.io] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:13:09.492: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-2943 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a Pod with a static label +STEP: watching for Pod to be ready +Feb 12 10:13:09.676: INFO: observed Pod pod-test in namespace pods-2943 in phase Pending conditions [] +Feb 12 10:13:09.687: INFO: observed Pod pod-test in namespace pods-2943 in phase Pending conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:13:09 +0000 UTC }] +Feb 12 10:13:09.713: INFO: observed Pod pod-test in namespace pods-2943 in phase Pending conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:13:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:13:09 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:13:09 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:13:09 +0000 UTC }] +Feb 12 10:13:10.752: INFO: observed Pod pod-test in namespace pods-2943 in phase Pending conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:13:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:13:09 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:13:09 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:13:09 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data +Feb 12 10:13:12.142: INFO: observed event type ADDED +STEP: getting the Pod and ensuring that it's patched +STEP: getting the PodStatus +STEP: replacing the Pod's status Ready condition to False +STEP: check the Pod again to ensure its Ready conditions are False +STEP: deleting the Pod via a Collection with a LabelSelector +STEP: watching for the Pod to be deleted +Feb 12 10:13:12.179: INFO: observed event type ADDED +Feb 12 10:13:12.179: INFO: observed event type MODIFIED +Feb 12 10:13:12.180: INFO: observed event type MODIFIED +Feb 12 10:13:12.180: INFO: observed event type MODIFIED +Feb 12 10:13:12.180: INFO: observed event type MODIFIED +Feb 12 10:13:12.180: INFO: observed event type MODIFIED +Feb 12 10:13:12.181: INFO: observed event type MODIFIED +Feb 12 10:13:12.181: INFO: observed event type MODIFIED 
+[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:13:12.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2943" for this suite. +•{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":311,"completed":96,"skipped":1491,"failed":0} +SSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:13:12.196: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-7346 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Feb 12 10:13:12.370: INFO: Waiting up to 1m0s for all nodes to be ready +Feb 12 10:14:12.437: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:14:12.442: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-3148 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 +STEP: Finding an available node +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +Feb 12 10:14:14.664: INFO: found a healthy node: k8s-calico-coreos-yo5lpoxhpdlk-node-1 +[It] runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:14:20.866: INFO: pods created so far: [1 1 1] +Feb 12 10:14:20.866: INFO: length of pods created so far: 3 +Feb 12 10:14:38.895: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:14:45.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-3148" for this suite. 
+[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:14:45.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-7346" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:93.855 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":311,"completed":97,"skipped":1494,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:14:46.057: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-485 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating service in namespace services-485 +STEP: creating service affinity-nodeport-transition in namespace services-485 +STEP: creating replication controller affinity-nodeport-transition in namespace services-485 +I0212 10:14:46.305849 22 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-485, replica count: 3 +I0212 10:14:49.356779 22 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 10:14:49.381: INFO: Creating new exec pod +Feb 12 10:14:54.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-485 exec execpod-affinitydqf84 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' +Feb 12 10:14:55.079: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Feb 12 
10:14:55.079: INFO: stdout: "" +Feb 12 10:14:55.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-485 exec execpod-affinitydqf84 -- /bin/sh -x -c nc -zv -t -w 2 10.254.105.58 80' +Feb 12 10:14:55.493: INFO: stderr: "+ nc -zv -t -w 2 10.254.105.58 80\nConnection to 10.254.105.58 80 port [tcp/http] succeeded!\n" +Feb 12 10:14:55.494: INFO: stdout: "" +Feb 12 10:14:55.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-485 exec execpod-affinitydqf84 -- /bin/sh -x -c nc -zv -t -w 2 10.0.0.115 32512' +Feb 12 10:14:55.933: INFO: stderr: "+ nc -zv -t -w 2 10.0.0.115 32512\nConnection to 10.0.0.115 32512 port [tcp/32512] succeeded!\n" +Feb 12 10:14:55.933: INFO: stdout: "" +Feb 12 10:14:55.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-485 exec execpod-affinitydqf84 -- /bin/sh -x -c nc -zv -t -w 2 10.0.0.234 32512' +Feb 12 10:14:56.362: INFO: stderr: "+ nc -zv -t -w 2 10.0.0.234 32512\nConnection to 10.0.0.234 32512 port [tcp/32512] succeeded!\n" +Feb 12 10:14:56.362: INFO: stdout: "" +Feb 12 10:14:56.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-485 exec execpod-affinitydqf84 -- /bin/sh -x -c nc -zv -t -w 2 172.24.4.246 32512' +Feb 12 10:14:56.841: INFO: stderr: "+ nc -zv -t -w 2 172.24.4.246 32512\nConnection to 172.24.4.246 32512 port [tcp/32512] succeeded!\n" +Feb 12 10:14:56.841: INFO: stdout: "" +Feb 12 10:14:56.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-485 exec execpod-affinitydqf84 -- /bin/sh -x -c nc -zv -t -w 2 172.24.4.157 32512' +Feb 12 10:14:57.241: INFO: stderr: "+ nc -zv -t -w 2 172.24.4.157 32512\nConnection to 172.24.4.157 32512 port [tcp/32512] succeeded!\n" +Feb 12 10:14:57.241: INFO: stdout: "" +Feb 12 10:14:57.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-485 exec execpod-affinitydqf84 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.0.0.115:32512/ ; done' +Feb 12 10:14:57.859: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n" +Feb 12 10:14:57.859: INFO: stdout: 
"\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-g9dn7\naffinity-nodeport-transition-g9dn7\naffinity-nodeport-transition-g9dn7\naffinity-nodeport-transition-7x8f6\naffinity-nodeport-transition-g9dn7\naffinity-nodeport-transition-7x8f6\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-g9dn7\naffinity-nodeport-transition-g9dn7\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-7x8f6\naffinity-nodeport-transition-7x8f6\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-7x8f6" +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-g9dn7 +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-g9dn7 +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-g9dn7 +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-7x8f6 +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-g9dn7 +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-7x8f6 +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-g9dn7 +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-g9dn7 +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-7x8f6 +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-7x8f6 +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:57.859: INFO: Received response from host: affinity-nodeport-transition-7x8f6 +Feb 12 10:14:57.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-485 exec execpod-affinitydqf84 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.0.0.115:32512/ ; done' +Feb 12 10:14:58.405: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:32512/\n" +Feb 12 10:14:58.405: INFO: stdout: 
"\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f\naffinity-nodeport-transition-5bq7f" +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Received response from host: affinity-nodeport-transition-5bq7f +Feb 12 10:14:58.405: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-485, will wait for the garbage collector to delete the pods +Feb 12 10:14:58.499: INFO: Deleting ReplicationController affinity-nodeport-transition took: 21.875315ms +Feb 12 10:14:59.502: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 1.003079335s +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:15:33.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-485" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:47.800 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":311,"completed":98,"skipped":1504,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:15:33.858: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8912 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name projected-configmap-test-volume-map-1aae07e5-f97f-4b7f-a8ed-9370723eb8fe +STEP: Creating a pod to test consume configMaps +Feb 12 10:15:34.048: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3e1eab68-9b1e-4a35-b705-6804639a6971" in namespace "projected-8912" to be "Succeeded or Failed" +Feb 12 10:15:34.053: INFO: Pod "pod-projected-configmaps-3e1eab68-9b1e-4a35-b705-6804639a6971": Phase="Pending", Reason="", readiness=false. Elapsed: 5.386892ms +Feb 12 10:15:36.065: INFO: Pod "pod-projected-configmaps-3e1eab68-9b1e-4a35-b705-6804639a6971": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017235362s +STEP: Saw pod success +Feb 12 10:15:36.065: INFO: Pod "pod-projected-configmaps-3e1eab68-9b1e-4a35-b705-6804639a6971" satisfied condition "Succeeded or Failed" +Feb 12 10:15:36.069: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-projected-configmaps-3e1eab68-9b1e-4a35-b705-6804639a6971 container agnhost-container: +STEP: delete the pod +Feb 12 10:15:36.161: INFO: Waiting for pod pod-projected-configmaps-3e1eab68-9b1e-4a35-b705-6804639a6971 to disappear +Feb 12 10:15:36.168: INFO: Pod pod-projected-configmaps-3e1eab68-9b1e-4a35-b705-6804639a6971 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:15:36.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8912" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":311,"completed":99,"skipped":1509,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:15:36.189: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4668 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0666 on node default medium +Feb 12 10:15:36.358: INFO: Waiting up to 5m0s for pod "pod-736e16a2-938e-4c49-8b7f-ea89c2fb54e0" in namespace "emptydir-4668" to be "Succeeded or Failed" +Feb 12 10:15:36.365: INFO: Pod "pod-736e16a2-938e-4c49-8b7f-ea89c2fb54e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405245ms +Feb 12 10:15:38.377: INFO: Pod "pod-736e16a2-938e-4c49-8b7f-ea89c2fb54e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018482725s +STEP: Saw pod success +Feb 12 10:15:38.377: INFO: Pod "pod-736e16a2-938e-4c49-8b7f-ea89c2fb54e0" satisfied condition "Succeeded or Failed" +Feb 12 10:15:38.381: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-736e16a2-938e-4c49-8b7f-ea89c2fb54e0 container test-container: +STEP: delete the pod +Feb 12 10:15:38.408: INFO: Waiting for pod pod-736e16a2-938e-4c49-8b7f-ea89c2fb54e0 to disappear +Feb 12 10:15:38.412: INFO: Pod pod-736e16a2-938e-4c49-8b7f-ea89c2fb54e0 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:15:38.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4668" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":100,"skipped":1524,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:15:38.428: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-7817 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 +[It] deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:15:38.603: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Feb 12 10:15:43.619: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Feb 12 10:15:43.620: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 +Feb 12 10:15:43.656: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-7817 1d20f3b7-a426-42c0-9988-91b075f0a6ac 575851 1 2021-02-12 10:15:43 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-02-12 10:15:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003ad2948 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + +Feb 12 10:15:43.663: INFO: New ReplicaSet "test-cleanup-deployment-685c4f8568" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:{test-cleanup-deployment-685c4f8568 deployment-7817 07d8353e-4282-4148-881c-581525295403 575855 1 2021-02-12 10:15:43 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 1d20f3b7-a426-42c0-9988-91b075f0a6ac 0xc003ad2d77 0xc003ad2d78}] [] [{kube-controller-manager Update apps/v1 2021-02-12 10:15:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d20f3b7-a426-42c0-9988-91b075f0a6ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 685c4f8568,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003ad2e08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Feb 12 10:15:43.663: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Feb 12 10:15:43.663: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller 
deployment-7817 22d75d91-dc6e-4b0c-8ab0-90488bcc0432 575853 1 2021-02-12 10:15:38 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 1d20f3b7-a426-42c0-9988-91b075f0a6ac 0xc003ad2c67 0xc003ad2c68}] [] [{e2e.test Update apps/v1 2021-02-12 10:15:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-12 10:15:43 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"1d20f3b7-a426-42c0-9988-91b075f0a6ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd 10.60.253.37/magnum/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003ad2d08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Feb 12 10:15:43.670: INFO: Pod "test-cleanup-controller-px8mg" is available: +&Pod{ObjectMeta:{test-cleanup-controller-px8mg test-cleanup-controller- deployment-7817 40902bf3-def8-4f2f-ab5a-3c19e1dad18e 575828 0 2021-02-12 10:15:38 +0000 UTC map[name:cleanup-pod pod:httpd] map[cni.projectcalico.org/podIP:10.100.45.9/32 cni.projectcalico.org/podIPs:10.100.45.9/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-cleanup-controller 22d75d91-dc6e-4b0c-8ab0-90488bcc0432 0xc003ad3227 0xc003ad3228}] [] [{kube-controller-manager Update v1 2021-02-12 10:15:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"22d75d91-dc6e-4b0c-8ab0-90488bcc0432\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 10:15:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:15:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.45.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fgmh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fgmh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fgmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:15:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:15:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:15:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:15:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.234,PodIP:10.100.45.9,StartTime:2021-02-12 10:15:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:15:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,ImageID:docker-pullable://10.60.253.37/magnum/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:docker://488959eb9f2a64803f664e4f102a5ad3a01a2a1657de2fa609b10eaad63c9b03,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.45.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:15:43.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7817" for this suite. + +• [SLOW TEST:5.262 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":311,"completed":101,"skipped":1578,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:15:43.691: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6206 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating service in namespace services-6206 +STEP: creating service affinity-clusterip-transition in namespace services-6206 +STEP: creating replication controller affinity-clusterip-transition in namespace services-6206 +I0212 10:15:43.884762 22 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-6206, replica count: 3 +I0212 10:15:46.935449 22 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 10:15:46.956: INFO: Creating new exec pod +Feb 12 10:15:51.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-6206 exec execpod-affinitybzqwg -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' +Feb 12 10:15:52.409: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Feb 12 10:15:52.409: INFO: stdout: "" +Feb 12 10:15:52.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-6206 exec execpod-affinitybzqwg -- /bin/sh -x -c nc -zv -t -w 2 10.254.35.252 80' +Feb 12 10:15:52.822: INFO: stderr: "+ nc -zv -t -w 2 10.254.35.252 80\nConnection to 10.254.35.252 80 port [tcp/http] succeeded!\n" +Feb 12 10:15:52.822: INFO: stdout: "" +Feb 12 10:15:52.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-6206 exec execpod-affinitybzqwg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.254.35.252:80/ ; done' +Feb 12 10:15:53.404: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n" +Feb 12 10:15:53.405: INFO: stdout: "\naffinity-clusterip-transition-p8nts\naffinity-clusterip-transition-p8nts\naffinity-clusterip-transition-p8nts\naffinity-clusterip-transition-rxm8q\naffinity-clusterip-transition-rxm8q\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-rxm8q\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-p8nts\naffinity-clusterip-transition-p8nts\naffinity-clusterip-transition-rxm8q\naffinity-clusterip-transition-rxm8q\naffinity-clusterip-transition-p8nts\naffinity-clusterip-transition-p8nts\naffinity-clusterip-transition-rxm8q" +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-p8nts +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-p8nts +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-p8nts +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-rxm8q +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-rxm8q +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.405: INFO: Received response from host: 
affinity-clusterip-transition-rxm8q +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-p8nts +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-p8nts +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-rxm8q +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-rxm8q +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-p8nts +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-p8nts +Feb 12 10:15:53.405: INFO: Received response from host: affinity-clusterip-transition-rxm8q +Feb 12 10:15:53.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-6206 exec execpod-affinitybzqwg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.254.35.252:80/ ; done' +Feb 12 10:15:53.990: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.35.252:80/\n" +Feb 12 10:15:53.991: INFO: stdout: "\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2\naffinity-clusterip-transition-b2rn2" +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 
10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Received response from host: affinity-clusterip-transition-b2rn2 +Feb 12 10:15:53.991: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-6206, will wait for the garbage collector to delete the pods +Feb 12 10:15:54.153: INFO: Deleting ReplicationController affinity-clusterip-transition took: 8.170271ms +Feb 12 10:15:55.154: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 1.000476943s +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:03.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6206" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:20.108 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":311,"completed":102,"skipped":1590,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:03.805: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-938 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 
10:16:03.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f90b0bee-e1cc-4bcd-9c21-315bc549c689" in namespace "downward-api-938" to be "Succeeded or Failed" +Feb 12 10:16:04.008: INFO: Pod "downwardapi-volume-f90b0bee-e1cc-4bcd-9c21-315bc549c689": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119982ms +Feb 12 10:16:06.017: INFO: Pod "downwardapi-volume-f90b0bee-e1cc-4bcd-9c21-315bc549c689": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017991902s +Feb 12 10:16:08.032: INFO: Pod "downwardapi-volume-f90b0bee-e1cc-4bcd-9c21-315bc549c689": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032772765s +STEP: Saw pod success +Feb 12 10:16:08.032: INFO: Pod "downwardapi-volume-f90b0bee-e1cc-4bcd-9c21-315bc549c689" satisfied condition "Succeeded or Failed" +Feb 12 10:16:08.036: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-f90b0bee-e1cc-4bcd-9c21-315bc549c689 container client-container: +STEP: delete the pod +Feb 12 10:16:08.079: INFO: Waiting for pod downwardapi-volume-f90b0bee-e1cc-4bcd-9c21-315bc549c689 to disappear +Feb 12 10:16:08.084: INFO: Pod downwardapi-volume-f90b0bee-e1cc-4bcd-9c21-315bc549c689 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:08.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-938" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":103,"skipped":1624,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:08.100: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4393 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward api env vars +Feb 12 10:16:08.290: INFO: Waiting up to 5m0s for pod "downward-api-e7080813-d6ea-41e4-8524-ec7b252e6832" in namespace "downward-api-4393" to be "Succeeded or Failed" +Feb 12 10:16:08.300: INFO: Pod "downward-api-e7080813-d6ea-41e4-8524-ec7b252e6832": Phase="Pending", Reason="", readiness=false. Elapsed: 9.615701ms +Feb 12 10:16:10.307: INFO: Pod "downward-api-e7080813-d6ea-41e4-8524-ec7b252e6832": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017292114s +Feb 12 10:16:12.317: INFO: Pod "downward-api-e7080813-d6ea-41e4-8524-ec7b252e6832": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026415998s +STEP: Saw pod success +Feb 12 10:16:12.317: INFO: Pod "downward-api-e7080813-d6ea-41e4-8524-ec7b252e6832" satisfied condition "Succeeded or Failed" +Feb 12 10:16:12.320: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downward-api-e7080813-d6ea-41e4-8524-ec7b252e6832 container dapi-container: +STEP: delete the pod +Feb 12 10:16:12.345: INFO: Waiting for pod downward-api-e7080813-d6ea-41e4-8524-ec7b252e6832 to disappear +Feb 12 10:16:12.354: INFO: Pod downward-api-e7080813-d6ea-41e4-8524-ec7b252e6832 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:12.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4393" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":311,"completed":104,"skipped":1636,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Events + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:12.369: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-960 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a test event +STEP: listing all events in all namespaces +STEP: patching the test event +STEP: fetching the test event +STEP: deleting the test event +STEP: listing all events in all namespaces +[AfterEach] [sig-api-machinery] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:12.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-960" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":311,"completed":105,"skipped":1688,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:12.585: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6593 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating Agnhost RC +Feb 12 10:16:12.747: INFO: namespace kubectl-6593 +Feb 12 10:16:12.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-6593 create -f -' +Feb 12 10:16:13.281: INFO: stderr: "" +Feb 12 10:16:13.281: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Feb 12 10:16:14.292: INFO: Selector matched 1 pods for map[app:agnhost] +Feb 12 10:16:14.292: INFO: Found 0 / 1 +Feb 12 10:16:15.286: INFO: Selector matched 1 pods for map[app:agnhost] +Feb 12 10:16:15.286: INFO: Found 1 / 1 +Feb 12 10:16:15.286: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Feb 12 10:16:15.289: INFO: Selector matched 1 pods for map[app:agnhost] +Feb 12 10:16:15.289: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Feb 12 10:16:15.289: INFO: wait on agnhost-primary startup in kubectl-6593 +Feb 12 10:16:15.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-6593 logs agnhost-primary-jr7f8 agnhost-primary' +Feb 12 10:16:15.448: INFO: stderr: "" +Feb 12 10:16:15.448: INFO: stdout: "Paused\n" +STEP: exposing RC +Feb 12 10:16:15.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-6593 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Feb 12 10:16:15.625: INFO: stderr: "" +Feb 12 10:16:15.625: INFO: stdout: "service/rm2 exposed\n" +Feb 12 10:16:15.629: INFO: Service rm2 in namespace kubectl-6593 found. +STEP: exposing service +Feb 12 10:16:17.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-6593 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Feb 12 10:16:17.827: INFO: stderr: "" +Feb 12 10:16:17.827: INFO: stdout: "service/rm3 exposed\n" +Feb 12 10:16:17.846: INFO: Service rm3 in namespace kubectl-6593 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:19.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6593" for this suite. 
+ +• [SLOW TEST:7.303 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl expose + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229 + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":311,"completed":106,"skipped":1708,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:19.889: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5083 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name configmap-test-volume-map-c9501dba-2511-41e3-b0f1-e3636f7e6ab9 +STEP: Creating a pod to test consume configMaps +Feb 12 10:16:20.126: INFO: Waiting up to 5m0s for pod "pod-configmaps-a34751b8-7bd6-4cd4-b078-2e5a573fcff4" in namespace "configmap-5083" to be "Succeeded or Failed" +Feb 12 10:16:20.134: INFO: Pod "pod-configmaps-a34751b8-7bd6-4cd4-b078-2e5a573fcff4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.832511ms +Feb 12 10:16:22.139: INFO: Pod "pod-configmaps-a34751b8-7bd6-4cd4-b078-2e5a573fcff4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012136052s +Feb 12 10:16:24.152: INFO: Pod "pod-configmaps-a34751b8-7bd6-4cd4-b078-2e5a573fcff4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0255415s +STEP: Saw pod success +Feb 12 10:16:24.152: INFO: Pod "pod-configmaps-a34751b8-7bd6-4cd4-b078-2e5a573fcff4" satisfied condition "Succeeded or Failed" +Feb 12 10:16:24.156: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-configmaps-a34751b8-7bd6-4cd4-b078-2e5a573fcff4 container agnhost-container: +STEP: delete the pod +Feb 12 10:16:24.182: INFO: Waiting for pod pod-configmaps-a34751b8-7bd6-4cd4-b078-2e5a573fcff4 to disappear +Feb 12 10:16:24.196: INFO: Pod pod-configmaps-a34751b8-7bd6-4cd4-b078-2e5a573fcff4 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:24.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5083" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":107,"skipped":1727,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:24.207: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5900 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0777 on node default medium +Feb 12 10:16:24.374: INFO: Waiting up to 5m0s for pod "pod-1795984e-9e51-4e73-a25b-7fcd11e9623c" in namespace "emptydir-5900" to be "Succeeded or Failed" +Feb 12 10:16:24.380: INFO: Pod "pod-1795984e-9e51-4e73-a25b-7fcd11e9623c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.987276ms +Feb 12 10:16:26.394: INFO: Pod "pod-1795984e-9e51-4e73-a25b-7fcd11e9623c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019298646s +Feb 12 10:16:28.406: INFO: Pod "pod-1795984e-9e51-4e73-a25b-7fcd11e9623c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031412203s +STEP: Saw pod success +Feb 12 10:16:28.406: INFO: Pod "pod-1795984e-9e51-4e73-a25b-7fcd11e9623c" satisfied condition "Succeeded or Failed" +Feb 12 10:16:28.409: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-1795984e-9e51-4e73-a25b-7fcd11e9623c container test-container: +STEP: delete the pod +Feb 12 10:16:28.431: INFO: Waiting for pod pod-1795984e-9e51-4e73-a25b-7fcd11e9623c to disappear +Feb 12 10:16:28.436: INFO: Pod pod-1795984e-9e51-4e73-a25b-7fcd11e9623c no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:28.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5900" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":108,"skipped":1736,"failed":0} +SSSSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:28.447: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-9267 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test override command +Feb 12 10:16:28.619: INFO: Waiting up to 5m0s for pod "client-containers-4ea810be-30ef-47a4-bdab-dd512fedc602" in namespace "containers-9267" to be "Succeeded or Failed" +Feb 12 10:16:28.627: INFO: Pod "client-containers-4ea810be-30ef-47a4-bdab-dd512fedc602": Phase="Pending", Reason="", readiness=false. Elapsed: 7.166575ms +Feb 12 10:16:30.634: INFO: Pod "client-containers-4ea810be-30ef-47a4-bdab-dd512fedc602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014856923s +Feb 12 10:16:32.644: INFO: Pod "client-containers-4ea810be-30ef-47a4-bdab-dd512fedc602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025120783s +STEP: Saw pod success +Feb 12 10:16:32.645: INFO: Pod "client-containers-4ea810be-30ef-47a4-bdab-dd512fedc602" satisfied condition "Succeeded or Failed" +Feb 12 10:16:32.648: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod client-containers-4ea810be-30ef-47a4-bdab-dd512fedc602 container agnhost-container: +STEP: delete the pod +Feb 12 10:16:32.673: INFO: Waiting for pod client-containers-4ea810be-30ef-47a4-bdab-dd512fedc602 to disappear +Feb 12 10:16:32.681: INFO: Pod client-containers-4ea810be-30ef-47a4-bdab-dd512fedc602 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:32.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-9267" for this suite. 
+•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":311,"completed":109,"skipped":1746,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:32.705: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5096 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: validating api versions +Feb 12 10:16:32.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-5096 api-versions' +Feb 12 10:16:33.028: INFO: stderr: "" +Feb 12 10:16:33.028: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\nbatch/v2alpha1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1alpha1\nflowcontrol.apiserver.k8s.io/v1beta1\ninternal.apiserver.k8s.io/v1alpha1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1alpha1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1alpha1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1alpha1\nscheduling.k8s.io/v1beta1\nsnapshot.storage.k8s.io/v1alpha1\nstorage.k8s.io/v1\nstorage.k8s.io/v1alpha1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:33.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5096" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":311,"completed":110,"skipped":1841,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:33.047: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-7333 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:16:33.249: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-2b30d9ec-0ce1-4928-bdf0-391ae2874718" in namespace "security-context-test-7333" to be "Succeeded or Failed" +Feb 12 10:16:33.256: INFO: Pod "alpine-nnp-false-2b30d9ec-0ce1-4928-bdf0-391ae2874718": Phase="Pending", Reason="", readiness=false. Elapsed: 7.741075ms +Feb 12 10:16:35.269: INFO: Pod "alpine-nnp-false-2b30d9ec-0ce1-4928-bdf0-391ae2874718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020169073s +Feb 12 10:16:35.269: INFO: Pod "alpine-nnp-false-2b30d9ec-0ce1-4928-bdf0-391ae2874718" satisfied condition "Succeeded or Failed" +[AfterEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:35.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-7333" for this suite. 
+•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":111,"skipped":1872,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:35.295: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1038 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name configmap-test-volume-f495d517-60ad-42d7-915d-242dfa581dd1 +STEP: Creating a pod to test consume configMaps +Feb 12 10:16:35.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-118c6ebb-caf9-4588-9ffe-8bfd1b6f0c71" in namespace "configmap-1038" to be "Succeeded or Failed" +Feb 12 10:16:35.479: INFO: Pod "pod-configmaps-118c6ebb-caf9-4588-9ffe-8bfd1b6f0c71": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55054ms +Feb 12 10:16:37.485: INFO: Pod "pod-configmaps-118c6ebb-caf9-4588-9ffe-8bfd1b6f0c71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015418913s +STEP: Saw pod success +Feb 12 10:16:37.485: INFO: Pod "pod-configmaps-118c6ebb-caf9-4588-9ffe-8bfd1b6f0c71" satisfied condition "Succeeded or Failed" +Feb 12 10:16:37.488: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-configmaps-118c6ebb-caf9-4588-9ffe-8bfd1b6f0c71 container agnhost-container: +STEP: delete the pod +Feb 12 10:16:37.513: INFO: Waiting for pod pod-configmaps-118c6ebb-caf9-4588-9ffe-8bfd1b6f0c71 to disappear +Feb 12 10:16:37.516: INFO: Pod pod-configmaps-118c6ebb-caf9-4588-9ffe-8bfd1b6f0c71 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:37.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1038" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":112,"skipped":1879,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:37.529: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-4568 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 +[It] deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:16:37.682: INFO: Creating deployment "webserver-deployment" +Feb 12 10:16:37.689: INFO: Waiting for observed generation 1 +Feb 12 10:16:39.705: INFO: Waiting for all required pods to come up +Feb 12 10:16:39.712: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running +Feb 12 10:16:43.740: INFO: Waiting for deployment "webserver-deployment" to complete +Feb 12 10:16:43.750: INFO: Updating deployment "webserver-deployment" with a non-existent image +Feb 12 10:16:43.768: INFO: Updating deployment webserver-deployment +Feb 12 10:16:43.769: INFO: Waiting for observed generation 2 +Feb 12 10:16:45.795: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Feb 12 10:16:45.801: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Feb 12 10:16:45.804: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Feb 12 10:16:45.814: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Feb 12 10:16:45.814: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Feb 12 10:16:45.818: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Feb 12 10:16:45.826: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Feb 12 10:16:45.826: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Feb 12 10:16:45.835: INFO: Updating deployment webserver-deployment +Feb 12 10:16:45.835: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Feb 12 10:16:45.844: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Feb 12 10:16:45.847: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 +Feb 12 10:16:45.859: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-4568 7cb65fb9-1c8a-4146-9aa9-88ec63e65ff2 576763 3 2021-02-12 10:16:37 +0000 UTC 
map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-02-12 10:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00325e378 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-02-12 10:16:41 +0000 UTC,LastTransitionTime:2021-02-12 10:16:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-02-12 10:16:43 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Feb 12 10:16:45.868: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-4568 52acad8c-4b55-4485-b5e7-97769bc0829a 576767 3 2021-02-12 10:16:43 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 
deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 7cb65fb9-1c8a-4146-9aa9-88ec63e65ff2 0xc00333b9f7 0xc00333b9f8}] [] [{kube-controller-manager Update apps/v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7cb65fb9-1c8a-4146-9aa9-88ec63e65ff2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00333ba78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Feb 12 10:16:45.868: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Feb 12 10:16:45.868: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-7c4c9ff4d6 deployment-4568 310a7cdc-35d1-4388-ba32-bc926bec1717 576764 3 2021-02-12 10:16:37 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 7cb65fb9-1c8a-4146-9aa9-88ec63e65ff2 0xc00333bad7 0xc00333bad8}] [] [{kube-controller-manager Update apps/v1 2021-02-12 10:16:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7cb65fb9-1c8a-4146-9aa9-88ec63e65ff2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7c4c9ff4d6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[] [] [] []} {[] [] [{httpd 10.60.253.37/magnum/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00333bb58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Feb 12 10:16:45.885: INFO: Pod "webserver-deployment-795d758f88-57n46" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-57n46 webserver-deployment-795d758f88- deployment-4568 296e24ab-236a-46cf-a126-654fecc93d7f 576750 0 2021-02-12 10:16:43 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:10.100.45.47/32 cni.projectcalico.org/podIPs:10.100.45.47/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 52acad8c-4b55-4485-b5e7-97769bc0829a 0xc00325e747 0xc00325e748}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52acad8c-4b55-4485-b5e7-97769bc0829a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {calico Update v1 2021-02-12 10:16:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.234,PodIP:,StartTime:2021-02-12 10:16:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.886: INFO: Pod "webserver-deployment-795d758f88-6f5c8" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-6f5c8 webserver-deployment-795d758f88- deployment-4568 d93081bf-f7f2-4887-9d89-921ce6636db9 576761 0 2021-02-12 10:16:43 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:10.100.45.26/32 cni.projectcalico.org/podIPs:10.100.45.26/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 52acad8c-4b55-4485-b5e7-97769bc0829a 0xc00325e910 0xc00325e911}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52acad8c-4b55-4485-b5e7-97769bc0829a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {calico Update v1 2021-02-12 10:16:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.234,PodIP:,StartTime:2021-02-12 10:16:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.886: INFO: Pod "webserver-deployment-795d758f88-bx8nx" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-bx8nx webserver-deployment-795d758f88- deployment-4568 1227f0ef-e2f5-4dda-a459-07c48265ab33 576741 0 2021-02-12 10:16:43 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:10.100.92.243/32 cni.projectcalico.org/podIPs:10.100.92.243/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 52acad8c-4b55-4485-b5e7-97769bc0829a 0xc00325eaf0 0xc00325eaf1}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52acad8c-4b55-4485-b5e7-97769bc0829a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {calico Update v1 2021-02-12 10:16:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.115,PodIP:,StartTime:2021-02-12 10:16:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.887: INFO: Pod "webserver-deployment-795d758f88-ftp7q" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-ftp7q webserver-deployment-795d758f88- deployment-4568 d6302f2d-2e22-405b-8eb4-28ad95851ea6 576771 0 2021-02-12 10:16:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 52acad8c-4b55-4485-b5e7-97769bc0829a 0xc00325eca0 0xc00325eca1}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52acad8c-4b55-4485-b5e7-97769bc0829a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFir
st,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.888: INFO: Pod "webserver-deployment-795d758f88-pgjq5" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-pgjq5 webserver-deployment-795d758f88- deployment-4568 b66f00b4-99fb-44b9-a674-a6c1718bbb2c 576757 0 2021-02-12 10:16:43 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:10.100.45.36/32 cni.projectcalico.org/podIPs:10.100.45.36/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 52acad8c-4b55-4485-b5e7-97769bc0829a 0xc00325ede0 0xc00325ede1}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52acad8c-4b55-4485-b5e7-97769bc0829a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {calico Update v1 2021-02-12 10:16:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.234,PodIP:,StartTime:2021-02-12 10:16:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.888: INFO: Pod "webserver-deployment-795d758f88-qx2sg" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-qx2sg webserver-deployment-795d758f88- deployment-4568 8ad8e379-b478-4fdd-a96c-6065643aa00a 576742 0 2021-02-12 10:16:43 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/podIP:10.100.92.241/32 cni.projectcalico.org/podIPs:10.100.92.241/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 52acad8c-4b55-4485-b5e7-97769bc0829a 0xc00325efb0 0xc00325efb1}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52acad8c-4b55-4485-b5e7-97769bc0829a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-12 10:16:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {calico Update v1 2021-02-12 10:16:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.115,PodIP:,StartTime:2021-02-12 10:16:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.889: INFO: Pod "webserver-deployment-7c4c9ff4d6-6bgns" is not available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-6bgns webserver-deployment-7c4c9ff4d6- deployment-4568 35478ac9-1e2f-44de-9a71-522f07da03f9 576769 0 2021-02-12 10:16:45 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc00325f160 0xc00325f161}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSecond
s:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.889: INFO: Pod "webserver-deployment-7c4c9ff4d6-9cnr7" is available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-9cnr7 webserver-deployment-7c4c9ff4d6- deployment-4568 4edbb9a0-6b1d-494e-a698-59706cf4c1c2 576588 0 2021-02-12 10:16:37 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[cni.projectcalico.org/podIP:10.100.45.15/32 cni.projectcalico.org/podIPs:10.100.45.15/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc00325f2a0 0xc00325f2a1}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 10:16:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:16:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.45.15\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.234,PodIP:10.100.45.15,StartTime:2021-02-12 10:16:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:16:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,ImageID:docker-pullable://10.60.253.37/magnum/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:docker://94b1a89cc04dfea463f4dd02fbd285225f9148e1287b831a131e57e02bad7a8b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.45.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.890: INFO: Pod "webserver-deployment-7c4c9ff4d6-fzf92" is available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-fzf92 webserver-deployment-7c4c9ff4d6- deployment-4568 51198fc5-56b7-43c3-a19d-ba0ec32ae279 576591 0 2021-02-12 10:16:37 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[cni.projectcalico.org/podIP:10.100.45.22/32 cni.projectcalico.org/podIPs:10.100.45.22/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc00325f470 0xc00325f471}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 10:16:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:16:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.45.22\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.234,PodIP:10.100.45.22,StartTime:2021-02-12 10:16:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:16:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,ImageID:docker-pullable://10.60.253.37/magnum/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:docker://f1229ff15d818e10f66c1565ff0c4ea69607cfdabd56e7f5a2d224d9f33f20d2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.45.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.890: INFO: Pod "webserver-deployment-7c4c9ff4d6-q5pd8" is available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-q5pd8 webserver-deployment-7c4c9ff4d6- deployment-4568 68a52515-9e8d-4f39-b5ec-be013943ef11 576585 0 2021-02-12 10:16:37 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[cni.projectcalico.org/podIP:10.100.45.24/32 cni.projectcalico.org/podIPs:10.100.45.24/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc00325f640 0xc00325f641}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 10:16:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:16:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.45.24\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.234,PodIP:10.100.45.24,StartTime:2021-02-12 10:16:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:16:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,ImageID:docker-pullable://10.60.253.37/magnum/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:docker://b709b83f50f6e16590f9b4227fac2369352d90cf588eae3aa401ca194871baa4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.45.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.891: INFO: Pod "webserver-deployment-7c4c9ff4d6-qstwk" is available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-qstwk webserver-deployment-7c4c9ff4d6- deployment-4568 ad664a48-a555-47ee-b9a7-9a0ab49f7db9 576657 0 2021-02-12 10:16:37 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[cni.projectcalico.org/podIP:10.100.92.248/32 cni.projectcalico.org/podIPs:10.100.92.248/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc00325f810 0xc00325f811}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 10:16:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:16:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.92.248\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.115,PodIP:10.100.92.248,StartTime:2021-02-12 10:16:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:16:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,ImageID:docker-pullable://10.60.253.37/magnum/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:docker://cd427b5bb62c7effa8cefcb2937e98c79496eb659490b8de289da7bca381b661,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.92.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.891: INFO: Pod "webserver-deployment-7c4c9ff4d6-r8jvq" is not available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-r8jvq webserver-deployment-7c4c9ff4d6- deployment-4568 ad581aa3-db92-47e4-8e12-f43ff8903b2e 576772 0 2021-02-12 10:16:45 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc00325f9e0 0xc00325f9e1}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]Con
tainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.892: INFO: Pod "webserver-deployment-7c4c9ff4d6-rp2sr" is available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-rp2sr webserver-deployment-7c4c9ff4d6- deployment-4568 beb81980-3966-42eb-bc14-1c99ef4e74d8 576654 0 2021-02-12 10:16:37 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[cni.projectcalico.org/podIP:10.100.92.227/32 cni.projectcalico.org/podIPs:10.100.92.227/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc00325fb10 0xc00325fb11}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 10:16:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:16:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.92.227\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsO
ptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.115,PodIP:10.100.92.227,StartTime:2021-02-12 10:16:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:16:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,ImageID:docker-pullable://10.60.253.37/magnum/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:docker://ad86c1fa4a9511a1c612cc46d90330c1c3aedee73239e5099a2a7c8211b1a5d6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.92.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.892: INFO: Pod "webserver-deployment-7c4c9ff4d6-sdh6q" is available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-sdh6q webserver-deployment-7c4c9ff4d6- deployment-4568 e492a78c-979b-4484-8247-0d9ece02a1ab 576617 0 2021-02-12 10:16:37 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[cni.projectcalico.org/podIP:10.100.92.231/32 cni.projectcalico.org/podIPs:10.100.92.231/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc00325fce0 0xc00325fce1}] [] 
[{kube-controller-manager Update v1 2021-02-12 10:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 10:16:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:16:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.92.231\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname
:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.115,PodIP:10.100.92.231,StartTime:2021-02-12 10:16:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:16:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,ImageID:docker-pullable://10.60.253.37/magnum/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:docker://e370cfe7989ad6da64770f08cf6654b8bff16027d63b987a8528b50eeeda93ea,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.92.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.892: INFO: Pod "webserver-deployment-7c4c9ff4d6-v7sf5" is available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-v7sf5 webserver-deployment-7c4c9ff4d6- deployment-4568 f87f6448-5fca-4d60-adeb-e64eef6d24e8 576643 0 2021-02-12 10:16:37 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[cni.projectcalico.org/podIP:10.100.45.23/32 cni.projectcalico.org/podIPs:10.100.45.23/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc00325feb0 0xc00325feb1}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 
10:16:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:16:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.45.23\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.234,PodIP:10.100.45.23,StartTime:2021-02-12 10:16:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:16:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,ImageID:docker-pullable://10.60.253.37/magnum/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:docker://4d0cfce709da51ddb4bd2802d4ebb4e77a4fd09b8aed9b9d07caa7868cd02ff1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.45.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.893: INFO: Pod "webserver-deployment-7c4c9ff4d6-wdzdb" is available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-wdzdb webserver-deployment-7c4c9ff4d6- deployment-4568 257fec57-7d95-428f-a05e-d1a37e5ad98f 576651 0 2021-02-12 10:16:37 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[cni.projectcalico.org/podIP:10.100.92.239/32 cni.projectcalico.org/podIPs:10.100.92.239/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc0011021c0 0xc0011021c1}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 10:16:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 10:16:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.92.239\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.115,PodIP:10.100.92.239,StartTime:2021-02-12 10:16:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 10:16:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,ImageID:docker-pullable://10.60.253.37/magnum/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4,ContainerID:docker://1c58557414b1b8a6122b33352fda88f7c1910bc71c81917cb5c98d80502c5d1e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.92.239,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:16:45.893: INFO: Pod "webserver-deployment-7c4c9ff4d6-xnmgr" is not available: +&Pod{ObjectMeta:{webserver-deployment-7c4c9ff4d6-xnmgr webserver-deployment-7c4c9ff4d6- deployment-4568 38607e5a-451c-436b-b538-94c8981e637e 576775 0 2021-02-12 10:16:45 +0000 UTC map[name:httpd pod-template-hash:7c4c9ff4d6] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-7c4c9ff4d6 310a7cdc-35d1-4388-ba32-bc926bec1717 0xc001102450 0xc001102451}] [] [{kube-controller-manager Update v1 2021-02-12 10:16:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"310a7cdc-35d1-4388-ba32-bc926bec1717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnzgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnzgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnzgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,
Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:16:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:45.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4568" for this suite. + +• [SLOW TEST:8.395 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":311,"completed":113,"skipped":1887,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:45.926: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-3945 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:16:46.156: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Feb 12 10:16:50.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-3945 --namespace=crd-publish-openapi-3945 create -f -' +Feb 12 10:16:51.800: INFO: stderr: "" +Feb 12 10:16:51.800: INFO: stdout: "e2e-test-crd-publish-openapi-836-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Feb 12 10:16:51.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-3945 --namespace=crd-publish-openapi-3945 delete e2e-test-crd-publish-openapi-836-crds test-cr' +Feb 12 10:16:52.198: INFO: stderr: "" +Feb 12 10:16:52.198: INFO: stdout: "e2e-test-crd-publish-openapi-836-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Feb 12 10:16:52.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-3945 --namespace=crd-publish-openapi-3945 apply -f -' +Feb 12 10:16:52.985: INFO: stderr: "" +Feb 12 10:16:52.985: INFO: stdout: 
"e2e-test-crd-publish-openapi-836-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Feb 12 10:16:52.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-3945 --namespace=crd-publish-openapi-3945 delete e2e-test-crd-publish-openapi-836-crds test-cr' +Feb 12 10:16:53.166: INFO: stderr: "" +Feb 12 10:16:53.166: INFO: stdout: "e2e-test-crd-publish-openapi-836-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Feb 12 10:16:53.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-3945 explain e2e-test-crd-publish-openapi-836-crds' +Feb 12 10:16:53.553: INFO: stderr: "" +Feb 12 10:16:53.553: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-836-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:16:58.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-3945" for this suite. 
+ +• [SLOW TEST:12.534 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":311,"completed":114,"skipped":1899,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:16:58.461: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-8941 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod +Feb 12 10:16:58.618: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:02.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-8941" for this suite. 
+•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":311,"completed":115,"skipped":1919,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:02.815: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-5481 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +Feb 12 10:17:04.536: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:04.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W0212 10:17:04.536512 22 metrics_grabber.go:98] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled. +W0212 10:17:04.536580 22 metrics_grabber.go:102] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +W0212 10:17:04.536595 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. +STEP: Destroying namespace "gc-5481" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":311,"completed":116,"skipped":1951,"failed":0} +SSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:04.546: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3699 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating Agnhost RC +Feb 12 10:17:04.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-3699 create -f -' +Feb 12 10:17:05.188: INFO: stderr: "" +Feb 12 10:17:05.188: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Feb 12 10:17:06.198: INFO: Selector matched 1 pods for map[app:agnhost] +Feb 12 10:17:06.198: INFO: Found 0 / 1 +Feb 12 10:17:07.197: INFO: Selector matched 1 pods for map[app:agnhost] +Feb 12 10:17:07.197: INFO: Found 1 / 1 +Feb 12 10:17:07.197: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Feb 12 10:17:07.201: INFO: Selector matched 1 pods for map[app:agnhost] +Feb 12 10:17:07.201: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Feb 12 10:17:07.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-3699 patch pod agnhost-primary-bfcvr -p {"metadata":{"annotations":{"x":"y"}}}' +Feb 12 10:17:07.377: INFO: stderr: "" +Feb 12 10:17:07.377: INFO: stdout: "pod/agnhost-primary-bfcvr patched\n" +STEP: checking annotations +Feb 12 10:17:07.381: INFO: Selector matched 1 pods for map[app:agnhost] +Feb 12 10:17:07.381: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:07.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3699" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":311,"completed":117,"skipped":1959,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:07.391: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4024 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating projection with secret that has name projected-secret-test-93549a19-7ccb-4edc-aa0a-67ade9d88741 +STEP: Creating a pod to test consume secrets +Feb 12 10:17:07.567: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-337ea9a9-ec9f-437f-bd1d-bf09874dba10" in namespace "projected-4024" to be "Succeeded or Failed" +Feb 12 10:17:07.571: INFO: Pod "pod-projected-secrets-337ea9a9-ec9f-437f-bd1d-bf09874dba10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.513341ms +Feb 12 10:17:09.591: INFO: Pod "pod-projected-secrets-337ea9a9-ec9f-437f-bd1d-bf09874dba10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024581769s +STEP: Saw pod success +Feb 12 10:17:09.591: INFO: Pod "pod-projected-secrets-337ea9a9-ec9f-437f-bd1d-bf09874dba10" satisfied condition "Succeeded or Failed" +Feb 12 10:17:09.595: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-projected-secrets-337ea9a9-ec9f-437f-bd1d-bf09874dba10 container projected-secret-volume-test: +STEP: delete the pod +Feb 12 10:17:09.633: INFO: Waiting for pod pod-projected-secrets-337ea9a9-ec9f-437f-bd1d-bf09874dba10 to disappear +Feb 12 10:17:09.639: INFO: Pod pod-projected-secrets-337ea9a9-ec9f-437f-bd1d-bf09874dba10 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:09.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4024" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":118,"skipped":1972,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:09.651: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-82 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating secret with name secret-test-a9adacc0-10a7-4ec3-84bf-a9c8d55e5a57 +STEP: Creating a pod to test consume secrets +Feb 12 10:17:09.825: INFO: Waiting up to 5m0s for pod "pod-secrets-0673ae6f-a9a3-4a04-89ba-d641eff70a08" in namespace "secrets-82" to be "Succeeded or Failed" +Feb 12 10:17:09.832: INFO: Pod "pod-secrets-0673ae6f-a9a3-4a04-89ba-d641eff70a08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159126ms +Feb 12 10:17:11.841: INFO: Pod "pod-secrets-0673ae6f-a9a3-4a04-89ba-d641eff70a08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015332771s +STEP: Saw pod success +Feb 12 10:17:11.841: INFO: Pod "pod-secrets-0673ae6f-a9a3-4a04-89ba-d641eff70a08" satisfied condition "Succeeded or Failed" +Feb 12 10:17:11.844: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-secrets-0673ae6f-a9a3-4a04-89ba-d641eff70a08 container secret-volume-test: +STEP: delete the pod +Feb 12 10:17:11.876: INFO: Waiting for pod pod-secrets-0673ae6f-a9a3-4a04-89ba-d641eff70a08 to disappear +Feb 12 10:17:11.882: INFO: Pod pod-secrets-0673ae6f-a9a3-4a04-89ba-d641eff70a08 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:11.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-82" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":119,"skipped":1985,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:11.909: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3147 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:17:12.099: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aee858ce-e1e0-41fc-837e-2b8df3cb1b20" in namespace "projected-3147" to be "Succeeded or Failed" +Feb 12 10:17:12.107: INFO: Pod "downwardapi-volume-aee858ce-e1e0-41fc-837e-2b8df3cb1b20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492962ms +Feb 12 10:17:14.116: INFO: Pod "downwardapi-volume-aee858ce-e1e0-41fc-837e-2b8df3cb1b20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016942374s +Feb 12 10:17:16.124: INFO: Pod "downwardapi-volume-aee858ce-e1e0-41fc-837e-2b8df3cb1b20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025654537s +STEP: Saw pod success +Feb 12 10:17:16.124: INFO: Pod "downwardapi-volume-aee858ce-e1e0-41fc-837e-2b8df3cb1b20" satisfied condition "Succeeded or Failed" +Feb 12 10:17:16.129: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-aee858ce-e1e0-41fc-837e-2b8df3cb1b20 container client-container: +STEP: delete the pod +Feb 12 10:17:16.154: INFO: Waiting for pod downwardapi-volume-aee858ce-e1e0-41fc-837e-2b8df3cb1b20 to disappear +Feb 12 10:17:16.162: INFO: Pod downwardapi-volume-aee858ce-e1e0-41fc-837e-2b8df3cb1b20 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:16.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3147" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":120,"skipped":2041,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:16.186: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9249 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:17:16.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bada3cac-b649-48aa-a573-ba8408f0b57e" in namespace "downward-api-9249" to be "Succeeded or Failed" +Feb 12 10:17:16.366: INFO: Pod "downwardapi-volume-bada3cac-b649-48aa-a573-ba8408f0b57e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.077257ms +Feb 12 10:17:18.379: INFO: Pod "downwardapi-volume-bada3cac-b649-48aa-a573-ba8408f0b57e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018037772s +STEP: Saw pod success +Feb 12 10:17:18.379: INFO: Pod "downwardapi-volume-bada3cac-b649-48aa-a573-ba8408f0b57e" satisfied condition "Succeeded or Failed" +Feb 12 10:17:18.383: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-bada3cac-b649-48aa-a573-ba8408f0b57e container client-container: +STEP: delete the pod +Feb 12 10:17:18.414: INFO: Waiting for pod downwardapi-volume-bada3cac-b649-48aa-a573-ba8408f0b57e to disappear +Feb 12 10:17:18.418: INFO: Pod downwardapi-volume-bada3cac-b649-48aa-a573-ba8408f0b57e no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:18.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9249" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":121,"skipped":2073,"failed":0} +SS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:18.428: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename ingress +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingress-9157 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Feb 12 10:17:18.669: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Feb 12 10:17:18.678: INFO: starting watch +STEP: patching +STEP: updating +Feb 12 10:17:18.694: INFO: waiting for watch events with expected annotations +Feb 12 10:17:18.695: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:18.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-9157" for this suite. 
+•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":311,"completed":122,"skipped":2075,"failed":0} + +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:18.769: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-8721 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:22.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-8721" for this suite. +•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":311,"completed":123,"skipped":2075,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:22.988: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename aggregator +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in aggregator-8140 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 +Feb 12 10:17:23.177: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Registering the sample API server. +Feb 12 10:17:23.646: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Feb 12 10:17:30.810: INFO: Waited 5.087209759s for the sample-apiserver to be ready to handle requests. 
+[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:31.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-8140" for this suite. + +• [SLOW TEST:8.991 seconds] +[sig-api-machinery] Aggregator +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":311,"completed":124,"skipped":2108,"failed":0} +SS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:31.979: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6794 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap configmap-6794/configmap-test-012cd6b8-0ff9-4de9-a915-29e0b6c35fa5 +STEP: Creating a pod to test consume configMaps +Feb 12 10:17:32.207: INFO: Waiting up to 5m0s for pod "pod-configmaps-82abb6d3-f08f-4c6a-a680-477c80659907" in namespace "configmap-6794" to be "Succeeded or Failed" +Feb 12 10:17:32.221: INFO: Pod "pod-configmaps-82abb6d3-f08f-4c6a-a680-477c80659907": Phase="Pending", Reason="", readiness=false. Elapsed: 13.627908ms +Feb 12 10:17:34.250: INFO: Pod "pod-configmaps-82abb6d3-f08f-4c6a-a680-477c80659907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.042899925s +STEP: Saw pod success +Feb 12 10:17:34.250: INFO: Pod "pod-configmaps-82abb6d3-f08f-4c6a-a680-477c80659907" satisfied condition "Succeeded or Failed" +Feb 12 10:17:34.258: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-configmaps-82abb6d3-f08f-4c6a-a680-477c80659907 container env-test: +STEP: delete the pod +Feb 12 10:17:34.346: INFO: Waiting for pod pod-configmaps-82abb6d3-f08f-4c6a-a680-477c80659907 to disappear +Feb 12 10:17:34.351: INFO: Pod pod-configmaps-82abb6d3-f08f-4c6a-a680-477c80659907 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:34.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6794" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":311,"completed":125,"skipped":2110,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:34.364: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-9803 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:17:35.947: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created +Feb 12 10:17:37.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721856, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721856, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721856, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721855, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:17:40.994: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:17:41.002: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Creating a v1 custom resource +STEP: v2 custom resource should be converted +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:42.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-9803" for this suite. 
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 + +• [SLOW TEST:7.878 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":311,"completed":126,"skipped":2141,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:42.253: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9906 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Feb 12 10:17:44.470: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container +[AfterEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:17:44.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9906" for this suite. 
+•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":311,"completed":127,"skipped":2181,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:17:44.497: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6374 +STEP: Waiting for a default service account to be provisioned in namespace +[It] removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: set up a multi version CRD +Feb 12 10:17:44.666: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: mark a version not serverd +STEP: check the unserved version gets removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:18:11.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6374" for this suite. 
+ +• [SLOW TEST:27.236 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":311,"completed":128,"skipped":2193,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:18:11.737: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-196 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name cm-test-opt-del-5cc29285-4fbd-4395-8429-dcb0f4583e21 +STEP: Creating configMap with name cm-test-opt-upd-45a67275-ac3c-49b1-94b5-c72c840b5213 +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-5cc29285-4fbd-4395-8429-dcb0f4583e21 +STEP: Updating configmap cm-test-opt-upd-45a67275-ac3c-49b1-94b5-c72c840b5213 +STEP: Creating configMap with name cm-test-opt-create-7f486f9c-7479-4755-b168-1b64f8bf3ea1 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:19:40.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-196" for this suite. 
+ +• [SLOW TEST:89.093 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":129,"skipped":2205,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:19:40.831: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6132 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward api env vars +Feb 12 10:19:41.024: INFO: Waiting up to 5m0s for pod "downward-api-85d3f4ea-785b-4af7-b10c-494fc9a11425" in namespace "downward-api-6132" to be "Succeeded or Failed" +Feb 12 10:19:41.033: INFO: Pod "downward-api-85d3f4ea-785b-4af7-b10c-494fc9a11425": Phase="Pending", Reason="", readiness=false. Elapsed: 8.787311ms +Feb 12 10:19:43.047: INFO: Pod "downward-api-85d3f4ea-785b-4af7-b10c-494fc9a11425": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02275695s +STEP: Saw pod success +Feb 12 10:19:43.047: INFO: Pod "downward-api-85d3f4ea-785b-4af7-b10c-494fc9a11425" satisfied condition "Succeeded or Failed" +Feb 12 10:19:43.050: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod downward-api-85d3f4ea-785b-4af7-b10c-494fc9a11425 container dapi-container: +STEP: delete the pod +Feb 12 10:19:43.205: INFO: Waiting for pod downward-api-85d3f4ea-785b-4af7-b10c-494fc9a11425 to disappear +Feb 12 10:19:43.219: INFO: Pod downward-api-85d3f4ea-785b-4af7-b10c-494fc9a11425 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:19:43.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6132" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":311,"completed":130,"skipped":2215,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:19:43.243: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5436 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name projected-configmap-test-volume-map-bcafd19d-cdd9-4f7b-9d37-3409dab1bc61 +STEP: Creating a pod to test consume configMaps +Feb 12 10:19:43.425: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ebf508e9-d453-4a70-8610-3b8af28d6bc3" in namespace "projected-5436" to be "Succeeded or Failed" +Feb 12 10:19:43.434: INFO: Pod "pod-projected-configmaps-ebf508e9-d453-4a70-8610-3b8af28d6bc3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.999486ms +Feb 12 10:19:45.451: INFO: Pod "pod-projected-configmaps-ebf508e9-d453-4a70-8610-3b8af28d6bc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025771253s +STEP: Saw pod success +Feb 12 10:19:45.451: INFO: Pod "pod-projected-configmaps-ebf508e9-d453-4a70-8610-3b8af28d6bc3" satisfied condition "Succeeded or Failed" +Feb 12 10:19:45.456: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-projected-configmaps-ebf508e9-d453-4a70-8610-3b8af28d6bc3 container agnhost-container: +STEP: delete the pod +Feb 12 10:19:45.487: INFO: Waiting for pod pod-projected-configmaps-ebf508e9-d453-4a70-8610-3b8af28d6bc3 to disappear +Feb 12 10:19:45.491: INFO: Pod pod-projected-configmaps-ebf508e9-d453-4a70-8610-3b8af28d6bc3 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:19:45.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5436" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":131,"skipped":2226,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:19:45.501: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5514 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating projection with secret that has name projected-secret-test-map-27e2f28d-34ad-47c9-8cfa-f0845f2f8c5f +STEP: Creating a pod to test consume secrets +Feb 12 10:19:45.675: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1b295920-1f42-4581-b2e9-f2152d20ef50" in namespace "projected-5514" to be "Succeeded or Failed" +Feb 12 10:19:45.686: INFO: Pod "pod-projected-secrets-1b295920-1f42-4581-b2e9-f2152d20ef50": Phase="Pending", Reason="", readiness=false. Elapsed: 10.907044ms +Feb 12 10:19:47.699: INFO: Pod "pod-projected-secrets-1b295920-1f42-4581-b2e9-f2152d20ef50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024003007s +STEP: Saw pod success +Feb 12 10:19:47.700: INFO: Pod "pod-projected-secrets-1b295920-1f42-4581-b2e9-f2152d20ef50" satisfied condition "Succeeded or Failed" +Feb 12 10:19:47.704: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-projected-secrets-1b295920-1f42-4581-b2e9-f2152d20ef50 container projected-secret-volume-test: +STEP: delete the pod +Feb 12 10:19:47.744: INFO: Waiting for pod pod-projected-secrets-1b295920-1f42-4581-b2e9-f2152d20ef50 to disappear +Feb 12 10:19:47.748: INFO: Pod pod-projected-secrets-1b295920-1f42-4581-b2e9-f2152d20ef50 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:19:47.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5514" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":132,"skipped":2237,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:19:47.761: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5193 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0644 on node default medium +Feb 12 10:19:47.945: INFO: Waiting up to 5m0s for pod "pod-77e67ebe-f35c-4302-9e10-19e42153bc1a" in namespace "emptydir-5193" to be "Succeeded or Failed" +Feb 12 10:19:47.955: INFO: Pod "pod-77e67ebe-f35c-4302-9e10-19e42153bc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.009937ms +Feb 12 10:19:49.967: INFO: Pod "pod-77e67ebe-f35c-4302-9e10-19e42153bc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022660315s +Feb 12 10:19:51.973: INFO: Pod "pod-77e67ebe-f35c-4302-9e10-19e42153bc1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028013141s +STEP: Saw pod success +Feb 12 10:19:51.973: INFO: Pod "pod-77e67ebe-f35c-4302-9e10-19e42153bc1a" satisfied condition "Succeeded or Failed" +Feb 12 10:19:51.977: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-77e67ebe-f35c-4302-9e10-19e42153bc1a container test-container: +STEP: delete the pod +Feb 12 10:19:52.003: INFO: Waiting for pod pod-77e67ebe-f35c-4302-9e10-19e42153bc1a to disappear +Feb 12 10:19:52.008: INFO: Pod pod-77e67ebe-f35c-4302-9e10-19e42153bc1a no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:19:52.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5193" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":133,"skipped":2239,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:19:52.025: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-888 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:19:52.535: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 10:19:54.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721992, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721992, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721992, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748721992, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:19:57.586: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:19:57.601: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-410-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource while v1 is storage version +STEP: Patching Custom Resource Definition to set v2 as storage +STEP: Patching the custom resource while v2 is storage version +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:19:58.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
+STEP: Destroying namespace "webhook-888" for this suite. +STEP: Destroying namespace "webhook-888-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:6.895 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":311,"completed":134,"skipped":2259,"failed":0} +[k8s.io] [sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:19:58.920: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename prestop +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-2292 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 +[It] should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating server pod server in namespace prestop-2292 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-2292 +STEP: Deleting pre-stop pod +Feb 12 10:20:08.155: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [k8s.io] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:20:08.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-2292" for this suite. 
+ +• [SLOW TEST:9.288 seconds] +[k8s.io] [sig-node] PreStop +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":311,"completed":135,"skipped":2259,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:20:08.208: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1172 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +W0212 10:20:18.511850 22 metrics_grabber.go:98] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled. +W0212 10:20:18.512591 22 metrics_grabber.go:102] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +W0212 10:20:18.512720 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
+Feb 12 10:20:18.512: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Feb 12 10:20:18.513: INFO: Deleting pod "simpletest-rc-to-be-deleted-7t447" in namespace "gc-1172" +Feb 12 10:20:18.526: INFO: Deleting pod "simpletest-rc-to-be-deleted-9ljk8" in namespace "gc-1172" +Feb 12 10:20:18.539: INFO: Deleting pod "simpletest-rc-to-be-deleted-j7szl" in namespace "gc-1172" +Feb 12 10:20:18.585: INFO: Deleting pod "simpletest-rc-to-be-deleted-kd5ld" in namespace "gc-1172" +Feb 12 10:20:18.617: INFO: Deleting pod "simpletest-rc-to-be-deleted-kzsp4" in namespace "gc-1172" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:20:18.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1172" for this suite. + +• [SLOW TEST:10.474 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":311,"completed":136,"skipped":2277,"failed":0} +SSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:20:18.685: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1510 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward api env vars +Feb 12 10:20:18.873: INFO: Waiting up to 5m0s for pod "downward-api-82e6e5cc-82cc-40b3-be0a-91e9848915bf" in 
namespace "downward-api-1510" to be "Succeeded or Failed" +Feb 12 10:20:18.877: INFO: Pod "downward-api-82e6e5cc-82cc-40b3-be0a-91e9848915bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333313ms +Feb 12 10:20:20.891: INFO: Pod "downward-api-82e6e5cc-82cc-40b3-be0a-91e9848915bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018280661s +Feb 12 10:20:22.899: INFO: Pod "downward-api-82e6e5cc-82cc-40b3-be0a-91e9848915bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026788026s +STEP: Saw pod success +Feb 12 10:20:22.900: INFO: Pod "downward-api-82e6e5cc-82cc-40b3-be0a-91e9848915bf" satisfied condition "Succeeded or Failed" +Feb 12 10:20:22.903: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downward-api-82e6e5cc-82cc-40b3-be0a-91e9848915bf container dapi-container: +STEP: delete the pod +Feb 12 10:20:22.942: INFO: Waiting for pod downward-api-82e6e5cc-82cc-40b3-be0a-91e9848915bf to disappear +Feb 12 10:20:22.947: INFO: Pod downward-api-82e6e5cc-82cc-40b3-be0a-91e9848915bf no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:20:22.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1510" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":311,"completed":137,"skipped":2281,"failed":0} +SS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:20:22.959: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3319 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:20:23.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1c4bd87-c3ee-47fa-a370-c59fb276683e" in namespace "projected-3319" to be "Succeeded or Failed" +Feb 12 10:20:23.152: INFO: Pod "downwardapi-volume-c1c4bd87-c3ee-47fa-a370-c59fb276683e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.878063ms +Feb 12 10:20:25.164: INFO: Pod "downwardapi-volume-c1c4bd87-c3ee-47fa-a370-c59fb276683e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019080118s +Feb 12 10:20:27.174: INFO: Pod "downwardapi-volume-c1c4bd87-c3ee-47fa-a370-c59fb276683e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029864456s +STEP: Saw pod success +Feb 12 10:20:27.175: INFO: Pod "downwardapi-volume-c1c4bd87-c3ee-47fa-a370-c59fb276683e" satisfied condition "Succeeded or Failed" +Feb 12 10:20:27.178: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-c1c4bd87-c3ee-47fa-a370-c59fb276683e container client-container: +STEP: delete the pod +Feb 12 10:20:27.213: INFO: Waiting for pod downwardapi-volume-c1c4bd87-c3ee-47fa-a370-c59fb276683e to disappear +Feb 12 10:20:27.218: INFO: Pod downwardapi-volume-c1c4bd87-c3ee-47fa-a370-c59fb276683e no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:20:27.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3319" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":311,"completed":138,"skipped":2283,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:20:27.239: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-7677 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name cm-test-opt-del-88b82303-325f-443e-b16a-ec8a5effa3c0 +STEP: Creating configMap with name cm-test-opt-upd-dd2202e4-ff38-4b21-b1ff-dd9cc2cf1865 +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-88b82303-325f-443e-b16a-ec8a5effa3c0 +STEP: Updating configmap cm-test-opt-upd-dd2202e4-ff38-4b21-b1ff-dd9cc2cf1865 +STEP: Creating configMap with name cm-test-opt-create-d732e7c4-eb66-4cc3-832e-35da510af879 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:20:33.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7677" for this suite. 
+ +• [SLOW TEST:6.321 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":139,"skipped":2305,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:20:33.563: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4332 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Feb 12 10:20:36.296: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b8b09213-75bb-410a-85cb-ff53bb31a62b" +Feb 12 10:20:36.296: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b8b09213-75bb-410a-85cb-ff53bb31a62b" in namespace "pods-4332" to be "terminated due to deadline exceeded" +Feb 12 10:20:36.301: INFO: Pod "pod-update-activedeadlineseconds-b8b09213-75bb-410a-85cb-ff53bb31a62b": Phase="Running", Reason="", readiness=true. Elapsed: 5.5577ms +Feb 12 10:20:38.316: INFO: Pod "pod-update-activedeadlineseconds-b8b09213-75bb-410a-85cb-ff53bb31a62b": Phase="Running", Reason="", readiness=true. Elapsed: 2.01978044s +Feb 12 10:20:40.329: INFO: Pod "pod-update-activedeadlineseconds-b8b09213-75bb-410a-85cb-ff53bb31a62b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.032843258s +Feb 12 10:20:40.329: INFO: Pod "pod-update-activedeadlineseconds-b8b09213-75bb-410a-85cb-ff53bb31a62b" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:20:40.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4332" for this suite. 
+ +• [SLOW TEST:6.784 seconds] +[k8s.io] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":311,"completed":140,"skipped":2321,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:20:40.356: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-4172 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Feb 12 10:20:46.607: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Feb 12 10:20:46.615: INFO: Pod pod-with-poststart-http-hook still exists +Feb 12 10:20:48.615: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Feb 12 10:20:48.629: INFO: Pod pod-with-poststart-http-hook still exists +Feb 12 10:20:50.615: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Feb 12 10:20:50.627: INFO: Pod pod-with-poststart-http-hook still exists +Feb 12 10:20:52.615: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Feb 12 10:20:52.621: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:20:52.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-4172" for this suite. 
+ +• [SLOW TEST:12.284 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":311,"completed":141,"skipped":2460,"failed":0} +S +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:20:52.641: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-8982 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Feb 12 10:20:52.830: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8982 4e97d8e6-34c3-40b2-8876-1145fbe2be7a 579133 0 2021-02-12 10:20:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-12 10:20:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 10:20:52.831: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8982 4e97d8e6-34c3-40b2-8876-1145fbe2be7a 579134 0 2021-02-12 10:20:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-12 10:20:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed +Feb 12 10:20:52.847: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8982 4e97d8e6-34c3-40b2-8876-1145fbe2be7a 579135 0 2021-02-12 10:20:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-12 10:20:52 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 10:20:52.848: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8982 4e97d8e6-34c3-40b2-8876-1145fbe2be7a 579136 0 2021-02-12 10:20:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-12 10:20:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:20:52.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-8982" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":311,"completed":142,"skipped":2461,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:20:52.869: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6441 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name configmap-test-volume-map-ebb876bf-e3e8-45e0-a9e0-81e83949b823 +STEP: Creating a pod to test consume configMaps +Feb 12 10:20:53.067: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec2ee368-bf3d-4042-8b80-1dc71ad29b4e" in namespace "configmap-6441" to be "Succeeded or Failed" +Feb 12 10:20:53.077: INFO: Pod "pod-configmaps-ec2ee368-bf3d-4042-8b80-1dc71ad29b4e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.583045ms +Feb 12 10:20:55.091: INFO: Pod "pod-configmaps-ec2ee368-bf3d-4042-8b80-1dc71ad29b4e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.023445606s +STEP: Saw pod success +Feb 12 10:20:55.091: INFO: Pod "pod-configmaps-ec2ee368-bf3d-4042-8b80-1dc71ad29b4e" satisfied condition "Succeeded or Failed" +Feb 12 10:20:55.095: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-configmaps-ec2ee368-bf3d-4042-8b80-1dc71ad29b4e container agnhost-container: +STEP: delete the pod +Feb 12 10:20:55.125: INFO: Waiting for pod pod-configmaps-ec2ee368-bf3d-4042-8b80-1dc71ad29b4e to disappear +Feb 12 10:20:55.130: INFO: Pod pod-configmaps-ec2ee368-bf3d-4042-8b80-1dc71ad29b4e no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:20:55.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6441" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":143,"skipped":2483,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:20:55.145: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3341 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:21:08.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3341" for this suite. + +• [SLOW TEST:13.286 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":311,"completed":144,"skipped":2492,"failed":0} +SS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:21:08.432: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1934 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating secret with name s-test-opt-del-46424b2e-169a-4533-8c40-398640cd0b57 +STEP: Creating secret with name s-test-opt-upd-ef74ca53-9ac5-4de2-875d-eccd5f2b8299 +STEP: Creating the pod +STEP: Deleting secret s-test-opt-del-46424b2e-169a-4533-8c40-398640cd0b57 +STEP: Updating secret s-test-opt-upd-ef74ca53-9ac5-4de2-875d-eccd5f2b8299 +STEP: Creating secret with name s-test-opt-create-25d59c04-2576-4125-87c9-422896ce27ac +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:22:35.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1934" for this suite. 
+ +• [SLOW TEST:87.063 seconds] +[sig-storage] Secrets +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":145,"skipped":2494,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:22:35.496: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-preemption +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-7282 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Feb 12 10:22:35.735: INFO: Waiting up to 1m0s for all nodes to be ready +Feb 12 10:23:35.804: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:23:35.809: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-411 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:23:35.993: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. +Feb 12 10:23:35.998: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:23:36.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-411" for this suite. 
+[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:23:36.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-7282" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:60.597 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":311,"completed":146,"skipped":2517,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:23:36.096: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6527 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name configmap-test-volume-dfe50dd6-e07f-4294-af4f-cb15b8e9520e +STEP: Creating a pod to test consume configMaps +Feb 12 10:23:36.275: INFO: Waiting up to 5m0s for pod "pod-configmaps-919d140c-f50c-4cd3-a1ba-e72375cb24fb" in namespace "configmap-6527" to be "Succeeded or Failed" +Feb 12 10:23:36.284: INFO: Pod "pod-configmaps-919d140c-f50c-4cd3-a1ba-e72375cb24fb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.797569ms +Feb 12 10:23:38.296: INFO: Pod "pod-configmaps-919d140c-f50c-4cd3-a1ba-e72375cb24fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.020900056s +STEP: Saw pod success +Feb 12 10:23:38.296: INFO: Pod "pod-configmaps-919d140c-f50c-4cd3-a1ba-e72375cb24fb" satisfied condition "Succeeded or Failed" +Feb 12 10:23:38.299: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-configmaps-919d140c-f50c-4cd3-a1ba-e72375cb24fb container agnhost-container: +STEP: delete the pod +Feb 12 10:23:38.330: INFO: Waiting for pod pod-configmaps-919d140c-f50c-4cd3-a1ba-e72375cb24fb to disappear +Feb 12 10:23:38.334: INFO: Pod pod-configmaps-919d140c-f50c-4cd3-a1ba-e72375cb24fb no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:23:38.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6527" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":311,"completed":147,"skipped":2532,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:23:38.347: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6813 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating secret with name secret-test-f0189fc1-64fe-4e60-97cd-59f752735a5a +STEP: Creating a pod to test consume secrets +Feb 12 10:23:38.577: INFO: Waiting up to 5m0s for pod "pod-secrets-94436058-cd97-46fa-9662-3ff8ad1f414c" in namespace "secrets-6813" to be "Succeeded or Failed" +Feb 12 10:23:38.584: INFO: Pod "pod-secrets-94436058-cd97-46fa-9662-3ff8ad1f414c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.900518ms +Feb 12 10:23:40.594: INFO: Pod "pod-secrets-94436058-cd97-46fa-9662-3ff8ad1f414c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016397601s +STEP: Saw pod success +Feb 12 10:23:40.594: INFO: Pod "pod-secrets-94436058-cd97-46fa-9662-3ff8ad1f414c" satisfied condition "Succeeded or Failed" +Feb 12 10:23:40.597: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-secrets-94436058-cd97-46fa-9662-3ff8ad1f414c container secret-volume-test: +STEP: delete the pod +Feb 12 10:23:40.629: INFO: Waiting for pod pod-secrets-94436058-cd97-46fa-9662-3ff8ad1f414c to disappear +Feb 12 10:23:40.632: INFO: Pod pod-secrets-94436058-cd97-46fa-9662-3ff8ad1f414c no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:23:40.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6813" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":311,"completed":148,"skipped":2548,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:23:40.644: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5473 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:23:40.819: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4993ec9b-d556-4ff9-bae5-f7027cbfe60d" in namespace "projected-5473" to be "Succeeded or Failed" +Feb 12 10:23:40.824: INFO: Pod "downwardapi-volume-4993ec9b-d556-4ff9-bae5-f7027cbfe60d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.875592ms +Feb 12 10:23:42.837: INFO: Pod "downwardapi-volume-4993ec9b-d556-4ff9-bae5-f7027cbfe60d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018494685s +STEP: Saw pod success +Feb 12 10:23:42.837: INFO: Pod "downwardapi-volume-4993ec9b-d556-4ff9-bae5-f7027cbfe60d" satisfied condition "Succeeded or Failed" +Feb 12 10:23:42.842: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-4993ec9b-d556-4ff9-bae5-f7027cbfe60d container client-container: +STEP: delete the pod +Feb 12 10:23:42.871: INFO: Waiting for pod downwardapi-volume-4993ec9b-d556-4ff9-bae5-f7027cbfe60d to disappear +Feb 12 10:23:42.875: INFO: Pod downwardapi-volume-4993ec9b-d556-4ff9-bae5-f7027cbfe60d no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:23:42.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5473" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":149,"skipped":2558,"failed":0} +SS +------------------------------ +[k8s.io] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:23:42.885: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1441 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod liveness-e4824a50-2129-42d9-8639-beaa17e7c365 in namespace container-probe-1441 +Feb 12 10:23:45.075: INFO: Started pod liveness-e4824a50-2129-42d9-8639-beaa17e7c365 in namespace container-probe-1441 +STEP: checking the pod's current state and verifying that restartCount is present +Feb 12 10:23:45.079: INFO: Initial restart count of pod liveness-e4824a50-2129-42d9-8639-beaa17e7c365 is 0 +Feb 12 10:24:09.235: INFO: Restart count of pod container-probe-1441/liveness-e4824a50-2129-42d9-8639-beaa17e7c365 is now 1 (24.155042025s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:24:09.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1441" for this suite. 
+ +• [SLOW TEST:26.384 seconds] +[k8s.io] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":311,"completed":150,"skipped":2560,"failed":0} +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:24:09.269: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3666 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a test externalName service +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3666.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3666.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3666.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3666.svc.cluster.local; sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Feb 12 10:24:13.498: INFO: DNS probes using dns-test-afba9e6f-f6be-44dc-b162-b28bfed8e885 succeeded + +STEP: deleting the pod +STEP: changing the externalName to bar.example.com +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3666.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3666.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3666.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3666.svc.cluster.local; sleep 1; done + +STEP: creating a second pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Feb 12 10:24:17.575: INFO: DNS probes using dns-test-b2460a83-3deb-4f49-a5a1-a481f0bb18a5 succeeded + +STEP: deleting the pod +STEP: changing the service to type=ClusterIP +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3666.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3666.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3666.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3666.svc.cluster.local; sleep 1; done + +STEP: creating a third pod to probe DNS +STEP: submitting the pod to kubernetes 
+STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Feb 12 10:24:21.691: INFO: DNS probes using dns-test-44bcc56e-b086-442c-b0cd-4a0b26a28e5d succeeded + +STEP: deleting the pod +STEP: deleting the test externalName service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:24:21.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3666" for this suite. + +• [SLOW TEST:12.462 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":311,"completed":151,"skipped":2560,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:24:21.734: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-7035 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a ServiceAccount +STEP: watching for the ServiceAccount to be added +STEP: patching the ServiceAccount +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) +STEP: deleting the ServiceAccount +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:24:21.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-7035" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":311,"completed":152,"skipped":2577,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:24:21.992: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-6780 +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:24:22.146: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:24:23.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-6780" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":311,"completed":153,"skipped":2602,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:24:23.206: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-7026 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Performing setup for networking test in namespace pod-network-test-7026 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Feb 12 10:24:23.371: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Feb 12 10:24:23.409: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Feb 12 10:24:25.419: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Feb 12 
10:24:27.417: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:29.418: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:31.424: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:33.427: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:35.421: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:37.419: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:39.418: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:41.425: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:43.423: INFO: The status of Pod netserver-0 is Running (Ready = true) +Feb 12 10:24:43.434: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Feb 12 10:24:45.464: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Feb 12 10:24:45.464: INFO: Breadth first check of 10.100.92.252 on host 10.0.0.115... +Feb 12 10:24:45.467: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.100.45.2:9080/dial?request=hostname&protocol=http&host=10.100.92.252&port=8080&tries=1'] Namespace:pod-network-test-7026 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:24:45.467: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 10:24:45.784: INFO: Waiting for responses: map[] +Feb 12 10:24:45.784: INFO: reached 10.100.92.252 after 0/1 tries +Feb 12 10:24:45.784: INFO: Breadth first check of 10.100.45.5 on host 10.0.0.234... +Feb 12 10:24:45.791: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.100.45.2:9080/dial?request=hostname&protocol=http&host=10.100.45.5&port=8080&tries=1'] Namespace:pod-network-test-7026 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:24:45.791: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 10:24:46.051: INFO: Waiting for responses: map[] +Feb 12 10:24:46.051: INFO: reached 10.100.45.5 after 0/1 tries +Feb 12 10:24:46.051: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:24:46.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-7026" for this suite. 
+ +• [SLOW TEST:22.865 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":311,"completed":154,"skipped":2641,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:24:46.074: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-363 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Performing setup for networking test in namespace pod-network-test-363 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Feb 12 10:24:46.241: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Feb 12 10:24:46.291: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Feb 12 10:24:48.305: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:50.305: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:52.306: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:54.298: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:56.306: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:24:58.303: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:25:00.305: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:25:02.305: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:25:04.301: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 10:25:06.324: INFO: The status of Pod netserver-0 is Running (Ready = true) +Feb 12 10:25:06.343: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Feb 12 10:25:10.414: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Feb 12 10:25:10.415: INFO: Going to poll 10.100.92.198 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Feb 12 10:25:10.418: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.100.92.198 8081 | grep -v '^\s*$'] Namespace:pod-network-test-363 PodName:host-test-container-pod 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:25:10.419: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 10:25:11.784: INFO: Found all 1 expected endpoints: [netserver-0] +Feb 12 10:25:11.785: INFO: Going to poll 10.100.45.6 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Feb 12 10:25:11.797: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.100.45.6 8081 | grep -v '^\s*$'] Namespace:pod-network-test-363 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:25:11.797: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 10:25:13.057: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:25:13.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-363" for this suite. + +• [SLOW TEST:27.008 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":155,"skipped":2663,"failed":0} +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:25:13.083: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7804 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Feb 12 10:25:13.260: INFO: Waiting up to 5m0s for pod "pod-b67cdeda-ae87-4fb8-8998-533136046b6b" in namespace "emptydir-7804" to be "Succeeded or Failed" +Feb 12 10:25:13.265: INFO: Pod "pod-b67cdeda-ae87-4fb8-8998-533136046b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.080863ms +Feb 12 10:25:15.275: INFO: Pod "pod-b67cdeda-ae87-4fb8-8998-533136046b6b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01428146s +STEP: Saw pod success +Feb 12 10:25:15.275: INFO: Pod "pod-b67cdeda-ae87-4fb8-8998-533136046b6b" satisfied condition "Succeeded or Failed" +Feb 12 10:25:15.279: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-b67cdeda-ae87-4fb8-8998-533136046b6b container test-container: +STEP: delete the pod +Feb 12 10:25:15.443: INFO: Waiting for pod pod-b67cdeda-ae87-4fb8-8998-533136046b6b to disappear +Feb 12 10:25:15.447: INFO: Pod pod-b67cdeda-ae87-4fb8-8998-533136046b6b no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:25:15.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7804" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":156,"skipped":2663,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:25:15.460: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-1014 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a ResourceQuota with best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a best-effort pod +STEP: Ensuring resource quota with best effort scope captures the pod usage +STEP: Ensuring resource quota with not best effort ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a not best-effort pod +STEP: Ensuring resource quota with not best effort scope captures the pod usage +STEP: Ensuring resource quota with best effort scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:25:31.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-1014" for this suite. + +• [SLOW TEST:16.399 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with best effort scope. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":311,"completed":157,"skipped":2676,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:25:31.864: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-3036 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Feb 12 10:25:32.048: INFO: Pod name pod-release: Found 0 pods out of 1 +Feb 12 10:25:37.057: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:25:38.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-3036" for this suite. 
+ +• [SLOW TEST:6.250 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":311,"completed":158,"skipped":2692,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:25:38.114: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename certificates +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in certificates-8110 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: getting /apis +STEP: getting /apis/certificates.k8s.io +STEP: getting /apis/certificates.k8s.io/v1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Feb 12 10:25:39.474: INFO: starting watch +STEP: patching +STEP: updating +Feb 12 10:25:39.490: INFO: waiting for watch events with expected annotations +Feb 12 10:25:39.490: INFO: saw patched and updated annotations +STEP: getting /approval +STEP: patching /approval +STEP: updating /approval +STEP: getting /status +STEP: patching /status +STEP: updating /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:25:39.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-8110" for this suite. 
+•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":311,"completed":159,"skipped":2707,"failed":0} +S +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:25:39.565: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3502 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:25:39.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3502" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":311,"completed":160,"skipped":2708,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:25:39.744: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8010 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:25:41.318: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 10:25:43.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722341, loc:(*time.Location)(0x7962e20)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722341, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722341, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722341, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:25:46.367: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:25:46.377: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8305-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:25:47.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8010" for this suite. +STEP: Destroying namespace "webhook-8010-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:7.882 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":311,"completed":161,"skipped":2763,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:25:47.628: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-3167 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 +[It] should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a 
Deployment +STEP: waiting for Deployment to be created +STEP: waiting for all Replicas to be Ready +Feb 12 10:25:47.818: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Feb 12 10:25:47.818: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Feb 12 10:25:47.822: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Feb 12 10:25:47.822: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Feb 12 10:25:47.843: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Feb 12 10:25:47.843: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Feb 12 10:25:47.878: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Feb 12 10:25:47.878: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Feb 12 10:25:50.404: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Feb 12 10:25:50.404: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Feb 12 10:25:50.768: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment +Feb 12 10:25:50.791: INFO: observed event type ADDED +STEP: waiting for Replicas to scale +Feb 12 10:25:50.795: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 +Feb 12 10:25:50.796: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 +Feb 12 10:25:50.796: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 +Feb 12 10:25:50.796: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 +Feb 12 10:25:50.796: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 +Feb 12 10:25:50.796: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 +Feb 12 10:25:50.796: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 +Feb 12 10:25:50.796: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 0 +Feb 12 10:25:50.797: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +Feb 12 10:25:50.797: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +Feb 12 10:25:50.797: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 2 +Feb 12 10:25:50.797: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 2 +Feb 12 10:25:50.797: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 2 +Feb 12 10:25:50.797: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 2 
+Feb 12 10:25:50.811: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 2 +Feb 12 10:25:50.811: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 2 +Feb 12 10:25:50.835: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 2 +Feb 12 10:25:50.835: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 2 +Feb 12 10:25:50.865: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +STEP: listing Deployments +Feb 12 10:25:50.877: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment +Feb 12 10:25:50.901: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus +Feb 12 10:25:50.920: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment:patched test-deployment-static:true] +Feb 12 10:25:50.920: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Feb 12 10:25:50.922: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Feb 12 10:25:50.941: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Feb 12 10:25:50.966: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Feb 12 10:25:50.980: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Feb 12 10:25:50.987: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Feb 12 10:25:51.002: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Feb 12 10:25:51.009: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus +STEP: fetching the DeploymentStatus +Feb 12 10:25:52.964: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +Feb 12 10:25:52.964: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +Feb 12 10:25:52.965: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +Feb 12 10:25:52.965: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +Feb 12 10:25:52.965: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +Feb 12 10:25:52.965: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +Feb 12 10:25:52.965: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +Feb 12 10:25:52.965: INFO: observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +Feb 12 10:25:52.966: INFO: 
observed Deployment test-deployment in namespace deployment-3167 with ReadyReplicas 1 +STEP: deleting the Deployment +Feb 12 10:25:52.998: INFO: observed event type MODIFIED +Feb 12 10:25:52.998: INFO: observed event type MODIFIED +Feb 12 10:25:52.999: INFO: observed event type MODIFIED +Feb 12 10:25:52.999: INFO: observed event type MODIFIED +Feb 12 10:25:52.999: INFO: observed event type MODIFIED +Feb 12 10:25:52.999: INFO: observed event type MODIFIED +Feb 12 10:25:53.002: INFO: observed event type MODIFIED +Feb 12 10:25:53.002: INFO: observed event type MODIFIED +Feb 12 10:25:53.002: INFO: observed event type MODIFIED +Feb 12 10:25:53.002: INFO: observed event type MODIFIED +Feb 12 10:25:53.002: INFO: observed event type MODIFIED +Feb 12 10:25:53.003: INFO: observed event type MODIFIED +Feb 12 10:25:53.003: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 +Feb 12 10:25:53.013: INFO: Log out all the ReplicaSets if there is no deployment created +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:25:53.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-3167" for this suite. + +• [SLOW TEST:5.456 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":311,"completed":162,"skipped":2788,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:25:53.086: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-5732 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicaSet +STEP: Ensuring resource quota status captures replicaset creation +STEP: Deleting a ReplicaSet +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:26:04.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-5732" for this suite. 
+ +• [SLOW TEST:11.252 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":311,"completed":163,"skipped":2822,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:26:04.338: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5288 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap configmap-5288/configmap-test-83e91d19-f805-4fe3-98c4-3251b1f91216 +STEP: Creating a pod to test consume configMaps +Feb 12 10:26:04.568: INFO: Waiting up to 5m0s for pod "pod-configmaps-d94363e4-882c-4a25-9b7d-9870bbfb7a0f" in namespace "configmap-5288" to be "Succeeded or Failed" +Feb 12 10:26:04.572: INFO: Pod "pod-configmaps-d94363e4-882c-4a25-9b7d-9870bbfb7a0f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670566ms +Feb 12 10:26:06.584: INFO: Pod "pod-configmaps-d94363e4-882c-4a25-9b7d-9870bbfb7a0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01566241s +STEP: Saw pod success +Feb 12 10:26:06.584: INFO: Pod "pod-configmaps-d94363e4-882c-4a25-9b7d-9870bbfb7a0f" satisfied condition "Succeeded or Failed" +Feb 12 10:26:06.588: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-configmaps-d94363e4-882c-4a25-9b7d-9870bbfb7a0f container env-test: +STEP: delete the pod +Feb 12 10:26:06.680: INFO: Waiting for pod pod-configmaps-d94363e4-882c-4a25-9b7d-9870bbfb7a0f to disappear +Feb 12 10:26:06.684: INFO: Pod pod-configmaps-d94363e4-882c-4a25-9b7d-9870bbfb7a0f no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:26:06.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5288" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":311,"completed":164,"skipped":2837,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:26:06.701: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7866 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 +STEP: Creating service test in namespace statefulset-7866 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating stateful set ss in namespace statefulset-7866 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7866 +Feb 12 10:26:06.892: INFO: Found 0 stateful pods, waiting for 1 +Feb 12 10:26:16.906: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Feb 12 10:26:16.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Feb 12 10:26:17.316: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Feb 12 10:26:17.316: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Feb 12 10:26:17.316: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Feb 12 10:26:17.324: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Feb 12 10:26:27.339: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Feb 12 10:26:27.339: INFO: Waiting for statefulset status.replicas updated to 0 +Feb 12 10:26:27.366: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:27.366: INFO: ss-0 k8s-calico-coreos-yo5lpoxhpdlk-node-1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:06 +0000 UTC }] +Feb 12 10:26:27.366: INFO: +Feb 12 10:26:27.366: INFO: 
StatefulSet ss has not reached scale 3, at 1 +Feb 12 10:26:28.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995588822s +Feb 12 10:26:29.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987167843s +Feb 12 10:26:30.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974177071s +Feb 12 10:26:31.413: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.96100505s +Feb 12 10:26:32.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.948717775s +Feb 12 10:26:33.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.938108914s +Feb 12 10:26:34.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.928113286s +Feb 12 10:26:35.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.916394521s +Feb 12 10:26:36.468: INFO: Verifying statefulset ss doesn't scale past 3 for another 905.736522ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7866 +Feb 12 10:26:37.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:26:37.908: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Feb 12 10:26:37.908: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Feb 12 10:26:37.908: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Feb 12 10:26:37.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:26:38.343: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Feb 12 10:26:38.343: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Feb 12 10:26:38.343: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Feb 12 10:26:38.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:26:38.741: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Feb 12 10:26:38.741: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Feb 12 10:26:38.741: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Feb 12 10:26:38.748: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 10:26:38.748: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 10:26:38.748: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Feb 12 10:26:38.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Feb 12 10:26:39.161: INFO: stderr: 
"+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Feb 12 10:26:39.161: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Feb 12 10:26:39.161: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Feb 12 10:26:39.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Feb 12 10:26:39.552: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Feb 12 10:26:39.552: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Feb 12 10:26:39.552: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Feb 12 10:26:39.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Feb 12 10:26:39.969: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Feb 12 10:26:39.969: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Feb 12 10:26:39.969: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Feb 12 10:26:39.969: INFO: Waiting for statefulset status.replicas updated to 0 +Feb 12 10:26:39.976: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Feb 12 10:26:50.002: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Feb 12 10:26:50.003: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Feb 12 10:26:50.003: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Feb 12 10:26:50.026: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:50.026: INFO: ss-0 k8s-calico-coreos-yo5lpoxhpdlk-node-1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:06 +0000 UTC }] +Feb 12 10:26:50.026: INFO: ss-1 k8s-calico-coreos-yo5lpoxhpdlk-node-0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:50.026: INFO: ss-2 k8s-calico-coreos-yo5lpoxhpdlk-node-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:50.026: INFO: +Feb 12 10:26:50.026: INFO: StatefulSet ss has not reached scale 0, at 3 +Feb 12 10:26:51.036: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:51.036: INFO: ss-0 k8s-calico-coreos-yo5lpoxhpdlk-node-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:06 +0000 UTC }] +Feb 12 10:26:51.036: INFO: ss-1 k8s-calico-coreos-yo5lpoxhpdlk-node-0 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:51.036: INFO: ss-2 k8s-calico-coreos-yo5lpoxhpdlk-node-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:51.036: INFO: +Feb 12 10:26:51.036: INFO: StatefulSet ss has not reached scale 0, at 3 +Feb 12 10:26:52.045: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:52.045: INFO: ss-0 k8s-calico-coreos-yo5lpoxhpdlk-node-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:06 +0000 UTC }] +Feb 12 10:26:52.045: INFO: ss-1 k8s-calico-coreos-yo5lpoxhpdlk-node-0 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:52.045: INFO: ss-2 k8s-calico-coreos-yo5lpoxhpdlk-node-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 
10:26:27 +0000 UTC }] +Feb 12 10:26:52.045: INFO: +Feb 12 10:26:52.045: INFO: StatefulSet ss has not reached scale 0, at 3 +Feb 12 10:26:53.080: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:53.081: INFO: ss-0 k8s-calico-coreos-yo5lpoxhpdlk-node-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:06 +0000 UTC }] +Feb 12 10:26:53.081: INFO: ss-1 k8s-calico-coreos-yo5lpoxhpdlk-node-0 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:53.081: INFO: ss-2 k8s-calico-coreos-yo5lpoxhpdlk-node-1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:53.081: INFO: +Feb 12 10:26:53.081: INFO: StatefulSet ss has not reached scale 0, at 3 +Feb 12 10:26:54.089: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:54.090: INFO: ss-1 k8s-calico-coreos-yo5lpoxhpdlk-node-0 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:54.090: INFO: +Feb 12 10:26:54.090: INFO: StatefulSet ss has not reached scale 0, at 1 +Feb 12 10:26:55.097: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:55.097: INFO: ss-1 k8s-calico-coreos-yo5lpoxhpdlk-node-0 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:55.097: INFO: +Feb 12 10:26:55.097: INFO: StatefulSet ss has not reached scale 0, at 1 +Feb 12 10:26:56.109: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:56.109: INFO: ss-1 k8s-calico-coreos-yo5lpoxhpdlk-node-0 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:56.109: INFO: +Feb 12 10:26:56.109: INFO: StatefulSet ss has not reached scale 0, at 1 +Feb 12 10:26:57.121: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:57.121: INFO: ss-1 k8s-calico-coreos-yo5lpoxhpdlk-node-0 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:57.121: INFO: +Feb 12 10:26:57.121: INFO: StatefulSet ss has not reached scale 0, at 1 +Feb 12 10:26:58.145: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:58.146: INFO: ss-1 k8s-calico-coreos-yo5lpoxhpdlk-node-0 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:58.146: INFO: +Feb 12 10:26:58.146: INFO: StatefulSet ss has not reached scale 0, at 1 +Feb 12 10:26:59.156: INFO: POD NODE PHASE GRACE CONDITIONS +Feb 12 10:26:59.156: INFO: ss-1 k8s-calico-coreos-yo5lpoxhpdlk-node-0 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-12 10:26:27 +0000 UTC }] +Feb 12 10:26:59.156: INFO: +Feb 12 10:26:59.156: INFO: StatefulSet ss has not reached scale 0, at 1 +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7866 +Feb 12 10:27:00.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:27:00.602: INFO: rc: 1 +Feb 12 10:27:00.603: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +error: unable to upgrade connection: container not found ("webserver") + +error: +exit status 1 +Feb 12 10:27:10.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:27:10.894: INFO: rc: 1 +Feb 12 10:27:10.895: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +error: unable to upgrade connection: container not found ("webserver") + +error: +exit status 1 +Feb 12 10:27:20.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:27:21.095: INFO: rc: 1 +Feb 12 10:27:21.095: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:27:31.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:27:31.263: INFO: rc: 1 +Feb 12 10:27:31.264: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:27:41.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:27:41.453: INFO: rc: 1 +Feb 12 10:27:41.454: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:27:51.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:27:51.617: INFO: rc: 1 +Feb 12 10:27:51.617: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:28:01.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:28:01.795: INFO: rc: 1 +Feb 12 10:28:01.795: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:28:11.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' +Feb 12 10:28:11.971: INFO: rc: 1 +Feb 12 10:28:11.971: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:28:21.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:28:22.153: INFO: rc: 1 +Feb 12 10:28:22.153: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:28:32.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:28:32.318: INFO: rc: 1 +Feb 12 10:28:32.318: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:28:42.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:28:42.485: INFO: rc: 1 +Feb 12 10:28:42.485: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:28:52.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:28:52.666: INFO: rc: 1 +Feb 12 10:28:52.666: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:29:02.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:29:02.843: INFO: rc: 1 +Feb 12 10:29:02.843: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:29:12.844: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:29:13.024: INFO: rc: 1 +Feb 12 10:29:13.024: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:29:23.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:29:23.222: INFO: rc: 1 +Feb 12 10:29:23.222: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:29:33.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:29:33.431: INFO: rc: 1 +Feb 12 10:29:33.431: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:29:43.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:29:43.579: INFO: rc: 1 +Feb 12 10:29:43.580: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:29:53.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:29:53.776: INFO: rc: 1 +Feb 12 10:29:53.776: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:30:03.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:30:03.959: INFO: rc: 1 +Feb 12 10:30:03.959: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server 
(NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:30:13.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:30:14.136: INFO: rc: 1 +Feb 12 10:30:14.136: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:30:24.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:30:24.308: INFO: rc: 1 +Feb 12 10:30:24.308: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:30:34.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:30:34.488: INFO: rc: 1 +Feb 12 10:30:34.488: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:30:44.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:30:44.699: INFO: rc: 1 +Feb 12 10:30:44.699: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:30:54.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:30:54.863: INFO: rc: 1 +Feb 12 10:30:54.863: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:31:04.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:31:05.037: INFO: rc: 1 +Feb 12 10:31:05.037: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:31:15.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:31:15.270: INFO: rc: 1 +Feb 12 10:31:15.270: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:31:25.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:31:25.518: INFO: rc: 1 +Feb 12 10:31:25.518: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:31:35.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:31:35.709: INFO: rc: 1 +Feb 12 10:31:35.709: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:31:45.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:31:45.898: INFO: rc: 1 +Feb 12 10:31:45.898: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:31:55.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:31:56.084: INFO: rc: 1 +Feb 12 10:31:56.084: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-1" not found + +error: +exit status 1 +Feb 12 10:32:06.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7866 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:32:06.249: INFO: rc: 1 +Feb 12 10:32:06.249: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-1: +Feb 12 10:32:06.249: INFO: Scaling statefulset ss to 0 +Feb 12 10:32:06.269: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 +Feb 12 10:32:06.273: INFO: Deleting all statefulset in ns statefulset-7866 +Feb 12 10:32:06.278: INFO: Scaling statefulset ss to 0 +Feb 12 10:32:06.288: INFO: Waiting for statefulset status.replicas updated to 0 +Feb 12 10:32:06.291: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:32:06.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7866" for this suite. + +• [SLOW TEST:359.624 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":311,"completed":165,"skipped":2846,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:32:06.327: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-3209 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod pod-subpath-test-projected-mhq6 +STEP: Creating a pod to test atomic-volume-subpath +Feb 12 10:32:06.528: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mhq6" in namespace "subpath-3209" to be "Succeeded or Failed" +Feb 12 10:32:06.533: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.366269ms +Feb 12 10:32:08.542: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. Elapsed: 2.014714535s +Feb 12 10:32:10.555: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.027768379s +Feb 12 10:32:12.563: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. Elapsed: 6.035275721s +Feb 12 10:32:14.577: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. Elapsed: 8.049782309s +Feb 12 10:32:16.592: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. Elapsed: 10.064169496s +Feb 12 10:32:18.600: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. Elapsed: 12.072850649s +Feb 12 10:32:20.616: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. Elapsed: 14.088137778s +Feb 12 10:32:22.626: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. Elapsed: 16.098088259s +Feb 12 10:32:24.641: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. Elapsed: 18.112966465s +Feb 12 10:32:26.657: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. Elapsed: 20.128901584s +Feb 12 10:32:28.669: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Running", Reason="", readiness=true. Elapsed: 22.141563683s +Feb 12 10:32:30.680: INFO: Pod "pod-subpath-test-projected-mhq6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.152273569s +STEP: Saw pod success +Feb 12 10:32:30.680: INFO: Pod "pod-subpath-test-projected-mhq6" satisfied condition "Succeeded or Failed" +Feb 12 10:32:30.683: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-subpath-test-projected-mhq6 container test-container-subpath-projected-mhq6: +STEP: delete the pod +Feb 12 10:32:30.780: INFO: Waiting for pod pod-subpath-test-projected-mhq6 to disappear +Feb 12 10:32:30.788: INFO: Pod pod-subpath-test-projected-mhq6 no longer exists +STEP: Deleting pod pod-subpath-test-projected-mhq6 +Feb 12 10:32:30.788: INFO: Deleting pod "pod-subpath-test-projected-mhq6" in namespace "subpath-3209" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:32:30.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-3209" for this suite. 
+ +• [SLOW TEST:24.480 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":311,"completed":166,"skipped":2855,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:32:30.808: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-5128 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:32:31.798: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +Feb 12 10:32:33.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722751, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722751, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722751, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722751, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:32:36.850: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 
10:32:36.864: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Creating a v1 custom resource +STEP: Create a v2 custom resource +STEP: List CRs in v1 +STEP: List CRs in v2 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:32:38.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-5128" for this suite. +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 + +• [SLOW TEST:7.475 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":311,"completed":167,"skipped":2862,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:32:38.285: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-2780 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a job +STEP: Ensuring job reaches completions +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:32:46.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-2780" for this suite. 
+ +• [SLOW TEST:8.214 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":311,"completed":168,"skipped":2876,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Secrets + should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:32:46.507: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-8414 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-api-machinery] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:32:46.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-8414" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":311,"completed":169,"skipped":2901,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:32:46.746: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename events +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-1270 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Create set of events +STEP: get a list of Events with a label in the current namespace +STEP: delete a list of events +Feb 12 10:32:46.922: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:32:46.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-1270" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":311,"completed":170,"skipped":2911,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:32:46.950: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5856 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:32:47.722: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 10:32:49.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722767, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722767, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722767, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722767, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:32:52.788: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API +STEP: create a pod that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:32:52.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5856" for this suite. +STEP: Destroying namespace "webhook-5856-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:6.044 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":311,"completed":171,"skipped":2928,"failed":0} +SSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:32:52.995: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2680 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Feb 12 10:32:53.214: INFO: Waiting up to 5m0s for pod "pod-ca60f328-aeb0-4a45-bf7a-ba94bedd9457" in namespace "emptydir-2680" to be "Succeeded or Failed" +Feb 12 10:32:53.222: INFO: Pod "pod-ca60f328-aeb0-4a45-bf7a-ba94bedd9457": Phase="Pending", Reason="", readiness=false. Elapsed: 7.361399ms +Feb 12 10:32:55.233: INFO: Pod "pod-ca60f328-aeb0-4a45-bf7a-ba94bedd9457": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019017054s +Feb 12 10:32:57.247: INFO: Pod "pod-ca60f328-aeb0-4a45-bf7a-ba94bedd9457": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033289011s +STEP: Saw pod success +Feb 12 10:32:57.248: INFO: Pod "pod-ca60f328-aeb0-4a45-bf7a-ba94bedd9457" satisfied condition "Succeeded or Failed" +Feb 12 10:32:57.251: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-ca60f328-aeb0-4a45-bf7a-ba94bedd9457 container test-container: +STEP: delete the pod +Feb 12 10:32:57.281: INFO: Waiting for pod pod-ca60f328-aeb0-4a45-bf7a-ba94bedd9457 to disappear +Feb 12 10:32:57.289: INFO: Pod pod-ca60f328-aeb0-4a45-bf7a-ba94bedd9457 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:32:57.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2680" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":172,"skipped":2931,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:32:57.304: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7735 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[BeforeEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1392 +STEP: creating an pod +Feb 12 10:32:57.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7735 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' +Feb 12 10:32:57.678: INFO: stderr: "" +Feb 12 10:32:57.678: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Waiting for log generator to start. +Feb 12 10:32:57.678: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Feb 12 10:32:57.678: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7735" to be "running and ready, or succeeded" +Feb 12 10:32:57.683: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.052527ms +Feb 12 10:32:59.692: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.014253565s +Feb 12 10:32:59.692: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Feb 12 10:32:59.692: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] +STEP: checking for a matching strings +Feb 12 10:32:59.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7735 logs logs-generator logs-generator' +Feb 12 10:32:59.873: INFO: stderr: "" +Feb 12 10:32:59.873: INFO: stdout: "I0212 10:32:59.190992 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/xgl6 567\nI0212 10:32:59.391237 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wqbd 351\nI0212 10:32:59.591241 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/5tv 589\nI0212 10:32:59.792805 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/58lv 299\n" +STEP: limiting log lines +Feb 12 10:32:59.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7735 logs logs-generator logs-generator --tail=1' +Feb 12 10:33:00.046: INFO: stderr: "" +Feb 12 10:33:00.046: INFO: stdout: "I0212 10:32:59.991250 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/9v9c 316\n" +Feb 12 10:33:00.046: INFO: got output "I0212 10:32:59.991250 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/9v9c 316\n" +STEP: limiting log bytes +Feb 12 10:33:00.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7735 logs logs-generator logs-generator --limit-bytes=1' +Feb 12 10:33:00.239: INFO: stderr: "" +Feb 12 10:33:00.239: INFO: stdout: "I" +Feb 12 10:33:00.239: INFO: got output "I" +STEP: exposing timestamps +Feb 12 10:33:00.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7735 logs logs-generator logs-generator --tail=1 --timestamps' +Feb 12 10:33:00.413: INFO: stderr: "" +Feb 12 10:33:00.413: INFO: stdout: "2021-02-12T10:33:00.391810088Z I0212 10:33:00.391473 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/w4c 301\n" +Feb 12 10:33:00.413: INFO: got output "2021-02-12T10:33:00.391810088Z I0212 10:33:00.391473 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/w4c 301\n" +STEP: restricting to a time range +Feb 12 10:33:02.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7735 logs logs-generator logs-generator --since=1s' +Feb 12 10:33:03.106: INFO: stderr: "" +Feb 12 10:33:03.106: INFO: stdout: "I0212 10:33:02.191110 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/6nhd 483\nI0212 10:33:02.391146 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/58d 585\nI0212 10:33:02.591267 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/rgcp 556\nI0212 10:33:02.791263 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/scfw 255\nI0212 10:33:02.991223 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/sjg 559\n" +Feb 12 10:33:03.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7735 logs logs-generator logs-generator --since=24h' +Feb 12 10:33:03.298: INFO: stderr: "" +Feb 12 10:33:03.298: INFO: stdout: "I0212 10:32:59.190992 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/xgl6 567\nI0212 10:32:59.391237 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wqbd 351\nI0212 10:32:59.591241 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/5tv 589\nI0212 10:32:59.792805 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/58lv 299\nI0212 10:32:59.991250 1 logs_generator.go:76] 4 PUT 
/api/v1/namespaces/kube-system/pods/9v9c 316\nI0212 10:33:00.191293 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/lh78 401\nI0212 10:33:00.391473 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/w4c 301\nI0212 10:33:00.591191 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/t2g 403\nI0212 10:33:00.791291 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/qjr 208\nI0212 10:33:00.991129 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/pkv 550\nI0212 10:33:01.191405 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/nff 574\nI0212 10:33:01.391115 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/xtp 378\nI0212 10:33:01.591432 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/pnn 533\nI0212 10:33:01.791157 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/mv4 264\nI0212 10:33:01.991342 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/cn5t 341\nI0212 10:33:02.191110 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/6nhd 483\nI0212 10:33:02.391146 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/58d 585\nI0212 10:33:02.591267 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/rgcp 556\nI0212 10:33:02.791263 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/scfw 255\nI0212 10:33:02.991223 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/sjg 559\nI0212 10:33:03.191226 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/7q7 268\n" +[AfterEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 +Feb 12 10:33:03.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-7735 delete pod logs-generator' +Feb 12 10:33:43.801: INFO: stderr: "" +Feb 12 10:33:43.801: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:33:43.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7735" for this suite. 
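+
+For reference, the log-filtering flags exercised by this test combine freely against any running pod; the pod and namespace below are placeholders, not values from the run:
+
+kubectl logs <pod> -n <namespace> --tail=1                  # last line only
+kubectl logs <pod> -n <namespace> --limit-bytes=1           # first byte only
+kubectl logs <pod> -n <namespace> --tail=1 --timestamps     # prefix lines with RFC3339 timestamps
+kubectl logs <pod> -n <namespace> --since=1s                # only entries from the last second
+kubectl logs <pod> -n <namespace> --since=24h               # only entries from the last 24 hours
+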
+ +• [SLOW TEST:46.519 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":311,"completed":173,"skipped":2959,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:33:43.823: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8885 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating the pod +Feb 12 10:33:46.575: INFO: Successfully updated pod "annotationupdatef2da4e64-31d1-4459-9597-90236afeedda" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:33:48.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8885" for this suite. 
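+
+The behaviour verified here, annotation changes propagating into an already-mounted projected downward API volume, can be reproduced with a manifest along the following lines; the pod name, image and annotation key are illustrative only:
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: annotation-demo
+  annotations:
+    build: "one"
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client
+    image: busybox:1.33
+    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: annotations
+            fieldRef:
+              fieldPath: metadata.annotations
+EOF
+kubectl annotate pod annotation-demo build=two --overwrite
+kubectl exec annotation-demo -- cat /etc/podinfo/annotations    # reflects the new value after the kubelet resync, which can take up to a minute
+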
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":311,"completed":174,"skipped":2990,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:33:48.624: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9201 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9201.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9201.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9201.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9201.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9201.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9201.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Feb 12 10:33:52.832: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9201/dns-test-a7378986-6d6d-4473-9d76-cdc8e8ac7e4c: the server could not find the requested resource (get pods dns-test-a7378986-6d6d-4473-9d76-cdc8e8ac7e4c) +Feb 12 10:33:52.837: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9201/dns-test-a7378986-6d6d-4473-9d76-cdc8e8ac7e4c: the server could not find the requested resource (get pods dns-test-a7378986-6d6d-4473-9d76-cdc8e8ac7e4c) +Feb 12 10:33:52.852: INFO: Unable to read jessie_udp@PodARecord from pod dns-9201/dns-test-a7378986-6d6d-4473-9d76-cdc8e8ac7e4c: the server could not find the requested resource (get pods dns-test-a7378986-6d6d-4473-9d76-cdc8e8ac7e4c) +Feb 12 10:33:52.855: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9201/dns-test-a7378986-6d6d-4473-9d76-cdc8e8ac7e4c: the server could not find the requested resource (get pods dns-test-a7378986-6d6d-4473-9d76-cdc8e8ac7e4c) +Feb 12 10:33:52.856: INFO: Lookups using dns-9201/dns-test-a7378986-6d6d-4473-9d76-cdc8e8ac7e4c failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord] + +Feb 12 10:33:57.912: INFO: DNS probes using dns-9201/dns-test-a7378986-6d6d-4473-9d76-cdc8e8ac7e4c succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:33:58.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9201" for this suite. 
+ +• [SLOW TEST:9.408 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":311,"completed":175,"skipped":3018,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:33:58.033: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3865 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Feb 12 10:33:58.215: INFO: Waiting up to 5m0s for pod "pod-037df1c0-cbee-4083-940c-908350354e0b" in namespace "emptydir-3865" to be "Succeeded or Failed" +Feb 12 10:33:58.220: INFO: Pod "pod-037df1c0-cbee-4083-940c-908350354e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.916479ms +Feb 12 10:34:00.228: INFO: Pod "pod-037df1c0-cbee-4083-940c-908350354e0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012890575s +STEP: Saw pod success +Feb 12 10:34:00.228: INFO: Pod "pod-037df1c0-cbee-4083-940c-908350354e0b" satisfied condition "Succeeded or Failed" +Feb 12 10:34:00.232: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-037df1c0-cbee-4083-940c-908350354e0b container test-container: +STEP: delete the pod +Feb 12 10:34:00.260: INFO: Waiting for pod pod-037df1c0-cbee-4083-940c-908350354e0b to disappear +Feb 12 10:34:00.266: INFO: Pod pod-037df1c0-cbee-4083-940c-908350354e0b no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:34:00.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3865" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":176,"skipped":3038,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:34:00.289: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-601 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:34:02.487: INFO: Deleting pod "var-expansion-40571a92-2728-4ef4-bab5-f46bacb9205b" in namespace "var-expansion-601" +Feb 12 10:34:02.495: INFO: Wait up to 5m0s for pod "var-expansion-40571a92-2728-4ef4-bab5-f46bacb9205b" to be fully deleted +[AfterEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:34:44.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-601" for this suite. 
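+
+The subPathExpr mechanism only substitutes $(VAR_NAME) references taken from the container's environment, which is the supported counterpart of the backtick form this test submits. A sketch of that supported pattern follows; every name in it is a placeholder:
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: subpath-expansion-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: writer
+    image: busybox:1.33
+    command: ["sh", "-c", "echo ok > /data/out.txt && ls /data"]
+    env:
+    - name: POD_NAME
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.name
+    volumeMounts:
+    - name: workdir
+      mountPath: /data
+      subPathExpr: $(POD_NAME)    # expanded from the container env; shell-style backticks are not expanded
+  volumes:
+  - name: workdir
+    emptyDir: {}
+EOF
+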
+ +• [SLOW TEST:44.240 seconds] +[k8s.io] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":311,"completed":177,"skipped":3092,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:34:44.530: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename ingressclass +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ingressclass-4818 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 +[It] should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Feb 12 10:34:44.729: INFO: starting watch +STEP: patching +STEP: updating +Feb 12 10:34:44.745: INFO: waiting for watch events with expected annotations +Feb 12 10:34:44.745: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:34:44.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-4818" for this suite. 
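+
+The IngressClass API operations stepped through above map one-to-one onto kubectl; the class name and controller string here are placeholders:
+
+kubectl apply -f - <<'EOF'
+apiVersion: networking.k8s.io/v1
+kind: IngressClass
+metadata:
+  name: example-class
+spec:
+  controller: example.com/ingress-controller
+EOF
+kubectl get ingressclasses
+kubectl patch ingressclass example-class --type=merge -p '{"metadata":{"annotations":{"patched":"true"}}}'
+kubectl delete ingressclass example-class
+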
+•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":311,"completed":178,"skipped":3101,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:34:44.785: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1738 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:34:46.106: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 10:34:48.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722886, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722886, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722886, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748722886, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:34:51.153: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod +STEP: 'kubectl attach' the pod, should be denied by the webhook +Feb 12 10:34:55.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=webhook-1738 attach --namespace=webhook-1738 to-be-attached-pod -i -c=container1' +Feb 12 10:34:55.455: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:34:55.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1738" for this suite. 
+STEP: Destroying namespace "webhook-1738-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:10.768 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":311,"completed":179,"skipped":3114,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:34:55.554: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6699 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-6699 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-6699 +I0212 10:34:55.811925 22 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6699, replica count: 2 +Feb 12 10:34:58.862: INFO: Creating new exec pod +I0212 10:34:58.862710 22 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 10:35:01.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-6699 exec execpodrfmxr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' +Feb 12 10:35:02.439: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Feb 12 10:35:02.439: INFO: stdout: "" +Feb 12 10:35:02.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-6699 exec execpodrfmxr -- /bin/sh -x -c nc -zv -t -w 2 10.254.184.4 80' +Feb 12 10:35:02.873: INFO: stderr: "+ nc -zv -t -w 2 10.254.184.4 80\nConnection to 10.254.184.4 80 port [tcp/http] succeeded!\n" +Feb 12 10:35:02.873: INFO: stdout: "" +Feb 12 10:35:02.873: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:35:02.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6699" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:7.360 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":311,"completed":180,"skipped":3130,"failed":0} +SSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:35:02.915: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename containers +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-6673 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [k8s.io] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:35:05.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-6673" for this suite. +•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":311,"completed":181,"skipped":3138,"failed":0} +S +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:35:05.150: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8077 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
+Feb 12 10:35:05.328: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-8077 5e50da45-4463-4f52-8dbb-720ee7b166fb 583254 0 2021-02-12 10:35:05 +0000 UTC map[] map[kubernetes.io/psp:e2e-test-privileged-psp] [] [] [{e2e.test Update v1 2021-02-12 10:35:05 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6z95n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6z95n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6z95n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},Ephemer
alContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Feb 12 10:35:05.334: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Feb 12 10:35:07.349: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Feb 12 10:35:09.346: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Feb 12 10:35:09.346: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8077 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:35:09.346: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Verifying customized DNS server is configured on pod... +Feb 12 10:35:09.647: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8077 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:35:09.647: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 10:35:09.898: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:35:09.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8077" for this suite. +•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":311,"completed":182,"skipped":3139,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:35:09.941: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-4269 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: delete a job +STEP: deleting Job.batch foo in namespace job-4269, will wait for the garbage collector to delete the pods +Feb 12 10:35:14.242: INFO: Deleting Job.batch foo took: 13.165646ms +Feb 12 10:35:14.343: INFO: Terminating Job.batch foo pods took: 100.842404ms +STEP: Ensuring job was deleted +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:36:43.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-4269" for this suite. 
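+
+Deleting a Job the way this test does leaves its pods to the garbage collector, which is also kubectl's default cascading behaviour; the name foo matches the Job created above, and the label query works for any Job:
+
+kubectl delete job foo
+kubectl get pods -l job-name=foo    # pods created by a Job carry the job-name label and disappear once collected
+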
+ +• [SLOW TEST:93.843 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":311,"completed":183,"skipped":3170,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:36:43.788: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-7090 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod with failed condition +STEP: updating the pod +Feb 12 10:38:44.524: INFO: Successfully updated pod "var-expansion-bdd00c6d-bd41-4753-abd3-d064c1ef04c8" +STEP: waiting for pod running +STEP: deleting the pod gracefully +Feb 12 10:38:46.541: INFO: Deleting pod "var-expansion-bdd00c6d-bd41-4753-abd3-d064c1ef04c8" in namespace "var-expansion-7090" +Feb 12 10:38:46.549: INFO: Wait up to 5m0s for pod "var-expansion-bdd00c6d-bd41-4753-abd3-d064c1ef04c8" to be fully deleted +[AfterEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:39:44.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-7090" for this suite. 
+ +• [SLOW TEST:180.794 seconds] +[k8s.io] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":311,"completed":184,"skipped":3210,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:39:44.586: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2728 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating secret secrets-2728/secret-test-0e19abeb-56c6-4ffa-9c57-2bbe7e7d343f +STEP: Creating a pod to test consume secrets +Feb 12 10:39:44.783: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5fe2e05-6b14-458e-bd69-26ce16b182dd" in namespace "secrets-2728" to be "Succeeded or Failed" +Feb 12 10:39:44.788: INFO: Pod "pod-configmaps-b5fe2e05-6b14-458e-bd69-26ce16b182dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.656381ms +Feb 12 10:39:46.801: INFO: Pod "pod-configmaps-b5fe2e05-6b14-458e-bd69-26ce16b182dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018057206s +Feb 12 10:39:48.824: INFO: Pod "pod-configmaps-b5fe2e05-6b14-458e-bd69-26ce16b182dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04023152s +STEP: Saw pod success +Feb 12 10:39:48.824: INFO: Pod "pod-configmaps-b5fe2e05-6b14-458e-bd69-26ce16b182dd" satisfied condition "Succeeded or Failed" +Feb 12 10:39:48.835: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-configmaps-b5fe2e05-6b14-458e-bd69-26ce16b182dd container env-test: +STEP: delete the pod +Feb 12 10:39:49.030: INFO: Waiting for pod pod-configmaps-b5fe2e05-6b14-458e-bd69-26ce16b182dd to disappear +Feb 12 10:39:49.036: INFO: Pod pod-configmaps-b5fe2e05-6b14-458e-bd69-26ce16b182dd no longer exists +[AfterEach] [sig-api-machinery] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:39:49.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2728" for this suite. 
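+
+Consuming a Secret through the environment, as verified above, only needs a secretKeyRef in the container's env; the secret name, key and image below are illustrative:
+
+kubectl create secret generic demo-secret --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-env-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: env-test
+    image: busybox:1.33
+    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
+    env:
+    - name: SECRET_DATA
+      valueFrom:
+        secretKeyRef:
+          name: demo-secret
+          key: data-1
+EOF
+kubectl logs secret-env-demo    # prints SECRET_DATA=value-1 once the pod has completed
+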
+•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":311,"completed":185,"skipped":3213,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:39:49.056: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-7635 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:39:49.237: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c277ce33-9b39-45aa-9b4a-c0ac14236922" in namespace "security-context-test-7635" to be "Succeeded or Failed" +Feb 12 10:39:49.242: INFO: Pod "busybox-readonly-false-c277ce33-9b39-45aa-9b4a-c0ac14236922": Phase="Pending", Reason="", readiness=false. Elapsed: 5.114106ms +Feb 12 10:39:51.251: INFO: Pod "busybox-readonly-false-c277ce33-9b39-45aa-9b4a-c0ac14236922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014230783s +Feb 12 10:39:51.251: INFO: Pod "busybox-readonly-false-c277ce33-9b39-45aa-9b4a-c0ac14236922" satisfied condition "Succeeded or Failed" +[AfterEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:39:51.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-7635" for this suite. 
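+
+The writable-rootfs behaviour checked here comes down to a single securityContext field; a minimal sketch with placeholder names:
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: writable-rootfs-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: busybox-readonly-false
+    image: busybox:1.33
+    command: ["sh", "-c", "touch /probe && echo rootfs is writable"]
+    securityContext:
+      readOnlyRootFilesystem: false    # with true, the touch fails with a read-only file system error
+EOF
+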
+•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":311,"completed":186,"skipped":3254,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:39:51.270: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9411 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:39:51.447: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: client-side validation (kubectl create and apply) allows request with known and required properties +Feb 12 10:39:56.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 --namespace=crd-publish-openapi-9411 create -f -' +Feb 12 10:39:56.988: INFO: stderr: "" +Feb 12 10:39:56.988: INFO: stdout: "e2e-test-crd-publish-openapi-5971-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Feb 12 10:39:56.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 --namespace=crd-publish-openapi-9411 delete e2e-test-crd-publish-openapi-5971-crds test-foo' +Feb 12 10:39:57.145: INFO: stderr: "" +Feb 12 10:39:57.145: INFO: stdout: "e2e-test-crd-publish-openapi-5971-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Feb 12 10:39:57.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 --namespace=crd-publish-openapi-9411 apply -f -' +Feb 12 10:39:57.481: INFO: stderr: "" +Feb 12 10:39:57.481: INFO: stdout: "e2e-test-crd-publish-openapi-5971-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Feb 12 10:39:57.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 --namespace=crd-publish-openapi-9411 delete e2e-test-crd-publish-openapi-5971-crds test-foo' +Feb 12 10:39:57.654: INFO: stderr: "" +Feb 12 10:39:57.654: INFO: stdout: "e2e-test-crd-publish-openapi-5971-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Feb 12 10:39:57.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 --namespace=crd-publish-openapi-9411 create -f -' +Feb 12 10:39:57.986: INFO: rc: 1 +Feb 12 10:39:57.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 --namespace=crd-publish-openapi-9411 apply -f -' +Feb 12 
10:39:58.318: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request without required properties +Feb 12 10:39:58.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 --namespace=crd-publish-openapi-9411 create -f -' +Feb 12 10:39:58.714: INFO: rc: 1 +Feb 12 10:39:58.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 --namespace=crd-publish-openapi-9411 apply -f -' +Feb 12 10:39:59.151: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Feb 12 10:39:59.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 explain e2e-test-crd-publish-openapi-5971-crds' +Feb 12 10:39:59.511: INFO: stderr: "" +Feb 12 10:39:59.511: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5971-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Feb 12 10:39:59.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 explain e2e-test-crd-publish-openapi-5971-crds.metadata' +Feb 12 10:39:59.903: INFO: stderr: "" +Feb 12 10:39:59.903: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5971-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Feb 12 10:39:59.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 explain e2e-test-crd-publish-openapi-5971-crds.spec' +Feb 12 10:40:00.248: INFO: stderr: "" +Feb 12 10:40:00.248: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5971-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Feb 12 10:40:00.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 explain e2e-test-crd-publish-openapi-5971-crds.spec.bars' +Feb 12 10:40:00.581: INFO: stderr: "" +Feb 12 10:40:00.581: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5971-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist +Feb 12 10:40:00.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-9411 explain e2e-test-crd-publish-openapi-5971-crds.spec.bars2' +Feb 12 10:40:00.960: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:40:05.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-9411" for this suite. 
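
For reference, the kubectl explain calls above drill into the OpenAPI schema that the apiserver publishes for the custom resource. A minimal sketch of the same pattern against a hypothetical CRD whose plural is "foos" (names are placeholders, not taken from this run):

    kubectl explain foos.spec
    kubectl explain foos.spec.bars
    # a field that is not in the published schema fails with a non-zero exit code,
    # which is what the "rc: 1" above corresponds to
    kubectl explain foos.spec.bars2 || echo "field not found in published schema"
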
+ +• [SLOW TEST:14.530 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":311,"completed":187,"skipped":3259,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:40:05.802: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2302 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:40:07.022: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 10:40:09.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748723207, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748723207, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748723207, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748723207, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:40:12.064: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a 
configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:40:22.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2302" for this suite. +STEP: Destroying namespace "webhook-2302-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:16.545 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":311,"completed":188,"skipped":3278,"failed":0} +SSSSSSSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:40:22.347: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-260 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod liveness-c913e5df-68fb-4962-9801-8506b258bdc3 in namespace container-probe-260 +Feb 12 10:40:26.573: INFO: Started pod liveness-c913e5df-68fb-4962-9801-8506b258bdc3 in namespace container-probe-260 +STEP: checking the pod's current state and verifying that restartCount is present +Feb 12 10:40:26.578: INFO: Initial restart count of pod liveness-c913e5df-68fb-4962-9801-8506b258bdc3 is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:44:28.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-260" for this suite. 
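
For reference, a minimal sketch of the kind of pod this probe test exercises: a container that keeps listening on the probed port, so the TCP liveness probe never fails and restartCount stays 0. The pod name, image, and timings below are illustrative, not taken from this run:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-tcp-demo            # placeholder name
    spec:
      containers:
      - name: web
        image: python:3.9-slim           # any image that can listen on the probed port
        command: ["python", "-m", "http.server", "8080"]
        livenessProbe:
          tcpSocket:
            port: 8080                   # probe succeeds as long as the port accepts connections
          initialDelaySeconds: 15
          periodSeconds: 10
    EOF
    # the restart count should remain 0 while the port stays open
    kubectl get pod liveness-tcp-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
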
+ +• [SLOW TEST:246.028 seconds] +[k8s.io] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":311,"completed":189,"skipped":3286,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:44:28.379: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6920 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: set up a multi version CRD +Feb 12 10:44:28.560: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: rename a version +STEP: check the new version name is served +STEP: check the old version name is removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:44:58.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6920" for this suite. 
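
For reference, a minimal sketch of a multi-version CRD of the kind this test manipulates; renaming one entry in spec.versions makes the apiserver publish the new version name and drop the old one from the OpenAPI document. Group, kind, and version names are placeholders:

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
      versions:
      - name: v1                          # renaming this version republishes the spec under the new name
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
      - name: v1beta1
        served: true
        storage: false
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
    EOF
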
+ +• [SLOW TEST:29.925 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":311,"completed":190,"skipped":3320,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:44:58.304: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3827 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:44:58.473: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9e789a8-f938-4ce2-b593-b9787fa003b0" in namespace "downward-api-3827" to be "Succeeded or Failed" +Feb 12 10:44:58.479: INFO: Pod "downwardapi-volume-d9e789a8-f938-4ce2-b593-b9787fa003b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.559293ms +Feb 12 10:45:00.489: INFO: Pod "downwardapi-volume-d9e789a8-f938-4ce2-b593-b9787fa003b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016028778s +Feb 12 10:45:02.504: INFO: Pod "downwardapi-volume-d9e789a8-f938-4ce2-b593-b9787fa003b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031340354s +STEP: Saw pod success +Feb 12 10:45:02.504: INFO: Pod "downwardapi-volume-d9e789a8-f938-4ce2-b593-b9787fa003b0" satisfied condition "Succeeded or Failed" +Feb 12 10:45:02.508: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-d9e789a8-f938-4ce2-b593-b9787fa003b0 container client-container: +STEP: delete the pod +Feb 12 10:45:02.664: INFO: Waiting for pod downwardapi-volume-d9e789a8-f938-4ce2-b593-b9787fa003b0 to disappear +Feb 12 10:45:02.670: INFO: Pod downwardapi-volume-d9e789a8-f938-4ce2-b593-b9787fa003b0 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:45:02.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3827" for this suite. 
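
For reference, a minimal sketch of a downward API volume that exposes the container's CPU limit as a file, which is the mechanism this test verifies. Names and the limit value are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-cpu-demo             # placeholder name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: "500m"
            memory: "64Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m                  # file will contain "500" for a 500m limit
    EOF
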
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":311,"completed":191,"skipped":3326,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:45:02.680: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2443 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating the pod +Feb 12 10:45:05.407: INFO: Successfully updated pod "labelsupdate641a6ff6-6ba0-4c59-9655-c9398b0db7f0" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:45:07.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2443" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":311,"completed":192,"skipped":3332,"failed":0} +SSSSSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:45:07.455: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8347 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating service endpoint-test2 in namespace services-8347 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8347 to expose endpoints map[] +Feb 12 10:45:07.632: INFO: successfully validated that service endpoint-test2 in namespace services-8347 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-8347 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8347 to expose endpoints map[pod1:[80]] +Feb 12 10:45:10.679: INFO: successfully validated that service endpoint-test2 in namespace 
services-8347 exposes endpoints map[pod1:[80]] +STEP: Creating pod pod2 in namespace services-8347 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8347 to expose endpoints map[pod1:[80] pod2:[80]] +Feb 12 10:45:13.722: INFO: successfully validated that service endpoint-test2 in namespace services-8347 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Deleting pod pod1 in namespace services-8347 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8347 to expose endpoints map[pod2:[80]] +Feb 12 10:45:13.780: INFO: successfully validated that service endpoint-test2 in namespace services-8347 exposes endpoints map[pod2:[80]] +STEP: Deleting pod pod2 in namespace services-8347 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8347 to expose endpoints map[] +Feb 12 10:45:13.816: INFO: successfully validated that service endpoint-test2 in namespace services-8347 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:45:13.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8347" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:6.414 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":311,"completed":193,"skipped":3339,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:45:13.873: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8817 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:45:14.044: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a81c627-bac3-44d4-ae87-dd9e64e1f7d8" in namespace "projected-8817" to be "Succeeded or Failed" +Feb 12 10:45:14.051: INFO: Pod "downwardapi-volume-0a81c627-bac3-44d4-ae87-dd9e64e1f7d8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.858403ms +Feb 12 10:45:16.056: INFO: Pod "downwardapi-volume-0a81c627-bac3-44d4-ae87-dd9e64e1f7d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012304445s +STEP: Saw pod success +Feb 12 10:45:16.056: INFO: Pod "downwardapi-volume-0a81c627-bac3-44d4-ae87-dd9e64e1f7d8" satisfied condition "Succeeded or Failed" +Feb 12 10:45:16.060: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-0a81c627-bac3-44d4-ae87-dd9e64e1f7d8 container client-container: +STEP: delete the pod +Feb 12 10:45:16.086: INFO: Waiting for pod downwardapi-volume-0a81c627-bac3-44d4-ae87-dd9e64e1f7d8 to disappear +Feb 12 10:45:16.090: INFO: Pod downwardapi-volume-0a81c627-bac3-44d4-ae87-dd9e64e1f7d8 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:45:16.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8817" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":311,"completed":194,"skipped":3373,"failed":0} + +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:45:16.113: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7080 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 +STEP: Creating service test in namespace statefulset-7080 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-7080 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7080 +Feb 12 10:45:16.320: INFO: Found 0 stateful pods, waiting for 1 +Feb 12 10:45:26.337: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Feb 12 10:45:26.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7080 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Feb 12 10:45:26.839: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Feb 12 10:45:26.839: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Feb 12 10:45:26.839: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Feb 12 10:45:26.846: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Feb 12 10:45:36.862: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Feb 12 10:45:36.862: INFO: Waiting for statefulset status.replicas updated to 0 +Feb 12 10:45:36.883: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999387s +Feb 12 10:45:37.899: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995383808s +Feb 12 10:45:38.910: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.979733464s +Feb 12 10:45:39.916: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.969256675s +Feb 12 10:45:40.929: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.962464321s +Feb 12 10:45:41.939: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.949450347s +Feb 12 10:45:42.952: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.940313593s +Feb 12 10:45:43.957: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.926427099s +Feb 12 10:45:44.967: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.921664582s +Feb 12 10:45:45.978: INFO: Verifying statefulset ss doesn't scale past 1 for another 912.152662ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7080 +Feb 12 10:45:46.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7080 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:45:47.410: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Feb 12 10:45:47.410: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Feb 12 10:45:47.410: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Feb 12 10:45:47.418: INFO: Found 1 stateful pods, waiting for 3 +Feb 12 10:45:57.436: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 10:45:57.436: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Feb 12 10:45:57.436: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Feb 12 10:45:57.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7080 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Feb 12 10:45:57.849: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Feb 12 10:45:57.849: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Feb 12 10:45:57.849: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Feb 12 10:45:57.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7080 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Feb 12 10:45:58.368: INFO: stderr: 
"+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Feb 12 10:45:58.368: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Feb 12 10:45:58.368: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Feb 12 10:45:58.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7080 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Feb 12 10:45:58.776: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Feb 12 10:45:58.776: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Feb 12 10:45:58.776: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Feb 12 10:45:58.776: INFO: Waiting for statefulset status.replicas updated to 0 +Feb 12 10:45:58.784: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 +Feb 12 10:46:08.816: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Feb 12 10:46:08.816: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Feb 12 10:46:08.816: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Feb 12 10:46:08.838: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999586s +Feb 12 10:46:09.851: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991782644s +Feb 12 10:46:10.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978942051s +Feb 12 10:46:11.866: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.972684161s +Feb 12 10:46:12.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.964324537s +Feb 12 10:46:13.952: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952387637s +Feb 12 10:46:14.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.877526926s +Feb 12 10:46:15.974: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.868586168s +Feb 12 10:46:16.986: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.855854886s +Feb 12 10:46:18.000: INFO: Verifying statefulset ss doesn't scale past 3 for another 844.18967ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7080 +Feb 12 10:46:19.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7080 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:46:19.443: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Feb 12 10:46:19.443: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Feb 12 10:46:19.443: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Feb 12 10:46:19.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7080 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:46:19.863: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Feb 12 10:46:19.863: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Feb 12 10:46:19.863: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Feb 12 10:46:19.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=statefulset-7080 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Feb 12 10:46:20.261: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Feb 12 10:46:20.261: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Feb 12 10:46:20.261: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Feb 12 10:46:20.261: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 +Feb 12 10:47:50.299: INFO: Deleting all statefulset in ns statefulset-7080 +Feb 12 10:47:50.306: INFO: Scaling statefulset ss to 0 +Feb 12 10:47:50.323: INFO: Waiting for statefulset status.replicas updated to 0 +Feb 12 10:47:50.326: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:47:50.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7080" for this suite. + +• [SLOW TEST:154.247 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":311,"completed":195,"skipped":3373,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:47:50.365: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9960 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:47:50.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85d1cb2e-d878-449b-9490-d2de324e7588" in namespace "projected-9960" to be "Succeeded or Failed" +Feb 12 10:47:50.546: INFO: Pod "downwardapi-volume-85d1cb2e-d878-449b-9490-d2de324e7588": Phase="Pending", Reason="", readiness=false. Elapsed: 4.963736ms +Feb 12 10:47:52.557: INFO: Pod "downwardapi-volume-85d1cb2e-d878-449b-9490-d2de324e7588": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016484845s +STEP: Saw pod success +Feb 12 10:47:52.558: INFO: Pod "downwardapi-volume-85d1cb2e-d878-449b-9490-d2de324e7588" satisfied condition "Succeeded or Failed" +Feb 12 10:47:52.562: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-85d1cb2e-d878-449b-9490-d2de324e7588 container client-container: +STEP: delete the pod +Feb 12 10:47:52.652: INFO: Waiting for pod downwardapi-volume-85d1cb2e-d878-449b-9490-d2de324e7588 to disappear +Feb 12 10:47:52.656: INFO: Pod downwardapi-volume-85d1cb2e-d878-449b-9490-d2de324e7588 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:47:52.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9960" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":196,"skipped":3394,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:47:52.671: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2617 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Feb 12 10:47:52.855: INFO: Waiting up to 5m0s for pod "pod-649112f8-cda2-45a1-ae83-8a3fa849f950" in namespace "emptydir-2617" to be "Succeeded or Failed" +Feb 12 10:47:52.865: INFO: Pod "pod-649112f8-cda2-45a1-ae83-8a3fa849f950": Phase="Pending", Reason="", readiness=false. Elapsed: 9.524387ms +Feb 12 10:47:54.873: INFO: Pod "pod-649112f8-cda2-45a1-ae83-8a3fa849f950": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.018337841s +STEP: Saw pod success +Feb 12 10:47:54.873: INFO: Pod "pod-649112f8-cda2-45a1-ae83-8a3fa849f950" satisfied condition "Succeeded or Failed" +Feb 12 10:47:54.878: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-649112f8-cda2-45a1-ae83-8a3fa849f950 container test-container: +STEP: delete the pod +Feb 12 10:47:54.906: INFO: Waiting for pod pod-649112f8-cda2-45a1-ae83-8a3fa849f950 to disappear +Feb 12 10:47:54.913: INFO: Pod pod-649112f8-cda2-45a1-ae83-8a3fa849f950 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:47:54.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2617" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":197,"skipped":3418,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:47:54.927: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-1743 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:47:55.086: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Feb 12 10:48:00.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-1743 --namespace=crd-publish-openapi-1743 create -f -' +Feb 12 10:48:00.991: INFO: stderr: "" +Feb 12 10:48:00.991: INFO: stdout: "e2e-test-crd-publish-openapi-8239-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Feb 12 10:48:00.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-1743 --namespace=crd-publish-openapi-1743 delete e2e-test-crd-publish-openapi-8239-crds test-cr' +Feb 12 10:48:01.170: INFO: stderr: "" +Feb 12 10:48:01.170: INFO: stdout: "e2e-test-crd-publish-openapi-8239-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Feb 12 10:48:01.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-1743 --namespace=crd-publish-openapi-1743 apply -f -' +Feb 12 10:48:01.624: INFO: stderr: "" +Feb 12 10:48:01.624: INFO: stdout: "e2e-test-crd-publish-openapi-8239-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Feb 12 10:48:01.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-1743 --namespace=crd-publish-openapi-1743 delete 
e2e-test-crd-publish-openapi-8239-crds test-cr' +Feb 12 10:48:01.774: INFO: stderr: "" +Feb 12 10:48:01.774: INFO: stdout: "e2e-test-crd-publish-openapi-8239-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Feb 12 10:48:01.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-1743 explain e2e-test-crd-publish-openapi-8239-crds' +Feb 12 10:48:02.143: INFO: stderr: "" +Feb 12 10:48:02.143: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8239-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:48:07.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1743" for this suite. + +• [SLOW TEST:12.224 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":311,"completed":198,"skipped":3428,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:48:07.152: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-1923 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:48:07.311: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:48:09.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1923" for this suite. 
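
For reference, the websocket test above streams from the pod's "log" subresource. The same endpoint can also be reached without websockets; the namespace and pod name below are placeholders:

    kubectl logs <pod-name> -n <namespace>
    # the raw REST path the websocket client connects to:
    kubectl get --raw "/api/v1/namespaces/<namespace>/pods/<pod-name>/log"
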
+•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":311,"completed":199,"skipped":3461,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:48:09.400: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2299 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0777 on node default medium +Feb 12 10:48:09.570: INFO: Waiting up to 5m0s for pod "pod-007f7a1e-798e-4791-993a-1807494c8360" in namespace "emptydir-2299" to be "Succeeded or Failed" +Feb 12 10:48:09.575: INFO: Pod "pod-007f7a1e-798e-4791-993a-1807494c8360": Phase="Pending", Reason="", readiness=false. Elapsed: 4.670742ms +Feb 12 10:48:11.583: INFO: Pod "pod-007f7a1e-798e-4791-993a-1807494c8360": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012678812s +STEP: Saw pod success +Feb 12 10:48:11.583: INFO: Pod "pod-007f7a1e-798e-4791-993a-1807494c8360" satisfied condition "Succeeded or Failed" +Feb 12 10:48:11.587: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-007f7a1e-798e-4791-993a-1807494c8360 container test-container: +STEP: delete the pod +Feb 12 10:48:11.612: INFO: Waiting for pod pod-007f7a1e-798e-4791-993a-1807494c8360 to disappear +Feb 12 10:48:11.619: INFO: Pod pod-007f7a1e-798e-4791-993a-1807494c8360 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:48:11.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2299" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":200,"skipped":3502,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:48:11.628: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9517 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9517 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-9517 +STEP: creating replication controller externalsvc in namespace services-9517 +I0212 10:48:11.811642 22 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9517, replica count: 2 +I0212 10:48:14.862819 22 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName +Feb 12 10:48:14.908: INFO: Creating new exec pod +Feb 12 10:48:18.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-9517 exec execpodsnth7 -- /bin/sh -x -c nslookup clusterip-service.services-9517.svc.cluster.local' +Feb 12 10:48:19.392: INFO: stderr: "+ nslookup clusterip-service.services-9517.svc.cluster.local\n" +Feb 12 10:48:19.392: INFO: stdout: "Server:\t\t10.254.0.10\nAddress:\t10.254.0.10#53\n\nclusterip-service.services-9517.svc.cluster.local\tcanonical name = externalsvc.services-9517.svc.cluster.local.\nName:\texternalsvc.services-9517.svc.cluster.local\nAddress: 10.254.255.156\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-9517, will wait for the garbage collector to delete the pods +Feb 12 10:48:19.466: INFO: Deleting ReplicationController externalsvc took: 14.873045ms +Feb 12 10:48:20.467: INFO: Terminating ReplicationController externalsvc pods took: 1.000739579s +Feb 12 10:48:43.816: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:48:43.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9517" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:32.224 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":311,"completed":201,"skipped":3507,"failed":0} +SSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:48:43.855: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-3785 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Feb 12 10:48:44.023: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 10:48:48.931: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:49:09.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-3785" for this suite. 
+ +• [SLOW TEST:25.697 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":311,"completed":202,"skipped":3510,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:49:09.552: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-2434 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 +Feb 12 10:49:09.716: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Feb 12 10:49:09.726: INFO: Waiting for terminating namespaces to be deleted... 
+Feb 12 10:49:09.731: INFO: +Logging pods the apiserver thinks is on node k8s-calico-coreos-yo5lpoxhpdlk-node-0 before test +Feb 12 10:49:09.743: INFO: calico-node-kwp5z from kube-system started at 2021-02-09 10:10:52 +0000 UTC (1 container statuses recorded) +Feb 12 10:49:09.743: INFO: Container calico-node ready: true, restart count 0 +Feb 12 10:49:09.743: INFO: csi-cinder-nodeplugin-dlptl from kube-system started at 2021-02-09 10:11:22 +0000 UTC (2 container statuses recorded) +Feb 12 10:49:09.743: INFO: Container cinder-csi-plugin ready: true, restart count 0 +Feb 12 10:49:09.743: INFO: Container node-driver-registrar ready: true, restart count 0 +Feb 12 10:49:09.743: INFO: kube-dns-autoscaler-69ccc7c7c7-qwdlm from kube-system started at 2021-02-09 13:00:16 +0000 UTC (1 container statuses recorded) +Feb 12 10:49:09.743: INFO: Container autoscaler ready: true, restart count 0 +Feb 12 10:49:09.743: INFO: magnum-metrics-server-5c48f677d9-9t4sh from kube-system started at 2021-02-09 13:00:16 +0000 UTC (1 container statuses recorded) +Feb 12 10:49:09.743: INFO: Container metrics-server ready: true, restart count 0 +Feb 12 10:49:09.743: INFO: npd-ktpl7 from kube-system started at 2021-02-09 10:11:22 +0000 UTC (1 container statuses recorded) +Feb 12 10:49:09.743: INFO: Container node-problem-detector ready: true, restart count 0 +Feb 12 10:49:09.743: INFO: sonobuoy-e2e-job-49d5db3cb7e540b0 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 10:49:09.744: INFO: Container e2e ready: true, restart count 0 +Feb 12 10:49:09.744: INFO: Container sonobuoy-worker ready: true, restart count 0 +Feb 12 10:49:09.744: INFO: sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-vsns8 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 10:49:09.744: INFO: Container sonobuoy-worker ready: false, restart count 9 +Feb 12 10:49:09.744: INFO: Container systemd-logs ready: true, restart count 0 +Feb 12 10:49:09.744: INFO: +Logging pods the apiserver thinks is on node k8s-calico-coreos-yo5lpoxhpdlk-node-1 before test +Feb 12 10:49:09.752: INFO: calico-node-xf85t from kube-system started at 2021-02-09 10:10:53 +0000 UTC (1 container statuses recorded) +Feb 12 10:49:09.752: INFO: Container calico-node ready: true, restart count 0 +Feb 12 10:49:09.752: INFO: csi-cinder-nodeplugin-pgnxp from kube-system started at 2021-02-12 09:40:03 +0000 UTC (2 container statuses recorded) +Feb 12 10:49:09.752: INFO: Container cinder-csi-plugin ready: true, restart count 0 +Feb 12 10:49:09.752: INFO: Container node-driver-registrar ready: true, restart count 0 +Feb 12 10:49:09.752: INFO: npd-6phx9 from kube-system started at 2021-02-09 10:11:14 +0000 UTC (1 container statuses recorded) +Feb 12 10:49:09.752: INFO: Container node-problem-detector ready: true, restart count 0 +Feb 12 10:49:09.752: INFO: sonobuoy from sonobuoy started at 2021-02-12 09:27:24 +0000 UTC (1 container statuses recorded) +Feb 12 10:49:09.752: INFO: Container kube-sonobuoy ready: true, restart count 0 +Feb 12 10:49:09.752: INFO: sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-ns8g5 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 10:49:09.752: INFO: Container sonobuoy-worker ready: false, restart count 9 +Feb 12 10:49:09.753: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.1662fa6a03fde496], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.1662fa6a04f80957], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:49:10.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-2434" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":311,"completed":203,"skipped":3544,"failed":0} +SSSSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:49:10.828: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2251 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating service multi-endpoint-test in namespace services-2251 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2251 to expose endpoints map[] +Feb 12 10:49:11.003: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found +Feb 12 10:49:12.026: INFO: successfully validated that service multi-endpoint-test in namespace services-2251 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-2251 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2251 to expose endpoints map[pod1:[100]] +Feb 12 10:49:15.070: INFO: successfully validated that service multi-endpoint-test in namespace services-2251 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-2251 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2251 to expose endpoints map[pod1:[100] pod2:[101]] +Feb 12 10:49:18.127: INFO: successfully validated that service multi-endpoint-test in namespace services-2251 exposes endpoints map[pod1:[100] 
pod2:[101]] +STEP: Deleting pod pod1 in namespace services-2251 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2251 to expose endpoints map[pod2:[101]] +Feb 12 10:49:18.175: INFO: successfully validated that service multi-endpoint-test in namespace services-2251 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-2251 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2251 to expose endpoints map[] +Feb 12 10:49:18.217: INFO: successfully validated that service multi-endpoint-test in namespace services-2251 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:49:18.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2251" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:7.448 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":311,"completed":204,"skipped":3551,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:49:18.276: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-2207 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 +STEP: Creating service test in namespace statefulset-2207 +[It] should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating statefulset ss in namespace statefulset-2207 +Feb 12 10:49:18.459: INFO: Found 0 stateful pods, waiting for 1 +Feb 12 10:49:28.477: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 +Feb 12 10:49:28.504: INFO: Deleting 
all statefulset in ns statefulset-2207 +Feb 12 10:49:28.515: INFO: Scaling statefulset ss to 0 +Feb 12 10:49:48.576: INFO: Waiting for statefulset status.replicas updated to 0 +Feb 12 10:49:48.584: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:49:48.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-2207" for this suite. + +• [SLOW TEST:30.359 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":311,"completed":205,"skipped":3561,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:49:48.641: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6546 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod +STEP: waiting for pod running +STEP: creating a file in subpath +Feb 12 10:49:52.842: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6546 PodName:var-expansion-74d9279a-bcd8-4e69-877a-7d2859f492b0 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:49:52.842: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: test for file in mounted path +Feb 12 10:49:53.068: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6546 PodName:var-expansion-74d9279a-bcd8-4e69-877a-7d2859f492b0 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:49:53.068: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: updating the annotation value +Feb 12 10:49:53.816: INFO: Successfully updated pod "var-expansion-74d9279a-bcd8-4e69-877a-7d2859f492b0" +STEP: waiting for annotated pod running +STEP: deleting the pod gracefully +Feb 12 10:49:53.821: INFO: Deleting pod "var-expansion-74d9279a-bcd8-4e69-877a-7d2859f492b0" in namespace "var-expansion-6546" +Feb 12 10:49:53.827: INFO: Wait up to 5m0s 
for pod "var-expansion-74d9279a-bcd8-4e69-877a-7d2859f492b0" to be fully deleted +[AfterEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:50:43.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6546" for this suite. + +• [SLOW TEST:55.226 seconds] +[k8s.io] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":311,"completed":206,"skipped":3596,"failed":0} +SS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:50:43.867: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-1978 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod +Feb 12 10:50:44.045: INFO: PodSpec: initContainers in spec.initContainers +Feb 12 10:51:29.260: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fd0fa3fd-c55d-48b2-b843-1469610acda2", GenerateName:"", Namespace:"init-container-1978", SelfLink:"", UID:"be4ba888-8365-4975-b416-7de904fbfbfe", ResourceVersion:"586735", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63748723844, loc:(*time.Location)(0x7962e20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"45672917"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"10.100.45.61/32", "cni.projectcalico.org/podIPs":"10.100.45.61/32", "kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00134a060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00134a080)}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00134a0a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00134a0c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", 
Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00134a0e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00134a100)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jmb8t", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005c22040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"10.60.253.37/magnum/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jmb8t", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"10.60.253.37/magnum/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jmb8t", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jmb8t", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003ad20c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-calico-coreos-yo5lpoxhpdlk-node-1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00360c000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003ad2140)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003ad2160)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003ad2168), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003ad216c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0024d2050), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748723844, loc:(*time.Location)(0x7962e20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748723844, loc:(*time.Location)(0x7962e20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748723844, loc:(*time.Location)(0x7962e20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748723844, loc:(*time.Location)(0x7962e20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.0.0.234", PodIP:"10.100.45.61", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.100.45.61"}}, StartTime:(*v1.Time)(0xc00134a120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00360c0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00360c230)}, Ready:false, RestartCount:3, Image:"10.60.253.37/magnum/busybox:1.29", ImageID:"docker-pullable://10.60.253.37/magnum/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://672de1a2e37f1eeb37f0bcf3122d61a4eca9d432b8493d15315cbe32a0ce3678", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00134a160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"10.60.253.37/magnum/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00134a140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc003ad21ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:51:29.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-1978" for this suite. 
+ +• [SLOW TEST:45.415 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":311,"completed":207,"skipped":3598,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:51:29.284: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-8167 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod pod-subpath-test-secret-2vg2 +STEP: Creating a pod to test atomic-volume-subpath +Feb 12 10:51:29.480: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2vg2" in namespace "subpath-8167" to be "Succeeded or Failed" +Feb 12 10:51:29.486: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.270935ms +Feb 12 10:51:31.493: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. Elapsed: 2.012834497s +Feb 12 10:51:33.509: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. Elapsed: 4.028817338s +Feb 12 10:51:35.522: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. Elapsed: 6.04177785s +Feb 12 10:51:37.534: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. Elapsed: 8.05384583s +Feb 12 10:51:39.545: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. Elapsed: 10.064811465s +Feb 12 10:51:41.559: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. Elapsed: 12.078493077s +Feb 12 10:51:43.580: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. Elapsed: 14.099098686s +Feb 12 10:51:45.597: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. Elapsed: 16.11605528s +Feb 12 10:51:47.606: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. Elapsed: 18.125777714s +Feb 12 10:51:49.622: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. Elapsed: 20.14163263s +Feb 12 10:51:51.636: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.155234196s +Feb 12 10:51:53.653: INFO: Pod "pod-subpath-test-secret-2vg2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.172066001s +STEP: Saw pod success +Feb 12 10:51:53.653: INFO: Pod "pod-subpath-test-secret-2vg2" satisfied condition "Succeeded or Failed" +Feb 12 10:51:53.656: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-subpath-test-secret-2vg2 container test-container-subpath-secret-2vg2: +STEP: delete the pod +Feb 12 10:51:53.826: INFO: Waiting for pod pod-subpath-test-secret-2vg2 to disappear +Feb 12 10:51:53.832: INFO: Pod pod-subpath-test-secret-2vg2 no longer exists +STEP: Deleting pod pod-subpath-test-secret-2vg2 +Feb 12 10:51:53.832: INFO: Deleting pod "pod-subpath-test-secret-2vg2" in namespace "subpath-8167" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:51:53.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-8167" for this suite. + +• [SLOW TEST:24.567 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":311,"completed":208,"skipped":3610,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:51:53.857: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename replicaset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-3792 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:51:54.024: INFO: Creating ReplicaSet my-hostname-basic-3edbc52f-1447-4db4-b92b-35325f174fde +Feb 12 10:51:54.037: INFO: Pod name my-hostname-basic-3edbc52f-1447-4db4-b92b-35325f174fde: Found 0 pods out of 1 +Feb 12 10:51:59.051: INFO: Pod name my-hostname-basic-3edbc52f-1447-4db4-b92b-35325f174fde: Found 1 pods out of 1 +Feb 12 10:51:59.052: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-3edbc52f-1447-4db4-b92b-35325f174fde" is running +Feb 12 10:51:59.057: INFO: Pod "my-hostname-basic-3edbc52f-1447-4db4-b92b-35325f174fde-6k8tf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-12 10:51:54 +0000 UTC 
Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-12 10:51:55 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-12 10:51:55 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-12 10:51:54 +0000 UTC Reason: Message:}]) +Feb 12 10:51:59.058: INFO: Trying to dial the pod +Feb 12 10:52:04.089: INFO: Controller my-hostname-basic-3edbc52f-1447-4db4-b92b-35325f174fde: Got expected result from replica 1 [my-hostname-basic-3edbc52f-1447-4db4-b92b-35325f174fde-6k8tf]: "my-hostname-basic-3edbc52f-1447-4db4-b92b-35325f174fde-6k8tf", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:52:04.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3792" for this suite. + +• [SLOW TEST:10.248 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":311,"completed":209,"skipped":3720,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:52:04.107: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8566 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:52:04.293: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:52:06.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8566" for this suite. 
+•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":311,"completed":210,"skipped":3743,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:52:06.580: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename init-container +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-4847 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating the pod +Feb 12 10:52:06.801: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:52:12.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-4847" for this suite. + +• [SLOW TEST:5.943 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":311,"completed":211,"skipped":3758,"failed":0} +SSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:52:12.523: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1672 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name projected-configmap-test-volume-146e60f2-cccb-4039-b32c-648550162255 +STEP: Creating a pod to test consume configMaps +Feb 12 10:52:12.698: 
INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56d36722-2524-4934-9b80-178a39b1e8a0" in namespace "projected-1672" to be "Succeeded or Failed" +Feb 12 10:52:12.704: INFO: Pod "pod-projected-configmaps-56d36722-2524-4934-9b80-178a39b1e8a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.388651ms +Feb 12 10:52:14.722: INFO: Pod "pod-projected-configmaps-56d36722-2524-4934-9b80-178a39b1e8a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023906374s +STEP: Saw pod success +Feb 12 10:52:14.722: INFO: Pod "pod-projected-configmaps-56d36722-2524-4934-9b80-178a39b1e8a0" satisfied condition "Succeeded or Failed" +Feb 12 10:52:14.733: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-projected-configmaps-56d36722-2524-4934-9b80-178a39b1e8a0 container agnhost-container: +STEP: delete the pod +Feb 12 10:52:14.891: INFO: Waiting for pod pod-projected-configmaps-56d36722-2524-4934-9b80-178a39b1e8a0 to disappear +Feb 12 10:52:14.897: INFO: Pod pod-projected-configmaps-56d36722-2524-4934-9b80-178a39b1e8a0 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:52:14.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1672" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":212,"skipped":3761,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:52:14.908: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-7406 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: expected 0 pods, got 2 pods +STEP: Gathering metrics +Feb 12 10:52:16.235: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For 
namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +W0212 10:52:16.235730 22 metrics_grabber.go:98] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled. +W0212 10:52:16.235824 22 metrics_grabber.go:102] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +W0212 10:52:16.235840 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. +Feb 12 10:52:16.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-7406" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":311,"completed":213,"skipped":3780,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:52:16.252: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-6311 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 +Feb 12 10:52:16.411: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Feb 12 10:52:16.422: INFO: Waiting for terminating namespaces to be deleted... 
+Feb 12 10:52:16.427: INFO: +Logging pods the apiserver thinks is on node k8s-calico-coreos-yo5lpoxhpdlk-node-0 before test +Feb 12 10:52:16.439: INFO: calico-node-kwp5z from kube-system started at 2021-02-09 10:10:52 +0000 UTC (1 container statuses recorded) +Feb 12 10:52:16.439: INFO: Container calico-node ready: true, restart count 0 +Feb 12 10:52:16.439: INFO: csi-cinder-nodeplugin-dlptl from kube-system started at 2021-02-09 10:11:22 +0000 UTC (2 container statuses recorded) +Feb 12 10:52:16.439: INFO: Container cinder-csi-plugin ready: true, restart count 0 +Feb 12 10:52:16.439: INFO: Container node-driver-registrar ready: true, restart count 0 +Feb 12 10:52:16.439: INFO: kube-dns-autoscaler-69ccc7c7c7-qwdlm from kube-system started at 2021-02-09 13:00:16 +0000 UTC (1 container statuses recorded) +Feb 12 10:52:16.439: INFO: Container autoscaler ready: true, restart count 0 +Feb 12 10:52:16.439: INFO: magnum-metrics-server-5c48f677d9-9t4sh from kube-system started at 2021-02-09 13:00:16 +0000 UTC (1 container statuses recorded) +Feb 12 10:52:16.439: INFO: Container metrics-server ready: true, restart count 0 +Feb 12 10:52:16.439: INFO: npd-ktpl7 from kube-system started at 2021-02-09 10:11:22 +0000 UTC (1 container statuses recorded) +Feb 12 10:52:16.439: INFO: Container node-problem-detector ready: true, restart count 0 +Feb 12 10:52:16.439: INFO: sonobuoy-e2e-job-49d5db3cb7e540b0 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 10:52:16.439: INFO: Container e2e ready: true, restart count 0 +Feb 12 10:52:16.439: INFO: Container sonobuoy-worker ready: true, restart count 0 +Feb 12 10:52:16.439: INFO: sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-vsns8 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 10:52:16.439: INFO: Container sonobuoy-worker ready: false, restart count 9 +Feb 12 10:52:16.439: INFO: Container systemd-logs ready: true, restart count 0 +Feb 12 10:52:16.439: INFO: +Logging pods the apiserver thinks is on node k8s-calico-coreos-yo5lpoxhpdlk-node-1 before test +Feb 12 10:52:16.446: INFO: pod-init-6f00b337-c9ae-4eac-bb9f-032897cced58 from init-container-4847 started at 2021-02-12 10:52:06 +0000 UTC (1 container statuses recorded) +Feb 12 10:52:16.446: INFO: Container run1 ready: true, restart count 0 +Feb 12 10:52:16.446: INFO: calico-node-xf85t from kube-system started at 2021-02-09 10:10:53 +0000 UTC (1 container statuses recorded) +Feb 12 10:52:16.446: INFO: Container calico-node ready: true, restart count 0 +Feb 12 10:52:16.447: INFO: csi-cinder-nodeplugin-pgnxp from kube-system started at 2021-02-12 09:40:03 +0000 UTC (2 container statuses recorded) +Feb 12 10:52:16.447: INFO: Container cinder-csi-plugin ready: true, restart count 0 +Feb 12 10:52:16.447: INFO: Container node-driver-registrar ready: true, restart count 0 +Feb 12 10:52:16.447: INFO: npd-6phx9 from kube-system started at 2021-02-09 10:11:14 +0000 UTC (1 container statuses recorded) +Feb 12 10:52:16.447: INFO: Container node-problem-detector ready: true, restart count 0 +Feb 12 10:52:16.447: INFO: pod-exec-websocket-e2a7bd79-3fad-4ae7-b9dd-d8878ed1d347 from pods-8566 started at 2021-02-12 10:52:04 +0000 UTC (1 container statuses recorded) +Feb 12 10:52:16.447: INFO: Container main ready: true, restart count 0 +Feb 12 10:52:16.447: INFO: sonobuoy from sonobuoy started at 2021-02-12 09:27:24 +0000 UTC (1 container statuses recorded) +Feb 12 10:52:16.447: INFO: Container kube-sonobuoy ready: true, restart count 0 
+Feb 12 10:52:16.447: INFO: sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-ns8g5 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 10:52:16.447: INFO: Container sonobuoy-worker ready: false, restart count 9 +Feb 12 10:52:16.447: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-7070560c-c57f-4635-8d27-70615a2027b3 90 +STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled +STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.0.0.234 on the node which pod1 resides and expect scheduled +STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.0.0.234 but use UDP protocol on the node which pod2 resides +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 +Feb 12 10:52:30.671: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.0.0.234 http://127.0.0.1:54321/hostname] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:30.671: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.0.234, port: 54321 +Feb 12 10:52:30.952: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.0.0.234:54321/hostname] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:30.952: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.0.234, port: 54321 UDP +Feb 12 10:52:31.209: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.0.0.234 54321] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:31.209: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 +Feb 12 10:52:36.466: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.0.0.234 http://127.0.0.1:54321/hostname] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:36.466: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.0.234, port: 54321 +Feb 12 10:52:36.843: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.0.0.234:54321/hostname] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:36.843: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 
10.0.0.234, port: 54321 UDP +Feb 12 10:52:37.074: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.0.0.234 54321] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:37.074: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 +Feb 12 10:52:42.337: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.0.0.234 http://127.0.0.1:54321/hostname] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:42.337: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.0.234, port: 54321 +Feb 12 10:52:42.587: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.0.0.234:54321/hostname] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:42.588: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.0.234, port: 54321 UDP +Feb 12 10:52:42.827: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.0.0.234 54321] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:42.827: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 +Feb 12 10:52:48.074: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.0.0.234 http://127.0.0.1:54321/hostname] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:48.074: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.0.234, port: 54321 +Feb 12 10:52:48.329: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.0.0.234:54321/hostname] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:48.329: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.0.234, port: 54321 UDP +Feb 12 10:52:48.576: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.0.0.234 54321] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:48.576: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 +Feb 12 10:52:53.823: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.0.0.234 http://127.0.0.1:54321/hostname] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:53.823: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.0.234, 
port: 54321 +Feb 12 10:52:54.082: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.0.0.234:54321/hostname] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:54.082: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.0.234, port: 54321 UDP +Feb 12 10:52:54.333: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.0.0.234 54321] Namespace:sched-pred-6311 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 10:52:54.333: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: removing the label kubernetes.io/e2e-7070560c-c57f-4635-8d27-70615a2027b3 off the node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-7070560c-c57f-4635-8d27-70615a2027b3 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:52:59.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-6311" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 + +• [SLOW TEST:43.363 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":311,"completed":214,"skipped":3826,"failed":0} +SSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:52:59.615: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-6866 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6866.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6866.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6866.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6866.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6866.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6866.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Feb 12 10:53:11.847: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6866/dns-test-57ac6dba-0ca3-45cc-b190-9284fca361ca: the server could not find the requested resource (get pods dns-test-57ac6dba-0ca3-45cc-b190-9284fca361ca) +Feb 12 10:53:11.851: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6866/dns-test-57ac6dba-0ca3-45cc-b190-9284fca361ca: the server could not find the requested resource (get pods dns-test-57ac6dba-0ca3-45cc-b190-9284fca361ca) +Feb 12 10:53:11.863: INFO: Unable to read jessie_udp@PodARecord from pod dns-6866/dns-test-57ac6dba-0ca3-45cc-b190-9284fca361ca: the server could not find the requested resource (get pods dns-test-57ac6dba-0ca3-45cc-b190-9284fca361ca) +Feb 12 10:53:11.867: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6866/dns-test-57ac6dba-0ca3-45cc-b190-9284fca361ca: the server could not find the requested resource (get pods dns-test-57ac6dba-0ca3-45cc-b190-9284fca361ca) +Feb 12 10:53:11.867: INFO: Lookups using dns-6866/dns-test-57ac6dba-0ca3-45cc-b190-9284fca361ca failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord] + +Feb 12 10:53:16.906: INFO: DNS probes using dns-6866/dns-test-57ac6dba-0ca3-45cc-b190-9284fca361ca succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:53:16.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-6866" for this suite. 
+ +• [SLOW TEST:17.383 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":311,"completed":215,"skipped":3830,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:53:17.005: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1874 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Feb 12 10:53:17.293: INFO: Waiting up to 5m0s for pod "pod-fe08afcf-6719-463e-9b9d-ea092fc24d82" in namespace "emptydir-1874" to be "Succeeded or Failed" +Feb 12 10:53:17.304: INFO: Pod "pod-fe08afcf-6719-463e-9b9d-ea092fc24d82": Phase="Pending", Reason="", readiness=false. Elapsed: 10.98037ms +Feb 12 10:53:19.319: INFO: Pod "pod-fe08afcf-6719-463e-9b9d-ea092fc24d82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025313808s +Feb 12 10:53:21.329: INFO: Pod "pod-fe08afcf-6719-463e-9b9d-ea092fc24d82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035822531s +STEP: Saw pod success +Feb 12 10:53:21.329: INFO: Pod "pod-fe08afcf-6719-463e-9b9d-ea092fc24d82" satisfied condition "Succeeded or Failed" +Feb 12 10:53:21.335: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-fe08afcf-6719-463e-9b9d-ea092fc24d82 container test-container: +STEP: delete the pod +Feb 12 10:53:21.364: INFO: Waiting for pod pod-fe08afcf-6719-463e-9b9d-ea092fc24d82 to disappear +Feb 12 10:53:21.372: INFO: Pod pod-fe08afcf-6719-463e-9b9d-ea092fc24d82 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:53:21.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1874" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":216,"skipped":3832,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:53:21.390: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-9194 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a test namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-224 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-6758 +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:53:27.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-9194" for this suite. +STEP: Destroying namespace "nsdeletetest-224" for this suite. +Feb 12 10:53:27.921: INFO: Namespace nsdeletetest-224 was already deleted +STEP: Destroying namespace "nsdeletetest-6758" for this suite. 
+ +• [SLOW TEST:6.540 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":311,"completed":217,"skipped":3835,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:53:27.931: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-8901 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:53:28.141: INFO: Creating deployment "test-recreate-deployment" +Feb 12 10:53:28.148: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Feb 12 10:53:28.165: INFO: deployment "test-recreate-deployment" doesn't have the required revision set +Feb 12 10:53:30.177: INFO: Waiting deployment "test-recreate-deployment" to complete +Feb 12 10:53:30.181: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Feb 12 10:53:30.192: INFO: Updating deployment test-recreate-deployment +Feb 12 10:53:30.193: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 +Feb 12 10:53:30.270: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-8901 502e8099-15d6-40d3-9b88-b0d8bfb69ac6 587560 2 2021-02-12 10:53:28 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-02-12 10:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-12 10:53:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd 10.60.253.37/magnum/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005d0f4d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-02-12 10:53:30 +0000 UTC,LastTransitionTime:2021-02-12 10:53:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5d4bb669d6" is progressing.,LastUpdateTime:2021-02-12 10:53:30 +0000 UTC,LastTransitionTime:2021-02-12 10:53:28 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Feb 12 10:53:30.283: INFO: New ReplicaSet "test-recreate-deployment-5d4bb669d6" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-5d4bb669d6 deployment-8901 3d26b534-d509-4676-b377-c17ada1f95c1 587557 1 2021-02-12 10:53:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:5d4bb669d6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 502e8099-15d6-40d3-9b88-b0d8bfb69ac6 0xc005d0f847 0xc005d0f848}] [] [{kube-controller-manager Update apps/v1 2021-02-12 10:53:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"502e8099-15d6-40d3-9b88-b0d8bfb69ac6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5d4bb669d6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5d4bb669d6] map[] [] [] []} {[] [] [{httpd 10.60.253.37/magnum/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005d0f8c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Feb 12 10:53:30.283: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Feb 12 10:53:30.283: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-786dd7c454 deployment-8901 9616b8ca-e477-4fdf-9f64-4e36ccfc091e 587547 2 2021-02-12 10:53:28 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 502e8099-15d6-40d3-9b88-b0d8bfb69ac6 0xc005d0f937 0xc005d0f938}] [] [{kube-controller-manager Update apps/v1 2021-02-12 10:53:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"502e8099-15d6-40d3-9b88-b0d8bfb69ac6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 786dd7c454,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005d0f9c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Feb 12 10:53:30.289: INFO: Pod "test-recreate-deployment-5d4bb669d6-ql4k5" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-5d4bb669d6-ql4k5 test-recreate-deployment-5d4bb669d6- deployment-8901 086b11b2-ad21-442c-8865-ccec1736590b 587559 0 2021-02-12 10:53:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:5d4bb669d6] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-recreate-deployment-5d4bb669d6 3d26b534-d509-4676-b377-c17ada1f95c1 0xc005d0fdf7 0xc005d0fdf8}] [] [{kube-controller-manager Update v1 2021-02-12 10:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d26b534-d509-4676-b377-c17ada1f95c1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-12 10:53:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hgpd2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hgpd2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hgpd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:53:30 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 10:53:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.115,PodIP:,StartTime:2021-02-12 10:53:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:10.60.253.37/magnum/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:53:30.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-8901" for this suite. +•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":311,"completed":218,"skipped":3850,"failed":0} +SSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:53:30.306: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8589 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:53:31.628: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 10:53:33.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724011, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724011, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724011, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724011, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:53:36.668: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Registering the crd webhook via the AdmissionRegistration API +STEP: Creating a custom resource definition that should be denied by the webhook +Feb 12 10:53:36.711: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:53:36.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8589" for this suite. +STEP: Destroying namespace "webhook-8589-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:6.517 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":311,"completed":219,"skipped":3853,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:53:36.824: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-570 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 10:53:37.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69fa289f-51f3-47bc-9ce4-a3814407b475" in namespace "downward-api-570" to be "Succeeded or Failed" +Feb 12 10:53:37.145: INFO: Pod 
"downwardapi-volume-69fa289f-51f3-47bc-9ce4-a3814407b475": Phase="Pending", Reason="", readiness=false. Elapsed: 13.879976ms +Feb 12 10:53:39.157: INFO: Pod "downwardapi-volume-69fa289f-51f3-47bc-9ce4-a3814407b475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025831814s +STEP: Saw pod success +Feb 12 10:53:39.157: INFO: Pod "downwardapi-volume-69fa289f-51f3-47bc-9ce4-a3814407b475" satisfied condition "Succeeded or Failed" +Feb 12 10:53:39.159: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod downwardapi-volume-69fa289f-51f3-47bc-9ce4-a3814407b475 container client-container: +STEP: delete the pod +Feb 12 10:53:39.194: INFO: Waiting for pod downwardapi-volume-69fa289f-51f3-47bc-9ce4-a3814407b475 to disappear +Feb 12 10:53:39.221: INFO: Pod downwardapi-volume-69fa289f-51f3-47bc-9ce4-a3814407b475 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:53:39.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-570" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":311,"completed":220,"skipped":3866,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:53:39.250: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4989 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap that has name configmap-test-emptyKey-2993f529-b16f-42c5-ac70-364d634b1ea6 +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:53:39.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4989" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":311,"completed":221,"skipped":3886,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:53:39.458: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6015 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation +Feb 12 10:53:39.623: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 10:53:44.297: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:54:03.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6015" for this suite. + +• [SLOW TEST:23.771 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":311,"completed":222,"skipped":3892,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:54:03.229: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename security-context-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-5323 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:54:03.416: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ed8591c0-ead4-4119-8e79-27c6f41d139e" in namespace "security-context-test-5323" to be "Succeeded or Failed" +Feb 12 10:54:03.421: INFO: Pod "busybox-user-65534-ed8591c0-ead4-4119-8e79-27c6f41d139e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.501551ms +Feb 12 10:54:05.460: INFO: Pod "busybox-user-65534-ed8591c0-ead4-4119-8e79-27c6f41d139e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.04374205s +Feb 12 10:54:05.460: INFO: Pod "busybox-user-65534-ed8591c0-ead4-4119-8e79-27c6f41d139e" satisfied condition "Succeeded or Failed" +[AfterEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:54:05.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-5323" for this suite. +•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":223,"skipped":3912,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:54:05.474: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3398 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:54:06.709: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 10:54:08.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724046, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724046, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724046, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724046, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, 
CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:54:11.784: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:54:11.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3398" for this suite. +STEP: Destroying namespace "webhook-3398-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:6.549 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":311,"completed":224,"skipped":3943,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:54:12.023: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6815 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 10:54:12.641: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 10:54:14.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724052, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724052, 
loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724052, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724052, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 10:54:17.696: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 10:54:17.737: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3032-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:54:18.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6815" for this suite. +STEP: Destroying namespace "webhook-6815-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:6.950 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":311,"completed":225,"skipped":3958,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:54:18.974: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4625 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin 
+Feb 12 10:54:19.209: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9a1a8e7-811c-4b1e-90cd-5959e3da124c" in namespace "projected-4625" to be "Succeeded or Failed" +Feb 12 10:54:19.215: INFO: Pod "downwardapi-volume-b9a1a8e7-811c-4b1e-90cd-5959e3da124c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.169965ms +Feb 12 10:54:21.225: INFO: Pod "downwardapi-volume-b9a1a8e7-811c-4b1e-90cd-5959e3da124c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015334769s +STEP: Saw pod success +Feb 12 10:54:21.225: INFO: Pod "downwardapi-volume-b9a1a8e7-811c-4b1e-90cd-5959e3da124c" satisfied condition "Succeeded or Failed" +Feb 12 10:54:21.228: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-b9a1a8e7-811c-4b1e-90cd-5959e3da124c container client-container: +STEP: delete the pod +Feb 12 10:54:21.308: INFO: Waiting for pod downwardapi-volume-b9a1a8e7-811c-4b1e-90cd-5959e3da124c to disappear +Feb 12 10:54:21.312: INFO: Pod downwardapi-volume-b9a1a8e7-811c-4b1e-90cd-5959e3da124c no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:54:21.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4625" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":311,"completed":226,"skipped":3967,"failed":0} +S +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:54:21.326: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6774 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[BeforeEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: running the image 10.60.253.37/magnum/httpd:2.4.38-alpine +Feb 12 10:54:21.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-6774 run e2e-test-httpd-pod --restart=Never --image=10.60.253.37/magnum/httpd:2.4.38-alpine' +Feb 12 10:54:21.688: INFO: stderr: "" +Feb 12 10:54:21.688: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 +Feb 12 10:54:21.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-6774 delete pods e2e-test-httpd-pod' +Feb 12 
10:54:43.727: INFO: stderr: "" +Feb 12 10:54:43.727: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:54:43.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6774" for this suite. + +• [SLOW TEST:22.425 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517 + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":311,"completed":227,"skipped":3968,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:54:43.753: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-4388 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Feb 12 10:54:43.963: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4388 ec727b46-46ab-4bfd-b375-ebef5d4411a7 588219 0 2021-02-12 10:54:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-12 10:54:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 10:54:43.964: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4388 ec727b46-46ab-4bfd-b375-ebef5d4411a7 588220 0 2021-02-12 10:54:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-12 10:54:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 10:54:43.964: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4388 ec727b46-46ab-4bfd-b375-ebef5d4411a7 588221 0 2021-02-12 10:54:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test 
Update v1 2021-02-12 10:54:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Feb 12 10:54:54.017: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4388 ec727b46-46ab-4bfd-b375-ebef5d4411a7 588253 0 2021-02-12 10:54:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-12 10:54:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 10:54:54.017: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4388 ec727b46-46ab-4bfd-b375-ebef5d4411a7 588254 0 2021-02-12 10:54:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-12 10:54:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 10:54:54.018: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4388 ec727b46-46ab-4bfd-b375-ebef5d4411a7 588255 0 2021-02-12 10:54:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-12 10:54:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:54:54.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-4388" for this suite. + +• [SLOW TEST:10.277 seconds] +[sig-api-machinery] Watchers +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":311,"completed":228,"skipped":3971,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:54:54.034: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-9375 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ConfigMap +STEP: Ensuring resource quota status captures configMap creation +STEP: Deleting a ConfigMap +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:55:22.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9375" for this suite. + +• [SLOW TEST:28.241 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":311,"completed":229,"skipped":3993,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:55:22.277: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-9007 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a ReplicationController +STEP: waiting for RC to be added +STEP: waiting for available Replicas +STEP: patching ReplicationController +STEP: waiting for RC to be modified +STEP: patching ReplicationController status +STEP: waiting for RC to be modified +STEP: waiting for available Replicas +STEP: fetching ReplicationController status +STEP: patching ReplicationController scale +STEP: waiting for RC to be modified +STEP: waiting for ReplicationController's scale to be the max amount +STEP: fetching ReplicationController; ensuring that it's patched +STEP: updating ReplicationController status +STEP: waiting for RC to be modified +STEP: listing all ReplicationControllers +STEP: checking that ReplicationController has expected values +STEP: deleting ReplicationControllers by collection +STEP: waiting for ReplicationController to have a DELETED watchEvent +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:55:27.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-9007" for this suite. 
+ +• [SLOW TEST:5.360 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":311,"completed":230,"skipped":4000,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:55:27.643: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9582 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Feb 12 10:55:27.815: INFO: Waiting up to 5m0s for pod "pod-7f13461b-9bee-4e7a-a60e-f7c1468638fc" in namespace "emptydir-9582" to be "Succeeded or Failed" +Feb 12 10:55:27.824: INFO: Pod "pod-7f13461b-9bee-4e7a-a60e-f7c1468638fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.61748ms +Feb 12 10:55:29.836: INFO: Pod "pod-7f13461b-9bee-4e7a-a60e-f7c1468638fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020327012s +STEP: Saw pod success +Feb 12 10:55:29.836: INFO: Pod "pod-7f13461b-9bee-4e7a-a60e-f7c1468638fc" satisfied condition "Succeeded or Failed" +Feb 12 10:55:29.840: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-7f13461b-9bee-4e7a-a60e-f7c1468638fc container test-container: +STEP: delete the pod +Feb 12 10:55:29.865: INFO: Waiting for pod pod-7f13461b-9bee-4e7a-a60e-f7c1468638fc to disappear +Feb 12 10:55:29.869: INFO: Pod pod-7f13461b-9bee-4e7a-a60e-f7c1468638fc no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 10:55:29.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9582" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":231,"skipped":4026,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 10:55:29.882: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-3197 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 +Feb 12 10:55:30.088: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Feb 12 10:55:30.096: INFO: Waiting for terminating namespaces to be deleted... +Feb 12 10:55:30.102: INFO: +Logging pods the apiserver thinks is on node k8s-calico-coreos-yo5lpoxhpdlk-node-0 before test +Feb 12 10:55:30.114: INFO: calico-node-kwp5z from kube-system started at 2021-02-09 10:10:52 +0000 UTC (1 container statuses recorded) +Feb 12 10:55:30.114: INFO: Container calico-node ready: true, restart count 0 +Feb 12 10:55:30.114: INFO: csi-cinder-nodeplugin-dlptl from kube-system started at 2021-02-09 10:11:22 +0000 UTC (2 container statuses recorded) +Feb 12 10:55:30.114: INFO: Container cinder-csi-plugin ready: true, restart count 0 +Feb 12 10:55:30.114: INFO: Container node-driver-registrar ready: true, restart count 0 +Feb 12 10:55:30.114: INFO: kube-dns-autoscaler-69ccc7c7c7-qwdlm from kube-system started at 2021-02-09 13:00:16 +0000 UTC (1 container statuses recorded) +Feb 12 10:55:30.114: INFO: Container autoscaler ready: true, restart count 0 +Feb 12 10:55:30.114: INFO: magnum-metrics-server-5c48f677d9-9t4sh from kube-system started at 2021-02-09 13:00:16 +0000 UTC (1 container statuses recorded) +Feb 12 10:55:30.114: INFO: Container metrics-server ready: true, restart count 0 +Feb 12 10:55:30.114: INFO: npd-ktpl7 from kube-system started at 2021-02-09 10:11:22 +0000 UTC (1 container statuses recorded) +Feb 12 10:55:30.114: INFO: Container node-problem-detector ready: true, restart count 0 +Feb 12 10:55:30.114: INFO: rc-test-mqq9b from replication-controller-9007 started at 2021-02-12 10:55:24 +0000 UTC (1 container statuses recorded) +Feb 12 10:55:30.114: INFO: Container rc-test ready: true, restart count 0 +Feb 12 10:55:30.114: INFO: sonobuoy-e2e-job-49d5db3cb7e540b0 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 10:55:30.114: INFO: Container e2e ready: true, restart count 0 +Feb 12 10:55:30.115: INFO: Container sonobuoy-worker ready: true, restart count 0 +Feb 12 10:55:30.115: INFO: sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-vsns8 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 10:55:30.115: INFO: Container sonobuoy-worker ready: false, restart count 10 +Feb 
12 10:55:30.115: INFO: Container systemd-logs ready: true, restart count 0 +Feb 12 10:55:30.115: INFO: +Logging pods the apiserver thinks is on node k8s-calico-coreos-yo5lpoxhpdlk-node-1 before test +Feb 12 10:55:30.126: INFO: calico-node-xf85t from kube-system started at 2021-02-09 10:10:53 +0000 UTC (1 container statuses recorded) +Feb 12 10:55:30.126: INFO: Container calico-node ready: true, restart count 0 +Feb 12 10:55:30.126: INFO: csi-cinder-nodeplugin-pgnxp from kube-system started at 2021-02-12 09:40:03 +0000 UTC (2 container statuses recorded) +Feb 12 10:55:30.126: INFO: Container cinder-csi-plugin ready: true, restart count 0 +Feb 12 10:55:30.126: INFO: Container node-driver-registrar ready: true, restart count 0 +Feb 12 10:55:30.126: INFO: npd-6phx9 from kube-system started at 2021-02-09 10:11:14 +0000 UTC (1 container statuses recorded) +Feb 12 10:55:30.126: INFO: Container node-problem-detector ready: true, restart count 0 +Feb 12 10:55:30.126: INFO: rc-test-zhr2r from replication-controller-9007 started at 2021-02-12 10:55:22 +0000 UTC (1 container statuses recorded) +Feb 12 10:55:30.126: INFO: Container rc-test ready: true, restart count 0 +Feb 12 10:55:30.126: INFO: sonobuoy from sonobuoy started at 2021-02-12 09:27:24 +0000 UTC (1 container statuses recorded) +Feb 12 10:55:30.126: INFO: Container kube-sonobuoy ready: true, restart count 0 +Feb 12 10:55:30.126: INFO: sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-ns8g5 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 10:55:30.126: INFO: Container sonobuoy-worker ready: false, restart count 10 +Feb 12 10:55:30.126: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-93a2589b-dd8a-47c7-a4ba-15aa7a74df8a 95 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.0.0.234 on the node which pod4 resides and expect not scheduled +STEP: removing the label kubernetes.io/e2e-93a2589b-dd8a-47c7-a4ba-15aa7a74df8a off the node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-93a2589b-dd8a-47c7-a4ba-15aa7a74df8a +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:00:34.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-3197" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 + +• [SLOW TEST:304.424 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":311,"completed":232,"skipped":4076,"failed":0} +S +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:00:34.308: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename podtemplate +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in podtemplate-8005 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:00:34.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-8005" for this suite. 
+•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":311,"completed":233,"skipped":4077,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:00:34.535: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-36 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name configmap-test-volume-858439ab-d175-469d-a7d4-37abd64cc645 +STEP: Creating a pod to test consume configMaps +Feb 12 11:00:34.725: INFO: Waiting up to 5m0s for pod "pod-configmaps-b85ecfec-0a03-4c0d-9f8c-2d68034b7ba2" in namespace "configmap-36" to be "Succeeded or Failed" +Feb 12 11:00:34.732: INFO: Pod "pod-configmaps-b85ecfec-0a03-4c0d-9f8c-2d68034b7ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.32425ms +Feb 12 11:00:36.745: INFO: Pod "pod-configmaps-b85ecfec-0a03-4c0d-9f8c-2d68034b7ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01962537s +Feb 12 11:00:38.758: INFO: Pod "pod-configmaps-b85ecfec-0a03-4c0d-9f8c-2d68034b7ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033196048s +STEP: Saw pod success +Feb 12 11:00:38.758: INFO: Pod "pod-configmaps-b85ecfec-0a03-4c0d-9f8c-2d68034b7ba2" satisfied condition "Succeeded or Failed" +Feb 12 11:00:38.763: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-configmaps-b85ecfec-0a03-4c0d-9f8c-2d68034b7ba2 container configmap-volume-test: +STEP: delete the pod +Feb 12 11:00:38.926: INFO: Waiting for pod pod-configmaps-b85ecfec-0a03-4c0d-9f8c-2d68034b7ba2 to disappear +Feb 12 11:00:38.930: INFO: Pod pod-configmaps-b85ecfec-0a03-4c0d-9f8c-2d68034b7ba2 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:00:38.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-36" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":311,"completed":234,"skipped":4122,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:00:38.940: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7459 +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating secret with name s-test-opt-del-56188383-f7ce-4dc8-ab35-717dbbda310c +STEP: Creating secret with name s-test-opt-upd-3e887c69-a3fc-45b2-97ce-330e04d49044 +STEP: Creating the pod +STEP: Deleting secret s-test-opt-del-56188383-f7ce-4dc8-ab35-717dbbda310c +STEP: Updating secret s-test-opt-upd-3e887c69-a3fc-45b2-97ce-330e04d49044 +STEP: Creating secret with name s-test-opt-create-73805cc1-ca87-476c-9805-23ffaba42d9a +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:02:07.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7459" for this suite. 
+ +• [SLOW TEST:89.122 seconds] +[sig-storage] Projected secret +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":235,"skipped":4133,"failed":0} +SS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:02:08.064: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3099 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating the pod +Feb 12 11:02:12.899: INFO: Successfully updated pod "annotationupdate05b18d79-c333-4c25-8fde-8a566308636b" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:02:14.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3099" for this suite. 
+ +• [SLOW TEST:6.890 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":311,"completed":236,"skipped":4135,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:02:14.954: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7003 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 11:02:16.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 11:02:18.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724536, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724536, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724536, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724536, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 11:02:21.173: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Setting timeout (1s) shorter than webhook latency (5s) +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore +STEP: Registering slow webhook via the AdmissionRegistration 
API +STEP: Having no error when timeout is longer than webhook latency +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is empty (defaulted to 10s in v1) +STEP: Registering slow webhook via the AdmissionRegistration API +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:02:33.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7003" for this suite. +STEP: Destroying namespace "webhook-7003-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:18.505 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":311,"completed":237,"skipped":4139,"failed":0} +SSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:02:33.459: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9709 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:02:37.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-9709" for this suite. 
+•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":311,"completed":238,"skipped":4144,"failed":0} +SS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:02:37.829: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-7186 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:02:40.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-7186" for this suite. +•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":239,"skipped":4146,"failed":0} + +------------------------------ +[k8s.io] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:02:40.075: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-3695 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Feb 12 11:02:42.288: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container +[AfterEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:02:42.301: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-3695" for this suite. +•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":311,"completed":240,"skipped":4146,"failed":0} +SSSSSSSSSS +------------------------------ +[k8s.io] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:02:42.315: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-runtime +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9560 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:03:06.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9560" for this suite. 
+ +• [SLOW TEST:24.600 seconds] +[k8s.io] Container Runtime +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + blackbox test + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 + when starting a container that exits + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":311,"completed":241,"skipped":4156,"failed":0} +SSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:03:06.916: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename proxy +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-8524 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-54kfl in namespace proxy-8524 +I0212 11:03:07.121967 22 runners.go:190] Created replication controller with name: proxy-service-54kfl, namespace: proxy-8524, replica count: 1 +I0212 11:03:08.172648 22 runners.go:190] proxy-service-54kfl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0212 11:03:09.174305 22 runners.go:190] proxy-service-54kfl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0212 11:03:10.177776 22 runners.go:190] proxy-service-54kfl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0212 11:03:11.178388 22 runners.go:190] proxy-service-54kfl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0212 11:03:12.179117 22 runners.go:190] proxy-service-54kfl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0212 11:03:13.179824 22 runners.go:190] proxy-service-54kfl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 11:03:13.195: INFO: setup took 6.115744849s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Feb 12 11:03:13.205: INFO: (0) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 9.714875ms) +Feb 12 11:03:13.211: INFO: (0) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 16.025507ms) +Feb 12 
11:03:13.214: INFO: (0) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 18.765514ms) +Feb 12 11:03:13.215: INFO: (0) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 18.923518ms) +Feb 12 11:03:13.216: INFO: (0) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 20.647914ms) +Feb 12 11:03:13.216: INFO: (0) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... (200; 20.965433ms) +Feb 12 11:03:13.219: INFO: (0) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 23.143037ms) +Feb 12 11:03:13.219: INFO: (0) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 23.123238ms) +Feb 12 11:03:13.219: INFO: (0) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 23.558879ms) +Feb 12 11:03:13.219: INFO: (0) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 23.644449ms) +Feb 12 11:03:13.220: INFO: (0) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 24.500523ms) +Feb 12 11:03:13.222: INFO: (0) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 26.776909ms) +Feb 12 11:03:13.222: INFO: (0) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 26.694509ms) +Feb 12 11:03:13.223: INFO: (0) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test (200; 6.921115ms) +Feb 12 11:03:13.235: INFO: (1) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 8.085028ms) +Feb 12 11:03:13.235: INFO: (1) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 7.428583ms) +Feb 12 11:03:13.235: INFO: (1) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test<... (200; 11.245267ms) +Feb 12 11:03:13.240: INFO: (1) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 12.341886ms) +Feb 12 11:03:13.244: INFO: (2) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 3.990721ms) +Feb 12 11:03:13.246: INFO: (2) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 6.847944ms) +Feb 12 11:03:13.249: INFO: (2) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... (200; 9.1799ms) +Feb 12 11:03:13.249: INFO: (2) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 9.265057ms) +Feb 12 11:03:13.250: INFO: (2) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: ... 
(200; 10.212047ms) +Feb 12 11:03:13.250: INFO: (2) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 9.420254ms) +Feb 12 11:03:13.251: INFO: (2) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 11.058623ms) +Feb 12 11:03:13.251: INFO: (2) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname2/proxy/: tls qux (200; 11.354424ms) +Feb 12 11:03:13.252: INFO: (2) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 11.529615ms) +Feb 12 11:03:13.252: INFO: (2) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 12.055012ms) +Feb 12 11:03:13.257: INFO: (3) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 4.240817ms) +Feb 12 11:03:13.258: INFO: (3) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 5.315635ms) +Feb 12 11:03:13.258: INFO: (3) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 5.430045ms) +Feb 12 11:03:13.259: INFO: (3) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 6.323417ms) +Feb 12 11:03:13.259: INFO: (3) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... (200; 6.351492ms) +Feb 12 11:03:13.259: INFO: (3) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: ... (200; 8.086213ms) +Feb 12 11:03:13.261: INFO: (3) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 8.179357ms) +Feb 12 11:03:13.262: INFO: (3) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 9.656262ms) +Feb 12 11:03:13.263: INFO: (3) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 10.05954ms) +Feb 12 11:03:13.263: INFO: (3) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 10.525258ms) +Feb 12 11:03:13.263: INFO: (3) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname2/proxy/: tls qux (200; 10.233513ms) +Feb 12 11:03:13.267: INFO: (4) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 3.997525ms) +Feb 12 11:03:13.268: INFO: (4) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 4.408298ms) +Feb 12 11:03:13.268: INFO: (4) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:462/proxy/: tls qux (200; 4.859953ms) +Feb 12 11:03:13.270: INFO: (4) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 5.91427ms) +Feb 12 11:03:13.270: INFO: (4) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... (200; 6.639577ms) +Feb 12 11:03:13.270: INFO: (4) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname2/proxy/: tls qux (200; 5.79474ms) +Feb 12 11:03:13.270: INFO: (4) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 5.590275ms) +Feb 12 11:03:13.271: INFO: (4) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... 
(200; 6.588028ms) +Feb 12 11:03:13.272: INFO: (4) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 7.510236ms) +Feb 12 11:03:13.272: INFO: (4) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 7.685281ms) +Feb 12 11:03:13.272: INFO: (4) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 7.912927ms) +Feb 12 11:03:13.272: INFO: (4) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 7.926116ms) +Feb 12 11:03:13.272: INFO: (4) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 8.036039ms) +Feb 12 11:03:13.273: INFO: (4) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 9.680353ms) +Feb 12 11:03:13.273: INFO: (4) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: ... (200; 9.475292ms) +Feb 12 11:03:13.283: INFO: (5) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 9.578629ms) +Feb 12 11:03:13.283: INFO: (5) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 9.666247ms) +Feb 12 11:03:13.283: INFO: (5) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 9.613072ms) +Feb 12 11:03:13.283: INFO: (5) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 9.534676ms) +Feb 12 11:03:13.283: INFO: (5) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 9.68458ms) +Feb 12 11:03:13.283: INFO: (5) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... (200; 9.679357ms) +Feb 12 11:03:13.283: INFO: (5) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 9.659635ms) +Feb 12 11:03:13.283: INFO: (5) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: ... (200; 6.653656ms) +Feb 12 11:03:13.294: INFO: (6) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 7.803356ms) +Feb 12 11:03:13.294: INFO: (6) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 7.81033ms) +Feb 12 11:03:13.294: INFO: (6) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... 
(200; 7.699863ms) +Feb 12 11:03:13.294: INFO: (6) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 7.759835ms) +Feb 12 11:03:13.295: INFO: (6) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 8.716863ms) +Feb 12 11:03:13.295: INFO: (6) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 8.800794ms) +Feb 12 11:03:13.295: INFO: (6) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname2/proxy/: tls qux (200; 8.921722ms) +Feb 12 11:03:13.295: INFO: (6) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 9.058366ms) +Feb 12 11:03:13.296: INFO: (6) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 9.093685ms) +Feb 12 11:03:13.296: INFO: (6) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 9.512798ms) +Feb 12 11:03:13.296: INFO: (6) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 9.953204ms) +Feb 12 11:03:13.301: INFO: (7) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 3.955632ms) +Feb 12 11:03:13.303: INFO: (7) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 6.191723ms) +Feb 12 11:03:13.303: INFO: (7) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 6.347029ms) +Feb 12 11:03:13.303: INFO: (7) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test<... (200; 11.009089ms) +Feb 12 11:03:13.308: INFO: (7) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 11.176968ms) +Feb 12 11:03:13.308: INFO: (7) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 11.431886ms) +Feb 12 11:03:13.314: INFO: (8) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 5.781915ms) +Feb 12 11:03:13.316: INFO: (8) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 7.136173ms) +Feb 12 11:03:13.316: INFO: (8) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 7.903887ms) +Feb 12 11:03:13.317: INFO: (8) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 8.457776ms) +Feb 12 11:03:13.318: INFO: (8) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname2/proxy/: tls qux (200; 9.112492ms) +Feb 12 11:03:13.318: INFO: (8) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 9.494657ms) +Feb 12 11:03:13.318: INFO: (8) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 9.494332ms) +Feb 12 11:03:13.318: INFO: (8) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test (200; 9.698103ms) +Feb 12 11:03:13.319: INFO: (8) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 10.365324ms) +Feb 12 11:03:13.319: INFO: (8) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:462/proxy/: tls qux (200; 10.079479ms) +Feb 12 11:03:13.319: INFO: (8) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... 
(200; 10.460413ms) +Feb 12 11:03:13.319: INFO: (8) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 10.493081ms) +Feb 12 11:03:13.319: INFO: (8) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 10.593168ms) +Feb 12 11:03:13.320: INFO: (8) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 11.421315ms) +Feb 12 11:03:13.321: INFO: (8) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 11.809042ms) +Feb 12 11:03:13.325: INFO: (9) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 3.547711ms) +Feb 12 11:03:13.326: INFO: (9) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 4.318371ms) +Feb 12 11:03:13.326: INFO: (9) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 4.097677ms) +Feb 12 11:03:13.327: INFO: (9) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 4.067528ms) +Feb 12 11:03:13.327: INFO: (9) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test<... (200; 7.535575ms) +Feb 12 11:03:13.330: INFO: (9) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 7.43293ms) +Feb 12 11:03:13.331: INFO: (9) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 7.95278ms) +Feb 12 11:03:13.331: INFO: (9) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 8.751667ms) +Feb 12 11:03:13.332: INFO: (9) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 8.762966ms) +Feb 12 11:03:13.336: INFO: (10) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 3.718783ms) +Feb 12 11:03:13.336: INFO: (10) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... (200; 4.173045ms) +Feb 12 11:03:13.337: INFO: (10) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 5.086913ms) +Feb 12 11:03:13.337: INFO: (10) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname2/proxy/: tls qux (200; 5.65981ms) +Feb 12 11:03:13.338: INFO: (10) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 6.122903ms) +Feb 12 11:03:13.339: INFO: (10) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 6.40312ms) +Feb 12 11:03:13.340: INFO: (10) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 8.27962ms) +Feb 12 11:03:13.340: INFO: (10) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 8.393412ms) +Feb 12 11:03:13.341: INFO: (10) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:462/proxy/: tls qux (200; 8.155991ms) +Feb 12 11:03:13.341: INFO: (10) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 8.79858ms) +Feb 12 11:03:13.341: INFO: (10) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 8.544917ms) +Feb 12 11:03:13.341: INFO: (10) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test<... 
(200; 7.299325ms) +Feb 12 11:03:13.351: INFO: (11) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 8.119606ms) +Feb 12 11:03:13.351: INFO: (11) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 7.93588ms) +Feb 12 11:03:13.351: INFO: (11) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 7.952763ms) +Feb 12 11:03:13.351: INFO: (11) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 8.13075ms) +Feb 12 11:03:13.351: INFO: (11) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:462/proxy/: tls qux (200; 8.086499ms) +Feb 12 11:03:13.351: INFO: (11) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: ... (200; 7.996713ms) +Feb 12 11:03:13.352: INFO: (11) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 9.647178ms) +Feb 12 11:03:13.352: INFO: (11) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 9.72936ms) +Feb 12 11:03:13.352: INFO: (11) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 9.737214ms) +Feb 12 11:03:13.358: INFO: (12) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 5.288186ms) +Feb 12 11:03:13.358: INFO: (12) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 5.4937ms) +Feb 12 11:03:13.358: INFO: (12) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:462/proxy/: tls qux (200; 5.624293ms) +Feb 12 11:03:13.358: INFO: (12) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 5.609981ms) +Feb 12 11:03:13.358: INFO: (12) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 5.470051ms) +Feb 12 11:03:13.360: INFO: (12) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname2/proxy/: tls qux (200; 7.186278ms) +Feb 12 11:03:13.360: INFO: (12) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 7.942564ms) +Feb 12 11:03:13.360: INFO: (12) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 7.913032ms) +Feb 12 11:03:13.360: INFO: (12) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 7.897333ms) +Feb 12 11:03:13.361: INFO: (12) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 8.338687ms) +Feb 12 11:03:13.361: INFO: (12) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 8.32019ms) +Feb 12 11:03:13.361: INFO: (12) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... (200; 8.499817ms) +Feb 12 11:03:13.362: INFO: (12) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test<... (200; 7.893131ms) +Feb 12 11:03:13.370: INFO: (13) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... 
(200; 8.248103ms) +Feb 12 11:03:13.371: INFO: (13) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 8.152845ms) +Feb 12 11:03:13.370: INFO: (13) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test (200; 8.472989ms) +Feb 12 11:03:13.371: INFO: (13) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 9.143647ms) +Feb 12 11:03:13.372: INFO: (13) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 9.737442ms) +Feb 12 11:03:13.372: INFO: (13) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 9.75808ms) +Feb 12 11:03:13.376: INFO: (14) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:462/proxy/: tls qux (200; 3.14961ms) +Feb 12 11:03:13.376: INFO: (14) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 3.314135ms) +Feb 12 11:03:13.378: INFO: (14) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 5.219814ms) +Feb 12 11:03:13.379: INFO: (14) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 6.242044ms) +Feb 12 11:03:13.380: INFO: (14) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 7.521959ms) +Feb 12 11:03:13.380: INFO: (14) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 7.624379ms) +Feb 12 11:03:13.382: INFO: (14) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 9.045744ms) +Feb 12 11:03:13.382: INFO: (14) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test<... (200; 10.479324ms) +Feb 12 11:03:13.383: INFO: (14) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 10.492209ms) +Feb 12 11:03:13.384: INFO: (14) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 11.729313ms) +Feb 12 11:03:13.385: INFO: (14) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 11.997044ms) +Feb 12 11:03:13.389: INFO: (15) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:462/proxy/: tls qux (200; 3.811684ms) +Feb 12 11:03:13.389: INFO: (15) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 4.235298ms) +Feb 12 11:03:13.389: INFO: (15) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test (200; 7.428021ms) +Feb 12 11:03:13.393: INFO: (15) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 8.185636ms) +Feb 12 11:03:13.393: INFO: (15) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 7.794399ms) +Feb 12 11:03:13.394: INFO: (15) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 8.184304ms) +Feb 12 11:03:13.395: INFO: (15) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 9.069326ms) +Feb 12 11:03:13.395: INFO: (15) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... 
(200; 9.193923ms) +Feb 12 11:03:13.395: INFO: (15) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 9.524822ms) +Feb 12 11:03:13.395: INFO: (15) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 9.806505ms) +Feb 12 11:03:13.395: INFO: (15) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 9.634188ms) +Feb 12 11:03:13.395: INFO: (15) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 9.800795ms) +Feb 12 11:03:13.401: INFO: (16) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 5.728521ms) +Feb 12 11:03:13.401: INFO: (16) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 6.082335ms) +Feb 12 11:03:13.403: INFO: (16) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:462/proxy/: tls qux (200; 7.612965ms) +Feb 12 11:03:13.403: INFO: (16) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 7.407591ms) +Feb 12 11:03:13.403: INFO: (16) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 7.375088ms) +Feb 12 11:03:13.403: INFO: (16) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 7.913319ms) +Feb 12 11:03:13.403: INFO: (16) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... (200; 8.095192ms) +Feb 12 11:03:13.403: INFO: (16) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 8.217684ms) +Feb 12 11:03:13.403: INFO: (16) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 8.226385ms) +Feb 12 11:03:13.406: INFO: (16) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 10.889572ms) +Feb 12 11:03:13.406: INFO: (16) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 11.026132ms) +Feb 12 11:03:13.406: INFO: (16) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 10.895342ms) +Feb 12 11:03:13.406: INFO: (16) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test (200; 6.240884ms) +Feb 12 11:03:13.413: INFO: (17) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test<... (200; 7.460874ms) +Feb 12 11:03:13.415: INFO: (17) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 7.611425ms) +Feb 12 11:03:13.415: INFO: (17) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 7.701509ms) +Feb 12 11:03:13.415: INFO: (17) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 7.984377ms) +Feb 12 11:03:13.416: INFO: (17) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 8.689911ms) +Feb 12 11:03:13.416: INFO: (17) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 8.891976ms) +Feb 12 11:03:13.416: INFO: (17) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... 
(200; 8.772102ms) +Feb 12 11:03:13.416: INFO: (17) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname2/proxy/: tls qux (200; 9.470308ms) +Feb 12 11:03:13.417: INFO: (17) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 9.994621ms) +Feb 12 11:03:13.421: INFO: (18) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw/proxy/: test (200; 3.985059ms) +Feb 12 11:03:13.421: INFO: (18) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 4.160799ms) +Feb 12 11:03:13.423: INFO: (18) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 6.022444ms) +Feb 12 11:03:13.424: INFO: (18) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname2/proxy/: bar (200; 6.783395ms) +Feb 12 11:03:13.424: INFO: (18) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test<... (200; 7.209027ms) +Feb 12 11:03:13.425: INFO: (18) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 6.846788ms) +Feb 12 11:03:13.425: INFO: (18) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname2/proxy/: tls qux (200; 7.712813ms) +Feb 12 11:03:13.426: INFO: (18) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 8.62952ms) +Feb 12 11:03:13.426: INFO: (18) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 8.629974ms) +Feb 12 11:03:13.427: INFO: (18) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 9.86459ms) +Feb 12 11:03:13.427: INFO: (18) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 10.124433ms) +Feb 12 11:03:13.427: INFO: (18) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 10.124613ms) +Feb 12 11:03:13.427: INFO: (18) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:162/proxy/: bar (200; 10.102133ms) +Feb 12 11:03:13.428: INFO: (18) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:462/proxy/: tls qux (200; 10.493667ms) +Feb 12 11:03:13.428: INFO: (18) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname2/proxy/: bar (200; 10.605928ms) +Feb 12 11:03:13.431: INFO: (19) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 3.149009ms) +Feb 12 11:03:13.432: INFO: (19) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:462/proxy/: tls qux (200; 4.266088ms) +Feb 12 11:03:13.432: INFO: (19) /api/v1/namespaces/proxy-8524/pods/http:proxy-service-54kfl-jvhlw:1080/proxy/: ... (200; 4.208214ms) +Feb 12 11:03:13.434: INFO: (19) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:1080/proxy/: test<... 
(200; 5.943808ms) +Feb 12 11:03:13.435: INFO: (19) /api/v1/namespaces/proxy-8524/services/proxy-service-54kfl:portname1/proxy/: foo (200; 7.009575ms) +Feb 12 11:03:13.437: INFO: (19) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:443/proxy/: test (200; 9.223606ms) +Feb 12 11:03:13.437: INFO: (19) /api/v1/namespaces/proxy-8524/pods/proxy-service-54kfl-jvhlw:160/proxy/: foo (200; 9.167905ms) +Feb 12 11:03:13.437: INFO: (19) /api/v1/namespaces/proxy-8524/pods/https:proxy-service-54kfl-jvhlw:460/proxy/: tls baz (200; 9.155108ms) +Feb 12 11:03:13.437: INFO: (19) /api/v1/namespaces/proxy-8524/services/http:proxy-service-54kfl:portname1/proxy/: foo (200; 9.282286ms) +Feb 12 11:03:13.437: INFO: (19) /api/v1/namespaces/proxy-8524/services/https:proxy-service-54kfl:tlsportname1/proxy/: tls baz (200; 9.373058ms) +STEP: deleting ReplicationController proxy-service-54kfl in namespace proxy-8524, will wait for the garbage collector to delete the pods +Feb 12 11:03:13.505: INFO: Deleting ReplicationController proxy-service-54kfl took: 11.952226ms +Feb 12 11:03:14.505: INFO: Terminating ReplicationController proxy-service-54kfl pods took: 1.000276899s +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:03:23.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-8524" for this suite. + +• [SLOW TEST:16.813 seconds] +[sig-network] Proxy +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":311,"completed":242,"skipped":4159,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:03:23.735: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-4332 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating replication controller my-hostname-basic-1b4b53ad-1c17-4b82-b0a8-9d02b6552f52 +Feb 12 11:03:23.912: INFO: Pod name my-hostname-basic-1b4b53ad-1c17-4b82-b0a8-9d02b6552f52: Found 0 pods out of 1 +Feb 12 11:03:28.929: INFO: Pod name 
my-hostname-basic-1b4b53ad-1c17-4b82-b0a8-9d02b6552f52: Found 1 pods out of 1 +Feb 12 11:03:28.930: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1b4b53ad-1c17-4b82-b0a8-9d02b6552f52" are running +Feb 12 11:03:28.936: INFO: Pod "my-hostname-basic-1b4b53ad-1c17-4b82-b0a8-9d02b6552f52-vngzv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-12 11:03:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-12 11:03:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-12 11:03:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-12 11:03:23 +0000 UTC Reason: Message:}]) +Feb 12 11:03:28.937: INFO: Trying to dial the pod +Feb 12 11:03:33.958: INFO: Controller my-hostname-basic-1b4b53ad-1c17-4b82-b0a8-9d02b6552f52: Got expected result from replica 1 [my-hostname-basic-1b4b53ad-1c17-4b82-b0a8-9d02b6552f52-vngzv]: "my-hostname-basic-1b4b53ad-1c17-4b82-b0a8-9d02b6552f52-vngzv", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:03:33.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-4332" for this suite. + +• [SLOW TEST:10.239 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":311,"completed":243,"skipped":4181,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:03:33.979: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-2273 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 +[It] should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Create set of pods +Feb 12 11:03:34.147: INFO: created test-pod-1 +Feb 12 11:03:34.161: INFO: created test-pod-2 +Feb 12 11:03:34.174: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be located +STEP: waiting for all pods to be deleted +[AfterEach] [k8s.io] Pods + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:03:34.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2273" for this suite. +•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":311,"completed":244,"skipped":4202,"failed":0} + +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:03:34.229: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7338 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating secret with name secret-test-dfc2344c-fb7c-4adc-b6b4-397c3fd82ede +STEP: Creating a pod to test consume secrets +Feb 12 11:03:34.404: INFO: Waiting up to 5m0s for pod "pod-secrets-b4f32543-e34e-4ec5-a6fc-06442a1f93c7" in namespace "secrets-7338" to be "Succeeded or Failed" +Feb 12 11:03:34.411: INFO: Pod "pod-secrets-b4f32543-e34e-4ec5-a6fc-06442a1f93c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.945263ms +Feb 12 11:03:36.423: INFO: Pod "pod-secrets-b4f32543-e34e-4ec5-a6fc-06442a1f93c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018535348s +STEP: Saw pod success +Feb 12 11:03:36.423: INFO: Pod "pod-secrets-b4f32543-e34e-4ec5-a6fc-06442a1f93c7" satisfied condition "Succeeded or Failed" +Feb 12 11:03:36.426: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-secrets-b4f32543-e34e-4ec5-a6fc-06442a1f93c7 container secret-volume-test: +STEP: delete the pod +Feb 12 11:03:36.469: INFO: Waiting for pod pod-secrets-b4f32543-e34e-4ec5-a6fc-06442a1f93c7 to disappear +Feb 12 11:03:36.478: INFO: Pod pod-secrets-b4f32543-e34e-4ec5-a6fc-06442a1f93c7 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:03:36.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7338" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":245,"skipped":4202,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:03:36.496: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename replication-controller +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-346 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:03:36.658: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Feb 12 11:03:38.723: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:03:39.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-346" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":311,"completed":246,"skipped":4212,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:03:39.764: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2061 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating all guestbook components +Feb 12 11:03:39.930: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Feb 12 11:03:39.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 create -f -' +Feb 12 11:03:40.640: INFO: stderr: "" +Feb 12 11:03:40.640: INFO: stdout: "service/agnhost-replica created\n" +Feb 12 11:03:40.640: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Feb 12 11:03:40.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 create -f -' +Feb 12 11:03:40.980: INFO: stderr: "" +Feb 12 11:03:40.980: INFO: stdout: "service/agnhost-primary created\n" +Feb 12 11:03:40.980: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Feb 12 11:03:40.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 create -f -' +Feb 12 11:03:41.341: INFO: stderr: "" +Feb 12 11:03:41.341: INFO: stdout: "service/frontend created\n" +Feb 12 11:03:41.342: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: k8s.gcr.io/e2e-test-images/agnhost:2.21 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Feb 12 11:03:41.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 create -f -' +Feb 12 11:03:41.690: INFO: stderr: "" +Feb 12 11:03:41.690: INFO: stdout: "deployment.apps/frontend created\n" +Feb 12 11:03:41.691: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: k8s.gcr.io/e2e-test-images/agnhost:2.21 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Feb 12 11:03:41.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 create -f -' +Feb 12 11:03:42.134: INFO: stderr: "" +Feb 12 11:03:42.134: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Feb 12 11:03:42.135: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: k8s.gcr.io/e2e-test-images/agnhost:2.21 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Feb 12 11:03:42.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 create -f -' +Feb 12 11:03:42.707: INFO: stderr: "" +Feb 12 11:03:42.707: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app +Feb 12 11:03:42.708: INFO: Waiting for all frontend pods to be Running. +Feb 12 11:03:47.758: INFO: Waiting for frontend to serve content. +Feb 12 11:03:47.776: INFO: Trying to add a new entry to the guestbook. +Feb 12 11:03:47.787: INFO: Verifying that added entry can be retrieved. +Feb 12 11:03:47.799: INFO: Failed to get response from guestbook. err: , response: {"data":""} +STEP: using delete to clean up resources +Feb 12 11:03:52.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 delete --grace-period=0 --force -f -' +Feb 12 11:03:52.983: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Feb 12 11:03:52.983: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources +Feb 12 11:03:52.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 delete --grace-period=0 --force -f -' +Feb 12 11:03:53.162: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Feb 12 11:03:53.162: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Feb 12 11:03:53.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 delete --grace-period=0 --force -f -' +Feb 12 11:03:53.358: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Feb 12 11:03:53.358: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Feb 12 11:03:53.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 delete --grace-period=0 --force -f -' +Feb 12 11:03:53.518: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Feb 12 11:03:53.518: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Feb 12 11:03:53.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 delete --grace-period=0 --force -f -' +Feb 12 11:03:53.686: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Feb 12 11:03:53.686: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Feb 12 11:03:53.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2061 delete --grace-period=0 --force -f -' +Feb 12 11:03:53.839: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Feb 12 11:03:53.839: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:03:53.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2061" for this suite. 
+ +• [SLOW TEST:14.090 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Guestbook application + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":311,"completed":247,"skipped":4253,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:03:53.856: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5130 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 11:03:55.152: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 11:03:57.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724635, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724635, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724635, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748724635, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 11:04:00.207: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API +STEP: create a configmap that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:04:00.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5130" for this suite. +STEP: Destroying namespace "webhook-5130-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:6.496 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":311,"completed":248,"skipped":4275,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:04:00.352: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-237 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:04:00.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-237 create -f -' +Feb 12 11:04:00.903: INFO: stderr: "" +Feb 12 11:04:00.903: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Feb 12 11:04:00.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-237 create -f -' +Feb 12 11:04:01.244: INFO: stderr: "" +Feb 12 11:04:01.244: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Feb 12 11:04:02.258: INFO: Selector matched 1 pods for map[app:agnhost] +Feb 12 11:04:02.258: INFO: Found 0 / 1 +Feb 12 11:04:03.287: INFO: Selector matched 1 pods for map[app:agnhost] +Feb 12 11:04:03.287: INFO: Found 1 / 1 +Feb 12 11:04:03.287: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Feb 12 11:04:03.300: INFO: Selector matched 1 pods for map[app:agnhost] +Feb 12 11:04:03.300: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Feb 12 11:04:03.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-237 describe pod agnhost-primary-gq7nc' +Feb 12 11:04:03.522: INFO: stderr: "" +Feb 12 11:04:03.522: INFO: stdout: "Name: agnhost-primary-gq7nc\nNamespace: kubectl-237\nPriority: 0\nNode: k8s-calico-coreos-yo5lpoxhpdlk-node-1/10.0.0.234\nStart Time: Fri, 12 Feb 2021 11:04:00 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/podIP: 10.100.45.31/32\n cni.projectcalico.org/podIPs: 10.100.45.31/32\n kubernetes.io/psp: e2e-test-privileged-psp\nStatus: Running\nIP: 10.100.45.31\nIPs:\n IP: 10.100.45.31\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://71c235da8ea1401b955c0e411b4b74e3ed27622482b566e815b90bc74acbf10e\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 12 Feb 2021 11:04:02 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-tx7n4 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-tx7n4:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-tx7n4\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-237/agnhost-primary-gq7nc to k8s-calico-coreos-yo5lpoxhpdlk-node-1\n Normal Pulled 1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" +Feb 12 11:04:03.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-237 describe rc agnhost-primary' +Feb 12 11:04:03.711: INFO: stderr: "" +Feb 12 11:04:03.711: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-237\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-gq7nc\n" +Feb 12 11:04:03.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-237 describe service agnhost-primary' +Feb 12 11:04:03.891: INFO: stderr: "" +Feb 12 11:04:03.891: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-237\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: \nIP: 10.254.115.245\nIPs: 10.254.115.245\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.100.45.31:6379\nSession Affinity: None\nEvents: \n" +Feb 12 11:04:03.896: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-237 describe node k8s-calico-coreos-yo5lpoxhpdlk-master-0' +Feb 12 11:04:04.095: INFO: stderr: "" +Feb 12 11:04:04.096: INFO: stdout: "Name: k8s-calico-coreos-yo5lpoxhpdlk-master-0\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=ds4G\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=RegionOne\n failure-domain.beta.kubernetes.io/zone=nova\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k8s-calico-coreos-yo5lpoxhpdlk-master-0\n kubernetes.io/os=linux\n magnum.openstack.org/nodegroup=default-master\n magnum.openstack.org/role=master\n node-role.kubernetes.io/master=\n node.kubernetes.io/instance-type=ds4G\n topology.kubernetes.io/region=RegionOne\n topology.kubernetes.io/zone=nova\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.0.0.174/24\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 09 Feb 2021 10:08:09 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: k8s-calico-coreos-yo5lpoxhpdlk-master-0\n AcquireTime: \n RenewTime: Fri, 12 Feb 2021 11:04:02 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Tue, 09 Feb 2021 10:09:01 +0000 Tue, 09 Feb 2021 10:09:01 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Fri, 12 Feb 2021 11:02:21 +0000 Tue, 09 Feb 2021 10:08:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 12 Feb 2021 11:02:21 +0000 Tue, 09 Feb 2021 10:08:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 12 Feb 2021 11:02:21 +0000 Tue, 09 Feb 2021 10:08:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 12 Feb 2021 11:02:21 +0000 Tue, 09 Feb 2021 10:09:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.0.0.174\n ExternalIP: 172.24.4.56\n Hostname: k8s-calico-coreos-yo5lpoxhpdlk-master-0\nCapacity:\n cpu: 4\n ephemeral-storage: 20435948Ki\n hugepages-2Mi: 0\n memory: 4019120Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18833769646\n hugepages-2Mi: 0\n memory: 3916720Ki\n pods: 110\nSystem Info:\n Machine ID: c2240ae608cd4b36ab56f7267329bfea\n System UUID: c2240ae6-08cd-4b36-ab56-f7267329bfea\n Boot ID: ffdf5bf3-934c-443a-b3b8-2f965cbde653\n Kernel Version: 5.10.12-200.fc33.x86_64\n OS Image: Fedora CoreOS 33.20210117.3.2\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://19.3.13\n Kubelet Version: v1.20.2\n Kube-Proxy Version: v1.20.2\nPodCIDR: 10.100.0.0/24\nPodCIDRs: 10.100.0.0/24\nProviderID: openstack:///c2240ae6-08cd-4b36-ab56-f7267329bfea\nNon-terminated Pods: (10 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-kube-controllers-64f65c6c55-dsdw2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d\n kube-system calico-node-5cjs2 250m (6%) 0 (0%) 0 (0%) 0 (0%) 3d\n kube-system coredns-577cdd4fd7-bllk9 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 3d\n kube-system coredns-577cdd4fd7-cghb7 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 3d\n kube-system csi-cinder-controllerplugin-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d\n kube-system dashboard-metrics-scraper-68db445878-lqr6x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
3d\n kube-system k8s-keystone-auth-hfcn9 200m (5%) 0 (0%) 0 (0%) 0 (0%) 3d\n kube-system kubernetes-dashboard-69f6bf9476-trshc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d\n kube-system openstack-cloud-controller-manager-c2zgh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d\n sonobuoy sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-pmjsz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 96m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (16%) 0 (0%)\n memory 140Mi (3%) 340Mi (8%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" +Feb 12 11:04:04.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-237 describe namespace kubectl-237' +Feb 12 11:04:04.312: INFO: stderr: "" +Feb 12 11:04:04.312: INFO: stdout: "Name: kubectl-237\nLabels: e2e-framework=kubectl\n e2e-run=0d107ceb-b101-441e-983d-10a9dcbe166d\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:04:04.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-237" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":311,"completed":249,"skipped":4302,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:04:04.330: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-5732 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +STEP: Creating hostNetwork=true pod +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Feb 12 11:04:12.615: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5732 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:04:12.615: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:04:12.922: INFO: Exec stderr: "" +Feb 12 11:04:12.922: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5732 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:04:12.922: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:04:13.154: INFO: Exec stderr: "" +Feb 12 11:04:13.154: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-5732 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:04:13.154: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:04:13.384: INFO: Exec stderr: "" +Feb 12 11:04:13.384: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5732 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:04:13.384: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:04:13.677: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Feb 12 11:04:13.677: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5732 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:04:13.677: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:04:13.901: INFO: Exec stderr: "" +Feb 12 11:04:13.901: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5732 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:04:13.901: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:04:14.160: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Feb 12 11:04:14.160: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5732 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:04:14.160: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:04:14.477: INFO: Exec stderr: "" +Feb 12 11:04:14.477: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5732 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:04:14.477: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:04:14.722: INFO: Exec stderr: "" +Feb 12 11:04:14.722: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5732 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:04:14.722: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:04:14.979: INFO: Exec stderr: "" +Feb 12 11:04:14.979: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5732 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:04:14.979: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:04:15.211: INFO: Exec stderr: "" +[AfterEach] [k8s.io] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:04:15.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-5732" for this suite. 
+ +• [SLOW TEST:10.905 seconds] +[k8s.io] KubeletManagedEtcHosts +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":250,"skipped":4327,"failed":0} +S +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:04:15.235: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename discovery +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in discovery-7988 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 +STEP: Setting up server cert +[It] should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:04:16.317: INFO: Checking APIGroup: apiregistration.k8s.io +Feb 12 11:04:16.320: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Feb 12 11:04:16.321: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.321: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Feb 12 11:04:16.321: INFO: Checking APIGroup: apps +Feb 12 11:04:16.325: INFO: PreferredVersion.GroupVersion: apps/v1 +Feb 12 11:04:16.325: INFO: Versions found [{apps/v1 v1}] +Feb 12 11:04:16.325: INFO: apps/v1 matches apps/v1 +Feb 12 11:04:16.325: INFO: Checking APIGroup: events.k8s.io +Feb 12 11:04:16.328: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Feb 12 11:04:16.328: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.328: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Feb 12 11:04:16.328: INFO: Checking APIGroup: authentication.k8s.io +Feb 12 11:04:16.332: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Feb 12 11:04:16.332: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.332: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Feb 12 11:04:16.332: INFO: Checking APIGroup: authorization.k8s.io +Feb 12 11:04:16.335: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Feb 12 11:04:16.335: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.335: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Feb 12 11:04:16.335: INFO: Checking APIGroup: autoscaling +Feb 12 11:04:16.338: INFO: PreferredVersion.GroupVersion: autoscaling/v1 +Feb 12 11:04:16.338: INFO: Versions found [{autoscaling/v1 v1} 
{autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] +Feb 12 11:04:16.338: INFO: autoscaling/v1 matches autoscaling/v1 +Feb 12 11:04:16.338: INFO: Checking APIGroup: batch +Feb 12 11:04:16.342: INFO: PreferredVersion.GroupVersion: batch/v1 +Feb 12 11:04:16.342: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1} {batch/v2alpha1 v2alpha1}] +Feb 12 11:04:16.342: INFO: batch/v1 matches batch/v1 +Feb 12 11:04:16.342: INFO: Checking APIGroup: certificates.k8s.io +Feb 12 11:04:16.345: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Feb 12 11:04:16.345: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.345: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Feb 12 11:04:16.345: INFO: Checking APIGroup: networking.k8s.io +Feb 12 11:04:16.348: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Feb 12 11:04:16.348: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.349: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Feb 12 11:04:16.349: INFO: Checking APIGroup: extensions +Feb 12 11:04:16.352: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 +Feb 12 11:04:16.352: INFO: Versions found [{extensions/v1beta1 v1beta1}] +Feb 12 11:04:16.352: INFO: extensions/v1beta1 matches extensions/v1beta1 +Feb 12 11:04:16.352: INFO: Checking APIGroup: policy +Feb 12 11:04:16.356: INFO: PreferredVersion.GroupVersion: policy/v1beta1 +Feb 12 11:04:16.356: INFO: Versions found [{policy/v1beta1 v1beta1}] +Feb 12 11:04:16.356: INFO: policy/v1beta1 matches policy/v1beta1 +Feb 12 11:04:16.356: INFO: Checking APIGroup: rbac.authorization.k8s.io +Feb 12 11:04:16.360: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Feb 12 11:04:16.360: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1} {rbac.authorization.k8s.io/v1alpha1 v1alpha1}] +Feb 12 11:04:16.360: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Feb 12 11:04:16.360: INFO: Checking APIGroup: storage.k8s.io +Feb 12 11:04:16.364: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Feb 12 11:04:16.364: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1} {storage.k8s.io/v1alpha1 v1alpha1}] +Feb 12 11:04:16.364: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Feb 12 11:04:16.364: INFO: Checking APIGroup: admissionregistration.k8s.io +Feb 12 11:04:16.368: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Feb 12 11:04:16.368: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.368: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Feb 12 11:04:16.368: INFO: Checking APIGroup: apiextensions.k8s.io +Feb 12 11:04:16.371: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Feb 12 11:04:16.371: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.371: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Feb 12 11:04:16.371: INFO: Checking APIGroup: scheduling.k8s.io +Feb 12 11:04:16.376: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Feb 12 11:04:16.376: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1} {scheduling.k8s.io/v1alpha1 v1alpha1}] +Feb 12 11:04:16.376: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Feb 12 11:04:16.376: INFO: Checking 
APIGroup: coordination.k8s.io +Feb 12 11:04:16.379: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Feb 12 11:04:16.379: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.379: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Feb 12 11:04:16.380: INFO: Checking APIGroup: node.k8s.io +Feb 12 11:04:16.383: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Feb 12 11:04:16.383: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1} {node.k8s.io/v1alpha1 v1alpha1}] +Feb 12 11:04:16.383: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Feb 12 11:04:16.383: INFO: Checking APIGroup: discovery.k8s.io +Feb 12 11:04:16.388: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 +Feb 12 11:04:16.388: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.388: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 +Feb 12 11:04:16.388: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Feb 12 11:04:16.392: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 +Feb 12 11:04:16.392: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1} {flowcontrol.apiserver.k8s.io/v1alpha1 v1alpha1}] +Feb 12 11:04:16.392: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 +Feb 12 11:04:16.392: INFO: Checking APIGroup: internal.apiserver.k8s.io +Feb 12 11:04:16.396: INFO: PreferredVersion.GroupVersion: internal.apiserver.k8s.io/v1alpha1 +Feb 12 11:04:16.396: INFO: Versions found [{internal.apiserver.k8s.io/v1alpha1 v1alpha1}] +Feb 12 11:04:16.396: INFO: internal.apiserver.k8s.io/v1alpha1 matches internal.apiserver.k8s.io/v1alpha1 +Feb 12 11:04:16.397: INFO: Checking APIGroup: crd.projectcalico.org +Feb 12 11:04:16.400: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 +Feb 12 11:04:16.400: INFO: Versions found [{crd.projectcalico.org/v1 v1}] +Feb 12 11:04:16.400: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 +Feb 12 11:04:16.400: INFO: Checking APIGroup: snapshot.storage.k8s.io +Feb 12 11:04:16.404: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1alpha1 +Feb 12 11:04:16.404: INFO: Versions found [{snapshot.storage.k8s.io/v1alpha1 v1alpha1}] +Feb 12 11:04:16.404: INFO: snapshot.storage.k8s.io/v1alpha1 matches snapshot.storage.k8s.io/v1alpha1 +Feb 12 11:04:16.404: INFO: Checking APIGroup: metrics.k8s.io +Feb 12 11:04:16.407: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Feb 12 11:04:16.407: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Feb 12 11:04:16.407: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:04:16.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-7988" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":311,"completed":251,"skipped":4328,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:04:16.422: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9045 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating an Endpoint +STEP: waiting for available Endpoint +STEP: listing all Endpoints +STEP: updating the Endpoint +STEP: fetching the Endpoint +STEP: patching the Endpoint +STEP: fetching the Endpoint +STEP: deleting the Endpoint by Collection +STEP: waiting for Endpoint deletion +STEP: fetching the Endpoint +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:04:16.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9045" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":311,"completed":252,"skipped":4333,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:04:16.677: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2869 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: running the image 10.60.253.37/magnum/httpd:2.4.38-alpine +Feb 12 11:04:16.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2869 run e2e-test-httpd-pod --image=10.60.253.37/magnum/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' +Feb 12 11:04:17.040: INFO: stderr: "" +Feb 12 11:04:17.040: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run +Feb 12 11:04:17.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2869 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "10.60.253.37/magnum/busybox:1.29"}]}} --dry-run=server' +Feb 12 11:04:17.410: INFO: stderr: "" +Feb 12 11:04:17.410: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image 10.60.253.37/magnum/httpd:2.4.38-alpine +Feb 12 11:04:17.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2869 delete pods e2e-test-httpd-pod' +Feb 12 11:05:12.570: INFO: stderr: "" +Feb 12 11:05:12.570: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:05:12.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2869" for this suite. 
+ +• [SLOW TEST:55.931 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl server-side dry-run + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909 + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":311,"completed":253,"skipped":4377,"failed":0} +SS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:05:12.609: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9342 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-9342 +STEP: changing the ExternalName service to type=NodePort +STEP: creating replication controller externalname-service in namespace services-9342 +I0212 11:05:12.806815 22 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9342, replica count: 2 +I0212 11:05:15.858031 22 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 11:05:15.858: INFO: Creating new exec pod +Feb 12 11:05:18.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-9342 exec execpod2prgg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' +Feb 12 11:05:19.364: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Feb 12 11:05:19.364: INFO: stdout: "" +Feb 12 11:05:19.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-9342 exec execpod2prgg -- /bin/sh -x -c nc -zv -t -w 2 10.254.60.212 80' +Feb 12 11:05:19.799: INFO: stderr: "+ nc -zv -t -w 2 10.254.60.212 80\nConnection to 10.254.60.212 80 port [tcp/http] succeeded!\n" +Feb 12 11:05:19.799: INFO: stdout: "" +Feb 12 11:05:19.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-9342 exec execpod2prgg -- /bin/sh -x -c nc -zv -t -w 2 10.0.0.115 31943' +Feb 12 11:05:20.248: INFO: stderr: "+ nc -zv -t -w 2 10.0.0.115 31943\nConnection to 10.0.0.115 31943 port [tcp/31943] succeeded!\n" +Feb 12 11:05:20.248: INFO: stdout: "" 
+Feb 12 11:05:20.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-9342 exec execpod2prgg -- /bin/sh -x -c nc -zv -t -w 2 10.0.0.234 31943' +Feb 12 11:05:20.689: INFO: stderr: "+ nc -zv -t -w 2 10.0.0.234 31943\nConnection to 10.0.0.234 31943 port [tcp/31943] succeeded!\n" +Feb 12 11:05:20.689: INFO: stdout: "" +Feb 12 11:05:20.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-9342 exec execpod2prgg -- /bin/sh -x -c nc -zv -t -w 2 172.24.4.246 31943' +Feb 12 11:05:21.119: INFO: stderr: "+ nc -zv -t -w 2 172.24.4.246 31943\nConnection to 172.24.4.246 31943 port [tcp/31943] succeeded!\n" +Feb 12 11:05:21.119: INFO: stdout: "" +Feb 12 11:05:21.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-9342 exec execpod2prgg -- /bin/sh -x -c nc -zv -t -w 2 172.24.4.157 31943' +Feb 12 11:05:21.525: INFO: stderr: "+ nc -zv -t -w 2 172.24.4.157 31943\nConnection to 172.24.4.157 31943 port [tcp/31943] succeeded!\n" +Feb 12 11:05:21.525: INFO: stdout: "" +Feb 12 11:05:21.525: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:05:21.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9342" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:8.969 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":311,"completed":254,"skipped":4379,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:05:21.582: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename svcaccounts +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-1845 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test service account token: +Feb 12 11:05:21.815: INFO: Waiting up to 5m0s for pod "test-pod-af80b7bf-3425-4f72-a35c-9a663b2e3cc0" in namespace "svcaccounts-1845" to be "Succeeded or Failed" +Feb 12 11:05:21.827: INFO: Pod "test-pod-af80b7bf-3425-4f72-a35c-9a663b2e3cc0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.953776ms +Feb 12 11:05:23.834: INFO: Pod "test-pod-af80b7bf-3425-4f72-a35c-9a663b2e3cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018937447s +Feb 12 11:05:25.846: INFO: Pod "test-pod-af80b7bf-3425-4f72-a35c-9a663b2e3cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030414954s +STEP: Saw pod success +Feb 12 11:05:25.846: INFO: Pod "test-pod-af80b7bf-3425-4f72-a35c-9a663b2e3cc0" satisfied condition "Succeeded or Failed" +Feb 12 11:05:25.851: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod test-pod-af80b7bf-3425-4f72-a35c-9a663b2e3cc0 container agnhost-container: +STEP: delete the pod +Feb 12 11:05:25.943: INFO: Waiting for pod test-pod-af80b7bf-3425-4f72-a35c-9a663b2e3cc0 to disappear +Feb 12 11:05:25.953: INFO: Pod test-pod-af80b7bf-3425-4f72-a35c-9a663b2e3cc0 no longer exists +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:05:25.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-1845" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":311,"completed":255,"skipped":4395,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:05:25.968: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-943 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Feb 12 11:05:26.141: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 9198a018-4aa0-487a-b27d-d0dfb84b5b4a 591057 0 2021-02-12 11:05:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-12 11:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 11:05:26.142: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 9198a018-4aa0-487a-b27d-d0dfb84b5b4a 591057 0 2021-02-12 11:05:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-12 11:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Feb 
12 11:05:36.168: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 9198a018-4aa0-487a-b27d-d0dfb84b5b4a 591128 0 2021-02-12 11:05:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-12 11:05:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 11:05:36.168: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 9198a018-4aa0-487a-b27d-d0dfb84b5b4a 591128 0 2021-02-12 11:05:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-12 11:05:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Feb 12 11:05:46.187: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 9198a018-4aa0-487a-b27d-d0dfb84b5b4a 591148 0 2021-02-12 11:05:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-12 11:05:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 11:05:46.187: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 9198a018-4aa0-487a-b27d-d0dfb84b5b4a 591148 0 2021-02-12 11:05:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-12 11:05:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Feb 12 11:05:56.208: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 9198a018-4aa0-487a-b27d-d0dfb84b5b4a 591168 0 2021-02-12 11:05:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-12 11:05:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 11:05:56.208: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 9198a018-4aa0-487a-b27d-d0dfb84b5b4a 591168 0 2021-02-12 11:05:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-12 11:05:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Feb 12 11:06:06.228: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-943 51fde53a-aab9-4fe1-967a-fe1a161c9177 591188 0 2021-02-12 11:06:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-12 11:06:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 11:06:06.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-943 51fde53a-aab9-4fe1-967a-fe1a161c9177 591188 0 2021-02-12 11:06:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-12 11:06:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Feb 12 11:06:16.253: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-943 51fde53a-aab9-4fe1-967a-fe1a161c9177 591208 0 2021-02-12 11:06:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-12 11:06:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Feb 12 11:06:16.254: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-943 51fde53a-aab9-4fe1-967a-fe1a161c9177 591208 0 2021-02-12 11:06:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-12 11:06:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:06:26.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-943" for this suite. 
+ +• [SLOW TEST:60.315 seconds] +[sig-api-machinery] Watchers +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":311,"completed":256,"skipped":4437,"failed":0} +SS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:06:26.284: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4645 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: validating cluster-info +Feb 12 11:06:26.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-4645 cluster-info' +Feb 12 11:06:26.616: INFO: stderr: "" +Feb 12 11:06:26.616: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.254.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:06:26.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4645" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":311,"completed":257,"skipped":4439,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:06:26.632: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1386 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 11:06:26.814: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90766443-e0ab-4fc3-ae2e-2b35639e9921" in namespace "downward-api-1386" to be "Succeeded or Failed" +Feb 12 11:06:26.818: INFO: Pod "downwardapi-volume-90766443-e0ab-4fc3-ae2e-2b35639e9921": Phase="Pending", Reason="", readiness=false. Elapsed: 3.764217ms +Feb 12 11:06:28.830: INFO: Pod "downwardapi-volume-90766443-e0ab-4fc3-ae2e-2b35639e9921": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015243186s +STEP: Saw pod success +Feb 12 11:06:28.830: INFO: Pod "downwardapi-volume-90766443-e0ab-4fc3-ae2e-2b35639e9921" satisfied condition "Succeeded or Failed" +Feb 12 11:06:28.833: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-90766443-e0ab-4fc3-ae2e-2b35639e9921 container client-container: +STEP: delete the pod +Feb 12 11:06:28.923: INFO: Waiting for pod downwardapi-volume-90766443-e0ab-4fc3-ae2e-2b35639e9921 to disappear +Feb 12 11:06:28.929: INFO: Pod downwardapi-volume-90766443-e0ab-4fc3-ae2e-2b35639e9921 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:06:28.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1386" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":258,"skipped":4475,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:06:28.942: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2022 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating service in namespace services-2022 +Feb 12 11:06:31.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Feb 12 11:06:31.671: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Feb 12 11:06:31.671: INFO: stdout: "iptables" +Feb 12 11:06:31.671: INFO: proxyMode: iptables +Feb 12 11:06:31.689: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Feb 12 11:06:31.693: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-nodeport-timeout in namespace services-2022 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-2022 +I0212 11:06:31.713672 22 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-2022, replica count: 3 +I0212 11:06:34.766153 22 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 11:06:34.783: INFO: Creating new exec pod +Feb 12 11:06:39.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec execpod-affinityhrxv5 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' +Feb 12 11:06:40.258: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Feb 12 11:06:40.258: INFO: stdout: "" +Feb 12 11:06:40.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec execpod-affinityhrxv5 -- /bin/sh -x -c nc -zv -t -w 2 10.254.29.205 80' +Feb 12 11:06:40.663: INFO: stderr: "+ nc -zv -t -w 2 10.254.29.205 80\nConnection to 10.254.29.205 80 port [tcp/http] succeeded!\n" +Feb 12 11:06:40.663: INFO: stdout: "" +Feb 12 11:06:40.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec execpod-affinityhrxv5 -- /bin/sh -x -c nc -zv -t -w 2 10.0.0.115 30651' +Feb 12 11:06:41.070: INFO: 
stderr: "+ nc -zv -t -w 2 10.0.0.115 30651\nConnection to 10.0.0.115 30651 port [tcp/30651] succeeded!\n" +Feb 12 11:06:41.070: INFO: stdout: "" +Feb 12 11:06:41.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec execpod-affinityhrxv5 -- /bin/sh -x -c nc -zv -t -w 2 10.0.0.234 30651' +Feb 12 11:06:41.468: INFO: stderr: "+ nc -zv -t -w 2 10.0.0.234 30651\nConnection to 10.0.0.234 30651 port [tcp/30651] succeeded!\n" +Feb 12 11:06:41.468: INFO: stdout: "" +Feb 12 11:06:41.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec execpod-affinityhrxv5 -- /bin/sh -x -c nc -zv -t -w 2 172.24.4.246 30651' +Feb 12 11:06:41.892: INFO: stderr: "+ nc -zv -t -w 2 172.24.4.246 30651\nConnection to 172.24.4.246 30651 port [tcp/30651] succeeded!\n" +Feb 12 11:06:41.892: INFO: stdout: "" +Feb 12 11:06:41.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec execpod-affinityhrxv5 -- /bin/sh -x -c nc -zv -t -w 2 172.24.4.157 30651' +Feb 12 11:06:42.383: INFO: stderr: "+ nc -zv -t -w 2 172.24.4.157 30651\nConnection to 172.24.4.157 30651 port [tcp/30651] succeeded!\n" +Feb 12 11:06:42.383: INFO: stdout: "" +Feb 12 11:06:42.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec execpod-affinityhrxv5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.0.0.115:30651/ ; done' +Feb 12 11:06:42.906: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n" +Feb 12 11:06:42.906: INFO: stdout: "\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr\naffinity-nodeport-timeout-f6mcr" +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from 
host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Received response from host: affinity-nodeport-timeout-f6mcr +Feb 12 11:06:42.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec execpod-affinityhrxv5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.0.0.115:30651/' +Feb 12 11:06:43.369: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n" +Feb 12 11:06:43.369: INFO: stdout: "affinity-nodeport-timeout-f6mcr" +Feb 12 11:07:03.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec execpod-affinityhrxv5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.0.0.115:30651/' +Feb 12 11:07:03.864: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n" +Feb 12 11:07:03.864: INFO: stdout: "affinity-nodeport-timeout-f6mcr" +Feb 12 11:07:23.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-2022 exec execpod-affinityhrxv5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.0.0.115:30651/' +Feb 12 11:07:24.329: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.0.0.115:30651/\n" +Feb 12 11:07:24.329: INFO: stdout: "affinity-nodeport-timeout-ckfrz" +Feb 12 11:07:24.329: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-2022, will wait for the garbage collector to delete the pods +Feb 12 11:07:24.418: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 12.437409ms +Feb 12 11:07:25.419: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 1.000437222s +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:08:12.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2022" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:103.730 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":311,"completed":259,"skipped":4508,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:08:12.673: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9547 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating secret with name secret-test-map-b36b5cf3-747f-4bc3-b15a-6e6ae0bcfa91 +STEP: Creating a pod to test consume secrets +Feb 12 11:08:12.908: INFO: Waiting up to 5m0s for pod "pod-secrets-505cd0e3-db86-4146-908a-94cab4fecdfc" in namespace "secrets-9547" to be "Succeeded or Failed" +Feb 12 11:08:12.915: INFO: Pod "pod-secrets-505cd0e3-db86-4146-908a-94cab4fecdfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.492442ms +Feb 12 11:08:14.924: INFO: Pod "pod-secrets-505cd0e3-db86-4146-908a-94cab4fecdfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016251048s +Feb 12 11:08:16.937: INFO: Pod "pod-secrets-505cd0e3-db86-4146-908a-94cab4fecdfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028903959s +STEP: Saw pod success +Feb 12 11:08:16.937: INFO: Pod "pod-secrets-505cd0e3-db86-4146-908a-94cab4fecdfc" satisfied condition "Succeeded or Failed" +Feb 12 11:08:16.941: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-secrets-505cd0e3-db86-4146-908a-94cab4fecdfc container secret-volume-test: +STEP: delete the pod +Feb 12 11:08:17.022: INFO: Waiting for pod pod-secrets-505cd0e3-db86-4146-908a-94cab4fecdfc to disappear +Feb 12 11:08:17.031: INFO: Pod pod-secrets-505cd0e3-db86-4146-908a-94cab4fecdfc no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:08:17.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9547" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":260,"skipped":4524,"failed":0} +SSSSSSSSS +------------------------------ +[k8s.io] Lease + lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:08:17.046: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename lease-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in lease-test-6824 +STEP: Waiting for a default service account to be provisioned in namespace +[It] lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [k8s.io] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:08:17.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-6824" for this suite. +•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":311,"completed":261,"skipped":4533,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:08:17.308: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-7135 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicationController +STEP: Ensuring resource quota status captures replication controller creation +STEP: Deleting a ReplicationController +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:08:28.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7135" for this suite. + +• [SLOW TEST:11.233 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":311,"completed":262,"skipped":4545,"failed":0} +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:08:28.541: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-2182 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the pod with lifecycle hook +STEP: delete the pod with lifecycle hook +Feb 12 11:08:32.784: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Feb 12 11:08:32.791: INFO: Pod pod-with-prestop-http-hook still exists +Feb 12 11:08:34.791: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Feb 12 11:08:34.798: INFO: Pod pod-with-prestop-http-hook still exists +Feb 12 11:08:36.791: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Feb 12 11:08:36.802: INFO: Pod pod-with-prestop-http-hook still exists +Feb 12 11:08:38.791: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Feb 12 11:08:38.809: INFO: Pod pod-with-prestop-http-hook still exists +Feb 12 11:08:40.791: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Feb 12 11:08:40.806: INFO: Pod pod-with-prestop-http-hook still exists +Feb 12 11:08:42.792: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Feb 12 11:08:42.812: INFO: Pod pod-with-prestop-http-hook still exists +Feb 12 11:08:44.791: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Feb 12 11:08:44.801: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:08:44.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-2182" for this suite. 
+ +• [SLOW TEST:16.292 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":311,"completed":263,"skipped":4545,"failed":0} +SSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:08:44.834: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-210 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating service in namespace services-210 +Feb 12 11:08:47.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-210 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Feb 12 11:08:47.494: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Feb 12 11:08:47.494: INFO: stdout: "iptables" +Feb 12 11:08:47.494: INFO: proxyMode: iptables +Feb 12 11:08:47.518: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Feb 12 11:08:47.522: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-clusterip-timeout in namespace services-210 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-210 +I0212 11:08:47.553520 22 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-210, replica count: 3 +I0212 11:08:50.604310 22 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 11:08:50.619: INFO: Creating new exec pod +Feb 12 11:08:53.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-210 exec execpod-affinityrj4kf -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' +Feb 12 11:08:54.090: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Feb 12 11:08:54.090: INFO: stdout: "" +Feb 12 
11:08:54.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-210 exec execpod-affinityrj4kf -- /bin/sh -x -c nc -zv -t -w 2 10.254.37.52 80' +Feb 12 11:08:54.510: INFO: stderr: "+ nc -zv -t -w 2 10.254.37.52 80\nConnection to 10.254.37.52 80 port [tcp/http] succeeded!\n" +Feb 12 11:08:54.511: INFO: stdout: "" +Feb 12 11:08:54.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-210 exec execpod-affinityrj4kf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.254.37.52:80/ ; done' +Feb 12 11:08:55.052: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n" +Feb 12 11:08:55.052: INFO: stdout: "\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm\naffinity-clusterip-timeout-4xqnm" +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: 
affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Received response from host: affinity-clusterip-timeout-4xqnm +Feb 12 11:08:55.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-210 exec execpod-affinityrj4kf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.254.37.52:80/' +Feb 12 11:08:55.486: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n" +Feb 12 11:08:55.486: INFO: stdout: "affinity-clusterip-timeout-4xqnm" +Feb 12 11:09:15.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-210 exec execpod-affinityrj4kf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.254.37.52:80/' +Feb 12 11:09:16.001: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.254.37.52:80/\n" +Feb 12 11:09:16.001: INFO: stdout: "affinity-clusterip-timeout-zp6mq" +Feb 12 11:09:16.001: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-210, will wait for the garbage collector to delete the pods +Feb 12 11:09:16.087: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 9.244519ms +Feb 12 11:09:19.588: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 3.500944494s +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:10:12.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-210" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:87.899 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":311,"completed":264,"skipped":4551,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:10:12.738: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9301 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating 
configMap with name configmap-projected-all-test-volume-f07b00ac-9f8c-4941-9b63-d9a84476419e +STEP: Creating secret with name secret-projected-all-test-volume-4162de0c-800e-4746-b251-5fee08984a39 +STEP: Creating a pod to test Check all projections for projected volume plugin +Feb 12 11:10:12.926: INFO: Waiting up to 5m0s for pod "projected-volume-30924842-5851-4220-ad88-23cb69224795" in namespace "projected-9301" to be "Succeeded or Failed" +Feb 12 11:10:12.932: INFO: Pod "projected-volume-30924842-5851-4220-ad88-23cb69224795": Phase="Pending", Reason="", readiness=false. Elapsed: 6.383478ms +Feb 12 11:10:14.942: INFO: Pod "projected-volume-30924842-5851-4220-ad88-23cb69224795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016210374s +Feb 12 11:10:16.957: INFO: Pod "projected-volume-30924842-5851-4220-ad88-23cb69224795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031153632s +STEP: Saw pod success +Feb 12 11:10:16.957: INFO: Pod "projected-volume-30924842-5851-4220-ad88-23cb69224795" satisfied condition "Succeeded or Failed" +Feb 12 11:10:16.961: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod projected-volume-30924842-5851-4220-ad88-23cb69224795 container projected-all-volume-test: +STEP: delete the pod +Feb 12 11:10:17.038: INFO: Waiting for pod projected-volume-30924842-5851-4220-ad88-23cb69224795 to disappear +Feb 12 11:10:17.049: INFO: Pod projected-volume-30924842-5851-4220-ad88-23cb69224795 no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:10:17.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9301" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":311,"completed":265,"skipped":4593,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:10:17.060: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5545 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-5545 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-5545 +STEP: creating replication controller externalsvc in namespace services-5545 +I0212 11:10:17.275586 22 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5545, replica count: 2 +I0212 11:10:20.326379 22 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName +Feb 12 11:10:20.382: INFO: Creating new exec pod +Feb 12 11:10:24.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=services-5545 exec execpod4d97s -- /bin/sh -x -c nslookup nodeport-service.services-5545.svc.cluster.local' +Feb 12 11:10:24.877: INFO: stderr: "+ nslookup nodeport-service.services-5545.svc.cluster.local\n" +Feb 12 11:10:24.877: INFO: stdout: "Server:\t\t10.254.0.10\nAddress:\t10.254.0.10#53\n\nnodeport-service.services-5545.svc.cluster.local\tcanonical name = externalsvc.services-5545.svc.cluster.local.\nName:\texternalsvc.services-5545.svc.cluster.local\nAddress: 10.254.224.140\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-5545, will wait for the garbage collector to delete the pods +Feb 12 11:10:24.950: INFO: Deleting ReplicationController externalsvc took: 13.281758ms +Feb 12 11:10:25.951: INFO: Terminating ReplicationController externalsvc pods took: 1.000947374s +Feb 12 11:11:12.702: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:11:12.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5545" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 + +• [SLOW TEST:55.681 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":311,"completed":266,"skipped":4598,"failed":0} +SS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:11:12.742: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename namespaces +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-7705 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a Namespace +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nspatchtest-8c70da31-8fc3-4723-92be-1c3a054bfdc6-6258 +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:11:13.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-7705" for this suite. +STEP: Destroying namespace "nspatchtest-8c70da31-8fc3-4723-92be-1c3a054bfdc6-6258" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":311,"completed":267,"skipped":4600,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:11:13.102: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-7220 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +Feb 12 11:11:19.356: INFO: 0 pods remaining +Feb 12 11:11:19.356: INFO: 0 pods has nil DeletionTimestamp +Feb 12 11:11:19.356: INFO: +STEP: Gathering metrics +W0212 11:11:20.371059 22 metrics_grabber.go:98] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled. +W0212 11:11:20.371149 22 metrics_grabber.go:102] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +W0212 11:11:20.371182 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. +Feb 12 11:11:20.371: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:11:20.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-7220" for this suite. 
+ +• [SLOW TEST:7.362 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":311,"completed":268,"skipped":4604,"failed":0} +[sig-api-machinery] server version + should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:11:20.464: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename server-version +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in server-version-7597 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Request ServerVersion +STEP: Confirm major version +Feb 12 11:11:20.710: INFO: Major version: 1 +STEP: Confirm minor version +Feb 12 11:11:20.710: INFO: cleanMinorVersion: 20 +Feb 12 11:11:20.710: INFO: Minor version: 20 +[AfterEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:11:20.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-7597" for this suite. +•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":311,"completed":269,"skipped":4604,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:11:20.723: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-6474 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 +[It] should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Feb 12 11:11:20.941: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:11:20.947: INFO: Number of nodes with available pods: 0 +Feb 12 11:11:20.948: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:11:21.955: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:11:21.959: INFO: Number of nodes with available pods: 0 +Feb 12 11:11:21.959: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:11:22.962: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:11:22.967: INFO: Number of nodes with available pods: 0 +Feb 12 11:11:22.967: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:11:23.959: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:11:23.963: INFO: Number of nodes with available pods: 2 +Feb 12 11:11:23.964: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Feb 12 11:11:23.988: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:11:23.994: INFO: Number of nodes with available pods: 1 +Feb 12 11:11:23.995: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:11:25.006: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:11:25.010: INFO: Number of nodes with available pods: 1 +Feb 12 11:11:25.010: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:11:26.001: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:11:26.007: INFO: Number of nodes with available pods: 1 +Feb 12 11:11:26.007: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:11:27.010: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:11:27.015: INFO: Number of nodes with available pods: 2 +Feb 12 11:11:27.015: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Wait for the failed daemon pod to be completely deleted. 
+[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6474, will wait for the garbage collector to delete the pods +Feb 12 11:11:27.115: INFO: Deleting DaemonSet.extensions daemon-set took: 13.427703ms +Feb 12 11:11:27.215: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.232287ms +Feb 12 11:12:12.634: INFO: Number of nodes with available pods: 0 +Feb 12 11:12:12.634: INFO: Number of running nodes: 0, number of available pods: 0 +Feb 12 11:12:12.639: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"593025"},"items":null} + +Feb 12 11:12:12.641: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"593025"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:12:12.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6474" for this suite. + +• [SLOW TEST:52.002 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":311,"completed":270,"skipped":4619,"failed":0} +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:12:12.728: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2792 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0666 on node default medium +Feb 12 11:12:12.940: INFO: Waiting up to 5m0s for pod "pod-09bbb049-9acb-4f34-ab03-168aa30659d5" in namespace "emptydir-2792" to be "Succeeded or Failed" +Feb 12 11:12:12.946: INFO: Pod "pod-09bbb049-9acb-4f34-ab03-168aa30659d5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.724255ms +Feb 12 11:12:14.963: INFO: Pod "pod-09bbb049-9acb-4f34-ab03-168aa30659d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023061256s +Feb 12 11:12:16.973: INFO: Pod "pod-09bbb049-9acb-4f34-ab03-168aa30659d5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032493377s +STEP: Saw pod success +Feb 12 11:12:16.973: INFO: Pod "pod-09bbb049-9acb-4f34-ab03-168aa30659d5" satisfied condition "Succeeded or Failed" +Feb 12 11:12:16.977: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-09bbb049-9acb-4f34-ab03-168aa30659d5 container test-container: +STEP: delete the pod +Feb 12 11:12:17.152: INFO: Waiting for pod pod-09bbb049-9acb-4f34-ab03-168aa30659d5 to disappear +Feb 12 11:12:17.156: INFO: Pod pod-09bbb049-9acb-4f34-ab03-168aa30659d5 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:12:17.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2792" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":271,"skipped":4619,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:12:17.174: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-250 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create deployment with httpd image +Feb 12 11:12:17.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-250 create -f -' +Feb 12 11:12:17.771: INFO: stderr: "" +Feb 12 11:12:17.771: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image +Feb 12 11:12:17.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-250 diff -f -' +Feb 12 11:12:18.429: INFO: rc: 1 +Feb 12 11:12:18.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-250 delete -f -' +Feb 12 11:12:18.577: INFO: stderr: "" +Feb 12 11:12:18.577: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:12:18.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-250" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":311,"completed":272,"skipped":4629,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:12:18.617: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pod-network-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-5887 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Performing setup for networking test in namespace pod-network-test-5887 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Feb 12 11:12:18.774: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Feb 12 11:12:18.813: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Feb 12 11:12:20.824: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Feb 12 11:12:22.826: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 11:12:24.826: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 11:12:26.821: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 11:12:28.826: INFO: The status of Pod netserver-0 is Running (Ready = false) +Feb 12 11:12:30.822: INFO: The status of Pod netserver-0 is Running (Ready = true) +Feb 12 11:12:30.829: INFO: The status of Pod netserver-1 is Running (Ready = false) +Feb 12 11:12:32.843: INFO: The status of Pod netserver-1 is Running (Ready = false) +Feb 12 11:12:34.838: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Feb 12 11:12:36.894: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Feb 12 11:12:36.894: INFO: Going to poll 10.100.92.223 on port 8080 at least 0 times, with a maximum of 34 tries before failing +Feb 12 11:12:36.899: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.100.92.223:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5887 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:12:36.899: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:12:37.162: INFO: Found all 1 expected endpoints: [netserver-0] +Feb 12 11:12:37.162: INFO: Going to poll 10.100.45.52 on port 8080 at least 0 times, with a maximum of 34 tries before failing +Feb 12 11:12:37.170: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.100.45.52:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5887 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Feb 12 11:12:37.170: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +Feb 12 11:12:37.407: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:12:37.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-5887" for this suite. + +• [SLOW TEST:18.810 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":273,"skipped":4647,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:12:37.428: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename emptydir +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5862 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test emptydir 0644 on node default medium +Feb 12 11:12:37.613: INFO: Waiting up to 5m0s for pod "pod-c7914d2e-7e0e-4efb-b154-f3388a6c5f27" in namespace "emptydir-5862" to be "Succeeded or Failed" +Feb 12 11:12:37.621: INFO: Pod "pod-c7914d2e-7e0e-4efb-b154-f3388a6c5f27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11597ms +Feb 12 11:12:39.663: INFO: Pod "pod-c7914d2e-7e0e-4efb-b154-f3388a6c5f27": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.050177734s +STEP: Saw pod success +Feb 12 11:12:39.664: INFO: Pod "pod-c7914d2e-7e0e-4efb-b154-f3388a6c5f27" satisfied condition "Succeeded or Failed" +Feb 12 11:12:39.709: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-c7914d2e-7e0e-4efb-b154-f3388a6c5f27 container test-container: +STEP: delete the pod +Feb 12 11:12:39.860: INFO: Waiting for pod pod-c7914d2e-7e0e-4efb-b154-f3388a6c5f27 to disappear +Feb 12 11:12:39.865: INFO: Pod pod-c7914d2e-7e0e-4efb-b154-f3388a6c5f27 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:12:39.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5862" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":274,"skipped":4656,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:12:39.877: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-678 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 +Feb 12 11:12:40.038: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Feb 12 11:12:40.048: INFO: Waiting for terminating namespaces to be deleted... 
+Feb 12 11:12:40.053: INFO: +Logging pods the apiserver thinks is on node k8s-calico-coreos-yo5lpoxhpdlk-node-0 before test +Feb 12 11:12:40.068: INFO: calico-node-kwp5z from kube-system started at 2021-02-09 10:10:52 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.068: INFO: Container calico-node ready: true, restart count 0 +Feb 12 11:12:40.068: INFO: csi-cinder-nodeplugin-dlptl from kube-system started at 2021-02-09 10:11:22 +0000 UTC (2 container statuses recorded) +Feb 12 11:12:40.068: INFO: Container cinder-csi-plugin ready: true, restart count 0 +Feb 12 11:12:40.068: INFO: Container node-driver-registrar ready: true, restart count 0 +Feb 12 11:12:40.068: INFO: kube-dns-autoscaler-69ccc7c7c7-qwdlm from kube-system started at 2021-02-09 13:00:16 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.068: INFO: Container autoscaler ready: true, restart count 0 +Feb 12 11:12:40.068: INFO: magnum-metrics-server-5c48f677d9-9t4sh from kube-system started at 2021-02-09 13:00:16 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.068: INFO: Container metrics-server ready: true, restart count 0 +Feb 12 11:12:40.068: INFO: npd-ktpl7 from kube-system started at 2021-02-09 10:11:22 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.068: INFO: Container node-problem-detector ready: true, restart count 0 +Feb 12 11:12:40.068: INFO: netserver-0 from pod-network-test-5887 started at 2021-02-12 11:12:18 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.068: INFO: Container webserver ready: true, restart count 0 +Feb 12 11:12:40.068: INFO: sonobuoy-e2e-job-49d5db3cb7e540b0 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 11:12:40.068: INFO: Container e2e ready: true, restart count 0 +Feb 12 11:12:40.068: INFO: Container sonobuoy-worker ready: true, restart count 0 +Feb 12 11:12:40.068: INFO: sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-vsns8 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 11:12:40.068: INFO: Container sonobuoy-worker ready: false, restart count 13 +Feb 12 11:12:40.068: INFO: Container systemd-logs ready: true, restart count 0 +Feb 12 11:12:40.068: INFO: +Logging pods the apiserver thinks is on node k8s-calico-coreos-yo5lpoxhpdlk-node-1 before test +Feb 12 11:12:40.089: INFO: calico-node-xf85t from kube-system started at 2021-02-09 10:10:53 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.089: INFO: Container calico-node ready: true, restart count 0 +Feb 12 11:12:40.089: INFO: csi-cinder-nodeplugin-pgnxp from kube-system started at 2021-02-12 09:40:03 +0000 UTC (2 container statuses recorded) +Feb 12 11:12:40.089: INFO: Container cinder-csi-plugin ready: true, restart count 0 +Feb 12 11:12:40.089: INFO: Container node-driver-registrar ready: true, restart count 0 +Feb 12 11:12:40.089: INFO: npd-6phx9 from kube-system started at 2021-02-09 10:11:14 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.089: INFO: Container node-problem-detector ready: true, restart count 0 +Feb 12 11:12:40.089: INFO: httpd-deployment-8576b7d76b-bh8fp from kubectl-250 started at 2021-02-12 11:12:17 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.089: INFO: Container httpd ready: true, restart count 0 +Feb 12 11:12:40.089: INFO: host-test-container-pod from pod-network-test-5887 started at 2021-02-12 11:12:34 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.089: INFO: Container agnhost-container ready: true, restart count 0 +Feb 12 
11:12:40.089: INFO: netserver-1 from pod-network-test-5887 started at 2021-02-12 11:12:18 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.089: INFO: Container webserver ready: true, restart count 0 +Feb 12 11:12:40.089: INFO: test-container-pod from pod-network-test-5887 started at 2021-02-12 11:12:34 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.089: INFO: Container webserver ready: true, restart count 0 +Feb 12 11:12:40.089: INFO: sonobuoy from sonobuoy started at 2021-02-12 09:27:24 +0000 UTC (1 container statuses recorded) +Feb 12 11:12:40.089: INFO: Container kube-sonobuoy ready: true, restart count 0 +Feb 12 11:12:40.089: INFO: sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-ns8g5 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 11:12:40.089: INFO: Container sonobuoy-worker ready: false, restart count 13 +Feb 12 11:12:40.089: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: verifying the node has the label node k8s-calico-coreos-yo5lpoxhpdlk-node-0 +STEP: verifying the node has the label node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +Feb 12 11:12:40.182: INFO: Pod calico-node-kwp5z requesting resource cpu=250m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 +Feb 12 11:12:40.182: INFO: Pod calico-node-xf85t requesting resource cpu=250m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +Feb 12 11:12:40.182: INFO: Pod csi-cinder-nodeplugin-dlptl requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 +Feb 12 11:12:40.182: INFO: Pod csi-cinder-nodeplugin-pgnxp requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +Feb 12 11:12:40.182: INFO: Pod kube-dns-autoscaler-69ccc7c7c7-qwdlm requesting resource cpu=20m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 +Feb 12 11:12:40.182: INFO: Pod magnum-metrics-server-5c48f677d9-9t4sh requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 +Feb 12 11:12:40.182: INFO: Pod npd-6phx9 requesting resource cpu=20m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +Feb 12 11:12:40.182: INFO: Pod npd-ktpl7 requesting resource cpu=20m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 +Feb 12 11:12:40.182: INFO: Pod httpd-deployment-8576b7d76b-bh8fp requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +Feb 12 11:12:40.182: INFO: Pod host-test-container-pod requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +Feb 12 11:12:40.182: INFO: Pod netserver-0 requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 +Feb 12 11:12:40.182: INFO: Pod netserver-1 requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +Feb 12 11:12:40.182: INFO: Pod test-container-pod requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +Feb 12 11:12:40.182: INFO: Pod sonobuoy requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +Feb 12 11:12:40.182: INFO: Pod sonobuoy-e2e-job-49d5db3cb7e540b0 requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 +Feb 12 11:12:40.182: INFO: Pod sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-ns8g5 requesting resource cpu=0m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +Feb 12 11:12:40.182: INFO: Pod sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-vsns8 requesting resource cpu=0m on Node 
k8s-calico-coreos-yo5lpoxhpdlk-node-0 +STEP: Starting Pods to consume most of the cluster CPU. +Feb 12 11:12:40.183: INFO: Creating a pod which consumes cpu=2597m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 +Feb 12 11:12:40.197: INFO: Creating a pod which consumes cpu=2611m on Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +STEP: Creating another pod that requires unavailable amount of CPU. +STEP: Considering event: +Type = [Normal], Name = [filler-pod-137861fa-5a26-4e94-b63b-384ed3f4823c.1662fbb266efd1d6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-678/filler-pod-137861fa-5a26-4e94-b63b-384ed3f4823c to k8s-calico-coreos-yo5lpoxhpdlk-node-0] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-137861fa-5a26-4e94-b63b-384ed3f4823c.1662fbb2aa96eb84], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-137861fa-5a26-4e94-b63b-384ed3f4823c.1662fbb2b10ddcd2], Reason = [Created], Message = [Created container filler-pod-137861fa-5a26-4e94-b63b-384ed3f4823c] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-137861fa-5a26-4e94-b63b-384ed3f4823c.1662fbb2bc8ecce0], Reason = [Started], Message = [Started container filler-pod-137861fa-5a26-4e94-b63b-384ed3f4823c] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-9dc1027e-41d4-4c2d-9bcd-52c14d9c9b1a.1662fbb2678f5a86], Reason = [Scheduled], Message = [Successfully assigned sched-pred-678/filler-pod-9dc1027e-41d4-4c2d-9bcd-52c14d9c9b1a to k8s-calico-coreos-yo5lpoxhpdlk-node-1] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-9dc1027e-41d4-4c2d-9bcd-52c14d9c9b1a.1662fbb2b222952e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-9dc1027e-41d4-4c2d-9bcd-52c14d9c9b1a.1662fbb2b6fa205c], Reason = [Created], Message = [Created container filler-pod-9dc1027e-41d4-4c2d-9bcd-52c14d9c9b1a] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-9dc1027e-41d4-4c2d-9bcd-52c14d9c9b1a.1662fbb2c159d059], Reason = [Started], Message = [Started container filler-pod-9dc1027e-41d4-4c2d-9bcd-52c14d9c9b1a] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.1662fbb358898934], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.1662fbb359422498], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] +STEP: removing the label node off the node k8s-calico-coreos-yo5lpoxhpdlk-node-0 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:12:45.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-678" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 + +• [SLOW TEST:5.464 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":311,"completed":275,"skipped":4667,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:12:45.345: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename svc-latency +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svc-latency-5328 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:12:45.512: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-5328 +I0212 11:12:45.538947 22 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5328, replica count: 1 +I0212 11:12:46.590548 22 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0212 11:12:47.590938 22 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0212 11:12:48.591449 22 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Feb 12 11:12:48.711: INFO: Created: latency-svc-wb26z +Feb 12 11:12:48.721: INFO: Got endpoints: latency-svc-wb26z [29.145651ms] +Feb 12 11:12:48.738: INFO: Created: latency-svc-ssv4n +Feb 12 11:12:48.747: INFO: Got endpoints: latency-svc-ssv4n [24.76013ms] +Feb 12 11:12:48.748: INFO: Created: latency-svc-j7t2s +Feb 12 11:12:48.749: INFO: Created: latency-svc-gw59k +Feb 12 11:12:48.760: INFO: Got endpoints: latency-svc-gw59k [38.104038ms] +Feb 12 11:12:48.763: INFO: Got endpoints: latency-svc-j7t2s [39.682639ms] +Feb 12 11:12:48.766: INFO: Created: latency-svc-sg7h5 +Feb 12 11:12:48.774: INFO: Created: latency-svc-vc2d8 +Feb 12 11:12:48.775: INFO: Got endpoints: latency-svc-sg7h5 [52.106977ms] +Feb 12 11:12:48.783: INFO: Created: latency-svc-lfjph +Feb 12 11:12:48.786: INFO: Got endpoints: latency-svc-vc2d8 [62.603208ms] +Feb 12 11:12:48.788: INFO: Created: latency-svc-6rjxt +Feb 12 11:12:48.793: INFO: Got endpoints: latency-svc-lfjph [70.558186ms] +Feb 12 11:12:48.796: INFO: Got endpoints: 
latency-svc-6rjxt [73.293598ms] +Feb 12 11:12:48.800: INFO: Created: latency-svc-wj8bj +Feb 12 11:12:48.822: INFO: Created: latency-svc-2v88q +Feb 12 11:12:48.823: INFO: Created: latency-svc-62d6r +Feb 12 11:12:48.823: INFO: Created: latency-svc-2tdz2 +Feb 12 11:12:48.824: INFO: Got endpoints: latency-svc-62d6r [101.04499ms] +Feb 12 11:12:48.824: INFO: Got endpoints: latency-svc-2v88q [101.374103ms] +Feb 12 11:12:48.824: INFO: Got endpoints: latency-svc-2tdz2 [28.760413ms] +Feb 12 11:12:48.824: INFO: Got endpoints: latency-svc-wj8bj [101.740275ms] +Feb 12 11:12:48.826: INFO: Created: latency-svc-n5tgp +Feb 12 11:12:48.838: INFO: Created: latency-svc-fldpr +Feb 12 11:12:48.838: INFO: Created: latency-svc-szkjw +Feb 12 11:12:48.840: INFO: Got endpoints: latency-svc-n5tgp [118.177089ms] +Feb 12 11:12:48.841: INFO: Got endpoints: latency-svc-szkjw [118.450256ms] +Feb 12 11:12:48.848: INFO: Created: latency-svc-w5dsp +Feb 12 11:12:48.849: INFO: Got endpoints: latency-svc-fldpr [125.698107ms] +Feb 12 11:12:48.852: INFO: Got endpoints: latency-svc-w5dsp [128.506156ms] +Feb 12 11:12:48.853: INFO: Created: latency-svc-jnj7p +Feb 12 11:12:48.860: INFO: Created: latency-svc-qnnzx +Feb 12 11:12:48.862: INFO: Got endpoints: latency-svc-jnj7p [139.205017ms] +Feb 12 11:12:48.863: INFO: Created: latency-svc-489wh +Feb 12 11:12:48.866: INFO: Got endpoints: latency-svc-qnnzx [118.483479ms] +Feb 12 11:12:48.881: INFO: Created: latency-svc-4z8pf +Feb 12 11:12:48.884: INFO: Got endpoints: latency-svc-489wh [124.137606ms] +Feb 12 11:12:48.888: INFO: Created: latency-svc-g4xgh +Feb 12 11:12:48.901: INFO: Got endpoints: latency-svc-g4xgh [125.613066ms] +Feb 12 11:12:48.902: INFO: Got endpoints: latency-svc-4z8pf [138.485417ms] +Feb 12 11:12:48.906: INFO: Created: latency-svc-gbllq +Feb 12 11:12:48.910: INFO: Got endpoints: latency-svc-gbllq [124.640752ms] +Feb 12 11:12:48.912: INFO: Created: latency-svc-vf42m +Feb 12 11:12:48.916: INFO: Created: latency-svc-c2czm +Feb 12 11:12:48.921: INFO: Created: latency-svc-tbfdl +Feb 12 11:12:48.922: INFO: Got endpoints: latency-svc-vf42m [128.411253ms] +Feb 12 11:12:48.924: INFO: Created: latency-svc-wl72h +Feb 12 11:12:48.925: INFO: Got endpoints: latency-svc-c2czm [101.242731ms] +Feb 12 11:12:48.938: INFO: Created: latency-svc-v9572 +Feb 12 11:12:48.939: INFO: Got endpoints: latency-svc-tbfdl [113.958873ms] +Feb 12 11:12:48.939: INFO: Got endpoints: latency-svc-wl72h [114.121697ms] +Feb 12 11:12:48.945: INFO: Got endpoints: latency-svc-v9572 [120.65462ms] +Feb 12 11:12:48.954: INFO: Created: latency-svc-2hlbc +Feb 12 11:12:48.954: INFO: Created: latency-svc-tqtwh +Feb 12 11:12:48.955: INFO: Got endpoints: latency-svc-2hlbc [114.815636ms] +Feb 12 11:12:48.960: INFO: Created: latency-svc-qrp56 +Feb 12 11:12:48.962: INFO: Got endpoints: latency-svc-tqtwh [120.584327ms] +Feb 12 11:12:48.965: INFO: Created: latency-svc-5qmwb +Feb 12 11:12:48.972: INFO: Created: latency-svc-pvm2n +Feb 12 11:12:48.973: INFO: Got endpoints: latency-svc-5qmwb [120.807474ms] +Feb 12 11:12:48.974: INFO: Got endpoints: latency-svc-qrp56 [125.027192ms] +Feb 12 11:12:48.979: INFO: Created: latency-svc-ktlkd +Feb 12 11:12:48.984: INFO: Got endpoints: latency-svc-pvm2n [122.534545ms] +Feb 12 11:12:48.985: INFO: Created: latency-svc-7pg4p +Feb 12 11:12:48.986: INFO: Got endpoints: latency-svc-ktlkd [120.065836ms] +Feb 12 11:12:48.992: INFO: Got endpoints: latency-svc-7pg4p [107.089928ms] +Feb 12 11:12:48.995: INFO: Created: latency-svc-fzn74 +Feb 12 11:12:48.996: INFO: Created: latency-svc-rjvhj +Feb 12 
11:12:49.000: INFO: Created: latency-svc-nvrpp +Feb 12 11:12:49.003: INFO: Got endpoints: latency-svc-fzn74 [101.668301ms] +Feb 12 11:12:49.003: INFO: Created: latency-svc-zmj28 +Feb 12 11:12:49.018: INFO: Got endpoints: latency-svc-rjvhj [116.755913ms] +Feb 12 11:12:49.018: INFO: Created: latency-svc-qqlb7 +Feb 12 11:12:49.026: INFO: Created: latency-svc-mdzdt +Feb 12 11:12:49.028: INFO: Created: latency-svc-gkksl +Feb 12 11:12:49.036: INFO: Created: latency-svc-btqx2 +Feb 12 11:12:49.039: INFO: Created: latency-svc-tdxcg +Feb 12 11:12:49.042: INFO: Created: latency-svc-mc2xn +Feb 12 11:12:49.045: INFO: Created: latency-svc-48f75 +Feb 12 11:12:49.048: INFO: Created: latency-svc-bjtnp +Feb 12 11:12:49.052: INFO: Created: latency-svc-fs7s4 +Feb 12 11:12:49.054: INFO: Created: latency-svc-bdkp7 +Feb 12 11:12:49.058: INFO: Created: latency-svc-kr25c +Feb 12 11:12:49.063: INFO: Created: latency-svc-wks6c +Feb 12 11:12:49.065: INFO: Created: latency-svc-lp9zv +Feb 12 11:12:49.069: INFO: Got endpoints: latency-svc-nvrpp [158.207181ms] +Feb 12 11:12:49.076: INFO: Created: latency-svc-rkgtx +Feb 12 11:12:49.122: INFO: Got endpoints: latency-svc-zmj28 [200.62801ms] +Feb 12 11:12:49.131: INFO: Created: latency-svc-whxs8 +Feb 12 11:12:49.165: INFO: Got endpoints: latency-svc-qqlb7 [239.818104ms] +Feb 12 11:12:49.190: INFO: Created: latency-svc-4j6gt +Feb 12 11:12:49.216: INFO: Got endpoints: latency-svc-mdzdt [277.581723ms] +Feb 12 11:12:49.226: INFO: Created: latency-svc-lsh25 +Feb 12 11:12:49.271: INFO: Got endpoints: latency-svc-gkksl [331.792442ms] +Feb 12 11:12:49.279: INFO: Created: latency-svc-mzkcd +Feb 12 11:12:49.321: INFO: Got endpoints: latency-svc-btqx2 [375.897264ms] +Feb 12 11:12:49.328: INFO: Created: latency-svc-phv4d +Feb 12 11:12:49.363: INFO: Got endpoints: latency-svc-tdxcg [408.728872ms] +Feb 12 11:12:49.374: INFO: Created: latency-svc-rp6nm +Feb 12 11:12:49.418: INFO: Got endpoints: latency-svc-mc2xn [455.924278ms] +Feb 12 11:12:49.426: INFO: Created: latency-svc-59zk2 +Feb 12 11:12:49.466: INFO: Got endpoints: latency-svc-48f75 [493.556873ms] +Feb 12 11:12:49.482: INFO: Created: latency-svc-tsrw8 +Feb 12 11:12:49.518: INFO: Got endpoints: latency-svc-bjtnp [543.762074ms] +Feb 12 11:12:49.526: INFO: Created: latency-svc-rtwh9 +Feb 12 11:12:49.566: INFO: Got endpoints: latency-svc-fs7s4 [581.655829ms] +Feb 12 11:12:49.578: INFO: Created: latency-svc-mr6bb +Feb 12 11:12:49.618: INFO: Got endpoints: latency-svc-bdkp7 [631.155275ms] +Feb 12 11:12:49.627: INFO: Created: latency-svc-4mws8 +Feb 12 11:12:49.666: INFO: Got endpoints: latency-svc-kr25c [674.295171ms] +Feb 12 11:12:49.676: INFO: Created: latency-svc-k9xzf +Feb 12 11:12:49.714: INFO: Got endpoints: latency-svc-wks6c [711.46777ms] +Feb 12 11:12:49.726: INFO: Created: latency-svc-pbqzc +Feb 12 11:12:49.767: INFO: Got endpoints: latency-svc-lp9zv [748.772927ms] +Feb 12 11:12:49.774: INFO: Created: latency-svc-kgssk +Feb 12 11:12:49.820: INFO: Got endpoints: latency-svc-rkgtx [751.218418ms] +Feb 12 11:12:49.829: INFO: Created: latency-svc-ztdqg +Feb 12 11:12:49.867: INFO: Got endpoints: latency-svc-whxs8 [744.96612ms] +Feb 12 11:12:49.875: INFO: Created: latency-svc-h6bqw +Feb 12 11:12:49.915: INFO: Got endpoints: latency-svc-4j6gt [750.238973ms] +Feb 12 11:12:49.924: INFO: Created: latency-svc-cdscz +Feb 12 11:12:49.967: INFO: Got endpoints: latency-svc-lsh25 [750.445838ms] +Feb 12 11:12:49.974: INFO: Created: latency-svc-bch92 +Feb 12 11:12:50.018: INFO: Got endpoints: latency-svc-mzkcd [746.911427ms] +Feb 12 11:12:50.026: 
INFO: Created: latency-svc-sqbcd +Feb 12 11:12:50.065: INFO: Got endpoints: latency-svc-phv4d [744.181506ms] +Feb 12 11:12:50.081: INFO: Created: latency-svc-9jj67 +Feb 12 11:12:50.114: INFO: Got endpoints: latency-svc-rp6nm [750.354023ms] +Feb 12 11:12:50.125: INFO: Created: latency-svc-zr4zt +Feb 12 11:12:50.166: INFO: Got endpoints: latency-svc-59zk2 [748.565039ms] +Feb 12 11:12:50.179: INFO: Created: latency-svc-rnpd8 +Feb 12 11:12:50.217: INFO: Got endpoints: latency-svc-tsrw8 [751.016732ms] +Feb 12 11:12:50.226: INFO: Created: latency-svc-25wmc +Feb 12 11:12:50.266: INFO: Got endpoints: latency-svc-rtwh9 [748.079645ms] +Feb 12 11:12:50.277: INFO: Created: latency-svc-qmvmm +Feb 12 11:12:50.335: INFO: Got endpoints: latency-svc-mr6bb [769.309523ms] +Feb 12 11:12:50.351: INFO: Created: latency-svc-fbxqk +Feb 12 11:12:50.377: INFO: Got endpoints: latency-svc-4mws8 [759.463415ms] +Feb 12 11:12:50.405: INFO: Created: latency-svc-5t6f9 +Feb 12 11:12:50.434: INFO: Got endpoints: latency-svc-k9xzf [767.689846ms] +Feb 12 11:12:50.450: INFO: Created: latency-svc-tjwvv +Feb 12 11:12:50.466: INFO: Got endpoints: latency-svc-pbqzc [751.711555ms] +Feb 12 11:12:50.476: INFO: Created: latency-svc-nm7tr +Feb 12 11:12:50.515: INFO: Got endpoints: latency-svc-kgssk [748.182162ms] +Feb 12 11:12:50.523: INFO: Created: latency-svc-x5dqc +Feb 12 11:12:50.575: INFO: Got endpoints: latency-svc-ztdqg [754.830757ms] +Feb 12 11:12:50.603: INFO: Created: latency-svc-8twck +Feb 12 11:12:50.629: INFO: Got endpoints: latency-svc-h6bqw [761.123561ms] +Feb 12 11:12:50.643: INFO: Created: latency-svc-6b286 +Feb 12 11:12:50.665: INFO: Got endpoints: latency-svc-cdscz [749.446338ms] +Feb 12 11:12:50.674: INFO: Created: latency-svc-tcdkh +Feb 12 11:12:50.714: INFO: Got endpoints: latency-svc-bch92 [746.830376ms] +Feb 12 11:12:50.721: INFO: Created: latency-svc-p2p4m +Feb 12 11:12:50.769: INFO: Got endpoints: latency-svc-sqbcd [751.483641ms] +Feb 12 11:12:50.778: INFO: Created: latency-svc-hkzxp +Feb 12 11:12:50.816: INFO: Got endpoints: latency-svc-9jj67 [750.84044ms] +Feb 12 11:12:50.828: INFO: Created: latency-svc-pnq4l +Feb 12 11:12:50.866: INFO: Got endpoints: latency-svc-zr4zt [751.340449ms] +Feb 12 11:12:50.892: INFO: Created: latency-svc-wjxk5 +Feb 12 11:12:50.916: INFO: Got endpoints: latency-svc-rnpd8 [749.350092ms] +Feb 12 11:12:50.939: INFO: Created: latency-svc-977gx +Feb 12 11:12:50.965: INFO: Got endpoints: latency-svc-25wmc [747.729717ms] +Feb 12 11:12:50.975: INFO: Created: latency-svc-b4gw4 +Feb 12 11:12:51.021: INFO: Got endpoints: latency-svc-qmvmm [755.174107ms] +Feb 12 11:12:51.031: INFO: Created: latency-svc-jjxl8 +Feb 12 11:12:51.067: INFO: Got endpoints: latency-svc-fbxqk [731.427853ms] +Feb 12 11:12:51.074: INFO: Created: latency-svc-7s4d6 +Feb 12 11:12:51.125: INFO: Got endpoints: latency-svc-5t6f9 [747.888138ms] +Feb 12 11:12:51.134: INFO: Created: latency-svc-wfkrn +Feb 12 11:12:51.166: INFO: Got endpoints: latency-svc-tjwvv [731.557077ms] +Feb 12 11:12:51.174: INFO: Created: latency-svc-v55m2 +Feb 12 11:12:51.215: INFO: Got endpoints: latency-svc-nm7tr [748.242954ms] +Feb 12 11:12:51.222: INFO: Created: latency-svc-rtg4j +Feb 12 11:12:51.264: INFO: Got endpoints: latency-svc-x5dqc [748.893247ms] +Feb 12 11:12:51.275: INFO: Created: latency-svc-w76tb +Feb 12 11:12:51.317: INFO: Got endpoints: latency-svc-8twck [741.464548ms] +Feb 12 11:12:51.324: INFO: Created: latency-svc-z4j9c +Feb 12 11:12:51.388: INFO: Got endpoints: latency-svc-6b286 [759.235222ms] +Feb 12 11:12:51.397: INFO: Created: 
latency-svc-7nxhf +Feb 12 11:12:51.419: INFO: Got endpoints: latency-svc-tcdkh [754.023336ms] +Feb 12 11:12:51.433: INFO: Created: latency-svc-rq72c +Feb 12 11:12:51.464: INFO: Got endpoints: latency-svc-p2p4m [750.081552ms] +Feb 12 11:12:51.473: INFO: Created: latency-svc-cl8t6 +Feb 12 11:12:51.515: INFO: Got endpoints: latency-svc-hkzxp [745.962925ms] +Feb 12 11:12:51.526: INFO: Created: latency-svc-4dx9k +Feb 12 11:12:51.575: INFO: Got endpoints: latency-svc-pnq4l [759.040717ms] +Feb 12 11:12:51.587: INFO: Created: latency-svc-qdsqc +Feb 12 11:12:51.614: INFO: Got endpoints: latency-svc-wjxk5 [747.942116ms] +Feb 12 11:12:51.632: INFO: Created: latency-svc-n8g9d +Feb 12 11:12:51.672: INFO: Got endpoints: latency-svc-977gx [755.124094ms] +Feb 12 11:12:51.680: INFO: Created: latency-svc-7fdnf +Feb 12 11:12:51.715: INFO: Got endpoints: latency-svc-b4gw4 [750.088909ms] +Feb 12 11:12:51.730: INFO: Created: latency-svc-7gslk +Feb 12 11:12:51.768: INFO: Got endpoints: latency-svc-jjxl8 [746.147736ms] +Feb 12 11:12:51.776: INFO: Created: latency-svc-f6ch6 +Feb 12 11:12:51.816: INFO: Got endpoints: latency-svc-7s4d6 [749.276585ms] +Feb 12 11:12:51.829: INFO: Created: latency-svc-shvzz +Feb 12 11:12:51.865: INFO: Got endpoints: latency-svc-wfkrn [739.094693ms] +Feb 12 11:12:51.871: INFO: Created: latency-svc-snvjs +Feb 12 11:12:51.919: INFO: Got endpoints: latency-svc-v55m2 [752.590354ms] +Feb 12 11:12:51.934: INFO: Created: latency-svc-sxnft +Feb 12 11:12:51.966: INFO: Got endpoints: latency-svc-rtg4j [750.807058ms] +Feb 12 11:12:51.974: INFO: Created: latency-svc-p9jht +Feb 12 11:12:52.016: INFO: Got endpoints: latency-svc-w76tb [750.999369ms] +Feb 12 11:12:52.025: INFO: Created: latency-svc-8pbvp +Feb 12 11:12:52.067: INFO: Got endpoints: latency-svc-z4j9c [750.178282ms] +Feb 12 11:12:52.077: INFO: Created: latency-svc-skj7k +Feb 12 11:12:52.119: INFO: Got endpoints: latency-svc-7nxhf [730.507123ms] +Feb 12 11:12:52.126: INFO: Created: latency-svc-8lmkn +Feb 12 11:12:52.170: INFO: Got endpoints: latency-svc-rq72c [750.990778ms] +Feb 12 11:12:52.178: INFO: Created: latency-svc-ltpk6 +Feb 12 11:12:52.216: INFO: Got endpoints: latency-svc-cl8t6 [752.212417ms] +Feb 12 11:12:52.226: INFO: Created: latency-svc-dvq27 +Feb 12 11:12:52.267: INFO: Got endpoints: latency-svc-4dx9k [751.433503ms] +Feb 12 11:12:52.277: INFO: Created: latency-svc-pwp2d +Feb 12 11:12:52.315: INFO: Got endpoints: latency-svc-qdsqc [739.936723ms] +Feb 12 11:12:52.325: INFO: Created: latency-svc-chlw4 +Feb 12 11:12:52.367: INFO: Got endpoints: latency-svc-n8g9d [753.152572ms] +Feb 12 11:12:52.378: INFO: Created: latency-svc-v7bxm +Feb 12 11:12:52.417: INFO: Got endpoints: latency-svc-7fdnf [744.852885ms] +Feb 12 11:12:52.426: INFO: Created: latency-svc-qfdsc +Feb 12 11:12:52.471: INFO: Got endpoints: latency-svc-7gslk [755.555552ms] +Feb 12 11:12:52.479: INFO: Created: latency-svc-pchtv +Feb 12 11:12:52.516: INFO: Got endpoints: latency-svc-f6ch6 [748.562262ms] +Feb 12 11:12:52.530: INFO: Created: latency-svc-jcmh4 +Feb 12 11:12:52.568: INFO: Got endpoints: latency-svc-shvzz [751.618238ms] +Feb 12 11:12:52.575: INFO: Created: latency-svc-lc5xd +Feb 12 11:12:52.623: INFO: Got endpoints: latency-svc-snvjs [758.634792ms] +Feb 12 11:12:52.632: INFO: Created: latency-svc-6gskv +Feb 12 11:12:52.668: INFO: Got endpoints: latency-svc-sxnft [748.954321ms] +Feb 12 11:12:52.678: INFO: Created: latency-svc-svn6h +Feb 12 11:12:52.715: INFO: Got endpoints: latency-svc-p9jht [749.084961ms] +Feb 12 11:12:52.726: INFO: Created: latency-svc-w2wk6 
+Feb 12 11:12:52.765: INFO: Got endpoints: latency-svc-8pbvp [749.17164ms] +Feb 12 11:12:52.773: INFO: Created: latency-svc-7wznj +Feb 12 11:12:52.815: INFO: Got endpoints: latency-svc-skj7k [748.210282ms] +Feb 12 11:12:52.822: INFO: Created: latency-svc-bvksk +Feb 12 11:12:52.867: INFO: Got endpoints: latency-svc-8lmkn [748.623023ms] +Feb 12 11:12:52.876: INFO: Created: latency-svc-zwtp6 +Feb 12 11:12:52.915: INFO: Got endpoints: latency-svc-ltpk6 [744.732394ms] +Feb 12 11:12:52.924: INFO: Created: latency-svc-jv2bg +Feb 12 11:12:52.966: INFO: Got endpoints: latency-svc-dvq27 [749.280014ms] +Feb 12 11:12:52.976: INFO: Created: latency-svc-sqznv +Feb 12 11:12:53.018: INFO: Got endpoints: latency-svc-pwp2d [750.795018ms] +Feb 12 11:12:53.032: INFO: Created: latency-svc-56gd9 +Feb 12 11:12:53.067: INFO: Got endpoints: latency-svc-chlw4 [751.969656ms] +Feb 12 11:12:53.075: INFO: Created: latency-svc-qnlwx +Feb 12 11:12:53.122: INFO: Got endpoints: latency-svc-v7bxm [755.012616ms] +Feb 12 11:12:53.132: INFO: Created: latency-svc-7r25j +Feb 12 11:12:53.165: INFO: Got endpoints: latency-svc-qfdsc [748.176053ms] +Feb 12 11:12:53.185: INFO: Created: latency-svc-5dghm +Feb 12 11:12:53.220: INFO: Got endpoints: latency-svc-pchtv [748.865873ms] +Feb 12 11:12:53.230: INFO: Created: latency-svc-rfwm2 +Feb 12 11:12:53.266: INFO: Got endpoints: latency-svc-jcmh4 [749.950565ms] +Feb 12 11:12:53.276: INFO: Created: latency-svc-fj62j +Feb 12 11:12:53.317: INFO: Got endpoints: latency-svc-lc5xd [748.517437ms] +Feb 12 11:12:53.333: INFO: Created: latency-svc-hm4ff +Feb 12 11:12:53.371: INFO: Got endpoints: latency-svc-6gskv [747.307458ms] +Feb 12 11:12:53.380: INFO: Created: latency-svc-49mh9 +Feb 12 11:12:53.415: INFO: Got endpoints: latency-svc-svn6h [747.098064ms] +Feb 12 11:12:53.423: INFO: Created: latency-svc-cdxv9 +Feb 12 11:12:53.466: INFO: Got endpoints: latency-svc-w2wk6 [750.226289ms] +Feb 12 11:12:53.474: INFO: Created: latency-svc-whh78 +Feb 12 11:12:53.522: INFO: Got endpoints: latency-svc-7wznj [757.290575ms] +Feb 12 11:12:53.530: INFO: Created: latency-svc-lr78z +Feb 12 11:12:53.570: INFO: Got endpoints: latency-svc-bvksk [754.518628ms] +Feb 12 11:12:53.578: INFO: Created: latency-svc-95nxr +Feb 12 11:12:53.619: INFO: Got endpoints: latency-svc-zwtp6 [751.266552ms] +Feb 12 11:12:53.628: INFO: Created: latency-svc-p9jwl +Feb 12 11:12:53.666: INFO: Got endpoints: latency-svc-jv2bg [751.34129ms] +Feb 12 11:12:53.683: INFO: Created: latency-svc-5lnpg +Feb 12 11:12:53.722: INFO: Got endpoints: latency-svc-sqznv [755.841988ms] +Feb 12 11:12:53.730: INFO: Created: latency-svc-hspgg +Feb 12 11:12:53.768: INFO: Got endpoints: latency-svc-56gd9 [749.688955ms] +Feb 12 11:12:53.784: INFO: Created: latency-svc-624fd +Feb 12 11:12:53.816: INFO: Got endpoints: latency-svc-qnlwx [748.988653ms] +Feb 12 11:12:53.825: INFO: Created: latency-svc-vr2cd +Feb 12 11:12:53.865: INFO: Got endpoints: latency-svc-7r25j [743.095382ms] +Feb 12 11:12:53.873: INFO: Created: latency-svc-ltf2f +Feb 12 11:12:53.916: INFO: Got endpoints: latency-svc-5dghm [750.988657ms] +Feb 12 11:12:53.928: INFO: Created: latency-svc-t7647 +Feb 12 11:12:53.968: INFO: Got endpoints: latency-svc-rfwm2 [748.398783ms] +Feb 12 11:12:53.981: INFO: Created: latency-svc-vr8n8 +Feb 12 11:12:54.015: INFO: Got endpoints: latency-svc-fj62j [748.355007ms] +Feb 12 11:12:54.027: INFO: Created: latency-svc-kw2bm +Feb 12 11:12:54.068: INFO: Got endpoints: latency-svc-hm4ff [750.891532ms] +Feb 12 11:12:54.080: INFO: Created: latency-svc-w87z4 +Feb 12 
11:12:54.120: INFO: Got endpoints: latency-svc-49mh9 [748.879859ms] +Feb 12 11:12:54.130: INFO: Created: latency-svc-kgqns +Feb 12 11:12:54.164: INFO: Got endpoints: latency-svc-cdxv9 [748.9652ms] +Feb 12 11:12:54.174: INFO: Created: latency-svc-gvxct +Feb 12 11:12:54.215: INFO: Got endpoints: latency-svc-whh78 [749.007078ms] +Feb 12 11:12:54.225: INFO: Created: latency-svc-4ktgx +Feb 12 11:12:54.267: INFO: Got endpoints: latency-svc-lr78z [744.374788ms] +Feb 12 11:12:54.276: INFO: Created: latency-svc-d78bg +Feb 12 11:12:54.319: INFO: Got endpoints: latency-svc-95nxr [748.872154ms] +Feb 12 11:12:54.328: INFO: Created: latency-svc-5mn2v +Feb 12 11:12:54.366: INFO: Got endpoints: latency-svc-p9jwl [747.240728ms] +Feb 12 11:12:54.376: INFO: Created: latency-svc-cdsr2 +Feb 12 11:12:54.417: INFO: Got endpoints: latency-svc-5lnpg [750.561424ms] +Feb 12 11:12:54.426: INFO: Created: latency-svc-gwgzg +Feb 12 11:12:54.464: INFO: Got endpoints: latency-svc-hspgg [741.68587ms] +Feb 12 11:12:54.482: INFO: Created: latency-svc-qnplm +Feb 12 11:12:54.518: INFO: Got endpoints: latency-svc-624fd [750.370913ms] +Feb 12 11:12:54.524: INFO: Created: latency-svc-tv6pg +Feb 12 11:12:54.565: INFO: Got endpoints: latency-svc-vr2cd [748.585191ms] +Feb 12 11:12:54.578: INFO: Created: latency-svc-nl7qq +Feb 12 11:12:54.615: INFO: Got endpoints: latency-svc-ltf2f [749.470645ms] +Feb 12 11:12:54.626: INFO: Created: latency-svc-pv5zv +Feb 12 11:12:54.666: INFO: Got endpoints: latency-svc-t7647 [749.941254ms] +Feb 12 11:12:54.676: INFO: Created: latency-svc-mkvkz +Feb 12 11:12:54.720: INFO: Got endpoints: latency-svc-vr8n8 [751.218773ms] +Feb 12 11:12:54.737: INFO: Created: latency-svc-jjd7b +Feb 12 11:12:54.764: INFO: Got endpoints: latency-svc-kw2bm [749.263531ms] +Feb 12 11:12:54.775: INFO: Created: latency-svc-ghsnw +Feb 12 11:12:54.817: INFO: Got endpoints: latency-svc-w87z4 [749.52355ms] +Feb 12 11:12:54.826: INFO: Created: latency-svc-gzpvv +Feb 12 11:12:54.865: INFO: Got endpoints: latency-svc-kgqns [745.778965ms] +Feb 12 11:12:54.878: INFO: Created: latency-svc-rbth7 +Feb 12 11:12:54.920: INFO: Got endpoints: latency-svc-gvxct [755.600101ms] +Feb 12 11:12:54.929: INFO: Created: latency-svc-r4xg2 +Feb 12 11:12:54.969: INFO: Got endpoints: latency-svc-4ktgx [754.726353ms] +Feb 12 11:12:54.978: INFO: Created: latency-svc-wkffq +Feb 12 11:12:55.017: INFO: Got endpoints: latency-svc-d78bg [750.471681ms] +Feb 12 11:12:55.025: INFO: Created: latency-svc-479r7 +Feb 12 11:12:55.066: INFO: Got endpoints: latency-svc-5mn2v [747.113013ms] +Feb 12 11:12:55.077: INFO: Created: latency-svc-wsvmd +Feb 12 11:12:55.122: INFO: Got endpoints: latency-svc-cdsr2 [755.416322ms] +Feb 12 11:12:55.131: INFO: Created: latency-svc-fwpxf +Feb 12 11:12:55.168: INFO: Got endpoints: latency-svc-gwgzg [751.276409ms] +Feb 12 11:12:55.177: INFO: Created: latency-svc-9kr72 +Feb 12 11:12:55.215: INFO: Got endpoints: latency-svc-qnplm [751.094408ms] +Feb 12 11:12:55.226: INFO: Created: latency-svc-wtt95 +Feb 12 11:12:55.270: INFO: Got endpoints: latency-svc-tv6pg [751.333075ms] +Feb 12 11:12:55.280: INFO: Created: latency-svc-7frd4 +Feb 12 11:12:55.319: INFO: Got endpoints: latency-svc-nl7qq [753.989957ms] +Feb 12 11:12:55.327: INFO: Created: latency-svc-rp2hp +Feb 12 11:12:55.365: INFO: Got endpoints: latency-svc-pv5zv [750.234557ms] +Feb 12 11:12:55.372: INFO: Created: latency-svc-n98xj +Feb 12 11:12:55.415: INFO: Got endpoints: latency-svc-mkvkz [749.214181ms] +Feb 12 11:12:55.426: INFO: Created: latency-svc-xt56n +Feb 12 11:12:55.465: INFO: 
Got endpoints: latency-svc-jjd7b [745.533479ms] +Feb 12 11:12:55.473: INFO: Created: latency-svc-744xq +Feb 12 11:12:55.517: INFO: Got endpoints: latency-svc-ghsnw [752.536136ms] +Feb 12 11:12:55.526: INFO: Created: latency-svc-sgf4f +Feb 12 11:12:55.594: INFO: Got endpoints: latency-svc-gzpvv [776.877058ms] +Feb 12 11:12:55.636: INFO: Got endpoints: latency-svc-rbth7 [770.29762ms] +Feb 12 11:12:55.667: INFO: Got endpoints: latency-svc-r4xg2 [746.57464ms] +Feb 12 11:12:55.672: INFO: Created: latency-svc-vvppj +Feb 12 11:12:55.676: INFO: Created: latency-svc-g2tk5 +Feb 12 11:12:55.682: INFO: Created: latency-svc-7k2x5 +Feb 12 11:12:55.718: INFO: Got endpoints: latency-svc-wkffq [748.936752ms] +Feb 12 11:12:55.729: INFO: Created: latency-svc-rxsbz +Feb 12 11:12:55.766: INFO: Got endpoints: latency-svc-479r7 [748.381485ms] +Feb 12 11:12:55.774: INFO: Created: latency-svc-m9dbr +Feb 12 11:12:55.816: INFO: Got endpoints: latency-svc-wsvmd [750.111719ms] +Feb 12 11:12:55.825: INFO: Created: latency-svc-2kn6k +Feb 12 11:12:55.866: INFO: Got endpoints: latency-svc-fwpxf [743.958899ms] +Feb 12 11:12:55.878: INFO: Created: latency-svc-bc4g6 +Feb 12 11:12:55.937: INFO: Got endpoints: latency-svc-9kr72 [768.655432ms] +Feb 12 11:12:55.946: INFO: Created: latency-svc-xnkc8 +Feb 12 11:12:55.968: INFO: Got endpoints: latency-svc-wtt95 [752.037296ms] +Feb 12 11:12:55.980: INFO: Created: latency-svc-zwrpc +Feb 12 11:12:56.019: INFO: Got endpoints: latency-svc-7frd4 [749.68374ms] +Feb 12 11:12:56.037: INFO: Created: latency-svc-hbztv +Feb 12 11:12:56.071: INFO: Got endpoints: latency-svc-rp2hp [751.918646ms] +Feb 12 11:12:56.079: INFO: Created: latency-svc-cglwr +Feb 12 11:12:56.118: INFO: Got endpoints: latency-svc-n98xj [752.591983ms] +Feb 12 11:12:56.128: INFO: Created: latency-svc-7vb9b +Feb 12 11:12:56.169: INFO: Got endpoints: latency-svc-xt56n [754.068063ms] +Feb 12 11:12:56.181: INFO: Created: latency-svc-fl87g +Feb 12 11:12:56.217: INFO: Got endpoints: latency-svc-744xq [751.01226ms] +Feb 12 11:12:56.226: INFO: Created: latency-svc-2rjgz +Feb 12 11:12:56.266: INFO: Got endpoints: latency-svc-sgf4f [749.110315ms] +Feb 12 11:12:56.274: INFO: Created: latency-svc-szrf5 +Feb 12 11:12:56.316: INFO: Got endpoints: latency-svc-vvppj [706.46473ms] +Feb 12 11:12:56.326: INFO: Created: latency-svc-vgzml +Feb 12 11:12:56.367: INFO: Got endpoints: latency-svc-g2tk5 [731.314437ms] +Feb 12 11:12:56.377: INFO: Created: latency-svc-sq982 +Feb 12 11:12:56.417: INFO: Got endpoints: latency-svc-7k2x5 [750.239909ms] +Feb 12 11:12:56.426: INFO: Created: latency-svc-7b662 +Feb 12 11:12:56.467: INFO: Got endpoints: latency-svc-rxsbz [748.60176ms] +Feb 12 11:12:56.475: INFO: Created: latency-svc-fzq4z +Feb 12 11:12:56.591: INFO: Got endpoints: latency-svc-m9dbr [825.220069ms] +Feb 12 11:12:56.602: INFO: Got endpoints: latency-svc-2kn6k [785.052839ms] +Feb 12 11:12:56.606: INFO: Created: latency-svc-hnjnh +Feb 12 11:12:56.620: INFO: Got endpoints: latency-svc-bc4g6 [754.503714ms] +Feb 12 11:12:56.669: INFO: Got endpoints: latency-svc-xnkc8 [731.715172ms] +Feb 12 11:12:56.723: INFO: Got endpoints: latency-svc-zwrpc [754.938883ms] +Feb 12 11:12:56.767: INFO: Got endpoints: latency-svc-hbztv [747.298361ms] +Feb 12 11:12:56.823: INFO: Got endpoints: latency-svc-cglwr [751.540063ms] +Feb 12 11:12:56.870: INFO: Got endpoints: latency-svc-7vb9b [752.076776ms] +Feb 12 11:12:56.918: INFO: Got endpoints: latency-svc-fl87g [748.579396ms] +Feb 12 11:12:56.966: INFO: Got endpoints: latency-svc-2rjgz [749.42337ms] +Feb 12 11:12:57.019: 
INFO: Got endpoints: latency-svc-szrf5 [753.10587ms] +Feb 12 11:12:57.073: INFO: Got endpoints: latency-svc-vgzml [757.072338ms] +Feb 12 11:12:57.116: INFO: Got endpoints: latency-svc-sq982 [748.631284ms] +Feb 12 11:12:57.166: INFO: Got endpoints: latency-svc-7b662 [748.30318ms] +Feb 12 11:12:57.215: INFO: Got endpoints: latency-svc-fzq4z [748.175363ms] +Feb 12 11:12:57.269: INFO: Got endpoints: latency-svc-hnjnh [678.065606ms] +Feb 12 11:12:57.270: INFO: Latencies: [24.76013ms 28.760413ms 38.104038ms 39.682639ms 52.106977ms 62.603208ms 70.558186ms 73.293598ms 101.04499ms 101.242731ms 101.374103ms 101.668301ms 101.740275ms 107.089928ms 113.958873ms 114.121697ms 114.815636ms 116.755913ms 118.177089ms 118.450256ms 118.483479ms 120.065836ms 120.584327ms 120.65462ms 120.807474ms 122.534545ms 124.137606ms 124.640752ms 125.027192ms 125.613066ms 125.698107ms 128.411253ms 128.506156ms 138.485417ms 139.205017ms 158.207181ms 200.62801ms 239.818104ms 277.581723ms 331.792442ms 375.897264ms 408.728872ms 455.924278ms 493.556873ms 543.762074ms 581.655829ms 631.155275ms 674.295171ms 678.065606ms 706.46473ms 711.46777ms 730.507123ms 731.314437ms 731.427853ms 731.557077ms 731.715172ms 739.094693ms 739.936723ms 741.464548ms 741.68587ms 743.095382ms 743.958899ms 744.181506ms 744.374788ms 744.732394ms 744.852885ms 744.96612ms 745.533479ms 745.778965ms 745.962925ms 746.147736ms 746.57464ms 746.830376ms 746.911427ms 747.098064ms 747.113013ms 747.240728ms 747.298361ms 747.307458ms 747.729717ms 747.888138ms 747.942116ms 748.079645ms 748.175363ms 748.176053ms 748.182162ms 748.210282ms 748.242954ms 748.30318ms 748.355007ms 748.381485ms 748.398783ms 748.517437ms 748.562262ms 748.565039ms 748.579396ms 748.585191ms 748.60176ms 748.623023ms 748.631284ms 748.772927ms 748.865873ms 748.872154ms 748.879859ms 748.893247ms 748.936752ms 748.954321ms 748.9652ms 748.988653ms 749.007078ms 749.084961ms 749.110315ms 749.17164ms 749.214181ms 749.263531ms 749.276585ms 749.280014ms 749.350092ms 749.42337ms 749.446338ms 749.470645ms 749.52355ms 749.68374ms 749.688955ms 749.941254ms 749.950565ms 750.081552ms 750.088909ms 750.111719ms 750.178282ms 750.226289ms 750.234557ms 750.238973ms 750.239909ms 750.354023ms 750.370913ms 750.445838ms 750.471681ms 750.561424ms 750.795018ms 750.807058ms 750.84044ms 750.891532ms 750.988657ms 750.990778ms 750.999369ms 751.01226ms 751.016732ms 751.094408ms 751.218418ms 751.218773ms 751.266552ms 751.276409ms 751.333075ms 751.340449ms 751.34129ms 751.433503ms 751.483641ms 751.540063ms 751.618238ms 751.711555ms 751.918646ms 751.969656ms 752.037296ms 752.076776ms 752.212417ms 752.536136ms 752.590354ms 752.591983ms 753.10587ms 753.152572ms 753.989957ms 754.023336ms 754.068063ms 754.503714ms 754.518628ms 754.726353ms 754.830757ms 754.938883ms 755.012616ms 755.124094ms 755.174107ms 755.416322ms 755.555552ms 755.600101ms 755.841988ms 757.072338ms 757.290575ms 758.634792ms 759.040717ms 759.235222ms 759.463415ms 761.123561ms 767.689846ms 768.655432ms 769.309523ms 770.29762ms 776.877058ms 785.052839ms 825.220069ms] +Feb 12 11:12:57.270: INFO: 50 %ile: 748.772927ms +Feb 12 11:12:57.270: INFO: 90 %ile: 755.124094ms +Feb 12 11:12:57.270: INFO: 99 %ile: 785.052839ms +Feb 12 11:12:57.270: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:12:57.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-5328" for this 
suite. + +• [SLOW TEST:11.956 seconds] +[sig-network] Service endpoints latency +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":311,"completed":276,"skipped":4697,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:12:57.301: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-3961 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod pod-subpath-test-configmap-b577 +STEP: Creating a pod to test atomic-volume-subpath +Feb 12 11:12:57.502: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-b577" in namespace "subpath-3961" to be "Succeeded or Failed" +Feb 12 11:12:57.507: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Pending", Reason="", readiness=false. Elapsed: 5.335448ms +Feb 12 11:12:59.526: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024188574s +Feb 12 11:13:01.536: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Running", Reason="", readiness=true. Elapsed: 4.033761034s +Feb 12 11:13:03.556: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Running", Reason="", readiness=true. Elapsed: 6.054438448s +Feb 12 11:13:05.566: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Running", Reason="", readiness=true. Elapsed: 8.064224568s +Feb 12 11:13:07.576: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Running", Reason="", readiness=true. Elapsed: 10.074188374s +Feb 12 11:13:09.587: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Running", Reason="", readiness=true. Elapsed: 12.085141873s +Feb 12 11:13:11.600: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Running", Reason="", readiness=true. Elapsed: 14.097533514s +Feb 12 11:13:13.618: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Running", Reason="", readiness=true. Elapsed: 16.115854903s +Feb 12 11:13:15.632: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Running", Reason="", readiness=true. Elapsed: 18.130417454s +Feb 12 11:13:17.644: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.142016107s +Feb 12 11:13:19.660: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Running", Reason="", readiness=true. Elapsed: 22.157966418s +Feb 12 11:13:21.673: INFO: Pod "pod-subpath-test-configmap-b577": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.171014881s +STEP: Saw pod success +Feb 12 11:13:21.673: INFO: Pod "pod-subpath-test-configmap-b577" satisfied condition "Succeeded or Failed" +Feb 12 11:13:21.677: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-subpath-test-configmap-b577 container test-container-subpath-configmap-b577: +STEP: delete the pod +Feb 12 11:13:21.706: INFO: Waiting for pod pod-subpath-test-configmap-b577 to disappear +Feb 12 11:13:21.711: INFO: Pod pod-subpath-test-configmap-b577 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-b577 +Feb 12 11:13:21.712: INFO: Deleting pod "pod-subpath-test-configmap-b577" in namespace "subpath-3961" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:13:21.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-3961" for this suite. + +• [SLOW TEST:24.431 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":311,"completed":277,"skipped":4711,"failed":0} +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:13:21.733: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1653 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: creating a ConfigMap +STEP: fetching the ConfigMap +STEP: patching the ConfigMap +STEP: listing all ConfigMaps in all namespaces with a label selector +STEP: deleting the ConfigMap by collection with a label selector +STEP: listing all ConfigMaps in test namespace +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:13:21.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1653" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":311,"completed":278,"skipped":4711,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:13:21.977: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9500 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating secret with name projected-secret-test-b5898b55-a538-4021-a30c-1c0d0a3fa476 +STEP: Creating a pod to test consume secrets +Feb 12 11:13:22.181: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1b8a5e11-e282-4648-addc-41b3e29f9249" in namespace "projected-9500" to be "Succeeded or Failed" +Feb 12 11:13:22.185: INFO: Pod "pod-projected-secrets-1b8a5e11-e282-4648-addc-41b3e29f9249": Phase="Pending", Reason="", readiness=false. Elapsed: 3.869921ms +Feb 12 11:13:24.200: INFO: Pod "pod-projected-secrets-1b8a5e11-e282-4648-addc-41b3e29f9249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018744748s +Feb 12 11:13:26.216: INFO: Pod "pod-projected-secrets-1b8a5e11-e282-4648-addc-41b3e29f9249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034338535s +STEP: Saw pod success +Feb 12 11:13:26.216: INFO: Pod "pod-projected-secrets-1b8a5e11-e282-4648-addc-41b3e29f9249" satisfied condition "Succeeded or Failed" +Feb 12 11:13:26.221: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-projected-secrets-1b8a5e11-e282-4648-addc-41b3e29f9249 container secret-volume-test: +STEP: delete the pod +Feb 12 11:13:26.250: INFO: Waiting for pod pod-projected-secrets-1b8a5e11-e282-4648-addc-41b3e29f9249 to disappear +Feb 12 11:13:26.259: INFO: Pod pod-projected-secrets-1b8a5e11-e282-4648-addc-41b3e29f9249 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:13:26.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9500" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":311,"completed":279,"skipped":4736,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:13:26.275: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4527 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name projected-configmap-test-volume-map-5078b338-9d27-4086-8053-f04fcf944f61 +STEP: Creating a pod to test consume configMaps +Feb 12 11:13:26.472: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-32c1645c-113f-482a-ab44-a2fae7217bbf" in namespace "projected-4527" to be "Succeeded or Failed" +Feb 12 11:13:26.478: INFO: Pod "pod-projected-configmaps-32c1645c-113f-482a-ab44-a2fae7217bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.493711ms +Feb 12 11:13:28.489: INFO: Pod "pod-projected-configmaps-32c1645c-113f-482a-ab44-a2fae7217bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016700633s +Feb 12 11:13:30.505: INFO: Pod "pod-projected-configmaps-32c1645c-113f-482a-ab44-a2fae7217bbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032632658s +STEP: Saw pod success +Feb 12 11:13:30.505: INFO: Pod "pod-projected-configmaps-32c1645c-113f-482a-ab44-a2fae7217bbf" satisfied condition "Succeeded or Failed" +Feb 12 11:13:30.509: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-projected-configmaps-32c1645c-113f-482a-ab44-a2fae7217bbf container agnhost-container: +STEP: delete the pod +Feb 12 11:13:30.540: INFO: Waiting for pod pod-projected-configmaps-32c1645c-113f-482a-ab44-a2fae7217bbf to disappear +Feb 12 11:13:30.546: INFO: Pod pod-projected-configmaps-32c1645c-113f-482a-ab44-a2fae7217bbf no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:13:30.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4527" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":280,"skipped":4751,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:13:30.563: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-4942 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: starting a background goroutine to produce watch events +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:13:35.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-4942" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":311,"completed":281,"skipped":4782,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:13:35.231: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename deployment +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-2382 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:13:35.391: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Feb 12 11:13:35.404: INFO: Pod name sample-pod: Found 0 pods out of 1 +Feb 12 11:13:40.420: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Feb 12 11:13:40.420: INFO: Creating deployment "test-rolling-update-deployment" +Feb 12 11:13:40.430: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Feb 12 
11:13:40.441: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Feb 12 11:13:42.463: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Feb 12 11:13:42.467: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 +Feb 12 11:13:42.479: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2382 cd305a26-b1ed-4a86-9269-49cb4056654c 595514 1 2021-02-12 11:13:40 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-02-12 11:13:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-12 11:13:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005af9e98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum 
availability.,LastUpdateTime:2021-02-12 11:13:40 +0000 UTC,LastTransitionTime:2021-02-12 11:13:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-6b6bf9df46" has successfully progressed.,LastUpdateTime:2021-02-12 11:13:42 +0000 UTC,LastTransitionTime:2021-02-12 11:13:40 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Feb 12 11:13:42.483: INFO: New ReplicaSet "test-rolling-update-deployment-6b6bf9df46" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46 deployment-2382 9fa58e8a-ae34-410b-8e0a-02b6e042533d 595504 1 2021-02-12 11:13:40 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment cd305a26-b1ed-4a86-9269-49cb4056654c 0xc004d08347 0xc004d08348}] [] [{kube-controller-manager Update apps/v1 2021-02-12 11:13:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd305a26-b1ed-4a86-9269-49cb4056654c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 6b6bf9df46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d083d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Feb 12 11:13:42.483: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Feb 12 11:13:42.484: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2382 c94df085-3a5d-4d46-9d60-276d3559c231 595513 2 2021-02-12 
11:13:35 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment cd305a26-b1ed-4a86-9269-49cb4056654c 0xc004d08237 0xc004d08238}] [] [{e2e.test Update apps/v1 2021-02-12 11:13:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-12 11:13:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd305a26-b1ed-4a86-9269-49cb4056654c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd 10.60.253.37/magnum/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004d082d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Feb 12 11:13:42.489: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-q7llw" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-q7llw test-rolling-update-deployment-6b6bf9df46- deployment-2382 8e3d127e-c0e7-4f54-9a91-8c2c1d13a12c 595503 0 2021-02-12 11:13:40 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[cni.projectcalico.org/podIP:10.100.92.242/32 cni.projectcalico.org/podIPs:10.100.92.242/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 9fa58e8a-ae34-410b-8e0a-02b6e042533d 0xc004d088d7 0xc004d088d8}] [] [{kube-controller-manager Update v1 2021-02-12 11:13:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9fa58e8a-ae34-410b-8e0a-02b6e042533d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {calico Update v1 2021-02-12 11:13:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {kubelet Update v1 2021-02-12 11:13:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.100.92.242\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pb476,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pb476,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pb476,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-calico-coreos-yo5lpoxhpdlk-node-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainer
s:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 11:13:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 11:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 11:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-12 11:13:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.0.115,PodIP:10.100.92.242,StartTime:2021-02-12 11:13:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-12 11:13:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:docker://0d575ebfcd5e2bbfff239270d78ca14a2246c32637c2268b59f25e759ca3c641,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.100.92.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:13:42.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2382" for this suite. 
+ +• [SLOW TEST:7.272 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":311,"completed":282,"skipped":4790,"failed":0} +SS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:13:42.507: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-49 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name configmap-test-volume-map-d4e341ad-f44d-4502-a9c3-667931006e88 +STEP: Creating a pod to test consume configMaps +Feb 12 11:13:42.723: INFO: Waiting up to 5m0s for pod "pod-configmaps-5061fb31-9fc2-4db7-badc-0e21b557fd77" in namespace "configmap-49" to be "Succeeded or Failed" +Feb 12 11:13:42.728: INFO: Pod "pod-configmaps-5061fb31-9fc2-4db7-badc-0e21b557fd77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.658041ms +Feb 12 11:13:44.736: INFO: Pod "pod-configmaps-5061fb31-9fc2-4db7-badc-0e21b557fd77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013191973s +STEP: Saw pod success +Feb 12 11:13:44.736: INFO: Pod "pod-configmaps-5061fb31-9fc2-4db7-badc-0e21b557fd77" satisfied condition "Succeeded or Failed" +Feb 12 11:13:44.740: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-configmaps-5061fb31-9fc2-4db7-badc-0e21b557fd77 container agnhost-container: +STEP: delete the pod +Feb 12 11:13:44.768: INFO: Waiting for pod pod-configmaps-5061fb31-9fc2-4db7-badc-0e21b557fd77 to disappear +Feb 12 11:13:44.773: INFO: Pod pod-configmaps-5061fb31-9fc2-4db7-badc-0e21b557fd77 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:13:44.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-49" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":311,"completed":283,"skipped":4792,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:13:44.784: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename subpath +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4528 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating pod pod-subpath-test-configmap-t6sw +STEP: Creating a pod to test atomic-volume-subpath +Feb 12 11:13:44.963: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t6sw" in namespace "subpath-4528" to be "Succeeded or Failed" +Feb 12 11:13:44.969: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Pending", Reason="", readiness=false. Elapsed: 5.023319ms +Feb 12 11:13:47.004: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 2.040537667s +Feb 12 11:13:49.019: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 4.055034903s +Feb 12 11:13:51.033: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 6.069603111s +Feb 12 11:13:53.051: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 8.087476361s +Feb 12 11:13:55.067: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 10.103328918s +Feb 12 11:13:57.107: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 12.143799328s +Feb 12 11:13:59.127: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 14.163771163s +Feb 12 11:14:01.146: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 16.18235399s +Feb 12 11:14:03.163: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 18.19910992s +Feb 12 11:14:05.175: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 20.211863866s +Feb 12 11:14:07.239: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Running", Reason="", readiness=true. Elapsed: 22.27570313s +Feb 12 11:14:09.261: INFO: Pod "pod-subpath-test-configmap-t6sw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.297142022s +STEP: Saw pod success +Feb 12 11:14:09.261: INFO: Pod "pod-subpath-test-configmap-t6sw" satisfied condition "Succeeded or Failed" +Feb 12 11:14:09.269: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-subpath-test-configmap-t6sw container test-container-subpath-configmap-t6sw: +STEP: delete the pod +Feb 12 11:14:09.292: INFO: Waiting for pod pod-subpath-test-configmap-t6sw to disappear +Feb 12 11:14:09.299: INFO: Pod pod-subpath-test-configmap-t6sw no longer exists +STEP: Deleting pod pod-subpath-test-configmap-t6sw +Feb 12 11:14:09.299: INFO: Deleting pod "pod-subpath-test-configmap-t6sw" in namespace "subpath-4528" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:14:09.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4528" for this suite. + +• [SLOW TEST:24.533 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":311,"completed":284,"skipped":4801,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:14:09.318: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-605 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:14:11.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-605" for this suite. +•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":285,"skipped":4835,"failed":0} +S +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:14:11.570: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename resourcequota +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-7777 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a ResourceQuota with terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a long running pod +STEP: Ensuring resource quota with not terminating scope captures the pod usage +STEP: Ensuring resource quota with terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a terminating pod +STEP: Ensuring resource quota with terminating scope captures the pod usage +STEP: Ensuring resource quota with not terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:14:27.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7777" for this suite. + +• [SLOW TEST:16.354 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":311,"completed":286,"skipped":4836,"failed":0} +[k8s.io] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:14:27.930: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename var-expansion +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-8287 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test substitution in container's command +Feb 12 11:14:28.160: INFO: Waiting up to 5m0s for pod "var-expansion-1388dc66-c8b5-42d4-a04a-f77f4063fedb" in namespace "var-expansion-8287" to be "Succeeded or Failed" +Feb 12 11:14:28.167: INFO: Pod "var-expansion-1388dc66-c8b5-42d4-a04a-f77f4063fedb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.126355ms +Feb 12 11:14:30.179: INFO: Pod "var-expansion-1388dc66-c8b5-42d4-a04a-f77f4063fedb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018632524s +STEP: Saw pod success +Feb 12 11:14:30.179: INFO: Pod "var-expansion-1388dc66-c8b5-42d4-a04a-f77f4063fedb" satisfied condition "Succeeded or Failed" +Feb 12 11:14:30.183: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod var-expansion-1388dc66-c8b5-42d4-a04a-f77f4063fedb container dapi-container: +STEP: delete the pod +Feb 12 11:14:30.218: INFO: Waiting for pod var-expansion-1388dc66-c8b5-42d4-a04a-f77f4063fedb to disappear +Feb 12 11:14:30.224: INFO: Pod var-expansion-1388dc66-c8b5-42d4-a04a-f77f4063fedb no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:14:30.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-8287" for this suite. 
+•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":311,"completed":287,"skipped":4836,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:14:30.236: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename dns +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3864 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3864.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3864.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3864.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3864.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3864.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3864.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3864.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3864.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3864.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3864.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Feb 12 11:14:34.470: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.475: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.480: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3864.svc.cluster.local from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.484: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3864.svc.cluster.local from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.488: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.492: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.497: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.501: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.505: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3864.svc.cluster.local from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.508: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3864.svc.cluster.local from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.512: INFO: Unable to read jessie_udp@PodARecord from pod dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.516: INFO: Unable to read jessie_tcp@PodARecord from pod 
dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa: the server could not find the requested resource (get pods dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa) +Feb 12 11:14:34.516: INFO: Lookups using dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3864.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3864.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3864.svc.cluster.local jessie_udp@dns-test-service-2.dns-3864.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3864.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] + +Feb 12 11:14:39.565: INFO: DNS probes using dns-3864/dns-test-1423bf8e-cd5e-4bf0-b4da-03d9595be6fa succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:14:39.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3864" for this suite. + +• [SLOW TEST:9.408 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":311,"completed":288,"skipped":4848,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:14:39.649: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename pods +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-716 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:14:41.870: INFO: Waiting up to 5m0s for pod "client-envvars-03708393-8ebe-4cbc-b303-8a426e96b8a1" in namespace "pods-716" to be "Succeeded or Failed" +Feb 12 11:14:41.881: INFO: Pod "client-envvars-03708393-8ebe-4cbc-b303-8a426e96b8a1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.495721ms +Feb 12 11:14:43.900: INFO: Pod "client-envvars-03708393-8ebe-4cbc-b303-8a426e96b8a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.029351399s +STEP: Saw pod success +Feb 12 11:14:43.900: INFO: Pod "client-envvars-03708393-8ebe-4cbc-b303-8a426e96b8a1" satisfied condition "Succeeded or Failed" +Feb 12 11:14:43.905: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod client-envvars-03708393-8ebe-4cbc-b303-8a426e96b8a1 container env3cont: +STEP: delete the pod +Feb 12 11:14:43.941: INFO: Waiting for pod client-envvars-03708393-8ebe-4cbc-b303-8a426e96b8a1 to disappear +Feb 12 11:14:43.947: INFO: Pod client-envvars-03708393-8ebe-4cbc-b303-8a426e96b8a1 no longer exists +[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:14:43.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-716" for this suite. +•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":311,"completed":289,"skipped":4891,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:14:43.962: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-635 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +W0212 11:15:24.239421 22 metrics_grabber.go:98] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled. +W0212 11:15:24.239882 22 metrics_grabber.go:102] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +W0212 11:15:24.239959 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
+Feb 12 11:15:24.240: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Feb 12 11:15:24.240: INFO: Deleting pod "simpletest.rc-6fwj4" in namespace "gc-635" +Feb 12 11:15:24.261: INFO: Deleting pod "simpletest.rc-6l77x" in namespace "gc-635" +Feb 12 11:15:24.283: INFO: Deleting pod "simpletest.rc-96qct" in namespace "gc-635" +Feb 12 11:15:24.297: INFO: Deleting pod "simpletest.rc-9mdt2" in namespace "gc-635" +Feb 12 11:15:24.311: INFO: Deleting pod "simpletest.rc-brjdm" in namespace "gc-635" +Feb 12 11:15:24.340: INFO: Deleting pod "simpletest.rc-fmftw" in namespace "gc-635" +Feb 12 11:15:24.375: INFO: Deleting pod "simpletest.rc-q9jhv" in namespace "gc-635" +Feb 12 11:15:24.391: INFO: Deleting pod "simpletest.rc-rmtkq" in namespace "gc-635" +Feb 12 11:15:24.432: INFO: Deleting pod "simpletest.rc-x9sqt" in namespace "gc-635" +Feb 12 11:15:24.447: INFO: Deleting pod "simpletest.rc-zb29l" in namespace "gc-635" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:15:24.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-635" for this suite. 
+ +• [SLOW TEST:40.535 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":311,"completed":290,"skipped":4984,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:15:24.500: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4955 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[It] should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Starting the proxy +Feb 12 11:15:24.683: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-4955 proxy --unix-socket=/tmp/kubectl-proxy-unix410867584/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:15:24.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4955" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":311,"completed":291,"skipped":5004,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:15:24.840: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4798 +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-2ba92a14-8d8c-414b-8b76-de2ce7adc6e1 +STEP: Creating the pod +STEP: Updating configmap projected-configmap-test-upd-2ba92a14-8d8c-414b-8b76-de2ce7adc6e1 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:16:43.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4798" for this suite. + +• [SLOW TEST:78.926 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":292,"skipped":5013,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:16:43.767: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-6690 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 +[It] should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:16:43.966: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. 
+Feb 12 11:16:43.977: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:43.977: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. +Feb 12 11:16:44.007: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:44.007: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:45.013: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:45.013: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:46.016: INFO: Number of nodes with available pods: 1 +Feb 12 11:16:46.016: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Feb 12 11:16:46.050: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:46.055: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Feb 12 11:16:46.068: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:46.068: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:47.077: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:47.077: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:48.077: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:48.077: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:49.075: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:49.075: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:50.078: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:50.078: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:51.079: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:51.079: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:52.080: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:52.080: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:53.072: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:53.072: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:54.079: INFO: Number of nodes with available pods: 0 +Feb 12 11:16:54.079: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:16:55.082: INFO: Number of nodes with available pods: 1 +Feb 12 11:16:55.082: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6690, will wait for the garbage collector to delete the pods +Feb 12 11:16:55.161: INFO: Deleting DaemonSet.extensions daemon-set took: 12.067154ms +Feb 12 11:16:56.162: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000641895s +Feb 12 11:17:02.574: INFO: Number of nodes with available pods: 0 +Feb 12 11:17:02.574: INFO: Number of running nodes: 0, number of available pods: 0 +Feb 12 11:17:02.578: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"596730"},"items":null} + +Feb 12 11:17:02.580: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"596730"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:17:02.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6690" for this suite. + +• [SLOW TEST:18.864 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":311,"completed":293,"skipped":5030,"failed":0} +SS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:17:02.634: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename sched-pred +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-8273 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 +Feb 12 11:17:02.820: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Feb 12 11:17:02.832: INFO: Waiting for terminating namespaces to be deleted... 
+Feb 12 11:17:02.838: INFO: +Logging pods the apiserver thinks is on node k8s-calico-coreos-yo5lpoxhpdlk-node-0 before test +Feb 12 11:17:02.850: INFO: calico-node-kwp5z from kube-system started at 2021-02-09 10:10:52 +0000 UTC (1 container statuses recorded) +Feb 12 11:17:02.850: INFO: Container calico-node ready: true, restart count 0 +Feb 12 11:17:02.851: INFO: csi-cinder-nodeplugin-dlptl from kube-system started at 2021-02-09 10:11:22 +0000 UTC (2 container statuses recorded) +Feb 12 11:17:02.851: INFO: Container cinder-csi-plugin ready: true, restart count 0 +Feb 12 11:17:02.851: INFO: Container node-driver-registrar ready: true, restart count 0 +Feb 12 11:17:02.851: INFO: kube-dns-autoscaler-69ccc7c7c7-qwdlm from kube-system started at 2021-02-09 13:00:16 +0000 UTC (1 container statuses recorded) +Feb 12 11:17:02.851: INFO: Container autoscaler ready: true, restart count 0 +Feb 12 11:17:02.851: INFO: magnum-metrics-server-5c48f677d9-9t4sh from kube-system started at 2021-02-09 13:00:16 +0000 UTC (1 container statuses recorded) +Feb 12 11:17:02.851: INFO: Container metrics-server ready: true, restart count 0 +Feb 12 11:17:02.851: INFO: npd-ktpl7 from kube-system started at 2021-02-09 10:11:22 +0000 UTC (1 container statuses recorded) +Feb 12 11:17:02.851: INFO: Container node-problem-detector ready: true, restart count 0 +Feb 12 11:17:02.851: INFO: sonobuoy-e2e-job-49d5db3cb7e540b0 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 11:17:02.851: INFO: Container e2e ready: true, restart count 0 +Feb 12 11:17:02.851: INFO: Container sonobuoy-worker ready: true, restart count 0 +Feb 12 11:17:02.851: INFO: sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-vsns8 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 11:17:02.851: INFO: Container sonobuoy-worker ready: false, restart count 14 +Feb 12 11:17:02.851: INFO: Container systemd-logs ready: true, restart count 0 +Feb 12 11:17:02.851: INFO: +Logging pods the apiserver thinks is on node k8s-calico-coreos-yo5lpoxhpdlk-node-1 before test +Feb 12 11:17:02.860: INFO: calico-node-xf85t from kube-system started at 2021-02-09 10:10:53 +0000 UTC (1 container statuses recorded) +Feb 12 11:17:02.860: INFO: Container calico-node ready: true, restart count 0 +Feb 12 11:17:02.860: INFO: csi-cinder-nodeplugin-pgnxp from kube-system started at 2021-02-12 09:40:03 +0000 UTC (2 container statuses recorded) +Feb 12 11:17:02.860: INFO: Container cinder-csi-plugin ready: true, restart count 0 +Feb 12 11:17:02.860: INFO: Container node-driver-registrar ready: true, restart count 0 +Feb 12 11:17:02.860: INFO: npd-6phx9 from kube-system started at 2021-02-09 10:11:14 +0000 UTC (1 container statuses recorded) +Feb 12 11:17:02.861: INFO: Container node-problem-detector ready: true, restart count 0 +Feb 12 11:17:02.861: INFO: sonobuoy from sonobuoy started at 2021-02-12 09:27:24 +0000 UTC (1 container statuses recorded) +Feb 12 11:17:02.861: INFO: Container kube-sonobuoy ready: true, restart count 0 +Feb 12 11:17:02.861: INFO: sonobuoy-systemd-logs-daemon-set-6ce695d673744b16-ns8g5 from sonobuoy started at 2021-02-12 09:27:26 +0000 UTC (2 container statuses recorded) +Feb 12 11:17:02.861: INFO: Container sonobuoy-worker ready: false, restart count 14 +Feb 12 11:17:02.861: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-c0b9a722-f653-4a51-ab9f-2c7909dabfe5 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-c0b9a722-f653-4a51-ab9f-2c7909dabfe5 off the node k8s-calico-coreos-yo5lpoxhpdlk-node-1 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-c0b9a722-f653-4a51-ab9f-2c7909dabfe5 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:17:10.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-8273" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 + +• [SLOW TEST:8.381 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":311,"completed":294,"skipped":5032,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:17:11.027: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5782 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating secret with name secret-test-fb392ba0-cb4d-48aa-9b74-33a2f792ad1b +STEP: Creating a pod to test consume secrets +Feb 12 11:17:11.214: INFO: Waiting up to 5m0s for pod "pod-secrets-56e0895a-0d79-4b91-871d-1ab200e9fba5" in namespace "secrets-5782" to be "Succeeded or Failed" +Feb 12 11:17:11.220: INFO: Pod "pod-secrets-56e0895a-0d79-4b91-871d-1ab200e9fba5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.407385ms +Feb 12 11:17:13.233: INFO: Pod "pod-secrets-56e0895a-0d79-4b91-871d-1ab200e9fba5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018463319s +Feb 12 11:17:15.246: INFO: Pod "pod-secrets-56e0895a-0d79-4b91-871d-1ab200e9fba5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032116348s +STEP: Saw pod success +Feb 12 11:17:15.246: INFO: Pod "pod-secrets-56e0895a-0d79-4b91-871d-1ab200e9fba5" satisfied condition "Succeeded or Failed" +Feb 12 11:17:15.250: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-secrets-56e0895a-0d79-4b91-871d-1ab200e9fba5 container secret-volume-test: +STEP: delete the pod +Feb 12 11:17:15.275: INFO: Waiting for pod pod-secrets-56e0895a-0d79-4b91-871d-1ab200e9fba5 to disappear +Feb 12 11:17:15.284: INFO: Pod pod-secrets-56e0895a-0d79-4b91-871d-1ab200e9fba5 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:17:15.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5782" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":295,"skipped":5098,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:17:15.302: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename container-probe +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-6601 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:17:15.512: INFO: The status of Pod test-webserver-56e3bfab-620d-4839-8cbc-f6c0898a251d is Pending, waiting for it to be Running (with Ready = true) +Feb 12 11:17:17.526: INFO: The status of Pod test-webserver-56e3bfab-620d-4839-8cbc-f6c0898a251d is Pending, waiting for it to be Running (with Ready = true) +Feb 12 11:17:19.524: INFO: The status of Pod test-webserver-56e3bfab-620d-4839-8cbc-f6c0898a251d is Running (Ready = false) +Feb 12 11:17:21.524: INFO: The status of Pod test-webserver-56e3bfab-620d-4839-8cbc-f6c0898a251d is Running (Ready = false) +Feb 12 11:17:23.522: INFO: The status of Pod test-webserver-56e3bfab-620d-4839-8cbc-f6c0898a251d is Running (Ready = false) +Feb 12 11:17:25.525: INFO: The status of Pod test-webserver-56e3bfab-620d-4839-8cbc-f6c0898a251d is Running (Ready = false) +Feb 12 11:17:27.527: INFO: The status of Pod test-webserver-56e3bfab-620d-4839-8cbc-f6c0898a251d is Running (Ready = false) +Feb 12 11:17:29.520: INFO: The status of Pod test-webserver-56e3bfab-620d-4839-8cbc-f6c0898a251d is Running (Ready = false) +Feb 12 11:17:31.525: INFO: The status of Pod test-webserver-56e3bfab-620d-4839-8cbc-f6c0898a251d is Running (Ready = false) +Feb 12 11:17:33.525: INFO: The status of Pod test-webserver-56e3bfab-620d-4839-8cbc-f6c0898a251d is Running 
(Ready = true) +Feb 12 11:17:33.529: INFO: Container started at 2021-02-12 11:17:16 +0000 UTC, pod became ready at 2021-02-12 11:17:33 +0000 UTC +[AfterEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:17:33.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-6601" for this suite. + +• [SLOW TEST:18.242 seconds] +[k8s.io] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":311,"completed":296,"skipped":5118,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:17:33.545: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubectl +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2967 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 +[BeforeEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 +STEP: creating the pod +Feb 12 11:17:33.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2967 create -f -' +Feb 12 11:17:34.378: INFO: stderr: "" +Feb 12 11:17:34.378: INFO: stdout: "pod/pause created\n" +Feb 12 11:17:34.378: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Feb 12 11:17:34.378: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2967" to be "running and ready" +Feb 12 11:17:34.385: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.140864ms +Feb 12 11:17:36.395: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.017606934s +Feb 12 11:17:36.396: INFO: Pod "pause" satisfied condition "running and ready" +Feb 12 11:17:36.396: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: adding the label testing-label with value testing-label-value to a pod +Feb 12 11:17:36.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2967 label pods pause testing-label=testing-label-value' +Feb 12 11:17:36.565: INFO: stderr: "" +Feb 12 11:17:36.565: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Feb 12 11:17:36.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2967 get pod pause -L testing-label' +Feb 12 11:17:36.737: INFO: stderr: "" +Feb 12 11:17:36.737: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" +STEP: removing the label testing-label of a pod +Feb 12 11:17:36.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2967 label pods pause testing-label-' +Feb 12 11:17:36.914: INFO: stderr: "" +Feb 12 11:17:36.914: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Feb 12 11:17:36.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2967 get pod pause -L testing-label' +Feb 12 11:17:37.077: INFO: stderr: "" +Feb 12 11:17:37.077: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" +[AfterEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1320 +STEP: using delete to clean up resources +Feb 12 11:17:37.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2967 delete --grace-period=0 --force -f -' +Feb 12 11:17:37.266: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Feb 12 11:17:37.266: INFO: stdout: "pod \"pause\" force deleted\n" +Feb 12 11:17:37.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2967 get rc,svc -l name=pause --no-headers' +Feb 12 11:17:37.487: INFO: stderr: "No resources found in kubectl-2967 namespace.\n" +Feb 12 11:17:37.487: INFO: stdout: "" +Feb 12 11:17:37.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=kubectl-2967 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Feb 12 11:17:37.651: INFO: stderr: "" +Feb 12 11:17:37.651: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:17:37.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2967" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":311,"completed":297,"skipped":5144,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:17:37.673: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename configmap +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4305 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating configMap with name configmap-test-volume-91c9d52e-50bf-4eba-9d5f-fb5b56d1c777 +STEP: Creating a pod to test consume configMaps +Feb 12 11:17:37.868: INFO: Waiting up to 5m0s for pod "pod-configmaps-313bbc6a-2d0a-49d1-9bd1-9f813804e17d" in namespace "configmap-4305" to be "Succeeded or Failed" +Feb 12 11:17:37.873: INFO: Pod "pod-configmaps-313bbc6a-2d0a-49d1-9bd1-9f813804e17d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494075ms +Feb 12 11:17:39.882: INFO: Pod "pod-configmaps-313bbc6a-2d0a-49d1-9bd1-9f813804e17d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012911487s +Feb 12 11:17:41.908: INFO: Pod "pod-configmaps-313bbc6a-2d0a-49d1-9bd1-9f813804e17d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039265448s +STEP: Saw pod success +Feb 12 11:17:41.908: INFO: Pod "pod-configmaps-313bbc6a-2d0a-49d1-9bd1-9f813804e17d" satisfied condition "Succeeded or Failed" +Feb 12 11:17:41.913: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-0 pod pod-configmaps-313bbc6a-2d0a-49d1-9bd1-9f813804e17d container agnhost-container: +STEP: delete the pod +Feb 12 11:17:42.005: INFO: Waiting for pod pod-configmaps-313bbc6a-2d0a-49d1-9bd1-9f813804e17d to disappear +Feb 12 11:17:42.014: INFO: Pod pod-configmaps-313bbc6a-2d0a-49d1-9bd1-9f813804e17d no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:17:42.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4305" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":298,"skipped":5179,"failed":0} +SSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:17:42.026: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename daemonsets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-214 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 +[It] should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Feb 12 11:17:42.225: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:42.229: INFO: Number of nodes with available pods: 0 +Feb 12 11:17:42.229: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:17:43.238: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:43.241: INFO: Number of nodes with available pods: 0 +Feb 12 11:17:43.241: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:17:44.243: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:44.249: INFO: Number of nodes with available pods: 0 +Feb 12 11:17:44.249: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-0 is running more than one daemon pod +Feb 12 11:17:45.241: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:45.245: INFO: Number of nodes with available pods: 2 +Feb 12 11:17:45.245: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Feb 12 11:17:45.267: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:45.269: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:45.269: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:46.287: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:46.291: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:46.291: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:47.278: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:47.282: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:47.282: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:48.281: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:48.285: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:48.285: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:49.278: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:49.283: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:49.283: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:50.291: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:50.295: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:50.295: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:51.283: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:51.289: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:51.289: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:52.279: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:52.284: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:52.284: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:53.283: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:53.288: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:53.288: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:54.280: INFO: DaemonSet pods can't 
tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:54.284: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:54.284: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:55.279: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:55.284: INFO: Number of nodes with available pods: 1 +Feb 12 11:17:55.284: INFO: Node k8s-calico-coreos-yo5lpoxhpdlk-node-1 is running more than one daemon pod +Feb 12 11:17:56.279: INFO: DaemonSet pods can't tolerate node k8s-calico-coreos-yo5lpoxhpdlk-master-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Feb 12 11:17:56.284: INFO: Number of nodes with available pods: 2 +Feb 12 11:17:56.284: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-214, will wait for the garbage collector to delete the pods +Feb 12 11:17:56.355: INFO: Deleting DaemonSet.extensions daemon-set took: 12.420272ms +Feb 12 11:17:57.355: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000238656s +Feb 12 11:18:12.569: INFO: Number of nodes with available pods: 0 +Feb 12 11:18:12.569: INFO: Number of running nodes: 0, number of available pods: 0 +Feb 12 11:18:12.572: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"597179"},"items":null} + +Feb 12 11:18:12.575: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"597179"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:18:12.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-214" for this suite. 
+ +• [SLOW TEST:30.583 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":311,"completed":299,"skipped":5185,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:18:12.616: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename gc +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-9915 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:18:12.810: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6d781fe0-999a-4bbc-bde3-5dc96b1ea307", Controller:(*bool)(0xc0044577da), BlockOwnerDeletion:(*bool)(0xc0044577db)}} +Feb 12 11:18:12.826: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"149e0bc5-81cf-4222-8d25-70b5c533752c", Controller:(*bool)(0xc005af95ba), BlockOwnerDeletion:(*bool)(0xc005af95bb)}} +Feb 12 11:18:12.837: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ca69018e-fd35-4d00-a8b0-590a6f50b7cd", Controller:(*bool)(0xc004489ac2), BlockOwnerDeletion:(*bool)(0xc004489ac3)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:18:17.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-9915" for this suite. 
+ +• [SLOW TEST:5.258 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":311,"completed":300,"skipped":5209,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:18:17.874: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2373 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating projection with secret that has name projected-secret-test-69dde38d-e113-4e50-8557-b56ca24663f2 +STEP: Creating a pod to test consume secrets +Feb 12 11:18:18.072: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0523ea42-a67b-4318-b0ae-628122cc2f83" in namespace "projected-2373" to be "Succeeded or Failed" +Feb 12 11:18:18.079: INFO: Pod "pod-projected-secrets-0523ea42-a67b-4318-b0ae-628122cc2f83": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580788ms +Feb 12 11:18:20.089: INFO: Pod "pod-projected-secrets-0523ea42-a67b-4318-b0ae-628122cc2f83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017055312s +Feb 12 11:18:22.094: INFO: Pod "pod-projected-secrets-0523ea42-a67b-4318-b0ae-628122cc2f83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022180577s +STEP: Saw pod success +Feb 12 11:18:22.094: INFO: Pod "pod-projected-secrets-0523ea42-a67b-4318-b0ae-628122cc2f83" satisfied condition "Succeeded or Failed" +Feb 12 11:18:22.098: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod pod-projected-secrets-0523ea42-a67b-4318-b0ae-628122cc2f83 container projected-secret-volume-test: +STEP: delete the pod +Feb 12 11:18:22.206: INFO: Waiting for pod pod-projected-secrets-0523ea42-a67b-4318-b0ae-628122cc2f83 to disappear +Feb 12 11:18:22.211: INFO: Pod pod-projected-secrets-0523ea42-a67b-4318-b0ae-628122cc2f83 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:18:22.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2373" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":301,"skipped":5214,"failed":0} +SSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:18:22.223: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename downward-api +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3719 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward api env vars +Feb 12 11:18:22.403: INFO: Waiting up to 5m0s for pod "downward-api-c118f5df-d5e5-4acc-a4a8-e8b49fcc2107" in namespace "downward-api-3719" to be "Succeeded or Failed" +Feb 12 11:18:22.408: INFO: Pod "downward-api-c118f5df-d5e5-4acc-a4a8-e8b49fcc2107": Phase="Pending", Reason="", readiness=false. Elapsed: 5.215959ms +Feb 12 11:18:24.422: INFO: Pod "downward-api-c118f5df-d5e5-4acc-a4a8-e8b49fcc2107": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019101013s +STEP: Saw pod success +Feb 12 11:18:24.422: INFO: Pod "downward-api-c118f5df-d5e5-4acc-a4a8-e8b49fcc2107" satisfied condition "Succeeded or Failed" +Feb 12 11:18:24.426: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downward-api-c118f5df-d5e5-4acc-a4a8-e8b49fcc2107 container dapi-container: +STEP: delete the pod +Feb 12 11:18:24.457: INFO: Waiting for pod downward-api-c118f5df-d5e5-4acc-a4a8-e8b49fcc2107 to disappear +Feb 12 11:18:24.461: INFO: Pod downward-api-c118f5df-d5e5-4acc-a4a8-e8b49fcc2107 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:18:24.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3719" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":311,"completed":302,"skipped":5218,"failed":0} + +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:18:24.472: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename statefulset +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-2829 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 +STEP: Creating service test in namespace statefulset-2829 +[It] Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-2829 +STEP: Creating statefulset with conflicting port in namespace statefulset-2829 +STEP: Waiting until pod test-pod will start running in namespace statefulset-2829 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2829 +Feb 12 11:18:28.695: INFO: Observed stateful pod in namespace: statefulset-2829, name: ss-0, uid: 05fef78f-479e-4192-b721-9010e882cb95, status phase: Pending. Waiting for statefulset controller to delete. +Feb 12 11:18:29.274: INFO: Observed stateful pod in namespace: statefulset-2829, name: ss-0, uid: 05fef78f-479e-4192-b721-9010e882cb95, status phase: Failed. Waiting for statefulset controller to delete. +Feb 12 11:18:29.281: INFO: Observed stateful pod in namespace: statefulset-2829, name: ss-0, uid: 05fef78f-479e-4192-b721-9010e882cb95, status phase: Failed. Waiting for statefulset controller to delete. 
+Feb 12 11:18:29.286: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2829 +STEP: Removing pod with conflicting port in namespace statefulset-2829 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2829 and will be in running state +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 +Feb 12 11:18:33.356: INFO: Deleting all statefulset in ns statefulset-2829 +Feb 12 11:18:33.361: INFO: Scaling statefulset ss to 0 +Feb 12 11:18:53.408: INFO: Waiting for statefulset status.replicas updated to 0 +Feb 12 11:18:53.412: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:18:53.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-2829" for this suite. + +• [SLOW TEST:28.980 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":311,"completed":303,"skipped":5218,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:18:53.457: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename kubelet-test +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9070 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[AfterEach] [k8s.io] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:18:53.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-9070" for this suite. 
+•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":311,"completed":304,"skipped":5231,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Secrets + should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:18:53.670: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename secrets +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1784 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating projection with secret that has name secret-emptykey-test-17b43f6c-5cf9-4dfc-a71c-765c3fdd9693 +[AfterEach] [sig-api-machinery] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:18:53.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1784" for this suite. +•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":311,"completed":305,"skipped":5241,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:18:53.845: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-watch +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-watch-3975 +STEP: Waiting for a default service account to be provisioned in namespace +[It] watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:18:54.016: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Creating first CR +Feb 12 11:18:54.636: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-12T11:18:54Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-12T11:18:54Z]] name:name1 resourceVersion:597570 uid:c588cf95-5b0e-462c-bec8-e5fefc111899] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR +Feb 12 11:19:04.659: INFO: Got : ADDED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-12T11:19:04Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-12T11:19:04Z]] name:name2 resourceVersion:597641 uid:bad35901-6613-4911-98f3-9d7960fbb927] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR +Feb 12 11:19:14.686: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-12T11:18:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-12T11:19:14Z]] name:name1 resourceVersion:597662 uid:c588cf95-5b0e-462c-bec8-e5fefc111899] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR +Feb 12 11:19:24.707: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-12T11:19:04Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-12T11:19:24Z]] name:name2 resourceVersion:597684 uid:bad35901-6613-4911-98f3-9d7960fbb927] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR +Feb 12 11:19:34.739: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-12T11:18:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-12T11:19:14Z]] name:name1 resourceVersion:597710 uid:c588cf95-5b0e-462c-bec8-e5fefc111899] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR +Feb 12 11:19:44.754: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-12T11:19:04Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-12T11:19:24Z]] name:name2 resourceVersion:597737 uid:bad35901-6613-4911-98f3-9d7960fbb927] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:19:55.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-3975" for this suite. 
+ +• [SLOW TEST:61.476 seconds] +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + CustomResourceDefinition Watch + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":311,"completed":306,"skipped":5261,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:19:55.323: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename webhook +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3401 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Feb 12 11:19:56.759: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Feb 12 11:19:58.781: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748725596, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748725596, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748725596, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748725596, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Feb 12 11:20:01.812: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: 
Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API +STEP: create a namespace for the webhook +STEP: create a configmap should be unconditionally rejected by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:20:01.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3401" for this suite. +STEP: Destroying namespace "webhook-3401-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 + +• [SLOW TEST:6.661 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":311,"completed":307,"skipped":5298,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:20:01.984: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename projected +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5069 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a pod to test downward API volume plugin +Feb 12 11:20:02.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72c66a1e-5713-42d6-abcc-97b8d8f9e4b4" in namespace "projected-5069" to be "Succeeded or Failed" +Feb 12 11:20:02.192: INFO: Pod "downwardapi-volume-72c66a1e-5713-42d6-abcc-97b8d8f9e4b4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.355675ms +Feb 12 11:20:04.207: INFO: Pod "downwardapi-volume-72c66a1e-5713-42d6-abcc-97b8d8f9e4b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029685578s +Feb 12 11:20:06.221: INFO: Pod "downwardapi-volume-72c66a1e-5713-42d6-abcc-97b8d8f9e4b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044579952s +STEP: Saw pod success +Feb 12 11:20:06.222: INFO: Pod "downwardapi-volume-72c66a1e-5713-42d6-abcc-97b8d8f9e4b4" satisfied condition "Succeeded or Failed" +Feb 12 11:20:06.226: INFO: Trying to get logs from node k8s-calico-coreos-yo5lpoxhpdlk-node-1 pod downwardapi-volume-72c66a1e-5713-42d6-abcc-97b8d8f9e4b4 container client-container: +STEP: delete the pod +Feb 12 11:20:06.308: INFO: Waiting for pod downwardapi-volume-72c66a1e-5713-42d6-abcc-97b8d8f9e4b4 to disappear +Feb 12 11:20:06.317: INFO: Pod downwardapi-volume-72c66a1e-5713-42d6-abcc-97b8d8f9e4b4 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:20:06.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5069" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":311,"completed":308,"skipped":5306,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:20:06.329: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename services +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4040 +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 +[It] should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: fetching services +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:20:06.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4040" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":311,"completed":309,"skipped":5318,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:20:06.508: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-55 +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +Feb 12 11:20:06.672: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Feb 12 11:20:11.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-55 --namespace=crd-publish-openapi-55 create -f -' +Feb 12 11:20:12.139: INFO: stderr: "" +Feb 12 11:20:12.139: INFO: stdout: "e2e-test-crd-publish-openapi-6779-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Feb 12 11:20:12.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-55 --namespace=crd-publish-openapi-55 delete e2e-test-crd-publish-openapi-6779-crds test-cr' +Feb 12 11:20:12.312: INFO: stderr: "" +Feb 12 11:20:12.312: INFO: stdout: "e2e-test-crd-publish-openapi-6779-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Feb 12 11:20:12.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-55 --namespace=crd-publish-openapi-55 apply -f -' +Feb 12 11:20:12.681: INFO: stderr: "" +Feb 12 11:20:12.681: INFO: stdout: "e2e-test-crd-publish-openapi-6779-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Feb 12 11:20:12.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-55 --namespace=crd-publish-openapi-55 delete e2e-test-crd-publish-openapi-6779-crds test-cr' +Feb 12 11:20:12.825: INFO: stderr: "" +Feb 12 11:20:12.825: INFO: stdout: "e2e-test-crd-publish-openapi-6779-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Feb 12 11:20:12.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-585161109 --namespace=crd-publish-openapi-55 explain e2e-test-crd-publish-openapi-6779-crds' +Feb 12 11:20:13.148: INFO: stderr: "" +Feb 12 11:20:13.148: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6779-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" 
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:20:17.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-55" for this suite. + +• [SLOW TEST:11.458 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":311,"completed":310,"skipped":5324,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 +STEP: Creating a kubernetes client +Feb 12 11:20:17.967: INFO: >>> kubeConfig: /tmp/kubeconfig-585161109 +STEP: Building a namespace api object, basename job +STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-7637 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Feb 12 11:20:22.680: INFO: Successfully updated pod "adopt-release-mtc8j" +STEP: Checking that the Job readopts the Pod +Feb 12 11:20:22.680: INFO: Waiting up to 15m0s for pod "adopt-release-mtc8j" in namespace "job-7637" to be "adopted" +Feb 12 11:20:22.686: INFO: Pod "adopt-release-mtc8j": Phase="Running", Reason="", readiness=true. Elapsed: 5.648744ms +Feb 12 11:20:24.699: INFO: Pod "adopt-release-mtc8j": Phase="Running", Reason="", readiness=true. Elapsed: 2.018629242s +Feb 12 11:20:24.699: INFO: Pod "adopt-release-mtc8j" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Feb 12 11:20:25.223: INFO: Successfully updated pod "adopt-release-mtc8j" +STEP: Checking that the Job releases the Pod +Feb 12 11:20:25.223: INFO: Waiting up to 15m0s for pod "adopt-release-mtc8j" in namespace "job-7637" to be "released" +Feb 12 11:20:25.229: INFO: Pod "adopt-release-mtc8j": Phase="Running", Reason="", readiness=true. Elapsed: 5.813018ms +Feb 12 11:20:27.242: INFO: Pod "adopt-release-mtc8j": Phase="Running", Reason="", readiness=true. Elapsed: 2.019259904s +Feb 12 11:20:27.242: INFO: Pod "adopt-release-mtc8j" satisfied condition "released" +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 +Feb 12 11:20:27.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-7637" for this suite. 
+ +• [SLOW TEST:9.291 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 +------------------------------ +{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":311,"completed":311,"skipped":5356,"failed":0} +Feb 12 11:20:27.258: INFO: Running AfterSuite actions on all nodes +Feb 12 11:20:27.258: INFO: Running AfterSuite actions on node 1 +Feb 12 11:20:27.258: INFO: Skipping dumping logs from cluster + +JUnit report was created: /tmp/results/junit_01.xml +{"msg":"Test Suite completed","total":311,"completed":311,"skipped":5356,"failed":0} + +Ran 311 of 5667 Specs in 6775.920 seconds +SUCCESS! -- 311 Passed | 0 Failed | 0 Pending | 5356 Skipped +PASS + +Ginkgo ran 1 suite in 1h52m58.14249402s +Test Suite Passed diff --git a/v1.20/openstack-magnum/junit_01.xml b/v1.20/openstack-magnum/junit_01.xml new file mode 100644 index 0000000000..d3643f6cba --- /dev/null +++ b/v1.20/openstack-magnum/junit_01.xml @@ -0,0 +1,16382 @@
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file