diff --git a/v1.18/nexastack/PRODUCT.yaml b/v1.18/nexastack/PRODUCT.yaml new file mode 100644 index 00000000000..208e375f597 --- /dev/null +++ b/v1.18/nexastack/PRODUCT.yaml @@ -0,0 +1,9 @@ +vendor: Xenonstack Inc +name: Nexastack Managed Kubernetes +version: v1.0.0 +website_url: https://www.nexastack.com +documentation_url: https://www.nexastack.com +product_logo_url: https://drive.google.com/file/d/1lrczU4HFXJS2JqLc3TMnmF9H8UpRm-R-/view?usp=sharing +type: hosted platform +description: Nexastack is a Platform for the Automation of Cloud Native and IaC Tools for Effective +Infrastructure Management diff --git a/v1.18/nexastack/Please_DocuSign_Certified_Kubernetes_Form.pd.pdf b/v1.18/nexastack/Please_DocuSign_Certified_Kubernetes_Form.pd.pdf new file mode 100644 index 00000000000..803ca9ed017 Binary files /dev/null and b/v1.18/nexastack/Please_DocuSign_Certified_Kubernetes_Form.pd.pdf differ diff --git a/v1.18/nexastack/README.md b/v1.18/nexastack/README.md new file mode 100644 index 00000000000..305cabf4b5e --- /dev/null +++ b/v1.18/nexastack/README.md @@ -0,0 +1,9 @@ +# How to reproduce + +## 1. Create your account on the Nexastack platform + +Open [Nexastack](https://www.nexastack.com) and apply for an invite. (Currently we support registrations by invitation only.) Once you receive the invite, complete the onboarding process by following the instructions that come with it. + +Nexastack is an Infrastructure as Code platform that lets you install applications on managed Kubernetes. With Nexastack you manage applications while we manage Kubernetes and the underlying infrastructure. + + +## 2. Create a new cluster and download kubeconfig +On the main navigation panel, go to your Project, choose Clusters, and select Nexastack managed clusters. Here you will find the cluster managed by Nexastack for you. + +Request the kubeconfig using the cluster's action button. + + +## 3. Access the cluster +To access the cluster, export the kubeconfig file downloaded in step 2, for example: +``` +export KUBECONFIG=~/cncf/kubeconfig +``` +Now you're authorized to interact with your cluster: +``` +kubectl get pods --all-namespaces +``` + +## 4. Run the tests +Download a [binary release](https://github.com/heptio/sonobuoy/releases) of the CLI, or build it yourself by running: + +``` +$ go get -u -v github.com/heptio/sonobuoy +``` + +Deploy a Sonobuoy pod to your cluster and instruct it to ignore master taints: + +``` +$ sonobuoy run --plugin-env=e2e.E2E_EXTRA_ARGS="--non-blocking-taints=CriticalAddonsOnly,dedicated" --mode=certified-conformance +``` + +View the status of the run: + +``` +$ sonobuoy status +``` + +To inspect the logs: + +``` +$ sonobuoy logs +``` + +Once `sonobuoy status` shows the run as `completed`, copy the output directory from the main Sonobuoy pod to +a local directory: + +``` +$ sonobuoy retrieve . +``` + +This copies a single `.tar.gz` snapshot from the Sonobuoy pod into your local `.` directory.
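 + +(Optional) Before unpacking, you can print a quick pass/fail summary straight from the snapshot; this assumes your Sonobuoy CLI is recent enough to include the `results` subcommand: + +``` +$ sonobuoy results *.tar.gz +``` +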
Extract the contents into `./results` with: + +``` +mkdir ./results; tar xzf *.tar.gz -C ./results +``` + +To clean up Kubernetes objects created by Sonobuoy, run: + +``` +sonobuoy delete +``` diff --git a/v1.18/nexastack/e2e.log b/v1.18/nexastack/e2e.log new file mode 100644 index 00000000000..f99577a37f5 --- /dev/null +++ b/v1.18/nexastack/e2e.log @@ -0,0 +1,11595 @@ +I0110 17:09:15.619862 24 test_context.go:410] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-870154433 +I0110 17:09:15.619893 24 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready +I0110 17:09:15.619998 24 e2e.go:124] Starting e2e run "a47a8f7c-5beb-457c-b078-b4f21c489d75" on Ginkgo node 1 +{"msg":"Test Suite starting","total":277,"completed":0,"skipped":0,"failed":0} +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1610298554 - Will randomize all specs +Will run 277 of 4994 specs + +Jan 10 17:09:15.637: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:09:15.640: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Jan 10 17:09:15.655: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready +Jan 10 17:09:15.706: INFO: 35 / 69 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Jan 10 17:09:15.706: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready. +Jan 10 17:09:15.706: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Jan 10 17:09:15.714: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Jan 10 17:09:15.714: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kops-controller' (0 seconds elapsed) +Jan 10 17:09:15.714: INFO: e2e test version: v1.18.14 +Jan 10 17:09:15.715: INFO: kube-apiserver version: v1.18.14 +Jan 10 17:09:15.715: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:09:15.719: INFO: Cluster IP family: ipv4 +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:09:15.719: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename resourcequota +Jan 10 17:09:15.741: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Ensuring resource quota status captures service creation +STEP: Deleting a Service +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:09:26.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9390" for this suite. + +• [SLOW TEST:11.074 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":277,"completed":1,"skipped":12,"failed":0} +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:09:26.793: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating secret with name secret-test-map-6fe6cea0-418b-44a1-9a4c-23da72d5784a +STEP: Creating a pod to test consume secrets +Jan 10 17:09:26.825: INFO: Waiting up to 5m0s for pod "pod-secrets-d5445d9d-f3fa-44bd-b603-bddbf2b51057" in namespace "secrets-2673" to be "Succeeded or Failed" +Jan 10 17:09:26.828: INFO: Pod "pod-secrets-d5445d9d-f3fa-44bd-b603-bddbf2b51057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.84037ms +Jan 10 17:09:28.830: INFO: Pod "pod-secrets-d5445d9d-f3fa-44bd-b603-bddbf2b51057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005199803s +Jan 10 17:09:30.833: INFO: Pod "pod-secrets-d5445d9d-f3fa-44bd-b603-bddbf2b51057": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008195671s +Jan 10 17:09:32.836: INFO: Pod "pod-secrets-d5445d9d-f3fa-44bd-b603-bddbf2b51057": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.010665711s +STEP: Saw pod success +Jan 10 17:09:32.836: INFO: Pod "pod-secrets-d5445d9d-f3fa-44bd-b603-bddbf2b51057" satisfied condition "Succeeded or Failed" +Jan 10 17:09:32.837: INFO: Trying to get logs from node ip-172-20-39-143.ap-south-1.compute.internal pod pod-secrets-d5445d9d-f3fa-44bd-b603-bddbf2b51057 container secret-volume-test: +STEP: delete the pod +Jan 10 17:09:32.859: INFO: Waiting for pod pod-secrets-d5445d9d-f3fa-44bd-b603-bddbf2b51057 to disappear +Jan 10 17:09:32.861: INFO: Pod pod-secrets-d5445d9d-f3fa-44bd-b603-bddbf2b51057 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:09:32.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2673" for this suite. + +• [SLOW TEST:6.074 seconds] +[sig-storage] Secrets +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":2,"skipped":12,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:09:32.868: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:09:32.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2163" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":277,"completed":3,"skipped":40,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:09:32.921: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating configMap with name configmap-test-volume-2a9d109a-5f89-46c0-aaec-bd9eb49adf12 +STEP: Creating a pod to test consume configMaps +Jan 10 17:09:32.950: INFO: Waiting up to 5m0s for pod "pod-configmaps-064409da-3a2f-4758-9e26-dff0bc3d77f5" in namespace "configmap-7398" to be "Succeeded or Failed" +Jan 10 17:09:32.954: INFO: Pod "pod-configmaps-064409da-3a2f-4758-9e26-dff0bc3d77f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.938734ms +Jan 10 17:09:34.956: INFO: Pod "pod-configmaps-064409da-3a2f-4758-9e26-dff0bc3d77f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006441465s +STEP: Saw pod success +Jan 10 17:09:34.956: INFO: Pod "pod-configmaps-064409da-3a2f-4758-9e26-dff0bc3d77f5" satisfied condition "Succeeded or Failed" +Jan 10 17:09:34.958: INFO: Trying to get logs from node ip-172-20-39-143.ap-south-1.compute.internal pod pod-configmaps-064409da-3a2f-4758-9e26-dff0bc3d77f5 container configmap-volume-test: +STEP: delete the pod +Jan 10 17:09:34.972: INFO: Waiting for pod pod-configmaps-064409da-3a2f-4758-9e26-dff0bc3d77f5 to disappear +Jan 10 17:09:34.974: INFO: Pod pod-configmaps-064409da-3a2f-4758-9e26-dff0bc3d77f5 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:09:34.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7398" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":277,"completed":4,"skipped":97,"failed":0} +SS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:09:34.981: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 +[It] should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: creating a replication controller +Jan 10 17:09:35.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-6576' +Jan 10 17:09:35.427: INFO: stderr: "" +Jan 10 17:09:35.427: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jan 10 17:09:35.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6576' +Jan 10 17:09:35.508: INFO: stderr: "" +Jan 10 17:09:35.508: INFO: stdout: "update-demo-nautilus-b779k update-demo-nautilus-h95l6 " +Jan 10 17:09:35.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-b779k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6576' +Jan 10 17:09:35.584: INFO: stderr: "" +Jan 10 17:09:35.584: INFO: stdout: "" +Jan 10 17:09:35.585: INFO: update-demo-nautilus-b779k is created but not running +Jan 10 17:09:40.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6576' +Jan 10 17:09:40.665: INFO: stderr: "" +Jan 10 17:09:40.665: INFO: stdout: "update-demo-nautilus-b779k update-demo-nautilus-h95l6 " +Jan 10 17:09:40.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-b779k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6576' +Jan 10 17:09:40.735: INFO: stderr: "" +Jan 10 17:09:40.735: INFO: stdout: "true" +Jan 10 17:09:40.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-b779k -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6576' +Jan 10 17:09:40.806: INFO: stderr: "" +Jan 10 17:09:40.806: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jan 10 17:09:40.806: INFO: validating pod update-demo-nautilus-b779k +Jan 10 17:09:40.809: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 10 17:09:40.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 10 17:09:40.809: INFO: update-demo-nautilus-b779k is verified up and running +Jan 10 17:09:40.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-h95l6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6576' +Jan 10 17:09:40.880: INFO: stderr: "" +Jan 10 17:09:40.880: INFO: stdout: "true" +Jan 10 17:09:40.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-h95l6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6576' +Jan 10 17:09:40.949: INFO: stderr: "" +Jan 10 17:09:40.949: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jan 10 17:09:40.949: INFO: validating pod update-demo-nautilus-h95l6 +Jan 10 17:09:40.952: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 10 17:09:40.953: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 10 17:09:40.953: INFO: update-demo-nautilus-h95l6 is verified up and running +STEP: using delete to clean up resources +Jan 10 17:09:40.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete --grace-period=0 --force -f - --namespace=kubectl-6576' +Jan 10 17:09:41.024: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jan 10 17:09:41.024: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jan 10 17:09:41.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6576' +Jan 10 17:09:41.099: INFO: stderr: "No resources found in kubectl-6576 namespace.\n" +Jan 10 17:09:41.099: INFO: stdout: "" +Jan 10 17:09:41.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -l name=update-demo --namespace=kubectl-6576 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jan 10 17:09:41.174: INFO: stderr: "" +Jan 10 17:09:41.174: INFO: stdout: "update-demo-nautilus-b779k\nupdate-demo-nautilus-h95l6\n" +Jan 10 17:09:41.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6576' +Jan 10 17:09:41.750: INFO: stderr: "No resources found in kubectl-6576 namespace.\n" +Jan 10 17:09:41.750: INFO: stdout: "" +Jan 10 17:09:41.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -l name=update-demo --namespace=kubectl-6576 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jan 10 17:09:41.824: INFO: stderr: "" +Jan 10 17:09:41.824: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:09:41.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6576" for this suite. 
+ +• [SLOW TEST:6.850 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":277,"completed":5,"skipped":99,"failed":0} +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:09:41.832: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating configMap with name projected-configmap-test-volume-5027396c-c108-4699-a198-c37f4c90043c +STEP: Creating a pod to test consume configMaps +Jan 10 17:09:41.860: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-16c05d14-2449-4c1c-a7dc-4cd271964bde" in namespace "projected-941" to be "Succeeded or Failed" +Jan 10 17:09:41.862: INFO: Pod "pod-projected-configmaps-16c05d14-2449-4c1c-a7dc-4cd271964bde": Phase="Pending", Reason="", readiness=false. Elapsed: 1.715795ms +Jan 10 17:09:43.864: INFO: Pod "pod-projected-configmaps-16c05d14-2449-4c1c-a7dc-4cd271964bde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004276055s +Jan 10 17:09:45.867: INFO: Pod "pod-projected-configmaps-16c05d14-2449-4c1c-a7dc-4cd271964bde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006817711s +Jan 10 17:09:47.869: INFO: Pod "pod-projected-configmaps-16c05d14-2449-4c1c-a7dc-4cd271964bde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009434814s +STEP: Saw pod success +Jan 10 17:09:47.869: INFO: Pod "pod-projected-configmaps-16c05d14-2449-4c1c-a7dc-4cd271964bde" satisfied condition "Succeeded or Failed" +Jan 10 17:09:47.871: INFO: Trying to get logs from node ip-172-20-52-46.ap-south-1.compute.internal pod pod-projected-configmaps-16c05d14-2449-4c1c-a7dc-4cd271964bde container projected-configmap-volume-test: +STEP: delete the pod +Jan 10 17:09:47.894: INFO: Waiting for pod pod-projected-configmaps-16c05d14-2449-4c1c-a7dc-4cd271964bde to disappear +Jan 10 17:09:47.896: INFO: Pod pod-projected-configmaps-16c05d14-2449-4c1c-a7dc-4cd271964bde no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:09:47.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-941" for this suite. 
+ +• [SLOW TEST:6.070 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":277,"completed":6,"skipped":99,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:09:47.903: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Jan 10 17:09:47.929: INFO: Waiting up to 5m0s for pod "pod-51763d2a-d7d4-40a1-83dd-697b060be05e" in namespace "emptydir-6580" to be "Succeeded or Failed" +Jan 10 17:09:47.935: INFO: Pod "pod-51763d2a-d7d4-40a1-83dd-697b060be05e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.723135ms +Jan 10 17:09:49.937: INFO: Pod "pod-51763d2a-d7d4-40a1-83dd-697b060be05e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007775714s +STEP: Saw pod success +Jan 10 17:09:49.937: INFO: Pod "pod-51763d2a-d7d4-40a1-83dd-697b060be05e" satisfied condition "Succeeded or Failed" +Jan 10 17:09:49.938: INFO: Trying to get logs from node ip-172-20-52-46.ap-south-1.compute.internal pod pod-51763d2a-d7d4-40a1-83dd-697b060be05e container test-container: +STEP: delete the pod +Jan 10 17:09:49.952: INFO: Waiting for pod pod-51763d2a-d7d4-40a1-83dd-697b060be05e to disappear +Jan 10 17:09:49.953: INFO: Pod pod-51763d2a-d7d4-40a1-83dd-697b060be05e no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:09:49.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6580" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":7,"skipped":151,"failed":0} +S +------------------------------ +[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:09:49.959: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename taint-multiple-pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 +Jan 10 17:09:49.980: INFO: Waiting up to 1m0s for all nodes to be ready +Jan 10 17:10:50.010: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:10:50.013: INFO: Starting informer... +STEP: Starting pods... +Jan 10 17:10:50.223: INFO: Pod1 is running on ip-172-20-39-143.ap-south-1.compute.internal. Tainting Node +Jan 10 17:10:56.438: INFO: Pod2 is running on ip-172-20-39-143.ap-south-1.compute.internal. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Jan 10 17:11:09.329: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Jan 10 17:11:23.409: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:11:23.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-8555" for this suite. 
+ +• [SLOW TEST:93.472 seconds] +[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":277,"completed":8,"skipped":152,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:11:23.432: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename taint-single-pod +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 +Jan 10 17:11:23.452: INFO: Waiting up to 1m0s for all nodes to be ready +Jan 10 17:12:23.483: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:12:23.485: INFO: Starting informer... +STEP: Starting pod... +Jan 10 17:12:23.693: INFO: Pod is running on ip-172-20-33-172.ap-south-1.compute.internal. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting short time to make sure Pod is queued for deletion +Jan 10 17:12:23.704: INFO: Pod wasn't evicted. Proceeding +Jan 10 17:12:23.704: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting some time to make sure that toleration time passed. +Jan 10 17:13:38.719: INFO: Pod wasn't evicted. Test successful +[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:13:38.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-4312" for this suite. 
+ +• [SLOW TEST:135.294 seconds] +[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":277,"completed":9,"skipped":194,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:13:38.727: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating configMap with name projected-configmap-test-volume-map-6a43c2b1-e434-4d6d-80c6-f07d7f5f0ef4 +STEP: Creating a pod to test consume configMaps +Jan 10 17:13:38.759: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9431b4a-917d-4e06-a456-b4aa57d14200" in namespace "projected-9228" to be "Succeeded or Failed" +Jan 10 17:13:38.760: INFO: Pod "pod-projected-configmaps-e9431b4a-917d-4e06-a456-b4aa57d14200": Phase="Pending", Reason="", readiness=false. Elapsed: 1.743788ms +Jan 10 17:13:40.763: INFO: Pod "pod-projected-configmaps-e9431b4a-917d-4e06-a456-b4aa57d14200": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004326662s +Jan 10 17:13:42.766: INFO: Pod "pod-projected-configmaps-e9431b4a-917d-4e06-a456-b4aa57d14200": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006890333s +Jan 10 17:13:44.768: INFO: Pod "pod-projected-configmaps-e9431b4a-917d-4e06-a456-b4aa57d14200": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009722437s +STEP: Saw pod success +Jan 10 17:13:44.768: INFO: Pod "pod-projected-configmaps-e9431b4a-917d-4e06-a456-b4aa57d14200" satisfied condition "Succeeded or Failed" +Jan 10 17:13:44.770: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-configmaps-e9431b4a-917d-4e06-a456-b4aa57d14200 container projected-configmap-volume-test: +STEP: delete the pod +Jan 10 17:13:44.792: INFO: Waiting for pod pod-projected-configmaps-e9431b4a-917d-4e06-a456-b4aa57d14200 to disappear +Jan 10 17:13:44.794: INFO: Pod pod-projected-configmaps-e9431b4a-917d-4e06-a456-b4aa57d14200 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:13:44.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9228" for this suite. 
+ +• [SLOW TEST:6.074 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":10,"skipped":244,"failed":0} +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:13:44.802: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating the pod +Jan 10 17:13:47.355: INFO: Successfully updated pod "annotationupdate47fb350d-23ad-402e-8827-7b14e5150234" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:13:51.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3624" for this suite. 
+ +• [SLOW TEST:6.581 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":277,"completed":11,"skipped":244,"failed":0} +S +------------------------------ +[k8s.io] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:13:51.383: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:13:51.407: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:13:57.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9003" for this suite. 
+ +• [SLOW TEST:6.059 seconds] +[k8s.io] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":277,"completed":12,"skipped":245,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:13:57.441: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Jan 10 17:13:58.568: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Jan 10 17:14:00.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 10 17:14:02.576: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 10 17:14:04.576: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 10 17:14:06.576: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895638, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Jan 10 17:14:09.585: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:14:19.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6880" for this suite. 
+STEP: Destroying namespace "webhook-6880-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:22.283 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":277,"completed":13,"skipped":251,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:14:19.724: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 +[It] should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:14:19.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8601" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":277,"completed":14,"skipped":281,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:14:19.762: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Jan 10 17:14:29.803: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W0110 17:14:29.803361 24 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:14:29.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-2996" for this suite. 
+ +• [SLOW TEST:10.049 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":277,"completed":15,"skipped":291,"failed":0} +S +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:14:29.811: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test downward API volume plugin +Jan 10 17:14:29.838: INFO: Waiting up to 5m0s for pod "downwardapi-volume-128173fa-2766-4639-a26b-dd5c4915846d" in namespace "projected-6537" to be "Succeeded or Failed" +Jan 10 17:14:29.840: INFO: Pod "downwardapi-volume-128173fa-2766-4639-a26b-dd5c4915846d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.845312ms +Jan 10 17:14:31.842: INFO: Pod "downwardapi-volume-128173fa-2766-4639-a26b-dd5c4915846d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004396233s +STEP: Saw pod success +Jan 10 17:14:31.842: INFO: Pod "downwardapi-volume-128173fa-2766-4639-a26b-dd5c4915846d" satisfied condition "Succeeded or Failed" +Jan 10 17:14:31.844: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-128173fa-2766-4639-a26b-dd5c4915846d container client-container: +STEP: delete the pod +Jan 10 17:14:31.858: INFO: Waiting for pod downwardapi-volume-128173fa-2766-4639-a26b-dd5c4915846d to disappear +Jan 10 17:14:31.860: INFO: Pod downwardapi-volume-128173fa-2766-4639-a26b-dd5c4915846d no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:14:31.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6537" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":277,"completed":16,"skipped":292,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[k8s.io] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:14:31.877: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:14:31.905: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-285aef74-83da-4827-8757-942fde63df32" in namespace "security-context-test-6639" to be "Succeeded or Failed" +Jan 10 17:14:31.908: INFO: Pod "busybox-privileged-false-285aef74-83da-4827-8757-942fde63df32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.599318ms +Jan 10 17:14:33.910: INFO: Pod "busybox-privileged-false-285aef74-83da-4827-8757-942fde63df32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005011406s +Jan 10 17:14:33.910: INFO: Pod "busybox-privileged-false-285aef74-83da-4827-8757-942fde63df32" satisfied condition "Succeeded or Failed" +Jan 10 17:14:33.916: INFO: Got logs for pod "busybox-privileged-false-285aef74-83da-4827-8757-942fde63df32": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:14:33.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-6639" for this suite. 
+•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":17,"skipped":305,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:14:33.923: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating secret with name secret-test-07f89642-0da0-49e1-9b2a-fdeae8e43542 +STEP: Creating a pod to test consume secrets +Jan 10 17:14:33.973: INFO: Waiting up to 5m0s for pod "pod-secrets-eb0c8284-3254-4edb-b9bc-c2ab4d040b43" in namespace "secrets-9732" to be "Succeeded or Failed" +Jan 10 17:14:33.974: INFO: Pod "pod-secrets-eb0c8284-3254-4edb-b9bc-c2ab4d040b43": Phase="Pending", Reason="", readiness=false. Elapsed: 1.706073ms +Jan 10 17:14:35.977: INFO: Pod "pod-secrets-eb0c8284-3254-4edb-b9bc-c2ab4d040b43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004320346s +STEP: Saw pod success +Jan 10 17:14:35.977: INFO: Pod "pod-secrets-eb0c8284-3254-4edb-b9bc-c2ab4d040b43" satisfied condition "Succeeded or Failed" +Jan 10 17:14:35.979: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-secrets-eb0c8284-3254-4edb-b9bc-c2ab4d040b43 container secret-volume-test: +STEP: delete the pod +Jan 10 17:14:35.995: INFO: Waiting for pod pod-secrets-eb0c8284-3254-4edb-b9bc-c2ab4d040b43 to disappear +Jan 10 17:14:35.997: INFO: Pod pod-secrets-eb0c8284-3254-4edb-b9bc-c2ab4d040b43 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:14:35.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9732" for this suite. +STEP: Destroying namespace "secret-namespace-614" for this suite. 
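+
+To spot-check the namespacing property this test covers, create identically named secrets in two namespaces and mount one of them; all names below are illustrative:
+```
+kubectl create namespace demo-a
+kubectl create namespace demo-b
+kubectl -n demo-a create secret generic shared-name --from-literal=data=from-a
+kubectl -n demo-b create secret generic shared-name --from-literal=data=from-b
+```
+A pod in demo-a that mounts shared-name through a secret volume (as the test above does) reads "from-a"; the same-named secret in demo-b never shadows it.
+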
+•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":277,"completed":18,"skipped":352,"failed":0} +SSSSSSSSSS +------------------------------ +[k8s.io] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:14:36.010: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Jan 10 17:14:38.048: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:14:38.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9293" for this suite. +•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":277,"completed":19,"skipped":362,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:14:38.066: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test emptydir 0644 on node default medium +Jan 10 17:14:38.091: INFO: Waiting up to 5m0s for pod "pod-fce7bef6-d861-4681-866c-660793f05a6c" in namespace "emptydir-7062" to be "Succeeded or Failed" +Jan 10 17:14:38.093: INFO: Pod "pod-fce7bef6-d861-4681-866c-660793f05a6c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.728934ms +Jan 10 17:14:40.096: INFO: Pod "pod-fce7bef6-d861-4681-866c-660793f05a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004466803s +Jan 10 17:14:42.099: INFO: Pod "pod-fce7bef6-d861-4681-866c-660793f05a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007214401s +Jan 10 17:14:44.101: INFO: Pod "pod-fce7bef6-d861-4681-866c-660793f05a6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009876171s +STEP: Saw pod success +Jan 10 17:14:44.101: INFO: Pod "pod-fce7bef6-d861-4681-866c-660793f05a6c" satisfied condition "Succeeded or Failed" +Jan 10 17:14:44.103: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-fce7bef6-d861-4681-866c-660793f05a6c container test-container: +STEP: delete the pod +Jan 10 17:14:44.118: INFO: Waiting for pod pod-fce7bef6-d861-4681-866c-660793f05a6c to disappear +Jan 10 17:14:44.120: INFO: Pod pod-fce7bef6-d861-4681-866c-660793f05a6c no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:14:44.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7062" for this suite. + +• [SLOW TEST:6.062 seconds] +[sig-storage] EmptyDir volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":20,"skipped":378,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:14:44.129: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Performing setup for networking test in namespace pod-network-test-6926 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jan 10 17:14:44.149: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Jan 10 17:14:44.173: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jan 10 17:14:46.175: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:14:48.175: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:14:50.176: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:14:52.175: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:14:54.175: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 
17:14:56.177: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:14:58.175: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:15:00.175: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:15:02.176: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:15:04.175: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:15:06.175: INFO: The status of Pod netserver-0 is Running (Ready = true) +Jan 10 17:15:06.179: INFO: The status of Pod netserver-1 is Running (Ready = false) +Jan 10 17:15:08.182: INFO: The status of Pod netserver-1 is Running (Ready = true) +Jan 10 17:15:08.185: INFO: The status of Pod netserver-2 is Running (Ready = false) +Jan 10 17:15:10.188: INFO: The status of Pod netserver-2 is Running (Ready = false) +Jan 10 17:15:12.188: INFO: The status of Pod netserver-2 is Running (Ready = true) +STEP: Creating test pods +Jan 10 17:15:14.202: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.108.158.143:8080/dial?request=hostname&protocol=udp&host=100.108.158.142&port=8081&tries=1'] Namespace:pod-network-test-6926 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:15:14.202: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:15:14.318: INFO: Waiting for responses: map[] +Jan 10 17:15:14.320: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.108.158.143:8080/dial?request=hostname&protocol=udp&host=100.112.27.201&port=8081&tries=1'] Namespace:pod-network-test-6926 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:15:14.320: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:15:14.420: INFO: Waiting for responses: map[] +Jan 10 17:15:14.422: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.108.158.143:8080/dial?request=hostname&protocol=udp&host=100.100.191.134&port=8081&tries=1'] Namespace:pod-network-test-6926 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:15:14.422: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:15:14.515: INFO: Waiting for responses: map[] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:15:14.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-6926" for this suite. 
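+
+A rough manual equivalent of the probe above, assuming two pods built from the e2e agnhost netexec image: a client pod serving HTTP on 8080 and a server pod listening on UDP 8081 (the pod name and IP variables are placeholders, not values from this run):
+```
+# ask the client pod to relay a UDP "hostname" request to the server pod
+kubectl exec test-container-pod -- \
+  curl -g -q -s "http://$CLIENT_IP:8080/dial?request=hostname&protocol=udp&host=$SERVER_IP&port=8081&tries=1"
+```
+A healthy pod network should return a JSON body naming the server pod; an empty response list suggests the CNI is dropping inter-pod UDP.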
+ +• [SLOW TEST:30.393 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":277,"completed":21,"skipped":393,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:15:14.523: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:15:27.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-4420" for this suite. +STEP: Destroying namespace "nsdeletetest-8339" for this suite. +Jan 10 17:15:27.608: INFO: Namespace nsdeletetest-8339 was already deleted +STEP: Destroying namespace "nsdeletetest-6017" for this suite. 
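+
+The same guarantee can be checked by hand in a few commands (namespace and pod names are illustrative):
+```
+kubectl create namespace nsdelete-demo
+kubectl -n nsdelete-demo run test-pod --image=k8s.gcr.io/pause:3.2 --restart=Never
+kubectl delete namespace nsdelete-demo
+kubectl wait --for=delete namespace/nsdelete-demo --timeout=120s
+kubectl get pods -n nsdelete-demo   # no resources: the pod was removed with its namespace
+```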
+ +• [SLOW TEST:13.089 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":277,"completed":22,"skipped":413,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:15:27.615: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:15:27.635: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:15:33.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-5458" for this suite. 
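+
+Defaulting of the kind verified here is declared in the CRD's structural schema. A sketch; the group, names, and defaulted field are invented for illustration:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: widgets.demo.example.com   # illustrative CRD
+spec:
+  group: demo.example.com
+  scope: Namespaced
+  names:
+    plural: widgets
+    singular: widget
+    kind: Widget
+  versions:
+  - name: v1
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema:
+        type: object
+        properties:
+          spec:
+            type: object
+            properties:
+              replicas:
+                type: integer
+                default: 1   # filled in by the API server when omitted
+EOF
+```
+Creating a Widget without spec.replicas and reading it back yields replicas: 1; defaults are also applied to objects read from storage, which is the second half of what the test asserts.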
+ +• [SLOW TEST:6.188 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":277,"completed":23,"skipped":423,"failed":0} +SSSSSSSSSSS +------------------------------ +[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:15:33.803: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:15:33.859: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-00ba3792-3579-41dd-bf82-2651175bb6be" in namespace "security-context-test-104" to be "Succeeded or Failed" +Jan 10 17:15:33.860: INFO: Pod "alpine-nnp-false-00ba3792-3579-41dd-bf82-2651175bb6be": Phase="Pending", Reason="", readiness=false. Elapsed: 1.725726ms +Jan 10 17:15:35.863: INFO: Pod "alpine-nnp-false-00ba3792-3579-41dd-bf82-2651175bb6be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004733442s +Jan 10 17:15:37.866: INFO: Pod "alpine-nnp-false-00ba3792-3579-41dd-bf82-2651175bb6be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007369079s +Jan 10 17:15:39.868: INFO: Pod "alpine-nnp-false-00ba3792-3579-41dd-bf82-2651175bb6be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009842399s +Jan 10 17:15:39.869: INFO: Pod "alpine-nnp-false-00ba3792-3579-41dd-bf82-2651175bb6be" satisfied condition "Succeeded or Failed" +[AfterEach] [k8s.io] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:15:39.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-104" for this suite. 
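+
+The flag under test maps to the kernel's no_new_privs bit, which a container can inspect directly (pod name and image are illustrative):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: alpine-nnp-false-demo   # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: alpine
+    image: alpine:3.12
+    # NoNewPrivs: 1 in the output confirms escalation is blocked
+    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
+    securityContext:
+      allowPrivilegeEscalation: false
+EOF
+```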
+ +• [SLOW TEST:6.079 seconds] +[k8s.io] Security Context +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + when creating containers with AllowPrivilegeEscalation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":24,"skipped":434,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:15:39.882: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 +[BeforeEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: running the image docker.io/library/httpd:2.4.38-alpine +Jan 10 17:15:39.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9682' +Jan 10 17:15:39.993: INFO: stderr: "" +Jan 10 17:15:39.993: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 +Jan 10 17:15:39.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete pods e2e-test-httpd-pod --namespace=kubectl-9682' +Jan 10 17:15:43.306: INFO: stderr: "" +Jan 10 17:15:43.306: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:15:43.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9682" for this suite. 
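+
+The manual equivalent mirrors the command in the log; only the namespace differs:
+```
+kubectl run e2e-test-httpd-pod --restart=Never \
+  --image=docker.io/library/httpd:2.4.38-alpine --namespace=default
+kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.restartPolicy}'   # prints: Never
+kubectl delete pod e2e-test-httpd-pod
+```
+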
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":277,"completed":25,"skipped":447,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:15:43.315: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test override arguments +Jan 10 17:15:43.343: INFO: Waiting up to 5m0s for pod "client-containers-c1440dfc-5904-4a89-84c7-1f14cbab2377" in namespace "containers-1447" to be "Succeeded or Failed" +Jan 10 17:15:43.346: INFO: Pod "client-containers-c1440dfc-5904-4a89-84c7-1f14cbab2377": Phase="Pending", Reason="", readiness=false. Elapsed: 3.03798ms +Jan 10 17:15:45.349: INFO: Pod "client-containers-c1440dfc-5904-4a89-84c7-1f14cbab2377": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005668113s +STEP: Saw pod success +Jan 10 17:15:45.349: INFO: Pod "client-containers-c1440dfc-5904-4a89-84c7-1f14cbab2377" satisfied condition "Succeeded or Failed" +Jan 10 17:15:45.351: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod client-containers-c1440dfc-5904-4a89-84c7-1f14cbab2377 container test-container: +STEP: delete the pod +Jan 10 17:15:45.368: INFO: Waiting for pod client-containers-c1440dfc-5904-4a89-84c7-1f14cbab2377 to disappear +Jan 10 17:15:45.370: INFO: Pod client-containers-c1440dfc-5904-4a89-84c7-1f14cbab2377 no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:15:45.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-1447" for this suite. 
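+
+Overriding an image's default arguments is done with the container's args field (command would override the ENTRYPOINT instead). A sketch with an illustrative image:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: client-containers-demo   # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox:1.29
+    # args replaces the image's default CMD ("sh" for busybox)
+    args: ["echo", "arguments overridden"]
+EOF
+```
+`kubectl logs client-containers-demo` should print "arguments overridden".
+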
+•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":277,"completed":26,"skipped":481,"failed":0} +SSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:15:45.376: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 +[BeforeEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 +STEP: creating the pod +Jan 10 17:15:45.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-4840' +Jan 10 17:15:45.686: INFO: stderr: "" +Jan 10 17:15:45.686: INFO: stdout: "pod/pause created\n" +Jan 10 17:15:45.686: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Jan 10 17:15:45.687: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4840" to be "running and ready" +Jan 10 17:15:45.689: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225677ms +Jan 10 17:15:47.691: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004733866s +Jan 10 17:15:49.694: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.007368636s +Jan 10 17:15:49.694: INFO: Pod "pause" satisfied condition "running and ready" +Jan 10 17:15:49.694: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: adding the label testing-label with value testing-label-value to a pod +Jan 10 17:15:49.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 label pods pause testing-label=testing-label-value --namespace=kubectl-4840' +Jan 10 17:15:49.776: INFO: stderr: "" +Jan 10 17:15:49.776: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Jan 10 17:15:49.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pod pause -L testing-label --namespace=kubectl-4840' +Jan 10 17:15:49.847: INFO: stderr: "" +Jan 10 17:15:49.847: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" +STEP: removing the label testing-label of a pod +Jan 10 17:15:49.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 label pods pause testing-label- --namespace=kubectl-4840' +Jan 10 17:15:49.926: INFO: stderr: "" +Jan 10 17:15:49.926: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Jan 10 17:15:49.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pod pause -L testing-label --namespace=kubectl-4840' +Jan 10 17:15:49.997: INFO: stderr: "" +Jan 10 17:15:49.997: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" +[AfterEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 +STEP: using delete to clean up resources +Jan 10 17:15:49.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete --grace-period=0 --force -f - --namespace=kubectl-4840' +Jan 10 17:15:50.074: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jan 10 17:15:50.074: INFO: stdout: "pod \"pause\" force deleted\n" +Jan 10 17:15:50.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get rc,svc -l name=pause --no-headers --namespace=kubectl-4840' +Jan 10 17:15:50.150: INFO: stderr: "No resources found in kubectl-4840 namespace.\n" +Jan 10 17:15:50.150: INFO: stdout: "" +Jan 10 17:15:50.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -l name=pause --namespace=kubectl-4840 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jan 10 17:15:50.221: INFO: stderr: "" +Jan 10 17:15:50.221: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:15:50.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4840" for this suite. 
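+
+The label add/verify/remove cycle above condenses to three commands; the pod and namespace names are reused from the log, but any running pod works:
+```
+kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-4840
+kubectl get pod pause -L testing-label --namespace=kubectl-4840   # TESTING-LABEL column is populated
+kubectl label pods pause testing-label- --namespace=kubectl-4840  # the trailing "-" removes the label
+```
+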
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":277,"completed":27,"skipped":486,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:15:50.228: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5948.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5948.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5948.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5948.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5948.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5948.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5948.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5948.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5948.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5948.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jan 10 17:16:02.297: INFO: DNS probes using dns-5948/dns-test-887bd22c-57b4-4479-96ab-522b0ec4c8d9 succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:02.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5948" for this suite. + +• [SLOW TEST:12.097 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":277,"completed":28,"skipped":510,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:02.325: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: getting the auto-created API token +Jan 10 17:16:02.869: INFO: created pod pod-service-account-defaultsa +Jan 10 17:16:02.869: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Jan 10 17:16:02.874: INFO: created pod pod-service-account-mountsa +Jan 10 17:16:02.874: INFO: pod pod-service-account-mountsa service account token volume mount: true +Jan 10 17:16:02.877: INFO: created pod pod-service-account-nomountsa +Jan 10 17:16:02.877: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Jan 10 17:16:02.882: INFO: created pod pod-service-account-defaultsa-mountspec +Jan 10 17:16:02.882: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Jan 10 17:16:02.886: INFO: created pod pod-service-account-mountsa-mountspec +Jan 10 17:16:02.886: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Jan 10 17:16:02.891: INFO: created pod pod-service-account-nomountsa-mountspec +Jan 10 17:16:02.891: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Jan 10 17:16:02.899: INFO: created pod pod-service-account-defaultsa-nomountspec +Jan 10 17:16:02.899: INFO: pod 
pod-service-account-defaultsa-nomountspec service account token volume mount: false +Jan 10 17:16:02.902: INFO: created pod pod-service-account-mountsa-nomountspec +Jan 10 17:16:02.902: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Jan 10 17:16:02.907: INFO: created pod pod-service-account-nomountsa-nomountspec +Jan 10 17:16:02.907: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:02.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-4811" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":277,"completed":29,"skipped":522,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:02.918: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating the pod +Jan 10 17:16:05.475: INFO: Successfully updated pod "labelsupdateeb152a93-a66f-416b-8db7-be6c0cc48148" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:09.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7156" for this suite. 
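+
+The update just verified relies on the kubelet refreshing projected downward API files when pod metadata changes. A minimal pod for watching that happen (all names illustrative):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: labelsupdate-demo   # illustrative name
+  labels:
+    stage: before
+spec:
+  containers:
+  - name: client-container
+    image: busybox:1.29
+    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: labels
+            fieldRef:
+              fieldPath: metadata.labels
+EOF
+```
+After `kubectl label pod labelsupdate-demo stage=after --overwrite`, the logged file contents change within the kubelet's sync period, with no container restart.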
+ +• [SLOW TEST:6.581 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":277,"completed":30,"skipped":539,"failed":0} +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:09.500: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test downward API volume plugin +Jan 10 17:16:09.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a8c1bd3-cd8a-4b38-a080-0121a470bd34" in namespace "downward-api-2783" to be "Succeeded or Failed" +Jan 10 17:16:09.527: INFO: Pod "downwardapi-volume-3a8c1bd3-cd8a-4b38-a080-0121a470bd34": Phase="Pending", Reason="", readiness=false. Elapsed: 1.844525ms +Jan 10 17:16:11.529: INFO: Pod "downwardapi-volume-3a8c1bd3-cd8a-4b38-a080-0121a470bd34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004236898s +STEP: Saw pod success +Jan 10 17:16:11.529: INFO: Pod "downwardapi-volume-3a8c1bd3-cd8a-4b38-a080-0121a470bd34" satisfied condition "Succeeded or Failed" +Jan 10 17:16:11.531: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-3a8c1bd3-cd8a-4b38-a080-0121a470bd34 container client-container: +STEP: delete the pod +Jan 10 17:16:11.548: INFO: Waiting for pod downwardapi-volume-3a8c1bd3-cd8a-4b38-a080-0121a470bd34 to disappear +Jan 10 17:16:11.550: INFO: Pod downwardapi-volume-3a8c1bd3-cd8a-4b38-a080-0121a470bd34 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:11.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2783" for this suite. 
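+
+What this test checks: when a container sets no CPU limit, the downward API reports the node's allocatable CPU in its place. A sketch with illustrative names:
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downwardapi-cpu-demo   # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox:1.29
+    # no resources.limits.cpu is set, so limits.cpu falls back to node allocatable
+    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: cpu_limit
+        resourceFieldRef:
+          containerName: client-container
+          resource: limits.cpu
+EOF
+```
+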
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":277,"completed":31,"skipped":539,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:11.557: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test substitution in container's command +Jan 10 17:16:11.586: INFO: Waiting up to 5m0s for pod "var-expansion-81e72dff-1a9b-4974-9a53-f2762ab07300" in namespace "var-expansion-2792" to be "Succeeded or Failed" +Jan 10 17:16:11.592: INFO: Pod "var-expansion-81e72dff-1a9b-4974-9a53-f2762ab07300": Phase="Pending", Reason="", readiness=false. Elapsed: 6.456344ms +Jan 10 17:16:13.595: INFO: Pod "var-expansion-81e72dff-1a9b-4974-9a53-f2762ab07300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009218086s +STEP: Saw pod success +Jan 10 17:16:13.595: INFO: Pod "var-expansion-81e72dff-1a9b-4974-9a53-f2762ab07300" satisfied condition "Succeeded or Failed" +Jan 10 17:16:13.597: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod var-expansion-81e72dff-1a9b-4974-9a53-f2762ab07300 container dapi-container: +STEP: delete the pod +Jan 10 17:16:13.611: INFO: Waiting for pod var-expansion-81e72dff-1a9b-4974-9a53-f2762ab07300 to disappear +Jan 10 17:16:13.613: INFO: Pod var-expansion-81e72dff-1a9b-4974-9a53-f2762ab07300 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:13.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-2792" for this suite. 
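+
+The substitution under test uses the $(VAR) syntax, which Kubernetes expands from the container's env before the process starts (names are illustrative):
+```
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: var-expansion-demo   # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox:1.29
+    env:
+    - name: MESSAGE
+      value: "test-value"
+    # $(MESSAGE) is expanded by Kubernetes, not by a shell
+    command: ["/bin/echo", "$(MESSAGE)"]
+EOF
+```
+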
+•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":277,"completed":32,"skipped":559,"failed":0} +S +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:13.620: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 +Jan 10 17:16:13.642: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jan 10 17:16:13.650: INFO: Waiting for terminating namespaces to be deleted... +Jan 10 17:16:13.652: INFO: +Logging pods the kubelet thinks is on node ip-172-20-33-172.ap-south-1.compute.internal before test +Jan 10 17:16:13.657: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-tfj4x from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded) +Jan 10 17:16:13.657: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 10 17:16:13.657: INFO: Container systemd-logs ready: true, restart count 0 +Jan 10 17:16:13.657: INFO: calico-node-vgdrq from kube-system started at 2021-01-10 16:58:19 +0000 UTC (1 container statuses recorded) +Jan 10 17:16:13.657: INFO: Container calico-node ready: true, restart count 0 +Jan 10 17:16:13.657: INFO: kube-proxy-ip-172-20-33-172.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:44 +0000 UTC (1 container statuses recorded) +Jan 10 17:16:13.657: INFO: Container kube-proxy ready: true, restart count 0 +Jan 10 17:16:13.657: INFO: +Logging pods the kubelet thinks is on node ip-172-20-39-143.ap-south-1.compute.internal before test +Jan 10 17:16:13.664: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-zrwk8 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded) +Jan 10 17:16:13.664: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: Container systemd-logs ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: kube-dns-64f86fb8dd-ngh4q from kube-system started at 2021-01-10 17:12:23 +0000 UTC (3 container statuses recorded) +Jan 10 17:16:13.664: INFO: Container dnsmasq ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: Container kubedns ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: Container sidecar ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: labelsupdateeb152a93-a66f-416b-8db7-be6c0cc48148 from projected-7156 started at 2021-01-10 17:16:02 +0000 UTC (1 container statuses recorded) +Jan 10 17:16:13.664: INFO: Container client-container ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: kube-proxy-ip-172-20-39-143.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:29 +0000 UTC (1 container statuses recorded) +Jan 10 17:16:13.664: INFO: Container kube-proxy ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: calico-node-ldj9k from kube-system started 
at 2021-01-10 16:58:16 +0000 UTC (1 container statuses recorded) +Jan 10 17:16:13.664: INFO: Container calico-node ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: sonobuoy from sonobuoy started at 2021-01-10 17:08:58 +0000 UTC (1 container statuses recorded) +Jan 10 17:16:13.664: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: sonobuoy-e2e-job-5c46f38a56914321 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded) +Jan 10 17:16:13.664: INFO: Container e2e ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 10 17:16:13.664: INFO: +Logging pods the kubelet thinks is on node ip-172-20-52-46.ap-south-1.compute.internal before test +Jan 10 17:16:13.675: INFO: kube-dns-64f86fb8dd-gdkpz from kube-system started at 2021-01-10 16:58:37 +0000 UTC (3 container statuses recorded) +Jan 10 17:16:13.675: INFO: Container dnsmasq ready: true, restart count 0 +Jan 10 17:16:13.675: INFO: Container kubedns ready: true, restart count 0 +Jan 10 17:16:13.675: INFO: Container sidecar ready: true, restart count 0 +Jan 10 17:16:13.675: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-sk6xf from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded) +Jan 10 17:16:13.675: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 10 17:16:13.675: INFO: Container systemd-logs ready: true, restart count 0 +Jan 10 17:16:13.675: INFO: kube-proxy-ip-172-20-52-46.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:48 +0000 UTC (1 container statuses recorded) +Jan 10 17:16:13.675: INFO: Container kube-proxy ready: true, restart count 0 +Jan 10 17:16:13.675: INFO: calico-node-nrg4h from kube-system started at 2021-01-10 16:58:13 +0000 UTC (1 container statuses recorded) +Jan 10 17:16:13.675: INFO: Container calico-node ready: true, restart count 0 +Jan 10 17:16:13.675: INFO: kube-dns-autoscaler-cd7778b7b-c8mf6 from kube-system started at 2021-01-10 16:58:37 +0000 UTC (1 container statuses recorded) +Jan 10 17:16:13.675: INFO: Container autoscaler ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: verifying the node has the label node ip-172-20-33-172.ap-south-1.compute.internal +STEP: verifying the node has the label node ip-172-20-39-143.ap-south-1.compute.internal +STEP: verifying the node has the label node ip-172-20-52-46.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod calico-node-ldj9k requesting resource cpu=100m on Node ip-172-20-39-143.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod calico-node-nrg4h requesting resource cpu=100m on Node ip-172-20-52-46.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod calico-node-vgdrq requesting resource cpu=100m on Node ip-172-20-33-172.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod kube-dns-64f86fb8dd-gdkpz requesting resource cpu=260m on Node ip-172-20-52-46.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod kube-dns-64f86fb8dd-ngh4q requesting resource cpu=260m on Node ip-172-20-39-143.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod kube-dns-autoscaler-cd7778b7b-c8mf6 requesting resource cpu=20m on Node ip-172-20-52-46.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod kube-proxy-ip-172-20-33-172.ap-south-1.compute.internal 
requesting resource cpu=100m on Node ip-172-20-33-172.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod kube-proxy-ip-172-20-39-143.ap-south-1.compute.internal requesting resource cpu=100m on Node ip-172-20-39-143.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod kube-proxy-ip-172-20-52-46.ap-south-1.compute.internal requesting resource cpu=100m on Node ip-172-20-52-46.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod labelsupdateeb152a93-a66f-416b-8db7-be6c0cc48148 requesting resource cpu=0m on Node ip-172-20-39-143.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod sonobuoy requesting resource cpu=0m on Node ip-172-20-39-143.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod sonobuoy-e2e-job-5c46f38a56914321 requesting resource cpu=0m on Node ip-172-20-39-143.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod sonobuoy-systemd-logs-daemon-set-511350556efd4097-sk6xf requesting resource cpu=0m on Node ip-172-20-52-46.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod sonobuoy-systemd-logs-daemon-set-511350556efd4097-tfj4x requesting resource cpu=0m on Node ip-172-20-33-172.ap-south-1.compute.internal +Jan 10 17:16:13.723: INFO: Pod sonobuoy-systemd-logs-daemon-set-511350556efd4097-zrwk8 requesting resource cpu=0m on Node ip-172-20-39-143.ap-south-1.compute.internal +STEP: Starting Pods to consume most of the cluster CPU. +Jan 10 17:16:13.723: INFO: Creating a pod which consumes cpu=2464m on Node ip-172-20-52-46.ap-south-1.compute.internal +Jan 10 17:16:13.730: INFO: Creating a pod which consumes cpu=2660m on Node ip-172-20-33-172.ap-south-1.compute.internal +Jan 10 17:16:13.737: INFO: Creating a pod which consumes cpu=2478m on Node ip-172-20-39-143.ap-south-1.compute.internal +STEP: Creating another pod that requires unavailable amount of CPU. 
+STEP: Considering event: +Type = [Normal], Name = [filler-pod-3b12db41-7ee2-4447-a05c-900df83eb028.1658ee6286cf8339], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6377/filler-pod-3b12db41-7ee2-4447-a05c-900df83eb028 to ip-172-20-52-46.ap-south-1.compute.internal] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-3b12db41-7ee2-4447-a05c-900df83eb028.1658ee62aff9b59f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-3b12db41-7ee2-4447-a05c-900df83eb028.1658ee63290e8020], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2"] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-3b12db41-7ee2-4447-a05c-900df83eb028.1658ee632b635ab1], Reason = [Created], Message = [Created container filler-pod-3b12db41-7ee2-4447-a05c-900df83eb028] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-3b12db41-7ee2-4447-a05c-900df83eb028.1658ee63324f0362], Reason = [Started], Message = [Started container filler-pod-3b12db41-7ee2-4447-a05c-900df83eb028] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b011a295-bf9e-4a77-aea9-7c16d7a3fc2b.1658ee6287bf7ba5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6377/filler-pod-b011a295-bf9e-4a77-aea9-7c16d7a3fc2b to ip-172-20-39-143.ap-south-1.compute.internal] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b011a295-bf9e-4a77-aea9-7c16d7a3fc2b.1658ee62b1f14f0c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b011a295-bf9e-4a77-aea9-7c16d7a3fc2b.1658ee62b4490a64], Reason = [Created], Message = [Created container filler-pod-b011a295-bf9e-4a77-aea9-7c16d7a3fc2b] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b011a295-bf9e-4a77-aea9-7c16d7a3fc2b.1658ee62b9e845d9], Reason = [Started], Message = [Started container filler-pod-b011a295-bf9e-4a77-aea9-7c16d7a3fc2b] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-fd7c3a82-5982-4d17-938c-abdc6ffe07a4.1658ee628727b757], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6377/filler-pod-fd7c3a82-5982-4d17-938c-abdc6ffe07a4 to ip-172-20-33-172.ap-south-1.compute.internal] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-fd7c3a82-5982-4d17-938c-abdc6ffe07a4.1658ee62b13a4020], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-fd7c3a82-5982-4d17-938c-abdc6ffe07a4.1658ee62b3a6ffdc], Reason = [Created], Message = [Created container filler-pod-fd7c3a82-5982-4d17-938c-abdc6ffe07a4] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-fd7c3a82-5982-4d17-938c-abdc6ffe07a4.1658ee62b871f3d0], Reason = [Started], Message = [Started container filler-pod-fd7c3a82-5982-4d17-938c-abdc6ffe07a4] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.1658ee6376ebb980], Reason = [FailedScheduling], Message = [0/6 nodes are available: 3 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
+STEP: removing the label node off the node ip-172-20-33-172.ap-south-1.compute.internal +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node ip-172-20-39-143.ap-south-1.compute.internal +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node ip-172-20-52-46.ap-south-1.compute.internal +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:18.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-6377" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 + +• [SLOW TEST:5.182 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":277,"completed":33,"skipped":560,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:18.803: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Jan 10 17:16:19.126: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Jan 10 17:16:22.142: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:16:22.145: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-478-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 
17:16:28.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3469" for this suite. +STEP: Destroying namespace "webhook-3469-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:9.459 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":277,"completed":34,"skipped":582,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:28.263: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:16:28.292: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Jan 10 17:16:36.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-5270 create -f -' +Jan 10 17:16:37.482: INFO: stderr: "" +Jan 10 17:16:37.482: INFO: stdout: "e2e-test-crd-publish-openapi-3208-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Jan 10 17:16:37.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-5270 delete e2e-test-crd-publish-openapi-3208-crds test-cr' +Jan 10 17:16:37.557: INFO: stderr: "" +Jan 10 17:16:37.557: INFO: stdout: "e2e-test-crd-publish-openapi-3208-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Jan 10 17:16:37.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-5270 apply -f -' +Jan 10 17:16:37.745: INFO: stderr: "" +Jan 10 17:16:37.745: INFO: stdout: "e2e-test-crd-publish-openapi-3208-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Jan 10 17:16:37.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-5270 delete e2e-test-crd-publish-openapi-3208-crds test-cr' +Jan 10 17:16:37.822: INFO: stderr: "" +Jan 10 17:16:37.822: INFO: stdout: "e2e-test-crd-publish-openapi-3208-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Jan 10 
17:16:37.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 explain e2e-test-crd-publish-openapi-3208-crds' +Jan 10 17:16:38.023: INFO: stderr: "" +Jan 10 17:16:38.023: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3208-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:41.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-5270" for this suite. + +• [SLOW TEST:13.465 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":277,"completed":35,"skipped":610,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:41.729: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Jan 10 17:16:42.089: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Jan 10 17:16:44.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895802, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895802, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895802, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745895802, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook 
service +STEP: Verifying the service has paired with the endpoint +Jan 10 17:16:47.108: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a mutating webhook configuration +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:47.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8771" for this suite. +STEP: Destroying namespace "webhook-8771-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:5.472 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":277,"completed":36,"skipped":647,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:47.204: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 +[It] should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: validating api versions +Jan 10 17:16:47.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 api-versions' +Jan 10 17:16:47.315: INFO: stderr: "" +Jan 10 17:16:47.315: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:47.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6910" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":277,"completed":37,"skipped":672,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:47.323: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test downward API volume plugin +Jan 10 17:16:47.352: INFO: Waiting up to 5m0s for pod "downwardapi-volume-479a45cf-af55-452f-8498-f9bafae92d3c" in namespace "downward-api-7442" to be "Succeeded or Failed" +Jan 10 17:16:47.356: INFO: Pod "downwardapi-volume-479a45cf-af55-452f-8498-f9bafae92d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.5701ms +Jan 10 17:16:49.359: INFO: Pod "downwardapi-volume-479a45cf-af55-452f-8498-f9bafae92d3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00619284s +STEP: Saw pod success +Jan 10 17:16:49.359: INFO: Pod "downwardapi-volume-479a45cf-af55-452f-8498-f9bafae92d3c" satisfied condition "Succeeded or Failed" +Jan 10 17:16:49.360: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-479a45cf-af55-452f-8498-f9bafae92d3c container client-container: +STEP: delete the pod +Jan 10 17:16:49.374: INFO: Waiting for pod downwardapi-volume-479a45cf-af55-452f-8498-f9bafae92d3c to disappear +Jan 10 17:16:49.376: INFO: Pod downwardapi-volume-479a45cf-af55-452f-8498-f9bafae92d3c no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:49.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7442" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":38,"skipped":753,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:49.390: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename tables +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:49.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-2032" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":277,"completed":39,"skipped":766,"failed":0} +SSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:49.421: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Given a Pod with a 'name' label pod-adoption is created +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:52.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-5518" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":277,"completed":40,"skipped":771,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:52.470: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating configMap with name configmap-test-volume-map-4f21b04f-bb79-4e12-b693-aadd2bf3d7a3 +STEP: Creating a pod to test consume configMaps +Jan 10 17:16:52.498: INFO: Waiting up to 5m0s for pod "pod-configmaps-f5a6a305-7a71-4f59-9750-eac6a9ae7415" in namespace "configmap-2394" to be "Succeeded or Failed" +Jan 10 17:16:52.500: INFO: Pod "pod-configmaps-f5a6a305-7a71-4f59-9750-eac6a9ae7415": Phase="Pending", Reason="", readiness=false. Elapsed: 1.627858ms +Jan 10 17:16:54.504: INFO: Pod "pod-configmaps-f5a6a305-7a71-4f59-9750-eac6a9ae7415": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005576322s +STEP: Saw pod success +Jan 10 17:16:54.504: INFO: Pod "pod-configmaps-f5a6a305-7a71-4f59-9750-eac6a9ae7415" satisfied condition "Succeeded or Failed" +Jan 10 17:16:54.506: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-configmaps-f5a6a305-7a71-4f59-9750-eac6a9ae7415 container configmap-volume-test: +STEP: delete the pod +Jan 10 17:16:54.522: INFO: Waiting for pod pod-configmaps-f5a6a305-7a71-4f59-9750-eac6a9ae7415 to disappear +Jan 10 17:16:54.525: INFO: Pod pod-configmaps-f5a6a305-7a71-4f59-9750-eac6a9ae7415 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:16:54.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2394" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":277,"completed":41,"skipped":779,"failed":0} +SSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:16:54.533: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 +[It] should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Jan 10 17:16:54.577: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:54.577: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:54.577: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:54.579: INFO: Number of nodes with available pods: 0 +Jan 10 17:16:54.579: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:16:55.582: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:55.583: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:55.583: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:55.585: INFO: Number of nodes with available pods: 0 +Jan 10 17:16:55.585: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:16:56.583: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:56.583: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:56.583: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:56.585: INFO: Number of nodes with available pods: 1 +Jan 10 17:16:56.585: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:16:57.584: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:57.584: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:57.584: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:57.587: INFO: Number of nodes with available pods: 1 +Jan 10 17:16:57.587: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:16:58.583: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node +Jan 10 17:16:58.583: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:58.583: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:58.585: INFO: Number of nodes with available pods: 1 +Jan 10 17:16:58.585: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:16:59.583: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:59.583: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:59.583: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:16:59.585: INFO: Number of nodes with available pods: 1 +Jan 10 17:16:59.585: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:17:00.583: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:00.583: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:00.583: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:00.585: INFO: Number of nodes with available pods: 1 +Jan 10 17:17:00.585: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:17:01.583: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:01.583: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:01.583: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:01.585: INFO: Number of nodes with available pods: 1 +Jan 10 17:17:01.585: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:17:02.583: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:02.583: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:02.583: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:02.585: INFO: Number of nodes with available pods: 1 +Jan 10 17:17:02.585: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:17:03.582: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:03.583: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:03.583: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:03.585: INFO: Number of nodes with available pods: 2 +Jan 10 17:17:03.585: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:17:04.583: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:04.583: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:04.583: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:04.585: INFO: Number of nodes with available pods: 2 +Jan 10 17:17:04.585: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:17:05.583: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:05.583: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:05.583: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:05.585: INFO: Number of nodes with available pods: 3 +Jan 10 17:17:05.585: INFO: Number of running nodes: 3, number of available pods: 3 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
+Jan 10 17:17:05.597: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:05.597: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:05.597: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:05.600: INFO: Number of nodes with available pods: 2 +Jan 10 17:17:05.600: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:17:06.604: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:06.604: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:06.604: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:06.607: INFO: Number of nodes with available pods: 2 +Jan 10 17:17:06.607: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod +Jan 10 17:17:07.604: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:07.604: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:07.604: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 10 17:17:07.607: INFO: Number of nodes with available pods: 3 +Jan 10 17:17:07.607: INFO: Number of running nodes: 3, number of available pods: 3 +STEP: Wait for the failed daemon pod to be completely deleted. 
+[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6270, will wait for the garbage collector to delete the pods +Jan 10 17:17:07.667: INFO: Deleting DaemonSet.extensions daemon-set took: 4.747474ms +Jan 10 17:17:08.067: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.252138ms +Jan 10 17:17:19.370: INFO: Number of nodes with available pods: 0 +Jan 10 17:17:19.370: INFO: Number of running nodes: 0, number of available pods: 0 +Jan 10 17:17:19.373: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6270/daemonsets","resourceVersion":"6723"},"items":null} + +Jan 10 17:17:19.375: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6270/pods","resourceVersion":"6723"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:17:19.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6270" for this suite. + +• [SLOW TEST:24.857 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":277,"completed":42,"skipped":784,"failed":0} +SSSSSSSS +------------------------------ +[k8s.io] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:17:19.390: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating pod liveness-d37c2bae-180a-4716-83e8-0e174174b7af in namespace container-probe-3317 +Jan 10 17:17:21.421: INFO: Started pod liveness-d37c2bae-180a-4716-83e8-0e174174b7af in namespace container-probe-3317 +STEP: checking the pod's current state and verifying that restartCount is present +Jan 10 17:17:21.423: INFO: Initial restart count of pod liveness-d37c2bae-180a-4716-83e8-0e174174b7af is 0 +Jan 10 17:17:45.456: INFO: Restart count of pod container-probe-3317/liveness-d37c2bae-180a-4716-83e8-0e174174b7af is now 1 (24.032757483s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:17:45.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-3317" for this suite. + +• [SLOW TEST:26.081 seconds] +[k8s.io] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":277,"completed":43,"skipped":792,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:17:45.472: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:17:45.493: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Jan 10 17:17:54.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-9103 create -f -' +Jan 10 17:17:54.624: INFO: stderr: "" +Jan 10 17:17:54.624: INFO: stdout: "e2e-test-crd-publish-openapi-2690-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Jan 10 17:17:54.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-9103 delete e2e-test-crd-publish-openapi-2690-crds test-cr' +Jan 10 17:17:54.699: INFO: stderr: "" +Jan 10 17:17:54.699: INFO: stdout: "e2e-test-crd-publish-openapi-2690-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Jan 10 17:17:54.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-9103 apply -f -' +Jan 10 17:17:54.928: INFO: stderr: "" +Jan 10 17:17:54.928: INFO: stdout: "e2e-test-crd-publish-openapi-2690-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Jan 10 17:17:54.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-9103 delete e2e-test-crd-publish-openapi-2690-crds test-cr' +Jan 10 17:17:55.005: INFO: stderr: "" +Jan 10 17:17:55.005: INFO: stdout: "e2e-test-crd-publish-openapi-2690-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Jan 10 17:17:55.005: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-870154433 explain e2e-test-crd-publish-openapi-2690-crds' +Jan 10 17:17:55.170: INFO: stderr: "" +Jan 10 17:17:55.170: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2690-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:17:58.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-9103" for this suite. + +• [SLOW TEST:13.370 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":277,"completed":44,"skipped":801,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:17:58.842: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test downward api env vars +Jan 10 17:17:58.876: INFO: Waiting up to 5m0s for pod "downward-api-90f06efd-c882-4bf6-b92b-1ea11225b1b8" in namespace "downward-api-7529" to be "Succeeded or Failed" +Jan 10 17:17:58.878: INFO: Pod "downward-api-90f06efd-c882-4bf6-b92b-1ea11225b1b8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.667786ms +Jan 10 17:18:00.881: INFO: Pod "downward-api-90f06efd-c882-4bf6-b92b-1ea11225b1b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.004489997s +STEP: Saw pod success +Jan 10 17:18:00.881: INFO: Pod "downward-api-90f06efd-c882-4bf6-b92b-1ea11225b1b8" satisfied condition "Succeeded or Failed" +Jan 10 17:18:00.883: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downward-api-90f06efd-c882-4bf6-b92b-1ea11225b1b8 container dapi-container: +STEP: delete the pod +Jan 10 17:18:00.898: INFO: Waiting for pod downward-api-90f06efd-c882-4bf6-b92b-1ea11225b1b8 to disappear +Jan 10 17:18:00.899: INFO: Pod downward-api-90f06efd-c882-4bf6-b92b-1ea11225b1b8 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:00.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7529" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":277,"completed":45,"skipped":812,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:00.907: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Jan 10 17:18:01.315: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Jan 10 17:18:04.330: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:04.384: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3128" for this suite. +STEP: Destroying namespace "webhook-3128-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":277,"completed":46,"skipped":826,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:04.425: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:18:04.455: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: client-side validation (kubectl create and apply) allows request with known and required properties +Jan 10 17:18:13.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-7472 create -f -' +Jan 10 17:18:13.538: INFO: stderr: "" +Jan 10 17:18:13.538: INFO: stdout: "e2e-test-crd-publish-openapi-4462-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Jan 10 17:18:13.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-7472 delete e2e-test-crd-publish-openapi-4462-crds test-foo' +Jan 10 17:18:13.645: INFO: stderr: "" +Jan 10 17:18:13.646: INFO: stdout: "e2e-test-crd-publish-openapi-4462-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Jan 10 17:18:13.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-7472 apply -f -' +Jan 10 17:18:13.795: INFO: stderr: "" +Jan 10 17:18:13.795: INFO: stdout: "e2e-test-crd-publish-openapi-4462-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Jan 10 17:18:13.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-7472 delete e2e-test-crd-publish-openapi-4462-crds test-foo' +Jan 10 17:18:13.872: INFO: stderr: "" +Jan 10 17:18:13.872: INFO: stdout: "e2e-test-crd-publish-openapi-4462-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Jan 10 17:18:13.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-7472 create -f -' +Jan 10 17:18:14.055: INFO: rc: 1 +Jan 10 17:18:14.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-7472 apply -f -' +Jan 10 17:18:14.239: INFO: rc: 1 
+STEP: client-side validation (kubectl create and apply) rejects request without required properties +Jan 10 17:18:14.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-7472 create -f -' +Jan 10 17:18:14.421: INFO: rc: 1 +Jan 10 17:18:14.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-7472 apply -f -' +Jan 10 17:18:14.603: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Jan 10 17:18:14.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 explain e2e-test-crd-publish-openapi-4462-crds' +Jan 10 17:18:14.787: INFO: stderr: "" +Jan 10 17:18:14.787: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4462-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Jan 10 17:18:14.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 explain e2e-test-crd-publish-openapi-4462-crds.metadata' +Jan 10 17:18:14.977: INFO: stderr: "" +Jan 10 17:18:14.977: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4462-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). 
Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Jan 10 17:18:14.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 explain e2e-test-crd-publish-openapi-4462-crds.spec' +Jan 10 17:18:15.162: INFO: stderr: "" +Jan 10 17:18:15.162: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4462-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Jan 10 17:18:15.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 explain e2e-test-crd-publish-openapi-4462-crds.spec.bars' +Jan 10 17:18:15.351: INFO: stderr: "" +Jan 10 17:18:15.351: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4462-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist +Jan 10 17:18:15.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 explain e2e-test-crd-publish-openapi-4462-crds.spec.bars2' +Jan 10 17:18:15.494: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:19.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7472" for this suite. + +• [SLOW TEST:14.773 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":277,"completed":47,"skipped":836,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:19.198: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test downward api env vars +Jan 10 17:18:19.228: INFO: Waiting up to 5m0s for pod "downward-api-81818546-50c1-495c-8faf-52efb9c7fdc7" in namespace "downward-api-9149" to be "Succeeded or Failed" +Jan 10 17:18:19.229: INFO: Pod "downward-api-81818546-50c1-495c-8faf-52efb9c7fdc7": 
Phase="Pending", Reason="", readiness=false. Elapsed: 1.70266ms +Jan 10 17:18:21.232: INFO: Pod "downward-api-81818546-50c1-495c-8faf-52efb9c7fdc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004037153s +STEP: Saw pod success +Jan 10 17:18:21.232: INFO: Pod "downward-api-81818546-50c1-495c-8faf-52efb9c7fdc7" satisfied condition "Succeeded or Failed" +Jan 10 17:18:21.234: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downward-api-81818546-50c1-495c-8faf-52efb9c7fdc7 container dapi-container: +STEP: delete the pod +Jan 10 17:18:21.248: INFO: Waiting for pod downward-api-81818546-50c1-495c-8faf-52efb9c7fdc7 to disappear +Jan 10 17:18:21.250: INFO: Pod downward-api-81818546-50c1-495c-8faf-52efb9c7fdc7 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:21.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9149" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":277,"completed":48,"skipped":846,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:21.257: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 +[It] should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: creating Agnhost RC +Jan 10 17:18:21.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-7623' +Jan 10 17:18:21.486: INFO: stderr: "" +Jan 10 17:18:21.487: INFO: stdout: "replicationcontroller/agnhost-master created\n" +STEP: Waiting for Agnhost master to start. +Jan 10 17:18:22.489: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 10 17:18:22.489: INFO: Found 0 / 1 +Jan 10 17:18:23.489: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 10 17:18:23.489: INFO: Found 1 / 1 +Jan 10 17:18:23.489: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Jan 10 17:18:23.491: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 10 17:18:23.491: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Jan 10 17:18:23.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 patch pod agnhost-master-bbcfr --namespace=kubectl-7623 -p {"metadata":{"annotations":{"x":"y"}}}' +Jan 10 17:18:23.570: INFO: stderr: "" +Jan 10 17:18:23.570: INFO: stdout: "pod/agnhost-master-bbcfr patched\n" +STEP: checking annotations +Jan 10 17:18:23.572: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 10 17:18:23.572: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:23.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7623" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":277,"completed":49,"skipped":868,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:23.579: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test emptydir 0666 on node default medium +Jan 10 17:18:23.606: INFO: Waiting up to 5m0s for pod "pod-95140c47-b8eb-4dee-9139-1efe7d94eb83" in namespace "emptydir-2933" to be "Succeeded or Failed" +Jan 10 17:18:23.608: INFO: Pod "pod-95140c47-b8eb-4dee-9139-1efe7d94eb83": Phase="Pending", Reason="", readiness=false. Elapsed: 1.666389ms +Jan 10 17:18:25.610: INFO: Pod "pod-95140c47-b8eb-4dee-9139-1efe7d94eb83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004240758s +STEP: Saw pod success +Jan 10 17:18:25.610: INFO: Pod "pod-95140c47-b8eb-4dee-9139-1efe7d94eb83" satisfied condition "Succeeded or Failed" +Jan 10 17:18:25.612: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-95140c47-b8eb-4dee-9139-1efe7d94eb83 container test-container: +STEP: delete the pod +Jan 10 17:18:25.625: INFO: Waiting for pod pod-95140c47-b8eb-4dee-9139-1efe7d94eb83 to disappear +Jan 10 17:18:25.627: INFO: Pod pod-95140c47-b8eb-4dee-9139-1efe7d94eb83 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:25.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2933" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":50,"skipped":869,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:25.634: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 +[It] should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: creating service nodeport-test with type=NodePort in namespace services-3038 +STEP: creating replication controller nodeport-test in namespace services-3038 +I0110 17:18:25.676440 24 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3038, replica count: 2 +Jan 10 17:18:28.726: INFO: Creating new exec pod +I0110 17:18:28.726844 24 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jan 10 17:18:31.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-3038 execpodpqmpg -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' +Jan 10 17:18:31.920: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Jan 10 17:18:31.920: INFO: stdout: "" +Jan 10 17:18:31.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-3038 execpodpqmpg -- /bin/sh -x -c nc -zv -t -w 2 100.69.123.230 80' +Jan 10 17:18:32.088: INFO: stderr: "+ nc -zv -t -w 2 100.69.123.230 80\nConnection to 100.69.123.230 80 port [tcp/http] succeeded!\n" +Jan 10 17:18:32.088: INFO: stdout: "" +Jan 10 17:18:32.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-3038 execpodpqmpg -- /bin/sh -x -c nc -zv -t -w 2 172.20.39.143 32133' +Jan 10 17:18:32.254: INFO: stderr: "+ nc -zv -t -w 2 172.20.39.143 32133\nConnection to 172.20.39.143 32133 port [tcp/32133] succeeded!\n" +Jan 10 17:18:32.254: INFO: stdout: "" +Jan 10 17:18:32.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-3038 execpodpqmpg -- /bin/sh -x -c nc -zv -t -w 2 172.20.33.172 32133' +Jan 10 17:18:32.427: INFO: stderr: "+ nc -zv -t -w 2 172.20.33.172 32133\nConnection to 172.20.33.172 32133 port [tcp/32133] succeeded!\n" +Jan 10 17:18:32.427: INFO: stdout: "" +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:32.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3038" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 + +• [SLOW TEST:6.802 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":277,"completed":51,"skipped":888,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:32.436: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating Pod +STEP: Waiting for the pod running +STEP: Geting the pod +STEP: Reading file content from the nginx-container +Jan 10 17:18:34.473: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1952 PodName:pod-sharedvolume-a7d99b7f-7220-4f0b-9e78-2d4d013c4383 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:18:34.473: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:18:34.588: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:34.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1952" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":277,"completed":52,"skipped":909,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:34.596: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:18:34.621: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:40.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-581" for this suite. + +• [SLOW TEST:5.566 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":277,"completed":53,"skipped":933,"failed":0} +SSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:40.163: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating configMap with name 
configmap-test-volume-map-1e888d38-0cb0-4cba-b40c-57f65a62f32c +STEP: Creating a pod to test consume configMaps +Jan 10 17:18:40.240: INFO: Waiting up to 5m0s for pod "pod-configmaps-93d49b99-c90f-4379-bcb4-4ff284347d02" in namespace "configmap-2574" to be "Succeeded or Failed" +Jan 10 17:18:40.242: INFO: Pod "pod-configmaps-93d49b99-c90f-4379-bcb4-4ff284347d02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.001941ms +Jan 10 17:18:42.245: INFO: Pod "pod-configmaps-93d49b99-c90f-4379-bcb4-4ff284347d02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004740853s +STEP: Saw pod success +Jan 10 17:18:42.245: INFO: Pod "pod-configmaps-93d49b99-c90f-4379-bcb4-4ff284347d02" satisfied condition "Succeeded or Failed" +Jan 10 17:18:42.247: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-configmaps-93d49b99-c90f-4379-bcb4-4ff284347d02 container configmap-volume-test: +STEP: delete the pod +Jan 10 17:18:42.261: INFO: Waiting for pod pod-configmaps-93d49b99-c90f-4379-bcb4-4ff284347d02 to disappear +Jan 10 17:18:42.265: INFO: Pod pod-configmaps-93d49b99-c90f-4379-bcb4-4ff284347d02 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:42.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2574" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":54,"skipped":937,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:42.272: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Jan 10 17:18:42.298: INFO: Waiting up to 5m0s for pod "pod-f1c9bb3f-46a7-452c-8e72-6d260847388a" in namespace "emptydir-3763" to be "Succeeded or Failed" +Jan 10 17:18:42.300: INFO: Pod "pod-f1c9bb3f-46a7-452c-8e72-6d260847388a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.65213ms +Jan 10 17:18:44.302: INFO: Pod "pod-f1c9bb3f-46a7-452c-8e72-6d260847388a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.003883911s +STEP: Saw pod success +Jan 10 17:18:44.302: INFO: Pod "pod-f1c9bb3f-46a7-452c-8e72-6d260847388a" satisfied condition "Succeeded or Failed" +Jan 10 17:18:44.304: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-f1c9bb3f-46a7-452c-8e72-6d260847388a container test-container: +STEP: delete the pod +Jan 10 17:18:44.319: INFO: Waiting for pod pod-f1c9bb3f-46a7-452c-8e72-6d260847388a to disappear +Jan 10 17:18:44.321: INFO: Pod pod-f1c9bb3f-46a7-452c-8e72-6d260847388a no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:44.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3763" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":55,"skipped":974,"failed":0} +SS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:44.327: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:18:44.346: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Jan 10 17:18:46.368: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:47.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-3929" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":277,"completed":56,"skipped":976,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:47.383: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating configMap with name cm-test-opt-del-f3c4b2d7-3af5-49ae-99e1-b06b3ca39d71 +STEP: Creating configMap with name cm-test-opt-upd-256e92f8-99ae-46a5-bafc-dfbcd7b86ace +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-f3c4b2d7-3af5-49ae-99e1-b06b3ca39d71 +STEP: Updating configmap cm-test-opt-upd-256e92f8-99ae-46a5-bafc-dfbcd7b86ace +STEP: Creating configMap with name cm-test-opt-create-e58ef60c-447d-4571-ac4c-3175db4ea417 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:53.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-807" for this suite. 
+ +• [SLOW TEST:6.111 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":57,"skipped":1014,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:53.495: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test downward API volume plugin +Jan 10 17:18:53.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8a71049-9ed8-4edc-8123-d499e354e476" in namespace "downward-api-4622" to be "Succeeded or Failed" +Jan 10 17:18:53.532: INFO: Pod "downwardapi-volume-e8a71049-9ed8-4edc-8123-d499e354e476": Phase="Pending", Reason="", readiness=false. Elapsed: 1.812396ms +Jan 10 17:18:55.535: INFO: Pod "downwardapi-volume-e8a71049-9ed8-4edc-8123-d499e354e476": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004558123s +STEP: Saw pod success +Jan 10 17:18:55.535: INFO: Pod "downwardapi-volume-e8a71049-9ed8-4edc-8123-d499e354e476" satisfied condition "Succeeded or Failed" +Jan 10 17:18:55.537: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-e8a71049-9ed8-4edc-8123-d499e354e476 container client-container: +STEP: delete the pod +Jan 10 17:18:55.551: INFO: Waiting for pod downwardapi-volume-e8a71049-9ed8-4edc-8123-d499e354e476 to disappear +Jan 10 17:18:55.555: INFO: Pod downwardapi-volume-e8a71049-9ed8-4edc-8123-d499e354e476 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:55.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4622" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":277,"completed":58,"skipped":1028,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:55.562: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Jan 10 17:18:56.197: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Jan 10 17:18:59.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:18:59.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-716" for this suite. +STEP: Destroying namespace "webhook-716-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":277,"completed":59,"skipped":1032,"failed":0} +S +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:18:59.382: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 +STEP: Creating service test in namespace statefulset-6923 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating stateful set ss in namespace statefulset-6923 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6923 +Jan 10 17:18:59.423: INFO: Found 0 stateful pods, waiting for 1 +Jan 10 17:19:09.426: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Jan 10 17:19:09.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-6923 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 10 17:19:09.593: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 10 17:19:09.593: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 10 17:19:09.593: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 10 17:19:09.595: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Jan 10 17:19:19.598: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jan 10 17:19:19.598: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 10 17:19:19.607: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 10 17:19:19.607: INFO: ss-0 ip-172-20-52-46.ap-south-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2021-01-10 17:18:59 +0000 UTC }] +Jan 10 17:19:19.607: INFO: +Jan 10 17:19:19.607: INFO: StatefulSet ss has not reached scale 3, at 1 +Jan 10 17:19:20.610: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997385847s +Jan 10 17:19:21.613: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99429166s +Jan 10 17:19:22.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.991282469s +Jan 10 17:19:23.619: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.988338115s +Jan 10 17:19:24.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.985350202s +Jan 10 17:19:25.625: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.982506897s +Jan 10 17:19:26.628: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.979560452s +Jan 10 17:19:27.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.976544339s +Jan 10 17:19:28.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 973.622765ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6923 +Jan 10 17:19:29.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-6923 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 10 17:19:29.808: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jan 10 17:19:29.808: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 10 17:19:29.808: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 10 17:19:29.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-6923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 10 17:19:30.009: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jan 10 17:19:30.009: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 10 17:19:30.009: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 10 17:19:30.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-6923 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 10 17:19:30.189: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jan 10 17:19:30.189: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 10 17:19:30.189: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 10 17:19:30.191: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jan 10 17:19:30.191: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Jan 10 17:19:30.191: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Jan 10 17:19:30.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-6923 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 10 17:19:30.371: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 10 17:19:30.371: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 10 17:19:30.371: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 10 17:19:30.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-6923 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 10 17:19:30.539: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 10 17:19:30.539: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 10 17:19:30.539: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 10 17:19:30.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-6923 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 10 17:19:30.709: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 10 17:19:30.709: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 10 17:19:30.709: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 10 17:19:30.709: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 10 17:19:30.711: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Jan 10 17:19:40.716: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jan 10 17:19:40.716: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jan 10 17:19:40.716: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jan 10 17:19:40.726: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 10 17:19:40.726: INFO: ss-0 ip-172-20-52-46.ap-south-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:18:59 +0000 UTC }] +Jan 10 17:19:40.726: INFO: ss-1 ip-172-20-33-172.ap-south-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:40.726: INFO: ss-2 ip-172-20-39-143.ap-south-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:40.726: INFO: +Jan 10 17:19:40.726: INFO: StatefulSet ss has not reached scale 0, at 3 +Jan 10 17:19:41.730: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 10 17:19:41.730: INFO: ss-0 ip-172-20-52-46.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:18:59 +0000 UTC }] +Jan 10 17:19:41.730: INFO: ss-1 ip-172-20-33-172.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:41.730: INFO: ss-2 ip-172-20-39-143.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:41.730: INFO: +Jan 10 17:19:41.730: INFO: StatefulSet ss has not reached scale 0, at 3 +Jan 10 17:19:42.733: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 10 17:19:42.733: INFO: ss-0 ip-172-20-52-46.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:18:59 +0000 UTC }] +Jan 10 17:19:42.733: INFO: ss-1 ip-172-20-33-172.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:42.733: INFO: ss-2 ip-172-20-39-143.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 
17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:42.733: INFO: +Jan 10 17:19:42.733: INFO: StatefulSet ss has not reached scale 0, at 3 +Jan 10 17:19:43.736: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 10 17:19:43.736: INFO: ss-2 ip-172-20-39-143.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:43.736: INFO: +Jan 10 17:19:43.736: INFO: StatefulSet ss has not reached scale 0, at 1 +Jan 10 17:19:44.739: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 10 17:19:44.739: INFO: ss-2 ip-172-20-39-143.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:44.739: INFO: +Jan 10 17:19:44.739: INFO: StatefulSet ss has not reached scale 0, at 1 +Jan 10 17:19:45.741: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 10 17:19:45.741: INFO: ss-2 ip-172-20-39-143.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:45.741: INFO: +Jan 10 17:19:45.741: INFO: StatefulSet ss has not reached scale 0, at 1 +Jan 10 17:19:46.744: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 10 17:19:46.744: INFO: ss-2 ip-172-20-39-143.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:46.744: INFO: +Jan 10 17:19:46.744: INFO: StatefulSet ss has not reached scale 0, at 1 +Jan 10 17:19:47.747: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 10 17:19:47.747: INFO: ss-2 ip-172-20-39-143.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:47.747: INFO: +Jan 10 17:19:47.747: INFO: StatefulSet ss has not reached scale 0, at 1 +Jan 10 17:19:48.749: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 10 17:19:48.749: INFO: ss-2 ip-172-20-39-143.ap-south-1.compute.internal Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-10 17:19:19 +0000 UTC }] +Jan 10 17:19:48.749: INFO: +Jan 10 17:19:48.749: INFO: StatefulSet ss has not reached scale 0, at 1 +Jan 10 17:19:49.752: INFO: Verifying statefulset ss doesn't scale past 0 for another 972.550097ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6923 +Jan 10 17:19:50.754: INFO: Scaling statefulset ss to 0 +Jan 10 17:19:50.760: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 +Jan 10 17:19:50.762: INFO: Deleting all statefulset in ns statefulset-6923 +Jan 10 17:19:50.763: INFO: Scaling statefulset ss to 0 +Jan 10 17:19:50.769: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 10 17:19:50.771: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:19:50.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-6923" for this suite. 
+ +• [SLOW TEST:51.403 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":277,"completed":60,"skipped":1033,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:19:50.786: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Jan 10 17:19:53.831: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:19:54.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-5832" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":277,"completed":61,"skipped":1056,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:19:54.849: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating configMap with name cm-test-opt-del-ec975f54-ad8c-4c5b-90b2-5164bc5777e9 +STEP: Creating configMap with name cm-test-opt-upd-cc1f07f5-aa6e-4e90-bff4-1d942ae8ecad +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-ec975f54-ad8c-4c5b-90b2-5164bc5777e9 +STEP: Updating configmap cm-test-opt-upd-cc1f07f5-aa6e-4e90-bff4-1d942ae8ecad +STEP: Creating configMap with name cm-test-opt-create-0241790d-7322-485b-98e4-698dfe27b2b3 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:21:21.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3347" for this suite. 
+ +• [SLOW TEST:86.387 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":62,"skipped":1067,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:21:21.237: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-5485 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-5485 +I0110 17:21:21.285341 24 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5485, replica count: 2 +Jan 10 17:21:24.335: INFO: Creating new exec pod +I0110 17:21:24.335749 24 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jan 10 17:21:27.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-5485 execpodgjj54 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' +Jan 10 17:21:27.524: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Jan 10 17:21:27.524: INFO: stdout: "" +Jan 10 17:21:27.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-5485 execpodgjj54 -- /bin/sh -x -c nc -zv -t -w 2 100.68.3.172 80' +Jan 10 17:21:27.709: INFO: stderr: "+ nc -zv -t -w 2 100.68.3.172 80\nConnection to 100.68.3.172 80 port [tcp/http] succeeded!\n" +Jan 10 17:21:27.709: INFO: stdout: "" +Jan 10 17:21:27.709: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:21:27.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5485" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 + +• [SLOW TEST:6.497 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":277,"completed":63,"skipped":1084,"failed":0} +SSSSSSSSSS +------------------------------ +[k8s.io] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:21:27.734: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +STEP: Creating hostNetwork=true pod +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Jan 10 17:21:31.784: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:21:31.784: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:21:31.891: INFO: Exec stderr: "" +Jan 10 17:21:31.891: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:21:31.891: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:21:31.985: INFO: Exec stderr: "" +Jan 10 17:21:31.986: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:21:31.986: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:21:32.076: INFO: Exec stderr: "" +Jan 10 17:21:32.076: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:21:32.076: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:21:32.165: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Jan 10 17:21:32.165: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:21:32.165: INFO: >>> 
kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:21:32.251: INFO: Exec stderr: "" +Jan 10 17:21:32.252: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:21:32.252: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:21:32.343: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Jan 10 17:21:32.343: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:21:32.343: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:21:32.453: INFO: Exec stderr: "" +Jan 10 17:21:32.453: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:21:32.453: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:21:32.553: INFO: Exec stderr: "" +Jan 10 17:21:32.553: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:21:32.553: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:21:32.643: INFO: Exec stderr: "" +Jan 10 17:21:32.643: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:21:32.643: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:21:32.733: INFO: Exec stderr: "" +[AfterEach] [k8s.io] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:21:32.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-2219" for this suite. 
+ +• [SLOW TEST:5.007 seconds] +[k8s.io] KubeletManagedEtcHosts +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":64,"skipped":1094,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Secrets + should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:21:32.742: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-api-machinery] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:21:32.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9075" for this suite. +•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":277,"completed":65,"skipped":1139,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:21:32.794: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating projection with secret that has name projected-secret-test-fee2213b-4253-4c6f-9b9d-91c0a60d0834 +STEP: Creating a pod to test consume secrets +Jan 10 17:21:32.834: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8832457a-1f71-43a5-a3ff-e2a38c025f5f" in namespace "projected-4612" to be "Succeeded or Failed" +Jan 10 17:21:32.836: INFO: Pod "pod-projected-secrets-8832457a-1f71-43a5-a3ff-e2a38c025f5f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.696323ms +Jan 10 17:21:34.839: INFO: Pod "pod-projected-secrets-8832457a-1f71-43a5-a3ff-e2a38c025f5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004223909s +STEP: Saw pod success +Jan 10 17:21:34.839: INFO: Pod "pod-projected-secrets-8832457a-1f71-43a5-a3ff-e2a38c025f5f" satisfied condition "Succeeded or Failed" +Jan 10 17:21:34.840: INFO: Trying to get logs from node ip-172-20-39-143.ap-south-1.compute.internal pod pod-projected-secrets-8832457a-1f71-43a5-a3ff-e2a38c025f5f container projected-secret-volume-test: +STEP: delete the pod +Jan 10 17:21:34.861: INFO: Waiting for pod pod-projected-secrets-8832457a-1f71-43a5-a3ff-e2a38c025f5f to disappear +Jan 10 17:21:34.863: INFO: Pod pod-projected-secrets-8832457a-1f71-43a5-a3ff-e2a38c025f5f no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:21:34.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4612" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":277,"completed":66,"skipped":1168,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:21:34.870: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: create the pod with lifecycle hook +STEP: delete the pod with lifecycle hook +Jan 10 17:21:38.920: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jan 10 17:21:38.922: INFO: Pod pod-with-prestop-exec-hook still exists +Jan 10 17:21:40.922: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jan 10 17:21:40.924: INFO: Pod pod-with-prestop-exec-hook still exists +Jan 10 17:21:42.922: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jan 10 17:21:42.924: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:21:42.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-3852" for this suite. 
+ +• [SLOW TEST:8.068 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":277,"completed":67,"skipped":1202,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:21:42.938: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Jan 10 17:21:43.741: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Jan 10 17:21:46.758: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API +STEP: create a pod that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:21:46.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5830" for this suite. +STEP: Destroying namespace "webhook-5830-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":277,"completed":68,"skipped":1232,"failed":0} +SSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:21:46.848: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:21:48.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-649" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":277,"completed":69,"skipped":1237,"failed":0} +SSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:21:48.917: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating pod busybox-63755086-6409-4c71-82a4-e4fd2dd9d0dd in namespace container-probe-1363 +Jan 10 17:21:50.947: INFO: Started pod busybox-63755086-6409-4c71-82a4-e4fd2dd9d0dd in namespace container-probe-1363 +STEP: checking the pod's current state and verifying that restartCount is present +Jan 10 17:21:50.949: INFO: Initial restart count of pod busybox-63755086-6409-4c71-82a4-e4fd2dd9d0dd is 0 +Jan 10 17:22:43.017: INFO: Restart count of pod container-probe-1363/busybox-63755086-6409-4c71-82a4-e4fd2dd9d0dd is now 1 (52.068410771s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 
17:22:43.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1363" for this suite. + +• [SLOW TEST:54.118 seconds] +[k8s.io] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":277,"completed":70,"skipped":1249,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:22:43.036: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test emptydir volume type on node default medium +Jan 10 17:22:43.064: INFO: Waiting up to 5m0s for pod "pod-67a9032b-ac76-472f-8747-82f2c770dd9d" in namespace "emptydir-2074" to be "Succeeded or Failed" +Jan 10 17:22:43.065: INFO: Pod "pod-67a9032b-ac76-472f-8747-82f2c770dd9d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.712737ms +Jan 10 17:22:45.068: INFO: Pod "pod-67a9032b-ac76-472f-8747-82f2c770dd9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004296412s +STEP: Saw pod success +Jan 10 17:22:45.068: INFO: Pod "pod-67a9032b-ac76-472f-8747-82f2c770dd9d" satisfied condition "Succeeded or Failed" +Jan 10 17:22:45.070: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-67a9032b-ac76-472f-8747-82f2c770dd9d container test-container: +STEP: delete the pod +Jan 10 17:22:45.084: INFO: Waiting for pod pod-67a9032b-ac76-472f-8747-82f2c770dd9d to disappear +Jan 10 17:22:45.086: INFO: Pod pod-67a9032b-ac76-472f-8747-82f2c770dd9d no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:22:45.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2074" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":71,"skipped":1250,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:22:45.092: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating pod pod-subpath-test-secret-c2zz +STEP: Creating a pod to test atomic-volume-subpath +Jan 10 17:22:45.128: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-c2zz" in namespace "subpath-1353" to be "Succeeded or Failed" +Jan 10 17:22:45.130: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Pending", Reason="", readiness=false. Elapsed: 1.939086ms +Jan 10 17:22:47.132: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Running", Reason="", readiness=true. Elapsed: 2.004418083s +Jan 10 17:22:49.135: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Running", Reason="", readiness=true. Elapsed: 4.006812844s +Jan 10 17:22:51.137: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Running", Reason="", readiness=true. Elapsed: 6.009398093s +Jan 10 17:22:53.140: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Running", Reason="", readiness=true. Elapsed: 8.011967246s +Jan 10 17:22:55.142: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Running", Reason="", readiness=true. Elapsed: 10.014580142s +Jan 10 17:22:57.145: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Running", Reason="", readiness=true. Elapsed: 12.017204453s +Jan 10 17:22:59.148: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Running", Reason="", readiness=true. Elapsed: 14.019801339s +Jan 10 17:23:01.150: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Running", Reason="", readiness=true. Elapsed: 16.022555791s +Jan 10 17:23:03.153: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Running", Reason="", readiness=true. Elapsed: 18.025157639s +Jan 10 17:23:05.155: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Running", Reason="", readiness=true. Elapsed: 20.027711067s +Jan 10 17:23:07.158: INFO: Pod "pod-subpath-test-secret-c2zz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.030317658s +STEP: Saw pod success +Jan 10 17:23:07.158: INFO: Pod "pod-subpath-test-secret-c2zz" satisfied condition "Succeeded or Failed" +Jan 10 17:23:07.160: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-subpath-test-secret-c2zz container test-container-subpath-secret-c2zz: +STEP: delete the pod +Jan 10 17:23:07.180: INFO: Waiting for pod pod-subpath-test-secret-c2zz to disappear +Jan 10 17:23:07.182: INFO: Pod pod-subpath-test-secret-c2zz no longer exists +STEP: Deleting pod pod-subpath-test-secret-c2zz +Jan 10 17:23:07.182: INFO: Deleting pod "pod-subpath-test-secret-c2zz" in namespace "subpath-1353" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:23:07.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-1353" for this suite. + +• [SLOW TEST:22.098 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":277,"completed":72,"skipped":1261,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:23:07.191: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:23:07.214: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Jan 10 17:23:15.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-3993 create -f -' +Jan 10 17:23:16.375: INFO: stderr: "" +Jan 10 17:23:16.375: INFO: stdout: "e2e-test-crd-publish-openapi-8415-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Jan 10 17:23:16.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-3993 delete e2e-test-crd-publish-openapi-8415-crds test-cr' +Jan 10 17:23:16.449: INFO: stderr: "" +Jan 10 17:23:16.450: INFO: stdout: "e2e-test-crd-publish-openapi-8415-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" 
+Jan 10 17:23:16.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-3993 apply -f -' +Jan 10 17:23:16.668: INFO: stderr: "" +Jan 10 17:23:16.668: INFO: stdout: "e2e-test-crd-publish-openapi-8415-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Jan 10 17:23:16.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 --namespace=crd-publish-openapi-3993 delete e2e-test-crd-publish-openapi-8415-crds test-cr' +Jan 10 17:23:16.745: INFO: stderr: "" +Jan 10 17:23:16.745: INFO: stdout: "e2e-test-crd-publish-openapi-8415-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Jan 10 17:23:16.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 explain e2e-test-crd-publish-openapi-8415-crds' +Jan 10 17:23:16.941: INFO: stderr: "" +Jan 10 17:23:16.941: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8415-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:23:20.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-3993" for this suite. 
+ +• [SLOW TEST:13.450 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":277,"completed":73,"skipped":1312,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:23:20.641: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating configMap with name projected-configmap-test-volume-cfe30dfd-c56b-477b-9fa8-86583b6a00ed +STEP: Creating a pod to test consume configMaps +Jan 10 17:23:20.673: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f486799a-33a4-434f-8593-27ed9bf9402b" in namespace "projected-4886" to be "Succeeded or Failed" +Jan 10 17:23:20.675: INFO: Pod "pod-projected-configmaps-f486799a-33a4-434f-8593-27ed9bf9402b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.417646ms +Jan 10 17:23:22.678: INFO: Pod "pod-projected-configmaps-f486799a-33a4-434f-8593-27ed9bf9402b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005157719s +STEP: Saw pod success +Jan 10 17:23:22.678: INFO: Pod "pod-projected-configmaps-f486799a-33a4-434f-8593-27ed9bf9402b" satisfied condition "Succeeded or Failed" +Jan 10 17:23:22.680: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-configmaps-f486799a-33a4-434f-8593-27ed9bf9402b container projected-configmap-volume-test: +STEP: delete the pod +Jan 10 17:23:22.694: INFO: Waiting for pod pod-projected-configmaps-f486799a-33a4-434f-8593-27ed9bf9402b to disappear +Jan 10 17:23:22.696: INFO: Pod pod-projected-configmaps-f486799a-33a4-434f-8593-27ed9bf9402b no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:23:22.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4886" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":277,"completed":74,"skipped":1348,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:23:22.705: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Jan 10 17:23:25.249: INFO: Successfully updated pod "adopt-release-dv9gw" +STEP: Checking that the Job readopts the Pod +Jan 10 17:23:25.249: INFO: Waiting up to 15m0s for pod "adopt-release-dv9gw" in namespace "job-6125" to be "adopted" +Jan 10 17:23:25.251: INFO: Pod "adopt-release-dv9gw": Phase="Running", Reason="", readiness=true. Elapsed: 1.865204ms +Jan 10 17:23:27.253: INFO: Pod "adopt-release-dv9gw": Phase="Running", Reason="", readiness=true. Elapsed: 2.004376222s +Jan 10 17:23:27.253: INFO: Pod "adopt-release-dv9gw" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Jan 10 17:23:27.760: INFO: Successfully updated pod "adopt-release-dv9gw" +STEP: Checking that the Job releases the Pod +Jan 10 17:23:27.761: INFO: Waiting up to 15m0s for pod "adopt-release-dv9gw" in namespace "job-6125" to be "released" +Jan 10 17:23:27.762: INFO: Pod "adopt-release-dv9gw": Phase="Running", Reason="", readiness=true. Elapsed: 1.846313ms +Jan 10 17:23:29.765: INFO: Pod "adopt-release-dv9gw": Phase="Running", Reason="", readiness=true. Elapsed: 2.004455097s +Jan 10 17:23:29.765: INFO: Pod "adopt-release-dv9gw" satisfied condition "released" +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:23:29.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-6125" for this suite. 
+ +• [SLOW TEST:7.068 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":277,"completed":75,"skipped":1361,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:23:29.773: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: creating the pod +Jan 10 17:23:29.796: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:23:32.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-5526" for this suite. 
+•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":277,"completed":76,"skipped":1387,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:23:32.856: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Jan 10 17:23:32.877: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:23:41.595: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:23:58.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1791" for this suite. + +• [SLOW TEST:25.273 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":277,"completed":77,"skipped":1396,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:23:58.132: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 +[It] deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:23:58.160: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Jan 10 17:24:03.163: INFO: Pod name cleanup-pod: Found 1 pods out of 1 
+STEP: ensuring each pod is running +Jan 10 17:24:03.163: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 +Jan 10 17:24:03.174: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-4356 /apis/apps/v1/namespaces/deployment-4356/deployments/test-cleanup-deployment 72451f0d-3167-4b43-aa90-7c1ba289530a 9388 1 2021-01-10 17:24:03 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-01-10 17:24:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00683a938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + +Jan 10 17:24:03.176: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. +Jan 10 17:24:03.176: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Jan 10 17:24:03.176: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4356 /apis/apps/v1/namespaces/deployment-4356/replicasets/test-cleanup-controller 2f296d09-6fbf-4310-97ed-dd2a2d3839f4 9389 1 2021-01-10 17:23:58 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 72451f0d-3167-4b43-aa90-7c1ba289530a 0xc0074ae6b7 0xc0074ae6b8}] [] [{e2e.test Update apps/v1 2021-01-10 17:23:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-01-10 
17:24:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 50 52 53 49 102 48 100 45 51 49 54 55 45 52 98 52 51 45 97 97 57 48 45 55 99 49 98 97 50 56 57 53 51 48 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0074ae758 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jan 10 17:24:03.179: INFO: Pod "test-cleanup-controller-jbwtc" is available: +&Pod{ObjectMeta:{test-cleanup-controller-jbwtc test-cleanup-controller- deployment-4356 /api/v1/namespaces/deployment-4356/pods/test-cleanup-controller-jbwtc 9b422c21-926c-49b2-9672-fe1836d419b9 9366 0 2021-01-10 17:23:58 +0000 UTC map[name:cleanup-pod pod:httpd] map[cni.projectcalico.org/podIP:100.108.158.137/32 cni.projectcalico.org/podIPs:100.108.158.137/32] [{apps/v1 ReplicaSet test-cleanup-controller 2f296d09-6fbf-4310-97ed-dd2a2d3839f4 0xc0074aeacf 0xc0074aeae0}] [] [{calico Update v1 2021-01-10 17:23:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2021-01-10 17:23:58 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 
101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 102 50 57 54 100 48 57 45 54 102 98 102 45 52 51 49 48 45 57 55 101 100 45 100 100 50 97 50 100 51 56 51 57 102 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-01-10 17:23:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 
112 92 34 58 92 34 49 48 48 46 49 48 56 46 49 53 56 46 49 51 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-64kjp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-64kjp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-64kjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:23:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:23:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:23:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:23:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.33.172,PodIP:100.108.158.137,StartTime:2021-01-10 17:23:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:23:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://21961ed2732236e7c5b4f983ea637310edebe98c13ebf05d26daa18a328bc4f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.108.158.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:24:03.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4356" for this suite. + +• [SLOW TEST:5.057 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":277,"completed":78,"skipped":1411,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:24:03.189: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: creating a Namespace +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:24:03.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-4107" for this suite. +STEP: Destroying namespace "nspatchtest-bcf43999-5323-47d6-b17b-63c9c3c778d1-6732" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":277,"completed":79,"skipped":1431,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:24:03.257: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating configMap configmap-5251/configmap-test-280950aa-245b-4db2-8ebb-056f0a683d15 +STEP: Creating a pod to test consume configMaps +Jan 10 17:24:03.290: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2d94527-7a48-4358-a3e2-1394d09b09fc" in namespace "configmap-5251" to be "Succeeded or Failed" +Jan 10 17:24:03.292: INFO: Pod "pod-configmaps-a2d94527-7a48-4358-a3e2-1394d09b09fc": Phase="Pending", Reason="", readiness=false. Elapsed: 1.835074ms +Jan 10 17:24:05.294: INFO: Pod "pod-configmaps-a2d94527-7a48-4358-a3e2-1394d09b09fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004040113s +STEP: Saw pod success +Jan 10 17:24:05.294: INFO: Pod "pod-configmaps-a2d94527-7a48-4358-a3e2-1394d09b09fc" satisfied condition "Succeeded or Failed" +Jan 10 17:24:05.296: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-configmaps-a2d94527-7a48-4358-a3e2-1394d09b09fc container env-test: +STEP: delete the pod +Jan 10 17:24:05.316: INFO: Waiting for pod pod-configmaps-a2d94527-7a48-4358-a3e2-1394d09b09fc to disappear +Jan 10 17:24:05.318: INFO: Pod pod-configmaps-a2d94527-7a48-4358-a3e2-1394d09b09fc no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:24:05.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5251" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":277,"completed":80,"skipped":1450,"failed":0} +SSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:24:05.328: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 +[It] should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: creating a replication controller +Jan 10 17:24:05.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-4454' +Jan 10 17:24:05.548: INFO: stderr: "" +Jan 10 17:24:05.548: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jan 10 17:24:05.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4454' +Jan 10 17:24:05.626: INFO: stderr: "" +Jan 10 17:24:05.626: INFO: stdout: "update-demo-nautilus-7xh28 update-demo-nautilus-xqk9r " +Jan 10 17:24:05.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-7xh28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:05.696: INFO: stderr: "" +Jan 10 17:24:05.696: INFO: stdout: "" +Jan 10 17:24:05.696: INFO: update-demo-nautilus-7xh28 is created but not running +Jan 10 17:24:10.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4454' +Jan 10 17:24:10.773: INFO: stderr: "" +Jan 10 17:24:10.773: INFO: stdout: "update-demo-nautilus-7xh28 update-demo-nautilus-xqk9r " +Jan 10 17:24:10.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-7xh28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:10.845: INFO: stderr: "" +Jan 10 17:24:10.845: INFO: stdout: "true" +Jan 10 17:24:10.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-7xh28 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:10.916: INFO: stderr: "" +Jan 10 17:24:10.916: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jan 10 17:24:10.916: INFO: validating pod update-demo-nautilus-7xh28 +Jan 10 17:24:10.919: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 10 17:24:10.919: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 10 17:24:10.919: INFO: update-demo-nautilus-7xh28 is verified up and running +Jan 10 17:24:10.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-xqk9r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:10.990: INFO: stderr: "" +Jan 10 17:24:10.990: INFO: stdout: "true" +Jan 10 17:24:10.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-xqk9r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:11.062: INFO: stderr: "" +Jan 10 17:24:11.062: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jan 10 17:24:11.062: INFO: validating pod update-demo-nautilus-xqk9r +Jan 10 17:24:11.065: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 10 17:24:11.065: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 10 17:24:11.065: INFO: update-demo-nautilus-xqk9r is verified up and running +STEP: scaling down the replication controller +Jan 10 17:24:11.067: INFO: scanned /root for discovery docs: +Jan 10 17:24:11.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4454' +Jan 10 17:24:12.155: INFO: stderr: "" +Jan 10 17:24:12.155: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Jan 10 17:24:12.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4454' +Jan 10 17:24:12.230: INFO: stderr: "" +Jan 10 17:24:12.230: INFO: stdout: "update-demo-nautilus-7xh28 update-demo-nautilus-xqk9r " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Jan 10 17:24:17.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4454' +Jan 10 17:24:17.306: INFO: stderr: "" +Jan 10 17:24:17.306: INFO: stdout: "update-demo-nautilus-7xh28 update-demo-nautilus-xqk9r " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Jan 10 17:24:22.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4454' +Jan 10 17:24:22.382: INFO: stderr: "" +Jan 10 17:24:22.382: INFO: stdout: "update-demo-nautilus-7xh28 " +Jan 10 17:24:22.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-7xh28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:22.484: INFO: stderr: "" +Jan 10 17:24:22.484: INFO: stdout: "true" +Jan 10 17:24:22.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-7xh28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:22.555: INFO: stderr: "" +Jan 10 17:24:22.555: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jan 10 17:24:22.555: INFO: validating pod update-demo-nautilus-7xh28 +Jan 10 17:24:22.560: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 10 17:24:22.560: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 10 17:24:22.560: INFO: update-demo-nautilus-7xh28 is verified up and running +STEP: scaling up the replication controller +Jan 10 17:24:22.562: INFO: scanned /root for discovery docs: +Jan 10 17:24:22.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4454' +Jan 10 17:24:23.652: INFO: stderr: "" +Jan 10 17:24:23.652: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jan 10 17:24:23.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4454' +Jan 10 17:24:23.730: INFO: stderr: "" +Jan 10 17:24:23.730: INFO: stdout: "update-demo-nautilus-7xh28 update-demo-nautilus-hqtx5 " +Jan 10 17:24:23.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-7xh28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:23.802: INFO: stderr: "" +Jan 10 17:24:23.802: INFO: stdout: "true" +Jan 10 17:24:23.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-7xh28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:23.871: INFO: stderr: "" +Jan 10 17:24:23.871: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jan 10 17:24:23.871: INFO: validating pod update-demo-nautilus-7xh28 +Jan 10 17:24:23.874: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 10 17:24:23.874: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 10 17:24:23.874: INFO: update-demo-nautilus-7xh28 is verified up and running +Jan 10 17:24:23.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-hqtx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:23.948: INFO: stderr: "" +Jan 10 17:24:23.948: INFO: stdout: "" +Jan 10 17:24:23.948: INFO: update-demo-nautilus-hqtx5 is created but not running +Jan 10 17:24:28.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4454' +Jan 10 17:24:29.025: INFO: stderr: "" +Jan 10 17:24:29.025: INFO: stdout: "update-demo-nautilus-7xh28 update-demo-nautilus-hqtx5 " +Jan 10 17:24:29.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-7xh28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:29.094: INFO: stderr: "" +Jan 10 17:24:29.094: INFO: stdout: "true" +Jan 10 17:24:29.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-7xh28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:29.165: INFO: stderr: "" +Jan 10 17:24:29.165: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jan 10 17:24:29.165: INFO: validating pod update-demo-nautilus-7xh28 +Jan 10 17:24:29.168: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 10 17:24:29.168: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 10 17:24:29.168: INFO: update-demo-nautilus-7xh28 is verified up and running +Jan 10 17:24:29.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-hqtx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:29.239: INFO: stderr: "" +Jan 10 17:24:29.239: INFO: stdout: "true" +Jan 10 17:24:29.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods update-demo-nautilus-hqtx5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4454' +Jan 10 17:24:29.312: INFO: stderr: "" +Jan 10 17:24:29.312: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jan 10 17:24:29.312: INFO: validating pod update-demo-nautilus-hqtx5 +Jan 10 17:24:29.315: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 10 17:24:29.315: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 10 17:24:29.315: INFO: update-demo-nautilus-hqtx5 is verified up and running +STEP: using delete to clean up resources +Jan 10 17:24:29.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete --grace-period=0 --force -f - --namespace=kubectl-4454' +Jan 10 17:24:29.395: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jan 10 17:24:29.395: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jan 10 17:24:29.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4454' +Jan 10 17:24:29.474: INFO: stderr: "No resources found in kubectl-4454 namespace.\n" +Jan 10 17:24:29.474: INFO: stdout: "" +Jan 10 17:24:29.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -l name=update-demo --namespace=kubectl-4454 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jan 10 17:24:29.550: INFO: stderr: "" +Jan 10 17:24:29.550: INFO: stdout: "update-demo-nautilus-7xh28\nupdate-demo-nautilus-hqtx5\n" +Jan 10 17:24:30.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4454' +Jan 10 17:24:30.129: INFO: stderr: "No resources found in kubectl-4454 namespace.\n" +Jan 10 17:24:30.129: INFO: stdout: "" +Jan 10 17:24:30.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pods -l name=update-demo --namespace=kubectl-4454 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jan 10 17:24:30.205: INFO: stderr: "" +Jan 10 17:24:30.205: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:24:30.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4454" for this suite. 
+ +• [SLOW TEST:24.884 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":277,"completed":81,"skipped":1455,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:24:30.213: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating projection with secret that has name projected-secret-test-d19e56e6-1eb3-4ffb-bc08-5dcb85f64ee0 +STEP: Creating a pod to test consume secrets +Jan 10 17:24:30.243: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-49f81aac-c94e-4bc6-a547-27069140d572" in namespace "projected-370" to be "Succeeded or Failed" +Jan 10 17:24:30.244: INFO: Pod "pod-projected-secrets-49f81aac-c94e-4bc6-a547-27069140d572": Phase="Pending", Reason="", readiness=false. Elapsed: 1.71977ms +Jan 10 17:24:32.247: INFO: Pod "pod-projected-secrets-49f81aac-c94e-4bc6-a547-27069140d572": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004287986s +STEP: Saw pod success +Jan 10 17:24:32.247: INFO: Pod "pod-projected-secrets-49f81aac-c94e-4bc6-a547-27069140d572" satisfied condition "Succeeded or Failed" +Jan 10 17:24:32.249: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-secrets-49f81aac-c94e-4bc6-a547-27069140d572 container projected-secret-volume-test: +STEP: delete the pod +Jan 10 17:24:32.263: INFO: Waiting for pod pod-projected-secrets-49f81aac-c94e-4bc6-a547-27069140d572 to disappear +Jan 10 17:24:32.265: INFO: Pod pod-projected-secrets-49f81aac-c94e-4bc6-a547-27069140d572 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:24:32.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-370" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":82,"skipped":1465,"failed":0} + +------------------------------ +[k8s.io] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:24:32.271: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:24:34.317: INFO: Waiting up to 5m0s for pod "client-envvars-59ac6140-4aa9-4641-97cf-10ff649f2927" in namespace "pods-4818" to be "Succeeded or Failed" +Jan 10 17:24:34.319: INFO: Pod "client-envvars-59ac6140-4aa9-4641-97cf-10ff649f2927": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152747ms +Jan 10 17:24:36.322: INFO: Pod "client-envvars-59ac6140-4aa9-4641-97cf-10ff649f2927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004485517s +STEP: Saw pod success +Jan 10 17:24:36.322: INFO: Pod "client-envvars-59ac6140-4aa9-4641-97cf-10ff649f2927" satisfied condition "Succeeded or Failed" +Jan 10 17:24:36.324: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod client-envvars-59ac6140-4aa9-4641-97cf-10ff649f2927 container env3cont: +STEP: delete the pod +Jan 10 17:24:36.337: INFO: Waiting for pod client-envvars-59ac6140-4aa9-4641-97cf-10ff649f2927 to disappear +Jan 10 17:24:36.340: INFO: Pod client-envvars-59ac6140-4aa9-4641-97cf-10ff649f2927 no longer exists +[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:24:36.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4818" for this suite. 
+•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":277,"completed":83,"skipped":1465,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:24:36.348: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating pod pod-subpath-test-configmap-6p5b +STEP: Creating a pod to test atomic-volume-subpath +Jan 10 17:24:36.380: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6p5b" in namespace "subpath-9454" to be "Succeeded or Failed" +Jan 10 17:24:36.382: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.793653ms +Jan 10 17:24:38.385: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Running", Reason="", readiness=true. Elapsed: 2.004214032s +Jan 10 17:24:40.387: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Running", Reason="", readiness=true. Elapsed: 4.006578157s +Jan 10 17:24:42.391: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Running", Reason="", readiness=true. Elapsed: 6.009994842s +Jan 10 17:24:44.393: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Running", Reason="", readiness=true. Elapsed: 8.012631629s +Jan 10 17:24:46.396: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Running", Reason="", readiness=true. Elapsed: 10.014972749s +Jan 10 17:24:48.398: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Running", Reason="", readiness=true. Elapsed: 12.017567603s +Jan 10 17:24:50.401: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Running", Reason="", readiness=true. Elapsed: 14.020085125s +Jan 10 17:24:52.403: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Running", Reason="", readiness=true. Elapsed: 16.022597476s +Jan 10 17:24:54.406: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Running", Reason="", readiness=true. Elapsed: 18.025430561s +Jan 10 17:24:56.408: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Running", Reason="", readiness=true. Elapsed: 20.027929897s +Jan 10 17:24:58.411: INFO: Pod "pod-subpath-test-configmap-6p5b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.03053799s +STEP: Saw pod success +Jan 10 17:24:58.411: INFO: Pod "pod-subpath-test-configmap-6p5b" satisfied condition "Succeeded or Failed" +Jan 10 17:24:58.413: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-subpath-test-configmap-6p5b container test-container-subpath-configmap-6p5b: +STEP: delete the pod +Jan 10 17:24:58.428: INFO: Waiting for pod pod-subpath-test-configmap-6p5b to disappear +Jan 10 17:24:58.430: INFO: Pod pod-subpath-test-configmap-6p5b no longer exists +STEP: Deleting pod pod-subpath-test-configmap-6p5b +Jan 10 17:24:58.430: INFO: Deleting pod "pod-subpath-test-configmap-6p5b" in namespace "subpath-9454" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:24:58.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-9454" for this suite. + +• [SLOW TEST:22.091 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":277,"completed":84,"skipped":1480,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:24:58.439: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Jan 10 17:25:02.526: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jan 10 17:25:02.532: INFO: Pod pod-with-poststart-http-hook still exists +Jan 10 17:25:04.532: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jan 10 17:25:04.534: INFO: Pod pod-with-poststart-http-hook still exists +Jan 10 17:25:06.532: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jan 10 17:25:06.534: INFO: Pod pod-with-poststart-http-hook still exists +Jan 10 17:25:08.532: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jan 10 17:25:08.534: INFO: Pod pod-with-poststart-http-hook still exists +Jan 10 17:25:10.532: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jan 10 17:25:10.535: INFO: Pod pod-with-poststart-http-hook still exists +Jan 10 17:25:12.532: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jan 10 17:25:12.534: INFO: Pod pod-with-poststart-http-hook still exists +Jan 10 17:25:14.532: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jan 10 17:25:14.534: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:25:14.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-6384" for this suite. + +• [SLOW TEST:16.103 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":277,"completed":85,"skipped":1516,"failed":0} +SSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:25:14.542: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
+Jan 10 17:25:14.569: INFO: Created pod &Pod{ObjectMeta:{dns-4233 dns-4233 /api/v1/namespaces/dns-4233/pods/dns-4233 6439dcc3-7fa3-4fd3-9cba-268f85cdd2bb 9937 0 2021-01-10 17:25:14 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-01-10 17:25:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6mf5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6mf5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6mf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalO
bjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 10 17:25:14.571: INFO: The status of Pod dns-4233 is Pending, waiting for it to be Running (with Ready = true) +Jan 10 17:25:16.578: INFO: The status of Pod dns-4233 is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Jan 10 17:25:16.578: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4233 PodName:dns-4233 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:25:16.578: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Verifying customized DNS server is configured on pod... +Jan 10 17:25:16.677: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4233 PodName:dns-4233 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:25:16.677: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:25:16.777: INFO: Deleting pod dns-4233... +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:25:16.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-4233" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":277,"completed":86,"skipped":1520,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:25:16.793: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating secret with name secret-test-8eb1cfac-eae7-4df7-842b-e95cf39ce2f7 +STEP: Creating a pod to test consume secrets +Jan 10 17:25:16.823: INFO: Waiting up to 5m0s for pod "pod-secrets-30d39924-8535-4a49-90fc-2688ae62a526" in namespace "secrets-9736" to be "Succeeded or Failed" +Jan 10 17:25:16.826: INFO: Pod "pod-secrets-30d39924-8535-4a49-90fc-2688ae62a526": Phase="Pending", Reason="", readiness=false. Elapsed: 2.709412ms +Jan 10 17:25:18.828: INFO: Pod "pod-secrets-30d39924-8535-4a49-90fc-2688ae62a526": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005162497s +STEP: Saw pod success +Jan 10 17:25:18.828: INFO: Pod "pod-secrets-30d39924-8535-4a49-90fc-2688ae62a526" satisfied condition "Succeeded or Failed" +Jan 10 17:25:18.830: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-secrets-30d39924-8535-4a49-90fc-2688ae62a526 container secret-volume-test: +STEP: delete the pod +Jan 10 17:25:18.845: INFO: Waiting for pod pod-secrets-30d39924-8535-4a49-90fc-2688ae62a526 to disappear +Jan 10 17:25:18.846: INFO: Pod pod-secrets-30d39924-8535-4a49-90fc-2688ae62a526 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:25:18.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9736" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":87,"skipped":1530,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:25:18.854: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 +Jan 10 17:25:18.878: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jan 10 17:25:18.886: INFO: Waiting for terminating namespaces to be deleted... +Jan 10 17:25:18.888: INFO: +Logging pods the kubelet thinks is on node ip-172-20-33-172.ap-south-1.compute.internal before test +Jan 10 17:25:18.893: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-tfj4x from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded) +Jan 10 17:25:18.893: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 10 17:25:18.893: INFO: Container systemd-logs ready: true, restart count 0 +Jan 10 17:25:18.893: INFO: kube-proxy-ip-172-20-33-172.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:44 +0000 UTC (1 container statuses recorded) +Jan 10 17:25:18.893: INFO: Container kube-proxy ready: true, restart count 0 +Jan 10 17:25:18.893: INFO: calico-node-vgdrq from kube-system started at 2021-01-10 16:58:19 +0000 UTC (1 container statuses recorded) +Jan 10 17:25:18.893: INFO: Container calico-node ready: true, restart count 0 +Jan 10 17:25:18.893: INFO: pod-handle-http-request from container-lifecycle-hook-6384 started at 2021-01-10 17:24:58 +0000 UTC (1 container statuses recorded) +Jan 10 17:25:18.893: INFO: Container pod-handle-http-request ready: true, restart count 0 +Jan 10 17:25:18.893: INFO: +Logging pods the kubelet thinks is on node ip-172-20-39-143.ap-south-1.compute.internal before test +Jan 10 17:25:18.906: INFO: kube-dns-64f86fb8dd-ngh4q from kube-system started at 2021-01-10 17:12:23 +0000 UTC (3 container statuses recorded) +Jan 10 17:25:18.906: INFO: Container dnsmasq ready: true, restart count 0 +Jan 10 17:25:18.906: INFO: Container kubedns ready: true, restart count 0 +Jan 10 17:25:18.906: INFO: Container sidecar ready: true, restart count 0 +Jan 10 17:25:18.906: INFO: kube-proxy-ip-172-20-39-143.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:29 +0000 UTC (1 container statuses recorded) +Jan 10 17:25:18.906: INFO: Container kube-proxy ready: true, restart count 0 +Jan 10 17:25:18.906: INFO: calico-node-ldj9k from kube-system started at 2021-01-10 16:58:16 +0000 UTC (1 container statuses recorded) +Jan 10 17:25:18.906: INFO: Container calico-node ready: true, restart count 0 +Jan 10 17:25:18.906: INFO: sonobuoy from sonobuoy started at 2021-01-10 17:08:58 +0000 UTC 
(1 container statuses recorded) +Jan 10 17:25:18.906: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jan 10 17:25:18.906: INFO: sonobuoy-e2e-job-5c46f38a56914321 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded) +Jan 10 17:25:18.906: INFO: Container e2e ready: true, restart count 0 +Jan 10 17:25:18.906: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 10 17:25:18.906: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-zrwk8 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded) +Jan 10 17:25:18.906: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 10 17:25:18.906: INFO: Container systemd-logs ready: true, restart count 0 +Jan 10 17:25:18.906: INFO: +Logging pods the kubelet thinks is on node ip-172-20-52-46.ap-south-1.compute.internal before test +Jan 10 17:25:18.917: INFO: kube-proxy-ip-172-20-52-46.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:48 +0000 UTC (1 container statuses recorded) +Jan 10 17:25:18.917: INFO: Container kube-proxy ready: true, restart count 0 +Jan 10 17:25:18.917: INFO: calico-node-nrg4h from kube-system started at 2021-01-10 16:58:13 +0000 UTC (1 container statuses recorded) +Jan 10 17:25:18.917: INFO: Container calico-node ready: true, restart count 0 +Jan 10 17:25:18.917: INFO: kube-dns-autoscaler-cd7778b7b-c8mf6 from kube-system started at 2021-01-10 16:58:37 +0000 UTC (1 container statuses recorded) +Jan 10 17:25:18.917: INFO: Container autoscaler ready: true, restart count 0 +Jan 10 17:25:18.917: INFO: kube-dns-64f86fb8dd-gdkpz from kube-system started at 2021-01-10 16:58:37 +0000 UTC (3 container statuses recorded) +Jan 10 17:25:18.917: INFO: Container dnsmasq ready: true, restart count 0 +Jan 10 17:25:18.917: INFO: Container kubedns ready: true, restart count 0 +Jan 10 17:25:18.917: INFO: Container sidecar ready: true, restart count 0 +Jan 10 17:25:18.917: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-sk6xf from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded) +Jan 10 17:25:18.917: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 10 17:25:18.917: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. 
+STEP: verifying the node has the label kubernetes.io/e2e-28ba6fb1-c488-46a5-953e-40ae8f93337d 95 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled +STEP: removing the label kubernetes.io/e2e-28ba6fb1-c488-46a5-953e-40ae8f93337d off the node ip-172-20-33-172.ap-south-1.compute.internal +STEP: verifying the node doesn't have the label kubernetes.io/e2e-28ba6fb1-c488-46a5-953e-40ae8f93337d +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:30:22.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-9973" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 + +• [SLOW TEST:304.132 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":277,"completed":88,"skipped":1546,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:30:22.987: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 +[It] should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Jan 10 17:30:25.532: INFO: Successfully updated pod "pod-update-207a46e1-8de0-4143-b2e7-737342ce0e16" +STEP: verifying the updated pod is in kubernetes +Jan 10 17:30:25.535: INFO: Pod update OK +[AfterEach] [k8s.io] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:30:25.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1391" for this suite. 
+•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":277,"completed":89,"skipped":1568,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:30:25.542: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Jan 10 17:30:25.583: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2982 /api/v1/namespaces/watch-2982/configmaps/e2e-watch-test-resource-version f75bde69-e032-4855-8755-ee006537237d 11095 0 2021-01-10 17:30:25 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-01-10 17:30:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 10 17:30:25.583: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2982 /api/v1/namespaces/watch-2982/configmaps/e2e-watch-test-resource-version f75bde69-e032-4855-8755-ee006537237d 11096 0 2021-01-10 17:30:25 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-01-10 17:30:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:30:25.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-2982" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":277,"completed":90,"skipped":1631,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:30:25.590: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename aggregator +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 +Jan 10 17:30:25.612: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Registering the sample API server. +Jan 10 17:30:26.495: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Jan 10 17:30:28.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 10 17:30:30.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 10 17:30:32.526: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 10 17:30:34.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 10 17:30:36.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 10 17:30:38.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 10 17:30:40.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745896626, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 10 17:30:43.148: INFO: Waited 615.633965ms for the sample-apiserver to be ready to handle requests. +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:30:43.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-6155" for this suite. 
+ +• [SLOW TEST:18.158 seconds] +[sig-api-machinery] Aggregator +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":277,"completed":91,"skipped":1683,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:30:43.748: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1863 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1863;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1863 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1863;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1863.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1863.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1863.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1863.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1863.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1863.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1863.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1863.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1863.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1863.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1863.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1863.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1863.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 75.108.71.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.71.108.75_udp@PTR;check="$$(dig +tcp +noall +answer +search 75.108.71.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.71.108.75_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1863 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1863;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1863 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1863;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1863.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1863.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1863.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1863.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1863.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1863.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1863.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1863.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1863.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1863.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1863.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1863.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1863.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 75.108.71.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.71.108.75_udp@PTR;check="$$(dig +tcp +noall +answer +search 75.108.71.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.71.108.75_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jan 10 17:30:45.807: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.809: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.812: INFO: Unable to read wheezy_udp@dns-test-service.dns-1863 from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1863 from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-1863.svc from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.818: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1863.svc from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.820: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1863.svc from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.822: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1863.svc from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.836: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.838: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.840: INFO: Unable to read jessie_udp@dns-test-service.dns-1863 from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.842: INFO: Unable to read jessie_tcp@dns-test-service.dns-1863 from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.844: INFO: Unable to read jessie_udp@dns-test-service.dns-1863.svc from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.846: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-1863.svc from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.848: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1863.svc from pod dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b: the server could not find the requested resource (get pods dns-test-a6575ce1-a5cc-416a-a982-5682672a146b) +Jan 10 17:30:45.862: INFO: Lookups using dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1863 wheezy_tcp@dns-test-service.dns-1863 wheezy_udp@dns-test-service.dns-1863.svc wheezy_tcp@dns-test-service.dns-1863.svc wheezy_udp@_http._tcp.dns-test-service.dns-1863.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1863.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1863 jessie_tcp@dns-test-service.dns-1863 jessie_udp@dns-test-service.dns-1863.svc jessie_tcp@dns-test-service.dns-1863.svc jessie_udp@_http._tcp.dns-test-service.dns-1863.svc] + +Jan 10 17:30:50.922: INFO: DNS probes using dns-1863/dns-test-a6575ce1-a5cc-416a-a982-5682672a146b succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:30:50.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-1863" for this suite. + +• [SLOW TEST:7.221 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":277,"completed":92,"skipped":1697,"failed":0} +S +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:30:50.971: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating configMap with name projected-configmap-test-volume-map-7186ab54-676c-4286-86bc-87eb46b185b4 +STEP: Creating a pod to test consume configMaps +Jan 10 17:30:51.010: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-60e9c504-f9b1-479c-8c82-ce88656051b3" in namespace "projected-607" to be "Succeeded or Failed" +Jan 10 17:30:51.014: INFO: Pod 
"pod-projected-configmaps-60e9c504-f9b1-479c-8c82-ce88656051b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.005827ms +Jan 10 17:30:53.017: INFO: Pod "pod-projected-configmaps-60e9c504-f9b1-479c-8c82-ce88656051b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006581403s +STEP: Saw pod success +Jan 10 17:30:53.017: INFO: Pod "pod-projected-configmaps-60e9c504-f9b1-479c-8c82-ce88656051b3" satisfied condition "Succeeded or Failed" +Jan 10 17:30:53.019: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-configmaps-60e9c504-f9b1-479c-8c82-ce88656051b3 container projected-configmap-volume-test: +STEP: delete the pod +Jan 10 17:30:53.042: INFO: Waiting for pod pod-projected-configmaps-60e9c504-f9b1-479c-8c82-ce88656051b3 to disappear +Jan 10 17:30:53.044: INFO: Pod pod-projected-configmaps-60e9c504-f9b1-479c-8c82-ce88656051b3 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:30:53.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-607" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":277,"completed":93,"skipped":1698,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [k8s.io] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:30:53.052: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [k8s.io] Container Runtime + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:31:14.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9285" for this suite. + +• [SLOW TEST:21.154 seconds] +[k8s.io] Container Runtime +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + blackbox test + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 + when starting a container that exits + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":277,"completed":94,"skipped":1744,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:31:14.207: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating secret with name projected-secret-test-b82a2dc6-86ca-49a8-aa39-13ed1cc5aa10 +STEP: Creating a pod to test consume secrets +Jan 10 17:31:14.236: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6f956a55-05b7-492b-91a4-8dd583b182dd" in namespace "projected-4522" to be "Succeeded or Failed" +Jan 10 17:31:14.240: INFO: Pod "pod-projected-secrets-6f956a55-05b7-492b-91a4-8dd583b182dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.612766ms +Jan 10 17:31:16.243: INFO: Pod "pod-projected-secrets-6f956a55-05b7-492b-91a4-8dd583b182dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006396388s +STEP: Saw pod success +Jan 10 17:31:16.243: INFO: Pod "pod-projected-secrets-6f956a55-05b7-492b-91a4-8dd583b182dd" satisfied condition "Succeeded or Failed" +Jan 10 17:31:16.245: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-secrets-6f956a55-05b7-492b-91a4-8dd583b182dd container secret-volume-test: +STEP: delete the pod +Jan 10 17:31:16.262: INFO: Waiting for pod pod-projected-secrets-6f956a55-05b7-492b-91a4-8dd583b182dd to disappear +Jan 10 17:31:16.263: INFO: Pod pod-projected-secrets-6f956a55-05b7-492b-91a4-8dd583b182dd no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:31:16.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4522" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":277,"completed":95,"skipped":1777,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:31:16.271: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Performing setup for networking test in namespace pod-network-test-6085 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jan 10 17:31:16.295: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Jan 10 17:31:16.316: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jan 10 17:31:18.319: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:31:20.319: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:31:22.319: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:31:24.319: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:31:26.319: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:31:28.319: INFO: The status of Pod netserver-0 is Running (Ready = false) +Jan 10 17:31:30.319: INFO: The status of Pod netserver-0 is Running (Ready = true) +Jan 10 17:31:30.323: INFO: The status of Pod netserver-1 is Running (Ready = false) +Jan 10 17:31:32.325: INFO: The status of Pod netserver-1 is Running (Ready = false) +Jan 10 17:31:34.325: INFO: The status of Pod netserver-1 is Running (Ready = true) +Jan 10 17:31:34.329: INFO: The status of Pod netserver-2 is Running (Ready = true) +STEP: Creating test pods +Jan 10 17:31:36.353: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.108.158.157:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6085 
PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:31:36.353: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:31:36.445: INFO: Found all expected endpoints: [netserver-0] +Jan 10 17:31:36.447: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.112.27.207:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6085 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:31:36.447: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:31:36.539: INFO: Found all expected endpoints: [netserver-1] +Jan 10 17:31:36.541: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.100.191.149:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6085 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jan 10 17:31:36.541: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +Jan 10 17:31:36.635: INFO: Found all expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:31:36.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-6085" for this suite. + +• [SLOW TEST:20.372 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":96,"skipped":1789,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:31:36.643: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Jan 10 17:31:46.729: INFO: For apiserver_request_total: 
+For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W0110 17:31:46.729697 24 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:31:46.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1512" for this suite. + +• [SLOW TEST:10.095 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":277,"completed":97,"skipped":1793,"failed":0} +S +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:31:46.738: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating secret with name secret-test-c69eeb26-6e2a-4582-a39e-21b5550609e8 +STEP: Creating a pod to test consume secrets +Jan 10 17:31:46.775: INFO: Waiting up to 5m0s for pod "pod-secrets-3173e1e1-e210-4dce-9619-80b6c466d77e" in namespace "secrets-7016" to be "Succeeded or Failed" +Jan 10 17:31:46.776: INFO: Pod "pod-secrets-3173e1e1-e210-4dce-9619-80b6c466d77e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.775868ms +Jan 10 17:31:48.779: INFO: Pod "pod-secrets-3173e1e1-e210-4dce-9619-80b6c466d77e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.004419917s +STEP: Saw pod success +Jan 10 17:31:48.779: INFO: Pod "pod-secrets-3173e1e1-e210-4dce-9619-80b6c466d77e" satisfied condition "Succeeded or Failed" +Jan 10 17:31:48.781: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-secrets-3173e1e1-e210-4dce-9619-80b6c466d77e container secret-volume-test: +STEP: delete the pod +Jan 10 17:31:48.796: INFO: Waiting for pod pod-secrets-3173e1e1-e210-4dce-9619-80b6c466d77e to disappear +Jan 10 17:31:48.798: INFO: Pod pod-secrets-3173e1e1-e210-4dce-9619-80b6c466d77e no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:31:48.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7016" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":277,"completed":98,"skipped":1794,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:31:48.805: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test emptydir 0644 on node default medium +Jan 10 17:31:48.837: INFO: Waiting up to 5m0s for pod "pod-a6e578e4-73b8-472f-9526-5ef9db69c3ac" in namespace "emptydir-6977" to be "Succeeded or Failed" +Jan 10 17:31:48.842: INFO: Pod "pod-a6e578e4-73b8-472f-9526-5ef9db69c3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301454ms +Jan 10 17:31:50.845: INFO: Pod "pod-a6e578e4-73b8-472f-9526-5ef9db69c3ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007386981s +STEP: Saw pod success +Jan 10 17:31:50.845: INFO: Pod "pod-a6e578e4-73b8-472f-9526-5ef9db69c3ac" satisfied condition "Succeeded or Failed" +Jan 10 17:31:50.847: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-a6e578e4-73b8-472f-9526-5ef9db69c3ac container test-container: +STEP: delete the pod +Jan 10 17:31:50.866: INFO: Waiting for pod pod-a6e578e4-73b8-472f-9526-5ef9db69c3ac to disappear +Jan 10 17:31:50.867: INFO: Pod pod-a6e578e4-73b8-472f-9526-5ef9db69c3ac no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:31:50.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6977" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":99,"skipped":1856,"failed":0} +SS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:31:50.875: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a pod to test downward API volume plugin +Jan 10 17:31:50.905: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fba49803-de90-4040-b8e1-0bfb99578aad" in namespace "downward-api-1122" to be "Succeeded or Failed" +Jan 10 17:31:50.907: INFO: Pod "downwardapi-volume-fba49803-de90-4040-b8e1-0bfb99578aad": Phase="Pending", Reason="", readiness=false. Elapsed: 1.684658ms +Jan 10 17:31:52.910: INFO: Pod "downwardapi-volume-fba49803-de90-4040-b8e1-0bfb99578aad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004384713s +STEP: Saw pod success +Jan 10 17:31:52.910: INFO: Pod "downwardapi-volume-fba49803-de90-4040-b8e1-0bfb99578aad" satisfied condition "Succeeded or Failed" +Jan 10 17:31:52.912: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-fba49803-de90-4040-b8e1-0bfb99578aad container client-container: +STEP: delete the pod +Jan 10 17:31:52.926: INFO: Waiting for pod downwardapi-volume-fba49803-de90-4040-b8e1-0bfb99578aad to disappear +Jan 10 17:31:52.928: INFO: Pod downwardapi-volume-fba49803-de90-4040-b8e1-0bfb99578aad no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:31:52.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1122" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":100,"skipped":1858,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:31:52.941: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:31:52.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-1301' +Jan 10 17:31:53.209: INFO: stderr: "" +Jan 10 17:31:53.209: INFO: stdout: "replicationcontroller/agnhost-master created\n" +Jan 10 17:31:53.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-1301' +Jan 10 17:31:53.497: INFO: stderr: "" +Jan 10 17:31:53.497: INFO: stdout: "service/agnhost-master created\n" +STEP: Waiting for Agnhost master to start. +Jan 10 17:31:54.500: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 10 17:31:54.500: INFO: Found 0 / 1 +Jan 10 17:31:55.500: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 10 17:31:55.500: INFO: Found 1 / 1 +Jan 10 17:31:55.500: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jan 10 17:31:55.502: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 10 17:31:55.502: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Jan 10 17:31:55.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 describe pod agnhost-master-qrprc --namespace=kubectl-1301'
+Jan 10 17:31:55.587: INFO: stderr: ""
+Jan 10 17:31:55.587: INFO: stdout: "Name: agnhost-master-qrprc\nNamespace: kubectl-1301\nPriority: 0\nNode: ip-172-20-33-172.ap-south-1.compute.internal/172.20.33.172\nStart Time: Sun, 10 Jan 2021 17:31:53 +0000\nLabels: app=agnhost\n role=master\nAnnotations: cni.projectcalico.org/podIP: 100.108.158.166/32\n cni.projectcalico.org/podIPs: 100.108.158.166/32\nStatus: Running\nIP: 100.108.158.166\nIPs:\n IP: 100.108.158.166\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: docker://bf3b5d1a154dffbeba999be28f1bbf9ceb8c9b4a7274970a63959a732ce84ec9\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: docker-pullable://us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 10 Jan 2021 17:31:54 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dglmd (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dglmd:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dglmd\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-1301/agnhost-master-qrprc to ip-172-20-33-172.ap-south-1.compute.internal\n Normal Pulled 2s kubelet Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet Created container agnhost-master\n Normal Started 1s kubelet Started container agnhost-master\n"
+Jan 10 17:31:55.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 describe rc agnhost-master --namespace=kubectl-1301'
+Jan 10 17:31:55.679: INFO: stderr: ""
+Jan 10 17:31:55.679: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1301\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-master-qrprc\n"
+Jan 10 17:31:55.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 describe service agnhost-master --namespace=kubectl-1301'
+Jan 10 17:31:55.760: INFO: stderr: ""
+Jan 10 17:31:55.760: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1301\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 100.70.172.102\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.108.158.166:6379\nSession Affinity: None\nEvents: \n"
+Jan 10 17:31:55.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 describe node ip-172-20-33-172.ap-south-1.compute.internal'
+Jan 10 17:31:55.865: INFO: stderr: ""
+Jan 10 17:31:55.865: INFO: stdout: "Name: ip-172-20-33-172.ap-south-1.compute.internal\nRoles: node\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5a.xlarge\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=ap-south-1\n failure-domain.beta.kubernetes.io/zone=ap-south-1a\n kops.k8s.io/instancegroup=nodes\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-172-20-33-172.ap-south-1.compute.internal\n kubernetes.io/os=linux\n kubernetes.io/role=node\n node-role.kubernetes.io/node=\n node.kubernetes.io/instance-type=m5a.xlarge\n topology.kubernetes.io/region=ap-south-1\n topology.kubernetes.io/zone=ap-south-1a\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 172.20.33.172/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.108.158.128\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 10 Jan 2021 16:58:03 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: ip-172-20-33-172.ap-south-1.compute.internal\n AcquireTime: \n RenewTime: Sun, 10 Jan 2021 17:31:53 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sun, 10 Jan 2021 16:58:47 +0000 Sun, 10 Jan 2021 16:58:47 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sun, 10 Jan 2021 17:30:50 +0000 Sun, 10 Jan 2021 16:58:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 10 Jan 2021 17:30:50 +0000 Sun, 10 Jan 2021 16:58:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 10 Jan 2021 17:30:50 +0000 Sun, 10 Jan 2021 16:58:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 10 Jan 2021 17:30:50 +0000 Sun, 10 Jan 2021 16:58:44 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 172.20.33.172\n Hostname: ip-172-20-33-172.ap-south-1.compute.internal\n InternalDNS: ip-172-20-33-172.ap-south-1.compute.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 4\n ephemeral-storage: 130045936Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 16011356Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 4\n ephemeral-storage: 119850334420\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 15908956Ki\n pods: 110\nSystem Info:\n Machine ID: ec20cadc75c48e1d7ebe596843e06b0f\n System UUID: ec20cadc-75c4-8e1d-7ebe-596843e06b0f\n Boot ID: 21811d23-6d95-4c8d-8c61-2ab1e8a9ad30\n Kernel Version: 5.4.0-1029-aws\n OS Image: Ubuntu 20.04.1 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://19.3.11\n Kubelet Version: v1.18.14\n Kube-Proxy Version: v1.18.14\nPodCIDR: 100.96.5.0/24\nPodCIDRs: 100.96.5.0/24\nProviderID: aws:///ap-south-1a/i-0d45ac101d965897c\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-vgdrq 100m (2%) 0 (0%) 0 (0%) 0 (0%) 33m\n kube-system kube-proxy-ip-172-20-33-172.ap-south-1.compute.internal 100m (2%) 0 (0%) 0 (0%) 0 (0%) 32m\n kubectl-1301 agnhost-master-qrprc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s\n sonobuoy sonobuoy-systemd-logs-daemon-set-511350556efd4097-tfj4x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 200m (5%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal NodeAllocatableEnforced 36m kubelet Updated Node Allocatable limit across pods\n Normal NodeHasSufficientPID 36m (x7 over 36m) kubelet Node ip-172-20-33-172.ap-south-1.compute.internal status is now: NodeHasSufficientPID\n Normal NodeHasSufficientMemory 36m (x8 over 36m) kubelet Node ip-172-20-33-172.ap-south-1.compute.internal status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 36m (x8 over 36m) kubelet Node ip-172-20-33-172.ap-south-1.compute.internal status is now: NodeHasNoDiskPressure\n Normal Starting 35m kube-proxy Starting kube-proxy.\n"
+Jan 10 17:31:55.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 describe namespace kubectl-1301'
+Jan 10 17:31:55.945: INFO: stderr: ""
+Jan 10 17:31:55.945: INFO: stdout: "Name: kubectl-1301\nLabels: e2e-framework=kubectl\n e2e-run=a47a8f7c-5beb-457c-b078-b4f21c489d75\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:31:55.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-1301" for this suite.
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":277,"completed":101,"skipped":1878,"failed":0} +SSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:31:55.952: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 +STEP: Creating service test in namespace statefulset-3026 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +STEP: Creating a new StatefulSet +Jan 10 17:31:55.985: INFO: Found 0 stateful pods, waiting for 3 +Jan 10 17:32:05.988: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jan 10 17:32:05.988: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jan 10 17:32:05.988: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Jan 10 17:32:05.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-3026 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 10 17:32:06.186: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 10 17:32:06.186: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 10 17:32:06.186: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine +Jan 10 17:32:16.211: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Jan 10 17:32:26.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-3026 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 10 17:32:26.416: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jan 10 17:32:26.416: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 10 17:32:26.416: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 10 17:32:36.429: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update +Jan 10 17:32:36.429: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision 
ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jan 10 17:32:36.429: INFO: Waiting for Pod statefulset-3026/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jan 10 17:32:36.429: INFO: Waiting for Pod statefulset-3026/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jan 10 17:32:46.434: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update +Jan 10 17:32:46.434: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jan 10 17:32:46.434: INFO: Waiting for Pod statefulset-3026/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jan 10 17:32:56.434: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update +Jan 10 17:32:56.434: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jan 10 17:32:56.434: INFO: Waiting for Pod statefulset-3026/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jan 10 17:33:06.434: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update +Jan 10 17:33:06.434: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jan 10 17:33:16.433: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update +STEP: Rolling back to a previous revision +Jan 10 17:33:26.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-3026 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 10 17:33:26.780: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 10 17:33:26.780: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 10 17:33:26.780: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 10 17:33:36.805: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Jan 10 17:33:46.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-3026 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 10 17:33:47.021: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jan 10 17:33:47.021: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 10 17:33:47.021: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 10 17:33:57.034: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update +Jan 10 17:33:57.034: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 +Jan 10 17:33:57.034: INFO: Waiting for Pod statefulset-3026/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 +Jan 10 17:33:57.034: INFO: Waiting for Pod statefulset-3026/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 +Jan 10 17:34:07.039: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update +Jan 10 17:34:07.039: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 +Jan 10 17:34:07.039: INFO: Waiting for Pod statefulset-3026/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 +Jan 10 17:34:17.040: INFO: Deleting all statefulset in ns statefulset-3026 +Jan 10 17:34:17.041: INFO: Scaling statefulset ss2 to 0 +Jan 10 17:34:47.052: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 10 17:34:47.054: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 +Jan 10 17:34:47.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-3026" for this suite. + +• [SLOW TEST:171.119 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":277,"completed":102,"skipped":1883,"failed":0} +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 +STEP: Creating a kubernetes client +Jan 10 17:34:47.072: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 +Jan 10 17:34:47.091: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-5274 +I0110 17:34:47.102851 24 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5274, replica count: 1 +I0110 17:34:48.153265 24 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0110 17:34:49.153456 24 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jan 10 17:34:49.263: INFO: Created: latency-svc-wxnb4 +Jan 10 17:34:49.271: INFO: Got endpoints: latency-svc-wxnb4 [17.395356ms] +Jan 10 17:34:49.281: INFO: Created: latency-svc-lmsvp +Jan 10 17:34:49.287: INFO: Got endpoints: latency-svc-lmsvp [15.702449ms] +Jan 10 17:34:49.290: INFO: Created: latency-svc-kfdr7 +Jan 10 17:34:49.295: INFO: Got endpoints: latency-svc-kfdr7 [23.64955ms] +Jan 10 17:34:49.298: INFO: Created: latency-svc-qgclm +Jan 10 17:34:49.304: INFO: Got endpoints: latency-svc-qgclm [32.901813ms] +Jan 10 17:34:49.307: INFO: Created: latency-svc-jmkgd +Jan 10 17:34:49.313: INFO: Got endpoints: latency-svc-jmkgd 
[42.105306ms] +Jan 10 17:34:49.317: INFO: Created: latency-svc-9mpr5 +Jan 10 17:34:49.325: INFO: Created: latency-svc-fgk5h +Jan 10 17:34:49.326: INFO: Got endpoints: latency-svc-9mpr5 [54.403971ms] +Jan 10 17:34:49.333: INFO: Created: latency-svc-sjcbv +Jan 10 17:34:49.333: INFO: Got endpoints: latency-svc-fgk5h [62.1991ms] +Jan 10 17:34:49.341: INFO: Created: latency-svc-h5qrf +Jan 10 17:34:49.341: INFO: Got endpoints: latency-svc-sjcbv [69.775187ms] +Jan 10 17:34:49.348: INFO: Got endpoints: latency-svc-h5qrf [76.784698ms] +Jan 10 17:34:49.350: INFO: Created: latency-svc-zqzfz +Jan 10 17:34:49.356: INFO: Got endpoints: latency-svc-zqzfz [84.90092ms] +Jan 10 17:34:49.359: INFO: Created: latency-svc-q84ds +Jan 10 17:34:49.364: INFO: Got endpoints: latency-svc-q84ds [92.684748ms] +Jan 10 17:34:49.369: INFO: Created: latency-svc-hg47z +Jan 10 17:34:49.374: INFO: Got endpoints: latency-svc-hg47z [102.262485ms] +Jan 10 17:34:49.377: INFO: Created: latency-svc-rb9xn +Jan 10 17:34:49.384: INFO: Created: latency-svc-9zg5c +Jan 10 17:34:49.388: INFO: Got endpoints: latency-svc-rb9xn [117.266518ms] +Jan 10 17:34:49.393: INFO: Got endpoints: latency-svc-9zg5c [121.102816ms] +Jan 10 17:34:49.393: INFO: Created: latency-svc-gxfrq +Jan 10 17:34:49.399: INFO: Got endpoints: latency-svc-gxfrq [126.962986ms] +Jan 10 17:34:49.401: INFO: Created: latency-svc-q8nlp +Jan 10 17:34:49.407: INFO: Got endpoints: latency-svc-q8nlp [136.2817ms] +Jan 10 17:34:49.411: INFO: Created: latency-svc-rkh5h +Jan 10 17:34:49.418: INFO: Got endpoints: latency-svc-rkh5h [130.703714ms] +Jan 10 17:34:49.421: INFO: Created: latency-svc-jcwzg +Jan 10 17:34:49.427: INFO: Got endpoints: latency-svc-jcwzg [132.472462ms] +Jan 10 17:34:49.432: INFO: Created: latency-svc-2sggx +Jan 10 17:34:49.437: INFO: Got endpoints: latency-svc-2sggx [133.108378ms] +Jan 10 17:34:49.440: INFO: Created: latency-svc-j8rwx +Jan 10 17:34:49.449: INFO: Got endpoints: latency-svc-j8rwx [135.846226ms] +Jan 10 17:34:49.450: INFO: Created: latency-svc-2w68k +Jan 10 17:34:49.455: INFO: Got endpoints: latency-svc-2w68k [129.36041ms] +Jan 10 17:34:49.458: INFO: Created: latency-svc-7cknp +Jan 10 17:34:49.463: INFO: Got endpoints: latency-svc-7cknp [129.929286ms] +Jan 10 17:34:49.467: INFO: Created: latency-svc-w44jf +Jan 10 17:34:49.474: INFO: Got endpoints: latency-svc-w44jf [132.627464ms] +Jan 10 17:34:49.476: INFO: Created: latency-svc-pnbfm +Jan 10 17:34:49.483: INFO: Got endpoints: latency-svc-pnbfm [134.781535ms] +Jan 10 17:34:49.486: INFO: Created: latency-svc-ctjxp +Jan 10 17:34:49.494: INFO: Got endpoints: latency-svc-ctjxp [138.030248ms] +Jan 10 17:34:49.505: INFO: Created: latency-svc-b5s99 +Jan 10 17:34:49.510: INFO: Got endpoints: latency-svc-b5s99 [145.454564ms] +Jan 10 17:34:49.513: INFO: Created: latency-svc-f78tt +Jan 10 17:34:49.522: INFO: Created: latency-svc-zqvnj +Jan 10 17:34:49.522: INFO: Got endpoints: latency-svc-f78tt [148.459724ms] +Jan 10 17:34:49.527: INFO: Got endpoints: latency-svc-zqvnj [138.678234ms] +Jan 10 17:34:49.532: INFO: Created: latency-svc-vhszs +Jan 10 17:34:49.540: INFO: Created: latency-svc-9rnj6 +Jan 10 17:34:49.540: INFO: Got endpoints: latency-svc-vhszs [147.322342ms] +Jan 10 17:34:49.544: INFO: Got endpoints: latency-svc-9rnj6 [145.827156ms] +Jan 10 17:34:49.548: INFO: Created: latency-svc-nnd6s +Jan 10 17:34:49.553: INFO: Got endpoints: latency-svc-nnd6s [145.672355ms] +Jan 10 17:34:49.555: INFO: Created: latency-svc-hzlnl +Jan 10 17:34:49.561: INFO: Got endpoints: latency-svc-hzlnl [142.935687ms] +Jan 10 
17:34:49.564: INFO: Created: latency-svc-hgch6 +Jan 10 17:34:49.569: INFO: Got endpoints: latency-svc-hgch6 [141.426382ms] +Jan 10 17:34:49.570: INFO: Created: latency-svc-jtcv8 +Jan 10 17:34:49.579: INFO: Got endpoints: latency-svc-jtcv8 [142.09577ms] +Jan 10 17:34:49.581: INFO: Created: latency-svc-hpv8f +Jan 10 17:34:49.588: INFO: Got endpoints: latency-svc-hpv8f [138.712335ms] +Jan 10 17:34:49.591: INFO: Created: latency-svc-t9jpb +Jan 10 17:34:49.597: INFO: Got endpoints: latency-svc-t9jpb [141.828056ms] +Jan 10 17:34:49.601: INFO: Created: latency-svc-zsxnn +Jan 10 17:34:49.607: INFO: Created: latency-svc-9b2lz +Jan 10 17:34:49.614: INFO: Created: latency-svc-n9428 +Jan 10 17:34:49.620: INFO: Got endpoints: latency-svc-zsxnn [156.646167ms] +Jan 10 17:34:49.621: INFO: Created: latency-svc-cqbjv +Jan 10 17:34:49.630: INFO: Created: latency-svc-q6fjz +Jan 10 17:34:49.637: INFO: Created: latency-svc-425gz +Jan 10 17:34:49.644: INFO: Created: latency-svc-99dkh +Jan 10 17:34:49.650: INFO: Created: latency-svc-qklxl +Jan 10 17:34:49.657: INFO: Created: latency-svc-fn49k +Jan 10 17:34:49.664: INFO: Created: latency-svc-bt8rc +Jan 10 17:34:49.670: INFO: Got endpoints: latency-svc-9b2lz [195.872774ms] +Jan 10 17:34:49.672: INFO: Created: latency-svc-rvztd +Jan 10 17:34:49.680: INFO: Created: latency-svc-9vxj8 +Jan 10 17:34:49.685: INFO: Created: latency-svc-7zvnh +Jan 10 17:34:49.692: INFO: Created: latency-svc-k6jz6 +Jan 10 17:34:49.699: INFO: Created: latency-svc-f7qgb +Jan 10 17:34:49.706: INFO: Created: latency-svc-s6zrf +Jan 10 17:34:49.714: INFO: Created: latency-svc-p86hd +Jan 10 17:34:49.720: INFO: Got endpoints: latency-svc-n9428 [236.793648ms] +Jan 10 17:34:49.733: INFO: Created: latency-svc-259xs +Jan 10 17:34:49.770: INFO: Got endpoints: latency-svc-cqbjv [275.142367ms] +Jan 10 17:34:49.779: INFO: Created: latency-svc-sz65q +Jan 10 17:34:49.818: INFO: Got endpoints: latency-svc-q6fjz [308.135711ms] +Jan 10 17:34:49.830: INFO: Created: latency-svc-tl5j6 +Jan 10 17:34:49.867: INFO: Got endpoints: latency-svc-425gz [344.879063ms] +Jan 10 17:34:49.877: INFO: Created: latency-svc-lvm52 +Jan 10 17:34:49.918: INFO: Got endpoints: latency-svc-99dkh [390.426665ms] +Jan 10 17:34:49.932: INFO: Created: latency-svc-6mggt +Jan 10 17:34:49.969: INFO: Got endpoints: latency-svc-qklxl [429.143707ms] +Jan 10 17:34:49.981: INFO: Created: latency-svc-p5m5r +Jan 10 17:34:50.019: INFO: Got endpoints: latency-svc-fn49k [474.554178ms] +Jan 10 17:34:50.035: INFO: Created: latency-svc-jzc7t +Jan 10 17:34:50.068: INFO: Got endpoints: latency-svc-bt8rc [515.316433ms] +Jan 10 17:34:50.078: INFO: Created: latency-svc-tj7nb +Jan 10 17:34:50.119: INFO: Got endpoints: latency-svc-rvztd [557.114599ms] +Jan 10 17:34:50.130: INFO: Created: latency-svc-dmzrx +Jan 10 17:34:50.169: INFO: Got endpoints: latency-svc-9vxj8 [600.628403ms] +Jan 10 17:34:50.180: INFO: Created: latency-svc-88qln +Jan 10 17:34:50.220: INFO: Got endpoints: latency-svc-7zvnh [640.828082ms] +Jan 10 17:34:50.236: INFO: Created: latency-svc-hsxvz +Jan 10 17:34:50.270: INFO: Got endpoints: latency-svc-k6jz6 [681.930581ms] +Jan 10 17:34:50.281: INFO: Created: latency-svc-tjcft +Jan 10 17:34:50.320: INFO: Got endpoints: latency-svc-f7qgb [722.283393ms] +Jan 10 17:34:50.330: INFO: Created: latency-svc-jrhv4 +Jan 10 17:34:50.370: INFO: Got endpoints: latency-svc-s6zrf [749.499032ms] +Jan 10 17:34:50.381: INFO: Created: latency-svc-ts2hq +Jan 10 17:34:50.418: INFO: Got endpoints: latency-svc-p86hd [747.986598ms] +Jan 10 17:34:50.429: INFO: Created: 
latency-svc-x8d4f +Jan 10 17:34:50.471: INFO: Got endpoints: latency-svc-259xs [750.84181ms] +Jan 10 17:34:50.482: INFO: Created: latency-svc-vnfsl +Jan 10 17:34:50.520: INFO: Got endpoints: latency-svc-sz65q [749.936494ms] +Jan 10 17:34:50.531: INFO: Created: latency-svc-wm58x +Jan 10 17:34:50.570: INFO: Got endpoints: latency-svc-tl5j6 [751.678444ms] +Jan 10 17:34:50.580: INFO: Created: latency-svc-j8sz9 +Jan 10 17:34:50.619: INFO: Got endpoints: latency-svc-lvm52 [751.859889ms] +Jan 10 17:34:50.630: INFO: Created: latency-svc-96k9f +Jan 10 17:34:50.669: INFO: Got endpoints: latency-svc-6mggt [751.493717ms] +Jan 10 17:34:50.680: INFO: Created: latency-svc-5gmts +Jan 10 17:34:50.719: INFO: Got endpoints: latency-svc-p5m5r [750.237437ms] +Jan 10 17:34:50.730: INFO: Created: latency-svc-tpvbq +Jan 10 17:34:50.779: INFO: Got endpoints: latency-svc-jzc7t [759.537254ms] +Jan 10 17:34:50.793: INFO: Created: latency-svc-rvnwr +Jan 10 17:34:50.818: INFO: Got endpoints: latency-svc-tj7nb [750.123417ms] +Jan 10 17:34:50.829: INFO: Created: latency-svc-q8pzr +Jan 10 17:34:50.873: INFO: Got endpoints: latency-svc-dmzrx [753.735794ms] +Jan 10 17:34:50.883: INFO: Created: latency-svc-rl922 +Jan 10 17:34:50.919: INFO: Got endpoints: latency-svc-88qln [749.33338ms] +Jan 10 17:34:50.930: INFO: Created: latency-svc-vqlks +Jan 10 17:34:50.970: INFO: Got endpoints: latency-svc-hsxvz [748.796364ms] +Jan 10 17:34:50.980: INFO: Created: latency-svc-qfh9m +Jan 10 17:34:51.019: INFO: Got endpoints: latency-svc-tjcft [748.688794ms] +Jan 10 17:34:51.030: INFO: Created: latency-svc-b4khk +Jan 10 17:34:51.068: INFO: Got endpoints: latency-svc-jrhv4 [748.338703ms] +Jan 10 17:34:51.081: INFO: Created: latency-svc-pss4z +Jan 10 17:34:51.120: INFO: Got endpoints: latency-svc-ts2hq [750.329606ms] +Jan 10 17:34:51.131: INFO: Created: latency-svc-ccjq8 +Jan 10 17:34:51.169: INFO: Got endpoints: latency-svc-x8d4f [750.685262ms] +Jan 10 17:34:51.179: INFO: Created: latency-svc-75kbn +Jan 10 17:34:51.219: INFO: Got endpoints: latency-svc-vnfsl [748.771495ms] +Jan 10 17:34:51.229: INFO: Created: latency-svc-8dklc +Jan 10 17:34:51.268: INFO: Got endpoints: latency-svc-wm58x [748.320484ms] +Jan 10 17:34:51.280: INFO: Created: latency-svc-5gk9q +Jan 10 17:34:51.320: INFO: Got endpoints: latency-svc-j8sz9 [750.143855ms] +Jan 10 17:34:51.330: INFO: Created: latency-svc-422gk +Jan 10 17:34:51.369: INFO: Got endpoints: latency-svc-96k9f [749.471381ms] +Jan 10 17:34:51.380: INFO: Created: latency-svc-pnm57 +Jan 10 17:34:51.419: INFO: Got endpoints: latency-svc-5gmts [749.675635ms] +Jan 10 17:34:51.428: INFO: Created: latency-svc-t7nv2 +Jan 10 17:34:51.470: INFO: Got endpoints: latency-svc-tpvbq [749.621417ms] +Jan 10 17:34:51.480: INFO: Created: latency-svc-6dvjn +Jan 10 17:34:51.521: INFO: Got endpoints: latency-svc-rvnwr [742.595189ms] +Jan 10 17:34:51.532: INFO: Created: latency-svc-zxvsk +Jan 10 17:34:51.568: INFO: Got endpoints: latency-svc-q8pzr [749.926776ms] +Jan 10 17:34:51.580: INFO: Created: latency-svc-gbk5p +Jan 10 17:34:51.619: INFO: Got endpoints: latency-svc-rl922 [746.18055ms] +Jan 10 17:34:51.629: INFO: Created: latency-svc-sn4zv +Jan 10 17:34:51.668: INFO: Got endpoints: latency-svc-vqlks [749.350135ms] +Jan 10 17:34:51.678: INFO: Created: latency-svc-nhrfk +Jan 10 17:34:51.722: INFO: Got endpoints: latency-svc-qfh9m [752.070036ms] +Jan 10 17:34:51.732: INFO: Created: latency-svc-v5cxp +Jan 10 17:34:51.768: INFO: Got endpoints: latency-svc-b4khk [748.512941ms] +Jan 10 17:34:51.778: INFO: Created: latency-svc-prsp7 
+Jan 10 17:34:51.819: INFO: Got endpoints: latency-svc-pss4z [751.007537ms]
+Jan 10 17:34:51.829: INFO: Created: latency-svc-76lw6
+Jan 10 17:34:51.869: INFO: Got endpoints: latency-svc-ccjq8 [748.848145ms]
+Jan 10 17:34:51.880: INFO: Created: latency-svc-swqkg
+Jan 10 17:34:51.919: INFO: Got endpoints: latency-svc-75kbn [750.096357ms]
+Jan 10 17:34:51.929: INFO: Created: latency-svc-vw4qj
+Jan 10 17:34:51.970: INFO: Got endpoints: latency-svc-8dklc [750.733403ms]
+Jan 10 17:34:51.981: INFO: Created: latency-svc-kp5mc
+Jan 10 17:34:52.020: INFO: Got endpoints: latency-svc-5gk9q [751.493672ms]
+Jan 10 17:34:52.030: INFO: Created: latency-svc-gb469
+Jan 10 17:34:52.070: INFO: Got endpoints: latency-svc-422gk [749.684595ms]
+Jan 10 17:34:52.082: INFO: Created: latency-svc-wp9c2
+Jan 10 17:34:52.118: INFO: Got endpoints: latency-svc-pnm57 [748.818549ms]
+Jan 10 17:34:52.130: INFO: Created: latency-svc-4t66x
+Jan 10 17:34:52.169: INFO: Got endpoints: latency-svc-t7nv2 [750.112896ms]
+Jan 10 17:34:52.180: INFO: Created: latency-svc-snh7p
+Jan 10 17:34:52.220: INFO: Got endpoints: latency-svc-6dvjn [750.30865ms]
+Jan 10 17:34:52.232: INFO: Created: latency-svc-w9f7d
+Jan 10 17:34:52.268: INFO: Got endpoints: latency-svc-zxvsk [746.668985ms]
+Jan 10 17:34:52.278: INFO: Created: latency-svc-ktr68
+Jan 10 17:34:52.318: INFO: Got endpoints: latency-svc-gbk5p [749.425356ms]
+Jan 10 17:34:52.332: INFO: Created: latency-svc-xm5vs
+Jan 10 17:34:52.369: INFO: Got endpoints: latency-svc-sn4zv [750.091066ms]
+Jan 10 17:34:52.379: INFO: Created: latency-svc-m5d6g
+Jan 10 17:34:52.419: INFO: Got endpoints: latency-svc-nhrfk [750.995327ms]
+Jan 10 17:34:52.429: INFO: Created: latency-svc-7994r
+Jan 10 17:34:52.467: INFO: Got endpoints: latency-svc-v5cxp [745.434084ms]
+Jan 10 17:34:52.479: INFO: Created: latency-svc-6zwfc
+Jan 10 17:34:52.519: INFO: Got endpoints: latency-svc-prsp7 [751.296196ms]
+Jan 10 17:34:52.532: INFO: Created: latency-svc-xqmp2
+Jan 10 17:34:52.568: INFO: Got endpoints: latency-svc-76lw6 [749.163357ms]
+Jan 10 17:34:52.579: INFO: Created: latency-svc-8ght6
+Jan 10 17:34:52.619: INFO: Got endpoints: latency-svc-swqkg [749.576094ms]
+Jan 10 17:34:52.629: INFO: Created: latency-svc-xf9z8
+Jan 10 17:34:52.669: INFO: Got endpoints: latency-svc-vw4qj [750.583727ms]
+Jan 10 17:34:52.680: INFO: Created: latency-svc-zsdxc
+Jan 10 17:34:52.718: INFO: Got endpoints: latency-svc-kp5mc [748.028203ms]
+Jan 10 17:34:52.730: INFO: Created: latency-svc-bp2vm
+Jan 10 17:34:52.771: INFO: Got endpoints: latency-svc-gb469 [751.36535ms]
+Jan 10 17:34:52.782: INFO: Created: latency-svc-qpwdt
+Jan 10 17:34:52.820: INFO: Got endpoints: latency-svc-wp9c2 [750.211448ms]
+Jan 10 17:34:52.831: INFO: Created: latency-svc-gk6xd
+Jan 10 17:34:52.868: INFO: Got endpoints: latency-svc-4t66x [750.46587ms]
+Jan 10 17:34:52.880: INFO: Created: latency-svc-bs2nr
+Jan 10 17:34:52.921: INFO: Got endpoints: latency-svc-snh7p [751.712904ms]
+Jan 10 17:34:52.931: INFO: Created: latency-svc-ktsbn
+Jan 10 17:34:52.970: INFO: Got endpoints: latency-svc-w9f7d [748.983435ms]
+Jan 10 17:34:52.982: INFO: Created: latency-svc-x45xr
+Jan 10 17:34:53.020: INFO: Got endpoints: latency-svc-ktr68 [751.819915ms]
+Jan 10 17:34:53.030: INFO: Created: latency-svc-gtbt8
+Jan 10 17:34:53.069: INFO: Got endpoints: latency-svc-xm5vs [751.525846ms]
+Jan 10 17:34:53.080: INFO: Created: latency-svc-gqghr
+Jan 10 17:34:53.119: INFO: Got endpoints: latency-svc-m5d6g [750.331825ms]
+Jan 10 17:34:53.131: INFO: Created: latency-svc-24hmd
+Jan 10 17:34:53.219: INFO: Got endpoints: latency-svc-7994r [799.711629ms]
+Jan 10 17:34:53.231: INFO: Created: latency-svc-b4t2k
+Jan 10 17:34:53.268: INFO: Got endpoints: latency-svc-6zwfc [801.103386ms]
+Jan 10 17:34:53.279: INFO: Created: latency-svc-khf67
+Jan 10 17:34:53.320: INFO: Got endpoints: latency-svc-xqmp2 [800.390061ms]
+Jan 10 17:34:53.331: INFO: Created: latency-svc-lrms2
+Jan 10 17:34:53.368: INFO: Got endpoints: latency-svc-8ght6 [799.528054ms]
+Jan 10 17:34:53.379: INFO: Created: latency-svc-89cqx
+Jan 10 17:34:53.419: INFO: Got endpoints: latency-svc-xf9z8 [799.964532ms]
+Jan 10 17:34:53.430: INFO: Created: latency-svc-p2skk
+Jan 10 17:34:53.470: INFO: Got endpoints: latency-svc-zsdxc [800.377399ms]
+Jan 10 17:34:53.482: INFO: Created: latency-svc-w9hdn
+Jan 10 17:34:53.519: INFO: Got endpoints: latency-svc-bp2vm [800.421471ms]
+Jan 10 17:34:53.531: INFO: Created: latency-svc-49vz9
+Jan 10 17:34:53.570: INFO: Got endpoints: latency-svc-qpwdt [798.512895ms]
+Jan 10 17:34:53.580: INFO: Created: latency-svc-swn9w
+Jan 10 17:34:53.619: INFO: Got endpoints: latency-svc-gk6xd [799.533857ms]
+Jan 10 17:34:53.631: INFO: Created: latency-svc-75h4n
+Jan 10 17:34:53.668: INFO: Got endpoints: latency-svc-bs2nr [799.641602ms]
+Jan 10 17:34:53.682: INFO: Created: latency-svc-h2jc6
+Jan 10 17:34:53.720: INFO: Got endpoints: latency-svc-ktsbn [799.308511ms]
+Jan 10 17:34:53.732: INFO: Created: latency-svc-4l6sf
+Jan 10 17:34:53.769: INFO: Got endpoints: latency-svc-x45xr [798.98446ms]
+Jan 10 17:34:53.781: INFO: Created: latency-svc-96qmx
+Jan 10 17:34:53.819: INFO: Got endpoints: latency-svc-gtbt8 [798.880961ms]
+Jan 10 17:34:53.830: INFO: Created: latency-svc-8hf42
+Jan 10 17:34:53.869: INFO: Got endpoints: latency-svc-gqghr [799.630038ms]
+Jan 10 17:34:53.881: INFO: Created: latency-svc-p2fpx
+Jan 10 17:34:53.918: INFO: Got endpoints: latency-svc-24hmd [798.508436ms]
+Jan 10 17:34:53.929: INFO: Created: latency-svc-6rg9z
+Jan 10 17:34:53.969: INFO: Got endpoints: latency-svc-b4t2k [750.258008ms]
+Jan 10 17:34:53.979: INFO: Created: latency-svc-qtzlv
+Jan 10 17:34:54.018: INFO: Got endpoints: latency-svc-khf67 [749.777723ms]
+Jan 10 17:34:54.037: INFO: Created: latency-svc-bpltd
+Jan 10 17:34:54.070: INFO: Got endpoints: latency-svc-lrms2 [750.719346ms]
+Jan 10 17:34:54.081: INFO: Created: latency-svc-mmk62
+Jan 10 17:34:54.118: INFO: Got endpoints: latency-svc-89cqx [749.836359ms]
+Jan 10 17:34:54.128: INFO: Created: latency-svc-n7n8q
+Jan 10 17:34:54.171: INFO: Got endpoints: latency-svc-p2skk [751.811062ms]
+Jan 10 17:34:54.182: INFO: Created: latency-svc-8krr4
+Jan 10 17:34:54.219: INFO: Got endpoints: latency-svc-w9hdn [749.151598ms]
+Jan 10 17:34:54.229: INFO: Created: latency-svc-tt2cg
+Jan 10 17:34:54.269: INFO: Got endpoints: latency-svc-49vz9 [750.644525ms]
+Jan 10 17:34:54.279: INFO: Created: latency-svc-85c2x
+Jan 10 17:34:54.321: INFO: Got endpoints: latency-svc-swn9w [751.244543ms]
+Jan 10 17:34:54.332: INFO: Created: latency-svc-s6wqp
+Jan 10 17:34:54.368: INFO: Got endpoints: latency-svc-75h4n [748.013203ms]
+Jan 10 17:34:54.380: INFO: Created: latency-svc-ts29q
+Jan 10 17:34:54.418: INFO: Got endpoints: latency-svc-h2jc6 [749.454751ms]
+Jan 10 17:34:54.428: INFO: Created: latency-svc-vrk8q
+Jan 10 17:34:54.469: INFO: Got endpoints: latency-svc-4l6sf [748.13387ms]
+Jan 10 17:34:54.479: INFO: Created: latency-svc-8t66r
+Jan 10 17:34:54.519: INFO: Got endpoints: latency-svc-96qmx [749.978742ms]
+Jan 10 17:34:54.530: INFO: Created: latency-svc-7qlln
+Jan 10 17:34:54.568: INFO: Got endpoints: latency-svc-8hf42 [748.823852ms]
+Jan 10 17:34:54.579: INFO: Created: latency-svc-mwrjf
+Jan 10 17:34:54.620: INFO: Got endpoints: latency-svc-p2fpx [750.570312ms]
+Jan 10 17:34:54.630: INFO: Created: latency-svc-8drpg
+Jan 10 17:34:54.669: INFO: Got endpoints: latency-svc-6rg9z [751.325553ms]
+Jan 10 17:34:54.681: INFO: Created: latency-svc-gsp69
+Jan 10 17:34:54.720: INFO: Got endpoints: latency-svc-qtzlv [750.536827ms]
+Jan 10 17:34:54.730: INFO: Created: latency-svc-l96fz
+Jan 10 17:34:54.768: INFO: Got endpoints: latency-svc-bpltd [749.385818ms]
+Jan 10 17:34:54.778: INFO: Created: latency-svc-4jvw8
+Jan 10 17:34:54.819: INFO: Got endpoints: latency-svc-mmk62 [748.794441ms]
+Jan 10 17:34:54.831: INFO: Created: latency-svc-4cl78
+Jan 10 17:34:54.869: INFO: Got endpoints: latency-svc-n7n8q [751.262897ms]
+Jan 10 17:34:54.879: INFO: Created: latency-svc-mnchv
+Jan 10 17:34:54.920: INFO: Got endpoints: latency-svc-8krr4 [749.677351ms]
+Jan 10 17:34:54.931: INFO: Created: latency-svc-9pnsg
+Jan 10 17:34:54.971: INFO: Got endpoints: latency-svc-tt2cg [751.908994ms]
+Jan 10 17:34:54.982: INFO: Created: latency-svc-hfsm8
+Jan 10 17:34:55.023: INFO: Got endpoints: latency-svc-85c2x [753.078567ms]
+Jan 10 17:34:55.036: INFO: Created: latency-svc-zh8lv
+Jan 10 17:34:55.068: INFO: Got endpoints: latency-svc-s6wqp [747.147928ms]
+Jan 10 17:34:55.094: INFO: Created: latency-svc-vbnd6
+Jan 10 17:34:55.121: INFO: Got endpoints: latency-svc-ts29q [753.001932ms]
+Jan 10 17:34:55.151: INFO: Created: latency-svc-flx9x
+Jan 10 17:34:55.171: INFO: Got endpoints: latency-svc-vrk8q [752.70895ms]
+Jan 10 17:34:55.180: INFO: Created: latency-svc-5tvrd
+Jan 10 17:34:55.219: INFO: Got endpoints: latency-svc-8t66r [750.289448ms]
+Jan 10 17:34:55.229: INFO: Created: latency-svc-wkcf2
+Jan 10 17:34:55.269: INFO: Got endpoints: latency-svc-7qlln [749.19027ms]
+Jan 10 17:34:55.278: INFO: Created: latency-svc-w4ctm
+Jan 10 17:34:55.320: INFO: Got endpoints: latency-svc-mwrjf [752.532787ms]
+Jan 10 17:34:55.337: INFO: Created: latency-svc-fsg4g
+Jan 10 17:34:55.370: INFO: Got endpoints: latency-svc-8drpg [749.844262ms]
+Jan 10 17:34:55.380: INFO: Created: latency-svc-4xwv2
+Jan 10 17:34:55.420: INFO: Got endpoints: latency-svc-gsp69 [750.879495ms]
+Jan 10 17:34:55.430: INFO: Created: latency-svc-76dw4
+Jan 10 17:34:55.468: INFO: Got endpoints: latency-svc-l96fz [748.21918ms]
+Jan 10 17:34:55.479: INFO: Created: latency-svc-fzs29
+Jan 10 17:34:55.521: INFO: Got endpoints: latency-svc-4jvw8 [752.87811ms]
+Jan 10 17:34:55.532: INFO: Created: latency-svc-nxkqk
+Jan 10 17:34:55.570: INFO: Got endpoints: latency-svc-4cl78 [750.228856ms]
+Jan 10 17:34:55.580: INFO: Created: latency-svc-hpdbh
+Jan 10 17:34:55.618: INFO: Got endpoints: latency-svc-mnchv [748.684302ms]
+Jan 10 17:34:55.629: INFO: Created: latency-svc-gdn2f
+Jan 10 17:34:55.668: INFO: Got endpoints: latency-svc-9pnsg [747.568523ms]
+Jan 10 17:34:55.680: INFO: Created: latency-svc-bsxs4
+Jan 10 17:34:55.720: INFO: Got endpoints: latency-svc-hfsm8 [749.094952ms]
+Jan 10 17:34:55.730: INFO: Created: latency-svc-l862j
+Jan 10 17:34:55.769: INFO: Got endpoints: latency-svc-zh8lv [746.479756ms]
+Jan 10 17:34:55.779: INFO: Created: latency-svc-wljtg
+Jan 10 17:34:55.820: INFO: Got endpoints: latency-svc-vbnd6 [751.744392ms]
+Jan 10 17:34:55.832: INFO: Created: latency-svc-xjhfk
+Jan 10 17:34:55.868: INFO: Got endpoints: latency-svc-flx9x [747.572638ms]
+Jan 10 17:34:55.879: INFO: Created: latency-svc-gbfr2
+Jan 10 17:34:55.919: INFO: Got endpoints: latency-svc-5tvrd [747.76087ms]
+Jan 10 17:34:55.935: INFO: Created: latency-svc-nx99w
+Jan 10 17:34:55.970: INFO: Got endpoints: latency-svc-wkcf2 [750.693301ms]
+Jan 10 17:34:55.980: INFO: Created: latency-svc-bdxkx
+Jan 10 17:34:56.021: INFO: Got endpoints: latency-svc-w4ctm [752.791074ms]
+Jan 10 17:34:56.034: INFO: Created: latency-svc-rgg4r
+Jan 10 17:34:56.069: INFO: Got endpoints: latency-svc-fsg4g [748.640343ms]
+Jan 10 17:34:56.079: INFO: Created: latency-svc-2jhqj
+Jan 10 17:34:56.119: INFO: Got endpoints: latency-svc-4xwv2 [749.757267ms]
+Jan 10 17:34:56.132: INFO: Created: latency-svc-9nr98
+Jan 10 17:34:56.172: INFO: Got endpoints: latency-svc-76dw4 [751.870502ms]
+Jan 10 17:34:56.183: INFO: Created: latency-svc-hxmzr
+Jan 10 17:34:56.218: INFO: Got endpoints: latency-svc-fzs29 [750.206777ms]
+Jan 10 17:34:56.230: INFO: Created: latency-svc-k5zp2
+Jan 10 17:34:56.270: INFO: Got endpoints: latency-svc-nxkqk [749.152718ms]
+Jan 10 17:34:56.283: INFO: Created: latency-svc-j8zqx
+Jan 10 17:34:56.321: INFO: Got endpoints: latency-svc-hpdbh [750.89374ms]
+Jan 10 17:34:56.331: INFO: Created: latency-svc-8c287
+Jan 10 17:34:56.368: INFO: Got endpoints: latency-svc-gdn2f [749.982143ms]
+Jan 10 17:34:56.381: INFO: Created: latency-svc-2xmnp
+Jan 10 17:34:56.420: INFO: Got endpoints: latency-svc-bsxs4 [751.496481ms]
+Jan 10 17:34:56.429: INFO: Created: latency-svc-ph7p5
+Jan 10 17:34:56.468: INFO: Got endpoints: latency-svc-l862j [747.390281ms]
+Jan 10 17:34:56.478: INFO: Created: latency-svc-69tdk
+Jan 10 17:34:56.519: INFO: Got endpoints: latency-svc-wljtg [749.83889ms]
+Jan 10 17:34:56.530: INFO: Created: latency-svc-77l2q
+Jan 10 17:34:56.570: INFO: Got endpoints: latency-svc-xjhfk [749.695611ms]
+Jan 10 17:34:56.581: INFO: Created: latency-svc-vkpdm
+Jan 10 17:34:56.620: INFO: Got endpoints: latency-svc-gbfr2 [751.066806ms]
+Jan 10 17:34:56.635: INFO: Created: latency-svc-r54mr
+Jan 10 17:34:56.669: INFO: Got endpoints: latency-svc-nx99w [750.579044ms]
+Jan 10 17:34:56.682: INFO: Created: latency-svc-h5wxd
+Jan 10 17:34:56.720: INFO: Got endpoints: latency-svc-bdxkx [749.778008ms]
+Jan 10 17:34:56.729: INFO: Created: latency-svc-v5mcm
+Jan 10 17:34:56.771: INFO: Got endpoints: latency-svc-rgg4r [749.73085ms]
+Jan 10 17:34:56.781: INFO: Created: latency-svc-x4tb6
+Jan 10 17:34:56.819: INFO: Got endpoints: latency-svc-2jhqj [749.068783ms]
+Jan 10 17:34:56.828: INFO: Created: latency-svc-vqsj2
+Jan 10 17:34:56.869: INFO: Got endpoints: latency-svc-9nr98 [749.79854ms]
+Jan 10 17:34:56.879: INFO: Created: latency-svc-wfgzz
+Jan 10 17:34:56.919: INFO: Got endpoints: latency-svc-hxmzr [746.986541ms]
+Jan 10 17:34:56.934: INFO: Created: latency-svc-225jr
+Jan 10 17:34:56.970: INFO: Got endpoints: latency-svc-k5zp2 [751.371307ms]
+Jan 10 17:34:56.983: INFO: Created: latency-svc-6kcw6
+Jan 10 17:34:57.019: INFO: Got endpoints: latency-svc-j8zqx [748.70466ms]
+Jan 10 17:34:57.029: INFO: Created: latency-svc-5hv9w
+Jan 10 17:34:57.068: INFO: Got endpoints: latency-svc-8c287 [747.814602ms]
+Jan 10 17:34:57.078: INFO: Created: latency-svc-87qjs
+Jan 10 17:34:57.119: INFO: Got endpoints: latency-svc-2xmnp [751.548115ms]
+Jan 10 17:34:57.130: INFO: Created: latency-svc-4cwf7
+Jan 10 17:34:57.169: INFO: Got endpoints: latency-svc-ph7p5 [749.543106ms]
+Jan 10 17:34:57.218: INFO: Got endpoints: latency-svc-69tdk [750.154855ms]
+Jan 10 17:34:57.271: INFO: Got endpoints: latency-svc-77l2q [751.835166ms]
+Jan 10 17:34:57.318: INFO: Got endpoints: latency-svc-vkpdm [748.515213ms]
+Jan 10 17:34:57.369: INFO: Got endpoints: latency-svc-r54mr [749.655398ms]
+Jan 10 17:34:57.419: INFO: Got endpoints: latency-svc-h5wxd [748.225145ms]
+Jan 10 17:34:57.468: INFO: Got endpoints: latency-svc-v5mcm [748.776044ms]
+Jan 10 17:34:57.520: INFO: Got endpoints: latency-svc-x4tb6 [748.346952ms]
+Jan 10 17:34:57.568: INFO: Got endpoints: latency-svc-vqsj2 [749.474246ms]
+Jan 10 17:34:57.619: INFO: Got endpoints: latency-svc-wfgzz [749.731292ms]
+Jan 10 17:34:57.670: INFO: Got endpoints: latency-svc-225jr [750.31264ms]
+Jan 10 17:34:57.719: INFO: Got endpoints: latency-svc-6kcw6 [748.738987ms]
+Jan 10 17:34:57.769: INFO: Got endpoints: latency-svc-5hv9w [750.284904ms]
+Jan 10 17:34:57.822: INFO: Got endpoints: latency-svc-87qjs [753.966823ms]
+Jan 10 17:34:57.869: INFO: Got endpoints: latency-svc-4cwf7 [749.640358ms]
+Jan 10 17:34:57.869: INFO: Latencies: [15.702449ms 23.64955ms 32.901813ms 42.105306ms 54.403971ms 62.1991ms 69.775187ms 76.784698ms 84.90092ms 92.684748ms 102.262485ms 117.266518ms 121.102816ms 126.962986ms 129.36041ms 129.929286ms 130.703714ms 132.472462ms 132.627464ms 133.108378ms 134.781535ms 135.846226ms 136.2817ms 138.030248ms 138.678234ms 138.712335ms 141.426382ms 141.828056ms 142.09577ms 142.935687ms 145.454564ms 145.672355ms 145.827156ms 147.322342ms 148.459724ms 156.646167ms 195.872774ms 236.793648ms 275.142367ms 308.135711ms 344.879063ms 390.426665ms 429.143707ms 474.554178ms 515.316433ms 557.114599ms 600.628403ms 640.828082ms 681.930581ms 722.283393ms 742.595189ms 745.434084ms 746.18055ms 746.479756ms 746.668985ms 746.986541ms 747.147928ms 747.390281ms 747.568523ms 747.572638ms 747.76087ms 747.814602ms 747.986598ms 748.013203ms 748.028203ms 748.13387ms 748.21918ms 748.225145ms 748.320484ms 748.338703ms 748.346952ms 748.512941ms 748.515213ms 748.640343ms 748.684302ms 748.688794ms 748.70466ms 748.738987ms 748.771495ms 748.776044ms 748.794441ms 748.796364ms 748.818549ms 748.823852ms 748.848145ms 748.983435ms 749.068783ms 749.094952ms 749.151598ms 749.152718ms 749.163357ms 749.19027ms 749.33338ms 749.350135ms 749.385818ms 749.425356ms 749.454751ms 749.471381ms 749.474246ms 749.499032ms 749.543106ms 749.576094ms 749.621417ms 749.640358ms 749.655398ms 749.675635ms 749.677351ms 749.684595ms 749.695611ms 749.73085ms 749.731292ms 749.757267ms 749.777723ms 749.778008ms 749.79854ms 749.836359ms 749.83889ms 749.844262ms 749.926776ms 749.936494ms 749.978742ms 749.982143ms 750.091066ms 750.096357ms 750.112896ms 750.123417ms 750.143855ms 750.154855ms 750.206777ms 750.211448ms 750.228856ms 750.237437ms 750.258008ms 750.284904ms 750.289448ms 750.30865ms 750.31264ms 750.329606ms 750.331825ms 750.46587ms 750.536827ms 750.570312ms 750.579044ms 750.583727ms 750.644525ms 750.685262ms 750.693301ms 750.719346ms 750.733403ms 750.84181ms 750.879495ms 750.89374ms 750.995327ms 751.007537ms 751.066806ms 751.244543ms 751.262897ms 751.296196ms 751.325553ms 751.36535ms 751.371307ms 751.493672ms 751.493717ms 751.496481ms 751.525846ms 751.548115ms 751.678444ms 751.712904ms 751.744392ms 751.811062ms 751.819915ms 751.835166ms 751.859889ms 751.870502ms 751.908994ms 752.070036ms 752.532787ms 752.70895ms 752.791074ms 752.87811ms 753.001932ms 753.078567ms 753.735794ms 753.966823ms 759.537254ms 798.508436ms 798.512895ms 798.880961ms 798.98446ms 799.308511ms 799.528054ms 799.533857ms 799.630038ms 799.641602ms 799.711629ms 799.964532ms 800.377399ms 800.390061ms 800.421471ms 801.103386ms]
+Jan 10 17:34:57.870: INFO: 50 %ile: 749.543106ms
+Jan 10 17:34:57.870: INFO: 90 %ile: 753.001932ms
+Jan 10 17:34:57.870: INFO: 99 %ile: 800.421471ms
+Jan 10 17:34:57.870: INFO: Total sample count: 200
+[AfterEach] [sig-network] Service endpoints latency
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:34:57.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "svc-latency-5274" for this suite.
+
+• [SLOW TEST:10.809 seconds]
+[sig-network] Service endpoints latency
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should not be very high [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":277,"completed":103,"skipped":1883,"failed":0}
+SSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
+  listing custom resource definition objects works [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:34:57.889: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename custom-resource-definition
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] listing custom resource definition objects works [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:34:57.912: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:35:59.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "custom-resource-definition-4018" for this suite.
+
+• [SLOW TEST:61.204 seconds]
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  Simple CustomResourceDefinition
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
+    listing custom resource definition objects works [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":277,"completed":104,"skipped":1894,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Services
+  should find a service from listing all namespaces [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:35:59.093: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
+[It] should find a service from listing all namespaces [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: fetching services
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:35:59.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-4277" for this suite.
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
+•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":277,"completed":105,"skipped":1917,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1
+  should proxy logs on node using proxy subresource [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] version v1
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:35:59.149: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node using proxy subresource [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:35:59.186: INFO: (0) /api/v1/nodes/ip-172-20-52-46.ap-south-1.compute.internal/proxy/logs/:
+alternatives.log
+amazon/
+>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:35:59.634: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:36:02.648: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should deny crd creation [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Registering the crd webhook via the AdmissionRegistration API
+STEP: Creating a custom resource definition that should be denied by the webhook
+Jan 10 17:36:02.663: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:36:02.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-8272" for this suite.
+STEP: Destroying namespace "webhook-8272-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":277,"completed":107,"skipped":1966,"failed":0}
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  should perform canary updates and phased rolling updates of template modifications [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:36:02.734: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
+STEP: Creating service test in namespace statefulset-6573
+[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a new StatefulSet
+Jan 10 17:36:02.772: INFO: Found 0 stateful pods, waiting for 3
+Jan 10 17:36:12.776: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:36:12.776: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:36:12.776: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
+Jan 10 17:36:12.798: INFO: Updating stateful set ss2
+STEP: Creating a new revision
+STEP: Not applying an update when the partition is greater than the number of replicas
+STEP: Performing a canary update
+Jan 10 17:36:22.823: INFO: Updating stateful set ss2
+Jan 10 17:36:22.827: INFO: Waiting for Pod statefulset-6573/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+STEP: Restoring Pods to the correct revision when they are deleted
+Jan 10 17:36:32.855: INFO: Found 2 stateful pods, waiting for 3
+Jan 10 17:36:42.859: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:36:42.859: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:36:42.859: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Performing a phased rolling update
+Jan 10 17:36:42.879: INFO: Updating stateful set ss2
+Jan 10 17:36:42.884: INFO: Waiting for Pod statefulset-6573/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:36:52.905: INFO: Updating stateful set ss2
+Jan 10 17:36:52.911: INFO: Waiting for StatefulSet statefulset-6573/ss2 to complete update
+Jan 10 17:36:52.911: INFO: Waiting for Pod statefulset-6573/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:37:02.916: INFO: Waiting for StatefulSet statefulset-6573/ss2 to complete update
+Jan 10 17:37:02.916: INFO: Waiting for Pod statefulset-6573/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
+Jan 10 17:37:12.916: INFO: Deleting all statefulset in ns statefulset-6573
+Jan 10 17:37:12.918: INFO: Scaling statefulset ss2 to 0
+Jan 10 17:37:32.929: INFO: Waiting for statefulset status.replicas updated to 0
+Jan 10 17:37:32.931: INFO: Deleting statefulset ss2
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:37:32.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-6573" for this suite.
+
+• [SLOW TEST:90.212 seconds]
+[sig-apps] StatefulSet
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+    should perform canary updates and phased rolling updates of template modifications [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":277,"completed":108,"skipped":1987,"failed":0}
+SSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should run and stop simple daemon [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:37:32.946: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
+[It] should run and stop simple daemon [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating simple DaemonSet "daemon-set"
+STEP: Check that daemon pods launch on every node of the cluster.
+Jan 10 17:37:32.990: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:32.990: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:32.990: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:32.992: INFO: Number of nodes with available pods: 0
+Jan 10 17:37:32.992: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:33.996: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:33.996: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:33.996: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:33.998: INFO: Number of nodes with available pods: 0
+Jan 10 17:37:33.998: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:34.996: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:34.996: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:34.996: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:34.998: INFO: Number of nodes with available pods: 3
+Jan 10 17:37:34.998: INFO: Number of running nodes: 3, number of available pods: 3
+STEP: Stop a daemon pod, check that the daemon pod is revived.
+Jan 10 17:37:35.011: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:35.011: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:35.011: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:35.013: INFO: Number of nodes with available pods: 2
+Jan 10 17:37:35.013: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:36.017: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:36.017: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:36.017: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:36.020: INFO: Number of nodes with available pods: 2
+Jan 10 17:37:36.020: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:37.017: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:37.017: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:37.017: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:37.020: INFO: Number of nodes with available pods: 2
+Jan 10 17:37:37.020: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:38.017: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:38.017: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:38.017: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:38.020: INFO: Number of nodes with available pods: 2
+Jan 10 17:37:38.020: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:39.017: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:39.017: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:39.017: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:39.019: INFO: Number of nodes with available pods: 2
+Jan 10 17:37:39.019: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:40.017: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:40.017: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:40.017: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
+Jan 10 17:37:40.019: INFO: Number of nodes with available pods: 3
+Jan 10 17:37:40.019: INFO: Number of running nodes: 3, number of available pods: 3
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2555, will wait for the garbage collector to delete the pods
+Jan 10 17:37:40.079: INFO: Deleting DaemonSet.extensions daemon-set took: 5.316755ms
+Jan 10 17:37:40.179: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.220035ms
+Jan 10 17:37:48.381: INFO: Number of nodes with available pods: 0
+Jan 10 17:37:48.381: INFO: Number of running nodes: 0, number of available pods: 0
+Jan 10 17:37:48.383: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2555/daemonsets","resourceVersion":"15716"},"items":null}
+
+Jan 10 17:37:48.385: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2555/pods","resourceVersion":"15716"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:37:48.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-2555" for this suite.
+
+• [SLOW TEST:15.454 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should run and stop simple daemon [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":277,"completed":109,"skipped":1990,"failed":0}
+S
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:37:48.400: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
+[It] should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:37:48.433: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99" in namespace "projected-3311" to be "Succeeded or Failed"
+Jan 10 17:37:48.435: INFO: Pod "downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99": Phase="Pending", Reason="", readiness=false. Elapsed: 1.737972ms
+Jan 10 17:37:50.437: INFO: Pod "downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00440725s
+STEP: Saw pod success
+Jan 10 17:37:50.437: INFO: Pod "downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99" satisfied condition "Succeeded or Failed"
+Jan 10 17:37:50.439: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99 container client-container: <nil>
+STEP: delete the pod
+Jan 10 17:37:50.461: INFO: Waiting for pod downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99 to disappear
+Jan 10 17:37:50.462: INFO: Pod downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:37:50.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-3311" for this suite.
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":277,"completed":110,"skipped":1991,"failed":0}
+SSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:37:50.470: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
+Jan 10 17:37:50.490: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jan 10 17:37:50.498: INFO: Waiting for terminating namespaces to be deleted...
+Jan 10 17:37:50.500: INFO: 
+Logging pods the kubelet thinks is on node ip-172-20-33-172.ap-south-1.compute.internal before test
+Jan 10 17:37:50.505: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-tfj4x from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:37:50.505: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:37:50.505: INFO: 	Container systemd-logs ready: true, restart count 0
+Jan 10 17:37:50.505: INFO: calico-node-vgdrq from kube-system started at 2021-01-10 16:58:19 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.505: INFO: 	Container calico-node ready: true, restart count 0
+Jan 10 17:37:50.505: INFO: kube-proxy-ip-172-20-33-172.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:44 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.505: INFO: 	Container kube-proxy ready: true, restart count 0
+Jan 10 17:37:50.505: INFO: 
+Logging pods the kubelet thinks is on node ip-172-20-39-143.ap-south-1.compute.internal before test
+Jan 10 17:37:50.517: INFO: sonobuoy from sonobuoy started at 2021-01-10 17:08:58 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.517: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-zrwk8 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:37:50.517: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: 	Container systemd-logs ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: kube-proxy-ip-172-20-39-143.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:29 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.517: INFO: 	Container kube-proxy ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: sonobuoy-e2e-job-5c46f38a56914321 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:37:50.517: INFO: 	Container e2e ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: kube-dns-64f86fb8dd-ngh4q from kube-system started at 2021-01-10 17:12:23 +0000 UTC (3 container statuses recorded)
+Jan 10 17:37:50.517: INFO: 	Container dnsmasq ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: 	Container kubedns ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: 	Container sidecar ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: calico-node-ldj9k from kube-system started at 2021-01-10 16:58:16 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.517: INFO: 	Container calico-node ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: 
+Logging pods the kubelet thinks is on node ip-172-20-52-46.ap-south-1.compute.internal before test
+Jan 10 17:37:50.528: INFO: kube-proxy-ip-172-20-52-46.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:48 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.528: INFO: 	Container kube-proxy ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: kube-dns-64f86fb8dd-gdkpz from kube-system started at 2021-01-10 16:58:37 +0000 UTC (3 container statuses recorded)
+Jan 10 17:37:50.528: INFO: 	Container dnsmasq ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: 	Container kubedns ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: 	Container sidecar ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-sk6xf from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:37:50.528: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: 	Container systemd-logs ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: calico-node-nrg4h from kube-system started at 2021-01-10 16:58:13 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.528: INFO: 	Container calico-node ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: kube-dns-autoscaler-cd7778b7b-c8mf6 from kube-system started at 2021-01-10 16:58:37 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.528: INFO: 	Container autoscaler ready: true, restart count 0
+[It] validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Trying to launch a pod without a label to get a node which can launch it.
+STEP: Explicitly delete pod here to free the resource it takes.
+STEP: Trying to apply a random label on the found node.
+STEP: verifying the node has the label kubernetes.io/e2e-563c7004-9f1b-4c5a-9a26-17bd34ce022f 42
+STEP: Trying to relaunch the pod, now with labels.
+STEP: removing the label kubernetes.io/e2e-563c7004-9f1b-4c5a-9a26-17bd34ce022f off the node ip-172-20-33-172.ap-south-1.compute.internal
+STEP: verifying the node doesn't have the label kubernetes.io/e2e-563c7004-9f1b-4c5a-9a26-17bd34ce022f
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:37:54.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-6387" for this suite.
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
+•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":277,"completed":111,"skipped":2000,"failed":0}
+SSSSSS
+------------------------------
+[k8s.io] [sig-node] Events 
+  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] [sig-node] Events
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:37:54.586: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename events
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: retrieving the pod
+Jan 10 17:37:56.622: INFO: &Pod{ObjectMeta:{send-events-343e4f8a-9bad-41f7-8f61-4fd9c1ebe5db  events-1166 /api/v1/namespaces/events-1166/pods/send-events-343e4f8a-9bad-41f7-8f61-4fd9c1ebe5db 58e8d591-4b55-4d51-b085-6086414a3374 15807 0 2021-01-10 17:37:54 +0000 UTC   map[name:foo time:606335127] map[cni.projectcalico.org/podIP:100.108.158.179/32 cni.projectcalico.org/podIPs:100.108.158.179/32] [] []  [{e2e.test Update v1 2021-01-10 17:37:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:37:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:37:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 48 56 46 49 53 56 46 49 55 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6zrl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6zrl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6zrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:37:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:37:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:37:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:37:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.33.172,PodIP:100.108.158.179,StartTime:2021-01-10 17:37:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:37:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:docker-pullable://us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:docker://4c162e13781972491b138f3a72fff36ad8d285f1faf3a0718a901396bf7a4696,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.108.158.179,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+
+STEP: checking for scheduler event about the pod
+Jan 10 17:37:58.624: INFO: Saw scheduler event for our pod.
+STEP: checking for kubelet event about the pod
+Jan 10 17:38:00.627: INFO: Saw kubelet event for our pod.
+STEP: deleting the pod
+[AfterEach] [k8s.io] [sig-node] Events
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:38:00.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "events-1166" for this suite.
+
+• [SLOW TEST:6.055 seconds]
+[k8s.io] [sig-node] Events
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":277,"completed":112,"skipped":2006,"failed":0}
+SSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  works for multiple CRDs of same group but different versions [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:38:00.641: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] works for multiple CRDs of same group but different versions [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
+Jan 10 17:38:00.660: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
+Jan 10 17:38:18.364: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+Jan 10 17:38:27.075: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:38:45.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-5626" for this suite.
+
+• [SLOW TEST:44.453 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  works for multiple CRDs of same group but different versions [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":277,"completed":113,"skipped":2014,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should be able to deny attaching pod [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:38:45.095: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:38:45.426: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:38:48.441: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should be able to deny attaching pod [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Registering the webhook via the AdmissionRegistration API
+STEP: create a pod
+STEP: 'kubectl attach' the pod, should be denied by the webhook
+Jan 10 17:38:50.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 attach --namespace=webhook-3132 to-be-attached-pod -i -c=container1'
+Jan 10 17:38:50.563: INFO: rc: 1
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:38:50.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-3132" for this suite.
+STEP: Destroying namespace "webhook-3132-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+
+• [SLOW TEST:5.514 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to deny attaching pod [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":277,"completed":114,"skipped":2096,"failed":0}
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] LimitRange 
+  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-scheduling] LimitRange
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:38:50.610: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename limitrange
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a LimitRange
+STEP: Setting up watch
+STEP: Submitting a LimitRange
+STEP: Verifying LimitRange creation was observed
+Jan 10 17:38:50.642: INFO: observed the limitRanges list
+STEP: Fetching the LimitRange to ensure it has proper values
+Jan 10 17:38:50.645: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
+Jan 10 17:38:50.645: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
+STEP: Creating a Pod with no resource requirements
+STEP: Ensuring Pod has resource requirements applied from LimitRange
+Jan 10 17:38:50.651: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
+Jan 10 17:38:50.651: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
+STEP: Creating a Pod with partial resource requirements
+STEP: Ensuring Pod has merged resource requirements applied from LimitRange
+Jan 10 17:38:50.658: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
+Jan 10 17:38:50.658: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
+STEP: Failing to create a Pod with less than min resources
+STEP: Failing to create a Pod with more than max resources
+STEP: Updating a LimitRange
+STEP: Verifying LimitRange updating is effective
+STEP: Creating a Pod with less than former min resources
+STEP: Failing to create a Pod with more than max resources
+STEP: Deleting a LimitRange
+STEP: Verifying the LimitRange was deleted
+Jan 10 17:38:57.681: INFO: limitRange is already deleted
+STEP: Creating a Pod with more than former max resources
+[AfterEach] [sig-scheduling] LimitRange
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:38:57.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "limitrange-3509" for this suite.
+
+• [SLOW TEST:7.089 seconds]
+[sig-scheduling] LimitRange
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
+  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":277,"completed":115,"skipped":2111,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should mutate custom resource with different stored version [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:38:57.699: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:38:58.814: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jan 10 17:39:00.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897138, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897138, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897138, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897138, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:39:03.833: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should mutate custom resource with different stored version [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:39:03.835: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4977-crds.webhook.example.com via the AdmissionRegistration API
+STEP: Creating a custom resource while v1 is storage version
+STEP: Patching Custom Resource Definition to set v2 as storage
+STEP: Patching the custom resource while v2 is storage version
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:09.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-6036" for this suite.
+STEP: Destroying namespace "webhook-6036-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+
+• [SLOW TEST:12.316 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should mutate custom resource with different stored version [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":277,"completed":116,"skipped":2148,"failed":0}
+SSSSSSSSSSSS
+------------------------------
+[sig-network] Services 
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:10.015: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
+[It] should serve multiport endpoints from pods  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating service multi-endpoint-test in namespace services-2158
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2158 to expose endpoints map[]
+Jan 10 17:39:10.054: INFO: Get endpoints failed (2.713676ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
+Jan 10 17:39:11.056: INFO: successfully validated that service multi-endpoint-test in namespace services-2158 exposes endpoints map[] (1.005400971s elapsed)
+STEP: Creating pod pod1 in namespace services-2158
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2158 to expose endpoints map[pod1:[100]]
+Jan 10 17:39:13.075: INFO: successfully validated that service multi-endpoint-test in namespace services-2158 exposes endpoints map[pod1:[100]] (2.013753984s elapsed)
+STEP: Creating pod pod2 in namespace services-2158
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2158 to expose endpoints map[pod1:[100] pod2:[101]]
+Jan 10 17:39:15.097: INFO: successfully validated that service multi-endpoint-test in namespace services-2158 exposes endpoints map[pod1:[100] pod2:[101]] (2.018747203s elapsed)
+STEP: Deleting pod pod1 in namespace services-2158
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2158 to expose endpoints map[pod2:[101]]
+Jan 10 17:39:16.111: INFO: successfully validated that service multi-endpoint-test in namespace services-2158 exposes endpoints map[pod2:[101]] (1.008179261s elapsed)
+STEP: Deleting pod pod2 in namespace services-2158
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2158 to expose endpoints map[]
+Jan 10 17:39:17.121: INFO: successfully validated that service multi-endpoint-test in namespace services-2158 exposes endpoints map[] (1.004963046s elapsed)
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:17.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-2158" for this suite.
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
+
+• [SLOW TEST:7.131 seconds]
+[sig-network] Services
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":277,"completed":117,"skipped":2160,"failed":0}
+SSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
+  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:17.147: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
+[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[AfterEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:19.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-4943" for this suite.
+•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":118,"skipped":2168,"failed":0}
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:19.200: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
+[It] should provide podname only [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:39:19.226: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36" in namespace "projected-7334" to be "Succeeded or Failed"
+Jan 10 17:39:19.228: INFO: Pod "downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36": Phase="Pending", Reason="", readiness=false. Elapsed: 1.828479ms
+Jan 10 17:39:21.231: INFO: Pod "downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004342318s
+STEP: Saw pod success
+Jan 10 17:39:21.231: INFO: Pod "downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36" satisfied condition "Succeeded or Failed"
+Jan 10 17:39:21.233: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36 container client-container: <nil>
+STEP: delete the pod
+Jan 10 17:39:21.248: INFO: Waiting for pod downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36 to disappear
+Jan 10 17:39:21.249: INFO: Pod downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:21.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-7334" for this suite.
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":277,"completed":119,"skipped":2184,"failed":0}
+
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:21.256: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test emptydir 0777 on node default medium
+Jan 10 17:39:21.283: INFO: Waiting up to 5m0s for pod "pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0" in namespace "emptydir-7988" to be "Succeeded or Failed"
+Jan 10 17:39:21.285: INFO: Pod "pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.756577ms
+Jan 10 17:39:23.288: INFO: Pod "pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004526863s
+STEP: Saw pod success
+Jan 10 17:39:23.288: INFO: Pod "pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0" satisfied condition "Succeeded or Failed"
+Jan 10 17:39:23.290: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0 container test-container: <nil>
+STEP: delete the pod
+Jan 10 17:39:23.304: INFO: Waiting for pod pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0 to disappear
+Jan 10 17:39:23.306: INFO: Pod pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:23.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-7988" for this suite.
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":120,"skipped":2184,"failed":0}
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:23.315: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
+[It] RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:39:23.335: INFO: Creating deployment "test-recreate-deployment"
+Jan 10 17:39:23.341: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
+Jan 10 17:39:23.345: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
+Jan 10 17:39:25.350: INFO: Waiting deployment "test-recreate-deployment" to complete
+Jan 10 17:39:25.351: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
+Jan 10 17:39:25.358: INFO: Updating deployment test-recreate-deployment
+Jan 10 17:39:25.358: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
+[AfterEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
+Jan 10 17:39:25.409: INFO: Deployment "test-recreate-deployment":
+&Deployment{ObjectMeta:{test-recreate-deployment  deployment-1940 /apis/apps/v1/namespaces/deployment-1940/deployments/test-recreate-deployment 0e932fdd-c28c-4bef-af43-8d7fcbf6358c 16507 2 2021-01-10 17:39:23 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2021-01-10 17:39:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-01-10 17:39:25 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 
112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029d7b18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-01-10 17:39:25 +0000 UTC,LastTransitionTime:2021-01-10 17:39:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2021-01-10 17:39:25 +0000 UTC,LastTransitionTime:2021-01-10 17:39:23 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
+
+Jan 10 17:39:25.411: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
+&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-1940 /apis/apps/v1/namespaces/deployment-1940/replicasets/test-recreate-deployment-d5667d9c7 7d0d35d3-5a78-4370-9ad3-213c478aee94 16505 1 2021-01-10 17:39:25 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 0e932fdd-c28c-4bef-af43-8d7fcbf6358c 0xc004f421f0 0xc004f421f1}] []  [{kube-controller-manager Update apps/v1 2021-01-10 17:39:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 101 57 51 50 102 100 100 45 99 50 56 99 45 52 98 101 102 45 97 102 52 51 45 56 100 55 102 99 98 102 54 51 53 56 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 
121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f42268  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jan 10 17:39:25.411: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
+Jan 10 17:39:25.412: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-1940 /apis/apps/v1/namespaces/deployment-1940/replicasets/test-recreate-deployment-74d98b5f7c a5f12ffc-b334-4aa6-8056-0ef38656a527 16499 2 2021-01-10 17:39:23 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 0e932fdd-c28c-4bef-af43-8d7fcbf6358c 0xc004f420d7 0xc004f420d8}] []  [{kube-controller-manager Update apps/v1 2021-01-10 17:39:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 101 57 51 50 102 100 100 45 99 50 56 99 45 52 98 101 102 45 97 102 52 51 45 56 100 55 102 99 98 102 54 51 53 56 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 
115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f42188  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jan 10 17:39:25.414: INFO: Pod "test-recreate-deployment-d5667d9c7-kzg9l" is not available:
+&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-kzg9l test-recreate-deployment-d5667d9c7- deployment-1940 /api/v1/namespaces/deployment-1940/pods/test-recreate-deployment-d5667d9c7-kzg9l ba8d376c-9547-4dfe-ab9f-4cfc11db9446 16508 0 2021-01-10 17:39:25 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 7d0d35d3-5a78-4370-9ad3-213c478aee94 0xc004f427a0 0xc004f427a1}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 100 48 100 51 53 100 51 45 53 97 55 56 45 52 51 55 48 45 57 97 100 51 45 50 49 51 99 52 55 56 97 101 101 57 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:25 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 
116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j5css,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j5css,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j5css,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSC
onfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.33.172,PodIP:,StartTime:2021-01-10 17:39:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:25.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-1940" for this suite.
+•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":277,"completed":121,"skipped":2200,"failed":0}
+S
+------------------------------
+[sig-api-machinery] Secrets 
+  should fail to create secret due to empty secret key [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:25.424: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should fail to create secret due to empty secret key [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating projection with secret that has name secret-emptykey-test-a7ba6685-1018-4f9d-9b6a-9d9bfa17ce66
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:25.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-9111" for this suite.
+•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":277,"completed":122,"skipped":2201,"failed":0}
+SSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should receive events on concurrent watches in same order [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:25.452: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should receive events on concurrent watches in same order [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: starting a background goroutine to produce watch events
+STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:30.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-1946" for this suite.
+•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":277,"completed":123,"skipped":2205,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:30.209: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
+[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: updating the pod
+Jan 10 17:39:32.760: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7e97fa75-e6c2-4258-b9f1-d5b97ac62f03"
+Jan 10 17:39:32.761: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7e97fa75-e6c2-4258-b9f1-d5b97ac62f03" in namespace "pods-3913" to be "terminated due to deadline exceeded"
+Jan 10 17:39:32.762: INFO: Pod "pod-update-activedeadlineseconds-7e97fa75-e6c2-4258-b9f1-d5b97ac62f03": Phase="Running", Reason="", readiness=true. Elapsed: 1.952812ms
+Jan 10 17:39:34.765: INFO: Pod "pod-update-activedeadlineseconds-7e97fa75-e6c2-4258-b9f1-d5b97ac62f03": Phase="Running", Reason="", readiness=true. Elapsed: 2.004610263s
+Jan 10 17:39:36.768: INFO: Pod "pod-update-activedeadlineseconds-7e97fa75-e6c2-4258-b9f1-d5b97ac62f03": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.007209308s
+Jan 10 17:39:36.768: INFO: Pod "pod-update-activedeadlineseconds-7e97fa75-e6c2-4258-b9f1-d5b97ac62f03" satisfied condition "terminated due to deadline exceeded"
+[AfterEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:36.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-3913" for this suite.
+
+• [SLOW TEST:6.565 seconds]
+[k8s.io] Pods
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":277,"completed":124,"skipped":2234,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should get a host IP [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:36.778: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
+[It] should get a host IP [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating pod
+Jan 10 17:39:38.811: INFO: Pod pod-hostip-c24f1cf9-b84a-4d28-950c-ef621dde54be has hostIP: 172.20.33.172
+[AfterEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:38.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-8184" for this suite.
+•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":277,"completed":125,"skipped":2289,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:38.819: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
+[It] should provide podname only [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:39:38.844: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3d5d1a2-d385-4f29-bc4e-7eadcf278e16" in namespace "downward-api-2124" to be "Succeeded or Failed"
+Jan 10 17:39:38.846: INFO: Pod "downwardapi-volume-f3d5d1a2-d385-4f29-bc4e-7eadcf278e16": Phase="Pending", Reason="", readiness=false. Elapsed: 1.804179ms
+Jan 10 17:39:40.849: INFO: Pod "downwardapi-volume-f3d5d1a2-d385-4f29-bc4e-7eadcf278e16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004389057s
+STEP: Saw pod success
+Jan 10 17:39:40.849: INFO: Pod "downwardapi-volume-f3d5d1a2-d385-4f29-bc4e-7eadcf278e16" satisfied condition "Succeeded or Failed"
+Jan 10 17:39:40.851: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-f3d5d1a2-d385-4f29-bc4e-7eadcf278e16 container client-container: 
+STEP: delete the pod
+Jan 10 17:39:40.865: INFO: Waiting for pod downwardapi-volume-f3d5d1a2-d385-4f29-bc4e-7eadcf278e16 to disappear
+Jan 10 17:39:40.866: INFO: Pod downwardapi-volume-f3d5d1a2-d385-4f29-bc4e-7eadcf278e16 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:40.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-2124" for this suite.
+•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":277,"completed":126,"skipped":2320,"failed":0}
+SSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should support proportional scaling [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:40.873: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
+[It] deployment should support proportional scaling [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:39:40.893: INFO: Creating deployment "webserver-deployment"
+Jan 10 17:39:40.898: INFO: Waiting for observed generation 1
+Jan 10 17:39:42.903: INFO: Waiting for all required pods to come up
+Jan 10 17:39:42.907: INFO: Pod name httpd: Found 10 pods out of 10
+STEP: ensuring each pod is running
+Jan 10 17:39:44.920: INFO: Waiting for deployment "webserver-deployment" to complete
+Jan 10 17:39:44.924: INFO: Updating deployment "webserver-deployment" with a non-existent image
+Jan 10 17:39:44.930: INFO: Updating deployment webserver-deployment
+Jan 10 17:39:44.930: INFO: Waiting for observed generation 2
+Jan 10 17:39:46.935: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
+Jan 10 17:39:46.937: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
+Jan 10 17:39:46.938: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
+Jan 10 17:39:46.943: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
+Jan 10 17:39:46.943: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
+Jan 10 17:39:46.945: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
+Jan 10 17:39:46.948: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
+Jan 10 17:39:46.948: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
+Jan 10 17:39:46.954: INFO: Updating deployment webserver-deployment
+Jan 10 17:39:46.954: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
+Jan 10 17:39:46.959: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
+Jan 10 17:39:46.961: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
+[AfterEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
+Jan 10 17:39:46.967: INFO: Deployment "webserver-deployment":
+&Deployment{ObjectMeta:{webserver-deployment  deployment-4188 /apis/apps/v1/namespaces/deployment-4188/deployments/webserver-deployment bc373f28-e328-47ef-a254-90d89d808ebb 16977 3 2021-01-10 17:39:40 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2021-01-10 17:39:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 
105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004d335b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-10 17:39:43 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2021-01-10 17:39:44 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
+
+Jan 10 17:39:46.972: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
+&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-4188 /apis/apps/v1/namespaces/deployment-4188/replicasets/webserver-deployment-6676bcd6d4 22bb5384-9540-47ee-a91f-e24c8394511f 16980 3 2021-01-10 17:39:44 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment bc373f28-e328-47ef-a254-90d89d808ebb 0xc004d33a87 0xc004d33a88}] []  [{kube-controller-manager Update apps/v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 99 51 55 51 102 50 56 45 101 51 50 56 45 52 55 101 102 45 97 50 53 52 45 57 48 100 56 57 100 56 48 56 101 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 
125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004d33b08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jan 10 17:39:46.972: INFO: All old ReplicaSets of Deployment "webserver-deployment":
+Jan 10 17:39:46.972: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-4188 /apis/apps/v1/namespaces/deployment-4188/replicasets/webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 16978 3 2021-01-10 17:39:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment bc373f28-e328-47ef-a254-90d89d808ebb 0xc004d33b67 0xc004d33b68}] []  [{kube-controller-manager Update apps/v1 2021-01-10 17:39:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 99 51 55 51 102 50 56 45 101 51 50 56 45 52 55 101 102 45 97 50 53 52 45 57 48 100 56 57 100 56 48 56 101 98 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 
105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004d33bd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
+Jan 10 17:39:46.976: INFO: Pod "webserver-deployment-6676bcd6d4-9k7zp" is not available:
+&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9k7zp webserver-deployment-6676bcd6d4- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-6676bcd6d4-9k7zp ce874a82-656b-4506-9c79-c286adba2d16 16966 0 2021-01-10 17:39:44 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[cni.projectcalico.org/podIP:100.108.158.135/32 cni.projectcalico.org/podIPs:100.108.158.135/32] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 22bb5384-9540-47ee-a91f-e24c8394511f 0xc0020ce8b7 0xc0020ce8b8}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 50 98 98 53 51 56 52 45 57 53 52 48 45 52 55 101 101 45 97 57 49 102 45 101 50 52 99 56 51 57 52 53 49 49 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 
125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysc
tl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.33.172,PodIP:,StartTime:2021-01-10 17:39:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.976: INFO: Pod "webserver-deployment-6676bcd6d4-cdkx8" is not available:
+&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cdkx8 webserver-deployment-6676bcd6d4- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-6676bcd6d4-cdkx8 313f37a6-47af-4ba2-88d5-73206083b4f0 16982 0 2021-01-10 17:39:46 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 22bb5384-9540-47ee-a91f-e24c8394511f 0xc0020cea77 0xc0020cea78}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 50 98 98 53 51 56 52 45 57 53 52 48 45 52 55 101 101 45 97 57 49 102 45 101 50 52 99 56 51 57 52 53 49 49 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.976: INFO: Pod "webserver-deployment-6676bcd6d4-fh7mz" is not available:
+&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fh7mz webserver-deployment-6676bcd6d4- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-6676bcd6d4-fh7mz 5745a97b-2dcd-4e0a-bbf0-a19c8d79ee04 16951 0 2021-01-10 17:39:44 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[cni.projectcalico.org/podIP:100.112.27.221/32 cni.projectcalico.org/podIPs:100.112.27.221/32] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 22bb5384-9540-47ee-a91f-e24c8394511f 0xc0020ceb87 0xc0020ceb88}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 50 98 98 53 51 56 52 45 57 53 52 48 45 52 55 101 101 45 97 57 49 102 45 101 50 52 99 56 51 57 52 53 49 49 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 
125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-39-143.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysc
tl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.39.143,PodIP:,StartTime:2021-01-10 17:39:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.977: INFO: Pod "webserver-deployment-6676bcd6d4-nbqfl" is not available:
+&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nbqfl webserver-deployment-6676bcd6d4- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-6676bcd6d4-nbqfl 7645848b-49c3-4ab5-b1ed-fcbbdbbb19ef 16959 0 2021-01-10 17:39:44 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[cni.projectcalico.org/podIP:100.100.191.164/32 cni.projectcalico.org/podIPs:100.100.191.164/32] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 22bb5384-9540-47ee-a91f-e24c8394511f 0xc0020ced47 0xc0020ced48}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 50 98 98 53 51 56 52 45 57 53 52 48 45 52 55 101 101 45 97 57 49 102 45 101 50 52 99 56 51 57 52 53 49 49 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 
125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-46.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysct
l{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.52.46,PodIP:,StartTime:2021-01-10 17:39:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.977: INFO: Pod "webserver-deployment-6676bcd6d4-qj9nb" is not available:
+&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qj9nb webserver-deployment-6676bcd6d4- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-6676bcd6d4-qj9nb 3ff52f83-f09c-4fb6-9235-9fbe49cc226e 16961 0 2021-01-10 17:39:44 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[cni.projectcalico.org/podIP:100.100.191.165/32 cni.projectcalico.org/podIPs:100.100.191.165/32] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 22bb5384-9540-47ee-a91f-e24c8394511f 0xc0020cef17 0xc0020cef18}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 50 98 98 53 51 56 52 45 57 53 52 48 45 52 55 101 101 45 97 57 49 102 45 101 50 52 99 56 51 57 52 53 49 49 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 
125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-46.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysct
l{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.52.46,PodIP:,StartTime:2021-01-10 17:39:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.977: INFO: Pod "webserver-deployment-6676bcd6d4-vznhz" is not available:
+&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vznhz webserver-deployment-6676bcd6d4- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-6676bcd6d4-vznhz 8f7b30a0-2b8a-493d-ab88-15ab89de83b3 16967 0 2021-01-10 17:39:44 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[cni.projectcalico.org/podIP:100.108.158.131/32 cni.projectcalico.org/podIPs:100.108.158.131/32] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 22bb5384-9540-47ee-a91f-e24c8394511f 0xc0020cf0d7 0xc0020cf0d8}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 50 98 98 53 51 56 52 45 57 53 52 48 45 52 55 101 101 45 97 57 49 102 45 101 50 52 99 56 51 57 52 53 49 49 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 
125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysc
tl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.33.172,PodIP:,StartTime:2021-01-10 17:39:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.978: INFO: Pod "webserver-deployment-84855cf797-5h4sf" is not available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-5h4sf webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-5h4sf c087fbf8-449c-4285-bd75-f1ee8b20eb1a 16984 0 2021-01-10 17:39:46 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc0020cf2a7 0xc0020cf2a8}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.978: INFO: Pod "webserver-deployment-84855cf797-867s9" is available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-867s9 webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-867s9 959e3b3b-3c07-461a-982e-5531f8850361 16858 0 2021-01-10 17:39:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[cni.projectcalico.org/podIP:100.112.27.218/32 cni.projectcalico.org/podIPs:100.112.27.218/32] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc0020cf3a7 0xc0020cf3a8}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 
58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 49 50 46 50 55 46 50 49 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-39-143.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityCo
ntext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.39.143,PodIP:100.112.27.218,StartTime:2021-01-10 17:39:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:39:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0cf4fcd83725a7b587ae0ebd1069a8a44ac4238a0d67fed7e271d77df3fdb4ff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.112.27.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.978: INFO: Pod "webserver-deployment-84855cf797-bqm2t" is not available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-bqm2t webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-bqm2t 82713765-bc9d-4a39-ba49-f6e0db572bd5 16983 0 2021-01-10 17:39:46 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc0020cf987 0xc0020cf988}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.978: INFO: Pod "webserver-deployment-84855cf797-dr2sf" is available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-dr2sf webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-dr2sf 3d8f6b8e-b946-4adb-990f-15ae59acb3cb 16861 0 2021-01-10 17:39:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[cni.projectcalico.org/podIP:100.108.158.129/32 cni.projectcalico.org/podIPs:100.108.158.129/32] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc0020cfb77 0xc0020cfb78}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 
34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 48 56 46 49 53 56 46 49 50 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,Secu
rityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.33.172,PodIP:100.108.158.129,StartTime:2021-01-10 17:39:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:39:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://68dfa33bc8ddab52afda2e25f2036e3e28c5b6986faccba6c1c4b10cc84a902a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.108.158.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.979: INFO: Pod "webserver-deployment-84855cf797-knglg" is available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-knglg webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-knglg a047f02b-594a-4207-84e4-97cd9d461be5 16873 0 2021-01-10 17:39:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[cni.projectcalico.org/podIP:100.100.191.163/32 cni.projectcalico.org/podIPs:100.100.191.163/32] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc0020cfd27 0xc0020cfd28}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 
34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 48 48 46 49 57 49 46 49 54 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-46.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,Secur
ityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.52.46,PodIP:100.100.191.163,StartTime:2021-01-10 17:39:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:39:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ebdf8f1783f2b1441d006bef109d52f318a6015ddfd64b4f3ff938f057c4dd3b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.100.191.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.979: INFO: Pod "webserver-deployment-84855cf797-lq5pt" is not available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-lq5pt webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-lq5pt 05c5a95f-4844-48cb-82fa-f15dd93226e0 16981 0 2021-01-10 17:39:46 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc0005024d7 0xc0005024d8}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.979: INFO: Pod "webserver-deployment-84855cf797-mdffp" is available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-mdffp webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-mdffp 29ba721b-2c54-4a2d-8c8c-af4741fee438 16870 0 2021-01-10 17:39:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[cni.projectcalico.org/podIP:100.100.191.161/32 cni.projectcalico.org/podIPs:100.100.191.161/32] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc000502730 0xc000502731}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 
34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 48 48 46 49 57 49 46 49 54 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-46.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,Secur
ityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.52.46,PodIP:100.100.191.161,StartTime:2021-01-10 17:39:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:39:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e4145c1cdbdca88994fbc09b301e8bd202bd3818c9be14a4c0536008b1c3898f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.100.191.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.979: INFO: Pod "webserver-deployment-84855cf797-tpr5p" is available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-tpr5p webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-tpr5p d9249158-8e7f-40d1-9787-bde90fbf7665 16883 0 2021-01-10 17:39:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[cni.projectcalico.org/podIP:100.112.27.220/32 cni.projectcalico.org/podIPs:100.112.27.220/32] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc000502b07 0xc000502b08}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 
58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 49 50 46 50 55 46 50 50 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-39-143.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityCo
ntext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.39.143,PodIP:100.112.27.220,StartTime:2021-01-10 17:39:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:39:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://bb4c4fb756c06f51a3320bf1e65d36dd787cff22ceb2377c4c6e34c521366f4f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.112.27.220,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.980: INFO: Pod "webserver-deployment-84855cf797-vwsqn" is available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-vwsqn webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-vwsqn 4a9655f3-17aa-4402-8b69-2cf67f50ef9c 16880 0 2021-01-10 17:39:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[cni.projectcalico.org/podIP:100.112.27.219/32 cni.projectcalico.org/podIPs:100.112.27.219/32] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc000502fb7 0xc000502fb8}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 
58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 49 50 46 50 55 46 50 49 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-39-143.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityCo
ntext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.39.143,PodIP:100.112.27.219,StartTime:2021-01-10 17:39:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:39:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c1e56f8853ca2dbd274db6479881b8c3eade9da3203cf2fe1ac9adfdd40a362a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.112.27.219,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.980: INFO: Pod "webserver-deployment-84855cf797-wc9t7" is available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-wc9t7 webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-wc9t7 e72103c4-c4f0-4985-a387-549433b40f4c 16876 0 2021-01-10 17:39:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[cni.projectcalico.org/podIP:100.100.191.162/32 cni.projectcalico.org/podIPs:100.100.191.162/32] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc000503257 0xc000503258}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 
34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 48 48 46 49 57 49 46 49 54 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-46.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,Secur
ityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.52.46,PodIP:100.100.191.162,StartTime:2021-01-10 17:39:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:39:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c01651a287dd86669cf4e8aeee5059ec683a04f266dcb9ecc1c661d8d891bc49,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.100.191.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jan 10 17:39:46.980: INFO: Pod "webserver-deployment-84855cf797-xwq76" is available:
+&Pod{ObjectMeta:{webserver-deployment-84855cf797-xwq76 webserver-deployment-84855cf797- deployment-4188 /api/v1/namespaces/deployment-4188/pods/webserver-deployment-84855cf797-xwq76 4fe57049-7437-4385-b4bb-49be9a4fb57f 16864 0 2021-01-10 17:39:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[cni.projectcalico.org/podIP:100.108.158.132/32 cni.projectcalico.org/podIPs:100.108.158.132/32] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b06374df-b59a-4723-b3c2-671fa4836255 0xc000503607 0xc000503608}] []  [{kube-controller-manager Update v1 2021-01-10 17:39:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 48 54 51 55 52 100 102 45 98 53 57 97 45 52 55 50 51 45 98 51 99 50 45 54 55 49 102 97 52 56 51 54 50 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:39:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:39:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 
34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 48 56 46 49 53 56 46 49 51 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d97vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d97vl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d97vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,Secu
rityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:39:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.33.172,PodIP:100.108.158.132,StartTime:2021-01-10 17:39:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:39:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://48d9f30f4a0a0e1cc916ff68a115425faa350c6f0a56a1c63e917572ad1b6247,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.108.158.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:46.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-4188" for this suite.
+
+• [SLOW TEST:6.117 seconds]
+[sig-apps] Deployment
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  deployment should support proportional scaling [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":277,"completed":127,"skipped":2334,"failed":0}
+[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
+  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Security Context
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:46.990: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename security-context-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Security Context
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
+[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:39:47.072: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-66293bae-af1c-449a-a34c-48f66ca73854" in namespace "security-context-test-9717" to be "Succeeded or Failed"
+Jan 10 17:39:47.073: INFO: Pod "busybox-readonly-false-66293bae-af1c-449a-a34c-48f66ca73854": Phase="Pending", Reason="", readiness=false. Elapsed: 1.623706ms
+Jan 10 17:39:49.076: INFO: Pod "busybox-readonly-false-66293bae-af1c-449a-a34c-48f66ca73854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00419365s
+Jan 10 17:39:49.076: INFO: Pod "busybox-readonly-false-66293bae-af1c-449a-a34c-48f66ca73854" satisfied condition "Succeeded or Failed"
+[AfterEach] [k8s.io] Security Context
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:49.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "security-context-test-9717" for this suite.
+•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":277,"completed":128,"skipped":2334,"failed":0}
+SS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and capture the life of a secret. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:49.083: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Discovering how many secrets are in namespace by default
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+STEP: Creating a Secret
+STEP: Ensuring resource quota status captures secret creation
+STEP: Deleting a secret
+STEP: Ensuring resource quota status released usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:40:06.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-9614" for this suite.
+
+• [SLOW TEST:17.063 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and capture the life of a secret. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":277,"completed":129,"skipped":2336,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:40:06.147: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
+[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating pod liveness-33f2485d-09f5-4412-801a-06cb07d34c6a in namespace container-probe-6569
+Jan 10 17:40:08.180: INFO: Started pod liveness-33f2485d-09f5-4412-801a-06cb07d34c6a in namespace container-probe-6569
+STEP: checking the pod's current state and verifying that restartCount is present
+Jan 10 17:40:08.181: INFO: Initial restart count of pod liveness-33f2485d-09f5-4412-801a-06cb07d34c6a is 0
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:44:08.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-6569" for this suite.
+
+• [SLOW TEST:242.353 seconds]
+[k8s.io] Probing container
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":277,"completed":130,"skipped":2362,"failed":0}
+S
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with projected pod [LinuxOnly] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Subpath
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:44:08.501: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating pod pod-subpath-test-projected-nf6w
+STEP: Creating a pod to test atomic-volume-subpath
+Jan 10 17:44:08.533: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-nf6w" in namespace "subpath-3422" to be "Succeeded or Failed"
+Jan 10 17:44:08.535: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Pending", Reason="", readiness=false. Elapsed: 1.74767ms
+Jan 10 17:44:10.537: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Running", Reason="", readiness=true. Elapsed: 2.004266674s
+Jan 10 17:44:12.540: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Running", Reason="", readiness=true. Elapsed: 4.006878333s
+Jan 10 17:44:14.542: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Running", Reason="", readiness=true. Elapsed: 6.009261246s
+Jan 10 17:44:16.545: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Running", Reason="", readiness=true. Elapsed: 8.011805931s
+Jan 10 17:44:18.547: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Running", Reason="", readiness=true. Elapsed: 10.01452386s
+Jan 10 17:44:20.550: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Running", Reason="", readiness=true. Elapsed: 12.017088498s
+Jan 10 17:44:22.553: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Running", Reason="", readiness=true. Elapsed: 14.019713406s
+Jan 10 17:44:24.555: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Running", Reason="", readiness=true. Elapsed: 16.022499455s
+Jan 10 17:44:26.558: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Running", Reason="", readiness=true. Elapsed: 18.025065407s
+Jan 10 17:44:28.560: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Running", Reason="", readiness=true. Elapsed: 20.027521938s
+Jan 10 17:44:30.563: INFO: Pod "pod-subpath-test-projected-nf6w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.030018829s
+STEP: Saw pod success
+Jan 10 17:44:30.563: INFO: Pod "pod-subpath-test-projected-nf6w" satisfied condition "Succeeded or Failed"
+Jan 10 17:44:30.565: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-subpath-test-projected-nf6w container test-container-subpath-projected-nf6w: 
+STEP: delete the pod
+Jan 10 17:44:30.586: INFO: Waiting for pod pod-subpath-test-projected-nf6w to disappear
+Jan 10 17:44:30.588: INFO: Pod pod-subpath-test-projected-nf6w no longer exists
+STEP: Deleting pod pod-subpath-test-projected-nf6w
+Jan 10 17:44:30.588: INFO: Deleting pod "pod-subpath-test-projected-nf6w" in namespace "subpath-3422"
+[AfterEach] [sig-storage] Subpath
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:44:30.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-3422" for this suite.
+
+• [SLOW TEST:22.095 seconds]
+[sig-storage] Subpath
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
+  Atomic writer volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with projected pod [LinuxOnly] [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":277,"completed":131,"skipped":2363,"failed":0}
+S
+------------------------------
+[k8s.io] Probing container 
+  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:44:30.596: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
+[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[AfterEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:45:30.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-6226" for this suite.
+
+• [SLOW TEST:60.037 seconds]
+[k8s.io] Probing container
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":277,"completed":132,"skipped":2364,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Docker Containers 
+  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:45:30.634: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test override all
+Jan 10 17:45:30.659: INFO: Waiting up to 5m0s for pod "client-containers-25e31261-1c72-4b10-a0f9-30be271af979" in namespace "containers-798" to be "Succeeded or Failed"
+Jan 10 17:45:30.661: INFO: Pod "client-containers-25e31261-1c72-4b10-a0f9-30be271af979": Phase="Pending", Reason="", readiness=false. Elapsed: 1.7052ms
+Jan 10 17:45:32.663: INFO: Pod "client-containers-25e31261-1c72-4b10-a0f9-30be271af979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004312246s
+STEP: Saw pod success
+Jan 10 17:45:32.663: INFO: Pod "client-containers-25e31261-1c72-4b10-a0f9-30be271af979" satisfied condition "Succeeded or Failed"
+Jan 10 17:45:32.665: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod client-containers-25e31261-1c72-4b10-a0f9-30be271af979 container test-container: 
+STEP: delete the pod
+Jan 10 17:45:32.680: INFO: Waiting for pod client-containers-25e31261-1c72-4b10-a0f9-30be271af979 to disappear
+Jan 10 17:45:32.681: INFO: Pod client-containers-25e31261-1c72-4b10-a0f9-30be271af979 no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:45:32.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "containers-798" for this suite.
+•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":277,"completed":133,"skipped":2407,"failed":0}
+SS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should verify ResourceQuota with terminating scopes. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:45:32.688: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should verify ResourceQuota with terminating scopes. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a ResourceQuota with terminating scope
+STEP: Ensuring ResourceQuota status is calculated
+STEP: Creating a ResourceQuota with not terminating scope
+STEP: Ensuring ResourceQuota status is calculated
+STEP: Creating a long running pod
+STEP: Ensuring resource quota with not terminating scope captures the pod usage
+STEP: Ensuring resource quota with terminating scope ignored the pod usage
+STEP: Deleting the pod
+STEP: Ensuring resource quota status released the pod usage
+STEP: Creating a terminating pod
+STEP: Ensuring resource quota with terminating scope captures the pod usage
+STEP: Ensuring resource quota with not terminating scope ignored the pod usage
+STEP: Deleting the pod
+STEP: Ensuring resource quota status released the pod usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:45:48.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-3093" for this suite.
+
+• [SLOW TEST:16.095 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should verify ResourceQuota with terminating scopes. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":277,"completed":134,"skipped":2409,"failed":0}
+SSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:45:48.783: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
+Jan 10 17:45:48.803: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jan 10 17:45:48.812: INFO: Waiting for terminating namespaces to be deleted...
+Jan 10 17:45:48.814: INFO: 
+Logging pods the kubelet thinks are on node ip-172-20-33-172.ap-south-1.compute.internal before test
+Jan 10 17:45:48.818: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-tfj4x from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:45:48.818: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:45:48.818: INFO: 	Container systemd-logs ready: true, restart count 0
+Jan 10 17:45:48.818: INFO: kube-proxy-ip-172-20-33-172.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:44 +0000 UTC (1 container statuses recorded)
+Jan 10 17:45:48.818: INFO: 	Container kube-proxy ready: true, restart count 0
+Jan 10 17:45:48.818: INFO: calico-node-vgdrq from kube-system started at 2021-01-10 16:58:19 +0000 UTC (1 container statuses recorded)
+Jan 10 17:45:48.818: INFO: 	Container calico-node ready: true, restart count 0
+Jan 10 17:45:48.818: INFO: 
+Logging pods the kubelet thinks are on node ip-172-20-39-143.ap-south-1.compute.internal before test
+Jan 10 17:45:48.830: INFO: calico-node-ldj9k from kube-system started at 2021-01-10 16:58:16 +0000 UTC (1 container statuses recorded)
+Jan 10 17:45:48.830: INFO: 	Container calico-node ready: true, restart count 0
+Jan 10 17:45:48.830: INFO: sonobuoy-e2e-job-5c46f38a56914321 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:45:48.830: INFO: 	Container e2e ready: true, restart count 0
+Jan 10 17:45:48.830: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:45:48.830: INFO: kube-dns-64f86fb8dd-ngh4q from kube-system started at 2021-01-10 17:12:23 +0000 UTC (3 container statuses recorded)
+Jan 10 17:45:48.830: INFO: 	Container dnsmasq ready: true, restart count 0
+Jan 10 17:45:48.830: INFO: 	Container kubedns ready: true, restart count 0
+Jan 10 17:45:48.830: INFO: 	Container sidecar ready: true, restart count 0
+Jan 10 17:45:48.830: INFO: kube-proxy-ip-172-20-39-143.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:29 +0000 UTC (1 container statuses recorded)
+Jan 10 17:45:48.830: INFO: 	Container kube-proxy ready: true, restart count 0
+Jan 10 17:45:48.830: INFO: sonobuoy from sonobuoy started at 2021-01-10 17:08:58 +0000 UTC (1 container statuses recorded)
+Jan 10 17:45:48.830: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Jan 10 17:45:48.830: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-zrwk8 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:45:48.830: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:45:48.830: INFO: 	Container systemd-logs ready: true, restart count 0
+Jan 10 17:45:48.830: INFO: 
+Logging pods the kubelet thinks are on node ip-172-20-52-46.ap-south-1.compute.internal before test
+Jan 10 17:45:48.841: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-sk6xf from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:45:48.841: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:45:48.841: INFO: 	Container systemd-logs ready: true, restart count 0
+Jan 10 17:45:48.841: INFO: kube-proxy-ip-172-20-52-46.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:48 +0000 UTC (1 container statuses recorded)
+Jan 10 17:45:48.841: INFO: 	Container kube-proxy ready: true, restart count 0
+Jan 10 17:45:48.841: INFO: kube-dns-64f86fb8dd-gdkpz from kube-system started at 2021-01-10 16:58:37 +0000 UTC (3 container statuses recorded)
+Jan 10 17:45:48.841: INFO: 	Container dnsmasq ready: true, restart count 0
+Jan 10 17:45:48.841: INFO: 	Container kubedns ready: true, restart count 0
+Jan 10 17:45:48.841: INFO: 	Container sidecar ready: true, restart count 0
+Jan 10 17:45:48.841: INFO: calico-node-nrg4h from kube-system started at 2021-01-10 16:58:13 +0000 UTC (1 container statuses recorded)
+Jan 10 17:45:48.841: INFO: 	Container calico-node ready: true, restart count 0
+Jan 10 17:45:48.841: INFO: kube-dns-autoscaler-cd7778b7b-c8mf6 from kube-system started at 2021-01-10 16:58:37 +0000 UTC (1 container statuses recorded)
+Jan 10 17:45:48.841: INFO: 	Container autoscaler ready: true, restart count 0
+[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Trying to launch a pod without a label to get a node which can launch it.
+STEP: Explicitly delete pod here to free the resource it takes.
+STEP: Trying to apply a random label on the found node.
+STEP: verifying the node has the label kubernetes.io/e2e-d8b5231a-dd1b-47b8-b635-295604f3f0a4 90
+STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
+STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
+STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
+STEP: removing the label kubernetes.io/e2e-d8b5231a-dd1b-47b8-b635-295604f3f0a4 off the node ip-172-20-33-172.ap-south-1.compute.internal
+STEP: verifying the node doesn't have the label kubernetes.io/e2e-d8b5231a-dd1b-47b8-b635-295604f3f0a4
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:45:56.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-5487" for this suite.
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
+
+• [SLOW TEST:8.134 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
+  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":277,"completed":135,"skipped":2419,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Services 
+  should be able to change the type from ClusterIP to ExternalName [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:45:56.918: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
+[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9762
+STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
+STEP: creating service externalsvc in namespace services-9762
+STEP: creating replication controller externalsvc in namespace services-9762
+I0110 17:45:56.966218      24 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9762, replica count: 2
+I0110 17:46:00.016591      24 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+STEP: changing the ClusterIP service to type=ExternalName
+Jan 10 17:46:00.036: INFO: Creating new exec pod
+Jan 10 17:46:02.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-9762 execpodhz62j -- /bin/sh -x -c nslookup clusterip-service'
+Jan 10 17:46:02.423: INFO: stderr: "+ nslookup clusterip-service\n"
+Jan 10 17:46:02.423: INFO: stdout: "Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nclusterip-service.services-9762.svc.cluster.local\tcanonical name = externalsvc.services-9762.svc.cluster.local.\nName:\texternalsvc.services-9762.svc.cluster.local\nAddress: 100.66.130.177\n\n"
+STEP: deleting ReplicationController externalsvc in namespace services-9762, will wait for the garbage collector to delete the pods
+Jan 10 17:46:02.482: INFO: Deleting ReplicationController externalsvc took: 6.290958ms
+Jan 10 17:46:02.882: INFO: Terminating ReplicationController externalsvc pods took: 400.236234ms
+Jan 10 17:46:14.299: INFO: Cleaning up the ClusterIP to ExternalName test service
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:46:14.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-9762" for this suite.
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
+
+• [SLOW TEST:17.403 seconds]
+[sig-network] Services
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should be able to change the type from ClusterIP to ExternalName [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":277,"completed":136,"skipped":2442,"failed":0}
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:46:14.325: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
+[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:46:14.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ccc9b97-0ca4-45c1-bc30-58a5fefa5b3b" in namespace "projected-2258" to be "Succeeded or Failed"
+Jan 10 17:46:14.360: INFO: Pod "downwardapi-volume-4ccc9b97-0ca4-45c1-bc30-58a5fefa5b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.687231ms
+Jan 10 17:46:16.363: INFO: Pod "downwardapi-volume-4ccc9b97-0ca4-45c1-bc30-58a5fefa5b3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0057068s
+STEP: Saw pod success
+Jan 10 17:46:16.363: INFO: Pod "downwardapi-volume-4ccc9b97-0ca4-45c1-bc30-58a5fefa5b3b" satisfied condition "Succeeded or Failed"
+Jan 10 17:46:16.365: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-4ccc9b97-0ca4-45c1-bc30-58a5fefa5b3b container client-container: 
+STEP: delete the pod
+Jan 10 17:46:16.379: INFO: Waiting for pod downwardapi-volume-4ccc9b97-0ca4-45c1-bc30-58a5fefa5b3b to disappear
+Jan 10 17:46:16.381: INFO: Pod downwardapi-volume-4ccc9b97-0ca4-45c1-bc30-58a5fefa5b3b no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:46:16.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-2258" for this suite.
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":277,"completed":137,"skipped":2458,"failed":0}
+SSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:46:16.389: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
+[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating pod liveness-f26d7178-dcd2-4850-9aa7-19b808ec80c4 in namespace container-probe-8536
+Jan 10 17:46:18.422: INFO: Started pod liveness-f26d7178-dcd2-4850-9aa7-19b808ec80c4 in namespace container-probe-8536
+STEP: checking the pod's current state and verifying that restartCount is present
+Jan 10 17:46:18.423: INFO: Initial restart count of pod liveness-f26d7178-dcd2-4850-9aa7-19b808ec80c4 is 0
+Jan 10 17:46:32.443: INFO: Restart count of pod container-probe-8536/liveness-f26d7178-dcd2-4850-9aa7-19b808ec80c4 is now 1 (14.019292888s elapsed)
+Jan 10 17:46:50.465: INFO: Restart count of pod container-probe-8536/liveness-f26d7178-dcd2-4850-9aa7-19b808ec80c4 is now 2 (32.041703233s elapsed)
+Jan 10 17:47:10.490: INFO: Restart count of pod container-probe-8536/liveness-f26d7178-dcd2-4850-9aa7-19b808ec80c4 is now 3 (52.066840529s elapsed)
+Jan 10 17:47:30.516: INFO: Restart count of pod container-probe-8536/liveness-f26d7178-dcd2-4850-9aa7-19b808ec80c4 is now 4 (1m12.092644017s elapsed)
+Jan 10 17:48:32.599: INFO: Restart count of pod container-probe-8536/liveness-f26d7178-dcd2-4850-9aa7-19b808ec80c4 is now 5 (2m14.175131531s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:48:32.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-8536" for this suite.
+
+• [SLOW TEST:136.226 seconds]
+[k8s.io] Probing container
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":277,"completed":138,"skipped":2467,"failed":0}
+SSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:48:32.615: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
+[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:48:32.636: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
+Jan 10 17:48:32.643: INFO: Pod name sample-pod: Found 0 pods out of 1
+Jan 10 17:48:37.646: INFO: Pod name sample-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Jan 10 17:48:37.646: INFO: Creating deployment "test-rolling-update-deployment"
+Jan 10 17:48:37.649: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
+Jan 10 17:48:37.653: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
+Jan 10 17:48:39.657: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
+Jan 10 17:48:39.658: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
+[AfterEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
+Jan 10 17:48:39.664: INFO: Deployment "test-rolling-update-deployment":
+&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-4285 /apis/apps/v1/namespaces/deployment-4285/deployments/test-rolling-update-deployment 08126473-c63a-426e-9fb6-45311d2fbe03 19386 1 2021-01-10 17:48:37 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2021-01-10 17:48:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-01-10 17:48:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004eb9ac8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-10 17:48:37 +0000 UTC,LastTransitionTime:2021-01-10 17:48:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2021-01-10 17:48:39 +0000 UTC,LastTransitionTime:2021-01-10 17:48:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
+
+Jan 10 17:48:39.666: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
+&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-4285 /apis/apps/v1/namespaces/deployment-4285/replicasets/test-rolling-update-deployment-59d5cb45c7 43178ffd-2adf-46da-a2da-6ee4d8743a0d 19379 1 2021-01-10 17:48:37 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 08126473-c63a-426e-9fb6-45311d2fbe03 0xc004d32057 0xc004d32058}] []  [{kube-controller-manager Update apps/v1 2021-01-10 17:48:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 56 49 50 54 52 55 51 45 99 54 51 97 45 52 50 54 101 45 57 102 98 54 45 52 53 51 49 49 100 50 102 98 101 48 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 
115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004d320e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
+Jan 10 17:48:39.666: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
+Jan 10 17:48:39.666: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-4285 /apis/apps/v1/namespaces/deployment-4285/replicasets/test-rolling-update-controller 51b71c8a-da6d-4982-a369-6dcf3697debf 19385 2 2021-01-10 17:48:32 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 08126473-c63a-426e-9fb6-45311d2fbe03 0xc004eb9f47 0xc004eb9f48}] []  [{e2e.test Update apps/v1 2021-01-10 17:48:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-01-10 17:48:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 56 49 50 54 52 55 51 45 99 54 51 97 45 52 50 54 101 45 57 102 98 54 45 52 53 51 49 49 100 50 102 98 101 48 51 92 
34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004eb9fe8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jan 10 17:48:39.669: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-fbbsl" is available:
+&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-fbbsl test-rolling-update-deployment-59d5cb45c7- deployment-4285 /api/v1/namespaces/deployment-4285/pods/test-rolling-update-deployment-59d5cb45c7-fbbsl 8b9c6a99-21a9-4956-8029-e115285e22a6 19378 0 2021-01-10 17:48:37 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[cni.projectcalico.org/podIP:100.108.158.151/32 cni.projectcalico.org/podIPs:100.108.158.151/32] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 43178ffd-2adf-46da-a2da-6ee4d8743a0d 0xc004d32617 0xc004d32618}] []  [{kube-controller-manager Update v1 2021-01-10 17:48:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 51 49 55 56 102 102 100 45 50 97 100 102 45 52 54 100 97 45 97 50 100 97 45 54 101 101 52 100 56 55 52 51 97 48 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:48:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:48:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 
34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 48 56 46 49 53 56 46 49 53 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q7rtw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q7rtw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q7rtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:48:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:48:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:48:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:48:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.33.172,PodIP:100.108.158.151,StartTime:2021-01-10 17:48:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:48:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:docker-pullable://us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:docker://0517d6ed0336e86874bd5e816cd5ae20c18cf0e47237eced4294f27469e8dec8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.108.158.151,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:48:39.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-4285" for this suite.
+
+• [SLOW TEST:7.061 seconds]
+[sig-apps] Deployment
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":277,"completed":139,"skipped":2478,"failed":0}
+SSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  patching/updating a validating webhook should work [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:48:39.677: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:48:40.537: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:48:43.551: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] patching/updating a validating webhook should work [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a validating webhook configuration
+STEP: Creating a configMap that does not comply to the validation webhook rules
+STEP: Updating a validating webhook configuration's rules to not include the create operation
+STEP: Creating a configMap that does not comply to the validation webhook rules
+STEP: Patching a validating webhook configuration's rules to include the create operation
+STEP: Creating a configMap that does not comply to the validation webhook rules
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:48:43.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-2844" for this suite.
+STEP: Destroying namespace "webhook-2844-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":277,"completed":140,"skipped":2484,"failed":0}
+SSSSSSSSSSS
+------------------------------
+[sig-node] Downward API 
+  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-node] Downward API
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:48:43.640: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward api env vars
+Jan 10 17:48:43.670: INFO: Waiting up to 5m0s for pod "downward-api-59d4b6e3-c204-46f9-b38d-b3aa7194c67f" in namespace "downward-api-1292" to be "Succeeded or Failed"
+Jan 10 17:48:43.673: INFO: Pod "downward-api-59d4b6e3-c204-46f9-b38d-b3aa7194c67f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.45605ms
+Jan 10 17:48:45.676: INFO: Pod "downward-api-59d4b6e3-c204-46f9-b38d-b3aa7194c67f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006046142s
+STEP: Saw pod success
+Jan 10 17:48:45.676: INFO: Pod "downward-api-59d4b6e3-c204-46f9-b38d-b3aa7194c67f" satisfied condition "Succeeded or Failed"
+Jan 10 17:48:45.678: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downward-api-59d4b6e3-c204-46f9-b38d-b3aa7194c67f container dapi-container: 
+STEP: delete the pod
+Jan 10 17:48:45.700: INFO: Waiting for pod downward-api-59d4b6e3-c204-46f9-b38d-b3aa7194c67f to disappear
+Jan 10 17:48:45.702: INFO: Pod downward-api-59d4b6e3-c204-46f9-b38d-b3aa7194c67f no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:48:45.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-1292" for this suite.
+•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":277,"completed":141,"skipped":2495,"failed":0}
+SS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:48:45.711: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for the rc to be deleted
+STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
+STEP: Gathering metrics
+Jan 10 17:49:25.762: INFO: For apiserver_request_total:
+For apiserver_request_latency_seconds:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+W0110 17:49:25.762754      24 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:49:25.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-399" for this suite.
+
+• [SLOW TEST:40.058 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":277,"completed":142,"skipped":2497,"failed":0}
+[sig-network] Services 
+  should be able to change the type from NodePort to ExternalName [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:49:25.769: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
+[It] should be able to change the type from NodePort to ExternalName [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating a service nodeport-service with the type=NodePort in namespace services-7710
+STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
+STEP: creating service externalsvc in namespace services-7710
+STEP: creating replication controller externalsvc in namespace services-7710
+I0110 17:49:25.817567      24 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7710, replica count: 2
+I0110 17:49:28.867997      24 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+STEP: changing the NodePort service to type=ExternalName
+Jan 10 17:49:28.884: INFO: Creating new exec pod
+Jan 10 17:49:30.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-7710 execpodkzmrd -- /bin/sh -x -c nslookup nodeport-service'
+Jan 10 17:49:31.168: INFO: stderr: "+ nslookup nodeport-service\n"
+Jan 10 17:49:31.168: INFO: stdout: "Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nnodeport-service.services-7710.svc.cluster.local\tcanonical name = externalsvc.services-7710.svc.cluster.local.\nName:\texternalsvc.services-7710.svc.cluster.local\nAddress: 100.68.167.75\n\n"
+STEP: deleting ReplicationController externalsvc in namespace services-7710, will wait for the garbage collector to delete the pods
+Jan 10 17:49:31.226: INFO: Deleting ReplicationController externalsvc took: 4.682172ms
+Jan 10 17:49:31.627: INFO: Terminating ReplicationController externalsvc pods took: 400.256727ms
+Jan 10 17:49:44.342: INFO: Cleaning up the NodePort to ExternalName test service
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:49:44.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-7710" for this suite.
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
+
+• [SLOW TEST:18.592 seconds]
+[sig-network] Services
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should be able to change the type from NodePort to ExternalName [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":277,"completed":143,"skipped":2497,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should be able to deny custom resource creation, update and deletion [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:49:44.362: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:49:45.031: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:49:48.046: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should be able to deny custom resource creation, update and deletion [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:49:48.048: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Registering the custom resource webhook via the AdmissionRegistration API
+STEP: Creating a custom resource that should be denied by the webhook
+STEP: Creating a custom resource whose deletion would be denied by the webhook
+STEP: Updating the custom resource with disallowed data should be denied
+STEP: Deleting the custom resource should be denied
+STEP: Remove the offending key and value from the custom resource data
+STEP: Deleting the updated custom resource should be successful
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:49:54.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-4520" for this suite.
+STEP: Destroying namespace "webhook-4520-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+
+• [SLOW TEST:9.817 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to deny custom resource creation, update and deletion [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":277,"completed":144,"skipped":2527,"failed":0}
+SSS
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should invoke init containers on a RestartNever pod [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:49:54.179: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
+[It] should invoke init containers on a RestartNever pod [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating the pod
+Jan 10 17:49:54.201: INFO: PodSpec: initContainers in spec.initContainers
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:49:57.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-1142" for this suite.
+•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":277,"completed":145,"skipped":2530,"failed":0}
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should unconditionally reject operations on fail closed webhook [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:49:57.250: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:49:57.559: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jan 10 17:49:59.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897797, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897797, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897797, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897797, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:50:02.579: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should unconditionally reject operations on fail closed webhook [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
+STEP: create a namespace for the webhook
+STEP: create a configmap should be unconditionally rejected by the webhook
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:02.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-7415" for this suite.
+STEP: Destroying namespace "webhook-7415-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+
+• [SLOW TEST:5.407 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should unconditionally reject operations on fail closed webhook [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":277,"completed":146,"skipped":2551,"failed":0}
+SSSSSSSSSSS
+------------------------------
+[sig-apps] ReplicationController 
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:02.657: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename replication-controller
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating replication controller my-hostname-basic-2c68581a-0779-4e1d-9f5f-aeb339153411
+Jan 10 17:50:02.692: INFO: Pod name my-hostname-basic-2c68581a-0779-4e1d-9f5f-aeb339153411: Found 0 pods out of 1
+Jan 10 17:50:07.694: INFO: Pod name my-hostname-basic-2c68581a-0779-4e1d-9f5f-aeb339153411: Found 1 pods out of 1
+Jan 10 17:50:07.694: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2c68581a-0779-4e1d-9f5f-aeb339153411" are running
+Jan 10 17:50:07.696: INFO: Pod "my-hostname-basic-2c68581a-0779-4e1d-9f5f-aeb339153411-2v7kb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-10 17:50:02 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-10 17:50:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-10 17:50:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-10 17:50:02 +0000 UTC Reason: Message:}])
+Jan 10 17:50:07.696: INFO: Trying to dial the pod
+Jan 10 17:50:12.703: INFO: Controller my-hostname-basic-2c68581a-0779-4e1d-9f5f-aeb339153411: Got expected result from replica 1 [my-hostname-basic-2c68581a-0779-4e1d-9f5f-aeb339153411-2v7kb]: "my-hostname-basic-2c68581a-0779-4e1d-9f5f-aeb339153411-2v7kb", 1 of 1 required successes so far
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:12.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replication-controller-7512" for this suite.
+
+• [SLOW TEST:10.058 seconds]
+[sig-apps] ReplicationController
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":277,"completed":147,"skipped":2562,"failed":0}
+S
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:12.715: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test emptydir 0644 on tmpfs
+Jan 10 17:50:12.742: INFO: Waiting up to 5m0s for pod "pod-214f5e91-c5ad-4fff-a9cf-ba4158e09a91" in namespace "emptydir-7588" to be "Succeeded or Failed"
+Jan 10 17:50:12.744: INFO: Pod "pod-214f5e91-c5ad-4fff-a9cf-ba4158e09a91": Phase="Pending", Reason="", readiness=false. Elapsed: 1.583572ms
+Jan 10 17:50:14.746: INFO: Pod "pod-214f5e91-c5ad-4fff-a9cf-ba4158e09a91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.003797542s
+STEP: Saw pod success
+Jan 10 17:50:14.746: INFO: Pod "pod-214f5e91-c5ad-4fff-a9cf-ba4158e09a91" satisfied condition "Succeeded or Failed"
+Jan 10 17:50:14.748: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-214f5e91-c5ad-4fff-a9cf-ba4158e09a91 container test-container: 
+STEP: delete the pod
+Jan 10 17:50:14.763: INFO: Waiting for pod pod-214f5e91-c5ad-4fff-a9cf-ba4158e09a91 to disappear
+Jan 10 17:50:14.765: INFO: Pod pod-214f5e91-c5ad-4fff-a9cf-ba4158e09a91 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:14.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-7588" for this suite.
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":148,"skipped":2563,"failed":0}
+SSSSSSS
+------------------------------
+[sig-apps] ReplicationController 
+  should release no longer matching pods [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:14.771: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename replication-controller
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should release no longer matching pods [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Given a ReplicationController is created
+STEP: When the matched label of one of its pods change
+Jan 10 17:50:14.797: INFO: Pod name pod-release: Found 0 pods out of 1
+Jan 10 17:50:19.800: INFO: Pod name pod-release: Found 1 pods out of 1
+STEP: Then the pod is released
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:20.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replication-controller-466" for this suite.
+
+• [SLOW TEST:6.047 seconds]
+[sig-apps] ReplicationController
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should release no longer matching pods [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":277,"completed":149,"skipped":2570,"failed":0}
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:20.818: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap with name projected-configmap-test-volume-14fd63a1-c3e4-4687-9e1e-bfb682bf5466
+STEP: Creating a pod to test consume configMaps
+Jan 10 17:50:20.846: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f183a81-b631-4459-9188-8ceb20b0d359" in namespace "projected-6343" to be "Succeeded or Failed"
+Jan 10 17:50:20.848: INFO: Pod "pod-projected-configmaps-2f183a81-b631-4459-9188-8ceb20b0d359": Phase="Pending", Reason="", readiness=false. Elapsed: 1.656965ms
+Jan 10 17:50:22.850: INFO: Pod "pod-projected-configmaps-2f183a81-b631-4459-9188-8ceb20b0d359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.003919292s
+STEP: Saw pod success
+Jan 10 17:50:22.850: INFO: Pod "pod-projected-configmaps-2f183a81-b631-4459-9188-8ceb20b0d359" satisfied condition "Succeeded or Failed"
+Jan 10 17:50:22.851: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-configmaps-2f183a81-b631-4459-9188-8ceb20b0d359 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jan 10 17:50:22.866: INFO: Waiting for pod pod-projected-configmaps-2f183a81-b631-4459-9188-8ceb20b0d359 to disappear
+Jan 10 17:50:22.868: INFO: Pod pod-projected-configmaps-2f183a81-b631-4459-9188-8ceb20b0d359 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:22.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-6343" for this suite.
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":150,"skipped":2582,"failed":0}
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and capture the life of a replica set. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:22.874: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+STEP: Creating a ReplicaSet
+STEP: Ensuring resource quota status captures replicaset creation
+STEP: Deleting a ReplicaSet
+STEP: Ensuring resource quota status released usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:33.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-5678" for this suite.
+
+• [SLOW TEST:11.058 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and capture the life of a replica set. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":277,"completed":151,"skipped":2598,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl cluster-info 
+  should check if Kubernetes master services is included in cluster-info  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:33.932: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
+[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: validating cluster-info
+Jan 10 17:50:33.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 cluster-info'
+Jan 10 17:50:34.024: INFO: stderr: ""
+Jan 10 17:50:34.024: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://100.64.0.1:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://100.64.0.1:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:34.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-9417" for this suite.
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":277,"completed":152,"skipped":2624,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:34.033: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
+[It] should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:50:34.059: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17a42f6d-e74d-449c-8a6b-fffba3183fde" in namespace "downward-api-6283" to be "Succeeded or Failed"
+Jan 10 17:50:34.062: INFO: Pod "downwardapi-volume-17a42f6d-e74d-449c-8a6b-fffba3183fde": Phase="Pending", Reason="", readiness=false. Elapsed: 3.63791ms
+Jan 10 17:50:36.065: INFO: Pod "downwardapi-volume-17a42f6d-e74d-449c-8a6b-fffba3183fde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006221967s
+STEP: Saw pod success
+Jan 10 17:50:36.065: INFO: Pod "downwardapi-volume-17a42f6d-e74d-449c-8a6b-fffba3183fde" satisfied condition "Succeeded or Failed"
+Jan 10 17:50:36.067: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-17a42f6d-e74d-449c-8a6b-fffba3183fde container client-container: 
+STEP: delete the pod
+Jan 10 17:50:36.082: INFO: Waiting for pod downwardapi-volume-17a42f6d-e74d-449c-8a6b-fffba3183fde to disappear
+Jan 10 17:50:36.084: INFO: Pod downwardapi-volume-17a42f6d-e74d-449c-8a6b-fffba3183fde no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:36.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-6283" for this suite.
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":277,"completed":153,"skipped":2710,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:36.090: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for the rc to be deleted
+STEP: Gathering metrics
+W0110 17:50:42.133292      24 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+Jan 10 17:50:42.133: INFO: For apiserver_request_total:
+For apiserver_request_latency_seconds:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:42.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-4972" for this suite.
+
+• [SLOW TEST:6.049 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":277,"completed":154,"skipped":2739,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:42.140: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating secret with name secret-test-map-13543e2a-9be2-4aed-8f70-eea990bb2fe7
+STEP: Creating a pod to test consume secrets
+Jan 10 17:50:42.174: INFO: Waiting up to 5m0s for pod "pod-secrets-b3eaa8d9-4f7d-406e-b07f-9a0f8929635a" in namespace "secrets-3086" to be "Succeeded or Failed"
+Jan 10 17:50:42.176: INFO: Pod "pod-secrets-b3eaa8d9-4f7d-406e-b07f-9a0f8929635a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.677855ms
+Jan 10 17:50:44.178: INFO: Pod "pod-secrets-b3eaa8d9-4f7d-406e-b07f-9a0f8929635a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00378677s
+STEP: Saw pod success
+Jan 10 17:50:44.178: INFO: Pod "pod-secrets-b3eaa8d9-4f7d-406e-b07f-9a0f8929635a" satisfied condition "Succeeded or Failed"
+Jan 10 17:50:44.180: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-secrets-b3eaa8d9-4f7d-406e-b07f-9a0f8929635a container secret-volume-test: 
+STEP: delete the pod
+Jan 10 17:50:44.195: INFO: Waiting for pod pod-secrets-b3eaa8d9-4f7d-406e-b07f-9a0f8929635a to disappear
+Jan 10 17:50:44.196: INFO: Pod pod-secrets-b3eaa8d9-4f7d-406e-b07f-9a0f8929635a no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:44.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-3086" for this suite.
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":277,"completed":155,"skipped":2764,"failed":0}
+S
+------------------------------
+[sig-storage] Projected combined 
+  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected combined
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:44.202: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap with name configmap-projected-all-test-volume-52fef0ad-eb79-45b0-9e7d-bab4117f4347
+STEP: Creating secret with name secret-projected-all-test-volume-56f022d7-47e2-48ae-82e3-2c39f0beb88e
+STEP: Creating a pod to test Check all projections for projected volume plugin
+Jan 10 17:50:44.250: INFO: Waiting up to 5m0s for pod "projected-volume-8ca31aa8-4020-4381-be70-b0ba8970af05" in namespace "projected-2533" to be "Succeeded or Failed"
+Jan 10 17:50:44.252: INFO: Pod "projected-volume-8ca31aa8-4020-4381-be70-b0ba8970af05": Phase="Pending", Reason="", readiness=false. Elapsed: 1.714127ms
+Jan 10 17:50:46.254: INFO: Pod "projected-volume-8ca31aa8-4020-4381-be70-b0ba8970af05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.003993111s
+STEP: Saw pod success
+Jan 10 17:50:46.254: INFO: Pod "projected-volume-8ca31aa8-4020-4381-be70-b0ba8970af05" satisfied condition "Succeeded or Failed"
+Jan 10 17:50:46.256: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod projected-volume-8ca31aa8-4020-4381-be70-b0ba8970af05 container projected-all-volume-test: 
+STEP: delete the pod
+Jan 10 17:50:46.270: INFO: Waiting for pod projected-volume-8ca31aa8-4020-4381-be70-b0ba8970af05 to disappear
+Jan 10 17:50:46.272: INFO: Pod projected-volume-8ca31aa8-4020-4381-be70-b0ba8970af05 no longer exists
+[AfterEach] [sig-storage] Projected combined
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:46.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-2533" for this suite.
+•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":277,"completed":156,"skipped":2765,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should not be blocked by dependency circle [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:46.282: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not be blocked by dependency circle [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:50:46.322: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"db82b1e4-b176-4941-8c9c-7c87dfecd813", Controller:(*bool)(0xc002424c3a), BlockOwnerDeletion:(*bool)(0xc002424c3b)}}
+Jan 10 17:50:46.326: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f824c4d5-c152-4d27-a530-6c20de254689", Controller:(*bool)(0xc0033e8746), BlockOwnerDeletion:(*bool)(0xc0033e8747)}}
+Jan 10 17:50:46.330: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c1f9cbd9-0017-411d-b1bd-8274f0767668", Controller:(*bool)(0xc001d91886), BlockOwnerDeletion:(*bool)(0xc001d91887)}}
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:50:51.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-8003" for this suite.
+
+• [SLOW TEST:5.062 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should not be blocked by dependency circle [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":277,"completed":157,"skipped":2794,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
+  should be able to convert a non homogeneous list of CRs [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:50:51.345: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename crd-webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
+STEP: Setting up server cert
+STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
+STEP: Deploying the custom resource conversion webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:50:51.681: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created
+Jan 10 17:50:53.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897851, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897851, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897851, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897851, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:50:56.698: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
+[It] should be able to convert a non homogeneous list of CRs [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:50:56.701: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Creating a v1 custom resource
+STEP: Create a v2 custom resource
+STEP: List CRs in v1
+STEP: List CRs in v2
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:02.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-webhook-3109" for this suite.
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
+
+• [SLOW TEST:11.619 seconds]
+[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to convert a non homogeneous list of CRs [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":277,"completed":158,"skipped":2858,"failed":0}
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should mutate configmap [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:02.964: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:51:03.387: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:51:06.402: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should mutate configmap [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
+STEP: create a configmap that should be updated by the webhook
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:06.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-9031" for this suite.
+STEP: Destroying namespace "webhook-9031-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":277,"completed":159,"skipped":2873,"failed":0}
+
+------------------------------
+[k8s.io] Security Context When creating a container with runAsUser 
+  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Security Context
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:06.476: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename security-context-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Security Context
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
+[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:51:06.510: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a87dc50a-1051-47ef-b915-30a2e358eaae" in namespace "security-context-test-9652" to be "Succeeded or Failed"
+Jan 10 17:51:06.512: INFO: Pod "busybox-user-65534-a87dc50a-1051-47ef-b915-30a2e358eaae": Phase="Pending", Reason="", readiness=false. Elapsed: 1.956825ms
+Jan 10 17:51:08.515: INFO: Pod "busybox-user-65534-a87dc50a-1051-47ef-b915-30a2e358eaae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004350519s
+Jan 10 17:51:08.515: INFO: Pod "busybox-user-65534-a87dc50a-1051-47ef-b915-30a2e358eaae" satisfied condition "Succeeded or Failed"
+[AfterEach] [k8s.io] Security Context
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:08.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "security-context-test-9652" for this suite.
+•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":160,"skipped":2873,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
+  should include custom resource definition resources in discovery documents [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:08.522: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename custom-resource-definition
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should include custom resource definition resources in discovery documents [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: fetching the /apis discovery document
+STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
+STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
+STEP: fetching the /apis/apiextensions.k8s.io discovery document
+STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
+STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
+STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
+[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:08.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "custom-resource-definition-7945" for this suite.
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":277,"completed":161,"skipped":2904,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Proxy server 
+  should support --unix-socket=/path  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:08.552: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
+[It] should support --unix-socket=/path  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Starting the proxy
+Jan 10 17:51:08.579: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-870154433 proxy --unix-socket=/tmp/kubectl-proxy-unix059427256/test'
+STEP: retrieving proxy /api/ output
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:08.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-8407" for this suite.
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":277,"completed":162,"skipped":2966,"failed":0}
+SSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should update labels on modification [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:08.647: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
+[It] should update labels on modification [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating the pod
+Jan 10 17:51:11.196: INFO: Successfully updated pod "labelsupdatede2cf580-646e-4bb6-a081-9b81b1b16cb8"
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:15.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-7221" for this suite.
+
+• [SLOW TEST:6.576 seconds]
+[sig-storage] Downward API volume
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
+  should update labels on modification [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":277,"completed":163,"skipped":2974,"failed":0}
+SSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:15.223: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
+STEP: Gathering metrics
+Jan 10 17:51:15.779: INFO: For apiserver_request_total:
+For apiserver_request_latency_seconds:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+W0110 17:51:15.779814      24 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:15.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-1404" for this suite.
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":277,"completed":164,"skipped":2978,"failed":0}
+SSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute prestop http hook properly [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:15.786: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute prestop http hook properly [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: create the pod with lifecycle hook
+STEP: delete the pod with lifecycle hook
+Jan 10 17:51:19.836: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jan 10 17:51:19.839: INFO: Pod pod-with-prestop-http-hook still exists
+Jan 10 17:51:21.839: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jan 10 17:51:21.842: INFO: Pod pod-with-prestop-http-hook still exists
+Jan 10 17:51:23.839: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jan 10 17:51:23.842: INFO: Pod pod-with-prestop-http-hook still exists
+Jan 10 17:51:25.839: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jan 10 17:51:25.841: INFO: Pod pod-with-prestop-http-hook still exists
+Jan 10 17:51:27.839: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jan 10 17:51:27.842: INFO: Pod pod-with-prestop-http-hook still exists
+Jan 10 17:51:29.839: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jan 10 17:51:29.841: INFO: Pod pod-with-prestop-http-hook still exists
+Jan 10 17:51:31.839: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jan 10 17:51:31.841: INFO: Pod pod-with-prestop-http-hook still exists
+Jan 10 17:51:33.839: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jan 10 17:51:33.841: INFO: Pod pod-with-prestop-http-hook still exists
+Jan 10 17:51:35.839: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jan 10 17:51:35.844: INFO: Pod pod-with-prestop-http-hook no longer exists
+STEP: check prestop hook
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:35.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-3309" for this suite.
+
+• [SLOW TEST:20.074 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  when create a pod with lifecycle hook
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
+    should execute prestop http hook properly [NodeConformance] [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":277,"completed":165,"skipped":2982,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:35.860: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap with name configmap-test-volume-7dcb7807-9f14-46ca-a5dc-4f4930b4406d
+STEP: Creating a pod to test consume configMaps
+Jan 10 17:51:35.892: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f40e1f2-dc97-45ec-9274-ecd0e41ad587" in namespace "configmap-765" to be "Succeeded or Failed"
+Jan 10 17:51:35.894: INFO: Pod "pod-configmaps-6f40e1f2-dc97-45ec-9274-ecd0e41ad587": Phase="Pending", Reason="", readiness=false. Elapsed: 1.612544ms
+Jan 10 17:51:37.896: INFO: Pod "pod-configmaps-6f40e1f2-dc97-45ec-9274-ecd0e41ad587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.003701545s
+STEP: Saw pod success
+Jan 10 17:51:37.896: INFO: Pod "pod-configmaps-6f40e1f2-dc97-45ec-9274-ecd0e41ad587" satisfied condition "Succeeded or Failed"
+Jan 10 17:51:37.898: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-configmaps-6f40e1f2-dc97-45ec-9274-ecd0e41ad587 container configmap-volume-test: 
+STEP: delete the pod
+Jan 10 17:51:37.911: INFO: Waiting for pod pod-configmaps-6f40e1f2-dc97-45ec-9274-ecd0e41ad587 to disappear
+Jan 10 17:51:37.913: INFO: Pod pod-configmaps-6f40e1f2-dc97-45ec-9274-ecd0e41ad587 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:37.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-765" for this suite.
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":277,"completed":166,"skipped":3022,"failed":0}
+S
+------------------------------
+[sig-storage] Secrets 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:37.919: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating secret with name s-test-opt-del-6bb7fe68-4beb-46fb-a6d8-e8f61d8c1b38
+STEP: Creating secret with name s-test-opt-upd-939464cd-9558-4c9a-a8ef-59241c3f3c7e
+STEP: Creating the pod
+STEP: Deleting secret s-test-opt-del-6bb7fe68-4beb-46fb-a6d8-e8f61d8c1b38
+STEP: Updating secret s-test-opt-upd-939464cd-9558-4c9a-a8ef-59241c3f3c7e
+STEP: Creating secret with name s-test-opt-create-58a7e7e8-e90e-46f0-a9d9-7bc0c28220a1
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:42.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-7623" for this suite.
+•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":167,"skipped":3023,"failed":0}
+SSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:42.020: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
+[It] should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:51:42.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77849576-93ae-4e62-8e4c-dae59d3110c0" in namespace "projected-4413" to be "Succeeded or Failed"
+Jan 10 17:51:42.050: INFO: Pod "downwardapi-volume-77849576-93ae-4e62-8e4c-dae59d3110c0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.690287ms
+Jan 10 17:51:44.052: INFO: Pod "downwardapi-volume-77849576-93ae-4e62-8e4c-dae59d3110c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004331879s
+STEP: Saw pod success
+Jan 10 17:51:44.052: INFO: Pod "downwardapi-volume-77849576-93ae-4e62-8e4c-dae59d3110c0" satisfied condition "Succeeded or Failed"
+Jan 10 17:51:44.054: INFO: Trying to get logs from node ip-172-20-52-46.ap-south-1.compute.internal pod downwardapi-volume-77849576-93ae-4e62-8e4c-dae59d3110c0 container client-container: 
+STEP: delete the pod
+Jan 10 17:51:44.076: INFO: Waiting for pod downwardapi-volume-77849576-93ae-4e62-8e4c-dae59d3110c0 to disappear
+Jan 10 17:51:44.078: INFO: Pod downwardapi-volume-77849576-93ae-4e62-8e4c-dae59d3110c0 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:44.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-4413" for this suite.
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":277,"completed":168,"skipped":3040,"failed":0}
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:44.088: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
+[It] should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:51:44.115: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3df1b84-1a83-4396-8b84-ed750dd26bc7" in namespace "downward-api-2740" to be "Succeeded or Failed"
+Jan 10 17:51:44.120: INFO: Pod "downwardapi-volume-b3df1b84-1a83-4396-8b84-ed750dd26bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.108641ms
+Jan 10 17:51:46.123: INFO: Pod "downwardapi-volume-b3df1b84-1a83-4396-8b84-ed750dd26bc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007567235s
+STEP: Saw pod success
+Jan 10 17:51:46.123: INFO: Pod "downwardapi-volume-b3df1b84-1a83-4396-8b84-ed750dd26bc7" satisfied condition "Succeeded or Failed"
+Jan 10 17:51:46.125: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-b3df1b84-1a83-4396-8b84-ed750dd26bc7 container client-container: 
+STEP: delete the pod
+Jan 10 17:51:46.139: INFO: Waiting for pod downwardapi-volume-b3df1b84-1a83-4396-8b84-ed750dd26bc7 to disappear
+Jan 10 17:51:46.141: INFO: Pod downwardapi-volume-b3df1b84-1a83-4396-8b84-ed750dd26bc7 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:46.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-2740" for this suite.
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":277,"completed":169,"skipped":3056,"failed":0}
+SSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:46.148: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating secret with name secret-test-48651aa5-02c4-413f-a380-cd0b6469bdca
+STEP: Creating a pod to test consume secrets
+Jan 10 17:51:46.179: INFO: Waiting up to 5m0s for pod "pod-secrets-60c3c9e4-88f5-44ec-88c4-8e1c657ab306" in namespace "secrets-2729" to be "Succeeded or Failed"
+Jan 10 17:51:46.183: INFO: Pod "pod-secrets-60c3c9e4-88f5-44ec-88c4-8e1c657ab306": Phase="Pending", Reason="", readiness=false. Elapsed: 3.673503ms
+Jan 10 17:51:48.186: INFO: Pod "pod-secrets-60c3c9e4-88f5-44ec-88c4-8e1c657ab306": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006896142s
+STEP: Saw pod success
+Jan 10 17:51:48.186: INFO: Pod "pod-secrets-60c3c9e4-88f5-44ec-88c4-8e1c657ab306" satisfied condition "Succeeded or Failed"
+Jan 10 17:51:48.188: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-secrets-60c3c9e4-88f5-44ec-88c4-8e1c657ab306 container secret-volume-test: 
+STEP: delete the pod
+Jan 10 17:51:48.203: INFO: Waiting for pod pod-secrets-60c3c9e4-88f5-44ec-88c4-8e1c657ab306 to disappear
+Jan 10 17:51:48.205: INFO: Pod pod-secrets-60c3c9e4-88f5-44ec-88c4-8e1c657ab306 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:48.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-2729" for this suite.
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":170,"skipped":3063,"failed":0}
+
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:48.212: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for all rs to be garbage collected
+STEP: expected 0 rs, got 1 rs
+STEP: expected 0 pods, got 2 pods
+STEP: Gathering metrics
+Jan 10 17:51:49.259: INFO: For apiserver_request_total:
+For apiserver_request_latency_seconds:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+W0110 17:51:49.259938      24 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:49.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-7383" for this suite.
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":277,"completed":171,"skipped":3063,"failed":0}
+SSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Runtime blackbox test on terminated container 
+  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:49.266: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-runtime
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: create the container
+STEP: wait for the container to reach Succeeded
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Jan 10 17:51:51.301: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
+STEP: delete the container
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:51.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-3885" for this suite.
+•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":277,"completed":172,"skipped":3076,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl logs 
+  should be able to retrieve and filter logs  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:51.321: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
+[BeforeEach] Kubectl logs
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
+STEP: creating a pod
+Jan 10 17:51:51.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-5701 -- logs-generator --log-lines-total 100 --run-duration 20s'
+Jan 10 17:51:51.425: INFO: stderr: ""
+Jan 10 17:51:51.425: INFO: stdout: "pod/logs-generator created\n"
+[It] should be able to retrieve and filter logs  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Waiting for log generator to start.
+Jan 10 17:51:51.425: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
+Jan 10 17:51:51.425: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5701" to be "running and ready, or succeeded"
+Jan 10 17:51:51.430: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.678187ms
+Jan 10 17:51:53.432: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.007014092s
+Jan 10 17:51:53.432: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
+Jan 10 17:51:53.432: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
+STEP: checking for matching strings
+Jan 10 17:51:53.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 logs logs-generator logs-generator --namespace=kubectl-5701'
+Jan 10 17:51:53.516: INFO: stderr: ""
+Jan 10 17:51:53.516: INFO: stdout: "I0110 17:51:52.201434       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/4fp 275\nI0110 17:51:52.401520       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/cdp9 254\nI0110 17:51:52.601570       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/8hp 421\nI0110 17:51:52.801560       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/g7dv 524\nI0110 17:51:53.001595       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/gplt 239\nI0110 17:51:53.201580       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/9fp 479\nI0110 17:51:53.401586       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/x5t 268\n"
+STEP: limiting log lines
+Jan 10 17:51:53.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 logs logs-generator logs-generator --namespace=kubectl-5701 --tail=1'
+Jan 10 17:51:53.610: INFO: stderr: ""
+Jan 10 17:51:53.610: INFO: stdout: "I0110 17:51:53.401586       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/x5t 268\n"
+Jan 10 17:51:53.610: INFO: got output "I0110 17:51:53.401586       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/x5t 268\n"
+STEP: limiting log bytes
+Jan 10 17:51:53.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 logs logs-generator logs-generator --namespace=kubectl-5701 --limit-bytes=1'
+Jan 10 17:51:53.691: INFO: stderr: ""
+Jan 10 17:51:53.691: INFO: stdout: "I"
+Jan 10 17:51:53.691: INFO: got output "I"
+STEP: exposing timestamps
+Jan 10 17:51:53.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 logs logs-generator logs-generator --namespace=kubectl-5701 --tail=1 --timestamps'
+Jan 10 17:51:53.773: INFO: stderr: ""
+Jan 10 17:51:53.773: INFO: stdout: "2021-01-10T17:51:53.601687756Z I0110 17:51:53.601537       1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/l7c 462\n"
+Jan 10 17:51:53.773: INFO: got output "2021-01-10T17:51:53.601687756Z I0110 17:51:53.601537       1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/l7c 462\n"
+STEP: restricting to a time range
+Jan 10 17:51:56.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 logs logs-generator logs-generator --namespace=kubectl-5701 --since=1s'
+Jan 10 17:51:56.356: INFO: stderr: ""
+Jan 10 17:51:56.356: INFO: stdout: "I0110 17:51:55.401591       1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/jz2 418\nI0110 17:51:55.601556       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/hgcj 450\nI0110 17:51:55.801585       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/tqn 435\nI0110 17:51:56.001563       1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/47g 323\nI0110 17:51:56.201569       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/tzx 290\n"
+Jan 10 17:51:56.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 logs logs-generator logs-generator --namespace=kubectl-5701 --since=24h'
+Jan 10 17:51:56.446: INFO: stderr: ""
+Jan 10 17:51:56.446: INFO: stdout: "I0110 17:51:52.201434       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/4fp 275\nI0110 17:51:52.401520       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/cdp9 254\nI0110 17:51:52.601570       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/8hp 421\nI0110 17:51:52.801560       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/g7dv 524\nI0110 17:51:53.001595       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/gplt 239\nI0110 17:51:53.201580       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/9fp 479\nI0110 17:51:53.401586       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/x5t 268\nI0110 17:51:53.601537       1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/l7c 462\nI0110 17:51:53.801551       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/dd7 485\nI0110 17:51:54.001566       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/sm5s 215\nI0110 17:51:54.201567       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/qmf 504\nI0110 17:51:54.401587       1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/q2c 598\nI0110 17:51:54.601585       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/6mbb 516\nI0110 17:51:54.801583       1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/tdq 385\nI0110 17:51:55.001578       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/9b7 587\nI0110 17:51:55.201578       1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/kcc 209\nI0110 17:51:55.401591       1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/jz2 418\nI0110 17:51:55.601556       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/hgcj 450\nI0110 17:51:55.801585       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/tqn 435\nI0110 17:51:56.001563       1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/47g 323\nI0110 17:51:56.201569       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/tzx 290\nI0110 17:51:56.401564       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/kjk8 369\n"
+[AfterEach] Kubectl logs
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
+Jan 10 17:51:56.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete pod logs-generator --namespace=kubectl-5701'
+Jan 10 17:51:58.306: INFO: stderr: ""
+Jan 10 17:51:58.306: INFO: stdout: "pod \"logs-generator\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:51:58.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-5701" for this suite.
+
+• [SLOW TEST:6.994 seconds]
+[sig-cli] Kubectl client
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl logs
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
+    should be able to retrieve and filter logs  [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":277,"completed":173,"skipped":3107,"failed":0}
+SSSSSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should run and stop complex daemon [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:51:58.315: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
+[It] should run and stop complex daemon [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:51:58.347: INFO: Creating daemon "daemon-set" with a node selector
+STEP: Initially, daemon pods should not be running on any nodes.
+Jan 10 17:51:58.353: INFO: Number of nodes with available pods: 0
+Jan 10 17:51:58.353: INFO: Number of running nodes: 0, number of available pods: 0
+STEP: Change node label to blue, check that daemon pod is launched.
+Jan 10 17:51:58.366: INFO: Number of nodes with available pods: 0
+Jan 10 17:51:58.366: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:51:59.369: INFO: Number of nodes with available pods: 1
+Jan 10 17:51:59.369: INFO: Number of running nodes: 1, number of available pods: 1
+STEP: Update the node label to green, and wait for daemons to be unscheduled
+Jan 10 17:51:59.382: INFO: Number of nodes with available pods: 1
+Jan 10 17:51:59.382: INFO: Number of running nodes: 0, number of available pods: 1
+Jan 10 17:52:00.385: INFO: Number of nodes with available pods: 0
+Jan 10 17:52:00.385: INFO: Number of running nodes: 0, number of available pods: 0
+STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
+Jan 10 17:52:00.394: INFO: Number of nodes with available pods: 0
+Jan 10 17:52:00.394: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:52:01.397: INFO: Number of nodes with available pods: 0
+Jan 10 17:52:01.397: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:52:02.397: INFO: Number of nodes with available pods: 0
+Jan 10 17:52:02.397: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:52:03.396: INFO: Number of nodes with available pods: 0
+Jan 10 17:52:03.396: INFO: Node ip-172-20-39-143.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:52:04.397: INFO: Number of nodes with available pods: 1
+Jan 10 17:52:04.397: INFO: Number of running nodes: 1, number of available pods: 1
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1573, will wait for the garbage collector to delete the pods
+Jan 10 17:52:04.457: INFO: Deleting DaemonSet.extensions daemon-set took: 4.677937ms
+Jan 10 17:52:04.557: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.155655ms
+Jan 10 17:52:19.360: INFO: Number of nodes with available pods: 0
+Jan 10 17:52:19.360: INFO: Number of running nodes: 0, number of available pods: 0
+Jan 10 17:52:19.361: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1573/daemonsets","resourceVersion":"21593"},"items":null}
+
+Jan 10 17:52:19.363: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1573/pods","resourceVersion":"21593"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:52:19.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-1573" for this suite.
+
+• [SLOW TEST:21.072 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should run and stop complex daemon [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":277,"completed":174,"skipped":3120,"failed":0}
+SSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:52:19.388: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating projection with secret that has name projected-secret-test-map-bb2f32fe-9e54-4c47-a682-a916057d0125
+STEP: Creating a pod to test consume secrets
+Jan 10 17:52:19.417: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-756122f3-31bd-45a4-ba6b-cecb58bb802a" in namespace "projected-1135" to be "Succeeded or Failed"
+Jan 10 17:52:19.421: INFO: Pod "pod-projected-secrets-756122f3-31bd-45a4-ba6b-cecb58bb802a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.626843ms
+Jan 10 17:52:21.424: INFO: Pod "pod-projected-secrets-756122f3-31bd-45a4-ba6b-cecb58bb802a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006347143s
+STEP: Saw pod success
+Jan 10 17:52:21.424: INFO: Pod "pod-projected-secrets-756122f3-31bd-45a4-ba6b-cecb58bb802a" satisfied condition "Succeeded or Failed"
+Jan 10 17:52:21.426: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-secrets-756122f3-31bd-45a4-ba6b-cecb58bb802a container projected-secret-volume-test: 
+STEP: delete the pod
+Jan 10 17:52:21.440: INFO: Waiting for pod pod-projected-secrets-756122f3-31bd-45a4-ba6b-cecb58bb802a to disappear
+Jan 10 17:52:21.442: INFO: Pod pod-projected-secrets-756122f3-31bd-45a4-ba6b-cecb58bb802a no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:52:21.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-1135" for this suite.
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":175,"skipped":3123,"failed":0}
+SS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for the cluster  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] DNS
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:52:21.448: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for the cluster  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7761.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7761.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jan 10 17:52:25.497: INFO: DNS probes using dns-7761/dns-test-ec6c5ce5-24bc-4c97-bce7-53c2505eb052 succeeded
+
+STEP: deleting the pod
+[AfterEach] [sig-network] DNS
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:52:25.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-7761" for this suite.
+•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":277,"completed":176,"skipped":3125,"failed":0}
+SSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:52:25.513: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
+Jan 10 17:52:25.537: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jan 10 17:52:25.545: INFO: Waiting for terminating namespaces to be deleted...
+Jan 10 17:52:25.546: INFO: 
+Logging pods the kubelet thinks are on node ip-172-20-33-172.ap-south-1.compute.internal before test
+Jan 10 17:52:25.552: INFO: kube-proxy-ip-172-20-33-172.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:44 +0000 UTC (1 container statuses recorded)
+Jan 10 17:52:25.552: INFO: 	Container kube-proxy ready: true, restart count 0
+Jan 10 17:52:25.552: INFO: calico-node-vgdrq from kube-system started at 2021-01-10 16:58:19 +0000 UTC (1 container statuses recorded)
+Jan 10 17:52:25.552: INFO: 	Container calico-node ready: true, restart count 0
+Jan 10 17:52:25.552: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-tfj4x from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:52:25.552: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:52:25.552: INFO: 	Container systemd-logs ready: true, restart count 0
+Jan 10 17:52:25.552: INFO: 
+Logging pods the kubelet thinks are on node ip-172-20-39-143.ap-south-1.compute.internal before test
+Jan 10 17:52:25.563: INFO: calico-node-ldj9k from kube-system started at 2021-01-10 16:58:16 +0000 UTC (1 container statuses recorded)
+Jan 10 17:52:25.563: INFO: 	Container calico-node ready: true, restart count 0
+Jan 10 17:52:25.563: INFO: sonobuoy-e2e-job-5c46f38a56914321 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:52:25.563: INFO: 	Container e2e ready: true, restart count 0
+Jan 10 17:52:25.563: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:52:25.563: INFO: kube-dns-64f86fb8dd-ngh4q from kube-system started at 2021-01-10 17:12:23 +0000 UTC (3 container statuses recorded)
+Jan 10 17:52:25.563: INFO: 	Container dnsmasq ready: true, restart count 0
+Jan 10 17:52:25.563: INFO: 	Container kubedns ready: true, restart count 0
+Jan 10 17:52:25.563: INFO: 	Container sidecar ready: true, restart count 0
+Jan 10 17:52:25.563: INFO: kube-proxy-ip-172-20-39-143.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:29 +0000 UTC (1 container statuses recorded)
+Jan 10 17:52:25.563: INFO: 	Container kube-proxy ready: true, restart count 0
+Jan 10 17:52:25.563: INFO: sonobuoy from sonobuoy started at 2021-01-10 17:08:58 +0000 UTC (1 container statuses recorded)
+Jan 10 17:52:25.563: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Jan 10 17:52:25.563: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-zrwk8 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:52:25.563: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:52:25.563: INFO: 	Container systemd-logs ready: true, restart count 0
+Jan 10 17:52:25.563: INFO: 
+Logging pods the kubelet thinks are on node ip-172-20-52-46.ap-south-1.compute.internal before test
+Jan 10 17:52:25.568: INFO: calico-node-nrg4h from kube-system started at 2021-01-10 16:58:13 +0000 UTC (1 container statuses recorded)
+Jan 10 17:52:25.568: INFO: 	Container calico-node ready: true, restart count 0
+Jan 10 17:52:25.568: INFO: kube-dns-autoscaler-cd7778b7b-c8mf6 from kube-system started at 2021-01-10 16:58:37 +0000 UTC (1 container statuses recorded)
+Jan 10 17:52:25.568: INFO: 	Container autoscaler ready: true, restart count 0
+Jan 10 17:52:25.568: INFO: kube-proxy-ip-172-20-52-46.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:48 +0000 UTC (1 container statuses recorded)
+Jan 10 17:52:25.568: INFO: 	Container kube-proxy ready: true, restart count 0
+Jan 10 17:52:25.568: INFO: kube-dns-64f86fb8dd-gdkpz from kube-system started at 2021-01-10 16:58:37 +0000 UTC (3 container statuses recorded)
+Jan 10 17:52:25.568: INFO: 	Container dnsmasq ready: true, restart count 0
+Jan 10 17:52:25.568: INFO: 	Container kubedns ready: true, restart count 0
+Jan 10 17:52:25.569: INFO: 	Container sidecar ready: true, restart count 0
+Jan 10 17:52:25.569: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-sk6xf from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:52:25.569: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:52:25.569: INFO: 	Container systemd-logs ready: true, restart count 0
+[It] validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Trying to schedule Pod with nonempty NodeSelector.
+STEP: Considering event: 
+Type = [Warning], Name = [restricted-pod.1658f05c3263e9f1], Reason = [FailedScheduling], Message = [0/6 nodes are available: 6 node(s) didn't match node selector.]
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:52:26.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-5292" for this suite.
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
+•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":277,"completed":177,"skipped":3132,"failed":0}
+SSS
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:52:26.604: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
+[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating the pod
+Jan 10 17:52:26.626: INFO: PodSpec: initContainers in spec.initContainers
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:52:29.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-1842" for this suite.
+•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":277,"completed":178,"skipped":3135,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  removes definition from spec when one version gets changed to not be served [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:52:29.636: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] removes definition from spec when one version gets changed to not be served [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: set up a multi version CRD
+Jan 10 17:52:29.657: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: mark a version not served
+STEP: check that the unserved version gets removed
+STEP: check the other version is not changed
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:52:53.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-5281" for this suite.
+
+• [SLOW TEST:24.141 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  removes definition from spec when one version gets changed to not be served [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":277,"completed":179,"skipped":3179,"failed":0}
+SSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:52:53.778: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating projection with secret that has name projected-secret-test-map-27396516-3dda-4708-a945-525be914f096
+STEP: Creating a pod to test consume secrets
+Jan 10 17:52:53.808: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4a716893-4ed3-441c-a12d-1920b1e601d7" in namespace "projected-1789" to be "Succeeded or Failed"
+Jan 10 17:52:53.811: INFO: Pod "pod-projected-secrets-4a716893-4ed3-441c-a12d-1920b1e601d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.96558ms
+Jan 10 17:52:55.814: INFO: Pod "pod-projected-secrets-4a716893-4ed3-441c-a12d-1920b1e601d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005346069s
+STEP: Saw pod success
+Jan 10 17:52:55.814: INFO: Pod "pod-projected-secrets-4a716893-4ed3-441c-a12d-1920b1e601d7" satisfied condition "Succeeded or Failed"
+Jan 10 17:52:55.816: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-secrets-4a716893-4ed3-441c-a12d-1920b1e601d7 container projected-secret-volume-test: 
+STEP: delete the pod
+Jan 10 17:52:55.829: INFO: Waiting for pod pod-projected-secrets-4a716893-4ed3-441c-a12d-1920b1e601d7 to disappear
+Jan 10 17:52:55.831: INFO: Pod pod-projected-secrets-4a716893-4ed3-441c-a12d-1920b1e601d7 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:52:55.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-1789" for this suite.
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":277,"completed":180,"skipped":3182,"failed":0}
+SS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:52:55.847: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating projection with secret that has name projected-secret-test-4bf5f146-8fc8-4c50-98b1-a0358121ec64
+STEP: Creating a pod to test consume secrets
+Jan 10 17:52:55.878: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ad2b4ad-59d5-4924-825c-ed89dae7d7cf" in namespace "projected-1136" to be "Succeeded or Failed"
+Jan 10 17:52:55.879: INFO: Pod "pod-projected-secrets-7ad2b4ad-59d5-4924-825c-ed89dae7d7cf": Phase="Pending", Reason="", readiness=false. Elapsed: 1.644636ms
+Jan 10 17:52:57.882: INFO: Pod "pod-projected-secrets-7ad2b4ad-59d5-4924-825c-ed89dae7d7cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004148095s
+STEP: Saw pod success
+Jan 10 17:52:57.882: INFO: Pod "pod-projected-secrets-7ad2b4ad-59d5-4924-825c-ed89dae7d7cf" satisfied condition "Succeeded or Failed"
+Jan 10 17:52:57.884: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-secrets-7ad2b4ad-59d5-4924-825c-ed89dae7d7cf container projected-secret-volume-test: 
+STEP: delete the pod
+Jan 10 17:52:57.897: INFO: Waiting for pod pod-projected-secrets-7ad2b4ad-59d5-4924-825c-ed89dae7d7cf to disappear
+Jan 10 17:52:57.899: INFO: Pod pod-projected-secrets-7ad2b4ad-59d5-4924-825c-ed89dae7d7cf no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:52:57.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-1136" for this suite.
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":181,"skipped":3184,"failed":0}
+SSS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] DNS
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:52:57.905: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a test headless service
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5971.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5971.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5971.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5971.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5971.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5971.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jan 10 17:53:01.967: INFO: DNS probes using dns-5971/dns-test-02d68ad5-6f89-4e05-a560-004c45dc2393 succeeded
+
+STEP: deleting the pod
+STEP: deleting the test headless service
+[AfterEach] [sig-network] DNS
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:53:01.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-5971" for this suite.
+•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":277,"completed":182,"skipped":3187,"failed":0}
+SSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  works for multiple CRDs of different groups [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:53:01.993: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] works for multiple CRDs of different groups [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
+Jan 10 17:53:02.019: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+Jan 10 17:53:10.703: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:53:28.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-6185" for this suite.
+
+• [SLOW TEST:26.719 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  works for multiple CRDs of different groups [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":277,"completed":183,"skipped":3199,"failed":0}
+SSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:53:28.715: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
+[It] should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:53:28.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5503f25a-6a86-4c57-8eb1-a8f4f502ce56" in namespace "downward-api-3428" to be "Succeeded or Failed"
+Jan 10 17:53:28.750: INFO: Pod "downwardapi-volume-5503f25a-6a86-4c57-8eb1-a8f4f502ce56": Phase="Pending", Reason="", readiness=false. Elapsed: 3.747458ms
+Jan 10 17:53:30.752: INFO: Pod "downwardapi-volume-5503f25a-6a86-4c57-8eb1-a8f4f502ce56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00631075s
+STEP: Saw pod success
+Jan 10 17:53:30.753: INFO: Pod "downwardapi-volume-5503f25a-6a86-4c57-8eb1-a8f4f502ce56" satisfied condition "Succeeded or Failed"
+Jan 10 17:53:30.754: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-5503f25a-6a86-4c57-8eb1-a8f4f502ce56 container client-container: 
+STEP: delete the pod
+Jan 10 17:53:30.771: INFO: Waiting for pod downwardapi-volume-5503f25a-6a86-4c57-8eb1-a8f4f502ce56 to disappear
+Jan 10 17:53:30.773: INFO: Pod downwardapi-volume-5503f25a-6a86-4c57-8eb1-a8f4f502ce56 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:53:30.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-3428" for this suite.
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":277,"completed":184,"skipped":3213,"failed":0}
+SSSS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for services  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] DNS
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:53:30.780: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for services  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a test headless service
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7302.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7302.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7302.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7302.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7302.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7302.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7302.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7302.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7302.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7302.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7302.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 205.218.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.218.205_udp@PTR;check="$$(dig +tcp +noall +answer +search 205.218.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.218.205_tcp@PTR;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7302.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7302.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7302.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7302.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7302.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7302.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7302.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7302.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7302.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7302.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7302.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 205.218.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.218.205_udp@PTR;check="$$(dig +tcp +noall +answer +search 205.218.64.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.64.218.205_tcp@PTR;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jan 10 17:53:32.845: INFO: Unable to read wheezy_udp@dns-test-service.dns-7302.svc.cluster.local from pod dns-7302/dns-test-fecaf247-3672-42e8-aba1-76d839993545: the server could not find the requested resource (get pods dns-test-fecaf247-3672-42e8-aba1-76d839993545)
+Jan 10 17:53:32.847: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7302.svc.cluster.local from pod dns-7302/dns-test-fecaf247-3672-42e8-aba1-76d839993545: the server could not find the requested resource (get pods dns-test-fecaf247-3672-42e8-aba1-76d839993545)
+Jan 10 17:53:32.849: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local from pod dns-7302/dns-test-fecaf247-3672-42e8-aba1-76d839993545: the server could not find the requested resource (get pods dns-test-fecaf247-3672-42e8-aba1-76d839993545)
+Jan 10 17:53:32.851: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local from pod dns-7302/dns-test-fecaf247-3672-42e8-aba1-76d839993545: the server could not find the requested resource (get pods dns-test-fecaf247-3672-42e8-aba1-76d839993545)
+Jan 10 17:53:32.867: INFO: Unable to read jessie_udp@dns-test-service.dns-7302.svc.cluster.local from pod dns-7302/dns-test-fecaf247-3672-42e8-aba1-76d839993545: the server could not find the requested resource (get pods dns-test-fecaf247-3672-42e8-aba1-76d839993545)
+Jan 10 17:53:32.869: INFO: Unable to read jessie_tcp@dns-test-service.dns-7302.svc.cluster.local from pod dns-7302/dns-test-fecaf247-3672-42e8-aba1-76d839993545: the server could not find the requested resource (get pods dns-test-fecaf247-3672-42e8-aba1-76d839993545)
+Jan 10 17:53:32.871: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local from pod dns-7302/dns-test-fecaf247-3672-42e8-aba1-76d839993545: the server could not find the requested resource (get pods dns-test-fecaf247-3672-42e8-aba1-76d839993545)
+Jan 10 17:53:32.873: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local from pod dns-7302/dns-test-fecaf247-3672-42e8-aba1-76d839993545: the server could not find the requested resource (get pods dns-test-fecaf247-3672-42e8-aba1-76d839993545)
+Jan 10 17:53:32.885: INFO: Lookups using dns-7302/dns-test-fecaf247-3672-42e8-aba1-76d839993545 failed for: [wheezy_udp@dns-test-service.dns-7302.svc.cluster.local wheezy_tcp@dns-test-service.dns-7302.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local jessie_udp@dns-test-service.dns-7302.svc.cluster.local jessie_tcp@dns-test-service.dns-7302.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7302.svc.cluster.local]
+
+Jan 10 17:53:37.927: INFO: DNS probes using dns-7302/dns-test-fecaf247-3672-42e8-aba1-76d839993545 succeeded
+
+STEP: deleting the pod
+STEP: deleting the test service
+STEP: deleting the test headless service
+[AfterEach] [sig-network] DNS
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:53:37.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-7302" for this suite.
+
+• [SLOW TEST:7.197 seconds]
+[sig-network] DNS
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide DNS for services  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":277,"completed":185,"skipped":3217,"failed":0}
+SSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:53:37.977: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap with name configmap-test-upd-239e0782-9016-42bd-945a-dde9e209584c
+STEP: Creating the pod
+STEP: Waiting for pod with text data
+STEP: Waiting for pod with binary data
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:53:40.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-9276" for this suite.
+•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":186,"skipped":3223,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:53:40.039: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
+[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:53:40.070: INFO: The status of Pod test-webserver-67e4802a-8d03-48eb-b711-6f1c6ebfa27f is Pending, waiting for it to be Running (with Ready = true)
+Jan 10 17:53:42.073: INFO: The status of Pod test-webserver-67e4802a-8d03-48eb-b711-6f1c6ebfa27f is Running (Ready = false)
+Jan 10 17:53:44.073: INFO: The status of Pod test-webserver-67e4802a-8d03-48eb-b711-6f1c6ebfa27f is Running (Ready = false)
+Jan 10 17:53:46.072: INFO: The status of Pod test-webserver-67e4802a-8d03-48eb-b711-6f1c6ebfa27f is Running (Ready = false)
+Jan 10 17:53:48.072: INFO: The status of Pod test-webserver-67e4802a-8d03-48eb-b711-6f1c6ebfa27f is Running (Ready = false)
+Jan 10 17:53:50.073: INFO: The status of Pod test-webserver-67e4802a-8d03-48eb-b711-6f1c6ebfa27f is Running (Ready = false)
+Jan 10 17:53:52.072: INFO: The status of Pod test-webserver-67e4802a-8d03-48eb-b711-6f1c6ebfa27f is Running (Ready = false)
+Jan 10 17:53:54.072: INFO: The status of Pod test-webserver-67e4802a-8d03-48eb-b711-6f1c6ebfa27f is Running (Ready = false)
+Jan 10 17:53:56.073: INFO: The status of Pod test-webserver-67e4802a-8d03-48eb-b711-6f1c6ebfa27f is Running (Ready = false)
+Jan 10 17:53:58.072: INFO: The status of Pod test-webserver-67e4802a-8d03-48eb-b711-6f1c6ebfa27f is Running (Ready = true)
+Jan 10 17:53:58.074: INFO: Container started at 2021-01-10 17:53:40 +0000 UTC, pod became ready at 2021-01-10 17:53:57 +0000 UTC
+[AfterEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:53:58.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-1685" for this suite.
+
+• [SLOW TEST:18.052 seconds]
+[k8s.io] Probing container
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":277,"completed":187,"skipped":3248,"failed":0}
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-auth] ServiceAccounts 
+  should mount an API token into pods  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-auth] ServiceAccounts
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:53:58.091: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename svcaccounts
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should mount an API token into pods  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: getting the auto-created API token
+STEP: reading a file in the container
+Jan 10 17:54:00.652: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8624 pod-service-account-fb90f698-e60e-41b9-ac0e-c83ad38374c1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
+STEP: reading a file in the container
+Jan 10 17:54:00.848: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8624 pod-service-account-fb90f698-e60e-41b9-ac0e-c83ad38374c1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
+STEP: reading a file in the container
+Jan 10 17:54:01.031: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8624 pod-service-account-fb90f698-e60e-41b9-ac0e-c83ad38374c1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
+[AfterEach] [sig-auth] ServiceAccounts
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:54:01.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "svcaccounts-8624" for this suite.
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":277,"completed":188,"skipped":3268,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should include webhook resources in discovery documents [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:54:01.211: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:54:01.574: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:54:04.589: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should include webhook resources in discovery documents [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: fetching the /apis discovery document
+STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
+STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
+STEP: fetching the /apis/admissionregistration.k8s.io discovery document
+STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
+STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
+STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:54:04.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-4976" for this suite.
+STEP: Destroying namespace "webhook-4976-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":277,"completed":189,"skipped":3294,"failed":0}
+SSSSSSS
+------------------------------
+[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
+  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:54:04.636: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods Set QOS Class
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:160
+[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying QOS class is set on the pod
+[AfterEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:54:04.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-6531" for this suite.
+•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":277,"completed":190,"skipped":3301,"failed":0}
+SSSSSS
+------------------------------
+[sig-node] ConfigMap 
+  should fail to create ConfigMap with empty key [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-node] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:54:04.678: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should fail to create ConfigMap with empty key [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap that has name configmap-test-emptyKey-3149cea5-546f-4430-b480-e9873828bbe7
+[AfterEach] [sig-node] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:54:04.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-5155" for this suite.
+•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":277,"completed":191,"skipped":3307,"failed":0}
+S
+------------------------------
+[sig-storage] Downward API volume 
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:54:04.705: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
+[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:54:04.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbe8b7ee-83dd-4001-b7e7-8ded023ff015" in namespace "downward-api-4024" to be "Succeeded or Failed"
+Jan 10 17:54:04.738: INFO: Pod "downwardapi-volume-cbe8b7ee-83dd-4001-b7e7-8ded023ff015": Phase="Pending", Reason="", readiness=false. Elapsed: 3.633075ms
+Jan 10 17:54:06.741: INFO: Pod "downwardapi-volume-cbe8b7ee-83dd-4001-b7e7-8ded023ff015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005948524s
+STEP: Saw pod success
+Jan 10 17:54:06.741: INFO: Pod "downwardapi-volume-cbe8b7ee-83dd-4001-b7e7-8ded023ff015" satisfied condition "Succeeded or Failed"
+Jan 10 17:54:06.742: INFO: Trying to get logs from node ip-172-20-52-46.ap-south-1.compute.internal pod downwardapi-volume-cbe8b7ee-83dd-4001-b7e7-8ded023ff015 container client-container: 
+STEP: delete the pod
+Jan 10 17:54:06.763: INFO: Waiting for pod downwardapi-volume-cbe8b7ee-83dd-4001-b7e7-8ded023ff015 to disappear
+Jan 10 17:54:06.765: INFO: Pod downwardapi-volume-cbe8b7ee-83dd-4001-b7e7-8ded023ff015 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:54:06.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-4024" for this suite.
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":277,"completed":192,"skipped":3308,"failed":0}
+SSSSSSSSSS
+------------------------------
+[sig-api-machinery] Namespaces [Serial] 
+  should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:54:06.772: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename namespaces
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a test namespace
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Creating a service in the namespace
+STEP: Deleting the namespace
+STEP: Waiting for the namespace to be removed.
+STEP: Recreating the namespace
+STEP: Verifying there is no service in the namespace
+[AfterEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:54:12.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "namespaces-2160" for this suite.
+STEP: Destroying namespace "nsdeletetest-4859" for this suite.
+Jan 10 17:54:12.858: INFO: Namespace nsdeletetest-4859 was already deleted
+STEP: Destroying namespace "nsdeletetest-1898" for this suite.
+
+• [SLOW TEST:6.090 seconds]
+[sig-api-machinery] Namespaces [Serial]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":277,"completed":193,"skipped":3318,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for intra-pod communication: http [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Networking
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:54:12.862: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Performing setup for networking test in namespace pod-network-test-1639
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jan 10 17:54:12.885: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+Jan 10 17:54:12.912: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
+Jan 10 17:54:14.914: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:16.915: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:18.914: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:20.914: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:22.914: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:24.914: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:26.915: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:28.914: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:30.914: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:32.914: INFO: The status of Pod netserver-0 is Running (Ready = true)
+Jan 10 17:54:32.918: INFO: The status of Pod netserver-1 is Running (Ready = true)
+Jan 10 17:54:32.921: INFO: The status of Pod netserver-2 is Running (Ready = true)
+STEP: Creating test pods
+Jan 10 17:54:34.933: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.108.158.144:8080/dial?request=hostname&protocol=http&host=100.108.158.140&port=8080&tries=1'] Namespace:pod-network-test-1639 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jan 10 17:54:34.933: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+Jan 10 17:54:35.033: INFO: Waiting for responses: map[]
+Jan 10 17:54:35.035: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.108.158.144:8080/dial?request=hostname&protocol=http&host=100.112.27.231&port=8080&tries=1'] Namespace:pod-network-test-1639 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jan 10 17:54:35.035: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+Jan 10 17:54:35.132: INFO: Waiting for responses: map[]
+Jan 10 17:54:35.134: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.108.158.144:8080/dial?request=hostname&protocol=http&host=100.100.191.179&port=8080&tries=1'] Namespace:pod-network-test-1639 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jan 10 17:54:35.134: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+Jan 10 17:54:35.240: INFO: Waiting for responses: map[]
+[AfterEach] [sig-network] Networking
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:54:35.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-1639" for this suite.
+
+• [SLOW TEST:22.385 seconds]
+[sig-network] Networking
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
+  Granular Checks: Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
+    should function for intra-pod communication: http [NodeConformance] [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":277,"completed":194,"skipped":3409,"failed":0}
+SS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Networking
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:54:35.247: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Performing setup for networking test in namespace pod-network-test-6781
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jan 10 17:54:35.271: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+Jan 10 17:54:35.298: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
+Jan 10 17:54:37.301: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:39.301: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:41.300: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:43.300: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:45.301: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:47.300: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:49.300: INFO: The status of Pod netserver-0 is Running (Ready = false)
+Jan 10 17:54:51.301: INFO: The status of Pod netserver-0 is Running (Ready = true)
+Jan 10 17:54:51.304: INFO: The status of Pod netserver-1 is Running (Ready = true)
+Jan 10 17:54:51.308: INFO: The status of Pod netserver-2 is Running (Ready = false)
+Jan 10 17:54:53.310: INFO: The status of Pod netserver-2 is Running (Ready = true)
+STEP: Creating test pods
+Jan 10 17:54:55.331: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.108.158.145 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6781 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jan 10 17:54:55.331: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+Jan 10 17:54:56.430: INFO: Found all expected endpoints: [netserver-0]
+Jan 10 17:54:56.432: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.112.27.232 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6781 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jan 10 17:54:56.432: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+Jan 10 17:54:57.536: INFO: Found all expected endpoints: [netserver-1]
+Jan 10 17:54:57.538: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.100.191.180 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6781 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jan 10 17:54:57.538: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+Jan 10 17:54:58.643: INFO: Found all expected endpoints: [netserver-2]
+[AfterEach] [sig-network] Networking
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:54:58.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-6781" for this suite.
+
+• [SLOW TEST:23.403 seconds]
+[sig-network] Networking
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
+  Granular Checks: Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
+    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":195,"skipped":3411,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Runtime blackbox test on terminated container 
+  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:54:58.651: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-runtime
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: create the container
+STEP: wait for the container to reach Succeeded
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Jan 10 17:55:00.688: INFO: Expected: &{} to match Container's Termination Message:  --
+STEP: delete the container
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:55:00.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-2429" for this suite.
+•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":277,"completed":196,"skipped":3441,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  listing validating webhooks should work [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:55:00.706: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:55:01.287: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:55:04.302: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] listing validating webhooks should work [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Listing all of the created validation webhooks
+STEP: Creating a configMap that does not comply to the validation webhook rules
+STEP: Deleting the collection of validation webhooks
+STEP: Creating a configMap that does not comply to the validation webhook rules
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:55:04.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-4096" for this suite.
+STEP: Destroying namespace "webhook-4096-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":277,"completed":197,"skipped":3477,"failed":0}
+SSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:55:04.508: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
+[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:55:04.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd3e1a5f-ce6b-4ae4-acdc-c8124573095c" in namespace "projected-3418" to be "Succeeded or Failed"
+Jan 10 17:55:04.542: INFO: Pod "downwardapi-volume-bd3e1a5f-ce6b-4ae4-acdc-c8124573095c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.530927ms
+Jan 10 17:55:06.544: INFO: Pod "downwardapi-volume-bd3e1a5f-ce6b-4ae4-acdc-c8124573095c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007060344s
+STEP: Saw pod success
+Jan 10 17:55:06.544: INFO: Pod "downwardapi-volume-bd3e1a5f-ce6b-4ae4-acdc-c8124573095c" satisfied condition "Succeeded or Failed"
+Jan 10 17:55:06.546: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-bd3e1a5f-ce6b-4ae4-acdc-c8124573095c container client-container: 
+STEP: delete the pod
+Jan 10 17:55:06.563: INFO: Waiting for pod downwardapi-volume-bd3e1a5f-ce6b-4ae4-acdc-c8124573095c to disappear
+Jan 10 17:55:06.565: INFO: Pod downwardapi-volume-bd3e1a5f-ce6b-4ae4-acdc-c8124573095c no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:55:06.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-3418" for this suite.
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":198,"skipped":3488,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:55:06.571: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap with name configmap-test-volume-map-ece7dbd2-1ce2-4aba-b0c5-c762ab6c3bfb
+STEP: Creating a pod to test consume configMaps
+Jan 10 17:55:06.602: INFO: Waiting up to 5m0s for pod "pod-configmaps-7518d1c9-fcd9-416d-a61d-002a8255be6d" in namespace "configmap-1684" to be "Succeeded or Failed"
+Jan 10 17:55:06.605: INFO: Pod "pod-configmaps-7518d1c9-fcd9-416d-a61d-002a8255be6d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.195971ms
+Jan 10 17:55:08.608: INFO: Pod "pod-configmaps-7518d1c9-fcd9-416d-a61d-002a8255be6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00594611s
+STEP: Saw pod success
+Jan 10 17:55:08.608: INFO: Pod "pod-configmaps-7518d1c9-fcd9-416d-a61d-002a8255be6d" satisfied condition "Succeeded or Failed"
+Jan 10 17:55:08.610: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-configmaps-7518d1c9-fcd9-416d-a61d-002a8255be6d container configmap-volume-test: 
+STEP: delete the pod
+Jan 10 17:55:08.623: INFO: Waiting for pod pod-configmaps-7518d1c9-fcd9-416d-a61d-002a8255be6d to disappear
+Jan 10 17:55:08.625: INFO: Pod pod-configmaps-7518d1c9-fcd9-416d-a61d-002a8255be6d no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:55:08.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-1684" for this suite.
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":277,"completed":199,"skipped":3529,"failed":0}
+SSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should mutate custom resource with pruning [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:55:08.633: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:55:09.228: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:55:12.243: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should mutate custom resource with pruning [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:55:12.245: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Registering the mutating webhook for custom resource e2e-test-webhook-254-crds.webhook.example.com via the AdmissionRegistration API
+STEP: Creating a custom resource that should be mutated by the webhook
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:55:18.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-4369" for this suite.
+STEP: Destroying namespace "webhook-4369-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+
+• [SLOW TEST:9.755 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should mutate custom resource with pruning [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":277,"completed":200,"skipped":3537,"failed":0}
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:55:18.388: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap with name configmap-test-volume-6e7abc23-1054-4400-98c1-a7932d548a82
+STEP: Creating a pod to test consume configMaps
+Jan 10 17:55:18.474: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3ca3937-2e4b-4ca2-b380-c584b22e5ab7" in namespace "configmap-8504" to be "Succeeded or Failed"
+Jan 10 17:55:18.479: INFO: Pod "pod-configmaps-e3ca3937-2e4b-4ca2-b380-c584b22e5ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095972ms
+Jan 10 17:55:20.481: INFO: Pod "pod-configmaps-e3ca3937-2e4b-4ca2-b380-c584b22e5ab7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006562732s
+STEP: Saw pod success
+Jan 10 17:55:20.481: INFO: Pod "pod-configmaps-e3ca3937-2e4b-4ca2-b380-c584b22e5ab7" satisfied condition "Succeeded or Failed"
+Jan 10 17:55:20.483: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-configmaps-e3ca3937-2e4b-4ca2-b380-c584b22e5ab7 container configmap-volume-test: 
+STEP: delete the pod
+Jan 10 17:55:20.497: INFO: Waiting for pod pod-configmaps-e3ca3937-2e4b-4ca2-b380-c584b22e5ab7 to disappear
+Jan 10 17:55:20.501: INFO: Pod pod-configmaps-e3ca3937-2e4b-4ca2-b380-c584b22e5ab7 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:55:20.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-8504" for this suite.
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":201,"skipped":3537,"failed":0}
+SSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] version v1
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:55:20.508: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:55:20.543: INFO: (0) /api/v1/nodes/ip-172-20-39-143.ap-south-1.compute.internal:10250/proxy/logs/: 
+alternatives.log
+amazon/
+[... directory listing repeated identically for each of the 20 proxy attempts; remainder of this test's output and the header of the following [sig-storage] Subpath test truncated in extraction ...]
+>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating pod pod-subpath-test-configmap-8dx7
+STEP: Creating a pod to test atomic-volume-subpath
+Jan 10 17:55:20.634: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8dx7" in namespace "subpath-3456" to be "Succeeded or Failed"
+Jan 10 17:55:20.637: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.775236ms
+Jan 10 17:55:22.640: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Running", Reason="", readiness=true. Elapsed: 2.005158056s
+Jan 10 17:55:24.643: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Running", Reason="", readiness=true. Elapsed: 4.008019895s
+Jan 10 17:55:26.645: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Running", Reason="", readiness=true. Elapsed: 6.01061539s
+Jan 10 17:55:28.648: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Running", Reason="", readiness=true. Elapsed: 8.013141441s
+Jan 10 17:55:30.650: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Running", Reason="", readiness=true. Elapsed: 10.015563299s
+Jan 10 17:55:32.653: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Running", Reason="", readiness=true. Elapsed: 12.018043543s
+Jan 10 17:55:34.655: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Running", Reason="", readiness=true. Elapsed: 14.020935347s
+Jan 10 17:55:36.658: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Running", Reason="", readiness=true. Elapsed: 16.023699917s
+Jan 10 17:55:38.661: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Running", Reason="", readiness=true. Elapsed: 18.026383935s
+Jan 10 17:55:40.663: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Running", Reason="", readiness=true. Elapsed: 20.02879682s
+Jan 10 17:55:42.666: INFO: Pod "pod-subpath-test-configmap-8dx7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.031358528s
+STEP: Saw pod success
+Jan 10 17:55:42.666: INFO: Pod "pod-subpath-test-configmap-8dx7" satisfied condition "Succeeded or Failed"
+Jan 10 17:55:42.668: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-subpath-test-configmap-8dx7 container test-container-subpath-configmap-8dx7: 
+STEP: delete the pod
+Jan 10 17:55:42.682: INFO: Waiting for pod pod-subpath-test-configmap-8dx7 to disappear
+Jan 10 17:55:42.684: INFO: Pod pod-subpath-test-configmap-8dx7 no longer exists
+STEP: Deleting pod pod-subpath-test-configmap-8dx7
+Jan 10 17:55:42.684: INFO: Deleting pod "pod-subpath-test-configmap-8dx7" in namespace "subpath-3456"
+[AfterEach] [sig-storage] Subpath
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:55:42.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-3456" for this suite.
+
+• [SLOW TEST:22.093 seconds]
+[sig-storage] Subpath
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
+  Atomic writer volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":277,"completed":203,"skipped":3566,"failed":0}
+SSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:55:42.693: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test emptydir 0666 on tmpfs
+Jan 10 17:55:42.719: INFO: Waiting up to 5m0s for pod "pod-a5389fcb-2243-4f18-a048-f99514487623" in namespace "emptydir-2170" to be "Succeeded or Failed"
+Jan 10 17:55:42.721: INFO: Pod "pod-a5389fcb-2243-4f18-a048-f99514487623": Phase="Pending", Reason="", readiness=false. Elapsed: 1.890106ms
+Jan 10 17:55:44.723: INFO: Pod "pod-a5389fcb-2243-4f18-a048-f99514487623": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004455357s
+STEP: Saw pod success
+Jan 10 17:55:44.723: INFO: Pod "pod-a5389fcb-2243-4f18-a048-f99514487623" satisfied condition "Succeeded or Failed"
+Jan 10 17:55:44.725: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-a5389fcb-2243-4f18-a048-f99514487623 container test-container: 
+STEP: delete the pod
+Jan 10 17:55:44.740: INFO: Waiting for pod pod-a5389fcb-2243-4f18-a048-f99514487623 to disappear
+Jan 10 17:55:44.742: INFO: Pod pod-a5389fcb-2243-4f18-a048-f99514487623 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:55:44.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-2170" for this suite.
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":204,"skipped":3570,"failed":0}
+SSSSSSS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for ExternalName services [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] DNS
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:55:44.750: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for ExternalName services [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a test externalName service
+STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3041.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3041.svc.cluster.local; sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3041.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3041.svc.cluster.local; sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jan 10 17:55:46.796: INFO: DNS probes using dns-test-1823c569-58f8-44e3-a1bc-a83cc63fea8d succeeded
+
+STEP: deleting the pod
+STEP: changing the externalName to bar.example.com
+STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3041.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3041.svc.cluster.local; sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3041.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3041.svc.cluster.local; sleep 1; done
+
+STEP: creating a second pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jan 10 17:55:48.829: INFO: DNS probes using dns-test-a90c99ef-6179-485e-b63d-d7bbe24b41e4 succeeded
+
+STEP: deleting the pod
+STEP: changing the service to type=ClusterIP
+STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3041.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3041.svc.cluster.local; sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3041.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3041.svc.cluster.local; sleep 1; done
+
+STEP: creating a third pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jan 10 17:55:50.872: INFO: DNS probes using dns-test-425cb4ef-a2f5-4cfe-8b79-6f2ba5a9b249 succeeded
+
+STEP: deleting the pod
+STEP: deleting the test externalName service
+[AfterEach] [sig-network] DNS
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:55:50.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-3041" for this suite.
+
+• [SLOW TEST:6.149 seconds]
+[sig-network] DNS
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide DNS for ExternalName services [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":277,"completed":205,"skipped":3577,"failed":0}
+SSSSSS
+------------------------------
+[sig-storage] HostPath 
+  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] HostPath
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:55:50.905: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename hostpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] HostPath
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
+[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test hostPath mode
+Jan 10 17:55:50.934: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2391" to be "Succeeded or Failed"
+Jan 10 17:55:50.936: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1.792872ms
+Jan 10 17:55:52.938: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004312737s
+STEP: Saw pod success
+Jan 10 17:55:52.938: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
+Jan 10 17:55:52.940: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-host-path-test container test-container-1: 
+STEP: delete the pod
+Jan 10 17:55:52.955: INFO: Waiting for pod pod-host-path-test to disappear
+Jan 10 17:55:52.957: INFO: Pod pod-host-path-test no longer exists
+[AfterEach] [sig-storage] HostPath
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:55:52.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "hostpath-2391" for this suite.
+•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":206,"skipped":3583,"failed":0}
+SSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and capture the life of a pod. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:55:52.967: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+STEP: Creating a Pod that fits quota
+STEP: Ensuring ResourceQuota status captures the pod usage
+STEP: Not allowing a pod to be created that exceeds remaining quota
+STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
+STEP: Ensuring a pod cannot update its resource requirements
+STEP: Ensuring attempts to update pod resource requirements did not change quota usage
+STEP: Deleting the pod
+STEP: Ensuring resource quota status released the pod usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:56:06.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-3670" for this suite.
+
+• [SLOW TEST:13.074 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and capture the life of a pod. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":277,"completed":207,"skipped":3588,"failed":0}
+SSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:56:06.042: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test emptydir 0777 on node default medium
+Jan 10 17:56:06.070: INFO: Waiting up to 5m0s for pod "pod-2f05f349-e545-42b2-9516-a5c8a9ba8437" in namespace "emptydir-622" to be "Succeeded or Failed"
+Jan 10 17:56:06.072: INFO: Pod "pod-2f05f349-e545-42b2-9516-a5c8a9ba8437": Phase="Pending", Reason="", readiness=false. Elapsed: 1.794992ms
+Jan 10 17:56:08.074: INFO: Pod "pod-2f05f349-e545-42b2-9516-a5c8a9ba8437": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004250578s
+STEP: Saw pod success
+Jan 10 17:56:08.074: INFO: Pod "pod-2f05f349-e545-42b2-9516-a5c8a9ba8437" satisfied condition "Succeeded or Failed"
+Jan 10 17:56:08.076: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-2f05f349-e545-42b2-9516-a5c8a9ba8437 container test-container: 
+STEP: delete the pod
+Jan 10 17:56:08.091: INFO: Waiting for pod pod-2f05f349-e545-42b2-9516-a5c8a9ba8437 to disappear
+Jan 10 17:56:08.092: INFO: Pod pod-2f05f349-e545-42b2-9516-a5c8a9ba8437 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:56:08.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-622" for this suite.
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":208,"skipped":3591,"failed":0}
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] DNS 
+  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] DNS
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:56:08.100: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9348.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9348.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9348.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9348.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9348.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9348.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
+
+STEP: creating a pod to probe /etc/hosts
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jan 10 17:56:10.154: INFO: DNS probes using dns-9348/dns-test-bec2886e-4408-4549-8176-085ac8f817b3 succeeded
+
+STEP: deleting the pod
+[AfterEach] [sig-network] DNS
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:56:10.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-9348" for this suite.
+•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":277,"completed":209,"skipped":3612,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir wrapper volumes 
+  should not cause race condition when used for configmaps [Serial] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:56:10.170: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename emptydir-wrapper
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not cause race condition when used for configmaps [Serial] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating 50 configmaps
+STEP: Creating RC which spawns configmap-volume pods
+Jan 10 17:56:10.380: INFO: Pod name wrapped-volume-race-83b4af59-9b71-4771-915e-e902e80e6c20: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-83b4af59-9b71-4771-915e-e902e80e6c20 in namespace emptydir-wrapper-1232, will wait for the garbage collector to delete the pods
+Jan 10 17:56:24.493: INFO: Deleting ReplicationController wrapped-volume-race-83b4af59-9b71-4771-915e-e902e80e6c20 took: 5.44092ms
+Jan 10 17:56:24.893: INFO: Terminating ReplicationController wrapped-volume-race-83b4af59-9b71-4771-915e-e902e80e6c20 pods took: 400.254428ms
+STEP: Creating RC which spawns configmap-volume pods
+Jan 10 17:56:34.409: INFO: Pod name wrapped-volume-race-628d7576-bd7d-435a-9622-039253fdb37b: Found 0 pods out of 5
+Jan 10 17:56:39.413: INFO: Pod name wrapped-volume-race-628d7576-bd7d-435a-9622-039253fdb37b: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-628d7576-bd7d-435a-9622-039253fdb37b in namespace emptydir-wrapper-1232, will wait for the garbage collector to delete the pods
+Jan 10 17:56:49.485: INFO: Deleting ReplicationController wrapped-volume-race-628d7576-bd7d-435a-9622-039253fdb37b took: 5.531655ms
+Jan 10 17:56:49.886: INFO: Terminating ReplicationController wrapped-volume-race-628d7576-bd7d-435a-9622-039253fdb37b pods took: 400.251676ms
+STEP: Creating RC which spawns configmap-volume pods
+Jan 10 17:56:59.500: INFO: Pod name wrapped-volume-race-763cb0c0-c9a1-47ab-abb1-8e94602b5817: Found 0 pods out of 5
+Jan 10 17:57:04.504: INFO: Pod name wrapped-volume-race-763cb0c0-c9a1-47ab-abb1-8e94602b5817: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-763cb0c0-c9a1-47ab-abb1-8e94602b5817 in namespace emptydir-wrapper-1232, will wait for the garbage collector to delete the pods
+Jan 10 17:57:14.577: INFO: Deleting ReplicationController wrapped-volume-race-763cb0c0-c9a1-47ab-abb1-8e94602b5817 took: 6.514689ms
+Jan 10 17:57:14.678: INFO: Terminating ReplicationController wrapped-volume-race-763cb0c0-c9a1-47ab-abb1-8e94602b5817 pods took: 100.220449ms
+STEP: Cleaning up the configMaps
+[AfterEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:57:19.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-wrapper-1232" for this suite.
+
+• [SLOW TEST:69.025 seconds]
+[sig-storage] EmptyDir wrapper volumes
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
+  should not cause race condition when used for configmaps [Serial] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":277,"completed":210,"skipped":3635,"failed":0}
+[sig-storage] ConfigMap 
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:57:19.196: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap with name configmap-test-upd-9a76844e-a92b-4c9e-bef3-5918ace1c0b0
+STEP: Creating the pod
+STEP: Updating configmap configmap-test-upd-9a76844e-a92b-4c9e-bef3-5918ace1c0b0
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:57:23.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-9846" for this suite.
+•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":211,"skipped":3635,"failed":0}
+
+------------------------------
+[sig-node] Downward API 
+  should provide pod UID as env vars [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-node] Downward API
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:57:23.265: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide pod UID as env vars [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward api env vars
+Jan 10 17:57:23.291: INFO: Waiting up to 5m0s for pod "downward-api-6d3d6ff1-85f1-4960-ac31-6c8d06313bfc" in namespace "downward-api-5267" to be "Succeeded or Failed"
+Jan 10 17:57:23.295: INFO: Pod "downward-api-6d3d6ff1-85f1-4960-ac31-6c8d06313bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.943259ms
+Jan 10 17:57:25.298: INFO: Pod "downward-api-6d3d6ff1-85f1-4960-ac31-6c8d06313bfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006327378s
+STEP: Saw pod success
+Jan 10 17:57:25.298: INFO: Pod "downward-api-6d3d6ff1-85f1-4960-ac31-6c8d06313bfc" satisfied condition "Succeeded or Failed"
+Jan 10 17:57:25.300: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downward-api-6d3d6ff1-85f1-4960-ac31-6c8d06313bfc container dapi-container: 
+STEP: delete the pod
+Jan 10 17:57:25.314: INFO: Waiting for pod downward-api-6d3d6ff1-85f1-4960-ac31-6c8d06313bfc to disappear
+Jan 10 17:57:25.316: INFO: Pod downward-api-6d3d6ff1-85f1-4960-ac31-6c8d06313bfc no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:57:25.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-5267" for this suite.
+•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":277,"completed":212,"skipped":3635,"failed":0}
+SS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:57:25.325: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test emptydir volume type on tmpfs
+Jan 10 17:57:25.351: INFO: Waiting up to 5m0s for pod "pod-6652397f-5590-433a-b504-656bbce63217" in namespace "emptydir-8857" to be "Succeeded or Failed"
+Jan 10 17:57:25.354: INFO: Pod "pod-6652397f-5590-433a-b504-656bbce63217": Phase="Pending", Reason="", readiness=false. Elapsed: 3.624828ms
+Jan 10 17:57:27.357: INFO: Pod "pod-6652397f-5590-433a-b504-656bbce63217": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006364863s
+STEP: Saw pod success
+Jan 10 17:57:27.357: INFO: Pod "pod-6652397f-5590-433a-b504-656bbce63217" satisfied condition "Succeeded or Failed"
+Jan 10 17:57:27.359: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-6652397f-5590-433a-b504-656bbce63217 container test-container: 
+STEP: delete the pod
+Jan 10 17:57:27.373: INFO: Waiting for pod pod-6652397f-5590-433a-b504-656bbce63217 to disappear
+Jan 10 17:57:27.375: INFO: Pod pod-6652397f-5590-433a-b504-656bbce63217 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:57:27.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-8857" for this suite.
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":213,"skipped":3637,"failed":0}
+SSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Should recreate evicted statefulset [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:57:27.385: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
+STEP: Creating service test in namespace statefulset-2222
+[It] Should recreate evicted statefulset [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Looking for a node to schedule stateful set and pod
+STEP: Creating pod with conflicting port in namespace statefulset-2222
+STEP: Creating statefulset with conflicting port in namespace statefulset-2222
+STEP: Waiting until pod test-pod will start running in namespace statefulset-2222
+STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2222
+Jan 10 17:57:29.427: INFO: Observed stateful pod in namespace: statefulset-2222, name: ss-0, uid: a02551b3-db30-4a82-96a8-530a72ffdcd4, status phase: Pending. Waiting for statefulset controller to delete.
+Jan 10 17:57:30.225: INFO: Observed stateful pod in namespace: statefulset-2222, name: ss-0, uid: a02551b3-db30-4a82-96a8-530a72ffdcd4, status phase: Failed. Waiting for statefulset controller to delete.
+Jan 10 17:57:30.230: INFO: Observed stateful pod in namespace: statefulset-2222, name: ss-0, uid: a02551b3-db30-4a82-96a8-530a72ffdcd4, status phase: Failed. Waiting for statefulset controller to delete.
+Jan 10 17:57:30.233: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2222
+STEP: Removing pod with conflicting port in namespace statefulset-2222
+STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2222 and will be in running state
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
+Jan 10 17:57:34.251: INFO: Deleting all statefulset in ns statefulset-2222
+Jan 10 17:57:34.253: INFO: Scaling statefulset ss to 0
+Jan 10 17:57:44.264: INFO: Waiting for statefulset status.replicas updated to 0
+Jan 10 17:57:44.266: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:57:44.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-2222" for this suite.
+
+• [SLOW TEST:16.895 seconds]
+[sig-apps] StatefulSet
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+    Should recreate evicted statefulset [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":277,"completed":214,"skipped":3650,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
+  should have an terminated reason [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:57:44.281: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
+[BeforeEach] when scheduling a busybox command that always fails in a pod
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
+[It] should have an terminated reason [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[AfterEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:57:48.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-8991" for this suite.
+•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":277,"completed":215,"skipped":3689,"failed":0}
+SSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:57:48.318: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating a watch on configmaps
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: closing the watch once it receives two notifications
+Jan 10 17:57:48.348: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7146 /api/v1/namespaces/watch-7146/configmaps/e2e-watch-test-watch-closed 133ae4e7-318e-45d7-8565-1fe763eb5d66 24363 0 2021-01-10 17:57:48 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-01-10 17:57:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 17:57:48.348: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7146 /api/v1/namespaces/watch-7146/configmaps/e2e-watch-test-watch-closed 133ae4e7-318e-45d7-8565-1fe763eb5d66 24364 0 2021-01-10 17:57:48 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-01-10 17:57:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
+STEP: modifying the configmap a second time, while the watch is closed
+STEP: creating a new watch on configmaps from the last resource version observed by the first watch
+STEP: deleting the configmap
+STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
+Jan 10 17:57:48.358: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7146 /api/v1/namespaces/watch-7146/configmaps/e2e-watch-test-watch-closed 133ae4e7-318e-45d7-8565-1fe763eb5d66 24365 0 2021-01-10 17:57:48 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-01-10 17:57:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 17:57:48.358: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7146 /api/v1/namespaces/watch-7146/configmaps/e2e-watch-test-watch-closed 133ae4e7-318e-45d7-8565-1fe763eb5d66 24366 0 2021-01-10 17:57:48 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-01-10 17:57:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:57:48.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-7146" for this suite.
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":277,"completed":216,"skipped":3701,"failed":0}
+SSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute poststart exec hook properly [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:57:48.365: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: create the pod with lifecycle hook
+STEP: check poststart hook
+STEP: delete the pod with lifecycle hook
+Jan 10 17:57:52.421: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jan 10 17:57:52.423: INFO: Pod pod-with-poststart-exec-hook still exists
+Jan 10 17:57:54.423: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jan 10 17:57:54.426: INFO: Pod pod-with-poststart-exec-hook still exists
+Jan 10 17:57:56.423: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jan 10 17:57:56.426: INFO: Pod pod-with-poststart-exec-hook still exists
+Jan 10 17:57:58.423: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jan 10 17:57:58.426: INFO: Pod pod-with-poststart-exec-hook still exists
+Jan 10 17:58:00.423: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jan 10 17:58:00.426: INFO: Pod pod-with-poststart-exec-hook still exists
+Jan 10 17:58:02.423: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jan 10 17:58:02.426: INFO: Pod pod-with-poststart-exec-hook still exists
+Jan 10 17:58:04.423: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jan 10 17:58:04.426: INFO: Pod pod-with-poststart-exec-hook no longer exists
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:58:04.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-5859" for this suite.
+
+• [SLOW TEST:16.067 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  when create a pod with lifecycle hook
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
+    should execute poststart exec hook properly [NodeConformance] [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":277,"completed":217,"skipped":3717,"failed":0}
+SSSSSS
+------------------------------
+[sig-cli] Kubectl client Guestbook application 
+  should create and stop a working application  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:58:04.433: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
+[It] should create and stop a working application  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating all guestbook components
+Jan 10 17:58:04.452: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: agnhost-slave
+  labels:
+    app: agnhost
+    role: slave
+    tier: backend
+spec:
+  ports:
+  - port: 6379
+  selector:
+    app: agnhost
+    role: slave
+    tier: backend
+
+Jan 10 17:58:04.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-3093'
+Jan 10 17:58:04.823: INFO: stderr: ""
+Jan 10 17:58:04.823: INFO: stdout: "service/agnhost-slave created\n"
+Jan 10 17:58:04.823: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: agnhost-master
+  labels:
+    app: agnhost
+    role: master
+    tier: backend
+spec:
+  ports:
+  - port: 6379
+    targetPort: 6379
+  selector:
+    app: agnhost
+    role: master
+    tier: backend
+
+Jan 10 17:58:04.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-3093'
+Jan 10 17:58:05.146: INFO: stderr: ""
+Jan 10 17:58:05.146: INFO: stdout: "service/agnhost-master created\n"
+Jan 10 17:58:05.146: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: frontend
+  labels:
+    app: guestbook
+    tier: frontend
+spec:
+  # if your cluster supports it, uncomment the following to automatically create
+  # an external load-balanced IP for the frontend service.
+  # type: LoadBalancer
+  ports:
+  - port: 80
+  selector:
+    app: guestbook
+    tier: frontend
+
+Jan 10 17:58:05.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-3093'
+Jan 10 17:58:05.438: INFO: stderr: ""
+Jan 10 17:58:05.438: INFO: stdout: "service/frontend created\n"
+Jan 10 17:58:05.438: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: guestbook
+      tier: frontend
+  template:
+    metadata:
+      labels:
+        app: guestbook
+        tier: frontend
+    spec:
+      containers:
+      - name: guestbook-frontend
+        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
+        args: [ "guestbook", "--backend-port", "6379" ]
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 80
+
+Jan 10 17:58:05.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-3093'
+Jan 10 17:58:05.602: INFO: stderr: ""
+Jan 10 17:58:05.602: INFO: stdout: "deployment.apps/frontend created\n"
+Jan 10 17:58:05.602: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: agnhost-master
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: agnhost
+      role: master
+      tier: backend
+  template:
+    metadata:
+      labels:
+        app: agnhost
+        role: master
+        tier: backend
+    spec:
+      containers:
+      - name: master
+        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
+        args: [ "guestbook", "--http-port", "6379" ]
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 6379
+
+Jan 10 17:58:05.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-3093'
+Jan 10 17:58:05.859: INFO: stderr: ""
+Jan 10 17:58:05.859: INFO: stdout: "deployment.apps/agnhost-master created\n"
+Jan 10 17:58:05.859: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: agnhost-slave
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: agnhost
+      role: slave
+      tier: backend
+  template:
+    metadata:
+      labels:
+        app: agnhost
+        role: slave
+        tier: backend
+    spec:
+      containers:
+      - name: slave
+        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
+        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 6379
+
+Jan 10 17:58:05.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-3093'
+Jan 10 17:58:06.068: INFO: stderr: ""
+Jan 10 17:58:06.068: INFO: stdout: "deployment.apps/agnhost-slave created\n"
+STEP: validating guestbook app
+Jan 10 17:58:06.068: INFO: Waiting for all frontend pods to be Running.
+Jan 10 17:58:11.118: INFO: Waiting for frontend to serve content.
+Jan 10 17:58:11.124: INFO: Trying to add a new entry to the guestbook.
+Jan 10 17:58:11.131: INFO: Verifying that added entry can be retrieved.
+Jan 10 17:58:11.137: INFO: Failed to get response from guestbook. err: , response: {"data":""}
+STEP: using delete to clean up resources
+Jan 10 17:58:16.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete --grace-period=0 --force -f - --namespace=kubectl-3093'
+Jan 10 17:58:16.226: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jan 10 17:58:16.226: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
+STEP: using delete to clean up resources
+Jan 10 17:58:16.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete --grace-period=0 --force -f - --namespace=kubectl-3093'
+Jan 10 17:58:16.330: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jan 10 17:58:16.330: INFO: stdout: "service \"agnhost-master\" force deleted\n"
+STEP: using delete to clean up resources
+Jan 10 17:58:16.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete --grace-period=0 --force -f - --namespace=kubectl-3093'
+Jan 10 17:58:16.427: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jan 10 17:58:16.427: INFO: stdout: "service \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Jan 10 17:58:16.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete --grace-period=0 --force -f - --namespace=kubectl-3093'
+Jan 10 17:58:16.521: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jan 10 17:58:16.521: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Jan 10 17:58:16.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete --grace-period=0 --force -f - --namespace=kubectl-3093'
+Jan 10 17:58:16.618: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jan 10 17:58:16.618: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
+STEP: using delete to clean up resources
+Jan 10 17:58:16.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete --grace-period=0 --force -f - --namespace=kubectl-3093'
+Jan 10 17:58:16.722: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jan 10 17:58:16.722: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:58:16.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-3093" for this suite.
+
+• [SLOW TEST:12.306 seconds]
+[sig-cli] Kubectl client
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Guestbook application
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
+    should create and stop a working application  [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":277,"completed":218,"skipped":3723,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] ConfigMap 
+  should be consumable via environment variable [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-node] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:58:16.739: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable via environment variable [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap configmap-684/configmap-test-5d7cec18-5898-4169-8c0d-27bf5a6c1748
+STEP: Creating a pod to test consume configMaps
+Jan 10 17:58:16.770: INFO: Waiting up to 5m0s for pod "pod-configmaps-56b569a0-5dbe-46e0-9308-4bc68c78cc21" in namespace "configmap-684" to be "Succeeded or Failed"
+Jan 10 17:58:16.775: INFO: Pod "pod-configmaps-56b569a0-5dbe-46e0-9308-4bc68c78cc21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244881ms
+Jan 10 17:58:18.777: INFO: Pod "pod-configmaps-56b569a0-5dbe-46e0-9308-4bc68c78cc21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006601995s
+STEP: Saw pod success
+Jan 10 17:58:18.777: INFO: Pod "pod-configmaps-56b569a0-5dbe-46e0-9308-4bc68c78cc21" satisfied condition "Succeeded or Failed"
+Jan 10 17:58:18.779: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-configmaps-56b569a0-5dbe-46e0-9308-4bc68c78cc21 container env-test: 
+STEP: delete the pod
+Jan 10 17:58:18.794: INFO: Waiting for pod pod-configmaps-56b569a0-5dbe-46e0-9308-4bc68c78cc21 to disappear
+Jan 10 17:58:18.795: INFO: Pod pod-configmaps-56b569a0-5dbe-46e0-9308-4bc68c78cc21 no longer exists
+[AfterEach] [sig-node] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:58:18.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-684" for this suite.
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":277,"completed":219,"skipped":3778,"failed":0}
+SSSSSSSSS
+------------------------------
+[sig-apps] ReplicaSet 
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] ReplicaSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:58:18.802: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename replicaset
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:58:18.822: INFO: Creating ReplicaSet my-hostname-basic-01880d34-ed4a-4a91-9a31-86e3aea39b45
+Jan 10 17:58:18.829: INFO: Pod name my-hostname-basic-01880d34-ed4a-4a91-9a31-86e3aea39b45: Found 0 pods out of 1
+Jan 10 17:58:23.832: INFO: Pod name my-hostname-basic-01880d34-ed4a-4a91-9a31-86e3aea39b45: Found 1 pods out of 1
+Jan 10 17:58:23.832: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-01880d34-ed4a-4a91-9a31-86e3aea39b45" is running
+Jan 10 17:58:23.834: INFO: Pod "my-hostname-basic-01880d34-ed4a-4a91-9a31-86e3aea39b45-xwn2m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-10 17:58:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-10 17:58:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-10 17:58:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-10 17:58:18 +0000 UTC Reason: Message:}])
+Jan 10 17:58:23.834: INFO: Trying to dial the pod
+Jan 10 17:58:28.841: INFO: Controller my-hostname-basic-01880d34-ed4a-4a91-9a31-86e3aea39b45: Got expected result from replica 1 [my-hostname-basic-01880d34-ed4a-4a91-9a31-86e3aea39b45-xwn2m]: "my-hostname-basic-01880d34-ed4a-4a91-9a31-86e3aea39b45-xwn2m", 1 of 1 required successes so far
+[AfterEach] [sig-apps] ReplicaSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:58:28.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replicaset-3373" for this suite.
+
+• [SLOW TEST:10.046 seconds]
+[sig-apps] ReplicaSet
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":277,"completed":220,"skipped":3787,"failed":0}
+SSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should rollback without unnecessary restarts [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:58:28.849: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
+[It] should rollback without unnecessary restarts [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:58:28.887: INFO: Create a RollingUpdate DaemonSet
+Jan 10 17:58:28.892: INFO: Check that daemon pods launch on every node of the cluster
+Jan 10 17:58:28.896: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:28.896: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:28.896: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:28.898: INFO: Number of nodes with available pods: 0
+Jan 10 17:58:28.898: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:58:29.902: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:29.902: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:29.902: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:29.904: INFO: Number of nodes with available pods: 0
+Jan 10 17:58:29.904: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:58:30.902: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:30.902: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:30.902: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:30.904: INFO: Number of nodes with available pods: 3
+Jan 10 17:58:30.904: INFO: Number of running nodes: 3, number of available pods: 3
+Jan 10 17:58:30.905: INFO: Update the DaemonSet to trigger a rollout
+Jan 10 17:58:30.911: INFO: Updating DaemonSet daemon-set
+Jan 10 17:58:39.921: INFO: Roll back the DaemonSet before rollout is complete
+Jan 10 17:58:39.927: INFO: Updating DaemonSet daemon-set
+Jan 10 17:58:39.927: INFO: Make sure DaemonSet rollback is complete
+Jan 10 17:58:39.931: INFO: Wrong image for pod: daemon-set-tvz6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
+Jan 10 17:58:39.931: INFO: Pod daemon-set-tvz6k is not available
+Jan 10 17:58:39.934: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:39.934: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:39.934: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:40.937: INFO: Wrong image for pod: daemon-set-tvz6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
+Jan 10 17:58:40.937: INFO: Pod daemon-set-tvz6k is not available
+Jan 10 17:58:40.940: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:40.940: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:40.940: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:41.937: INFO: Pod daemon-set-4f64h is not available
+Jan 10 17:58:41.940: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:41.940: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:58:41.940: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8717, will wait for the garbage collector to delete the pods
+Jan 10 17:58:42.001: INFO: Deleting DaemonSet.extensions daemon-set took: 5.034279ms
+Jan 10 17:58:42.102: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.1839ms
+Jan 10 17:58:54.304: INFO: Number of nodes with available pods: 0
+Jan 10 17:58:54.304: INFO: Number of running nodes: 0, number of available pods: 0
+Jan 10 17:58:54.305: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8717/daemonsets","resourceVersion":"24963"},"items":null}
+
+Jan 10 17:58:54.307: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8717/pods","resourceVersion":"24963"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:58:54.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-8717" for this suite.
+
+• [SLOW TEST:25.474 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should rollback without unnecessary restarts [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":277,"completed":221,"skipped":3794,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:58:54.324: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
+[It] should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating the pod
+STEP: setting up watch
+STEP: submitting the pod to kubernetes
+Jan 10 17:58:54.347: INFO: observed the pod list
+STEP: verifying the pod is in kubernetes
+STEP: verifying pod creation was observed
+STEP: deleting the pod gracefully
+STEP: verifying the kubelet observed the termination notice
+STEP: verifying pod deletion was observed
+[AfterEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:59:04.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-5373" for this suite.
+
+• [SLOW TEST:9.979 seconds]
+[k8s.io] Pods
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":277,"completed":222,"skipped":3816,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:59:04.303: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
+[It] should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:59:04.332: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d56dec5d-8807-4ffd-a755-70467bdb4d1e" in namespace "projected-8558" to be "Succeeded or Failed"
+Jan 10 17:59:04.335: INFO: Pod "downwardapi-volume-d56dec5d-8807-4ffd-a755-70467bdb4d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281601ms
+Jan 10 17:59:06.337: INFO: Pod "downwardapi-volume-d56dec5d-8807-4ffd-a755-70467bdb4d1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004811807s
+STEP: Saw pod success
+Jan 10 17:59:06.337: INFO: Pod "downwardapi-volume-d56dec5d-8807-4ffd-a755-70467bdb4d1e" satisfied condition "Succeeded or Failed"
+Jan 10 17:59:06.339: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-d56dec5d-8807-4ffd-a755-70467bdb4d1e container client-container: 
+STEP: delete the pod
+Jan 10 17:59:06.354: INFO: Waiting for pod downwardapi-volume-d56dec5d-8807-4ffd-a755-70467bdb4d1e to disappear
+Jan 10 17:59:06.357: INFO: Pod downwardapi-volume-d56dec5d-8807-4ffd-a755-70467bdb4d1e no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:59:06.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-8558" for this suite.
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":277,"completed":223,"skipped":3874,"failed":0}
+SSSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should support rollover [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:59:06.364: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
+[It] deployment should support rollover [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:59:06.393: INFO: Pod name rollover-pod: Found 0 pods out of 1
+Jan 10 17:59:11.395: INFO: Pod name rollover-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Jan 10 17:59:11.395: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
+Jan 10 17:59:13.398: INFO: Creating deployment "test-rollover-deployment"
+Jan 10 17:59:13.404: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
+Jan 10 17:59:15.409: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
+Jan 10 17:59:15.413: INFO: Ensure that both replica sets have 1 created replica
+Jan 10 17:59:15.417: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
+Jan 10 17:59:15.422: INFO: Updating deployment test-rollover-deployment
+Jan 10 17:59:15.422: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
+Jan 10 17:59:17.426: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
+Jan 10 17:59:17.430: INFO: Make sure deployment "test-rollover-deployment" is complete
+Jan 10 17:59:17.433: INFO: all replica sets need to contain the pod-template-hash label
+Jan 10 17:59:17.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898357, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jan 10 17:59:19.438: INFO: all replica sets need to contain the pod-template-hash label
+Jan 10 17:59:19.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898357, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jan 10 17:59:21.438: INFO: all replica sets need to contain the pod-template-hash label
+Jan 10 17:59:21.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898357, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jan 10 17:59:23.438: INFO: all replica sets need to contain the pod-template-hash label
+Jan 10 17:59:23.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898357, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jan 10 17:59:25.438: INFO: all replica sets need to contain the pod-template-hash label
+Jan 10 17:59:25.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898357, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745898353, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jan 10 17:59:27.438: INFO: 
+Jan 10 17:59:27.438: INFO: Ensure that both old replica sets have no replicas
+[AfterEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
+Jan 10 17:59:27.443: INFO: Deployment "test-rollover-deployment":
+&Deployment{ObjectMeta:{test-rollover-deployment  deployment-694 /apis/apps/v1/namespaces/deployment-694/deployments/test-rollover-deployment ea187a92-4967-4ccf-8220-fa3a5fa59315 25189 2 2021-01-10 17:59:13 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2021-01-10 17:59:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[... managedFields byte dump elided ...],}} {kube-controller-manager Update apps/v1 2021-01-10 17:59:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... managedFields byte dump elided ...],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005fe4278  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-10 17:59:13 +0000 UTC,LastTransitionTime:2021-01-10 17:59:13 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2021-01-10 17:59:27 +0000 UTC,LastTransitionTime:2021-01-10 17:59:13 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
+
+Jan 10 17:59:27.445: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
+&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-694 /apis/apps/v1/namespaces/deployment-694/replicasets/test-rollover-deployment-84f7f6f64b 0ef1e5bb-c314-46db-9ce5-d6b755444ea1 25182 2 2021-01-10 17:59:15 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment ea187a92-4967-4ccf-8220-fa3a5fa59315 0xc005211397 0xc005211398}] []  [{kube-controller-manager Update apps/v1 2021-01-10 17:59:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[... managedFields byte dump elided ...],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005211428  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
+Jan 10 17:59:27.445: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
+Jan 10 17:59:27.445: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-694 /apis/apps/v1/namespaces/deployment-694/replicasets/test-rollover-controller 0567ed45-a3e8-4b66-9211-226ae2a0b006 25188 2 2021-01-10 17:59:06 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ea187a92-4967-4ccf-8220-fa3a5fa59315 0xc005211187 0xc005211188}] []  [{e2e.test Update apps/v1 2021-01-10 17:59:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[... managedFields byte dump elided ...],}} {kube-controller-manager Update apps/v1 2021-01-10 17:59:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... managedFields byte dump elided ...],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005211228  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jan 10 17:59:27.446: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-694 /apis/apps/v1/namespaces/deployment-694/replicasets/test-rollover-deployment-5686c4cfd5 00150ad9-d60a-4c32-b0fc-02de053a80f3 25126 2 2021-01-10 17:59:13 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ea187a92-4967-4ccf-8220-fa3a5fa59315 0xc005211297 0xc005211298}] []  [{kube-controller-manager Update apps/v1 2021-01-10 17:59:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[... managedFields byte dump elided ...],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005211328  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jan 10 17:59:27.448: INFO: Pod "test-rollover-deployment-84f7f6f64b-7gxxw" is available:
+&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-7gxxw test-rollover-deployment-84f7f6f64b- deployment-694 /api/v1/namespaces/deployment-694/pods/test-rollover-deployment-84f7f6f64b-7gxxw 421ce13a-bebe-4015-a718-5c969b5fd526 25145 0 2021-01-10 17:59:15 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[cni.projectcalico.org/podIP:100.108.158.182/32 cni.projectcalico.org/podIPs:100.108.158.182/32] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 0ef1e5bb-c314-46db-9ce5-d6b755444ea1 0xc0025724d7 0xc0025724d8}] []  [{kube-controller-manager Update v1 2021-01-10 17:59:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[... managedFields byte dump elided ...],}} {calico Update v1 2021-01-10 17:59:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... managedFields byte dump elided ...],}} {kubelet Update v1 2021-01-10 17:59:17 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... managedFields byte dump elided ...],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l7nxj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l7nxj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l7nxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:59:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:59:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:59:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:59:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.33.172,PodIP:100.108.158.182,StartTime:2021-01-10 17:59:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:59:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:docker-pullable://us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:docker://98aaece9c79fca5fac4bdcf23085265e1d90c252dde126e4d84d18727a4e608d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.108.158.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:59:27.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-694" for this suite.
+
+• [SLOW TEST:21.091 seconds]
+[sig-apps] Deployment
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  deployment should support rollover [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":277,"completed":224,"skipped":3879,"failed":0}
+SSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:59:27.455: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
+[It] should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:59:27.475: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+[AfterEach] [k8s.io] Pods
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:59:29.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-8113" for this suite.
+•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":277,"completed":225,"skipped":3889,"failed":0}
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:59:29.606: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating projection with configMap that has name projected-configmap-test-upd-c1d31761-59e1-4e72-98e8-8f9484e69203
+STEP: Creating the pod
+STEP: Updating configmap projected-configmap-test-upd-c1d31761-59e1-4e72-98e8-8f9484e69203
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:59:33.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-5294" for this suite.
+•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":226,"skipped":3910,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should observe add, update, and delete watch notifications on configmaps [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:59:33.674: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating a watch on configmaps with label A
+STEP: creating a watch on configmaps with label B
+STEP: creating a watch on configmaps with label A or B
+STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
+Jan 10 17:59:33.701: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-a 1e4b12c5-fc0a-4f5f-a1ac-0602db30c173 25267 0 2021-01-10 17:59:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-01-10 17:59:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 17:59:33.701: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-a 1e4b12c5-fc0a-4f5f-a1ac-0602db30c173 25267 0 2021-01-10 17:59:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-01-10 17:59:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
+STEP: modifying configmap A and ensuring the correct watchers observe the notification
+Jan 10 17:59:43.707: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-a 1e4b12c5-fc0a-4f5f-a1ac-0602db30c173 25319 0 2021-01-10 17:59:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-01-10 17:59:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 17:59:43.707: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-a 1e4b12c5-fc0a-4f5f-a1ac-0602db30c173 25319 0 2021-01-10 17:59:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-01-10 17:59:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
+STEP: modifying configmap A again and ensuring the correct watchers observe the notification
+Jan 10 17:59:53.713: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-a 1e4b12c5-fc0a-4f5f-a1ac-0602db30c173 25355 0 2021-01-10 17:59:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-01-10 17:59:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 17:59:53.714: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-a 1e4b12c5-fc0a-4f5f-a1ac-0602db30c173 25355 0 2021-01-10 17:59:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-01-10 17:59:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
+STEP: deleting configmap A and ensuring the correct watchers observe the notification
+Jan 10 18:00:03.719: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-a 1e4b12c5-fc0a-4f5f-a1ac-0602db30c173 25389 0 2021-01-10 17:59:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-01-10 17:59:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 18:00:03.720: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-a 1e4b12c5-fc0a-4f5f-a1ac-0602db30c173 25389 0 2021-01-10 17:59:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-01-10 17:59:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
+STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
+Jan 10 18:00:13.725: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-b 707b2df5-c326-4a84-aa11-97299e85ac9c 25432 0 2021-01-10 18:00:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2021-01-10 18:00:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 18:00:13.725: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-b 707b2df5-c326-4a84-aa11-97299e85ac9c 25432 0 2021-01-10 18:00:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2021-01-10 18:00:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
+STEP: deleting configmap B and ensuring the correct watchers observe the notification
+Jan 10 18:00:23.731: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-b 707b2df5-c326-4a84-aa11-97299e85ac9c 25467 0 2021-01-10 18:00:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2021-01-10 18:00:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 18:00:23.731: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8243 /api/v1/namespaces/watch-8243/configmaps/e2e-watch-test-configmap-b 707b2df5-c326-4a84-aa11-97299e85ac9c 25467 0 2021-01-10 18:00:13 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2021-01-10 18:00:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:00:33.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-8243" for this suite.
+
+• [SLOW TEST:60.065 seconds]
+[sig-api-machinery] Watchers
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should observe add, update, and delete watch notifications on configmaps [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":277,"completed":227,"skipped":3933,"failed":0}
+S
+------------------------------
+[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
+  watch on custom resource definition objects [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:00:33.739: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename crd-watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] watch on custom resource definition objects [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 18:00:33.761: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Creating first CR 
+Jan 10 18:00:39.304: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-10T18:00:39Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-10T18:00:39Z]] name:name1 resourceVersion:25538 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:428780a0-cb28-44c1-8026-9c9997a4c6f3] num:map[num1:9223372036854775807 num2:1000000]]}
+STEP: Creating second CR
+Jan 10 18:00:49.309: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-10T18:00:49Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-10T18:00:49Z]] name:name2 resourceVersion:25573 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:577d6591-055d-4968-b07f-38982d7982d0] num:map[num1:9223372036854775807 num2:1000000]]}
+STEP: Modifying first CR
+Jan 10 18:00:59.314: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-10T18:00:39Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-10T18:00:59Z]] name:name1 resourceVersion:25608 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:428780a0-cb28-44c1-8026-9c9997a4c6f3] num:map[num1:9223372036854775807 num2:1000000]]}
+STEP: Modifying second CR
+Jan 10 18:01:09.319: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-10T18:00:49Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-10T18:01:09Z]] name:name2 resourceVersion:25643 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:577d6591-055d-4968-b07f-38982d7982d0] num:map[num1:9223372036854775807 num2:1000000]]}
+STEP: Deleting first CR
+Jan 10 18:01:19.325: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-10T18:00:39Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-10T18:00:59Z]] name:name1 resourceVersion:25678 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:428780a0-cb28-44c1-8026-9c9997a4c6f3] num:map[num1:9223372036854775807 num2:1000000]]}
+STEP: Deleting second CR
+Jan 10 18:01:29.332: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-10T18:00:49Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-10T18:01:09Z]] name:name2 resourceVersion:25713 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:577d6591-055d-4968-b07f-38982d7982d0] num:map[num1:9223372036854775807 num2:1000000]]}
+[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:01:39.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-watch-4391" for this suite.
+
+• [SLOW TEST:66.108 seconds]
+[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  CustomResourceDefinition Watch
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
+    watch on custom resource definition objects [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":277,"completed":228,"skipped":3934,"failed":0}
+[sig-cli] Kubectl client Proxy server 
+  should support proxy with --port 0  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:01:39.848: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
+[It] should support proxy with --port 0  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: starting the proxy server
+Jan 10 18:01:39.870: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-870154433 proxy -p 0 --disable-filter'
+STEP: curling proxy /api/ output
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:01:39.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-9435" for this suite.
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":277,"completed":229,"skipped":3934,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl expose 
+  should create services for rc  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:01:39.947: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
+[It] should create services for rc  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating Agnhost RC
+Jan 10 18:01:39.982: INFO: namespace kubectl-1194
+Jan 10 18:01:39.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 create -f - --namespace=kubectl-1194'
+Jan 10 18:01:40.229: INFO: stderr: ""
+Jan 10 18:01:40.229: INFO: stdout: "replicationcontroller/agnhost-master created\n"
+STEP: Waiting for Agnhost master to start.
+Jan 10 18:01:41.232: INFO: Selector matched 1 pods for map[app:agnhost]
+Jan 10 18:01:41.232: INFO: Found 0 / 1
+Jan 10 18:01:42.232: INFO: Selector matched 1 pods for map[app:agnhost]
+Jan 10 18:01:42.232: INFO: Found 1 / 1
+Jan 10 18:01:42.232: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+Jan 10 18:01:42.234: INFO: Selector matched 1 pods for map[app:agnhost]
+Jan 10 18:01:42.234: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Jan 10 18:01:42.234: INFO: wait on agnhost-master startup in kubectl-1194 
+Jan 10 18:01:42.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 logs agnhost-master-ntnq5 agnhost-master --namespace=kubectl-1194'
+Jan 10 18:01:42.325: INFO: stderr: ""
+Jan 10 18:01:42.325: INFO: stdout: "Paused\n"
+STEP: exposing RC
+Jan 10 18:01:42.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1194'
+Jan 10 18:01:42.435: INFO: stderr: ""
+Jan 10 18:01:42.435: INFO: stdout: "service/rm2 exposed\n"
+Jan 10 18:01:42.439: INFO: Service rm2 in namespace kubectl-1194 found.
+STEP: exposing service
+Jan 10 18:01:44.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1194'
+Jan 10 18:01:44.534: INFO: stderr: ""
+Jan 10 18:01:44.534: INFO: stdout: "service/rm3 exposed\n"
+Jan 10 18:01:44.543: INFO: Service rm3 in namespace kubectl-1194 found.
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:01:46.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-1194" for this suite.
+
+• [SLOW TEST:6.607 seconds]
+[sig-cli] Kubectl client
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl expose
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
+    should create services for rc  [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":277,"completed":230,"skipped":3971,"failed":0}
+SSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:01:46.554: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test emptydir 0666 on node default medium
+Jan 10 18:01:46.585: INFO: Waiting up to 5m0s for pod "pod-e411b401-75a3-4c3e-8dc8-fb903331f73b" in namespace "emptydir-9669" to be "Succeeded or Failed"
+Jan 10 18:01:46.587: INFO: Pod "pod-e411b401-75a3-4c3e-8dc8-fb903331f73b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.700181ms
+Jan 10 18:01:48.589: INFO: Pod "pod-e411b401-75a3-4c3e-8dc8-fb903331f73b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004094609s
+STEP: Saw pod success
+Jan 10 18:01:48.589: INFO: Pod "pod-e411b401-75a3-4c3e-8dc8-fb903331f73b" satisfied condition "Succeeded or Failed"
+Jan 10 18:01:48.591: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-e411b401-75a3-4c3e-8dc8-fb903331f73b container test-container: <nil>
+STEP: delete the pod
+Jan 10 18:01:48.610: INFO: Waiting for pod pod-e411b401-75a3-4c3e-8dc8-fb903331f73b to disappear
+Jan 10 18:01:48.611: INFO: Pod pod-e411b401-75a3-4c3e-8dc8-fb903331f73b no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:01:48.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-9669" for this suite.
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":231,"skipped":3977,"failed":0}
+S
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and capture the life of a configMap. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:01:48.619: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+STEP: Creating a ConfigMap
+STEP: Ensuring resource quota status captures configMap creation
+STEP: Deleting a ConfigMap
+STEP: Ensuring resource quota status released usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:02:04.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-4015" for this suite.
+
+• [SLOW TEST:16.059 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and capture the life of a configMap. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":277,"completed":232,"skipped":3978,"failed":0}
+SSSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:02:04.678: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:02:11.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-6780" for this suite.
+
+• [SLOW TEST:7.041 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":277,"completed":233,"skipped":3984,"failed":0}
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Secrets 
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:02:11.720: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating secret secrets-3754/secret-test-8914d224-bfd1-4875-ad4d-394fff772a01
+STEP: Creating a pod to test consume secrets
+Jan 10 18:02:11.751: INFO: Waiting up to 5m0s for pod "pod-configmaps-33a224ed-2c05-4c31-8530-9ad62d2a4c5c" in namespace "secrets-3754" to be "Succeeded or Failed"
+Jan 10 18:02:11.753: INFO: Pod "pod-configmaps-33a224ed-2c05-4c31-8530-9ad62d2a4c5c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.883078ms
+Jan 10 18:02:13.755: INFO: Pod "pod-configmaps-33a224ed-2c05-4c31-8530-9ad62d2a4c5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004070277s
+STEP: Saw pod success
+Jan 10 18:02:13.755: INFO: Pod "pod-configmaps-33a224ed-2c05-4c31-8530-9ad62d2a4c5c" satisfied condition "Succeeded or Failed"
+Jan 10 18:02:13.757: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-configmaps-33a224ed-2c05-4c31-8530-9ad62d2a4c5c container env-test: <nil>
+STEP: delete the pod
+Jan 10 18:02:13.774: INFO: Waiting for pod pod-configmaps-33a224ed-2c05-4c31-8530-9ad62d2a4c5c to disappear
+Jan 10 18:02:13.776: INFO: Pod pod-configmaps-33a224ed-2c05-4c31-8530-9ad62d2a4c5c no longer exists
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:02:13.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-3754" for this suite.
+•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":277,"completed":234,"skipped":3999,"failed":0}
+SSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:02:13.783: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating a watch on configmaps with a certain label
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: changing the label value of the configmap
+STEP: Expecting to observe a delete notification for the watched object
+Jan 10 18:02:13.814: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-956 /api/v1/namespaces/watch-956/configmaps/e2e-watch-test-label-changed 8d04efb8-d789-497e-8fac-b41eea87a412 26002 0 2021-01-10 18:02:13 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-01-10 18:02:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 18:02:13.815: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-956 /api/v1/namespaces/watch-956/configmaps/e2e-watch-test-label-changed 8d04efb8-d789-497e-8fac-b41eea87a412 26003 0 2021-01-10 18:02:13 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-01-10 18:02:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 18:02:13.815: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-956 /api/v1/namespaces/watch-956/configmaps/e2e-watch-test-label-changed 8d04efb8-d789-497e-8fac-b41eea87a412 26004 0 2021-01-10 18:02:13 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-01-10 18:02:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
+STEP: modifying the configmap a second time
+STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
+STEP: changing the label value of the configmap back
+STEP: modifying the configmap a third time
+STEP: deleting the configmap
+STEP: Expecting to observe an add notification for the watched object when the label value was restored
+Jan 10 18:02:23.833: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-956 /api/v1/namespaces/watch-956/configmaps/e2e-watch-test-label-changed 8d04efb8-d789-497e-8fac-b41eea87a412 26052 0 2021-01-10 18:02:13 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-01-10 18:02:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 18:02:23.834: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-956 /api/v1/namespaces/watch-956/configmaps/e2e-watch-test-label-changed 8d04efb8-d789-497e-8fac-b41eea87a412 26053 0 2021-01-10 18:02:13 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-01-10 18:02:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
+Jan 10 18:02:23.834: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-956 /api/v1/namespaces/watch-956/configmaps/e2e-watch-test-label-changed 8d04efb8-d789-497e-8fac-b41eea87a412 26054 0 2021-01-10 18:02:13 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-01-10 18:02:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:02:23.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-956" for this suite.
+
+• [SLOW TEST:10.059 seconds]
+[sig-api-machinery] Watchers
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":277,"completed":235,"skipped":4011,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should honor timeout [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:02:23.843: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 18:02:24.339: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 18:02:27.354: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should honor timeout [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Setting timeout (1s) shorter than webhook latency (5s)
+STEP: Registering slow webhook via the AdmissionRegistration API
+STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
+STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
+STEP: Registering slow webhook via the AdmissionRegistration API
+STEP: Having no error when timeout is longer than webhook latency
+STEP: Registering slow webhook via the AdmissionRegistration API
+STEP: Having no error when timeout is empty (defaulted to 10s in v1)
+STEP: Registering slow webhook via the AdmissionRegistration API
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:02:39.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-4529" for this suite.
+STEP: Destroying namespace "webhook-4529-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+
+• [SLOW TEST:15.639 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should honor timeout [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":277,"completed":236,"skipped":4040,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:02:39.482: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
+STEP: Creating service test in namespace statefulset-4994
+[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Initializing watcher for selector baz=blah,foo=bar
+STEP: Creating stateful set ss in namespace statefulset-4994
+STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4994
+Jan 10 18:02:39.523: INFO: Found 0 stateful pods, waiting for 1
+Jan 10 18:02:49.526: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
+Jan 10 18:02:49.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-4994 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jan 10 18:02:49.718: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jan 10 18:02:49.718: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jan 10 18:02:49.718: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jan 10 18:02:49.721: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
+Jan 10 18:02:59.724: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Jan 10 18:02:59.724: INFO: Waiting for statefulset status.replicas updated to 0
+Jan 10 18:02:59.733: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999979s
+Jan 10 18:03:00.736: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.998001388s
+Jan 10 18:03:01.738: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.99528648s
+Jan 10 18:03:02.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.992518608s
+Jan 10 18:03:03.744: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.989760124s
+Jan 10 18:03:04.746: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.98715825s
+Jan 10 18:03:05.748: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.984818012s
+Jan 10 18:03:06.751: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.982505135s
+Jan 10 18:03:07.754: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.979750037s
+Jan 10 18:03:08.756: INFO: Verifying statefulset ss doesn't scale past 1 for another 977.303964ms
+STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4994
+Jan 10 18:03:09.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-4994 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jan 10 18:03:09.941: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jan 10 18:03:09.941: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jan 10 18:03:09.941: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jan 10 18:03:09.944: INFO: Found 1 stateful pods, waiting for 3
+Jan 10 18:03:19.947: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 18:03:19.947: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 18:03:19.947: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Verifying that stateful set ss was scaled up in order
+STEP: Scale down will halt with unhealthy stateful pod
+Jan 10 18:03:19.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-4994 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jan 10 18:03:20.133: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jan 10 18:03:20.133: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jan 10 18:03:20.133: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jan 10 18:03:20.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-4994 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jan 10 18:03:20.316: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jan 10 18:03:20.316: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jan 10 18:03:20.316: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jan 10 18:03:20.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-4994 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jan 10 18:03:20.520: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jan 10 18:03:20.520: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jan 10 18:03:20.520: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jan 10 18:03:20.520: INFO: Waiting for statefulset status.replicas updated to 0
+Jan 10 18:03:20.522: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
+Jan 10 18:03:30.527: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Jan 10 18:03:30.527: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
+Jan 10 18:03:30.527: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
+Jan 10 18:03:30.534: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999979s
+Jan 10 18:03:31.538: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997687258s
+Jan 10 18:03:32.540: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.994662396s
+Jan 10 18:03:33.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.991957332s
+Jan 10 18:03:34.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.98922753s
+Jan 10 18:03:35.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.986736353s
+Jan 10 18:03:36.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.983856561s
+Jan 10 18:03:37.554: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.981310893s
+Jan 10 18:03:38.557: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.978307961s
+Jan 10 18:03:39.559: INFO: Verifying statefulset ss doesn't scale past 3 for another 975.572185ms
+STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4994
+Jan 10 18:03:40.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-4994 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jan 10 18:03:40.757: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jan 10 18:03:40.757: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jan 10 18:03:40.757: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jan 10 18:03:40.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-4994 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jan 10 18:03:40.932: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jan 10 18:03:40.932: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jan 10 18:03:40.932: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jan 10 18:03:40.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-4994 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jan 10 18:03:41.123: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jan 10 18:03:41.123: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jan 10 18:03:41.123: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jan 10 18:03:41.123: INFO: Scaling statefulset ss to 0
+STEP: Verifying that stateful set ss was scaled down in reverse order
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
+Jan 10 18:04:11.134: INFO: Deleting all statefulset in ns statefulset-4994
+Jan 10 18:04:11.136: INFO: Scaling statefulset ss to 0
+Jan 10 18:04:11.143: INFO: Waiting for statefulset status.replicas updated to 0
+Jan 10 18:04:11.144: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:04:11.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-4994" for this suite.
+
+• [SLOW TEST:91.679 seconds]
+[sig-apps] StatefulSet
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":277,"completed":237,"skipped":4088,"failed":0}
+[k8s.io] [sig-node] PreStop 
+  should call prestop when killing a pod  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] [sig-node] PreStop
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:04:11.161: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename prestop
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] [sig-node] PreStop
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
+[It] should call prestop when killing a pod  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating server pod server in namespace prestop-3277
+STEP: Waiting for pods to come up.
+STEP: Creating tester pod tester in namespace prestop-3277
+STEP: Deleting pre-stop pod
+Jan 10 18:04:20.212: INFO: Saw: {
+	"Hostname": "server",
+	"Sent": null,
+	"Received": {
+		"prestop": 1
+	},
+	"Errors": null,
+	"Log": [
+		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
+		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
+	],
+	"StillContactingPeers": true
+}
+STEP: Deleting the server pod
+[AfterEach] [k8s.io] [sig-node] PreStop
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:04:20.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "prestop-3277" for this suite.
+
+• [SLOW TEST:9.064 seconds]
+[k8s.io] [sig-node] PreStop
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  should call prestop when killing a pod  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":277,"completed":238,"skipped":4088,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
+  creating/deleting custom resource definition objects works  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:04:20.231: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename custom-resource-definition
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] creating/deleting custom resource definition objects works  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 18:04:20.251: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:04:26.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "custom-resource-definition-6390" for this suite.
+
+• [SLOW TEST:6.042 seconds]
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  Simple CustomResourceDefinition
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
+    creating/deleting custom resource definition objects works  [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":277,"completed":239,"skipped":4122,"failed":0}
+SSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:04:26.273: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test emptydir 0644 on tmpfs
+Jan 10 18:04:26.299: INFO: Waiting up to 5m0s for pod "pod-5297ec15-55bc-4c38-9829-d2545305a1e6" in namespace "emptydir-7850" to be "Succeeded or Failed"
+Jan 10 18:04:26.301: INFO: Pod "pod-5297ec15-55bc-4c38-9829-d2545305a1e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30794ms
+Jan 10 18:04:28.304: INFO: Pod "pod-5297ec15-55bc-4c38-9829-d2545305a1e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004925576s
+STEP: Saw pod success
+Jan 10 18:04:28.304: INFO: Pod "pod-5297ec15-55bc-4c38-9829-d2545305a1e6" satisfied condition "Succeeded or Failed"
+Jan 10 18:04:28.305: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-5297ec15-55bc-4c38-9829-d2545305a1e6 container test-container: <nil>
+STEP: delete the pod
+Jan 10 18:04:28.325: INFO: Waiting for pod pod-5297ec15-55bc-4c38-9829-d2545305a1e6 to disappear
+Jan 10 18:04:28.326: INFO: Pod pod-5297ec15-55bc-4c38-9829-d2545305a1e6 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:04:28.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-7850" for this suite.
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":240,"skipped":4129,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Secrets 
+  should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:04:28.334: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating secret with name secret-test-d22f0b18-deeb-4ca0-ab0f-adf469731a8a
+STEP: Creating a pod to test consume secrets
+Jan 10 18:04:28.361: INFO: Waiting up to 5m0s for pod "pod-secrets-75458429-00ac-48be-8be7-b2501ae346e3" in namespace "secrets-1348" to be "Succeeded or Failed"
+Jan 10 18:04:28.363: INFO: Pod "pod-secrets-75458429-00ac-48be-8be7-b2501ae346e3": Phase="Pending", Reason="", readiness=false. Elapsed: 1.652272ms
+Jan 10 18:04:30.366: INFO: Pod "pod-secrets-75458429-00ac-48be-8be7-b2501ae346e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004219139s
+STEP: Saw pod success
+Jan 10 18:04:30.366: INFO: Pod "pod-secrets-75458429-00ac-48be-8be7-b2501ae346e3" satisfied condition "Succeeded or Failed"
+Jan 10 18:04:30.367: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-secrets-75458429-00ac-48be-8be7-b2501ae346e3 container secret-env-test: <nil>
+STEP: delete the pod
+Jan 10 18:04:30.383: INFO: Waiting for pod pod-secrets-75458429-00ac-48be-8be7-b2501ae346e3 to disappear
+Jan 10 18:04:30.384: INFO: Pod pod-secrets-75458429-00ac-48be-8be7-b2501ae346e3 no longer exists
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:04:30.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-1348" for this suite.
+•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":277,"completed":241,"skipped":4160,"failed":0}
+SSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:04:30.392: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap with name configmap-test-volume-fc05c024-3387-4804-b9b3-7327127a5d2e
+STEP: Creating a pod to test consume configMaps
+Jan 10 18:04:30.420: INFO: Waiting up to 5m0s for pod "pod-configmaps-f272986e-9e4d-4300-8317-53d40826e089" in namespace "configmap-9698" to be "Succeeded or Failed"
+Jan 10 18:04:30.422: INFO: Pod "pod-configmaps-f272986e-9e4d-4300-8317-53d40826e089": Phase="Pending", Reason="", readiness=false. Elapsed: 1.580616ms
+Jan 10 18:04:32.425: INFO: Pod "pod-configmaps-f272986e-9e4d-4300-8317-53d40826e089": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004510583s
+STEP: Saw pod success
+Jan 10 18:04:32.425: INFO: Pod "pod-configmaps-f272986e-9e4d-4300-8317-53d40826e089" satisfied condition "Succeeded or Failed"
+Jan 10 18:04:32.427: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-configmaps-f272986e-9e4d-4300-8317-53d40826e089 container configmap-volume-test: <nil>
+STEP: delete the pod
+Jan 10 18:04:32.441: INFO: Waiting for pod pod-configmaps-f272986e-9e4d-4300-8317-53d40826e089 to disappear
+Jan 10 18:04:32.443: INFO: Pod pod-configmaps-f272986e-9e4d-4300-8317-53d40826e089 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:04:32.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-9698" for this suite.
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":277,"completed":242,"skipped":4169,"failed":0}
+SSSSSSSS
+------------------------------
+[sig-network] Services 
+  should be able to change the type from ExternalName to NodePort [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:04:32.451: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
+[It] should be able to change the type from ExternalName to NodePort [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating a service externalname-service with the type=ExternalName in namespace services-9358
+STEP: changing the ExternalName service to type=NodePort
+STEP: creating replication controller externalname-service in namespace services-9358
+I0110 18:04:32.501479      24 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9358, replica count: 2
+Jan 10 18:04:35.551: INFO: Creating new exec pod
+I0110 18:04:35.551883      24 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+Jan 10 18:04:38.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-9358 execpodlcpvk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
+Jan 10 18:04:38.746: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
+Jan 10 18:04:38.746: INFO: stdout: ""
+Jan 10 18:04:38.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-9358 execpodlcpvk -- /bin/sh -x -c nc -zv -t -w 2 100.68.212.26 80'
+Jan 10 18:04:38.915: INFO: stderr: "+ nc -zv -t -w 2 100.68.212.26 80\nConnection to 100.68.212.26 80 port [tcp/http] succeeded!\n"
+Jan 10 18:04:38.915: INFO: stdout: ""
+Jan 10 18:04:38.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-9358 execpodlcpvk -- /bin/sh -x -c nc -zv -t -w 2 172.20.33.172 30280'
+Jan 10 18:04:39.083: INFO: stderr: "+ nc -zv -t -w 2 172.20.33.172 30280\nConnection to 172.20.33.172 30280 port [tcp/30280] succeeded!\n"
+Jan 10 18:04:39.083: INFO: stdout: ""
+Jan 10 18:04:39.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=services-9358 execpodlcpvk -- /bin/sh -x -c nc -zv -t -w 2 172.20.52.46 30280'
+Jan 10 18:04:39.248: INFO: stderr: "+ nc -zv -t -w 2 172.20.52.46 30280\nConnection to 172.20.52.46 30280 port [tcp/30280] succeeded!\n"
+Jan 10 18:04:39.248: INFO: stdout: ""
+Jan 10 18:04:39.248: INFO: Cleaning up the ExternalName to NodePort test service
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:04:39.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-9358" for this suite.
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
+
+• [SLOW TEST:6.822 seconds]
+[sig-network] Services
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should be able to change the type from ExternalName to NodePort [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":277,"completed":243,"skipped":4177,"failed":0}
+SSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:04:39.274: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
+[It] should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating the pod
+Jan 10 18:04:41.830: INFO: Successfully updated pod "annotationupdate70465baf-bffa-464e-abdf-87f3474d51a5"
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:04:45.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-2903" for this suite.
+
+• [SLOW TEST:6.587 seconds]
+[sig-storage] Downward API volume
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":277,"completed":244,"skipped":4187,"failed":0}
+[k8s.io] Kubelet when scheduling a read only busybox container 
+  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:04:45.860: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
+[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[AfterEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:04:47.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-4652" for this suite.
+•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":245,"skipped":4187,"failed":0}
+SS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy through a service and a pod  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] version v1
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:04:47.908: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy through a service and a pod  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: starting an echo server on multiple ports
+STEP: creating replication controller proxy-service-zpzrq in namespace proxy-4628
+I0110 18:04:47.943962      24 runners.go:190] Created replication controller with name: proxy-service-zpzrq, namespace: proxy-4628, replica count: 1
+I0110 18:04:48.994341      24 runners.go:190] proxy-service-zpzrq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0110 18:04:49.994532      24 runners.go:190] proxy-service-zpzrq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0110 18:04:50.994730      24 runners.go:190] proxy-service-zpzrq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0110 18:04:51.994892      24 runners.go:190] proxy-service-zpzrq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+Jan 10 18:04:51.997: INFO: setup took 4.068162308s, starting test cases
+STEP: running 16 cases, 20 attempts per case, 320 total attempts
+Jan 10 18:04:52.002: INFO: (0) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 5.037417ms)
+Jan 10 18:04:52.002: INFO: (0) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 4.696943ms)
+Jan 10 18:04:52.002: INFO: (0) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:1080/proxy/: ... (200; 5.589883ms)
+Jan 10 18:04:52.002: INFO: (0) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 5.341324ms)
+Jan 10 18:04:52.002: INFO: (0) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 4.822487ms)
+Jan 10 18:04:52.006: INFO: (0) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 8.126192ms)
+Jan 10 18:04:52.006: INFO: (0) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 8.454547ms)
+Jan 10 18:04:52.006: INFO: (0) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 9.305529ms)
+Jan 10 18:04:52.006: INFO: (0) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 8.208388ms)
+Jan 10 18:04:52.006: INFO: (0) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 8.587661ms)
+Jan 10 18:04:52.010: INFO: (0) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: test<... (200; 4.117599ms)
+Jan 10 18:04:52.015: INFO: (1) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 4.136228ms)
+Jan 10 18:04:52.015: INFO: (1) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 4.29521ms)
+Jan 10 18:04:52.015: INFO: (1) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 4.235114ms)
+Jan 10 18:04:52.016: INFO: (1) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: ... (200; 4.969551ms)
+Jan 10 18:04:52.016: INFO: (1) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 4.603747ms)
+Jan 10 18:04:52.017: INFO: (1) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 5.326495ms)
+Jan 10 18:04:52.017: INFO: (1) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 5.68132ms)
+Jan 10 18:04:52.017: INFO: (1) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 5.429181ms)
+Jan 10 18:04:52.018: INFO: (1) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 6.328322ms)
+Jan 10 18:04:52.018: INFO: (1) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 6.454726ms)
+Jan 10 18:04:52.018: INFO: (1) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 7.000141ms)
+Jan 10 18:04:52.018: INFO: (1) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 7.106037ms)
+Jan 10 18:04:52.021: INFO: (2) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 2.644123ms)
+Jan 10 18:04:52.021: INFO: (2) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 2.705941ms)
+Jan 10 18:04:52.022: INFO: (2) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 2.965129ms)
+Jan 10 18:04:52.023: INFO: (2) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 4.278952ms)
+Jan 10 18:04:52.023: INFO: (2) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 4.399536ms)
+Jan 10 18:04:52.023: INFO: (2) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:1080/proxy/: ... (200; 4.425275ms)
+Jan 10 18:04:52.023: INFO: (2) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 4.623126ms)
+Jan 10 18:04:52.023: INFO: (2) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 4.667144ms)
+Jan 10 18:04:52.023: INFO: (2) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 4.662865ms)
+Jan 10 18:04:52.024: INFO: (2) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: ... (200; 4.009973ms)
+Jan 10 18:04:52.030: INFO: (3) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 4.32879ms)
+Jan 10 18:04:52.030: INFO: (3) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 4.273292ms)
+Jan 10 18:04:52.030: INFO: (3) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 4.588538ms)
+Jan 10 18:04:52.030: INFO: (3) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 4.54109ms)
+Jan 10 18:04:52.034: INFO: (3) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 8.706507ms)
+Jan 10 18:04:52.034: INFO: (3) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 8.427058ms)
+Jan 10 18:04:52.035: INFO: (3) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 8.809131ms)
+Jan 10 18:04:52.035: INFO: (3) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 9.395836ms)
+Jan 10 18:04:52.035: INFO: (3) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: ... (200; 6.948705ms)
+Jan 10 18:04:52.044: INFO: (4) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 7.999938ms)
+Jan 10 18:04:52.044: INFO: (4) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: test (200; 7.865234ms)
+Jan 10 18:04:52.044: INFO: (4) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 6.845929ms)
+Jan 10 18:04:52.044: INFO: (4) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 7.857234ms)
+Jan 10 18:04:52.044: INFO: (4) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 8.138652ms)
+Jan 10 18:04:52.044: INFO: (4) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 8.076214ms)
+Jan 10 18:04:52.045: INFO: (4) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 8.712246ms)
+Jan 10 18:04:52.045: INFO: (4) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 8.358642ms)
+Jan 10 18:04:52.045: INFO: (4) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 8.692977ms)
+Jan 10 18:04:52.046: INFO: (4) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 8.678258ms)
+Jan 10 18:04:52.046: INFO: (4) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 9.890195ms)
+Jan 10 18:04:52.046: INFO: (4) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 9.99867ms)
+Jan 10 18:04:52.050: INFO: (5) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 3.100413ms)
+Jan 10 18:04:52.050: INFO: (5) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 3.581282ms)
+Jan 10 18:04:52.050: INFO: (5) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:1080/proxy/: ... (200; 3.39415ms)
+Jan 10 18:04:52.050: INFO: (5) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 3.699147ms)
+Jan 10 18:04:52.051: INFO: (5) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 4.120129ms)
+Jan 10 18:04:52.051: INFO: (5) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 4.385257ms)
+Jan 10 18:04:52.052: INFO: (5) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 4.720472ms)
+Jan 10 18:04:52.052: INFO: (5) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: test<... (200; 4.76178ms)
+Jan 10 18:04:52.052: INFO: (5) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 5.307826ms)
+Jan 10 18:04:52.053: INFO: (5) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 6.143639ms)
+Jan 10 18:04:52.053: INFO: (5) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 6.021324ms)
+Jan 10 18:04:52.053: INFO: (5) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 6.185037ms)
+Jan 10 18:04:52.053: INFO: (5) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 6.167169ms)
+Jan 10 18:04:52.053: INFO: (5) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 6.416797ms)
+Jan 10 18:04:52.053: INFO: (5) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 6.34849ms)
+Jan 10 18:04:52.057: INFO: (6) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 3.569413ms)
+Jan 10 18:04:52.057: INFO: (6) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: test (200; 3.918558ms)
+Jan 10 18:04:52.058: INFO: (6) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 4.708493ms)
+Jan 10 18:04:52.058: INFO: (6) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 4.946702ms)
+Jan 10 18:04:52.058: INFO: (6) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 4.722572ms)
+Jan 10 18:04:52.058: INFO: (6) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 5.175413ms)
+Jan 10 18:04:52.058: INFO: (6) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 5.21585ms)
+Jan 10 18:04:52.058: INFO: (6) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:1080/proxy/: ... (200; 4.967761ms)
+Jan 10 18:04:52.058: INFO: (6) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 4.99187ms)
+Jan 10 18:04:52.059: INFO: (6) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 5.547886ms)
+Jan 10 18:04:52.060: INFO: (6) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 6.383309ms)
+Jan 10 18:04:52.060: INFO: (6) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 6.293523ms)
+Jan 10 18:04:52.060: INFO: (6) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 6.243175ms)
+Jan 10 18:04:52.060: INFO: (6) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 6.506744ms)
+Jan 10 18:04:52.060: INFO: (6) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 6.491344ms)
+Jan 10 18:04:52.063: INFO: (7) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 3.162411ms)
+Jan 10 18:04:52.064: INFO: (7) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 3.495026ms)
+Jan 10 18:04:52.064: INFO: (7) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 3.649969ms)
+Jan 10 18:04:52.064: INFO: (7) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 4.093639ms)
+Jan 10 18:04:52.064: INFO: (7) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 4.114648ms)
+Jan 10 18:04:52.064: INFO: (7) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 4.444864ms)
+Jan 10 18:04:52.065: INFO: (7) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 4.635256ms)
+Jan 10 18:04:52.065: INFO: (7) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 5.463699ms)
+Jan 10 18:04:52.065: INFO: (7) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 5.398322ms)
+Jan 10 18:04:52.065: INFO: (7) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 5.288926ms)
+Jan 10 18:04:52.065: INFO: (7) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:1080/proxy/: ... (200; 5.127814ms)
+Jan 10 18:04:52.066: INFO: (7) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: ... (200; 3.279066ms)
+Jan 10 18:04:52.070: INFO: (8) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: test (200; 3.297535ms)
+Jan 10 18:04:52.071: INFO: (8) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 3.259456ms)
+Jan 10 18:04:52.071: INFO: (8) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 3.40313ms)
+Jan 10 18:04:52.071: INFO: (8) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 3.245537ms)
+Jan 10 18:04:52.071: INFO: (8) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 3.328213ms)
+Jan 10 18:04:52.072: INFO: (8) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 5.782246ms)
+Jan 10 18:04:52.073: INFO: (8) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 5.720698ms)
+Jan 10 18:04:52.073: INFO: (8) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 6.13004ms)
+Jan 10 18:04:52.073: INFO: (8) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 6.691175ms)
+Jan 10 18:04:52.073: INFO: (8) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 6.903815ms)
+Jan 10 18:04:52.073: INFO: (8) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 6.872608ms)
+Jan 10 18:04:52.077: INFO: (9) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 3.476387ms)
+Jan 10 18:04:52.077: INFO: (9) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 3.84597ms)
+Jan 10 18:04:52.077: INFO: (9) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 3.649349ms)
+Jan 10 18:04:52.079: INFO: (9) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 5.251719ms)
+Jan 10 18:04:52.079: INFO: (9) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: ... (200; 5.838983ms)
+Jan 10 18:04:52.080: INFO: (9) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 6.054463ms)
+Jan 10 18:04:52.080: INFO: (9) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 6.160739ms)
+Jan 10 18:04:52.080: INFO: (9) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 6.150119ms)
+Jan 10 18:04:52.080: INFO: (9) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 6.649667ms)
+Jan 10 18:04:52.080: INFO: (9) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 7.052559ms)
+Jan 10 18:04:52.081: INFO: (9) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 7.197003ms)
+Jan 10 18:04:52.081: INFO: (9) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 7.314208ms)
+Jan 10 18:04:52.082: INFO: (9) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 8.138762ms)
+Jan 10 18:04:52.087: INFO: (10) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 5.22038ms)
+Jan 10 18:04:52.087: INFO: (10) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 5.100995ms)
+Jan 10 18:04:52.087: INFO: (10) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 5.182781ms)
+Jan 10 18:04:52.088: INFO: (10) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 5.685749ms)
+Jan 10 18:04:52.089: INFO: (10) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:1080/proxy/: ... (200; 6.373039ms)
+Jan 10 18:04:52.089: INFO: (10) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 6.901326ms)
+Jan 10 18:04:52.089: INFO: (10) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 7.327057ms)
+Jan 10 18:04:52.089: INFO: (10) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 7.110966ms)
+Jan 10 18:04:52.089: INFO: (10) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 7.092277ms)
+Jan 10 18:04:52.090: INFO: (10) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: ... (200; 4.437315ms)
+Jan 10 18:04:52.095: INFO: (11) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 4.407796ms)
+Jan 10 18:04:52.095: INFO: (11) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 4.575469ms)
+Jan 10 18:04:52.096: INFO: (11) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 4.586808ms)
+Jan 10 18:04:52.096: INFO: (11) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 4.99907ms)
+Jan 10 18:04:52.096: INFO: (11) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 4.817338ms)
+Jan 10 18:04:52.096: INFO: (11) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 5.429991ms)
+Jan 10 18:04:52.096: INFO: (11) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 5.506767ms)
+Jan 10 18:04:52.096: INFO: (11) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 5.43198ms)
+Jan 10 18:04:52.096: INFO: (11) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 5.45486ms)
+Jan 10 18:04:52.097: INFO: (11) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 6.34993ms)
+Jan 10 18:04:52.097: INFO: (11) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 6.623528ms)
+Jan 10 18:04:52.098: INFO: (11) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 7.209352ms)
+Jan 10 18:04:52.098: INFO: (11) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 7.300548ms)
+Jan 10 18:04:52.098: INFO: (11) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 7.393674ms)
+Jan 10 18:04:52.103: INFO: (12) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 4.947822ms)
+Jan 10 18:04:52.103: INFO: (12) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 4.967801ms)
+Jan 10 18:04:52.103: INFO: (12) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 5.066296ms)
+Jan 10 18:04:52.104: INFO: (12) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 5.757816ms)
+Jan 10 18:04:52.107: INFO: (12) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 8.052185ms)
+Jan 10 18:04:52.107: INFO: (12) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 8.148821ms)
+Jan 10 18:04:52.107: INFO: (12) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:1080/proxy/: ... (200; 8.348702ms)
+Jan 10 18:04:52.107: INFO: (12) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 8.288465ms)
+Jan 10 18:04:52.107: INFO: (12) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: test (200; 8.461837ms)
+Jan 10 18:04:52.117: INFO: (13) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 8.558413ms)
+Jan 10 18:04:52.117: INFO: (13) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 8.596341ms)
+Jan 10 18:04:52.117: INFO: (13) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 8.757104ms)
+Jan 10 18:04:52.117: INFO: (13) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 7.968878ms)
+Jan 10 18:04:52.117: INFO: (13) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 9.034781ms)
+Jan 10 18:04:52.118: INFO: (13) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: ... (200; 8.340992ms)
+Jan 10 18:04:52.118: INFO: (13) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 8.517525ms)
+Jan 10 18:04:52.118: INFO: (13) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 9.74595ms)
+Jan 10 18:04:52.118: INFO: (13) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 8.293454ms)
+Jan 10 18:04:52.123: INFO: (13) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 13.563753ms)
+Jan 10 18:04:52.123: INFO: (13) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 14.342648ms)
+Jan 10 18:04:52.123: INFO: (13) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 13.875478ms)
+Jan 10 18:04:52.123: INFO: (13) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 14.474882ms)
+Jan 10 18:04:52.123: INFO: (13) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 14.371557ms)
+Jan 10 18:04:52.128: INFO: (14) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 4.947921ms)
+Jan 10 18:04:52.128: INFO: (14) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 4.799218ms)
+Jan 10 18:04:52.135: INFO: (14) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 11.301062ms)
+Jan 10 18:04:52.135: INFO: (14) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:1080/proxy/: ... (200; 11.921824ms)
+Jan 10 18:04:52.135: INFO: (14) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 11.57621ms)
+Jan 10 18:04:52.135: INFO: (14) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: test<... (200; 11.860917ms)
+Jan 10 18:04:52.136: INFO: (14) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 12.003691ms)
+Jan 10 18:04:52.136: INFO: (14) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 12.49752ms)
+Jan 10 18:04:52.136: INFO: (14) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 12.519828ms)
+Jan 10 18:04:52.137: INFO: (14) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 12.905842ms)
+Jan 10 18:04:52.137: INFO: (14) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 13.224458ms)
+Jan 10 18:04:52.137: INFO: (14) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 13.186619ms)
+Jan 10 18:04:52.137: INFO: (14) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 13.029226ms)
+Jan 10 18:04:52.137: INFO: (14) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 13.373581ms)
+Jan 10 18:04:52.137: INFO: (14) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 13.324433ms)
+Jan 10 18:04:52.141: INFO: (15) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 4.226494ms)
+Jan 10 18:04:52.143: INFO: (15) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 5.475278ms)
+Jan 10 18:04:52.143: INFO: (15) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 5.245999ms)
+Jan 10 18:04:52.143: INFO: (15) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 5.392243ms)
+Jan 10 18:04:52.143: INFO: (15) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 5.513027ms)
+Jan 10 18:04:52.143: INFO: (15) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 5.290607ms)
+Jan 10 18:04:52.143: INFO: (15) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: ... (200; 5.631182ms)
+Jan 10 18:04:52.145: INFO: (15) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 7.395085ms)
+Jan 10 18:04:52.145: INFO: (15) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 7.033251ms)
+Jan 10 18:04:52.145: INFO: (15) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 6.933895ms)
+Jan 10 18:04:52.146: INFO: (15) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 8.1738ms)
+Jan 10 18:04:52.147: INFO: (15) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 8.726126ms)
+Jan 10 18:04:52.147: INFO: (15) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 8.361832ms)
+Jan 10 18:04:52.147: INFO: (15) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 8.699117ms)
+Jan 10 18:04:52.147: INFO: (15) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 8.777674ms)
+Jan 10 18:04:52.153: INFO: (16) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 5.851082ms)
+Jan 10 18:04:52.153: INFO: (16) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:1080/proxy/: ... (200; 5.959898ms)
+Jan 10 18:04:52.153: INFO: (16) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 6.068162ms)
+Jan 10 18:04:52.154: INFO: (16) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 6.603109ms)
+Jan 10 18:04:52.154: INFO: (16) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 6.778621ms)
+Jan 10 18:04:52.154: INFO: (16) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 6.736893ms)
+Jan 10 18:04:52.154: INFO: (16) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 6.767072ms)
+Jan 10 18:04:52.154: INFO: (16) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 6.912666ms)
+Jan 10 18:04:52.154: INFO: (16) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 6.834799ms)
+Jan 10 18:04:52.154: INFO: (16) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: ... (200; 5.610783ms)
+Jan 10 18:04:52.163: INFO: (17) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 6.381269ms)
+Jan 10 18:04:52.163: INFO: (17) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: test<... (200; 6.031305ms)
+Jan 10 18:04:52.163: INFO: (17) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 6.189258ms)
+Jan 10 18:04:52.164: INFO: (17) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 6.993362ms)
+Jan 10 18:04:52.165: INFO: (17) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 8.15505ms)
+Jan 10 18:04:52.165: INFO: (17) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 8.239547ms)
+Jan 10 18:04:52.165: INFO: (17) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 8.692117ms)
+Jan 10 18:04:52.165: INFO: (17) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 8.278185ms)
+Jan 10 18:04:52.171: INFO: (18) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 5.962688ms)
+Jan 10 18:04:52.171: INFO: (18) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 6.102891ms)
+Jan 10 18:04:52.171: INFO: (18) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 6.101481ms)
+Jan 10 18:04:52.171: INFO: (18) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 6.225696ms)
+Jan 10 18:04:52.172: INFO: (18) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:1080/proxy/: ... (200; 6.284363ms)
+Jan 10 18:04:52.172: INFO: (18) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 6.229136ms)
+Jan 10 18:04:52.172: INFO: (18) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: test<... (200; 6.379019ms)
+Jan 10 18:04:52.172: INFO: (18) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:160/proxy/: foo (200; 6.354441ms)
+Jan 10 18:04:52.172: INFO: (18) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname2/proxy/: bar (200; 6.556061ms)
+Jan 10 18:04:52.173: INFO: (18) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 7.765028ms)
+Jan 10 18:04:52.174: INFO: (18) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 8.446778ms)
+Jan 10 18:04:52.174: INFO: (18) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 8.372742ms)
+Jan 10 18:04:52.174: INFO: (18) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 8.392861ms)
+Jan 10 18:04:52.174: INFO: (18) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 8.484936ms)
+Jan 10 18:04:52.177: INFO: (19) /api/v1/namespaces/proxy-4628/pods/http:proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 3.310584ms)
+Jan 10 18:04:52.177: INFO: (19) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:162/proxy/: bar (200; 3.261466ms)
+Jan 10 18:04:52.178: INFO: (19) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q/proxy/: test (200; 3.509116ms)
+Jan 10 18:04:52.178: INFO: (19) /api/v1/namespaces/proxy-4628/pods/proxy-service-zpzrq-lrh6q:1080/proxy/: test<... (200; 3.475326ms)
+Jan 10 18:04:52.178: INFO: (19) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:462/proxy/: tls qux (200; 3.784563ms)
+Jan 10 18:04:52.178: INFO: (19) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:460/proxy/: tls baz (200; 3.653129ms)
+Jan 10 18:04:52.178: INFO: (19) /api/v1/namespaces/proxy-4628/pods/https:proxy-service-zpzrq-lrh6q:443/proxy/: ... (200; 5.163482ms)
+Jan 10 18:04:52.180: INFO: (19) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname2/proxy/: bar (200; 6.263204ms)
+Jan 10 18:04:52.181: INFO: (19) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname1/proxy/: tls baz (200; 6.58545ms)
+Jan 10 18:04:52.181: INFO: (19) /api/v1/namespaces/proxy-4628/services/http:proxy-service-zpzrq:portname1/proxy/: foo (200; 7.156905ms)
+Jan 10 18:04:52.181: INFO: (19) /api/v1/namespaces/proxy-4628/services/https:proxy-service-zpzrq:tlsportname2/proxy/: tls qux (200; 7.220692ms)
+Jan 10 18:04:52.181: INFO: (19) /api/v1/namespaces/proxy-4628/services/proxy-service-zpzrq:portname1/proxy/: foo (200; 7.347937ms)
+STEP: deleting ReplicationController proxy-service-zpzrq in namespace proxy-4628, will wait for the garbage collector to delete the pods
+Jan 10 18:04:52.238: INFO: Deleting ReplicationController proxy-service-zpzrq took: 4.656865ms
+Jan 10 18:04:52.638: INFO: Terminating ReplicationController proxy-service-zpzrq pods took: 400.214237ms
+[AfterEach] version v1
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:05:04.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "proxy-4628" for this suite.
+
+• [SLOW TEST:16.438 seconds]
+[sig-network] Proxy
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  version v1
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
+    should proxy through a service and a pod  [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":277,"completed":246,"skipped":4189,"failed":0}
+SSSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
+  should be possible to delete [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:05:04.346: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
+[BeforeEach] when scheduling a busybox command that always fails in a pod
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
+[It] should be possible to delete [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[AfterEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:05:04.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-2382" for this suite.
+•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":277,"completed":247,"skipped":4198,"failed":0}
+SSSS
+------------------------------
+[sig-apps] Job 
+  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Job
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:05:04.391: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename job
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a job
+STEP: Ensuring job reaches completions
+[AfterEach] [sig-apps] Job
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:05:10.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "job-8194" for this suite.
+
+• [SLOW TEST:6.031 seconds]
+[sig-apps] Job
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":277,"completed":248,"skipped":4202,"failed":0}
+SSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  should have a working scale subresource [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:05:10.422: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
+STEP: Creating service test in namespace statefulset-778
+[It] should have a working scale subresource [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating statefulset ss in namespace statefulset-778
+Jan 10 18:05:10.451: INFO: Found 0 stateful pods, waiting for 1
+Jan 10 18:05:20.454: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: getting scale subresource
+STEP: updating a scale subresource
+STEP: verifying the statefulset Spec.Replicas was modified
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
+Jan 10 18:05:20.466: INFO: Deleting all statefulset in ns statefulset-778
+Jan 10 18:05:20.467: INFO: Scaling statefulset ss to 0
+Jan 10 18:05:30.480: INFO: Waiting for statefulset status.replicas updated to 0
+Jan 10 18:05:30.482: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:05:30.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-778" for this suite.
+
+• [SLOW TEST:20.074 seconds]
+[sig-apps] StatefulSet
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+    should have a working scale subresource [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":277,"completed":249,"skipped":4207,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:05:30.497: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap with name projected-configmap-test-volume-map-a7d08920-f513-43b2-a3d0-eabb30b5eab6
+STEP: Creating a pod to test consume configMaps
+Jan 10 18:05:30.527: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e0ea199f-9cfc-4999-a385-b9ea0bd2562b" in namespace "projected-4905" to be "Succeeded or Failed"
+Jan 10 18:05:30.529: INFO: Pod "pod-projected-configmaps-e0ea199f-9cfc-4999-a385-b9ea0bd2562b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.710553ms
+Jan 10 18:05:32.531: INFO: Pod "pod-projected-configmaps-e0ea199f-9cfc-4999-a385-b9ea0bd2562b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004003086s
+STEP: Saw pod success
+Jan 10 18:05:32.531: INFO: Pod "pod-projected-configmaps-e0ea199f-9cfc-4999-a385-b9ea0bd2562b" satisfied condition "Succeeded or Failed"
+Jan 10 18:05:32.533: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-configmaps-e0ea199f-9cfc-4999-a385-b9ea0bd2562b container projected-configmap-volume-test: 
+STEP: delete the pod
+Jan 10 18:05:32.547: INFO: Waiting for pod pod-projected-configmaps-e0ea199f-9cfc-4999-a385-b9ea0bd2562b to disappear
+Jan 10 18:05:32.549: INFO: Pod pod-projected-configmaps-e0ea199f-9cfc-4999-a385-b9ea0bd2562b no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:05:32.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-4905" for this suite.
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":277,"completed":250,"skipped":4241,"failed":0}
+SSSSSSS
+------------------------------
+[sig-apps] Job 
+  should delete a job [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Job
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:05:32.556: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename job
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete a job [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a job
+STEP: Ensuring active pods == parallelism
+STEP: delete a job
+STEP: deleting Job.batch foo in namespace job-506, will wait for the garbage collector to delete the pods
+Jan 10 18:05:36.643: INFO: Deleting Job.batch foo took: 4.870742ms
+Jan 10 18:05:36.744: INFO: Terminating Job.batch foo pods took: 100.260074ms
+STEP: Ensuring job was deleted
+[AfterEach] [sig-apps] Job
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:06:14.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "job-506" for this suite.
+
+• [SLOW TEST:41.797 seconds]
+[sig-apps] Job
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should delete a job [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":277,"completed":251,"skipped":4248,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:06:14.353: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
+[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating pod test-webserver-7285d674-cc9b-4825-884d-27c914566ab8 in namespace container-probe-8914
+Jan 10 18:06:16.388: INFO: Started pod test-webserver-7285d674-cc9b-4825-884d-27c914566ab8 in namespace container-probe-8914
+STEP: checking the pod's current state and verifying that restartCount is present
+Jan 10 18:06:16.390: INFO: Initial restart count of pod test-webserver-7285d674-cc9b-4825-884d-27c914566ab8 is 0
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:10:16.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-8914" for this suite.
+
+• [SLOW TEST:242.354 seconds]
+[k8s.io] Probing container
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":277,"completed":252,"skipped":4277,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Downward API 
+  should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-node] Downward API
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:10:16.708: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward api env vars
+Jan 10 18:10:16.735: INFO: Waiting up to 5m0s for pod "downward-api-30f60ec5-74fd-4282-8775-031e7af33b59" in namespace "downward-api-8749" to be "Succeeded or Failed"
+Jan 10 18:10:16.737: INFO: Pod "downward-api-30f60ec5-74fd-4282-8775-031e7af33b59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.283649ms
+Jan 10 18:10:18.739: INFO: Pod "downward-api-30f60ec5-74fd-4282-8775-031e7af33b59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004541591s
+STEP: Saw pod success
+Jan 10 18:10:18.739: INFO: Pod "downward-api-30f60ec5-74fd-4282-8775-031e7af33b59" satisfied condition "Succeeded or Failed"
+Jan 10 18:10:18.741: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downward-api-30f60ec5-74fd-4282-8775-031e7af33b59 container dapi-container: 
+STEP: delete the pod
+Jan 10 18:10:18.761: INFO: Waiting for pod downward-api-30f60ec5-74fd-4282-8775-031e7af33b59 to disappear
+Jan 10 18:10:18.763: INFO: Pod downward-api-30f60ec5-74fd-4282-8775-031e7af33b59 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:10:18.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-8749" for this suite.
+•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":277,"completed":253,"skipped":4300,"failed":0}
+SSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Lease 
+  lease API should be available [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Lease
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:10:18.770: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename lease-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] lease API should be available [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[AfterEach] [k8s.io] Lease
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:10:18.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "lease-test-9573" for this suite.
+•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":277,"completed":254,"skipped":4315,"failed":0}
+SSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:10:18.829: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test emptydir 0777 on tmpfs
+Jan 10 18:10:18.856: INFO: Waiting up to 5m0s for pod "pod-e8529cbd-9fd0-4d45-9f74-a009481c0119" in namespace "emptydir-8851" to be "Succeeded or Failed"
+Jan 10 18:10:18.858: INFO: Pod "pod-e8529cbd-9fd0-4d45-9f74-a009481c0119": Phase="Pending", Reason="", readiness=false. Elapsed: 2.438308ms
+Jan 10 18:10:20.860: INFO: Pod "pod-e8529cbd-9fd0-4d45-9f74-a009481c0119": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00475028s
+STEP: Saw pod success
+Jan 10 18:10:20.861: INFO: Pod "pod-e8529cbd-9fd0-4d45-9f74-a009481c0119" satisfied condition "Succeeded or Failed"
+Jan 10 18:10:20.862: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-e8529cbd-9fd0-4d45-9f74-a009481c0119 container test-container: 
+STEP: delete the pod
+Jan 10 18:10:20.876: INFO: Waiting for pod pod-e8529cbd-9fd0-4d45-9f74-a009481c0119 to disappear
+Jan 10 18:10:20.878: INFO: Pod pod-e8529cbd-9fd0-4d45-9f74-a009481c0119 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:10:20.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-8851" for this suite.
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":255,"skipped":4324,"failed":0}
+
+------------------------------
+[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
+  should be able to convert from CR v1 to CR v2 [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:10:20.884: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename crd-webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
+STEP: Setting up server cert
+STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
+STEP: Deploying the custom resource conversion webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 18:10:21.511: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 18:10:24.526: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
+[It] should be able to convert from CR v1 to CR v2 [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 18:10:24.528: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Creating a v1 custom resource
+STEP: v2 custom resource should be converted
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:10:30.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-webhook-2155" for this suite.
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
+
+• [SLOW TEST:9.766 seconds]
+[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to convert from CR v1 to CR v2 [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":277,"completed":256,"skipped":4324,"failed":0}
+S
+------------------------------
+[k8s.io] Variable Expansion 
+  should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:10:30.650: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename var-expansion
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test substitution in container's args
+Jan 10 18:10:30.684: INFO: Waiting up to 5m0s for pod "var-expansion-8ca80b0b-6c61-4b83-86cd-5e0e786a4990" in namespace "var-expansion-6596" to be "Succeeded or Failed"
+Jan 10 18:10:30.688: INFO: Pod "var-expansion-8ca80b0b-6c61-4b83-86cd-5e0e786a4990": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438034ms
+Jan 10 18:10:32.691: INFO: Pod "var-expansion-8ca80b0b-6c61-4b83-86cd-5e0e786a4990": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007041307s
+STEP: Saw pod success
+Jan 10 18:10:32.691: INFO: Pod "var-expansion-8ca80b0b-6c61-4b83-86cd-5e0e786a4990" satisfied condition "Succeeded or Failed"
+Jan 10 18:10:32.693: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod var-expansion-8ca80b0b-6c61-4b83-86cd-5e0e786a4990 container dapi-container: 
+STEP: delete the pod
+Jan 10 18:10:32.707: INFO: Waiting for pod var-expansion-8ca80b0b-6c61-4b83-86cd-5e0e786a4990 to disappear
+Jan 10 18:10:32.709: INFO: Pod var-expansion-8ca80b0b-6c61-4b83-86cd-5e0e786a4990 no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:10:32.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "var-expansion-6596" for this suite.
+•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":277,"completed":257,"skipped":4325,"failed":0}
+SSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:10:32.716: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating configMap with name projected-configmap-test-volume-15d76ae8-a8d7-48e3-a681-cb148844ea49
+STEP: Creating a pod to test consume configMaps
+Jan 10 18:10:32.755: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-622923c0-40e6-469c-ae76-d612da3d6ff7" in namespace "projected-9823" to be "Succeeded or Failed"
+Jan 10 18:10:32.757: INFO: Pod "pod-projected-configmaps-622923c0-40e6-469c-ae76-d612da3d6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 1.730089ms
+Jan 10 18:10:34.759: INFO: Pod "pod-projected-configmaps-622923c0-40e6-469c-ae76-d612da3d6ff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004078678s
+STEP: Saw pod success
+Jan 10 18:10:34.759: INFO: Pod "pod-projected-configmaps-622923c0-40e6-469c-ae76-d612da3d6ff7" satisfied condition "Succeeded or Failed"
+Jan 10 18:10:34.761: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-projected-configmaps-622923c0-40e6-469c-ae76-d612da3d6ff7 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jan 10 18:10:34.775: INFO: Waiting for pod pod-projected-configmaps-622923c0-40e6-469c-ae76-d612da3d6ff7 to disappear
+Jan 10 18:10:34.777: INFO: Pod pod-projected-configmaps-622923c0-40e6-469c-ae76-d612da3d6ff7 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:10:34.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-9823" for this suite.
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":277,"completed":258,"skipped":4338,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:10:34.786: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
+[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 18:10:34.812: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73ecd0b9-7e74-48ee-983b-11bc20d3f4ea" in namespace "projected-4968" to be "Succeeded or Failed"
+Jan 10 18:10:34.813: INFO: Pod "downwardapi-volume-73ecd0b9-7e74-48ee-983b-11bc20d3f4ea": Phase="Pending", Reason="", readiness=false. Elapsed: 1.667419ms
+Jan 10 18:10:36.816: INFO: Pod "downwardapi-volume-73ecd0b9-7e74-48ee-983b-11bc20d3f4ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004426083s
+STEP: Saw pod success
+Jan 10 18:10:36.816: INFO: Pod "downwardapi-volume-73ecd0b9-7e74-48ee-983b-11bc20d3f4ea" satisfied condition "Succeeded or Failed"
+Jan 10 18:10:36.818: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-73ecd0b9-7e74-48ee-983b-11bc20d3f4ea container client-container: 
+STEP: delete the pod
+Jan 10 18:10:36.835: INFO: Waiting for pod downwardapi-volume-73ecd0b9-7e74-48ee-983b-11bc20d3f4ea to disappear
+Jan 10 18:10:36.837: INFO: Pod downwardapi-volume-73ecd0b9-7e74-48ee-983b-11bc20d3f4ea no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:10:36.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-4968" for this suite.
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":277,"completed":259,"skipped":4378,"failed":0}
+S
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:10:36.845: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
+[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating the pod
+Jan 10 18:10:36.865: INFO: PodSpec: initContainers in spec.initContainers
+Jan 10 18:11:18.232: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-588302ee-43ea-4c92-9f6e-82f3ab54c432", GenerateName:"", Namespace:"init-container-7961", SelfLink:"/api/v1/namespaces/init-container-7961/pods/pod-init-588302ee-43ea-4c92-9f6e-82f3ab54c432", UID:"1e874dc1-8881-4405-8891-c2d4d82ea62f", ResourceVersion:"28859", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745899036, loc:(*time.Location)(0x7b4a600)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"865553424"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"100.108.158.149/32", "cni.projectcalico.org/podIPs":"100.108.158.149/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003d90060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003d90080)}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003d900a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003d900c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003d900e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003d90100)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2n9qd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00423e000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2n9qd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2n9qd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2n9qd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003dd00a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-20-33-172.ap-south-1.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001b86000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003dd0120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003dd0140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003dd0148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003dd014c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745899036, loc:(*time.Location)(0x7b4a600)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745899036, loc:(*time.Location)(0x7b4a600)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745899036, loc:(*time.Location)(0x7b4a600)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745899036, loc:(*time.Location)(0x7b4a600)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.33.172", PodIP:"100.108.158.149", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.108.158.149"}}, StartTime:(*v1.Time)(0xc003d90120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001b860e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001b861c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://141f54275caa89645f7bcc15f4379fbf24a730000689be0de6477066987a6fa7", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003d90160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003d90140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc003dd01c4)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:11:18.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-7961" for this suite.
+
+• [SLOW TEST:41.395 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":277,"completed":260,"skipped":4379,"failed":0}
+[k8s.io] Variable Expansion 
+  should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:11:18.240: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename var-expansion
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test env composition
+Jan 10 18:11:18.266: INFO: Waiting up to 5m0s for pod "var-expansion-03245d9f-64ce-4236-b3cb-551dc040de01" in namespace "var-expansion-452" to be "Succeeded or Failed"
+Jan 10 18:11:18.267: INFO: Pod "var-expansion-03245d9f-64ce-4236-b3cb-551dc040de01": Phase="Pending", Reason="", readiness=false. Elapsed: 1.738834ms
+Jan 10 18:11:20.270: INFO: Pod "var-expansion-03245d9f-64ce-4236-b3cb-551dc040de01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0041595s
+STEP: Saw pod success
+Jan 10 18:11:20.270: INFO: Pod "var-expansion-03245d9f-64ce-4236-b3cb-551dc040de01" satisfied condition "Succeeded or Failed"
+Jan 10 18:11:20.272: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod var-expansion-03245d9f-64ce-4236-b3cb-551dc040de01 container dapi-container: 
+STEP: delete the pod
+Jan 10 18:11:20.288: INFO: Waiting for pod var-expansion-03245d9f-64ce-4236-b3cb-551dc040de01 to disappear
+Jan 10 18:11:20.289: INFO: Pod var-expansion-03245d9f-64ce-4236-b3cb-551dc040de01 no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:11:20.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "var-expansion-452" for this suite.
+•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":277,"completed":261,"skipped":4379,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should verify ResourceQuota with best effort scope. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:11:20.297: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should verify ResourceQuota with best effort scope. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a ResourceQuota with best effort scope
+STEP: Ensuring ResourceQuota status is calculated
+STEP: Creating a ResourceQuota with not best effort scope
+STEP: Ensuring ResourceQuota status is calculated
+STEP: Creating a best-effort pod
+STEP: Ensuring resource quota with best effort scope captures the pod usage
+STEP: Ensuring resource quota with not best effort ignored the pod usage
+STEP: Deleting the pod
+STEP: Ensuring resource quota status released the pod usage
+STEP: Creating a not best-effort pod
+STEP: Ensuring resource quota with not best effort scope captures the pod usage
+STEP: Ensuring resource quota with best effort scope ignored the pod usage
+STEP: Deleting the pod
+STEP: Ensuring resource quota status released the pod usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:11:36.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-7964" for this suite.
+
+• [SLOW TEST:16.092 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should verify ResourceQuota with best effort scope. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":277,"completed":262,"skipped":4434,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Docker Containers 
+  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:11:36.389: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test override command
+Jan 10 18:11:36.418: INFO: Waiting up to 5m0s for pod "client-containers-22e17aa1-4a00-426a-9ae6-099f30be3930" in namespace "containers-1441" to be "Succeeded or Failed"
+Jan 10 18:11:36.419: INFO: Pod "client-containers-22e17aa1-4a00-426a-9ae6-099f30be3930": Phase="Pending", Reason="", readiness=false. Elapsed: 1.655866ms
+Jan 10 18:11:38.422: INFO: Pod "client-containers-22e17aa1-4a00-426a-9ae6-099f30be3930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004404546s
+STEP: Saw pod success
+Jan 10 18:11:38.422: INFO: Pod "client-containers-22e17aa1-4a00-426a-9ae6-099f30be3930" satisfied condition "Succeeded or Failed"
+Jan 10 18:11:38.424: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod client-containers-22e17aa1-4a00-426a-9ae6-099f30be3930 container test-container: 
+STEP: delete the pod
+Jan 10 18:11:38.442: INFO: Waiting for pod client-containers-22e17aa1-4a00-426a-9ae6-099f30be3930 to disappear
+Jan 10 18:11:38.445: INFO: Pod client-containers-22e17aa1-4a00-426a-9ae6-099f30be3930 no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:11:38.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "containers-1441" for this suite.
+•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":277,"completed":263,"skipped":4456,"failed":0}
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Docker Containers 
+  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:11:38.452: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:11:40.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "containers-555" for this suite.
+•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":277,"completed":264,"skipped":4476,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Runtime blackbox test on terminated container 
+  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:11:40.495: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-runtime
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: create the container
+STEP: wait for the container to reach Succeeded
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Jan 10 18:11:41.530: INFO: Expected: &{OK} to match Container's Termination Message: OK --
+STEP: delete the container
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:11:41.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-8231" for this suite.
+•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":277,"completed":265,"skipped":4501,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:11:41.561: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating secret with name s-test-opt-del-970c3a0c-fda6-4a74-a050-fdac7abc1163
+STEP: Creating secret with name s-test-opt-upd-f595e39a-4e4d-492e-b5de-9160cedd45d8
+STEP: Creating the pod
+STEP: Deleting secret s-test-opt-del-970c3a0c-fda6-4a74-a050-fdac7abc1163
+STEP: Updating secret s-test-opt-upd-f595e39a-4e4d-492e-b5de-9160cedd45d8
+STEP: Creating secret with name s-test-opt-create-065fcf0f-f883-4e0b-a2fb-a408ede8db15
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Projected secret
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:11:45.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-2012" for this suite.
+•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":266,"skipped":4553,"failed":0}
+SSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:11:45.668: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+STEP: Creating a ReplicationController
+STEP: Ensuring resource quota status captures replication controller creation
+STEP: Deleting a ReplicationController
+STEP: Ensuring resource quota status released usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:11:56.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-6319" for this suite.
+
+• [SLOW TEST:11.065 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":277,"completed":267,"skipped":4566,"failed":0}
+SSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl version 
+  should check is all data is printed  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:11:56.733: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
+[It] should check is all data is printed  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 18:11:56.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 version'
+Jan 10 18:11:56.828: INFO: stderr: ""
+Jan 10 18:11:56.828: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.14\", GitCommit:\"89182bdd065fbcaffefec691908a739d161efc03\", GitTreeState:\"clean\", BuildDate:\"2020-12-18T12:11:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.14\", GitCommit:\"89182bdd065fbcaffefec691908a739d161efc03\", GitTreeState:\"clean\", BuildDate:\"2020-12-18T12:02:35Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:11:56.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-260" for this suite.
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":277,"completed":268,"skipped":4573,"failed":0}
+SSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:11:56.835: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
+[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating pod busybox-ebda51f6-3339-415b-9156-4fc84debc4a5 in namespace container-probe-329
+Jan 10 18:11:58.871: INFO: Started pod busybox-ebda51f6-3339-415b-9156-4fc84debc4a5 in namespace container-probe-329
+STEP: checking the pod's current state and verifying that restartCount is present
+Jan 10 18:11:58.873: INFO: Initial restart count of pod busybox-ebda51f6-3339-415b-9156-4fc84debc4a5 is 0
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:15:59.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-329" for this suite.
+
+• [SLOW TEST:242.358 seconds]
+[k8s.io] Probing container
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":277,"completed":269,"skipped":4580,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with downward pod [LinuxOnly] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Subpath
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:15:59.193: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating pod pod-subpath-test-downwardapi-rfzm
+STEP: Creating a pod to test atomic-volume-subpath
+Jan 10 18:15:59.228: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rfzm" in namespace "subpath-5386" to be "Succeeded or Failed"
+Jan 10 18:15:59.230: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169008ms
+Jan 10 18:16:01.233: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Running", Reason="", readiness=true. Elapsed: 2.004895356s
+Jan 10 18:16:03.235: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Running", Reason="", readiness=true. Elapsed: 4.007200056s
+Jan 10 18:16:05.237: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Running", Reason="", readiness=true. Elapsed: 6.00978277s
+Jan 10 18:16:07.240: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Running", Reason="", readiness=true. Elapsed: 8.012830434s
+Jan 10 18:16:09.243: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Running", Reason="", readiness=true. Elapsed: 10.015471703s
+Jan 10 18:16:11.246: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Running", Reason="", readiness=true. Elapsed: 12.017916882s
+Jan 10 18:16:13.248: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Running", Reason="", readiness=true. Elapsed: 14.020229182s
+Jan 10 18:16:15.251: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Running", Reason="", readiness=true. Elapsed: 16.0229821s
+Jan 10 18:16:17.253: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Running", Reason="", readiness=true. Elapsed: 18.025449312s
+Jan 10 18:16:19.256: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Running", Reason="", readiness=true. Elapsed: 20.028249956s
+Jan 10 18:16:21.258: INFO: Pod "pod-subpath-test-downwardapi-rfzm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.030696567s
+STEP: Saw pod success
+Jan 10 18:16:21.258: INFO: Pod "pod-subpath-test-downwardapi-rfzm" satisfied condition "Succeeded or Failed"
+Jan 10 18:16:21.260: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-subpath-test-downwardapi-rfzm container test-container-subpath-downwardapi-rfzm: 
+STEP: delete the pod
+Jan 10 18:16:21.281: INFO: Waiting for pod pod-subpath-test-downwardapi-rfzm to disappear
+Jan 10 18:16:21.283: INFO: Pod pod-subpath-test-downwardapi-rfzm no longer exists
+STEP: Deleting pod pod-subpath-test-downwardapi-rfzm
+Jan 10 18:16:21.283: INFO: Deleting pod "pod-subpath-test-downwardapi-rfzm" in namespace "subpath-5386"
+[AfterEach] [sig-storage] Subpath
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:16:21.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-5386" for this suite.
+
+• [SLOW TEST:22.098 seconds]
+[sig-storage] Subpath
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
+  Atomic writer volumes
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with downward pod [LinuxOnly] [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":277,"completed":270,"skipped":4606,"failed":0}
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command in a pod 
+  should print the output to logs [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:16:21.297: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
+[It] should print the output to logs [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[AfterEach] [k8s.io] Kubelet
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:16:23.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-315" for this suite.
+•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":277,"completed":271,"skipped":4624,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:16:23.345: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
+[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 18:16:23.377: INFO: Creating simple daemon set daemon-set
+STEP: Check that daemon pods launch on every node of the cluster.
+Jan 10 18:16:23.384: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:23.384: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:23.384: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:23.387: INFO: Number of nodes with available pods: 0
+Jan 10 18:16:23.387: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 18:16:24.391: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:24.391: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:24.391: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:24.393: INFO: Number of nodes with available pods: 0
+Jan 10 18:16:24.393: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 18:16:25.390: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:25.390: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:25.390: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:25.393: INFO: Number of nodes with available pods: 3
+Jan 10 18:16:25.393: INFO: Number of running nodes: 3, number of available pods: 3
+STEP: Update daemon pods image.
+STEP: Check that daemon pods images are updated.
+Jan 10 18:16:25.410: INFO: Wrong image for pod: daemon-set-2pdgb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:25.410: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:25.410: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:25.415: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:25.415: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:25.415: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:26.418: INFO: Wrong image for pod: daemon-set-2pdgb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:26.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:26.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:26.422: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:26.422: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:26.422: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:27.418: INFO: Wrong image for pod: daemon-set-2pdgb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:27.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:27.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:27.422: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:27.422: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:27.422: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:28.418: INFO: Wrong image for pod: daemon-set-2pdgb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:28.418: INFO: Pod daemon-set-2pdgb is not available
+Jan 10 18:16:28.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:28.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:28.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:28.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:28.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:29.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:29.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:29.418: INFO: Pod daemon-set-thfhm is not available
+Jan 10 18:16:29.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:29.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:29.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:30.419: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:30.419: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:30.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:30.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:30.422: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:31.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:31.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:31.418: INFO: Pod daemon-set-gblt7 is not available
+Jan 10 18:16:31.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:31.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:31.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:32.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:32.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:32.418: INFO: Pod daemon-set-gblt7 is not available
+Jan 10 18:16:32.422: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:32.422: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:32.422: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:33.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:33.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:33.418: INFO: Pod daemon-set-gblt7 is not available
+Jan 10 18:16:33.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:33.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:33.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:34.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:34.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:34.418: INFO: Pod daemon-set-gblt7 is not available
+Jan 10 18:16:34.422: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:34.422: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:34.422: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:35.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:35.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:35.418: INFO: Pod daemon-set-gblt7 is not available
+Jan 10 18:16:35.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:35.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:35.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:36.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:36.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:36.418: INFO: Pod daemon-set-gblt7 is not available
+Jan 10 18:16:36.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:36.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:36.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:37.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:37.418: INFO: Wrong image for pod: daemon-set-gblt7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:37.418: INFO: Pod daemon-set-gblt7 is not available
+Jan 10 18:16:37.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:37.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:37.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:38.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:38.418: INFO: Pod daemon-set-vp2zq is not available
+Jan 10 18:16:38.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:38.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:38.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:39.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:39.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:39.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:39.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:40.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:40.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:40.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:40.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:41.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:41.418: INFO: Pod daemon-set-4ffpd is not available
+Jan 10 18:16:41.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:41.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:41.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:42.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:42.418: INFO: Pod daemon-set-4ffpd is not available
+Jan 10 18:16:42.422: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:42.422: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:42.422: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:43.418: INFO: Wrong image for pod: daemon-set-4ffpd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
+Jan 10 18:16:43.418: INFO: Pod daemon-set-4ffpd is not available
+Jan 10 18:16:43.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:43.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:43.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:44.418: INFO: Pod daemon-set-zc7vp is not available
+Jan 10 18:16:44.421: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:44.421: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:44.421: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+STEP: Check that daemon pods are still running on every node of the cluster.
+Jan 10 18:16:44.424: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:44.424: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:44.424: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:44.427: INFO: Number of nodes with available pods: 2
+Jan 10 18:16:44.427: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 18:16:45.430: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:45.430: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:45.430: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:45.432: INFO: Number of nodes with available pods: 2
+Jan 10 18:16:45.432: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 18:16:46.430: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:46.430: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:46.430: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 18:16:46.433: INFO: Number of nodes with available pods: 3
+Jan 10 18:16:46.433: INFO: Number of running nodes: 3, number of available pods: 3
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6177, will wait for the garbage collector to delete the pods
+Jan 10 18:16:46.501: INFO: Deleting DaemonSet.extensions daemon-set took: 5.76755ms
+Jan 10 18:16:46.901: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.257489ms
+Jan 10 18:16:59.403: INFO: Number of nodes with available pods: 0
+Jan 10 18:16:59.403: INFO: Number of running nodes: 0, number of available pods: 0
+Jan 10 18:16:59.405: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6177/daemonsets","resourceVersion":"30410"},"items":null}
+
+Jan 10 18:16:59.407: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6177/pods","resourceVersion":"30410"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:16:59.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-6177" for this suite.
+
+• [SLOW TEST:36.078 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":277,"completed":272,"skipped":4661,"failed":0}
+SSSSSSSSSSS
+------------------------------
+[sig-network] Services 
+  should serve a basic endpoint from pods  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:16:59.424: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
+[It] should serve a basic endpoint from pods  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating service endpoint-test2 in namespace services-8739
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8739 to expose endpoints map[]
+Jan 10 18:16:59.456: INFO: Get endpoints failed (1.75544ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
+Jan 10 18:17:00.459: INFO: successfully validated that service endpoint-test2 in namespace services-8739 exposes endpoints map[] (1.004073046s elapsed)
+STEP: Creating pod pod1 in namespace services-8739
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8739 to expose endpoints map[pod1:[80]]
+Jan 10 18:17:02.478: INFO: successfully validated that service endpoint-test2 in namespace services-8739 exposes endpoints map[pod1:[80]] (2.013713468s elapsed)
+STEP: Creating pod pod2 in namespace services-8739
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8739 to expose endpoints map[pod1:[80] pod2:[80]]
+Jan 10 18:17:04.501: INFO: successfully validated that service endpoint-test2 in namespace services-8739 exposes endpoints map[pod1:[80] pod2:[80]] (2.01904382s elapsed)
+STEP: Deleting pod pod1 in namespace services-8739
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8739 to expose endpoints map[pod2:[80]]
+Jan 10 18:17:05.514: INFO: successfully validated that service endpoint-test2 in namespace services-8739 exposes endpoints map[pod2:[80]] (1.00783806s elapsed)
+STEP: Deleting pod pod2 in namespace services-8739
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8739 to expose endpoints map[]
+Jan 10 18:17:06.523: INFO: successfully validated that service endpoint-test2 in namespace services-8739 exposes endpoints map[] (1.004150582s elapsed)
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:17:06.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-8739" for this suite.
+[AfterEach] [sig-network] Services
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
+
+• [SLOW TEST:7.124 seconds]
+[sig-network] Services
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should serve a basic endpoint from pods  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":277,"completed":273,"skipped":4672,"failed":0}
+SSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl replace 
+  should update a single-container pod's image  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:17:06.548: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
+[BeforeEach] Kubectl replace
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
+[It] should update a single-container pod's image  [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: running the image docker.io/library/httpd:2.4.38-alpine
+Jan 10 18:17:06.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7377'
+Jan 10 18:17:06.814: INFO: stderr: ""
+Jan 10 18:17:06.815: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
+STEP: verifying the pod e2e-test-httpd-pod is running
+STEP: verifying the pod e2e-test-httpd-pod was created
+Jan 10 18:17:11.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 get pod e2e-test-httpd-pod --namespace=kubectl-7377 -o json'
+Jan 10 18:17:11.936: INFO: stderr: ""
+Jan 10 18:17:11.936: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"annotations\": {\n            \"cni.projectcalico.org/podIP\": \"100.108.158.167/32\",\n            \"cni.projectcalico.org/podIPs\": \"100.108.158.167/32\"\n        },\n        \"creationTimestamp\": \"2021-01-10T18:17:06Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2021-01-10T18:17:06Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:annotations\": {\n                            \".\": {},\n                            \"f:cni.projectcalico.org/podIP\": {},\n                            \"f:cni.projectcalico.org/podIPs\": {}\n                        }\n                    }\n                },\n                \"manager\": \"calico\",\n                \"operation\": \"Update\",\n                \"time\": \"2021-01-10T18:17:07Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                            \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"100.108.158.167\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                            }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2021-01-10T18:17:07Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-7377\",\n        \"resourceVersion\": \"30512\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-7377/pods/e2e-test-httpd-pod\",\n        \"uid\": \"7b724388-8195-4944-ad6a-67de9bc931e0\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-jfpcq\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"ip-172-20-33-172.ap-south-1.compute.internal\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-jfpcq\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-jfpcq\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-01-10T18:17:06Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-01-10T18:17:07Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-01-10T18:17:07Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-01-10T18:17:06Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://c54ce3b915762ecf097088896e6be6026c919ac108435473bbd6c4e18a20830c\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2021-01-10T18:17:07Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.20.33.172\",\n        \"phase\": \"Running\",\n        \"podIP\": \"100.108.158.167\",\n        \"podIPs\": [\n            {\n                \"ip\": \"100.108.158.167\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2021-01-10T18:17:06Z\"\n    }\n}\n"
+STEP: replace the image in the pod
+Jan 10 18:17:11.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 replace -f - --namespace=kubectl-7377'
+Jan 10 18:17:12.177: INFO: stderr: ""
+Jan 10 18:17:12.177: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
+STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
+[AfterEach] Kubectl replace
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
+Jan 10 18:17:12.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 delete pods e2e-test-httpd-pod --namespace=kubectl-7377'
+Jan 10 18:17:24.271: INFO: stderr: ""
+Jan 10 18:17:24.271: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:17:24.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-7377" for this suite.
+
+• [SLOW TEST:17.730 seconds]
+[sig-cli] Kubectl client
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl replace
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
+    should update a single-container pod's image  [Conformance]
+    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":277,"completed":274,"skipped":4686,"failed":0}
+SSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:17:24.278: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating secret with name secret-test-2e0f4a01-1171-490d-80e6-3138fc1c9af9
+STEP: Creating a pod to test consume secrets
+Jan 10 18:17:24.310: INFO: Waiting up to 5m0s for pod "pod-secrets-d6d1a20a-edc0-4c58-bae5-af823fe1d992" in namespace "secrets-5632" to be "Succeeded or Failed"
+Jan 10 18:17:24.312: INFO: Pod "pod-secrets-d6d1a20a-edc0-4c58-bae5-af823fe1d992": Phase="Pending", Reason="", readiness=false. Elapsed: 1.62039ms
+Jan 10 18:17:26.314: INFO: Pod "pod-secrets-d6d1a20a-edc0-4c58-bae5-af823fe1d992": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004054825s
+STEP: Saw pod success
+Jan 10 18:17:26.314: INFO: Pod "pod-secrets-d6d1a20a-edc0-4c58-bae5-af823fe1d992" satisfied condition "Succeeded or Failed"
+Jan 10 18:17:26.316: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-secrets-d6d1a20a-edc0-4c58-bae5-af823fe1d992 container secret-volume-test: 
+STEP: delete the pod
+Jan 10 18:17:26.330: INFO: Waiting for pod pod-secrets-d6d1a20a-edc0-4c58-bae5-af823fe1d992 to disappear
+Jan 10 18:17:26.332: INFO: Pod pod-secrets-d6d1a20a-edc0-4c58-bae5-af823fe1d992 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:17:26.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-5632" for this suite.
+•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":277,"completed":275,"skipped":4694,"failed":0}
+SSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:17:26.339: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
+[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 18:17:26.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f64da8f-aa06-4cee-ac8c-5ffc8d4c5d7b" in namespace "projected-149" to be "Succeeded or Failed"
+Jan 10 18:17:26.370: INFO: Pod "downwardapi-volume-5f64da8f-aa06-4cee-ac8c-5ffc8d4c5d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.702307ms
+Jan 10 18:17:28.373: INFO: Pod "downwardapi-volume-5f64da8f-aa06-4cee-ac8c-5ffc8d4c5d7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005102943s
+STEP: Saw pod success
+Jan 10 18:17:28.373: INFO: Pod "downwardapi-volume-5f64da8f-aa06-4cee-ac8c-5ffc8d4c5d7b" satisfied condition "Succeeded or Failed"
+Jan 10 18:17:28.374: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-5f64da8f-aa06-4cee-ac8c-5ffc8d4c5d7b container client-container: 
+STEP: delete the pod
+Jan 10 18:17:28.390: INFO: Waiting for pod downwardapi-volume-5f64da8f-aa06-4cee-ac8c-5ffc8d4c5d7b to disappear
+Jan 10 18:17:28.393: INFO: Pod downwardapi-volume-5f64da8f-aa06-4cee-ac8c-5ffc8d4c5d7b no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:17:28.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-149" for this suite.
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":276,"skipped":4707,"failed":0}
+SSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  updates the published spec when one version gets renamed [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 18:17:28.400: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates the published spec when one version gets renamed [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: set up a multi version CRD
+Jan 10 18:17:28.421: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: rename a version
+STEP: check the new version name is served
+STEP: check the old version name is removed
+STEP: check the other version is not changed
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 18:17:54.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-5474" for this suite.
+
+• [SLOW TEST:25.913 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  updates the published spec when one version gets renamed [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":277,"completed":277,"skipped":4712,"failed":0}
+SSSSSJan 10 18:17:54.313: INFO: Running AfterSuite actions on all nodes
+Jan 10 18:17:54.313: INFO: Running AfterSuite actions on node 1
+Jan 10 18:17:54.313: INFO: Skipping dumping logs from cluster
+
+JUnit report was created: /tmp/results/junit_01.xml
+{"msg":"Test Suite completed","total":277,"completed":277,"skipped":4717,"failed":0}
+
+Ran 277 of 4994 Specs in 4118.681 seconds
+SUCCESS! -- 277 Passed | 0 Failed | 0 Pending | 4717 Skipped
+PASS
+
+Ginkgo ran 1 suite in 1h8m40.19320526s
+Test Suite Passed
diff --git a/v1.18/nexastack/junit_01.xml b/v1.18/nexastack/junit_01.xml
new file mode 100644
index 00000000000..abe5b0c2196
--- /dev/null
+++ b/v1.18/nexastack/junit_01.xml
@@ -0,0 +1,14431 @@
+[junit_01.xml body not recoverable: the XML element markup was stripped during extraction, leaving only indentation. The file is the 14,431-line JUnit report (/tmp/results/junit_01.xml) for the 277 conformance specs recorded in e2e.log above.]
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      