Status: Active

No resource quota.

No LimitRange resource.
+[AfterEach] [sig-cli] Kubectl client
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:31:55.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-1301" for this suite.
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":277,"completed":101,"skipped":1878,"failed":0}
+SSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+ should perform rolling updates and roll backs of template modifications [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] StatefulSet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:31:55.952: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
+STEP: Creating service test in namespace statefulset-3026
+[It] should perform rolling updates and roll backs of template modifications [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a new StatefulSet
+Jan 10 17:31:55.985: INFO: Found 0 stateful pods, waiting for 3
+Jan 10 17:32:05.988: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:32:05.988: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:32:05.988: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:32:05.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-3026 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jan 10 17:32:06.186: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jan 10 17:32:06.186: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jan 10 17:32:06.186: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
+Jan 10 17:32:16.211: INFO: Updating stateful set ss2
+STEP: Creating a new revision
+STEP: Updating Pods in reverse ordinal order
+Jan 10 17:32:26.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-3026 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jan 10 17:32:26.416: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jan 10 17:32:26.416: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jan 10 17:32:26.416: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jan 10 17:32:36.429: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update
+Jan 10 17:32:36.429: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:32:36.429: INFO: Waiting for Pod statefulset-3026/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:32:36.429: INFO: Waiting for Pod statefulset-3026/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:32:46.434: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update
+Jan 10 17:32:46.434: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:32:46.434: INFO: Waiting for Pod statefulset-3026/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:32:56.434: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update
+Jan 10 17:32:56.434: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:32:56.434: INFO: Waiting for Pod statefulset-3026/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:33:06.434: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update
+Jan 10 17:33:06.434: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:33:16.433: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update
+STEP: Rolling back to a previous revision
+Jan 10 17:33:26.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-3026 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jan 10 17:33:26.780: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jan 10 17:33:26.780: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jan 10 17:33:26.780: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jan 10 17:33:36.805: INFO: Updating stateful set ss2
+STEP: Rolling back update in reverse ordinal order
+Jan 10 17:33:46.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 exec --namespace=statefulset-3026 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jan 10 17:33:47.021: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jan 10 17:33:47.021: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jan 10 17:33:47.021: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jan 10 17:33:57.034: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update
+Jan 10 17:33:57.034: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
+Jan 10 17:33:57.034: INFO: Waiting for Pod statefulset-3026/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
+Jan 10 17:33:57.034: INFO: Waiting for Pod statefulset-3026/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
+Jan 10 17:34:07.039: INFO: Waiting for StatefulSet statefulset-3026/ss2 to complete update
+Jan 10 17:34:07.039: INFO: Waiting for Pod statefulset-3026/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
+Jan 10 17:34:07.039: INFO: Waiting for Pod statefulset-3026/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
+Jan 10 17:34:17.040: INFO: Deleting all statefulset in ns statefulset-3026
+Jan 10 17:34:17.041: INFO: Scaling statefulset ss2 to 0
+Jan 10 17:34:47.052: INFO: Waiting for statefulset status.replicas updated to 0
+Jan 10 17:34:47.054: INFO: Deleting statefulset ss2
+[AfterEach] [sig-apps] StatefulSet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:34:47.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-3026" for this suite.
+
+• [SLOW TEST:171.119 seconds]
+[sig-apps] StatefulSet
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+ [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+ should perform rolling updates and roll backs of template modifications [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":277,"completed":102,"skipped":1883,"failed":0}
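Each `{"msg": ...}` line above is a machine-readable per-spec summary emitted after a spec finishes. A minimal sketch (a standalone helper, not part of the e2e framework) for tallying these summaries from a saved log:

```python
import json

# Sample summary lines copied from the log above (leading "+" may be present).
log_lines = [
    '{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":277,"completed":101,"skipped":1878,"failed":0}',
    '{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":277,"completed":102,"skipped":1883,"failed":0}',
]

def summarize(lines):
    """Parse per-spec summary JSON lines into dicts, skipping everything else."""
    specs = []
    for line in lines:
        line = line.lstrip("+").strip()
        if line.startswith('{"msg":'):
            specs.append(json.loads(line))
    return specs

specs = summarize(log_lines)
for s in specs:
    print(f'{s["completed"]}/{s["total"]} failed={s["failed"]}: {s["msg"][:40]}...')
```

This is handy for checking a long conformance run offline: `completed` should advance monotonically and `failed` should stay at 0.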
+[sig-network] Service endpoints latency
+ should not be very high [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Service endpoints latency
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:34:47.072: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename svc-latency
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not be very high [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:34:47.091: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: creating replication controller svc-latency-rc in namespace svc-latency-5274
+I0110 17:34:47.102851 24 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5274, replica count: 1
+I0110 17:34:48.153265 24 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
+I0110 17:34:49.153456 24 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
+Jan 10 17:34:49.263: INFO: Created: latency-svc-wxnb4
+Jan 10 17:34:49.271: INFO: Got endpoints: latency-svc-wxnb4 [17.395356ms]
+Jan 10 17:34:49.281: INFO: Created: latency-svc-lmsvp
+Jan 10 17:34:49.287: INFO: Got endpoints: latency-svc-lmsvp [15.702449ms]
+Jan 10 17:34:49.290: INFO: Created: latency-svc-kfdr7
+Jan 10 17:34:49.295: INFO: Got endpoints: latency-svc-kfdr7 [23.64955ms]
+Jan 10 17:34:49.298: INFO: Created: latency-svc-qgclm
+Jan 10 17:34:49.304: INFO: Got endpoints: latency-svc-qgclm [32.901813ms]
+Jan 10 17:34:49.307: INFO: Created: latency-svc-jmkgd
+Jan 10 17:34:49.313: INFO: Got endpoints: latency-svc-jmkgd [42.105306ms]
+Jan 10 17:34:49.317: INFO: Created: latency-svc-9mpr5
+Jan 10 17:34:49.325: INFO: Created: latency-svc-fgk5h
+Jan 10 17:34:49.326: INFO: Got endpoints: latency-svc-9mpr5 [54.403971ms]
+Jan 10 17:34:49.333: INFO: Created: latency-svc-sjcbv
+Jan 10 17:34:49.333: INFO: Got endpoints: latency-svc-fgk5h [62.1991ms]
+Jan 10 17:34:49.341: INFO: Created: latency-svc-h5qrf
+Jan 10 17:34:49.341: INFO: Got endpoints: latency-svc-sjcbv [69.775187ms]
+Jan 10 17:34:49.348: INFO: Got endpoints: latency-svc-h5qrf [76.784698ms]
+Jan 10 17:34:49.350: INFO: Created: latency-svc-zqzfz
+Jan 10 17:34:49.356: INFO: Got endpoints: latency-svc-zqzfz [84.90092ms]
+Jan 10 17:34:49.359: INFO: Created: latency-svc-q84ds
+Jan 10 17:34:49.364: INFO: Got endpoints: latency-svc-q84ds [92.684748ms]
+Jan 10 17:34:49.369: INFO: Created: latency-svc-hg47z
+Jan 10 17:34:49.374: INFO: Got endpoints: latency-svc-hg47z [102.262485ms]
+Jan 10 17:34:49.377: INFO: Created: latency-svc-rb9xn
+Jan 10 17:34:49.384: INFO: Created: latency-svc-9zg5c
+Jan 10 17:34:49.388: INFO: Got endpoints: latency-svc-rb9xn [117.266518ms]
+Jan 10 17:34:49.393: INFO: Got endpoints: latency-svc-9zg5c [121.102816ms]
+Jan 10 17:34:49.393: INFO: Created: latency-svc-gxfrq
+Jan 10 17:34:49.399: INFO: Got endpoints: latency-svc-gxfrq [126.962986ms]
+Jan 10 17:34:49.401: INFO: Created: latency-svc-q8nlp
+Jan 10 17:34:49.407: INFO: Got endpoints: latency-svc-q8nlp [136.2817ms]
+Jan 10 17:34:49.411: INFO: Created: latency-svc-rkh5h
+Jan 10 17:34:49.418: INFO: Got endpoints: latency-svc-rkh5h [130.703714ms]
+Jan 10 17:34:49.421: INFO: Created: latency-svc-jcwzg
+Jan 10 17:34:49.427: INFO: Got endpoints: latency-svc-jcwzg [132.472462ms]
+Jan 10 17:34:49.432: INFO: Created: latency-svc-2sggx
+Jan 10 17:34:49.437: INFO: Got endpoints: latency-svc-2sggx [133.108378ms]
+Jan 10 17:34:49.440: INFO: Created: latency-svc-j8rwx
+Jan 10 17:34:49.449: INFO: Got endpoints: latency-svc-j8rwx [135.846226ms]
+Jan 10 17:34:49.450: INFO: Created: latency-svc-2w68k
+Jan 10 17:34:49.455: INFO: Got endpoints: latency-svc-2w68k [129.36041ms]
+Jan 10 17:34:49.458: INFO: Created: latency-svc-7cknp
+Jan 10 17:34:49.463: INFO: Got endpoints: latency-svc-7cknp [129.929286ms]
+Jan 10 17:34:49.467: INFO: Created: latency-svc-w44jf
+Jan 10 17:34:49.474: INFO: Got endpoints: latency-svc-w44jf [132.627464ms]
+Jan 10 17:34:49.476: INFO: Created: latency-svc-pnbfm
+Jan 10 17:34:49.483: INFO: Got endpoints: latency-svc-pnbfm [134.781535ms]
+Jan 10 17:34:49.486: INFO: Created: latency-svc-ctjxp
+Jan 10 17:34:49.494: INFO: Got endpoints: latency-svc-ctjxp [138.030248ms]
+Jan 10 17:34:49.505: INFO: Created: latency-svc-b5s99
+Jan 10 17:34:49.510: INFO: Got endpoints: latency-svc-b5s99 [145.454564ms]
+Jan 10 17:34:49.513: INFO: Created: latency-svc-f78tt
+Jan 10 17:34:49.522: INFO: Created: latency-svc-zqvnj
+Jan 10 17:34:49.522: INFO: Got endpoints: latency-svc-f78tt [148.459724ms]
+Jan 10 17:34:49.527: INFO: Got endpoints: latency-svc-zqvnj [138.678234ms]
+Jan 10 17:34:49.532: INFO: Created: latency-svc-vhszs
+Jan 10 17:34:49.540: INFO: Created: latency-svc-9rnj6
+Jan 10 17:34:49.540: INFO: Got endpoints: latency-svc-vhszs [147.322342ms]
+Jan 10 17:34:49.544: INFO: Got endpoints: latency-svc-9rnj6 [145.827156ms]
+Jan 10 17:34:49.548: INFO: Created: latency-svc-nnd6s
+Jan 10 17:34:49.553: INFO: Got endpoints: latency-svc-nnd6s [145.672355ms]
+Jan 10 17:34:49.555: INFO: Created: latency-svc-hzlnl
+Jan 10 17:34:49.561: INFO: Got endpoints: latency-svc-hzlnl [142.935687ms]
+Jan 10 17:34:49.564: INFO: Created: latency-svc-hgch6
+Jan 10 17:34:49.569: INFO: Got endpoints: latency-svc-hgch6 [141.426382ms]
+Jan 10 17:34:49.570: INFO: Created: latency-svc-jtcv8
+Jan 10 17:34:49.579: INFO: Got endpoints: latency-svc-jtcv8 [142.09577ms]
+Jan 10 17:34:49.581: INFO: Created: latency-svc-hpv8f
+Jan 10 17:34:49.588: INFO: Got endpoints: latency-svc-hpv8f [138.712335ms]
+Jan 10 17:34:49.591: INFO: Created: latency-svc-t9jpb
+Jan 10 17:34:49.597: INFO: Got endpoints: latency-svc-t9jpb [141.828056ms]
+Jan 10 17:34:49.601: INFO: Created: latency-svc-zsxnn
+Jan 10 17:34:49.607: INFO: Created: latency-svc-9b2lz
+Jan 10 17:34:49.614: INFO: Created: latency-svc-n9428
+Jan 10 17:34:49.620: INFO: Got endpoints: latency-svc-zsxnn [156.646167ms]
+Jan 10 17:34:49.621: INFO: Created: latency-svc-cqbjv
+Jan 10 17:34:49.630: INFO: Created: latency-svc-q6fjz
+Jan 10 17:34:49.637: INFO: Created: latency-svc-425gz
+Jan 10 17:34:49.644: INFO: Created: latency-svc-99dkh
+Jan 10 17:34:49.650: INFO: Created: latency-svc-qklxl
+Jan 10 17:34:49.657: INFO: Created: latency-svc-fn49k
+Jan 10 17:34:49.664: INFO: Created: latency-svc-bt8rc
+Jan 10 17:34:49.670: INFO: Got endpoints: latency-svc-9b2lz [195.872774ms]
+Jan 10 17:34:49.672: INFO: Created: latency-svc-rvztd
+Jan 10 17:34:49.680: INFO: Created: latency-svc-9vxj8
+Jan 10 17:34:49.685: INFO: Created: latency-svc-7zvnh
+Jan 10 17:34:49.692: INFO: Created: latency-svc-k6jz6
+Jan 10 17:34:49.699: INFO: Created: latency-svc-f7qgb
+Jan 10 17:34:49.706: INFO: Created: latency-svc-s6zrf
+Jan 10 17:34:49.714: INFO: Created: latency-svc-p86hd
+Jan 10 17:34:49.720: INFO: Got endpoints: latency-svc-n9428 [236.793648ms]
+Jan 10 17:34:49.733: INFO: Created: latency-svc-259xs
+Jan 10 17:34:49.770: INFO: Got endpoints: latency-svc-cqbjv [275.142367ms]
+Jan 10 17:34:49.779: INFO: Created: latency-svc-sz65q
+Jan 10 17:34:49.818: INFO: Got endpoints: latency-svc-q6fjz [308.135711ms]
+Jan 10 17:34:49.830: INFO: Created: latency-svc-tl5j6
+Jan 10 17:34:49.867: INFO: Got endpoints: latency-svc-425gz [344.879063ms]
+Jan 10 17:34:49.877: INFO: Created: latency-svc-lvm52
+Jan 10 17:34:49.918: INFO: Got endpoints: latency-svc-99dkh [390.426665ms]
+Jan 10 17:34:49.932: INFO: Created: latency-svc-6mggt
+Jan 10 17:34:49.969: INFO: Got endpoints: latency-svc-qklxl [429.143707ms]
+Jan 10 17:34:49.981: INFO: Created: latency-svc-p5m5r
+Jan 10 17:34:50.019: INFO: Got endpoints: latency-svc-fn49k [474.554178ms]
+Jan 10 17:34:50.035: INFO: Created: latency-svc-jzc7t
+Jan 10 17:34:50.068: INFO: Got endpoints: latency-svc-bt8rc [515.316433ms]
+Jan 10 17:34:50.078: INFO: Created: latency-svc-tj7nb
+Jan 10 17:34:50.119: INFO: Got endpoints: latency-svc-rvztd [557.114599ms]
+Jan 10 17:34:50.130: INFO: Created: latency-svc-dmzrx
+Jan 10 17:34:50.169: INFO: Got endpoints: latency-svc-9vxj8 [600.628403ms]
+Jan 10 17:34:50.180: INFO: Created: latency-svc-88qln
+Jan 10 17:34:50.220: INFO: Got endpoints: latency-svc-7zvnh [640.828082ms]
+Jan 10 17:34:50.236: INFO: Created: latency-svc-hsxvz
+Jan 10 17:34:50.270: INFO: Got endpoints: latency-svc-k6jz6 [681.930581ms]
+Jan 10 17:34:50.281: INFO: Created: latency-svc-tjcft
+Jan 10 17:34:50.320: INFO: Got endpoints: latency-svc-f7qgb [722.283393ms]
+Jan 10 17:34:50.330: INFO: Created: latency-svc-jrhv4
+Jan 10 17:34:50.370: INFO: Got endpoints: latency-svc-s6zrf [749.499032ms]
+Jan 10 17:34:50.381: INFO: Created: latency-svc-ts2hq
+Jan 10 17:34:50.418: INFO: Got endpoints: latency-svc-p86hd [747.986598ms]
+Jan 10 17:34:50.429: INFO: Created: latency-svc-x8d4f
+Jan 10 17:34:50.471: INFO: Got endpoints: latency-svc-259xs [750.84181ms]
+Jan 10 17:34:50.482: INFO: Created: latency-svc-vnfsl
+Jan 10 17:34:50.520: INFO: Got endpoints: latency-svc-sz65q [749.936494ms]
+Jan 10 17:34:50.531: INFO: Created: latency-svc-wm58x
+Jan 10 17:34:50.570: INFO: Got endpoints: latency-svc-tl5j6 [751.678444ms]
+Jan 10 17:34:50.580: INFO: Created: latency-svc-j8sz9
+Jan 10 17:34:50.619: INFO: Got endpoints: latency-svc-lvm52 [751.859889ms]
+Jan 10 17:34:50.630: INFO: Created: latency-svc-96k9f
+Jan 10 17:34:50.669: INFO: Got endpoints: latency-svc-6mggt [751.493717ms]
+Jan 10 17:34:50.680: INFO: Created: latency-svc-5gmts
+Jan 10 17:34:50.719: INFO: Got endpoints: latency-svc-p5m5r [750.237437ms]
+Jan 10 17:34:50.730: INFO: Created: latency-svc-tpvbq
+Jan 10 17:34:50.779: INFO: Got endpoints: latency-svc-jzc7t [759.537254ms]
+Jan 10 17:34:50.793: INFO: Created: latency-svc-rvnwr
+Jan 10 17:34:50.818: INFO: Got endpoints: latency-svc-tj7nb [750.123417ms]
+Jan 10 17:34:50.829: INFO: Created: latency-svc-q8pzr
+Jan 10 17:34:50.873: INFO: Got endpoints: latency-svc-dmzrx [753.735794ms]
+Jan 10 17:34:50.883: INFO: Created: latency-svc-rl922
+Jan 10 17:34:50.919: INFO: Got endpoints: latency-svc-88qln [749.33338ms]
+Jan 10 17:34:50.930: INFO: Created: latency-svc-vqlks
+Jan 10 17:34:50.970: INFO: Got endpoints: latency-svc-hsxvz [748.796364ms]
+Jan 10 17:34:50.980: INFO: Created: latency-svc-qfh9m
+Jan 10 17:34:51.019: INFO: Got endpoints: latency-svc-tjcft [748.688794ms]
+Jan 10 17:34:51.030: INFO: Created: latency-svc-b4khk
+Jan 10 17:34:51.068: INFO: Got endpoints: latency-svc-jrhv4 [748.338703ms]
+Jan 10 17:34:51.081: INFO: Created: latency-svc-pss4z
+Jan 10 17:34:51.120: INFO: Got endpoints: latency-svc-ts2hq [750.329606ms]
+Jan 10 17:34:51.131: INFO: Created: latency-svc-ccjq8
+Jan 10 17:34:51.169: INFO: Got endpoints: latency-svc-x8d4f [750.685262ms]
+Jan 10 17:34:51.179: INFO: Created: latency-svc-75kbn
+Jan 10 17:34:51.219: INFO: Got endpoints: latency-svc-vnfsl [748.771495ms]
+Jan 10 17:34:51.229: INFO: Created: latency-svc-8dklc
+Jan 10 17:34:51.268: INFO: Got endpoints: latency-svc-wm58x [748.320484ms]
+Jan 10 17:34:51.280: INFO: Created: latency-svc-5gk9q
+Jan 10 17:34:51.320: INFO: Got endpoints: latency-svc-j8sz9 [750.143855ms]
+Jan 10 17:34:51.330: INFO: Created: latency-svc-422gk
+Jan 10 17:34:51.369: INFO: Got endpoints: latency-svc-96k9f [749.471381ms]
+Jan 10 17:34:51.380: INFO: Created: latency-svc-pnm57
+Jan 10 17:34:51.419: INFO: Got endpoints: latency-svc-5gmts [749.675635ms]
+Jan 10 17:34:51.428: INFO: Created: latency-svc-t7nv2
+Jan 10 17:34:51.470: INFO: Got endpoints: latency-svc-tpvbq [749.621417ms]
+Jan 10 17:34:51.480: INFO: Created: latency-svc-6dvjn
+Jan 10 17:34:51.521: INFO: Got endpoints: latency-svc-rvnwr [742.595189ms]
+Jan 10 17:34:51.532: INFO: Created: latency-svc-zxvsk
+Jan 10 17:34:51.568: INFO: Got endpoints: latency-svc-q8pzr [749.926776ms]
+Jan 10 17:34:51.580: INFO: Created: latency-svc-gbk5p
+Jan 10 17:34:51.619: INFO: Got endpoints: latency-svc-rl922 [746.18055ms]
+Jan 10 17:34:51.629: INFO: Created: latency-svc-sn4zv
+Jan 10 17:34:51.668: INFO: Got endpoints: latency-svc-vqlks [749.350135ms]
+Jan 10 17:34:51.678: INFO: Created: latency-svc-nhrfk
+Jan 10 17:34:51.722: INFO: Got endpoints: latency-svc-qfh9m [752.070036ms]
+Jan 10 17:34:51.732: INFO: Created: latency-svc-v5cxp
+Jan 10 17:34:51.768: INFO: Got endpoints: latency-svc-b4khk [748.512941ms]
+Jan 10 17:34:51.778: INFO: Created: latency-svc-prsp7
+Jan 10 17:34:51.819: INFO: Got endpoints: latency-svc-pss4z [751.007537ms]
+Jan 10 17:34:51.829: INFO: Created: latency-svc-76lw6
+Jan 10 17:34:51.869: INFO: Got endpoints: latency-svc-ccjq8 [748.848145ms]
+Jan 10 17:34:51.880: INFO: Created: latency-svc-swqkg
+Jan 10 17:34:51.919: INFO: Got endpoints: latency-svc-75kbn [750.096357ms]
+Jan 10 17:34:51.929: INFO: Created: latency-svc-vw4qj
+Jan 10 17:34:51.970: INFO: Got endpoints: latency-svc-8dklc [750.733403ms]
+Jan 10 17:34:51.981: INFO: Created: latency-svc-kp5mc
+Jan 10 17:34:52.020: INFO: Got endpoints: latency-svc-5gk9q [751.493672ms]
+Jan 10 17:34:52.030: INFO: Created: latency-svc-gb469
+Jan 10 17:34:52.070: INFO: Got endpoints: latency-svc-422gk [749.684595ms]
+Jan 10 17:34:52.082: INFO: Created: latency-svc-wp9c2
+Jan 10 17:34:52.118: INFO: Got endpoints: latency-svc-pnm57 [748.818549ms]
+Jan 10 17:34:52.130: INFO: Created: latency-svc-4t66x
+Jan 10 17:34:52.169: INFO: Got endpoints: latency-svc-t7nv2 [750.112896ms]
+Jan 10 17:34:52.180: INFO: Created: latency-svc-snh7p
+Jan 10 17:34:52.220: INFO: Got endpoints: latency-svc-6dvjn [750.30865ms]
+Jan 10 17:34:52.232: INFO: Created: latency-svc-w9f7d
+Jan 10 17:34:52.268: INFO: Got endpoints: latency-svc-zxvsk [746.668985ms]
+Jan 10 17:34:52.278: INFO: Created: latency-svc-ktr68
+Jan 10 17:34:52.318: INFO: Got endpoints: latency-svc-gbk5p [749.425356ms]
+Jan 10 17:34:52.332: INFO: Created: latency-svc-xm5vs
+Jan 10 17:34:52.369: INFO: Got endpoints: latency-svc-sn4zv [750.091066ms]
+Jan 10 17:34:52.379: INFO: Created: latency-svc-m5d6g
+Jan 10 17:34:52.419: INFO: Got endpoints: latency-svc-nhrfk [750.995327ms]
+Jan 10 17:34:52.429: INFO: Created: latency-svc-7994r
+Jan 10 17:34:52.467: INFO: Got endpoints: latency-svc-v5cxp [745.434084ms]
+Jan 10 17:34:52.479: INFO: Created: latency-svc-6zwfc
+Jan 10 17:34:52.519: INFO: Got endpoints: latency-svc-prsp7 [751.296196ms]
+Jan 10 17:34:52.532: INFO: Created: latency-svc-xqmp2
+Jan 10 17:34:52.568: INFO: Got endpoints: latency-svc-76lw6 [749.163357ms]
+Jan 10 17:34:52.579: INFO: Created: latency-svc-8ght6
+Jan 10 17:34:52.619: INFO: Got endpoints: latency-svc-swqkg [749.576094ms]
+Jan 10 17:34:52.629: INFO: Created: latency-svc-xf9z8
+Jan 10 17:34:52.669: INFO: Got endpoints: latency-svc-vw4qj [750.583727ms]
+Jan 10 17:34:52.680: INFO: Created: latency-svc-zsdxc
+Jan 10 17:34:52.718: INFO: Got endpoints: latency-svc-kp5mc [748.028203ms]
+Jan 10 17:34:52.730: INFO: Created: latency-svc-bp2vm
+Jan 10 17:34:52.771: INFO: Got endpoints: latency-svc-gb469 [751.36535ms]
+Jan 10 17:34:52.782: INFO: Created: latency-svc-qpwdt
+Jan 10 17:34:52.820: INFO: Got endpoints: latency-svc-wp9c2 [750.211448ms]
+Jan 10 17:34:52.831: INFO: Created: latency-svc-gk6xd
+Jan 10 17:34:52.868: INFO: Got endpoints: latency-svc-4t66x [750.46587ms]
+Jan 10 17:34:52.880: INFO: Created: latency-svc-bs2nr
+Jan 10 17:34:52.921: INFO: Got endpoints: latency-svc-snh7p [751.712904ms]
+Jan 10 17:34:52.931: INFO: Created: latency-svc-ktsbn
+Jan 10 17:34:52.970: INFO: Got endpoints: latency-svc-w9f7d [748.983435ms]
+Jan 10 17:34:52.982: INFO: Created: latency-svc-x45xr
+Jan 10 17:34:53.020: INFO: Got endpoints: latency-svc-ktr68 [751.819915ms]
+Jan 10 17:34:53.030: INFO: Created: latency-svc-gtbt8
+Jan 10 17:34:53.069: INFO: Got endpoints: latency-svc-xm5vs [751.525846ms]
+Jan 10 17:34:53.080: INFO: Created: latency-svc-gqghr
+Jan 10 17:34:53.119: INFO: Got endpoints: latency-svc-m5d6g [750.331825ms]
+Jan 10 17:34:53.131: INFO: Created: latency-svc-24hmd
+Jan 10 17:34:53.219: INFO: Got endpoints: latency-svc-7994r [799.711629ms]
+Jan 10 17:34:53.231: INFO: Created: latency-svc-b4t2k
+Jan 10 17:34:53.268: INFO: Got endpoints: latency-svc-6zwfc [801.103386ms]
+Jan 10 17:34:53.279: INFO: Created: latency-svc-khf67
+Jan 10 17:34:53.320: INFO: Got endpoints: latency-svc-xqmp2 [800.390061ms]
+Jan 10 17:34:53.331: INFO: Created: latency-svc-lrms2
+Jan 10 17:34:53.368: INFO: Got endpoints: latency-svc-8ght6 [799.528054ms]
+Jan 10 17:34:53.379: INFO: Created: latency-svc-89cqx
+Jan 10 17:34:53.419: INFO: Got endpoints: latency-svc-xf9z8 [799.964532ms]
+Jan 10 17:34:53.430: INFO: Created: latency-svc-p2skk
+Jan 10 17:34:53.470: INFO: Got endpoints: latency-svc-zsdxc [800.377399ms]
+Jan 10 17:34:53.482: INFO: Created: latency-svc-w9hdn
+Jan 10 17:34:53.519: INFO: Got endpoints: latency-svc-bp2vm [800.421471ms]
+Jan 10 17:34:53.531: INFO: Created: latency-svc-49vz9
+Jan 10 17:34:53.570: INFO: Got endpoints: latency-svc-qpwdt [798.512895ms]
+Jan 10 17:34:53.580: INFO: Created: latency-svc-swn9w
+Jan 10 17:34:53.619: INFO: Got endpoints: latency-svc-gk6xd [799.533857ms]
+Jan 10 17:34:53.631: INFO: Created: latency-svc-75h4n
+Jan 10 17:34:53.668: INFO: Got endpoints: latency-svc-bs2nr [799.641602ms]
+Jan 10 17:34:53.682: INFO: Created: latency-svc-h2jc6
+Jan 10 17:34:53.720: INFO: Got endpoints: latency-svc-ktsbn [799.308511ms]
+Jan 10 17:34:53.732: INFO: Created: latency-svc-4l6sf
+Jan 10 17:34:53.769: INFO: Got endpoints: latency-svc-x45xr [798.98446ms]
+Jan 10 17:34:53.781: INFO: Created: latency-svc-96qmx
+Jan 10 17:34:53.819: INFO: Got endpoints: latency-svc-gtbt8 [798.880961ms]
+Jan 10 17:34:53.830: INFO: Created: latency-svc-8hf42
+Jan 10 17:34:53.869: INFO: Got endpoints: latency-svc-gqghr [799.630038ms]
+Jan 10 17:34:53.881: INFO: Created: latency-svc-p2fpx
+Jan 10 17:34:53.918: INFO: Got endpoints: latency-svc-24hmd [798.508436ms]
+Jan 10 17:34:53.929: INFO: Created: latency-svc-6rg9z
+Jan 10 17:34:53.969: INFO: Got endpoints: latency-svc-b4t2k [750.258008ms]
+Jan 10 17:34:53.979: INFO: Created: latency-svc-qtzlv
+Jan 10 17:34:54.018: INFO: Got endpoints: latency-svc-khf67 [749.777723ms]
+Jan 10 17:34:54.037: INFO: Created: latency-svc-bpltd
+Jan 10 17:34:54.070: INFO: Got endpoints: latency-svc-lrms2 [750.719346ms]
+Jan 10 17:34:54.081: INFO: Created: latency-svc-mmk62
+Jan 10 17:34:54.118: INFO: Got endpoints: latency-svc-89cqx [749.836359ms]
+Jan 10 17:34:54.128: INFO: Created: latency-svc-n7n8q
+Jan 10 17:34:54.171: INFO: Got endpoints: latency-svc-p2skk [751.811062ms]
+Jan 10 17:34:54.182: INFO: Created: latency-svc-8krr4
+Jan 10 17:34:54.219: INFO: Got endpoints: latency-svc-w9hdn [749.151598ms]
+Jan 10 17:34:54.229: INFO: Created: latency-svc-tt2cg
+Jan 10 17:34:54.269: INFO: Got endpoints: latency-svc-49vz9 [750.644525ms]
+Jan 10 17:34:54.279: INFO: Created: latency-svc-85c2x
+Jan 10 17:34:54.321: INFO: Got endpoints: latency-svc-swn9w [751.244543ms]
+Jan 10 17:34:54.332: INFO: Created: latency-svc-s6wqp
+Jan 10 17:34:54.368: INFO: Got endpoints: latency-svc-75h4n [748.013203ms]
+Jan 10 17:34:54.380: INFO: Created: latency-svc-ts29q
+Jan 10 17:34:54.418: INFO: Got endpoints: latency-svc-h2jc6 [749.454751ms]
+Jan 10 17:34:54.428: INFO: Created: latency-svc-vrk8q
+Jan 10 17:34:54.469: INFO: Got endpoints: latency-svc-4l6sf [748.13387ms]
+Jan 10 17:34:54.479: INFO: Created: latency-svc-8t66r
+Jan 10 17:34:54.519: INFO: Got endpoints: latency-svc-96qmx [749.978742ms]
+Jan 10 17:34:54.530: INFO: Created: latency-svc-7qlln
+Jan 10 17:34:54.568: INFO: Got endpoints: latency-svc-8hf42 [748.823852ms]
+Jan 10 17:34:54.579: INFO: Created: latency-svc-mwrjf
+Jan 10 17:34:54.620: INFO: Got endpoints: latency-svc-p2fpx [750.570312ms]
+Jan 10 17:34:54.630: INFO: Created: latency-svc-8drpg
+Jan 10 17:34:54.669: INFO: Got endpoints: latency-svc-6rg9z [751.325553ms]
+Jan 10 17:34:54.681: INFO: Created: latency-svc-gsp69
+Jan 10 17:34:54.720: INFO: Got endpoints: latency-svc-qtzlv [750.536827ms]
+Jan 10 17:34:54.730: INFO: Created: latency-svc-l96fz
+Jan 10 17:34:54.768: INFO: Got endpoints: latency-svc-bpltd [749.385818ms]
+Jan 10 17:34:54.778: INFO: Created: latency-svc-4jvw8
+Jan 10 17:34:54.819: INFO: Got endpoints: latency-svc-mmk62 [748.794441ms]
+Jan 10 17:34:54.831: INFO: Created: latency-svc-4cl78
+Jan 10 17:34:54.869: INFO: Got endpoints: latency-svc-n7n8q [751.262897ms]
+Jan 10 17:34:54.879: INFO: Created: latency-svc-mnchv
+Jan 10 17:34:54.920: INFO: Got endpoints: latency-svc-8krr4 [749.677351ms]
+Jan 10 17:34:54.931: INFO: Created: latency-svc-9pnsg
+Jan 10 17:34:54.971: INFO: Got endpoints: latency-svc-tt2cg [751.908994ms]
+Jan 10 17:34:54.982: INFO: Created: latency-svc-hfsm8
+Jan 10 17:34:55.023: INFO: Got endpoints: latency-svc-85c2x [753.078567ms]
+Jan 10 17:34:55.036: INFO: Created: latency-svc-zh8lv
+Jan 10 17:34:55.068: INFO: Got endpoints: latency-svc-s6wqp [747.147928ms]
+Jan 10 17:34:55.094: INFO: Created: latency-svc-vbnd6
+Jan 10 17:34:55.121: INFO: Got endpoints: latency-svc-ts29q [753.001932ms]
+Jan 10 17:34:55.151: INFO: Created: latency-svc-flx9x
+Jan 10 17:34:55.171: INFO: Got endpoints: latency-svc-vrk8q [752.70895ms]
+Jan 10 17:34:55.180: INFO: Created: latency-svc-5tvrd
+Jan 10 17:34:55.219: INFO: Got endpoints: latency-svc-8t66r [750.289448ms]
+Jan 10 17:34:55.229: INFO: Created: latency-svc-wkcf2
+Jan 10 17:34:55.269: INFO: Got endpoints: latency-svc-7qlln [749.19027ms]
+Jan 10 17:34:55.278: INFO: Created: latency-svc-w4ctm
+Jan 10 17:34:55.320: INFO: Got endpoints: latency-svc-mwrjf [752.532787ms]
+Jan 10 17:34:55.337: INFO: Created: latency-svc-fsg4g
+Jan 10 17:34:55.370: INFO: Got endpoints: latency-svc-8drpg [749.844262ms]
+Jan 10 17:34:55.380: INFO: Created: latency-svc-4xwv2
+Jan 10 17:34:55.420: INFO: Got endpoints: latency-svc-gsp69 [750.879495ms]
+Jan 10 17:34:55.430: INFO: Created: latency-svc-76dw4
+Jan 10 17:34:55.468: INFO: Got endpoints: latency-svc-l96fz [748.21918ms]
+Jan 10 17:34:55.479: INFO: Created: latency-svc-fzs29
+Jan 10 17:34:55.521: INFO: Got endpoints: latency-svc-4jvw8 [752.87811ms]
+Jan 10 17:34:55.532: INFO: Created: latency-svc-nxkqk
+Jan 10 17:34:55.570: INFO: Got endpoints: latency-svc-4cl78 [750.228856ms]
+Jan 10 17:34:55.580: INFO: Created: latency-svc-hpdbh
+Jan 10 17:34:55.618: INFO: Got endpoints: latency-svc-mnchv [748.684302ms]
+Jan 10 17:34:55.629: INFO: Created: latency-svc-gdn2f
+Jan 10 17:34:55.668: INFO: Got endpoints: latency-svc-9pnsg [747.568523ms]
+Jan 10 17:34:55.680: INFO: Created: latency-svc-bsxs4
+Jan 10 17:34:55.720: INFO: Got endpoints: latency-svc-hfsm8 [749.094952ms]
+Jan 10 17:34:55.730: INFO: Created: latency-svc-l862j
+Jan 10 17:34:55.769: INFO: Got endpoints: latency-svc-zh8lv [746.479756ms]
+Jan 10 17:34:55.779: INFO: Created: latency-svc-wljtg
+Jan 10 17:34:55.820: INFO: Got endpoints: latency-svc-vbnd6 [751.744392ms]
+Jan 10 17:34:55.832: INFO: Created: latency-svc-xjhfk
+Jan 10 17:34:55.868: INFO: Got endpoints: latency-svc-flx9x [747.572638ms]
+Jan 10 17:34:55.879: INFO: Created: latency-svc-gbfr2
+Jan 10 17:34:55.919: INFO: Got endpoints: latency-svc-5tvrd [747.76087ms]
+Jan 10 17:34:55.935: INFO: Created: latency-svc-nx99w
+Jan 10 17:34:55.970: INFO: Got endpoints: latency-svc-wkcf2 [750.693301ms]
+Jan 10 17:34:55.980: INFO: Created: latency-svc-bdxkx
+Jan 10 17:34:56.021: INFO: Got endpoints: latency-svc-w4ctm [752.791074ms]
+Jan 10 17:34:56.034: INFO: Created: latency-svc-rgg4r
+Jan 10 17:34:56.069: INFO: Got endpoints: latency-svc-fsg4g [748.640343ms]
+Jan 10 17:34:56.079: INFO: Created: latency-svc-2jhqj
+Jan 10 17:34:56.119: INFO: Got endpoints: latency-svc-4xwv2 [749.757267ms]
+Jan 10 17:34:56.132: INFO: Created: latency-svc-9nr98
+Jan 10 17:34:56.172: INFO: Got endpoints: latency-svc-76dw4 [751.870502ms]
+Jan 10 17:34:56.183: INFO: Created: latency-svc-hxmzr
+Jan 10 17:34:56.218: INFO: Got endpoints: latency-svc-fzs29 [750.206777ms]
+Jan 10 17:34:56.230: INFO: Created: latency-svc-k5zp2
+Jan 10 17:34:56.270: INFO: Got endpoints: latency-svc-nxkqk [749.152718ms]
+Jan 10 17:34:56.283: INFO: Created: latency-svc-j8zqx
+Jan 10 17:34:56.321: INFO: Got endpoints: latency-svc-hpdbh [750.89374ms]
+Jan 10 17:34:56.331: INFO: Created: latency-svc-8c287
+Jan 10 17:34:56.368: INFO: Got endpoints: latency-svc-gdn2f [749.982143ms]
+Jan 10 17:34:56.381: INFO: Created: latency-svc-2xmnp
+Jan 10 17:34:56.420: INFO: Got endpoints: latency-svc-bsxs4 [751.496481ms]
+Jan 10 17:34:56.429: INFO: Created: latency-svc-ph7p5
+Jan 10 17:34:56.468: INFO: Got endpoints: latency-svc-l862j [747.390281ms]
+Jan 10 17:34:56.478: INFO: Created: latency-svc-69tdk
+Jan 10 17:34:56.519: INFO: Got endpoints: latency-svc-wljtg [749.83889ms]
+Jan 10 17:34:56.530: INFO: Created: latency-svc-77l2q
+Jan 10 17:34:56.570: INFO: Got endpoints: latency-svc-xjhfk [749.695611ms]
+Jan 10 17:34:56.581: INFO: Created: latency-svc-vkpdm
+Jan 10 17:34:56.620: INFO: Got endpoints: latency-svc-gbfr2 [751.066806ms]
+Jan 10 17:34:56.635: INFO: Created: latency-svc-r54mr
+Jan 10 17:34:56.669: INFO: Got endpoints: latency-svc-nx99w [750.579044ms]
+Jan 10 17:34:56.682: INFO: Created: latency-svc-h5wxd
+Jan 10 17:34:56.720: INFO: Got endpoints: latency-svc-bdxkx [749.778008ms]
+Jan 10 17:34:56.729: INFO: Created: latency-svc-v5mcm
+Jan 10 17:34:56.771: INFO: Got endpoints: latency-svc-rgg4r [749.73085ms]
+Jan 10 17:34:56.781: INFO: Created: latency-svc-x4tb6
+Jan 10 17:34:56.819: INFO: Got endpoints: latency-svc-2jhqj [749.068783ms]
+Jan 10 17:34:56.828: INFO: Created: latency-svc-vqsj2
+Jan 10 17:34:56.869: INFO: Got endpoints: latency-svc-9nr98 [749.79854ms]
+Jan 10 17:34:56.879: INFO: Created: latency-svc-wfgzz
+Jan 10 17:34:56.919: INFO: Got endpoints: latency-svc-hxmzr [746.986541ms]
+Jan 10 17:34:56.934: INFO: Created: latency-svc-225jr
+Jan 10 17:34:56.970: INFO: Got endpoints: latency-svc-k5zp2 [751.371307ms]
+Jan 10 17:34:56.983: INFO: Created: latency-svc-6kcw6
+Jan 10 17:34:57.019: INFO: Got endpoints: latency-svc-j8zqx [748.70466ms]
+Jan 10 17:34:57.029: INFO: Created: latency-svc-5hv9w
+Jan 10 17:34:57.068: INFO: Got endpoints: latency-svc-8c287 [747.814602ms]
+Jan 10 17:34:57.078: INFO: Created: latency-svc-87qjs
+Jan 10 17:34:57.119: INFO: Got endpoints: latency-svc-2xmnp [751.548115ms]
+Jan 10 17:34:57.130: INFO: Created: latency-svc-4cwf7
+Jan 10 17:34:57.169: INFO: Got endpoints: latency-svc-ph7p5 [749.543106ms]
+Jan 10 17:34:57.218: INFO: Got endpoints: latency-svc-69tdk [750.154855ms]
+Jan 10 17:34:57.271: INFO: Got endpoints: latency-svc-77l2q [751.835166ms]
+Jan 10 17:34:57.318: INFO: Got endpoints: latency-svc-vkpdm [748.515213ms]
+Jan 10 17:34:57.369: INFO: Got endpoints: latency-svc-r54mr [749.655398ms]
+Jan 10 17:34:57.419: INFO: Got endpoints: latency-svc-h5wxd [748.225145ms]
+Jan 10 17:34:57.468: INFO: Got endpoints: latency-svc-v5mcm [748.776044ms]
+Jan 10 17:34:57.520: INFO: Got endpoints: latency-svc-x4tb6 [748.346952ms]
+Jan 10 17:34:57.568: INFO: Got endpoints: latency-svc-vqsj2 [749.474246ms]
+Jan 10 17:34:57.619: INFO: Got endpoints: latency-svc-wfgzz [749.731292ms]
+Jan 10 17:34:57.670: INFO: Got endpoints: latency-svc-225jr [750.31264ms]
+Jan 10 17:34:57.719: INFO: Got endpoints: latency-svc-6kcw6 [748.738987ms]
+Jan 10 17:34:57.769: INFO: Got endpoints: latency-svc-5hv9w [750.284904ms]
+Jan 10 17:34:57.822: INFO: Got endpoints: latency-svc-87qjs [753.966823ms]
+Jan 10 17:34:57.869: INFO: Got endpoints: latency-svc-4cwf7 [749.640358ms]
+Jan 10 17:34:57.869: INFO: Latencies: [15.702449ms 23.64955ms 32.901813ms 42.105306ms 54.403971ms 62.1991ms 69.775187ms 76.784698ms 84.90092ms 92.684748ms 102.262485ms 117.266518ms 121.102816ms 126.962986ms 129.36041ms 129.929286ms 130.703714ms 132.472462ms 132.627464ms 133.108378ms 134.781535ms 135.846226ms 136.2817ms 138.030248ms 138.678234ms 138.712335ms 141.426382ms 141.828056ms 142.09577ms 142.935687ms 145.454564ms 145.672355ms 145.827156ms 147.322342ms 148.459724ms 156.646167ms 195.872774ms 236.793648ms 275.142367ms 308.135711ms 344.879063ms 390.426665ms 429.143707ms 474.554178ms 515.316433ms 557.114599ms 600.628403ms 640.828082ms 681.930581ms 722.283393ms 742.595189ms 745.434084ms 746.18055ms 746.479756ms 746.668985ms 746.986541ms 747.147928ms 747.390281ms 747.568523ms 747.572638ms 747.76087ms 747.814602ms 747.986598ms 748.013203ms 748.028203ms 748.13387ms 748.21918ms 748.225145ms 748.320484ms 748.338703ms 748.346952ms 748.512941ms 748.515213ms 748.640343ms 748.684302ms 748.688794ms 748.70466ms 748.738987ms 748.771495ms 748.776044ms 748.794441ms 748.796364ms 748.818549ms 748.823852ms 748.848145ms 748.983435ms 749.068783ms 749.094952ms 749.151598ms 749.152718ms 749.163357ms 749.19027ms 749.33338ms 749.350135ms 749.385818ms 749.425356ms 749.454751ms 749.471381ms 749.474246ms 749.499032ms 749.543106ms 749.576094ms 749.621417ms 749.640358ms 749.655398ms 749.675635ms 749.677351ms 749.684595ms 749.695611ms 749.73085ms 749.731292ms 749.757267ms 749.777723ms 749.778008ms 749.79854ms 749.836359ms 749.83889ms 749.844262ms 749.926776ms 749.936494ms 749.978742ms 749.982143ms 750.091066ms 750.096357ms 750.112896ms 750.123417ms 750.143855ms 750.154855ms 750.206777ms 750.211448ms 750.228856ms 750.237437ms 750.258008ms 750.284904ms 750.289448ms 750.30865ms 750.31264ms 750.329606ms 750.331825ms 750.46587ms 750.536827ms 750.570312ms 750.579044ms 750.583727ms 750.644525ms 750.685262ms 750.693301ms 750.719346ms 750.733403ms 750.84181ms 750.879495ms 750.89374ms 750.995327ms 751.007537ms 751.066806ms 751.244543ms 751.262897ms 751.296196ms 751.325553ms 751.36535ms 751.371307ms 751.493672ms 751.493717ms 751.496481ms 751.525846ms 751.548115ms 751.678444ms 751.712904ms 751.744392ms 751.811062ms 751.819915ms 751.835166ms 751.859889ms 751.870502ms 751.908994ms 752.070036ms 752.532787ms 752.70895ms 752.791074ms 752.87811ms 753.001932ms 753.078567ms 753.735794ms 753.966823ms 759.537254ms 798.508436ms 798.512895ms 798.880961ms 798.98446ms 799.308511ms 799.528054ms 799.533857ms 799.630038ms 799.641602ms 799.711629ms 799.964532ms 800.377399ms 800.390061ms 800.421471ms 801.103386ms]
+Jan 10 17:34:57.870: INFO: 50 %ile: 749.543106ms
+Jan 10 17:34:57.870: INFO: 90 %ile: 753.001932ms
+Jan 10 17:34:57.870: INFO: 99 %ile: 800.421471ms
+Jan 10 17:34:57.870: INFO: Total sample count: 200
+[AfterEach] [sig-network] Service endpoints latency
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:34:57.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "svc-latency-5274" for this suite.
+
+• [SLOW TEST:10.809 seconds]
+[sig-network] Service endpoints latency
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+ should not be very high [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":277,"completed":103,"skipped":1883,"failed":0}
+SSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
+ listing custom resource definition objects works [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:34:57.889: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename custom-resource-definition
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] listing custom resource definition objects works [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:34:57.912: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:35:59.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "custom-resource-definition-4018" for this suite.
+
+• [SLOW TEST:61.204 seconds]
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+ Simple CustomResourceDefinition
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
+ listing custom resource definition objects works [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":277,"completed":104,"skipped":1894,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Services
+ should find a service from listing all namespaces [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Services
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:35:59.093: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
+[It] should find a service from listing all namespaces [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: fetching services
+[AfterEach] [sig-network] Services
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:35:59.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-4277" for this suite.
+[AfterEach] [sig-network] Services
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
+•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":277,"completed":105,"skipped":1917,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1
+ should proxy logs on node using proxy subresource [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] version v1
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:35:59.149: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node using proxy subresource [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:35:59.186: INFO: (0) /api/v1/nodes/ip-172-20-52-46.ap-south-1.compute.internal/proxy/logs/:
+alternatives.log
+amazon/
+>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:35:59.634: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:36:02.648: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should deny crd creation [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Registering the crd webhook via the AdmissionRegistration API
+STEP: Creating a custom resource definition that should be denied by the webhook
+Jan 10 17:36:02.663: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:36:02.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-8272" for this suite.
+STEP: Destroying namespace "webhook-8272-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":277,"completed":107,"skipped":1966,"failed":0}
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+ should perform canary updates and phased rolling updates of template modifications [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] StatefulSet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:36:02.734: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
+STEP: Creating service test in namespace statefulset-6573
+[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a new StatefulSet
+Jan 10 17:36:02.772: INFO: Found 0 stateful pods, waiting for 3
+Jan 10 17:36:12.776: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:36:12.776: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:36:12.776: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
+Jan 10 17:36:12.798: INFO: Updating stateful set ss2
+STEP: Creating a new revision
+STEP: Not applying an update when the partition is greater than the number of replicas
+STEP: Performing a canary update
+Jan 10 17:36:22.823: INFO: Updating stateful set ss2
+Jan 10 17:36:22.827: INFO: Waiting for Pod statefulset-6573/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+STEP: Restoring Pods to the correct revision when they are deleted
+Jan 10 17:36:32.855: INFO: Found 2 stateful pods, waiting for 3
+Jan 10 17:36:42.859: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:36:42.859: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jan 10 17:36:42.859: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Performing a phased rolling update
+Jan 10 17:36:42.879: INFO: Updating stateful set ss2
+Jan 10 17:36:42.884: INFO: Waiting for Pod statefulset-6573/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:36:52.905: INFO: Updating stateful set ss2
+Jan 10 17:36:52.911: INFO: Waiting for StatefulSet statefulset-6573/ss2 to complete update
+Jan 10 17:36:52.911: INFO: Waiting for Pod statefulset-6573/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jan 10 17:37:02.916: INFO: Waiting for StatefulSet statefulset-6573/ss2 to complete update
+Jan 10 17:37:02.916: INFO: Waiting for Pod statefulset-6573/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
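The canary and phased rollout being exercised here are driven by the StatefulSet `spec.updateStrategy.rollingUpdate.partition` field: pods with ordinal >= partition move to the new revision, pods below it keep the old one, and lowering the partition phases the rollout. A manifest fragment illustrating this (replica count, service name, and image are taken from the log; labels and the container name are assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # headless service created earlier in the log
  replicas: 3
  selector:
    matchLabels:
      app: ss2               # label value is an assumption
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2           # canary: only ss2-2 (ordinal >= 2) gets the new revision
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver      # container name is an assumption
        image: docker.io/library/httpd:2.4.39-alpine
```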
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
+Jan 10 17:37:12.916: INFO: Deleting all statefulset in ns statefulset-6573
+Jan 10 17:37:12.918: INFO: Scaling statefulset ss2 to 0
+Jan 10 17:37:32.929: INFO: Waiting for statefulset status.replicas updated to 0
+Jan 10 17:37:32.931: INFO: Deleting statefulset ss2
+[AfterEach] [sig-apps] StatefulSet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:37:32.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-6573" for this suite.
+
+• [SLOW TEST:90.212 seconds]
+[sig-apps] StatefulSet
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+ [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+ should perform canary updates and phased rolling updates of template modifications [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":277,"completed":108,"skipped":1987,"failed":0}
+SSS
+------------------------------
+[sig-apps] Daemon set [Serial]
+ should run and stop simple daemon [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Daemon set [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:37:32.946: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
+[It] should run and stop simple daemon [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating simple DaemonSet "daemon-set"
+STEP: Check that daemon pods launch on every node of the cluster.
+Jan 10 17:37:32.990: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:32.990: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:32.990: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:32.992: INFO: Number of nodes with available pods: 0
+Jan 10 17:37:32.992: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:33.996: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:33.996: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:33.996: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:33.998: INFO: Number of nodes with available pods: 0
+Jan 10 17:37:33.998: INFO: Node ip-172-20-33-172.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:34.996: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:34.996: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:34.996: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:34.998: INFO: Number of nodes with available pods: 3
+Jan 10 17:37:34.998: INFO: Number of running nodes: 3, number of available pods: 3
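The repeated "can't tolerate" lines show why the DaemonSet lands on only the three worker nodes: its pod template carries no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint. A pod-spec fragment that would allow scheduling onto the masters (illustrative only; the test's DaemonSet deliberately omits it):

```yaml
# Added under the DaemonSet's pod template spec; illustrative —
# the test's DaemonSet intentionally lacks this toleration.
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```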
+STEP: Stop a daemon pod, check that the daemon pod is revived.
+Jan 10 17:37:35.011: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:35.011: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:35.011: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:35.013: INFO: Number of nodes with available pods: 2
+Jan 10 17:37:35.013: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:36.017: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:36.017: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:36.017: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:36.020: INFO: Number of nodes with available pods: 2
+Jan 10 17:37:36.020: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:37.017: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:37.017: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:37.017: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:37.020: INFO: Number of nodes with available pods: 2
+Jan 10 17:37:37.020: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:38.017: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:38.017: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:38.017: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:38.020: INFO: Number of nodes with available pods: 2
+Jan 10 17:37:38.020: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:39.017: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:39.017: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:39.017: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:39.019: INFO: Number of nodes with available pods: 2
+Jan 10 17:37:39.019: INFO: Node ip-172-20-52-46.ap-south-1.compute.internal is running more than one daemon pod
+Jan 10 17:37:40.017: INFO: DaemonSet pods can't tolerate node ip-172-20-34-101.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:40.017: INFO: DaemonSet pods can't tolerate node ip-172-20-50-129.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:40.017: INFO: DaemonSet pods can't tolerate node ip-172-20-63-171.ap-south-1.compute.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jan 10 17:37:40.019: INFO: Number of nodes with available pods: 3
+Jan 10 17:37:40.019: INFO: Number of running nodes: 3, number of available pods: 3
+[AfterEach] [sig-apps] Daemon set [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2555, will wait for the garbage collector to delete the pods
+Jan 10 17:37:40.079: INFO: Deleting DaemonSet.extensions daemon-set took: 5.316755ms
+Jan 10 17:37:40.179: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.220035ms
+Jan 10 17:37:48.381: INFO: Number of nodes with available pods: 0
+Jan 10 17:37:48.381: INFO: Number of running nodes: 0, number of available pods: 0
+Jan 10 17:37:48.383: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2555/daemonsets","resourceVersion":"15716"},"items":null}
+
+Jan 10 17:37:48.385: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2555/pods","resourceVersion":"15716"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:37:48.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-2555" for this suite.
+
+• [SLOW TEST:15.454 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+ should run and stop simple daemon [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":277,"completed":109,"skipped":1990,"failed":0}
+S
+------------------------------
+[sig-storage] Projected downwardAPI
+ should provide container's cpu limit [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected downwardAPI
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:37:48.400: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
+[It] should provide container's cpu limit [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:37:48.433: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99" in namespace "projected-3311" to be "Succeeded or Failed"
+Jan 10 17:37:48.435: INFO: Pod "downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99": Phase="Pending", Reason="", readiness=false. Elapsed: 1.737972ms
+Jan 10 17:37:50.437: INFO: Pod "downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00440725s
+STEP: Saw pod success
+Jan 10 17:37:50.437: INFO: Pod "downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99" satisfied condition "Succeeded or Failed"
+Jan 10 17:37:50.439: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99 container client-container:
+STEP: delete the pod
+Jan 10 17:37:50.461: INFO: Waiting for pod downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99 to disappear
+Jan 10 17:37:50.462: INFO: Pod downwardapi-volume-0731bc7d-1e27-4a83-af39-4454d5e62a99 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:37:50.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-3311" for this suite.
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":277,"completed":110,"skipped":1991,"failed":0}
+SSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial]
+ validates that NodeSelector is respected if matching [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:37:50.470: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
+Jan 10 17:37:50.490: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jan 10 17:37:50.498: INFO: Waiting for terminating namespaces to be deleted...
+Jan 10 17:37:50.500: INFO:
+Logging pods the kubelet thinks is on node ip-172-20-33-172.ap-south-1.compute.internal before test
+Jan 10 17:37:50.505: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-tfj4x from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:37:50.505: INFO: Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:37:50.505: INFO: Container systemd-logs ready: true, restart count 0
+Jan 10 17:37:50.505: INFO: calico-node-vgdrq from kube-system started at 2021-01-10 16:58:19 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.505: INFO: Container calico-node ready: true, restart count 0
+Jan 10 17:37:50.505: INFO: kube-proxy-ip-172-20-33-172.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:44 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.505: INFO: Container kube-proxy ready: true, restart count 0
+Jan 10 17:37:50.505: INFO:
+Logging pods the kubelet thinks is on node ip-172-20-39-143.ap-south-1.compute.internal before test
+Jan 10 17:37:50.517: INFO: sonobuoy from sonobuoy started at 2021-01-10 17:08:58 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.517: INFO: Container kube-sonobuoy ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-zrwk8 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:37:50.517: INFO: Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: Container systemd-logs ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: kube-proxy-ip-172-20-39-143.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:29 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.517: INFO: Container kube-proxy ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: sonobuoy-e2e-job-5c46f38a56914321 from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:37:50.517: INFO: Container e2e ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: kube-dns-64f86fb8dd-ngh4q from kube-system started at 2021-01-10 17:12:23 +0000 UTC (3 container statuses recorded)
+Jan 10 17:37:50.517: INFO: Container dnsmasq ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: Container kubedns ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: Container sidecar ready: true, restart count 0
+Jan 10 17:37:50.517: INFO: calico-node-ldj9k from kube-system started at 2021-01-10 16:58:16 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.517: INFO: Container calico-node ready: true, restart count 0
+Jan 10 17:37:50.517: INFO:
+Logging pods the kubelet thinks is on node ip-172-20-52-46.ap-south-1.compute.internal before test
+Jan 10 17:37:50.528: INFO: kube-proxy-ip-172-20-52-46.ap-south-1.compute.internal from kube-system started at 2021-01-10 16:55:48 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.528: INFO: Container kube-proxy ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: kube-dns-64f86fb8dd-gdkpz from kube-system started at 2021-01-10 16:58:37 +0000 UTC (3 container statuses recorded)
+Jan 10 17:37:50.528: INFO: Container dnsmasq ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: Container kubedns ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: Container sidecar ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: sonobuoy-systemd-logs-daemon-set-511350556efd4097-sk6xf from sonobuoy started at 2021-01-10 17:09:05 +0000 UTC (2 container statuses recorded)
+Jan 10 17:37:50.528: INFO: Container sonobuoy-worker ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: Container systemd-logs ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: calico-node-nrg4h from kube-system started at 2021-01-10 16:58:13 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.528: INFO: Container calico-node ready: true, restart count 0
+Jan 10 17:37:50.528: INFO: kube-dns-autoscaler-cd7778b7b-c8mf6 from kube-system started at 2021-01-10 16:58:37 +0000 UTC (1 container statuses recorded)
+Jan 10 17:37:50.528: INFO: Container autoscaler ready: true, restart count 0
+[It] validates that NodeSelector is respected if matching [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Trying to launch a pod without a label to get a node which can launch it.
+STEP: Explicitly delete pod here to free the resource it takes.
+STEP: Trying to apply a random label on the found node.
+STEP: verifying the node has the label kubernetes.io/e2e-563c7004-9f1b-4c5a-9a26-17bd34ce022f 42
+STEP: Trying to relaunch the pod, now with labels.
+STEP: removing the label kubernetes.io/e2e-563c7004-9f1b-4c5a-9a26-17bd34ce022f off the node ip-172-20-33-172.ap-south-1.compute.internal
+STEP: verifying the node doesn't have the label kubernetes.io/e2e-563c7004-9f1b-4c5a-9a26-17bd34ce022f
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:37:54.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-6387" for this suite.
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
+•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":277,"completed":111,"skipped":2000,"failed":0}
+SSSSSS
+------------------------------
+[k8s.io] [sig-node] Events
+ should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] [sig-node] Events
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:37:54.586: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename events
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: retrieving the pod
+Jan 10 17:37:56.622: INFO: &Pod{ObjectMeta:{send-events-343e4f8a-9bad-41f7-8f61-4fd9c1ebe5db events-1166 /api/v1/namespaces/events-1166/pods/send-events-343e4f8a-9bad-41f7-8f61-4fd9c1ebe5db 58e8d591-4b55-4d51-b085-6086414a3374 15807 0 2021-01-10 17:37:54 +0000 UTC map[name:foo time:606335127] map[cni.projectcalico.org/podIP:100.108.158.179/32 cni.projectcalico.org/podIPs:100.108.158.179/32] [] [] [{e2e.test Update v1 2021-01-10 17:37:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 
34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {calico Update v1 2021-01-10 17:37:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 34 58 123 125 44 34 102 58 99 110 105 46 112 114 111 106 101 99 116 99 97 108 105 99 111 46 111 114 103 47 112 111 100 73 80 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-01-10 17:37:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 
108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 48 46 49 48 56 46 49 53 56 46 49 55 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6zrl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6zrl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6zrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinO
nce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-33-172.ap-south-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:37:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:37:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 17:37:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-10 
17:37:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.33.172,PodIP:100.108.158.179,StartTime:2021-01-10 17:37:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-10 17:37:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:docker-pullable://us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:docker://4c162e13781972491b138f3a72fff36ad8d285f1faf3a0718a901396bf7a4696,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.108.158.179,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+
+STEP: checking for scheduler event about the pod
+Jan 10 17:37:58.624: INFO: Saw scheduler event for our pod.
+STEP: checking for kubelet event about the pod
+Jan 10 17:38:00.627: INFO: Saw kubelet event for our pod.
+STEP: deleting the pod
+[AfterEach] [k8s.io] [sig-node] Events
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:38:00.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "events-1166" for this suite.
+
+• [SLOW TEST:6.055 seconds]
+[k8s.io] [sig-node] Events
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+ should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":277,"completed":112,"skipped":2006,"failed":0}
+SSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ works for multiple CRDs of same group but different versions [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:38:00.641: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] works for multiple CRDs of same group but different versions [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
+Jan 10 17:38:00.660: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
+Jan 10 17:38:18.364: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+Jan 10 17:38:27.075: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:38:45.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-5626" for this suite.
+
+• [SLOW TEST:44.453 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+ works for multiple CRDs of same group but different versions [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":277,"completed":113,"skipped":2014,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ should be able to deny attaching pod [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:38:45.095: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:38:45.426: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:38:48.441: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should be able to deny attaching pod [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Registering the webhook via the AdmissionRegistration API
+STEP: create a pod
+STEP: 'kubectl attach' the pod, should be denied by the webhook
+Jan 10 17:38:50.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-870154433 attach --namespace=webhook-3132 to-be-attached-pod -i -c=container1'
+Jan 10 17:38:50.563: INFO: rc: 1
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:38:50.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-3132" for this suite.
+STEP: Destroying namespace "webhook-3132-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+
+• [SLOW TEST:5.514 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+ should be able to deny attaching pod [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":277,"completed":114,"skipped":2096,"failed":0}
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] LimitRange
+ should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-scheduling] LimitRange
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:38:50.610: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename limitrange
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a LimitRange
+STEP: Setting up watch
+STEP: Submitting a LimitRange
+STEP: Verifying LimitRange creation was observed
+Jan 10 17:38:50.642: INFO: observed the limitRanges list
+STEP: Fetching the LimitRange to ensure it has proper values
+Jan 10 17:38:50.645: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
+Jan 10 17:38:50.645: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
+STEP: Creating a Pod with no resource requirements
+STEP: Ensuring Pod has resource requirements applied from LimitRange
+Jan 10 17:38:50.651: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
+Jan 10 17:38:50.651: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
+STEP: Creating a Pod with partial resource requirements
+STEP: Ensuring Pod has merged resource requirements applied from LimitRange
+Jan 10 17:38:50.658: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
+Jan 10 17:38:50.658: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
+STEP: Failing to create a Pod with less than min resources
+STEP: Failing to create a Pod with more than max resources
+STEP: Updating a LimitRange
+STEP: Verifying LimitRange updating is effective
+STEP: Creating a Pod with less than former min resources
+STEP: Failing to create a Pod with more than max resources
+STEP: Deleting a LimitRange
+STEP: Verifying the LimitRange was deleted
+Jan 10 17:38:57.681: INFO: limitRange is already deleted
+STEP: Creating a Pod with more than former max resources
+[AfterEach] [sig-scheduling] LimitRange
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:38:57.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "limitrange-3509" for this suite.
+
+• [SLOW TEST:7.089 seconds]
+[sig-scheduling] LimitRange
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
+ should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":277,"completed":115,"skipped":2111,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ should mutate custom resource with different stored version [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:38:57.699: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jan 10 17:38:58.814: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jan 10 17:39:00.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897138, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897138, loc:(*time.Location)(0x7b4a600)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897138, loc:(*time.Location)(0x7b4a600)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745897138, loc:(*time.Location)(0x7b4a600)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jan 10 17:39:03.833: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should mutate custom resource with different stored version [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:39:03.835: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4977-crds.webhook.example.com via the AdmissionRegistration API
+STEP: Creating a custom resource while v1 is storage version
+STEP: Patching Custom Resource Definition to set v2 as storage
+STEP: Patching the custom resource while v2 is storage version
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:09.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-6036" for this suite.
+STEP: Destroying namespace "webhook-6036-markers" for this suite.
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
+
+• [SLOW TEST:12.316 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+ should mutate custom resource with different stored version [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":277,"completed":116,"skipped":2148,"failed":0}
+SSSSSSSSSSSS
+------------------------------
+[sig-network] Services
+ should serve multiport endpoints from pods [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-network] Services
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:10.015: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
+[It] should serve multiport endpoints from pods [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: creating service multi-endpoint-test in namespace services-2158
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2158 to expose endpoints map[]
+Jan 10 17:39:10.054: INFO: Get endpoints failed (2.713676ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
+Jan 10 17:39:11.056: INFO: successfully validated that service multi-endpoint-test in namespace services-2158 exposes endpoints map[] (1.005400971s elapsed)
+STEP: Creating pod pod1 in namespace services-2158
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2158 to expose endpoints map[pod1:[100]]
+Jan 10 17:39:13.075: INFO: successfully validated that service multi-endpoint-test in namespace services-2158 exposes endpoints map[pod1:[100]] (2.013753984s elapsed)
+STEP: Creating pod pod2 in namespace services-2158
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2158 to expose endpoints map[pod1:[100] pod2:[101]]
+Jan 10 17:39:15.097: INFO: successfully validated that service multi-endpoint-test in namespace services-2158 exposes endpoints map[pod1:[100] pod2:[101]] (2.018747203s elapsed)
+STEP: Deleting pod pod1 in namespace services-2158
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2158 to expose endpoints map[pod2:[101]]
+Jan 10 17:39:16.111: INFO: successfully validated that service multi-endpoint-test in namespace services-2158 exposes endpoints map[pod2:[101]] (1.008179261s elapsed)
+STEP: Deleting pod pod2 in namespace services-2158
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2158 to expose endpoints map[]
+Jan 10 17:39:17.121: INFO: successfully validated that service multi-endpoint-test in namespace services-2158 exposes endpoints map[] (1.004963046s elapsed)
+[AfterEach] [sig-network] Services
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:17.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-2158" for this suite.
+[AfterEach] [sig-network] Services
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
+
+• [SLOW TEST:7.131 seconds]
+[sig-network] Services
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+ should serve multiport endpoints from pods [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+------------------------------
+{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":277,"completed":117,"skipped":2160,"failed":0}
+SSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
+ should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [k8s.io] Kubelet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:17.147: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
+[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[AfterEach] [k8s.io] Kubelet
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:19.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-4943" for this suite.
+•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":118,"skipped":2168,"failed":0}
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI
+ should provide podname only [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] Projected downwardAPI
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:19.200: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
+[It] should provide podname only [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test downward API volume plugin
+Jan 10 17:39:19.226: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36" in namespace "projected-7334" to be "Succeeded or Failed"
+Jan 10 17:39:19.228: INFO: Pod "downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36": Phase="Pending", Reason="", readiness=false. Elapsed: 1.828479ms
+Jan 10 17:39:21.231: INFO: Pod "downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004342318s
+STEP: Saw pod success
+Jan 10 17:39:21.231: INFO: Pod "downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36" satisfied condition "Succeeded or Failed"
+Jan 10 17:39:21.233: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36 container client-container:
+STEP: delete the pod
+Jan 10 17:39:21.248: INFO: Waiting for pod downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36 to disappear
+Jan 10 17:39:21.249: INFO: Pod downwardapi-volume-85275fd4-7f2f-4ead-9b95-cb074449bb36 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:21.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-7334" for this suite.
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":277,"completed":119,"skipped":2184,"failed":0}
+
+------------------------------
+[sig-storage] EmptyDir volumes
+ should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-storage] EmptyDir volumes
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:21.256: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+STEP: Creating a pod to test emptydir 0777 on node default medium
+Jan 10 17:39:21.283: INFO: Waiting up to 5m0s for pod "pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0" in namespace "emptydir-7988" to be "Succeeded or Failed"
+Jan 10 17:39:21.285: INFO: Pod "pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.756577ms
+Jan 10 17:39:23.288: INFO: Pod "pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004526863s
+STEP: Saw pod success
+Jan 10 17:39:23.288: INFO: Pod "pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0" satisfied condition "Succeeded or Failed"
+Jan 10 17:39:23.290: INFO: Trying to get logs from node ip-172-20-33-172.ap-south-1.compute.internal pod pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0 container test-container:
+STEP: delete the pod
+Jan 10 17:39:23.304: INFO: Waiting for pod pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0 to disappear
+Jan 10 17:39:23.306: INFO: Pod pod-cb764de1-d5e1-45de-b6e2-f278b7da4fb0 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
+Jan 10 17:39:23.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-7988" for this suite.
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":120,"skipped":2184,"failed":0}
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment
+ RecreateDeployment should delete old pods and create new ones [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+[BeforeEach] [sig-apps] Deployment
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
+STEP: Creating a kubernetes client
+Jan 10 17:39:23.315: INFO: >>> kubeConfig: /tmp/kubeconfig-870154433
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
+[It] RecreateDeployment should delete old pods and create new ones [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
+Jan 10 17:39:23.335: INFO: Creating deployment "test-recreate-deployment"
+Jan 10 17:39:23.341: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
+Jan 10 17:39:23.345: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
+Jan 10 17:39:25.350: INFO: Waiting deployment "test-recreate-deployment" to complete
+Jan 10 17:39:25.351: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
+Jan 10 17:39:25.358: INFO: Updating deployment test-recreate-deployment
+Jan 10 17:39:25.358: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
+[AfterEach] [sig-apps] Deployment
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
+Jan 10 17:39:25.409: INFO: Deployment "test-recreate-deployment":
+&Deployment{ObjectMeta:{test-recreate-deployment deployment-1940 /apis/apps/v1/namespaces/deployment-1940/deployments/test-recreate-deployment 0e932fdd-c28c-4bef-af43-8d7fcbf6358c 16507 2 2021-01-10 17:39:23 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-10 17:39:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 
114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-01-10 17:39:25 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 
102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029d7b18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-01-10 17:39:25 +0000 UTC,LastTransitionTime:2021-01-10 17:39:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet 
"test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2021-01-10 17:39:25 +0000 UTC,LastTransitionTime:2021-01-10 17:39:23 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
+
+Jan 10 17:39:25.411: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
+&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-1940 /apis/apps/v1/namespaces/deployment-1940/replicasets/test-recreate-deployment-d5667d9c7 7d0d35d3-5a78-4370-9ad3-213c478aee94 16505 1 2021-01-10 17:39:25 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 0e932fdd-c28c-4bef-af43-8d7fcbf6358c 0xc004f421f0 0xc004f421f1}] [] [{kube-controller-manager Update apps/v1 2021-01-10 17:39:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 101 57 51 50 102 100 100 45 99 50 56 99 45 52 98 101 102 45 97 102 52 51 45 56 100 55 102 99 98 102 54 51 53 56 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 
123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 
58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC