NGINX fails to start with erroneous errors if snippets are in Ingress objects but snippets are set to false #7844

Closed
ThatAIXGuy opened this issue Oct 24, 2021 · 7 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@ThatAIXGuy

k exec -it std-ingress-nginx-controller-5578cfcc8f-5bw7d -n std-ingress -- /nginx-ingress-controller --version

NGINX Ingress controller
Release: v0.49.3
Build: 7ee28f4
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9

Kubernetes version (use kubectl version):
v1.17.17+vmware.1

Environment:
On Prem / VMWare / PKS 1.9
Linux d75ce9bb-3c60-49ec-aab1-0c41f22e8b57 4.15.0-136-generic #140~16.04.1-Ubuntu SMP Wed Feb 3 18:51:03 UTC 2021 x86_64 GNU/Linux

  • How was the ingress-nginx-controller installed:
    HELM / CICD

  • Current State of the controller:
    kg pods -n std-ingress
    NAME READY STATUS RESTARTS AGE
    echoserver-5785b8b877-6pj77 1/1 Running 0 33d
    std-ingress-nginx-admission-create-lrbhg 0/1 Completed 0 42h
    std-ingress-nginx-admission-patch-5nnlz 0/1 Completed 0 42h
    std-ingress-nginx-controller-58cd7b7bfb-h7q5k 0/1 Running 3 4m16s
    std-ingress-nginx-controller-58cd7b7bfb-kztts 0/1 Running 3 4m16s
    std-ingress-nginx-controller-58cd7b7bfb-zfsxf 0/1 Running 3 4m16s
    std-ingress-nginx-controller-64464f869-mvn97 1/1 Running 3 41h
    std-ingress-nginx-controller-64464f869-n6dgq 1/1 Running 3 41h
    std-ingress-nginx-defaultbackend-848cfcbd4d-l8zbx 1/1 Running 0 42h
    std-ingress-nginx-defaultbackend-848cfcbd4d-ngs89 1/1 Running 0 42h

    --

kd pods -n std-ingress std-ingress-nginx-controller-7b4694c9f7-2p6gn
Name:           std-ingress-nginx-controller-7b4694c9f7-2p6gn
Namespace:      std-ingress
Priority:       0
Node:           ba4bfbc1-f0c6-40ae-b35c-0b25c52c8f5c/10.13.171.44
Start Time:     Sun, 24 Oct 2021 11:20:17 -0700
Labels:         app.kubernetes.io/component=controller
                app.kubernetes.io/instance=std-ingress-nginx
                app.kubernetes.io/name=ingress-nginx
                k8s-app=std-ingress-ctlr
                pod-template-hash=7b4694c9f7
Annotations:    kubectl.kubernetes.io/restartedAt: 2021-10-24T11:20:16-07:00
Status:         Running
IP:             198.18.120.24
IPs:
  IP:  198.18.120.24
Controlled By:  ReplicaSet/std-ingress-nginx-controller-7b4694c9f7
Containers:
  controller:
    Container ID:  docker://b33ffe3d880e9b0c0fd14c2e84b98e4529bb95f3aeb26ce0466f8c1f0d0137b3
    Image:         harbor.geo.pks.foo.com/pie/ingress-nginx/controller:v0.49.3
    Image ID:      docker-pullable://harbor.geo.pks.foo.com/pie/ingress-nginx/controller@sha256:3ae44a7f6410879f8983f2b4772105d07d0a3ab381dc9fb40e3b7b4dd1cca1e9
    Ports:         80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/std-ingress-nginx-defaultbackend
      --publish-service=$(POD_NAMESPACE)/std-ingress-nginx-controller
      --election-id=std-ingress-controller-leader
      --ingress-class=std-ingress-class
      --configmap=$(POD_NAMESPACE)/std-ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --default-backend-service=std-ingress/std-ingress-nginx-defaultbackend
      --default-ssl-certificate=std-ingress/std-ingress-secret
    State:          Running
      Started:      Sun, 24 Oct 2021 11:20:18 -0700
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  8Gi
    Requests:
      cpu:     500m
      memory:  1200Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       std-ingress-nginx-controller-7b4694c9f7-2p6gn (v1:metadata.name)
      POD_NAMESPACE:  std-ingress (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from std-ingress-nginx-token-dckh4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  std-ingress-nginx-admission
    Optional:    false
  std-ingress-nginx-token-dckh4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  std-ingress-nginx-token-dckh4
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From                      Message
  ----     ------     ----  ----                      -------
Normal Scheduled 62s default-scheduler Successfully assigned std-ingress/std-ingress-nginx-controller-7b4694c9f7-2p6gn to ba4bfbc1-f0c6-40ae-b35c-0b25c52c8f5c
Normal Pulled 61s kubelet Container image "harbor.geo.pks.foo.com/pie/ingress-nginx/controller:v0.49.3" already present on machine
Normal Created 61s kubelet Created container controller
Normal Started 61s kubelet Started container controller
Warning RELOAD 59s nginx-ingress-controller Error reloading NGINX:

Error: exit status 1
2021/10/24 18:20:20 [warn] 34#34: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1034381619:149
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1034381619:149
2021/10/24 18:20:20 [warn] 34#34: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1034381619:150
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1034381619:150
2021/10/24 18:20:20 [warn] 34#34: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg1034381619:151
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg1034381619:151
2021/10/24 18:20:20 [emerg] 34#34: unknown "auth_resp_failcount" variable
nginx: [emerg] unknown "auth_resp_failcount" variable
nginx: configuration file /tmp/nginx-cfg1034381619 test failed


[The same "Error reloading NGINX" RELOAD warning repeated every 3-4 seconds (at 56s, 52s, 49s, 46s, 42s, 39s, 36s, and 32s), each time with the same three obsolete-directive warnings and the same `[emerg] unknown "auth_resp_failcount" variable` failure against a fresh /tmp/nginx-cfg* file; the identical output is elided here.]

Warning Unhealthy 6s (x5 over 46s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 500
Normal Killing 6s kubelet Container controller failed liveness probe, will be restarted
Warning RELOAD 6s (x8 over 29s) nginx-ingress-controller (combined from similar events): Error reloading NGINX:

Error: exit status 1
2021/10/24 18:21:13 [warn] 50#50: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1839753172:149
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1839753172:149
2021/10/24 18:21:13 [warn] 50#50: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1839753172:150
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1839753172:150
2021/10/24 18:21:13 [warn] 50#50: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg1839753172:151
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg1839753172:151
2021/10/24 18:21:13 [emerg] 50#50: unknown "auth_resp_failcount" variable
nginx: [emerg] unknown "auth_resp_failcount" variable
nginx: configuration file /tmp/nginx-cfg1839753172 test failed

  • Current state of ingress object, if applicable:

kg ingress -n namespace-tst -o yaml

apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      ingress.kubernetes.io/use-port-in-redirects: "true"
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/use-port-in-redirects":"true","kubernetes.io/ingress.class":"std-ingress-class"},"name":"wordpress-ingress","namespace":"namespace-tst"},"spec":{"rules":[{"host":"wordpress.px-snd1101.pks.foo.com","http":{"paths":[{"backend":{"serviceName":"wordpress","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["wordpress.px-snd1101.pks.foo.com"]}]}}
      kubernetes.io/ingress.class: std-ingress-class
      nginx.ingress.kubernetes.io/configuration-snippet: |
        if ($req_id ~ "([a-z0-9]{16,})([a-z0-9]{16,})") {
          set $span_id $1;
        }
        proxy_set_header x-b3-traceid $span_id;
        proxy_set_header x-b3-spanid $span_id;
        proxy_set_header b3 "";
    creationTimestamp: "2021-10-24T18:18:48Z"
    generation: 1
    name: wordpress-ingress
    namespace: namespace-tst
    resourceVersion: "1025014262"
    selfLink: /apis/extensions/v1beta1/namespaces/namespace-tst/ingresses/wordpress-ingress
    uid: 4ad9677a-0ec4-4892-b76e-05932388679f
  spec:
    rules:
    - host: cwordpress.px-snd1101.pks.foo.com
      http:
        paths:
        - backend:
            serviceName: wordpress
            servicePort: 80
          path: /
    tls:
    - hosts:
      - wordpress.px-snd1101.pks.foo.com
  status:
    loadBalancer:
      ingress:
      - ip: 198.19.244.74
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Configmap:

apiVersion: v1
data:
  allow-snippet-annotations: "false"
  compute-full-forwarded-for: "true"
  enable-underscores-in-headers: "true"
  large-client-header-buffers: 8 64k
  log-format-upstream: $proxy_protocol_addr - [$remote_addr] - $remote_user [$time_local]
    "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length
    $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length
    $upstream_response_time $upstream_status $req_id $host [$proxy_add_x_forwarded_for]
  proxy-buffer-size: 16k
  ssl-ciphers: ALL
  ssl-protocols: TLSv1.1 TLSv1.2 TLSv1 TLSv1.3
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: std-ingress-nginx
    meta.helm.sh/release-namespace: std-ingress
  creationTimestamp: "2020-09-04T19:23:42Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: std-ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.49.3
    helm.sh/chart: ingress-nginx-3.39.0
  name: std-ingress-nginx-controller
  namespace: std-ingress

  • Others:
    • Snippet:

      nginx.ingress.kubernetes.io/configuration-snippet: |
        if ($req_id ~ "([a-z0-9]{16,})([a-z0-9]{16,})") {
          set $span_id $1;
        }
        proxy_set_header x-b3-traceid $span_id;
        proxy_set_header x-b3-spanid $span_id;
        proxy_set_header b3 "";
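For what it's worth, the capture logic in this snippet can be sanity-checked outside NGINX. A minimal sketch, assuming a 32-character lowercase-hex `$req_id` and using `sed` as a stand-in for NGINX's PCRE matching (an anchored fixed-length variant of the snippet's `{16,}` quantifiers, hypothetical helper, not NGINX itself):

```shell
# Stand-in for the snippet's regex: NGINX matches $req_id against
# "([a-z0-9]{16,})([a-z0-9]{16,})" and assigns the first capture group
# to $span_id. sed emulates that capture on a sample 32-char request ID.
req_id="0123456789abcdef0123456789abcdef"
span_id=$(printf '%s' "$req_id" | sed -E 's/^([a-z0-9]{16})([a-z0-9]{16})$/\1/')
echo "$span_id"   # first half of the request ID
```

On a 32-character ID the greedy `{16,}` groups in the real snippet split the ID the same way: the first group backtracks until the second can still consume its minimum of 16 characters.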

What happened:

The NEW NGINX pods fail to start; the old ones remain up (due to MinPods).
The new pods fail to start with the errors below. The obsolete-directive warnings refer to options that are valid per https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/ and are not related to the problem at hand, which is the allow-snippet-annotations setting combined with the snippet in the Ingress object.

Error: exit status 1
2021/10/24 18:20:20 [warn] 34#34: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1034381619:149
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1034381619:149
2021/10/24 18:20:20 [warn] 34#34: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1034381619:150
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg1034381619:150
2021/10/24 18:20:20 [warn] 34#34: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg1034381619:151
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg1034381619:151
2021/10/24 18:20:20 [emerg] 34#34: unknown "auth_resp_failcount" variable
nginx: [emerg] unknown "auth_resp_failcount" variable
nginx: configuration file /tmp/nginx-cfg1034381619 test failed

What you expected to happen:
I would expect NGINX to start but simply NOT apply the snippet in the existing Ingress object. Or I would expect a clear error message stating that a snippet is in place while snippets are not allowed.

How to reproduce it:
Run an older version of ingress-nginx such as 0.46.0.
Create an Ingress object with a snippet configuration.
Upgrade to 0.49.3, setting allow-snippet-annotations: "false".
(I have tested this on two separate clusters.)
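Before flipping allow-snippet-annotations to "false", it may help to list every Ingress still carrying a snippet annotation. A minimal sketch: the `find_snippets` helper and the sample dump file are hypothetical; against a real cluster you would feed it the output of `kubectl get ingress -A -o yaml`:

```shell
# Hypothetical helper: grep an Ingress YAML dump for any
# nginx.ingress.kubernetes.io/*-snippet annotation key.
find_snippets() {
  grep -n 'nginx\.ingress\.kubernetes\.io/[a-z-]*snippet' "$@"
}

# Sample fragment mimicking the annotation from this report:
cat > /tmp/ingress-dump.yaml <<'EOF'
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header b3 "";
EOF

find_snippets /tmp/ingress-dump.yaml
```

Any hit is an Ingress that would be rendered with a snippet, so it should be cleaned up (or the setting left at "true") before the upgrade.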


Anything else we need to know:
This does not resolve itself even if you remove the offending Ingress object.
Delete (kubectl delete) the bad Ingress carrying the snippet and rollout-restart the pods; the NGINX controller pods will still fail to start.

Change the ConfigMap to
allow-snippet-annotations: "true"

Rollout-restart the controller pods, and they come up again.
You can then safely change the setting back to "false" and things keep working (assuming you don't have any other snippet-bearing Ingresses in the cluster).
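The recovery sequence above can be sketched as follows. This is a hedged sketch, not a definitive fix: the ConfigMap and Deployment names are taken from this report (the Deployment name in particular is an assumption based on the pod names), and the kubectl calls are guarded so the script is a no-op without a cluster:

```shell
# Re-enable snippets, then restart the controller pods, per the workaround
# described above. Resource names assume this report's install; adjust yours.
PATCH='{"data":{"allow-snippet-annotations":"true"}}'
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch configmap std-ingress-nginx-controller -n std-ingress \
    --type merge -p "$PATCH"
  kubectl rollout restart deployment std-ingress-nginx-controller -n std-ingress
else
  echo "kubectl not found; run the commands above against your cluster"
fi
```

Once the controllers are Ready again, the same `kubectl patch` with `"false"` reverts the setting.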

/kind bug

@ThatAIXGuy ThatAIXGuy added the kind/bug Categorizes issue or PR as related to a bug. label Oct 24, 2021
@k8s-ci-robot
Contributor

@ThatAIXGuy: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Oct 24, 2021
@ThatAIXGuy
Author

Note this was an upgrade from 0.46 to 0.49.
I'm seeing other errors in the namespace events as well with this 0.49.3 version:

Error: exit status 1
2021/10/29 23:28:45 [emerg] 5448#5448: unknown directive "me" in /tmp/nginx-cfg975307894:1
nginx: [emerg] unknown directive "me" in /tmp/nginx-cfg975307894:1
nginx: configuration file /tmp/nginx-cfg975307894 test failed


and this one where we still left the snippet enabled (these are valid options, though I'm only setting one of them, and our SSL key is valid, correctly formatted, and works):

Error: exit status 1
2021/10/29 06:39:44 [warn] 49#49: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg574671440:149
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg574671440:149
2021/10/29 06:39:44 [warn] 49#49: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg574671440:150
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg574671440:150
2021/10/29 06:39:44 [warn] 49#49: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg574671440:151
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg574671440:151
2021/10/29 06:39:44 [emerg] 49#49: cannot load certificate "/etc/ingress-controller/ssl/std-ingress-std-ingress-secret.pem": PEM_read_bio_X509() failed (SSL: error:0908F066:PEM routines:get_header_and_data:bad end line)
nginx: [emerg] cannot load certificate "/etc/ingress-controller/ssl/std-ingress-std-ingress-secret.pem": PEM_read_bio_X509() failed (SSL: error:0908F066:PEM routines:get_header_and_data:bad end line)
nginx: configuration file /tmp/nginx-cfg574671440 test failed


@brsolomon-deloitte

You can also trigger this with:

    nginx.ingress.kubernetes.io/proxy-redirect-from: '/(.*)$'
    nginx.ingress.kubernetes.io/proxy-redirect-to: '/app1/$1$'

Result:

admission webhook "validate.nginx.ingress.kubernetes.io" denied the request:
-------------------------------------------------------------------------------
Error: exit status 1
2021/12/02 18:21:52 [warn] 4346#4346: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg2390481190:143
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg2390481190:143
2021/12/02 18:21:52 [warn] 4346#4346: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg2390481190:144
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx-cfg2390481190:144
2021/12/02 18:21:52 [warn] 4346#4346: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg2390481190:145
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx-cfg2390481190:145
2021/12/02 18:21:52 [emerg] 4346#4346: invalid variable name in /tmp/nginx-cfg2390481190:407
nginx: [emerg] invalid variable name in /tmp/nginx-cfg2390481190:407
nginx: configuration file /tmp/nginx-cfg2390481190 test failed
-------------------------------------------------------------------------------

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 1, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
