Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
NGINX Ingress controller version: 0.23.0
Kubernetes version (use kubectl version): irrelevant
Environment: irrelevant
What happened:
We managed to completely crash the ingress controller simply by adding a canary ingress. Done in a production environment, this takes all ingress traffic down!
We applied an ingress rule with the canary annotation that contained multiple rules/paths and had no matching backend. Because the ingress controller could not find a matching real backend, it removed the alternative backend from the upstreams. Since the canary ingress contained multiple rules/paths, the controller tried to remove the alternative from the upstreams again, leading to a panic and ultimately to a CrashLoopBackOff of all our ingress controllers:
I0222 07:25:07.963194 8 nginx.go:288] Starting NGINX process
W0222 07:25:07.965543 8 controller.go:1221] unable to find real backend for alternative backend homepage-app-80. Deleting.
E0222 07:25:07.966247 8 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:522
/usr/local/go/src/runtime/panic.go:513
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:1135
/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:1212
/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:574
/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:124
/go/src/k8s.io/ingress-nginx/internal/ingress/controller/nginx.go:135
/go/src/k8s.io/ingress-nginx/internal/task/queue.go:129
/go/src/k8s.io/ingress-nginx/internal/task/queue.go:61
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/go/src/k8s.io/ingress-nginx/internal/task/queue.go:61
/usr/local/go/src/runtime/asm_amd64.s:1333
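For clarity, here is a minimal, self-contained Go sketch of that failure pattern. It is illustrative only, not the actual controller code: deleting the same alternative backend from the upstream map once per path and then dereferencing the lookup result panics on the second path, with the same "invalid memory address or nil pointer dereference" seen in the trace above.

```go
package main

import "fmt"

// upstream stands in for the controller's internal backend representation
// (illustrative type, not the real one).
type upstream struct{ name string }

func main() {
	// One alternative (canary) backend, keyed the way it appears in the log.
	upstreams := map[string]*upstream{
		"homepage-app-80": {name: "homepage-app-80"},
	}

	// Two canary paths both reference the same alternative backend.
	paths := []string{"/", "/api"}

	for range paths {
		alt := upstreams["homepage-app-80"]
		// No matching real backend exists, so the alternative is deleted.
		delete(upstreams, "homepage-app-80")
		// On the second iteration the lookup above returned nil, and this
		// dereference panics with a nil pointer dereference.
		fmt.Println(alt.name)
	}
}
```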
What you expected to happen:
The ingress controller should only try to remove the alternative backend once.
How to reproduce it (as minimally and precisely as possible):
Load a canary ingress rule with two paths, neither of which matches any real backend, for example:
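A manifest along these lines triggers it (the names, host, and paths are placeholders; the crucial point is that no non-canary Ingress exists for the same host/paths, so the controller cannot find a real backend for the alternative backend of either path):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: homepage-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: homepage-app
          servicePort: 80
      - path: /api
        backend:
          serviceName: homepage-app
          servicePort: 80
```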
Anything else we need to know:
I will provide a PR including a fix and a test in a minute...