Panic & CrashLoopBackoff when applying multiple non-matching canaries #3838


Closed
perprogramming opened this issue Mar 4, 2019 · 0 comments · Fixed by #3839
@perprogramming (Contributor):
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

NGINX Ingress controller version: 0.23.0

Kubernetes version (use kubectl version): irrelevant

Environment: irrelevant

What happened:
We managed to crash the ingress controller completely just by adding a canary ingress. In a production environment, this takes down all ingress traffic!
We applied an ingress rule with the canary annotation that contained multiple rules/paths and had no matching backend. Because the ingress controller could not find a matching real backend, it removed the alternative backend from the upstreams. Since the canary ingress contained multiple rules/paths, the controller then tried to remove the alternative backend from the upstreams a second time, leading to a panic and ultimately to a CrashLoopBackoff of all our ingress controllers:

I0222 07:25:07.963194       8 nginx.go:288] Starting NGINX process
W0222 07:25:07.965543       8 controller.go:1221] unable to find real backend for alternative backend homepage-app-80. Deleting.
E0222 07:25:07.966247       8 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:522
/usr/local/go/src/runtime/panic.go:513
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:1135
/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:1212
/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:574
/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:124
/go/src/k8s.io/ingress-nginx/internal/ingress/controller/nginx.go:135
/go/src/k8s.io/ingress-nginx/internal/task/queue.go:129
/go/src/k8s.io/ingress-nginx/internal/task/queue.go:61
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/go/src/k8s.io/ingress-nginx/internal/task/queue.go:61
/usr/local/go/src/runtime/asm_amd64.s:1333
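To make the failure mode concrete, here is a minimal Go sketch of the double-removal pattern described above. All names (`Upstream`, `removeAlternative`) are hypothetical stand-ins, not the controller's actual types; the sketch shows the defensive lookup that makes a second removal a no-op instead of a nil pointer dereference.

```go
package main

import "fmt"

// Upstream is a hypothetical stand-in for the controller's upstream type.
type Upstream struct {
	Name string
}

// removeAlternative deletes an alternative backend that has no matching
// real backend. The ok/nil guard is the key point: when a canary ingress
// has several rules/paths, this function is called once per path, and
// every call after the first must tolerate the entry already being gone.
func removeAlternative(upstreams map[string]*Upstream, name string) {
	up, ok := upstreams[name]
	if !ok || up == nil {
		return // already removed by a previous rule/path
	}
	delete(upstreams, name)
}

func main() {
	upstreams := map[string]*Upstream{
		"homepage-app-80": {Name: "homepage-app-80"},
	}
	// Two paths in the canary ingress mean removal is attempted twice.
	removeAlternative(upstreams, "homepage-app-80")
	removeAlternative(upstreams, "homepage-app-80")
	fmt.Println(len(upstreams)) // prints 0, without panicking
}
```

Without the guard, the second call would dereference a nil `*Upstream`, matching the "invalid memory address or nil pointer dereference" panic in the log above.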

What you expected to happen:
The ingress controller should only try to remove the alternative backend once (or tolerate repeated removal attempts) instead of panicking.

How to reproduce it (as minimally and precisely as possible):
Simply apply a canary ingress rule with two paths that both do not match any real backend.
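A reproduction manifest could look roughly like the following. This is a sketch, not the reporter's exact resource: the host, path, and service names are illustrative, and `extensions/v1beta1` is the Ingress API in use around controller version 0.23.0. The important part is that both paths reference a service that does not exist.

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /foo
            backend:
              serviceName: nonexistent-service   # no matching real backend
              servicePort: 80
          - path: /bar
            backend:
              serviceName: nonexistent-service   # second path triggers the second removal
              servicePort: 80
```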

Anything else we need to know:
I will provide a PR including a fix and a test in a minute...
