Querying /health causes traefik to crash eventually #1013
Comments
I think I encountered this as well, just now. Everything had been running completely fine until the very moment I opened the health page. There was nothing in the logs. I don't think it's simply a "stuck socket". Now, I am definitely not opening the "health" page ever again in production until this is fixed.
This seems like a regression. It was fixed in #458, but it has come back. That issue says the fix was to set the vendor pointer to thoas/stats@152b5d0 (Jul 26, 2016). However, looking at the current master, it points to thoas/stats@79b768f (Jul 18, 2016, i.e. the older revision that still has the bug). All vendor pointers are stored in glide.lock. Why was it downgraded? The downgrade traces back to commit dacde21 with the message "Fix glide.yml go-units", but that commit changes many more dependency versions than just go-units. Was this an error, @emilevauge?
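For anyone checking the pin locally, the exact revision the vendored package resolves to lives in the glide lock file (assuming the repository is still glide-managed, which the commit message above suggests), so a quick look is enough to confirm which commit master is on:

```sh
# Show the pinned revision of thoas/stats in glide.lock (assumes glide-managed vendoring)
grep -A 2 'thoas/stats' glide.lock
```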
I managed to replicate the error with @zinob's method. BTW, I love the clever use of yes. You can also pipe the output through something that counts the requests, to see how many it takes before the hang (a sketch follows).
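A sketch of one way to do that counting (the one-request-per-curl invocation, the -w format, and the awk counter are my own choices here, not necessarily what the original comment had in mind):

```sh
# Hammer /health, one curl per request, and print a progress line every
# 1000 completed requests so it is visible roughly how many it takes
# before Traefik stops answering. Slower than the one-liner above, but countable.
yes http://127.0.0.1:8080/health \
  | xargs -n 1 curl -s -o /dev/null -w '%{http_code}\n' \
  | awk '{ n++ } n % 1000 == 0 { print n, "requests, last status", $1 }'
```

The stream of status codes stops advancing the moment Traefik deadlocks, so the last printed count is a rough measure of how many requests the hang took.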
On Traefik hanging, some extra reading about debugging deadlocks in Golang: https://lincolnloop.com/blog/lesson-learned-while-debugging-botbotme/
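Relatedly, the quickest way to see where a hung Go process is stuck is a goroutine dump: the Go runtime responds to SIGQUIT by printing every goroutine's stack trace to stderr before exiting. The process and container names below are assumptions about the deployment:

```sh
# Ask the Go runtime for a full goroutine dump; the stacks usually show
# which mutex or channel the /health handler is blocked on.
kill -QUIT "$(pidof traefik)"

# If Traefik runs in a Docker container (assumed name: traefik),
# the dump ends up in the container logs instead:
docker kill --signal=QUIT traefik
docker logs --tail 200 traefik
```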
I just created a PR #1141 that fixes this issue. It will be released in the next RC :)
Fixed by #1141
What version of Traefik are you using (traefik version)?
Tested on 1.0.2 and 1.1.2.
What is your environment & configuration (arguments, toml...)?
Started using docker-compose with a traefik.yml somewhat like the sketch below.
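The original file isn't reproduced here; a minimal compose setup that exposes the admin API on :8080 (which is where /health is served) might look roughly like this. The image tag, flags, and mounts are assumptions, not the reporter's actual configuration:

```yaml
# docker-compose.yml - illustrative sketch only (assumes Traefik 1.1.x with the web backend)
version: "2"
services:
  traefik:
    image: traefik:1.1.2
    command: --web --web.address=:8080 --docker
    ports:
      - "80:80"
      - "127.0.0.1:8080:8080"   # admin UI / API, including /health
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```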
What did you do?
I discovered this because Traefik stopped responding while I was using collectd's curl_json plugin to gather usage statistics from 127.0.0.1:8080/health.
In that setup it hangs after about 10 days, roughly 14,400 queries (about one query per minute).
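For context, polling /health from collectd's curl_json plugin looks roughly like this. It is a sketch rather than the actual production config, and the JSON key name total_count is an assumption about the /health payload:

```
# collectd snippet - illustrative sketch; "total_count" is an assumed key in the /health JSON
LoadPlugin curl_json
<Plugin curl_json>
  <URL "http://127.0.0.1:8080/health">
    Instance "traefik"
    <Key "total_count">
      Type "gauge"
    </Key>
  </URL>
</Plugin>
```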
It can be provoked in a few minutes on my laptop with:
yes http://127.0.0.1:8080/health | xargs curl -s > /dev/null
Strangely, it then takes roughly 300k-700k queries. No excessive RAM usage was detected, and this does not happen if I request http://127.0.0.1:8080/ or http://127.0.0.1/foo.
What did you expect to see?
A JSON answer.
What did you see instead?
A stuck socket and a complete failure to serve web pages.