query: v0.29.0 OpenTracing config creates Go panic (SIGSEGV) #5872
Comments
If I don't remember the default behavior of the Jaeger Go SDK, if ...
I'll look into this on the weekend, apologies for the crash.
Thank you! Also, if we need to set a sampler type, it would be great to update the Jaeger/Thanos doc we followed :)
Hey @ahurtaud, sorry about that. I thought I had flagged this during review, but it looks like I didn't. If remote was the default before, it should be the default now as well.
I created #5887 to correct this. I think we could even go with a patch release, so that if more folks hit this they can just upgrade to a patch version.
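For anyone hitting this before a patched release is out, one possible workaround implied by the discussion above is to set the sampler explicitly instead of relying on the SDK default. Below is a minimal sketch of a Jaeger tracing config for the querier; the field names are assumed from the Thanos Jaeger tracing documentation, and the service name, host, and port are placeholders, not the reporter's actual config:

```yaml
# Hypothetical sketch, not the reporter's config.
# Field names assumed from the Thanos Jaeger tracing docs.
type: JAEGER
config:
  service_name: thanos-query
  # Pick a sampler explicitly rather than relying on the default;
  # "const" with param 1 samples every trace.
  sampler_type: const
  sampler_param: 1
  # Jaeger agent to send spans to (placeholder host/port).
  agent_host: jaeger-agent.tracing.svc.cluster.local
  agent_port: 6831
```

This would be passed to the querier via `--tracing.config-file` (or inline via `--tracing.config`).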
Thanos, Prometheus and Golang version used:
thanos v0.29.0
What happened:
While upgrading Thanos from v0.28.1 to v0.29.0, all our Thanos queriers went into CrashLoop because of a segmentation violation (see logs below).
What you expected to happen:
As #5411 states, there should be no breaking change.
Removing the tracing config from the specs fixes the pod (but removes tracing :) ).
How to reproduce it (as minimally and precisely as possible):
Hard to tell; on our setup, with the working OpenTracing config above, we just upgraded the Docker image version... :/
I am not sure about the health of my Jaeger installation though; maybe it only panics when the OTel target is down. Still, this should not break the Thanos process if that happens :)
Full logs to relevant components: