Describe the bug:
I'm trying to add auto-instrumentation to Trino, and for the most part it is working. I can see and collect traces for a bunch of different libraries, and adding the @WithSpan annotation creates spans as well.
However, the traces created have a different TraceID on each node in the cluster (I am running Trino on Kubernetes with multiple nodes), even though they should all be connected, since I only run one query.
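For reference, a minimal sketch of the kind of @WithSpan usage described above (the class and method names are illustrative, not taken from the Trino code base):

import io.opentelemetry.extension.annotations.WithSpan;

public class ExampleService {

    // With the javaagent attached, calling this method produces a span named
    // "ExampleService.loadCatalog" under whatever trace context is current.
    @WithSpan
    public String loadCatalog(String catalogName) {
        return "catalog: " + catalogName;
    }
}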
I suspect JettyHttpClient9TracingInterceptor (see the stack trace below) is to blame: it shows up all over the logs of every node in the cluster, and it started appearing as soon as I added opentelemetry-javaagent to the JVM config.
2021-10-26T18:26:06.410Z INFO http-client-node-manager-60 org.eclipse.jetty.client.ResponseNotifier Exception while notifying listener org.eclipse.jetty.client.HttpRequest$2@14653b77
java.lang.NullPointerException
at io.opentelemetry.javaagent.shaded.instrumentation.jetty.httpclient.v9_2.internal.JettyHttpClient9TracingInterceptor.onBegin(JettyHttpClient9TracingInterceptor.java:139)
at org.eclipse.jetty.client.HttpRequest$2.onBegin(HttpRequest.java:414)
at org.eclipse.jetty.client.RequestNotifier.notifyBegin(RequestNotifier.java:91)
at org.eclipse.jetty.client.RequestNotifier.notifyBegin(RequestNotifier.java:77)
at org.eclipse.jetty.client.HttpSender.queuedToBegin(HttpSender.java:229)
at org.eclipse.jetty.client.HttpSender.send(HttpSender.java:172)
at org.eclipse.jetty.client.http.HttpChannelOverHTTP.send(HttpChannelOverHTTP.java:84)
at org.eclipse.jetty.client.HttpChannel.send(HttpChannel.java:125)
at org.eclipse.jetty.client.HttpConnection.send(HttpConnection.java:241)
at org.eclipse.jetty.client.http.HttpConnectionOverHTTP$Delegate.send(HttpConnectionOverHTTP.java:269)
at org.eclipse.jetty.client.http.HttpConnectionOverHTTP.send(HttpConnectionOverHTTP.java:125)
at org.eclipse.jetty.client.http.HttpDestinationOverHTTP.send(HttpDestinationOverHTTP.java:38)
at org.eclipse.jetty.client.HttpDestination.process(HttpDestination.java:377)
at org.eclipse.jetty.client.HttpDestination.process(HttpDestination.java:332)
at org.eclipse.jetty.client.HttpDestination.send(HttpDestination.java:311)
at org.eclipse.jetty.client.HttpDestination.succeeded(HttpDestination.java:246)
at org.eclipse.jetty.client.AbstractConnectionPool.proceed(AbstractConnectionPool.java:313)
at org.eclipse.jetty.client.AbstractConnectionPool$FutureConnection.succeeded(AbstractConnectionPool.java:533)
at org.eclipse.jetty.client.AbstractConnectionPool$FutureConnection.succeeded(AbstractConnectionPool.java:511)
at org.eclipse.jetty.util.Promise$Wrapper.succeeded(Promise.java:130)
at org.eclipse.jetty.client.http.HttpConnectionOverHTTP.onOpen(HttpConnectionOverHTTP.java:133)
at org.eclipse.jetty.io.SelectorManager.connectionOpened(SelectorManager.java:324)
at org.eclipse.jetty.io.ManagedSelector.createEndPoint(ManagedSelector.java:402)
at org.eclipse.jetty.io.ManagedSelector.access$2100(ManagedSelector.java:65)
at org.eclipse.jetty.io.ManagedSelector$CreateEndPoint.run(ManagedSelector.java:1069)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
at java.base/java.lang.Thread.run(Thread.java:829)
Steps to reproduce:
Add -javaagent:/usr/lib/trino/lib/opentelemetry-javaagent-1.7.0.jar to the Trino JVM config and try to run it.
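For reference, a sketch of the Trino etc/jvm.config with the agent flag added (the surrounding flags are illustrative defaults; only the -javaagent line is specific to this report):

-server
-Xmx16G
-XX:+UseG1GC
-XX:+ExplicitGCInvokesConcurrent
-XX:+ExitOnOutOfMemoryError
-XX:+HeapDumpOnOutOfMemoryError
-javaagent:/usr/lib/trino/lib/opentelemetry-javaagent-1.7.0.jar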
I built an example GitHub Actions workflow to demonstrate:
https://github.com/przemekak/trino/blob/przemekak/opentelemetry-bug/.github/workflows/ci.yml
In an example run of this workflow:
https://github.com/przemekak/trino/runs/4024970957?check_suite_focus=true
in the Run trino step you can see the mentioned NullPointerException all over the Trino server logs.
What did you expect to see?
No errors in the logs, and the same TraceID on all traces coming from a single query (across different nodes).
What did you see instead?
The error mentioned above, spammed across all pods in the deployment.
A different TraceID for one query; each node in the cluster produces a new one.
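For clarity, a minimal sketch (illustrative, not actual Trino code) of how the current trace ID can be read with the OpenTelemetry API on each node; with working context propagation, this value would be the same across coordinator and workers for a single query:

import io.opentelemetry.api.trace.Span;

public class TraceIdProbe {

    // Prints the trace ID of the span that is current on this node; when
    // propagation is broken, each node prints a different value for the same query.
    public static void logCurrentTraceId() {
        String traceId = Span.current().getSpanContext().getTraceId();
        System.out.println("current traceId=" + traceId);
    }
}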
What version are you using?
1.7.0