
sentry-tracing can significantly slow down instrumented code if the TransportThread can't keep up #543

Closed
FSMaxB opened this issue Feb 2, 2023 · 1 comment · Fixed by #546
Labels: bug, tracing (also known as performance monitoring, stuff like transactions and spans)

Comments

FSMaxB (Contributor) commented Feb 2, 2023

Environment

sentry 0.29.1 with the features contexts, panic, reqwest, and rustls
sentry-tower 0.29.1 with the feature http
sentry-tracing 0.29.1 with default features

Steps to Reproduce

This is easiest to reproduce with traces_sample_rate set to 1.0, but it will eventually happen with lower sample rates as well. A minimal reproduction sketch follows the steps below.

  1. Configure the DSN to something that will cause requests to run into a timeout.
  2. Instrument your code with tracing and register the sentry-tracing layer.
  3. Run your code in such a way that it produces 32 or more traces within a few seconds.
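
For illustration, here is a minimal reproduction sketch. It assumes the tracing and tracing-subscriber crates next to sentry and sentry-tracing; the DSN is a made-up placeholder that you would have to point at a host which accepts connections but never responds:

    // Minimal reproduction sketch (illustrative only).
    use tracing_subscriber::prelude::*;

    fn main() {
        let _guard = sentry::init((
            // Placeholder DSN: point it at a host that accepts TCP connections
            // but never answers, so every upload runs into the request timeout.
            "https://public@unreachable.example/1",
            sentry::ClientOptions {
                traces_sample_rate: 1.0, // sample every transaction
                ..Default::default()
            },
        ));

        tracing_subscriber::registry()
            .with(sentry_tracing::layer())
            .init();

        // Produce well over 32 transactions in quick succession.
        for i in 0..64 {
            let span = tracing::info_span!("instrumented_work", iteration = i);
            let _entered = span.enter();
            // ... the actual instrumented work would happen here ...
        }
    }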

Expected Result

The overhead from the sentry tracing layer should be minimal.

Actual Result

The instrumented code blocks, waiting for the traces to be sent to sentry.

Detailed explanation

I've spent almost a week tracking down hanging integration tests after moving a test machine from one location to another.

I ultimately found out that we were configuring a DSN of https://[email protected]/1 in our integration tests. That is, of course, not a valid DSN, but it had worked fine so far because it had always caused a connection error immediately. Only in the particular network the machine was moved to did it not fail immediately; instead the connection only errored out after a timeout of somewhere between ~30s and ~90s. This led to a channel in sentry filling up and the integration tests essentially hanging for hours in CI.

The channel in question is used in TransportThread here:

let (sender, receiver) = sync_channel(30);

It is a completely synchronous, bounded channel with a capacity of 30. That is why things start blocking on the 32nd request: one envelope is already in flight, 30 are sitting in the channel, and the 32nd is the one that hangs, waiting to be written into the channel.
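
As a standalone illustration of that blocking behaviour (not sentry code, the names are made up), a capacity-30 sync_channel with a stuck consumer blocks exactly on the 32nd send:

    use std::sync::mpsc::sync_channel;
    use std::thread;
    use std::time::Duration;

    fn main() {
        let (sender, receiver) = sync_channel::<u32>(30);

        // Simulated TransportThread: pulls one envelope "in flight" and then
        // hangs, e.g. waiting for a network timeout.
        thread::spawn(move || {
            let _in_flight = receiver.recv().unwrap();
            thread::sleep(Duration::from_secs(3600));
        });
        thread::sleep(Duration::from_millis(100)); // let the consumer take one item

        for i in 1..=32 {
            println!("sending envelope {i}");
            // 1 in flight + 30 buffered fit; the 32nd send blocks right here.
            sender.send(i).unwrap();
        }
    }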

The envelopes are enqueued here:

    pub fn send(&self, envelope: Envelope) {
        let _ = self.sender.send(Task::SendEnvelope(envelope));
    }

And this is the code where things were blocked waiting for the network timeout:

match request.send().await {

Although this issue didn't happen in production for us, it very well could if that same call blocks for some reason (e.g. a network issue or an issue in your sentry.io backend infrastructure). Imagine, for example, that sentry.io had an issue where incoming requests are accepted but no data flows: all customers using sentry-rust would suddenly find that every sampled operation blocks for multiple tens of seconds.

How to fix

The easiest fix I can come up with is to start dropping transactions if the TransportThread can't keep up, because I think a running application is more important than not losing any traces.

This could, for example, be achieved by changing:

    pub fn send(&self, envelope: Envelope) {
        let _ = self.sender.send(Task::SendEnvelope(envelope));
    }

To:

    pub fn send(&self, envelope: Envelope) {
        if self.sender.try_send(Task::SendEnvelope(envelope)).is_err() {
            // Some error message that an envelope was dropped
        }
    }
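
For reference, a quick standalone check of the try_send semantics this relies on (illustrative only, not sentry code): on a full bounded channel it returns an error immediately instead of blocking, so the caller can log the drop and move on.

    use std::sync::mpsc::{sync_channel, TrySendError};

    fn main() {
        let (sender, _receiver) = sync_channel::<u32>(1);
        sender.try_send(1).unwrap(); // fills the only slot
        match sender.try_send(2) {
            // Returns immediately instead of blocking; the envelope is dropped.
            Err(TrySendError::Full(envelope)) => println!("dropped envelope {envelope}"),
            other => println!("unexpected: {other:?}"),
        }
    }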

There might be other ways or places to achieve that same goal. But ultimately, the tracing instrumentation should not, under any circumstance, be allowed to block the instrumented code to the extent of waiting for an HTTP request to go through.

loewenheim added the bug and tracing labels on Feb 3, 2023
Swatinem (Member) commented Feb 3, 2023

Thanks for the thorough analysis!

Dropping envelopes is a good quick fix. I don’t think we want to move to an unbounded channel either, as that would rather lead to runaway memory usage.
Actually sending more envelopes concurrently with the async-capable transports might be a middle-ground solution: send as much as possible while not blocking.
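
Purely to illustrate that direction (this is not the actual sentry transport; the names, types, and the concurrency limit are made up), an async transport loop could keep a bounded number of uploads in flight at once, for example with for_each_concurrent:

    // Illustrative sketch, assuming tokio, tokio-stream, futures-util and reqwest.
    use futures_util::StreamExt;
    use tokio_stream::wrappers::ReceiverStream;

    async fn run_transport(
        receiver: tokio::sync::mpsc::Receiver<Vec<u8>>, // serialized envelopes
        client: reqwest::Client,
        envelope_url: String,
    ) {
        ReceiverStream::new(receiver)
            // Up to 4 uploads (an arbitrary limit) run concurrently, so a single
            // slow request no longer stalls the whole queue.
            .for_each_concurrent(4, |body| {
                let client = client.clone();
                let url = envelope_url.clone();
                async move {
                    // Errors are ignored: the envelope is dropped and the loop keeps going.
                    let _ = client.post(url).body(body).send().await;
                }
            })
            .await;
    }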
