Sentry background worker is chronically blocking async event loop when many exceptions are raised #2824
Comments
Hey @cpvandehey! Thanks for the great ticket!
Hey @cpvandehey, thanks for writing in. I definitely agree with you that our async support could use some improvements (see e.g. #1735, #2007, #2184, and multiple other issues). Using an aiohttp client and an asyncio task both sound doable and would go a long way toward making the SDK more async friendly.
We could detect if
Hey @sentrivana / @antonpirker, any update on the progress for this? Happy to help.
Hey @cpvandehey, no news on this, but PRs are always welcome if you feel like giving this a shot.
I see the milestone for this task was removed. @antonpirker, should we still consider writing our own attempt? |
Hey @cpvandehey, sorry for the confusion regarding the milestone. Previously we were (mis)using milestones to group issues together, but we have now decided to abandon that system. Nothing has changed priority-wise.
Alright, I think I'm going to start implementing this. Stay tuned.
Coming up for air after a few hours invested/tinkering. I realized a few things that I should discuss before proceeding:
*Exhales.* Like most async integrations, this seems easy on the surface but ends up touching a lot of the code. I'm wondering if I am on the right track with what the Python Sentry folks want for this design. I would love for this to be collaborative and iterative. Let me know your thoughts on the approach above :)
Hey @cpvandehey! Thanks for this great issue and your motivation! You are right, our async game is currently not the best, and we should, and will, improve on it. To your deep dive:
Currently we are in the middle of a big refactor, where we are trying to use OpenTelemetry (OTel) under the hood for our performance instrumentation. We should not do the OTel and the async refactoring at the same time; this would lead to lots of complexity and headaches. So my proposal is that we first finish the OTel refactor and then tackle the async refactor. As this is a huge task, we should then create a milestone and split the work into smaller chunks that can be tackled by multiple people at the same time.
yes
sounds good!
Hey Sentry folks.
Just bumping this ticket again. I assume the repo is in a better state to start this effort? |
Hi @cpvandehey, thanks for the bump! I do agree the repo is in better shape. Moreover, I've started working on an experimental HTTP/2 transport using httpcore, and it looks like it has native async support for that part. Since I'm lending some of my time to the Python SDK nowadays and working in a similar area, I think we can work together on the async support too. I don't think I'm as well-versed as you are when it comes to the async game in Python, so I can use some of the code you say you've already written, such as the async background worker (I wonder if we actually need it with async, as all we need is a queue and the event loop should handle the worker logic, right?). Anyway, this is hopefully coming and your involvement is much appreciated!
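(For illustration only: a minimal sketch of the "just a queue plus the event loop" idea mentioned above. The class and method names here are hypothetical and not part of the SDK.)

```python
import asyncio
from typing import Awaitable, Callable, Optional


class AsyncWorker:
    """Hypothetical asyncio-based worker: an asyncio.Queue drained by a single task."""

    def __init__(self, queue_size: int = 100) -> None:
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=queue_size)
        self._task: Optional[asyncio.Task] = None

    def start(self) -> None:
        # The consumer runs on the host application's event loop,
        # so no extra OS thread (and no context switching) is involved.
        if self._task is None:
            self._task = asyncio.create_task(self._run())

    def submit(self, callback: Callable[[], Awaitable[None]]) -> bool:
        try:
            self._queue.put_nowait(callback)
            return True
        except asyncio.QueueFull:
            return False  # drop, mirroring what the thread-based worker does when full

    async def _run(self) -> None:
        while True:
            callback = await self._queue.get()
            try:
                await callback()
            finally:
                self._queue.task_done()

    async def flush(self, timeout: float) -> None:
        # Wait for queued work to drain, giving up after `timeout` seconds.
        try:
            await asyncio.wait_for(self._queue.join(), timeout)
        except asyncio.TimeoutError:
            pass
```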
All our ingest endpoints support HTTP/2, and some even HTTP/3, which are significantly more efficient than HTTP/1.1 thanks to multiplexing, header compression, connection reuse, and 0-RTT TLS. This patch adds an experimental HTTP2Transport with the help of the httpcore library. It makes minimal changes to the original HTTPTransport. That said, with httpcore we should be able to implement asyncio support easily and remove the worker logic (see #2824). This should also open the door for future HTTP/3 support (see encode/httpx#275). --------- Co-authored-by: Ivana Kellyer <[email protected]>
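(A rough, hedged illustration of the httpcore direction, not the actual code from that patch. The URL and payload below are placeholders, and `http2=True` requires the `h2` extra, i.e. `pip install httpcore[http2]`.)

```python
import asyncio

import httpcore

ENVELOPE_URL = "https://o0.ingest.sentry.io/api/0/envelope/"  # placeholder DSN-derived URL
ENVELOPE_BODY = b"{}"  # placeholder envelope payload


def send_sync() -> None:
    # HTTP/2-capable connection pool; httpcore negotiates h2 via ALPN when http2=True.
    with httpcore.ConnectionPool(http2=True) as pool:
        response = pool.request(
            "POST",
            ENVELOPE_URL,
            headers={"Content-Type": "application/x-sentry-envelope"},
            content=ENVELOPE_BODY,
        )
        print(response.status)


async def send_async() -> None:
    # The async pool has the same call shape, which is what makes an
    # asyncio transport (without a worker thread) feasible.
    async with httpcore.AsyncConnectionPool(http2=True) as pool:
        response = await pool.request(
            "POST",
            ENVELOPE_URL,
            headers={"Content-Type": "application/x-sentry-envelope"},
            content=ENVELOPE_BODY,
        )
        print(response.status)


if __name__ == "__main__":
    asyncio.run(send_async())
```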
How do you use Sentry?
Self-hosted/on-premise
Version
1.40.6
Steps to Reproduce
Hello! And thanks for reading my ticket :)
The Python Sentry client is a synchronous client library that has been retrofitted to fit the async model (by spinning off separate threads to avoid disrupting the event loop thread -- see the background worker (1) for thread usage).
Under healthy conditions, the Sentry client doesn't need to make many web requests. However, if conditions become rocky and exceptions are frequently raised (caught or uncaught), the Sentry client may become an extreme inhibitor of the app's event loop (assuming a high sample rate). This is due to the OS thread context switching that effectively pauses/blocks the event loop so that other threads (i.e., the background worker (1)) can run. This is not a recommended pattern (obviously) due to the cost of switching threads, but it can be useful for quickly/lazily retrofitting sync code.
Relevant flow - in short:
Every time an exception is raised (caught or uncaught) in my code and sampled, a web request is made almost immediately to send the data to Sentry. Since Sentry's background worker is thread-based (1), this triggers a thread context switch followed by a synchronous web request to dump the data to Sentry. When an application raises many exceptions in a short period of time, this becomes a context-switching nightmare.
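(To make the flow concrete, here is a simplified, not verbatim, picture of the thread-plus-queue pattern; the real implementation lives in sentry_sdk/worker.py.)

```python
import queue
import threading


class ThreadedWorker:
    """Simplified illustration of a thread-plus-queue worker (not the SDK's actual code)."""

    def __init__(self, queue_size: int = 100) -> None:
        self._queue: queue.Queue = queue.Queue(maxsize=queue_size)
        # A dedicated OS thread consumes the queue. Every submit() from the
        # asyncio thread hands work to this thread, so the GIL bounces between
        # the event loop thread and the worker thread, and the worker's
        # blocking HTTP call competes with the loop for CPU time.
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def submit(self, callback) -> None:
        self._queue.put_nowait(callback)

    def _run(self) -> None:
        while True:
            callback = self._queue.get()
            callback()  # e.g. a synchronous POST to the ingest endpoint
            self._queue.task_done()
```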
Suggestion:
In an ideal world, Sentry would asyncify its background worker to use an asyncio task (1), and its transport layer (2) would use aiohttp. I don't think this is of super high complexity, but I could be wrong.
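(A hedged sketch of that suggestion, assuming aiohttp and an asyncio task in place of the worker thread; none of these names exist in the SDK today, and the endpoint is a placeholder.)

```python
import asyncio
from typing import Optional

import aiohttp


class AsyncTransport:
    """Hypothetical transport: an asyncio task drains a queue and POSTs envelopes via aiohttp."""

    def __init__(self, endpoint: str) -> None:
        self._endpoint = endpoint  # placeholder for the DSN-derived envelope URL
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=100)
        self._session: Optional[aiohttp.ClientSession] = None
        self._task: Optional[asyncio.Task] = None

    async def start(self) -> None:
        self._session = aiohttp.ClientSession()
        # The send loop is just another coroutine on the app's event loop:
        # no worker thread, no GIL hand-off on every captured exception.
        self._task = asyncio.create_task(self._drain())

    def capture_envelope(self, payload: bytes) -> None:
        try:
            self._queue.put_nowait(payload)
        except asyncio.QueueFull:
            pass  # drop, as the sync transport does when its queue is full

    async def _drain(self) -> None:
        assert self._session is not None
        while True:
            payload = await self._queue.get()
            async with self._session.post(self._endpoint, data=payload) as resp:
                await resp.read()
            self._queue.task_done()

    async def close(self) -> None:
        await self._queue.join()
        if self._task:
            self._task.cancel()
        if self._session:
            await self._session.close()
```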
An immediate workaround could be achieved with more background worker control. If Sentry's background worker made web requests to dump data at configurable intervals, it would behave far more efficiently for event loop apps. At the moment, the background worker always dumps data immediately when an exception is captured. In my opinion, as long as Sentry flushes data at app exit, a 60-second timer for dumping data would alleviate most of the symptoms described above without ever losing data (albeit events would arrive up to 60 seconds later). A sketch of this interval-based approach follows the code references below.
(1) - sentry-python/sentry_sdk/worker.py, line 20 (at commit 1b0e932)
(2) - sentry-python/sentry_sdk/transport.py, line 244 (at commit 1b0e932)
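(A sketch of the interval-based workaround above, with hypothetical names and a configurable flush interval; `send_batch` stands in for whatever actually ships the data.)

```python
import asyncio
from typing import Awaitable, Callable, List, Optional


class IntervalFlusher:
    """Hypothetical worker that batches events and flushes on a timer instead of per exception."""

    def __init__(
        self,
        send_batch: Callable[[List[bytes]], Awaitable[None]],
        flush_interval: float = 60.0,
    ) -> None:
        self._send_batch = send_batch          # async callable that ships a list of payloads
        self._flush_interval = flush_interval  # e.g. 60 seconds, as suggested above
        self._buffer: List[bytes] = []
        self._task: Optional[asyncio.Task] = None

    def start(self) -> None:
        self._task = asyncio.create_task(self._loop())

    def submit(self, payload: bytes) -> None:
        # Capturing an exception only appends to an in-memory buffer;
        # no network I/O and no thread hand-off happens here.
        self._buffer.append(payload)

    async def _loop(self) -> None:
        while True:
            await asyncio.sleep(self._flush_interval)
            await self.flush()

    async def flush(self) -> None:
        # Also called once at shutdown so nothing is lost, at the cost of
        # events arriving up to `flush_interval` seconds late.
        if self._buffer:
            batch, self._buffer = self._buffer, []
            await self._send_batch(batch)
```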
Expected Result
I expect less thread context switching when using Sentry.
Actual Result
I see a lot of thread context switching when there are high exception rates.