Reimplement rework cancel #2095
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #2095      +/-   ##
==========================================
- Coverage   70.67%   63.87%    -6.81%
==========================================
  Files         306      306
  Lines       61995    61961       -34
==========================================
- Hits        43816    39578     -4238
- Misses      18179    22383     +4204

Flags with carried forward coverage won't be shown.
I started taking a look too and will find time soon to review this. I was also thinking of an alternative approach: pass the cancellation token via the async context all the way down to TdsParser. Have you considered that as an option? Ideally, I would expect the parser to stop executing further as soon as cancellation is requested (switching to Attention mode, the same as a timeout), and once the lock is released, Cancel can execute just fine.
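A minimal sketch of that idea, using invented names (`CancellationContext`, `FakeParser`) rather than real SqlClient types: the token is stashed in an `AsyncLocal` so that code deep in the call chain (a parser read loop, for example) can observe cancellation without the token being threaded through every method signature.

```csharp
// Hypothetical sketch only: flows a CancellationToken through an AsyncLocal
// "ambient" context so deeply nested code can notice cancellation.
// None of these types exist in SqlClient; the names are for illustration.
using System;
using System.Threading;
using System.Threading.Tasks;

internal static class CancellationContext
{
    // AsyncLocal flows with the async execution context, so a value set by the
    // caller is visible to awaited continuations further down the call chain.
    private static readonly AsyncLocal<CancellationToken> s_token = new AsyncLocal<CancellationToken>();

    public static CancellationToken Current => s_token.Value;

    public static IDisposable Push(CancellationToken token)
    {
        CancellationToken previous = s_token.Value;
        s_token.Value = token;
        return new Pop(previous);
    }

    private sealed class Pop : IDisposable
    {
        private readonly CancellationToken _previous;
        public Pop(CancellationToken previous) => _previous = previous;
        public void Dispose() => s_token.Value = _previous;
    }
}

internal sealed class FakeParser
{
    // Stand-in for a TdsParser-style read loop: instead of finishing the current
    // work, it checks the ambient token and bails out (analogous to switching to
    // attention handling) as soon as cancellation is requested.
    public async Task DrainAsync()
    {
        while (!CancellationContext.Current.IsCancellationRequested)
        {
            await Task.Delay(10).ConfigureAwait(false); // simulate reading a packet
        }
    }
}

internal static class Example
{
    public static async Task RunAsync()
    {
        using var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(50));
        using (CancellationContext.Push(cts.Token))
        {
            await new FakeParser().DrainAsync().ConfigureAwait(false);
        }
    }
}
```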
The CI failed on the test that was added to show that the rework was flawed. I spent some time trying to identify what's going wrong just by reading the test and looking at the code. This might be wrong because I can't reproduce the test failure locally. I think the inner connection is closed but still needs to drain the packet data it has available. It is put into the connection pool and picked up again, making the old packet data available to the new query, which we see as a difference in the query's output parameter. What I could really do with is a reliable reproduction. I'm going to spin up an Azure database over the weekend, if I can, to see whether the latency lets me see the problem locally.
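A rough sketch of what such a reproduction attempt might look like against a high-latency (e.g. Azure) server: cancel a slow query, let the connection return to the pool, then reuse it and check whether an output parameter comes back corrupted. The connection string and T-SQL below are placeholders, and this is not a confirmed repro of the failure.

```csharp
// Sketch of a manual repro attempt for the "stale packet data after cancel" theory.
// Connection string and queries are placeholders; results depend on timing/latency.
using System;
using System.Data;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

internal static class CancelReuseRepro
{
    private const string ConnectionString = "<azure-sql-connection-string>"; // placeholder

    public static async Task RunAsync()
    {
        // First use: start a slow query and cancel it mid-flight so packet data
        // may still be in transit when the inner connection is pooled.
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("WAITFOR DELAY '00:00:10'; SELECT 1;", connection))
        {
            await connection.OpenAsync();
            Task<int> pending = command.ExecuteNonQueryAsync();
            await Task.Delay(100);
            command.Cancel();
            try { await pending; } catch (Exception) { /* cancellation typically surfaces as a SqlException */ }
        } // Dispose returns the inner connection to the pool.

        // Second use: pick up the pooled connection and check an output parameter;
        // leftover packet data would show up as a wrong value here.
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("SET @result = 42;", connection))
        {
            SqlParameter result = command.Parameters.Add("@result", SqlDbType.Int);
            result.Direction = ParameterDirection.Output;
            await connection.OpenAsync();
            await command.ExecuteNonQueryAsync();
            Console.WriteLine($"Expected 42, got {result.Value}");
        }
    }
}
```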
I hadn't considered that. It sounds like a good idea to investigate.
Interested to know if there are any plans to progress this further. @cheenamalhotra / @Wraith2, given your recent contributions, have either of you gotten any further with this issue?
No. I don't have a way to replicate the problem, so I can't investigate further.
Thank you for the quick response, much appreciated. With the original issue #44 being specific to while loops, do you know whether, in relation to this bug, we should be concerned about Cancel causing deadlocks for other types of queries?
Closing stale PRs; please open a new one when ready, with feedback implemented.
Reimplementation of #956 to see what fails and why. The extra test added in #1352 passes locally.
/cc @cheenamalhotra in case you're interested or can help pinpoint what's going on here