pkg/util/log: alter bufferedSink to handle writes during sync flush #108928
Conversation
For those interested, a deeper analysis was documented here.
Reviewed 4 of 4 files at r1.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @abarganier and @dhartunian)
pkg/util/log/buffered_sink.go
line 227 at r1 (raw file):
```go
err := bs.mu.buf.appendMsg(msg)
if err != nil {
	if errors.Is(err, errMsgTooLarge) {
```
Why the error conditional? Presumably, if an error is returned by `appendMsg`, by API contract the buffer wasn't taken over and can be discarded.
Overall the code looks good, but I'll let @knz approve it since there was a comment to address.
Reviewed 1 of 1 files at r2, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @abarganier and @dhartunian)
Previously, if the bufferedSink had a synchronous flush scheduled and an additional write (via the `output()` function) was sent to it, the bufferedSink would panic. After some investigation and analysis of the code, this panic was found to be unnecessary: we can gracefully handle the scenario by buffering the message to be included in the upcoming flush.

Similarly, if an additional forceSync `output()` call is sent to the bufferedSink while a synchronous flush is already scheduled, we cannot make the call synchronous. Instead, we buffer the message into the imminent flush and return. Because of this, we rename the forceSync option to tryForceSync, to indicate that it's best-effort rather than an ironclad guarantee.

Release note: none
There are multiple undeferred lock usages in the bufferedSink beyond just the one surrounding the call to `appendMsg`. This small patch cleans these usages up to use `defer` instead.

Release note: none
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @dhartunian, @knz, and @Santamaura)
pkg/util/log/buffered_sink.go
line 227 at r1 (raw file):
Previously, knz (Raphael 'kena' Poss) wrote…

> Why the error conditional? Presumably, if an error is returned by `appendMsg`, by API contract the buffer wasn't taken over and can be discarded.

I guess it was a bit of extra caution on my end, but you're right: if `appendMsg` returns an error, it's safe to assume the message didn't get appended. Removed the conditional.
Reviewed 1 of 1 files at r3.
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @dhartunian and @knz)
TFTR! bors r=knz,Santamaura
Build succeeded.
109186: pkg/util/log: flush buffered network sinks on panic r=knz a=abarganier

Previously, our crash reporter system would flush file log sinks as part of the process to handle a panic. This was an incomplete process, since buffered network sinks were not included in the flush. This means that many times, panic logs would not make it to the network target, leading to a loss in observability.

This patch introduces `log.FlushAllSync()`, which flushes both file and buffered network log sinks. It then updates the crash reporter to call into this, instead of just flushing file log sinks. `FlushAllSync()` contains timeout logic to prevent the process from hanging if one of the underlying child sinks that a bufferedSink wraps becomes unavailable or hangs on its `output()` call.

We originally attempted to fix this in #101562, but a bug in the bufferedSink code led us to roll back those changes. The bug in the bufferedSink code has since been fixed (#108928), so we can safely introduce this logic again.

Release note: none

Fixes: #106345

109578: rpc: increase gRPC server timeout from 1x to 2x NetworkTimeout r=andrewbaptist a=erikgrinaker

This is intended as a conservative backport that changes as little as possible. For 23.2, we should restructure these settings a bit, possibly by removing NetworkTimeout and using independent timeouts for each component/parameter, since they have unique considerations (e.g. whether they are enforced above the Go runtime or by the OS, to what extent they are subject to RPC head-of-line blocking, etc.).

This patch increases the gRPC server timeout from 1x to 2x NetworkTimeout. This timeout determines how long the server will wait for a TCP send to receive a TCP ack before automatically closing the connection. gRPC enforces this via the OS TCP stack by setting TCP_USER_TIMEOUT on the network socket.

While NetworkTimeout should be sufficient here, we have seen instances where this is affected by node load or other factors, so we set it to 2x NetworkTimeout to avoid spurious closed connections. An aggressive timeout is not particularly beneficial here, because the client-side timeout (in our case the CRDB RPC heartbeat) is what matters for recovery time following network or node outages -- the server side doesn't really care if the connection remains open for a bit longer.

Touches #109317.

Epic: none

Release note (ops change): The default gRPC server-side send timeout has been increased from 2 seconds to 4 seconds (1x to 2x of COCKROACH_NETWORK_TIMEOUT), to avoid spurious connection failures in certain scenarios. This can be controlled via the new environment variable COCKROACH_RPC_SERVER_TIMEOUT.

109610: kv: remove assertions around non-txn'al locking reqs r=nvanbenschoten a=nvanbenschoten

Closes #107860. Closes #109222. Closes #109581. Closes #109582.

We might want to re-introduce these assertions in the future and reject these requests higher up the stack. For now, just remove them to deflake tests.

Release note: None

Co-authored-by: Alex Barganier <[email protected]>
Co-authored-by: Erik Grinaker <[email protected]>
Co-authored-by: Nathan VanBenschoten <[email protected]>
Previously, if the bufferedSink had a synchronous flush scheduled and an additional write (via the `output()` function) was sent to it, the bufferedSink would panic. After some investigation and analysis of the code, this panic was found to be unnecessary: we can gracefully handle the scenario without panicking by buffering the message to be included in the upcoming flush.

Similarly, if an additional forceSync `output()` call is sent to the bufferedSink while a synchronous flush is already scheduled, we cannot make the call synchronous. Instead, we buffer the message into the imminent flush and return. Because of this, we rename the forceSync option to tryForceSync, to indicate that it's best-effort rather than an ironclad guarantee.
Release note: none
Fixes: #106345
NB: A follow-up PR will reintroduce the flush trigger into the crash reporter / process shutdown procedure (similar to #101562, which was reverted). This PR focuses on the bufferedSink changes themselves, to keep discussion focused.