The processor/stream code reads in batches of 10 events at a time (apm-server/processor/stream/processor.go, line 279 at 17e0f7a). Once a batch is received, its events are dispatched to the publisher, which transforms and sends them through the libbeat pipeline to be recorded in Elasticsearch.

By default, agents will close the stream after 10 seconds, or after it reaches a certain size (~750K). So if an agent sends fewer than 10 events, the processor/stream code will generally block waiting for the stream to end before it dispatches anything to the publisher.

We should consider adding a timeout (or a context with a timeout) to the StreamReader.Read method to avoid this.
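A minimal sketch of how such a context-aware Read could look. The `Event` type, `contextReader`, and the blocking `read` function are all illustrative stand-ins, not the actual apm-server API:

```go
package stream

import "context"

// Event is a simplified stand-in for the decoded event model.
type Event struct{ Raw []byte }

// contextReader wraps a blocking read function so callers can stop
// waiting when their context deadline passes. The pump goroutine keeps
// running between calls, so an event decoded after a timeout is
// delivered on the next Read rather than lost.
type contextReader struct {
	events chan Event
	errs   chan error
}

func newContextReader(read func() (Event, error)) *contextReader {
	r := &contextReader{events: make(chan Event), errs: make(chan error, 1)}
	go func() {
		for {
			ev, err := read()
			if err != nil {
				r.errs <- err // e.g. io.EOF once the agent closes the stream
				return
			}
			r.events <- ev
		}
	}()
	return r
}

// Read returns the next event, the stream's terminal error, or the
// context's error if the deadline passes first.
func (r *contextReader) Read(ctx context.Context) (Event, error) {
	select {
	case ev := <-r.events:
		return ev, nil
	case err := <-r.errs:
		return Event{}, err
	case <-ctx.Done():
		return Event{}, ctx.Err()
	}
}
```

With this shape, the batching loop can impose a per-batch deadline instead of blocking until the agent closes the stream.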
Report events in batches when any of the following occurs: we have a minimum of 10 events, 1 second passes, or the stream ends (addresses this issue).
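Continuing the sketch above (with `errors` and `time` added to the imports), that flush policy might look like the following; the 10-event minimum and 1-second interval would be supplied by the caller, and everything else is illustrative:

```go
// collectBatch gathers events until one of three things happens: we
// have minBatch events, flushInterval elapses, or the stream ends.
func collectBatch(r *contextReader, minBatch int, flushInterval time.Duration) ([]Event, error) {
	ctx, cancel := context.WithTimeout(context.Background(), flushInterval)
	defer cancel()

	var batch []Event
	for len(batch) < minBatch {
		ev, err := r.Read(ctx)
		if errors.Is(err, context.DeadlineExceeded) {
			return batch, nil // interval elapsed: flush what we have
		}
		if err != nil {
			return batch, err // stream ended (io.EOF) or failed
		}
		batch = append(batch, ev)
	}
	return batch, nil // reached the minimum batch size
}
```

Each returned batch goes straight to the publisher, so a stream carrying fewer than 10 events waits at most one flush interval rather than the full 10 seconds.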
On master, the heavy.ndjson benchmark gives me ~19MB/s; with this branch I get ~27MB/s. Once #3551 is done, as mentioned in #1285 (comment), it would no longer be possible to parallelise decode/validate, but I expect validation will be so fast that it won't matter.