As things stand now, we flush after every attempt to send a notification. For users writing small numbers of notifications, this is probably fine since it minimizes latency. For users writing zillions of notifications, it's probably a poor use of system resources, since flush is an expensive operation.
I propose:
Add a handler to the pipeline that flushes after N bytes have been written or M milliseconds of inactivity have elapsed.
Stop flushing after every write.
This will increase latency for single notifications, but may improve throughput for bulk sending operations. We'll definitely want to measure changes in throughput before deciding whether to ship something like this.
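A minimal sketch of what such a handler might look like, assuming Netty 4.x: it tracks pending outbound bytes and flushes either when a byte threshold is crossed or after a short period with no new writes. The class name, thresholds, and field names are illustrative, not part of any existing API, and handler methods run on the channel's event loop, so no extra synchronization is shown.

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.util.concurrent.ScheduledFuture;

import java.util.concurrent.TimeUnit;

// Illustrative only; the thresholds would need tuning and measurement before shipping.
public class BatchingFlushHandler extends ChannelDuplexHandler {

    private static final int FLUSH_THRESHOLD_BYTES = 32 * 1024;
    private static final long INACTIVITY_MILLIS = 10;

    private int unflushedBytes = 0;
    private ScheduledFuture<?> scheduledFlush;

    @Override
    public void write(final ChannelHandlerContext context, final Object message, final ChannelPromise promise) {
        // Track roughly how much outbound data is waiting to be flushed.
        if (message instanceof ByteBuf) {
            this.unflushedBytes += ((ByteBuf) message).readableBytes();
        }

        context.write(message, promise);

        if (this.unflushedBytes >= FLUSH_THRESHOLD_BYTES) {
            // Enough bytes have accumulated; flush immediately.
            this.flushNow(context);
        } else {
            // Otherwise, (re)schedule a flush to run after a short period of inactivity.
            if (this.scheduledFlush != null) {
                this.scheduledFlush.cancel(false);
            }

            this.scheduledFlush = context.executor().schedule(
                    () -> this.flushNow(context), INACTIVITY_MILLIS, TimeUnit.MILLISECONDS);
        }
    }

    private void flushNow(final ChannelHandlerContext context) {
        if (this.scheduledFlush != null) {
            this.scheduledFlush.cancel(false);
            this.scheduledFlush = null;
        }

        this.unflushedBytes = 0;
        context.flush();
    }
}
```

With a handler like this in the pipeline, the send path would just call write() and leave the decision about when to flush to the handler.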
Some very crude testing suggests that there are significant performance gains to be had here. I wrote a very simple benchmark app that sends 100k small notifications to a local mock server. By flushing after every 100 notifications instead of every single notification, throughput increased from ~14.5k notifications/second to ~17.4k notifications/second.
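For reference, a rough sketch of the batching strategy used in that test, assuming a plain Netty Channel; the class, field names, and send method are hypothetical, and only the flush-every-100 threshold comes from the benchmark description above.

```java
import io.netty.channel.Channel;

// Hypothetical sketch of the benchmark's strategy: write each notification,
// but only flush once every 100 writes.
public class BatchedSender {

    private static final int NOTIFICATIONS_PER_FLUSH = 100;

    private final Channel channel;
    private int unflushedNotificationCount = 0;

    public BatchedSender(final Channel channel) {
        this.channel = channel;
    }

    public void send(final Object notification) {
        this.channel.write(notification);

        if (++this.unflushedNotificationCount >= NOTIFICATIONS_PER_FLUSH) {
            this.channel.flush();
            this.unflushedNotificationCount = 0;
        }
    }
}
```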
As an aside, I'll be keeping an eye on netty/netty#1759, which may introduce an upstream fix some time in the future.