feat(s2n-quic-transport): Adds stream batching functionality #2433
Conversation
///
/// The `stream::Controller` will be notified of streams that have been
/// closed to allow for further streams to be opened.
#[cfg(test)]
pub fn iterate_transmission_list<F>(&mut self, controller: &mut stream::Controller, mut func: F)
is there a reason we need to keep this and not just merge the two?
There are two reasons to iterate over the transmission list currently. One is to send on each stream node in the transmission list. The other is to read the order of the transmission list (used only in testing).
Basically I now need to distinguish these two codepaths, because when we're reading over the list we don't want to increment the transmission counter, whereas we do want that behavior when sending on the list. So the two separate codepaths are now `iterate_transmission_list`, which is only used for tests, and `send_on_transmission_list`, which contains the counting logic.
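To make the distinction concrete, here is a minimal sketch of the two codepaths. The `TransmissionList` and `Stream` types, their fields, and the batch-size handling below are hypothetical stand-ins for illustration, not the actual s2n-quic-transport implementation:

```rust
use std::collections::VecDeque;

/// Hypothetical stand-in for a stream node on the transmission list.
struct Stream {
    id: u64,
}

/// Hypothetical stand-in for the transmission list with a batching counter.
struct TransmissionList {
    streams: VecDeque<Stream>,
    /// Consecutive transmission opportunities the head stream has used.
    head_transmissions: u64,
    /// Configured batch size; a size of 1 matches the previous behavior.
    batch_size: u64,
}

impl TransmissionList {
    /// Read-only walk over the list (test-only): observes the order
    /// without touching the batching counter.
    #[cfg(test)]
    fn iterate_transmission_list<F: FnMut(&Stream)>(&self, mut func: F) {
        for stream in &self.streams {
            func(stream);
        }
    }

    /// Sending path: gives the head stream a transmission opportunity,
    /// increments the counter, and rotates the head to the back of the
    /// list once it has used up its batch.
    fn send_on_transmission_list<F: FnMut(&mut Stream)>(&mut self, mut func: F) {
        if let Some(head) = self.streams.front_mut() {
            func(head);
            self.head_transmissions += 1;
            if self.head_transmissions >= self.batch_size {
                let stream = self.streams.pop_front().expect("head exists");
                self.streams.push_back(stream);
                self.head_transmissions = 0;
            }
        }
    }
}
```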
Looks great! Nice work
Release Summary:
Adds stream batching functionality to s2n-quic's sending behavior. Stream batching is a sending strategy that gives each stream the opportunity to fill up a packet "batch-size" times before passing that priority to the next stream.
Resolved issues:
Previous iteration of this PR: #2428
Description of changes:
Adds a way for users to turn on stream batching. The current sending strategy can be described as stream batching with a batch size of 1 (as evidenced by the fact that all current tests pass with the default stream batch size of 1). With this change we can express batch sizes > 1.
Essentially, the transmission list now keeps track of how many times its head node has been at the front of the list. Once the limit is reached, the head node is popped off the list and moved to the back. I had to extract the sending behavior out of the `iterate_interruptable` macro because we only want this behavior when we're trying to send on the transmission list.
Call-outs:
Testing:
Includes a unit test of the batching feature.
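As a rough illustration of what such a test can assert (a toy model, not the actual test added in this PR): with three streams A, B, C and a batch size of 2, each stream should get two consecutive transmission opportunities before priority passes to the next stream.

```rust
#[test]
fn toy_batching_rotation_order() {
    use std::collections::VecDeque;

    // Toy model: stream labels on a transmission list, batch size of 2.
    let mut streams: VecDeque<char> = VecDeque::from(['A', 'B', 'C']);
    let batch_size = 2;
    let mut head_transmissions = 0;
    let mut send_order = Vec::new();

    // Simulate six transmission opportunities.
    for _ in 0..6 {
        let head = *streams.front().expect("list is non-empty");
        send_order.push(head);
        head_transmissions += 1;

        // Rotate the head to the back once it has used its batch.
        if head_transmissions >= batch_size {
            let stream = streams.pop_front().expect("list is non-empty");
            streams.push_back(stream);
            head_transmissions = 0;
        }
    }

    // Each stream gets `batch_size` consecutive opportunities before
    // priority passes to the next stream.
    assert_eq!(send_order, vec!['A', 'A', 'B', 'B', 'C', 'C']);
}
```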
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.