Add MessagePackSerializer.MaxAsyncBuffer to speed up small async deserializations #159
Conversation
Copilot reviewed 4 out of 6 changed files in this pull request and generated no suggestions.
Files not reviewed (2)
- src/Nerdbank.MessagePack/net8.0/PublicAPI.Unshipped.txt: Language not supported
- src/Nerdbank.MessagePack/netstandard2.0/PublicAPI.Unshipped.txt: Language not supported
Comments skipped due to low confidence (4)
test/Nerdbank.MessagePack.Tests/AsyncSerializationTests.cs:48
- The word 'sufficently' is misspelled. It should be 'sufficiently'.
// Verify that with a sufficently low async buffer, the async paths are taken.
test/Nerdbank.MessagePack.Tests/AsyncSerializationTests.cs:105
- [nitpick] The property name 'AsyncDeserializationCounter' should be 'asyncDeserializationCounter' to follow naming conventions.
internal int AsyncDeserializationCounter { get; set; }
src/Nerdbank.MessagePack/MessagePackSerializer.cs:39
- [nitpick] Consider using a constant or a more descriptive variable name instead of the magic number 1 * 1024 * 1024 for better readability.
private int maxAsyncBuffer = 1 * 1024 * 1024;
src/Nerdbank.MessagePack/MessagePackSerializer.cs:665
- Ensure that the new behavior introduced with MaxAsyncBuffer is covered by tests to verify its functionality.
private async ValueTask<T?> DeserializeAsync<T>(PipeReader reader, ITypeShapeProvider provider, MessagePackConverter<T> converter, CancellationToken cancellationToken)
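To make the review comments above concrete, here is a minimal, hypothetical sketch of the sync-versus-async decision that a MaxAsyncBuffer threshold enables inside an overload like the DeserializeAsync shown above. The class shape and the syncDeserialize/asyncDeserialize delegates are placeholders rather than the library's actual converter API; only the PipeReader usage and the 1 * 1024 * 1024 default come from this PR's discussion.

// Hypothetical sketch; not the library's real implementation.
using System;
using System.Buffers;
using System.IO.Pipelines;
using System.Threading;
using System.Threading.Tasks;

class AsyncBufferSketch
{
    // A named constant instead of a bare magic number, per the nitpick above (1 MB).
    private const int DefaultMaxAsyncBuffer = 1 * 1024 * 1024;

    public int MaxAsyncBuffer { get; set; } = DefaultMaxAsyncBuffer;

    public async ValueTask<T?> DeserializeAsync<T>(
        PipeReader reader,
        Func<ReadOnlySequence<byte>, T?> syncDeserialize,                    // placeholder: fast, fully-buffered path
        Func<PipeReader, CancellationToken, ValueTask<T?>> asyncDeserialize, // placeholder: streaming path
        CancellationToken cancellationToken)
    {
        // Try to buffer up to the threshold; ReadAtLeastAsync returns early if the
        // producer completes before that many bytes arrive.
        ReadResult readResult = await reader.ReadAtLeastAsync(this.MaxAsyncBuffer, cancellationToken);

        if (readResult.IsCompleted && readResult.Buffer.Length <= this.MaxAsyncBuffer)
        {
            // The whole payload fits under MaxAsyncBuffer: take the faster synchronous path.
            try
            {
                return syncDeserialize(readResult.Buffer);
            }
            finally
            {
                reader.AdvanceTo(readResult.Buffer.End);
            }
        }

        // The payload exceeds the threshold (or is still streaming): mark the data as
        // examined but not consumed, then fall back to the slower async path so memory
        // consumption stays bounded.
        reader.AdvanceTo(readResult.Buffer.Start, readResult.Buffer.End);
        return await asyncDeserialize(reader, cancellationToken);
    }
}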
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##             main     #159      +/-   ##
==========================================
- Coverage   81.42%   76.95%   -4.48%
==========================================
  Files          146      145       -1
  Lines        10907    10534     -373
  Branches      1519     1466      -53
==========================================
- Hits          8881     8106     -775
- Misses        1957     1991      +34
- Partials        69      437     +368
Force-pushed from c27975c to 38d6680.
CC: @eiriktsarpalis
Since synchronous deserialization is substantially faster than async deserialization, we prefer sync. But sync requires that all msgpack data be pre-buffered. That's a reasonable trade-off, assuming the msgpack data is reasonably small. When it is large, the slower async path may be preferable to avoid unbounded memory consumption.
Closes #155
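For a rough sense of how a caller would opt into more (or less) aggressive pre-buffering, here is a hypothetical usage sketch. It assumes MaxAsyncBuffer is exposed as a settable property on MessagePackSerializer with the 1 MB default mentioned in the review comments above; the exact property shape is not taken from the diff.

// Hypothetical usage sketch; the property shape (settable vs. init-only) is an assumption.
using Nerdbank.MessagePack;

var serializer = new MessagePackSerializer
{
    // Payloads at or below this many bytes are fully buffered and deserialized on the
    // faster synchronous path; larger payloads stream through the slower async path so
    // memory consumption stays bounded. The PR's default is 1 * 1024 * 1024 (1 MB).
    MaxAsyncBuffer = 64 * 1024,
};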
Before and after benchmark results:
Not all benchmarks show zero overhead for async deserialization of small jobs. The example above works because it uses a reasonably large array that is still under the threshold for synchronous deserialization. Very tiny deserializations still pay a tax from buffering overhead (even when the data is already in memory) that the synchronous deserialization APIs can avoid.