Support bytes::Bytes & bytes::BytesMut payloads for binary & text messaging #465

Merged · 13 commits merged into snapview:master on Dec 14, 2024

Conversation

@alexheretic (Contributor) commented Dec 14, 2024:

This PR builds on top of #462

  • Adds Utf8Payload for text messages, backed by Payload.
  • Tri-variant Payload for Bytes, BytesMut & Vec.
    • Bytes gives flexibility, supporting shared data & cheap cloning.
    • BytesMut provides the best read performance, as these come straight from the new read buffer without copying and can be unmasked optimally too.
    • Vec provides the best write performance; I couldn't get BytesMut to perform as well even after changing the bench user code. Since it doesn't increase Payload's 40-byte size, it seems worth having.
  • Replace the read buffer with a BytesMut-based one that can emit payloads without copying.
  • Optimise incomplete-message handling so complete messages do not need to be copied (the general case for small messages).
  • Add Payload::share for cheap cloning.
  • Make Payload::eq compare data, allowing equality across variants.
  • Remove and change a number of fns & types to suit the new internals.
  • Add general &[u8]/&str From impls and const fn from_static constructors for static Payload/Utf8Payload data.
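A minimal sketch of the multi-variant Payload idea above, with share for cheap cloning and data-based equality. This is a hypothetical, dependency-free illustration: Arc<[u8]> stands in for bytes::Bytes, and the real tungstenite type has a third BytesMut variant and different signatures.

```rust
use std::sync::Arc;

// Hypothetical sketch only; the real Payload uses bytes::Bytes/BytesMut
// and has three variants.
#[derive(Debug, Clone)]
enum Payload {
    Owned(Vec<u8>),    // fastest to write
    Shared(Arc<[u8]>), // cheap to clone, shareable
}

impl Payload {
    fn as_slice(&self) -> &[u8] {
        match self {
            Payload::Owned(v) => v,
            Payload::Shared(b) => b,
        }
    }

    /// Cheap cloning: promote to the shared variant, then clone the handle.
    fn share(&mut self) -> Payload {
        if let Payload::Owned(v) = self {
            let bytes: Arc<[u8]> = std::mem::take(v).into();
            *self = Payload::Shared(bytes);
        }
        self.clone()
    }
}

// Equality compares the underlying bytes, regardless of variant.
impl PartialEq for Payload {
    fn eq(&self, other: &Self) -> bool {
        self.as_slice() == other.as_slice()
    }
}

fn main() {
    let mut a = Payload::Owned(b"hello".to_vec());
    let b = a.share(); // no byte copy: both now share one allocation
    assert_eq!(a, b);
    assert_eq!(a, Payload::Owned(b"hello".to_vec())); // eq across variants
}
```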

The main advantages here are extending the Bytes flexibility to text messages and better read performance.
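The const fn from_static construction mentioned above could look roughly like this. Everything here is a simplified, hypothetical sketch (the variant names and real tungstenite signatures may differ); the point is that static payloads can be built in const context with no runtime allocation.

```rust
// Hypothetical sketch of const-constructible static payloads.
#[derive(Debug, PartialEq)]
enum Payload {
    Static(&'static [u8]), // borrowed static data, no allocation
    Owned(Vec<u8>),        // heap-allocated data
}

impl Payload {
    const fn from_static(bytes: &'static [u8]) -> Self {
        Payload::Static(bytes)
    }
}

// Text payload wrapper: the inner Payload is known-valid UTF-8.
struct Utf8Payload(Payload);

impl Utf8Payload {
    const fn from_static(s: &'static str) -> Self {
        Utf8Payload(Payload::from_static(s.as_bytes()))
    }
}

fn main() {
    // Evaluated at compile time: no allocation at runtime.
    const PING: Payload = Payload::from_static(b"ping");
    const GREETING: Utf8Payload = Utf8Payload::from_static("hello");
    assert_eq!(PING, Payload::Static(b"ping"));
    assert_eq!(GREETING.0, Payload::Static(b"hello"));
}
```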

Benchmarks

Write performance remains a little slower, though no more so than noted in the parent PR. Read performance, however, is significantly faster.

$ cargo bench --bench \* -- --quick --noplot --baseline master
read+unmask 100k small messages (server)
                        time:   [7.8768 ms 7.9103 ms 7.9187 ms]
                        change: [-38.736% -37.861% -36.967%] (p = 0.06 > 0.05)

read 100k small messages (client)
                        time:   [7.1168 ms 7.1322 ms 7.1939 ms]
                        change: [-34.273% -33.840% -33.406%] (p = 0.07 > 0.05)

write 100k small messages then flush (server)
                        time:   [4.6548 ms 4.6852 ms 4.6928 ms]
                        change: [+6.0437% +6.5546% +7.0663%] (p = 0.10 > 0.05)

write+mask 100k small messages then flush (client)
                        time:   [6.0834 ms 6.0875 ms 6.1039 ms]
                        change: [+6.0147% +6.6040% +7.1980%] (p = 0.10 > 0.05)

@daniel-abramov (Member) left a comment:

You're fast! 🙂

Interesting decision to switch from ReadBuffer to BytesMut for the in_buffer! It's nice that it spares additional allocations (like the vector that had previously been allocated just to read and return a payload from the in_buffer).
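The allocation saving described in this comment can be sketched as follows. This is a std-only illustration of the concept, not the crate's code: an Arc plus a byte range stands in for what bytes::BytesMut::split_to does, namely handing out a complete frame as a view of the same allocation instead of copying it into a freshly allocated Vec.

```rust
use std::sync::Arc;

// Stand-in for a BytesMut-style read buffer: frames are "split off"
// as handles into the shared backing storage, not copied out.
struct ReadBuffer {
    data: Arc<Vec<u8>>, // shared backing storage
    pos: usize,         // start of the unconsumed bytes
}

// Stand-in for the zero-copy payload a split produces.
struct FrameBytes {
    data: Arc<Vec<u8>>,
    start: usize,
    end: usize,
}

impl ReadBuffer {
    /// Emit the next `len` bytes as a frame without copying them:
    /// only the Arc refcount and the cursor move.
    fn split_to(&mut self, len: usize) -> FrameBytes {
        let frame = FrameBytes {
            data: Arc::clone(&self.data),
            start: self.pos,
            end: self.pos + len,
        };
        self.pos += len;
        frame
    }
}

impl FrameBytes {
    fn as_slice(&self) -> &[u8] {
        &self.data[self.start..self.end]
    }
}

fn main() {
    // Two frames arrived in one read; both are views, not copies.
    let mut buf = ReadBuffer { data: Arc::new(b"helloworld".to_vec()), pos: 0 };
    let a = buf.split_to(5);
    let b = buf.split_to(5);
    assert_eq!(a.as_slice(), b"hello");
    assert_eq!(b.as_slice(), b"world");
}
```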

@daniel-abramov daniel-abramov merged commit 40d7dd6 into snapview:master Dec 14, 2024
7 checks passed
@Zarathustra2 commented:
Wow that is awesome!

@daniel-abramov Do you have an ETA when you plan on releasing 0.25 for tungstenite & tokio-tungstenite ?

@daniel-abramov (Member) replied:

> Wow that is awesome!
>
> @daniel-abramov Do you have an ETA when you plan on releasing 0.25 for tungstenite & tokio-tungstenite?

Published! 🎉
