This repository was archived by the owner on Apr 16, 2020. It is now read-only.

Poor large binary streaming throughput #75

Closed

gwihlidal opened this issue Jul 23, 2018 · 9 comments

@gwihlidal commented Jul 23, 2018

I'm currently running into a problem of poor streaming throughput using tower-grpc: it's taking ~45 seconds to send a 168 MB file from client to server. The binary data is chunked up and streamed, and I've tried 16 KB, 64 KB, 1 MB, 2 MB, and 3 MB chunk sizes (all take around 45 seconds).

After reading some of these threads, I have a suspicion that there could be a similar problem here:

Notably:

Expanding stream window on receiving large messages

This is an optimization used by gRPC-C to achieve performance benefits for large messages. The idea is that when there’s an active read by the application on the receive side, we can effectively bypass stream-level flow control to request the whole message. This proves to be very helpful with large messages. Since the application is already committed to reading and has allocated enough memory for it, it makes sense that we send a proactive large window update (if necessary) to get the whole message rather than receiving it in chunks and sending window updates when we run low on window.

This optimization alone provided a 10x improvement for large messages on high-latency networks.
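(For a rough sense of the numbers: 168 MB in 45 seconds is only about 3.7 MB/s. The bandwidth-delay product of, say, a 1 Gbit/s link with 100 ms RTT is 1 Gbit/s × 0.1 s ≈ 12.5 MB, while HTTP/2's default initial window is 65,535 bytes per stream, so without some form of window expansion the sender spends most of its time stalled waiting for WINDOW_UPDATE frames.)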

I'm not exactly sure how or where to make this change with tower, so I was hoping the resident experts would have an idea or suggestions on what to look at for fixing the throughput.

Thank you!
Graham

@seanmonstar (Contributor)

Yea, we've mentioned in passing before that adding BDP-based flow control (dynamic window sizing) would be useful, but haven't prioritized it.

As a stop-gap, if you expect to be streaming large bodies and would like to lift the window size limit, you should be able to do so by configuring the h2 server or client before giving it to tower-grpc. For the client, configure initial_window_size and initial_connection_window_size; the methods are the same on the server builder.

@gwihlidal (Author)

Hi Sean,

Apologies, I'm a bit new to tower and h2 - would you mind showing me a brief snippet that properly does this configuration?

Thank you

@seanmonstar (Contributor)

I'll use the hello_world server example. When creating a new tower_h2::Server (here), you can replace the Default::default() with an h2::server::Builder. So, a snippet updated to change the window sizes could look like:

// snip ...

let new_service = server::GreeterServer::new(Greet);

let mut builder = h2::server::Builder::new();
builder.initial_window_size(some_number);
builder.initial_connection_window_size(maybe_some_other_number);

let h2 = Server::new(new_service, builder, reactor.clone());

// snip ...
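For reference, initial_window_size sets the per-stream flow-control window and initial_connection_window_size sets the window for the whole connection. Both default to HTTP/2's 65,535 bytes and are capped by the spec at 2^31 − 1, so the 65536 * 2048 (128 MiB) used later in this thread is well under the limit.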

@gwihlidal (Author)

Thank you! I have the server configured now, but I'm also having issues figuring out how to modify the tower client example in a similar manner.

@gwihlidal (Author) commented Jul 26, 2018

My current attempt was to do this:

.and_then(move |socket: TcpStream| {
    // Bind the HTTP/2.0 connection
    let mut builder = h2::client::Builder::default();
    builder.initial_window_size(65536 * 2048); // for an RPC
    builder.initial_connection_window_size(65536 * 2048); // for a connection
    Handshake::new(socket, reactor, &builder)
    //Connection::handshake(socket, reactor)
        .map_err(|_| panic!("failed HTTP/2.0 handshake"))
})

but this gives me an error that the method new is private. I'm unsure if there is a way to pass a builder through in a different way, or if we need a handshake method that can take a builder as a param?

@gwihlidal (Author)

The new method is only public within the crate:
https://github.com/tower-rs/tower-h2/blob/master/src/client/connection.rs#L201

And trying to do the following will fail because inner and executor are private fields of Handshake:

let mut builder = h2::client::Builder::default();
builder.initial_window_size(65536 * 2048); // for an RPC
builder.initial_connection_window_size(65536 * 2048); // for a connection

let inner = builder.handshake(socket);
let handshake = Handshake {
    inner,
    executor: reactor,
}.map_err(|_| panic!("failed HTTP/2.0 handshake"));

@seanmonstar (Contributor)

Yea, looks like the only way currently is to use the Connect type to do the TcpStream connect and h2 handshake together. You build a Connect, and then use it as a NewService.
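For anyone following along, here's a rough sketch of that approach. It assumes the client::Connect constructor of this vintage takes a tokio_connect::Connect implementation, the h2 client Builder, and an executor (as the later tower-grpc client examples do); the Dst connector and the address are illustrative, so check the exact signatures against the tower-h2 source:

extern crate h2;
extern crate tokio;
extern crate tokio_connect;
extern crate tower_h2;

use std::net::SocketAddr;
use tokio::executor::DefaultExecutor;
use tokio::net::tcp::ConnectFuture;
use tokio::net::TcpStream;

// A connector that dials the gRPC server; tower_h2::client::Connect uses it
// to open the TCP stream and then performs the HTTP/2 handshake itself.
struct Dst {
    addr: SocketAddr,
}

impl tokio_connect::Connect for Dst {
    type Connected = TcpStream;
    type Error = std::io::Error;
    type Future = ConnectFuture;

    fn connect(&self) -> Self::Future {
        TcpStream::connect(&self.addr)
    }
}

// snip ...

// Configure the larger windows here, instead of patching Handshake::new.
let mut builder = h2::client::Builder::default();
builder.initial_window_size(65536 * 2048);
builder.initial_connection_window_size(65536 * 2048);

// Connect bundles the TCP connect and the h2 handshake; the result is then
// used as a NewService when building the generated tower-grpc client.
let make_client = tower_h2::client::Connect::new(
    Dst { addr: "127.0.0.1:50051".parse().unwrap() },
    builder,
    DefaultExecutor::current(),
);

// snip ...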

@gwihlidal (Author)

Thanks - I've temporarily forked tower-h2 and made the Handshake::new method public (it seems like this should be a safe thing to expose). Is it possible to make such a change in master?

https://github.com/SEED-EA/tower-h2/commit/37a56689d37ee366b0ebfb7002a725def6832ec4

Note: I've changed my client/server window sizes to 65536 * 2048 as per the gRPC posts on the subject, and I'm getting much better transfer rates.

Cheers,
Graham

@gwihlidal (Author)

Hi, any update on this?
