
HTTP 2.0 Support #47

Closed
tomchristie opened this issue Jul 3, 2017 · 31 comments

Comments

@tomchristie
Member

tomchristie commented Jul 3, 2017

Using the h2 package.

@graingert
Member

@tomchristie if you're going to use h2 for HTTP/2, it might be worth using https://github.com/njsmith/h11 for HTTP/1.1, as they both have the same interface.

@tomchristie
Member Author

I'd certainly like to have h11 as an option for a pure-Python interface, yes.

@graingert
Member

@tomchristie there's been lots of talk about getting httptools and h11 to share an interface.

@tomchristie
Member Author

That would be pretty ace. Anywhere worth following?

@graingert
Member

There was some stuff on IRC a while ago, and https://github.com/njsmith/h11/issues/9

@graingert
Member

mostly worried about:

httptools is definitely faster -- but AFAICT that's mostly because it does less. There are a lot of fiddly bits required to correctly implement HTTP, and those have a cost, which h11 pays.

@kgriffs

kgriffs commented Dec 14, 2017

+1 for being PyPy-friendly. I think a lot of people who are interested in uvicorn will want to use PyPy as well for the same reason (performance).

@graingert
Member

@thedrow FYI, Hyper is implemented using h2.

@tomchristie
Member Author

We're in a good place to tackle this now, against 0.2.
If anyone wants to take it on I'd suggest taking a look at both daphne and hypercorn, which both have implementations built on the h2 package.

@gvbgduh
Member

gvbgduh commented Oct 25, 2019

@tomchristie How important do you think this one is? If there's nothing higher priority around, I can give it a crack with h2.

@tomchristie
Member Author

It’d be a great addition, yup!
Shouldn’t be too hard to tackle - there are the h11 and httptools implementations to work from, plus hypercorn’s HTTP/2 support, so lots of existing work to help with getting an implementation done.

@tomchristie
Member Author

@gvbgduh - https://pypi.org/project/trustme/ may come in handy here.
Really we ought to be using that in our test cases already.
It's possible that there's some useful command line stuff we could do here, e.g. something along the lines of a --mock-certificate flag that allows uvicorn to generate a mock certificate on startup. I've not dug into it enough to understand exactly what'd need to happen there.

@gvbgduh
Member

gvbgduh commented Nov 6, 2019

That's awesome, thanks a lot @tomchristie! I'll give it a try.

@gvbgduh
Member

gvbgduh commented Nov 6, 2019

Yes, it actually works well!
It might still be a bit rough, but the following code does the job.

import asyncio

import pytest
import trustme
from httpx import AsyncClient

from uvicorn import Config, Server


@pytest.fixture()
def CA():
    yield trustme.CA()


@pytest.fixture()
def server_cert_file(CA):
    # Server certificate + private key, written out to a temporary file.
    server_cert = CA.issue_cert("localtest.me").private_key_and_cert_chain_pem
    with server_cert.tempfile() as cert_temp_path:
        yield cert_temp_path


@pytest.fixture()
def client_cert_file(CA):
    # CA certificate the client uses to verify the server.
    with CA.cert_pem.tempfile() as ca_temp_path:
        yield ca_temp_path


@pytest.fixture()
async def async_test_client(client_cert_file):
    async with AsyncClient(
        http_versions=["HTTP/1.1"],
        base_url="https://localtest.me:8000",
        verify=client_cert_file,
    ) as async_test_client:
        yield async_test_client


@pytest.mark.asyncio
async def test_baseline_h11(async_test_client, CA, server_cert_file):
    # `App` is the ASGI app under test, defined elsewhere in the suite.
    config = Config(
        app=App, limit_max_requests=3, http="h11", ssl_certfile=server_cert_file
    )
    server = Server(config=config)
    config.load()
    CA.configure_trust(config.ssl)

    # Prepare the coroutine to serve the requests; limit_max_requests=3
    # makes the server exit once the three requests below are handled.
    run_request = server.serve()

    # Run the server and the client requests concurrently.
    results = await asyncio.gather(
        run_request,
        async_test_client.get("/"),
        async_test_client.get("/"),
        async_test_client.get("/"),
    )

Update: And httpx now speaks HTTP/2 as well 🎉

@gvbgduh
Member

gvbgduh commented Dec 14, 2019

@tomchristie I've been going round in circles with some issues/questions for a while. I'm currently implementing the simplest flow, and to finish it I'd love to hear your advice on the best strategy for closing the connection.

The issue is that each request-response exchange is tied to a stream on the connection. On the one hand, the spec advises that

Servers are encouraged to maintain open connections for as long as
possible but are permitted to terminate idle connections if
necessary.

per https://tools.ietf.org/html/rfc7540#section-9.1.

On the other hand, I see that in hypercorn (from not very thorough debugging) a connection is closed as soon as all its streams are done (in the absence of keep-alive policies). Shall we proceed the same way, or keep connections idle for a while?

How to manage the keep-alive policy is probably a separate question as well.

I also wonder whether it's worth keeping connection reuse (https://tools.ietf.org/html/rfc7540#section-9.1.1) in mind, but it seems to be a rather complex feature.

@tomchristie
Member Author

I don’t see any reason to differ from our HTTP/1.1 keep-alive policy.

The “keep them hanging around for as long as possible” seems like nonsense to me.
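One way to picture that policy (a stdlib-only sketch, not uvicorn's actual implementation; `close_connection` stands in for whatever sends GOAWAY and tears down the transport): arm an idle timer when the last open stream finishes, and cancel it whenever a new stream arrives.

```python
import asyncio


class IdleConnectionCloser:
    """Close an HTTP/2 connection after `timeout` seconds with no open
    streams, mirroring an HTTP/1.1-style keep-alive policy."""

    def __init__(self, close_connection, timeout: float = 5.0):
        self._close = close_connection  # hypothetical teardown callback
        self._timeout = timeout
        self._open_streams = 0
        self._timer = None

    def stream_opened(self):
        # New stream: the connection is busy, so cancel any pending close.
        self._open_streams += 1
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

    def stream_closed(self):
        # Last stream done: start the idle countdown.
        self._open_streams -= 1
        if self._open_streams == 0:
            loop = asyncio.get_running_loop()
            self._timer = loop.call_later(self._timeout, self._close)
```

The connection handler would call `stream_opened()` on each RequestReceived event and `stream_closed()` on each StreamEnded event.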

@gvbgduh
Member

gvbgduh commented Dec 15, 2019

Yeah, fair enough indeed. Not sure how I got stuck on other ideas, but it took writing it down to see it's quite obvious. Thanks!

@wyqsmith

Any progress on this issue?

@Kludex
Member

Kludex commented Mar 10, 2021

I'm going to give this a try this weekend.

@nstrelow

Kludex, did you have any luck? ^^

@Kludex
Member

Kludex commented Apr 21, 2021

I was on it and then I forgot I was working on it. 😅

I'll continue on it this weekend (and open a PR, even if uncompleted).

EDIT: I'm too busy. I will not be able to provide a PR any time soon. If interested, feel free to give it a try. 😗

@Vibhu-Agarwal
Contributor

Hello folks, I've been doing some work in this PR.
I think it's in a decent shape and some discussions could be started.
(also, I'm pretty bored and out-of-ideas right now, working alone on this)

I've written most of the important points in the first comment so that it's easier to review.

I'll ping about it in the Gitter as well 👍

@Kludex
Member

Kludex commented Sep 18, 2021

@tomchristie Should I assume this issue is not valid anymore, considering what was discussed here?

@tomchristie
Member Author

Yes - let's close this off.

uvicorn doesn't have HTTP/2 support, and that's okay.

It also doesn't have trio support, and that's okay too.

We ought to do a good job of promoting Hypercorn, which does have both of those features, and make sure that we're focused on improving the ASGI ecosystem as a whole, rather than trying to "win" in any particular category.

Being scoped to just asyncio + HTTP/1.1 + websockets is plenty enough for this particular project.

@synodriver

Yes, I can see this now: many ASGI servers just run behind a proxy (like nginx), and the proxy can add HTTP/2 support and similar features.
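For instance, a typical nginx front-end terminates HTTP/2 from browsers and proxies plain HTTP/1.1 to uvicorn (a sketch; the server name, certificate paths, and port are placeholders):

```nginx
server {
    listen 443 ssl http2;          # HTTP/2 is terminated here
    server_name example.com;       # placeholder

    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        # uvicorn speaks HTTP/1.1 on the backend hop
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```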

@Tinche

Tinche commented Feb 3, 2022

IMO HTTP/2 is much more interesting for browser <-> server communication, which is usually handled by something in front of uvicorn. I'd say that for microservice <-> microservice communication, true HTTP/2 (multiplexing streams) actually complicates load balancing a lot for not that much benefit.

@Tinche

Tinche commented Feb 3, 2022

In other words, good call if we're dealing with limited resources.

@adriangb
Member

adriangb commented Feb 3, 2022

Fun story: at some company I worked for, 3 engineers spent 2 weeks rewriting a WSGI app into an ASGI app that would use Hypercorn just to "get HTTP/2", under the false impression that it would help with client performance. Obviously the real issue was high-level architecture, but the part that's funny for this story is that I was able to pull up logs showing that most clients had already been using HTTP/2, because the app was behind NGINX anyway.

@jkemp101

jkemp101 commented Feb 3, 2022

I haven't dug into this in a while, but I'm interested in HTTP/2 as a way to avoid 503s in the typical setup of browser -> load balancer -> app, where you're regularly deploying the app. HTTP/1.1 can't guarantee you won't get 503s when the app processes roll, whereas HTTP/2's shutdown protocol can. It only affects a small percentage of requests, but that isn't negligible on a high-traffic site with regular deploys. Hopefully I'm not remembering the facts wrong on this one.

@tomchristie
Member Author

Fun story: at some company I worked for 3 engineers spent 2 weeks rewriting a WSGI app into an ASGI app that would use Hypercorn just to "get HTTP/2" under the false impression that would help with client performance.

Ah well now - that's a really pertinent anecdote. We've certainly got plenty we could be doing better in Uvicorn to help guide folks towards sensible engineering choices, rather than "ooohhhh, shiny".

Uvicorn has played a helpful part in the ASGI ecosystem's evolution, and "it's not slower than the tightly integrated server/framework approach" was an important part of arguing the case for ASGI. However, at this point we can ease off on that messaging, with a view to:

  • Promoting the ASGI ecosystem as a whole. Ensuring Daphne and Hypercorn are prominently highlighted.
  • Trying to guide users towards prioritizing aspects other than "fast because microbenchmark X". E.g. what's more robust, what's lowest complexity, what's likely to have longevity.
