Should AsyncRead/AsyncWrite require pinned self? #1272
Comments
I believe the argument is that Rust will gain support for async generators, and at that point … I'm personally hesitant to future-proof at this point, and would be inclined to switch `AsyncRead`/`AsyncWrite` back to … At some point in the future, if it becomes obvious that these traits should take …
cc @seanmonstar
After rust-lang/futures-rs#1454, the status quo in …

This is a weird compromise where:

Using unboxed … Of course, reconciling the definitions of …
Many … And, afaik, you need boxing or …
Note that the …
This is the current definition of `AsyncRead` (omitting provided methods):
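For reference, this is roughly what the preview-era definition looked like; the exact waker/context argument changed across `futures` 0.3 preview releases, so treat the snippet below as an approximation rather than a verbatim quote:

```rust
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};

// Approximate shape of the trait under discussion. The key detail is
// the `self: Pin<&mut Self>` receiver: a caller must pin the reader
// before it can be polled at all.
pub trait AsyncRead {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<usize>>;
}
```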
`Pin` guarantees that the value will not be moved until it is dropped. For futures, this makes a lot of sense because once a future yields its value, it no longer has any use and can be dropped.

However, for `AsyncRead`/`AsyncWrite` types, I think the guarantee `Pin` makes is too strong.

We'll eventually want higher-level APIs like these, either on this trait or on an extension trait:
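A sketch of the kind of higher-level API meant here, under an invented `AsyncReadExt`-style extension trait (the `futures` crate later shipped something similar, but every name and detail below is illustrative rather than the issue's code):

```rust
use std::future::Future;
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};

// Base trait, repeated so this sketch is self-contained.
pub trait AsyncRead {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<usize>>;
}

/// Hypothetical extension trait: `read` returns a future that borrows
/// the reader and the buffer for the duration of a single read.
pub trait AsyncReadExt: AsyncRead {
    fn read<'a>(&'a mut self, buf: &'a mut [u8]) -> Read<'a, Self>
    where
        // This bound is the sticking point: without `Unpin` there is
        // no way to re-pin `self` from inside the returned future.
        Self: Unpin,
    {
        Read { reader: self, buf }
    }
}

impl<T: AsyncRead + ?Sized> AsyncReadExt for T {}

pub struct Read<'a, R: ?Sized> {
    reader: &'a mut R,
    buf: &'a mut [u8],
}

impl<'a, R: AsyncRead + Unpin + ?Sized> Future for Read<'a, R> {
    type Output = io::Result<usize>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // `Read` itself is `Unpin` (it only holds references), so we can
        // reach its fields and re-pin the reader for one poll.
        let this = self.get_mut();
        Pin::new(&mut *this.reader).poll_read(cx, &mut *this.buf)
    }
}
```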
But unless all `AsyncRead`/`AsyncWrite` implementations are `Unpin`, there will be no way for these methods to ensure that `Self` is pinned from within the future implementation.

The guarantee should be that `Self` is pinned whilst the asynchronous operation (which ends with the destruction of the returned future) is underway, not that `Self` is pinned forever.

One way to solve this problem would be to introduce a more powerful `Future` trait, and then have `AsyncRead` expose a method returning it instead of `poll_read`.
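One hedged reconstruction of that idea, with all names (`BufFuture`, `poll_with`, `ReadFuture`) invented for this sketch: the operation is handed a buffer at every poll, and a provided `into_future` adapter turns it into an ordinary `std::future::Future` by owning a persistent buffer. An `AsyncRead` implementation would then return such an operation from a method replacing `poll_read`. The `Unpin` bound exists only to keep the sketch simple:

```rust
use std::future::Future;
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};

/// Hypothetical "more powerful" future: the caller supplies the buffer
/// on each poll, so no buffer has to live inside the operation itself.
pub trait BufFuture {
    fn poll_with(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<usize>>;

    /// Provided adapter: trades a persistent, owned buffer for an
    /// ordinary `std::future::Future`. An implementation that wants the
    /// OS to write straight into a caller-provided buffer could
    /// override this instead of using the default.
    fn into_future(self, buf: Vec<u8>) -> ReadFuture<Self>
    where
        Self: Sized + Unpin,
    {
        ReadFuture { op: self, buf }
    }
}

pub struct ReadFuture<T> {
    op: T,
    buf: Vec<u8>,
}

impl<T: BufFuture + Unpin> Future for ReadFuture<T> {
    /// Resolves to the filled portion of the persistent buffer.
    type Output = io::Result<Vec<u8>>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.get_mut();
        match Pin::new(&mut this.op).poll_with(cx, &mut this.buf) {
            Poll::Ready(Ok(n)) => {
                let mut out = std::mem::take(&mut this.buf);
                out.truncate(n);
                Poll::Ready(Ok(out))
            }
            Poll::Ready(Err(e)) => Poll::Ready(Err(e)),
            Poll::Pending => Poll::Pending,
        }
    }
}
```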
This still allows the caller to avoid keeping around a persistent buffer just in case there is data to be read, but it can also be safely adapted into an API based on `std::future::Future` at the cost of using a persistent buffer.

There is an additional benefit too: if the underlying implementation relies on there being a persistent buffer anyway (as I believe is the case on Windows?), then the implementation could override the default `into_future` method to have the OS write directly into the provided buffer.