How to handle Writable | Hup? #449
If this is true, then it is a bug. Receiving HUP should make the socket readable and the next read should return 0.
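For context, the convention described above is the standard `Read` end-of-stream contract: once the peer has hung up and the buffered data is drained, a read returns `Ok(0)` rather than blocking. A minimal illustration using an in-memory `Cursor` standing in for a socket (this only demonstrates the contract, not mio/tokio internals):

```rust
use std::io::Read;

fn main() {
    // A cursor over a fixed buffer stands in for a socket whose peer
    // wrote 6 bytes and then hung up.
    let mut stream = std::io::Cursor::new(b"hello\n".to_vec());
    let mut buf = [0u8; 16];

    // The first read drains the available data.
    let n = stream.read(&mut buf).unwrap();
    assert_eq!(n, 6);

    // Subsequent reads return Ok(0): the EOF signal a reactor user
    // should see after HUP, rather than a WouldBlock loop.
    let n = stream.read(&mut buf).unwrap();
    assert_eq!(n, 0);
}
```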
Hi, macOS version 10.13.6. Follow the steps below to reproduce the problem:

```rust
let x = self.poll_read_ready(mio::Ready::readable());
println!("poll read ready {:?}", x);
if let Async::NotReady = x? {
    return Err(io::ErrorKind::WouldBlock.into());
}

let r = self.get_mut().read(buf);
println!("poll read {:?}", r);
if is_wouldblock(&r) {
    self.clear_read_ready(mio::Ready::readable())?;
}
```
poll read ready Ok(Ready(Readable | Hup))
poll read Ok(0)
poll read ready Ok(NotReady)
# ......
poll read ready Ok(Ready(Hup))
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup))
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup))
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup))
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup))
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup))
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup))
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup))
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Readable | Hup))
poll read Ok(0)
client wrote 6 bytes and received 6 bytes
Hi @SerhoLiu, can I get you to let me know which version of `tokio-reactor` you're using? I've been trying to reproduce this by following the steps you posted above (on macOS 10.12.6), and I don't see the behaviour you described on `master`. It's possible this was fixed between versions?

Proxy logs from `master`:
Listening on: 127.0.0.1:8081
Proxying to: 127.0.0.1:8080
poll read ready Ok(NotReady);
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Readable));
poll read Ok(8)
poll read ready Ok(Ready(Readable));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(NotReady);
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Readable));
poll read Ok(8)
poll read ready Ok(Ready(Readable));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Readable));
poll read Ok(9)
poll read ready Ok(Ready(Readable));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(NotReady);
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Readable));
poll read Ok(9)
poll read ready Ok(Ready(Readable));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Readable | Hup));
poll read Ok(0)
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Readable | Hup));
poll read Ok(0)
client wrote 17 bytes and received 17 bytes

Proxy logs from `tokio-reactor` v0.1.2:
Listening on: 127.0.0.1:8081
Proxying to: 127.0.0.1:8080
poll read ready Ok(NotReady);
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Readable));
poll read Ok(6)
// ...
poll read ready Ok(Ready(Readable | Hup));
poll read Ok(0)
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Ok(0)
client wrote 19 bytes and received 19 bytes

Proxy logs from `tokio-reactor` v0.1.1:
Listening on: 127.0.0.1:8081
Proxying to: 127.0.0.1:8080
poll read ready Ok(NotReady);
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Readable));
poll read Ok(8)
// ...
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Ok(0)
client wrote 21 bytes and received 21 bytes
It looks like the issue also exists in the current release version of `tokio-reactor` (v0.1.3).

Proxy logs from `tokio-reactor` v0.1.3:
Listening on: 127.0.0.1:8081
Proxying to: 127.0.0.1:8080
poll read ready Ok(NotReady);
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Readable));
poll read Ok(5)
poll read ready Ok(Ready(Readable));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(NotReady);
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Readable));
poll read Ok(5)
poll read ready Ok(Ready(Readable));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Readable));
poll read Ok(5)
poll read ready Ok(Ready(Readable));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(NotReady);
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Readable));
poll read Ok(5)
poll read ready Ok(Ready(Readable));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Readable));
poll read Ok(5)
poll read ready Ok(Ready(Readable));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(NotReady);
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Readable));
poll read Ok(5)
poll read ready Ok(Ready(Readable));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Readable | Hup));
poll read Ok(0)
poll read ready Ok(NotReady);
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Err(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })
poll read ready Ok(Ready(Hup));
poll read Ok(0)
client wrote 15 bytes and received 15 bytes
I did a bisect. It looks like the only change in that commit that could actually affect behaviour was the version bump. @stjepang, do you have any insight into this?
@hawkw Are you testing that on an x86-64 machine? Assuming the answer is "yes", I'm surprised because
Just double-checking: was the prior commit d91c775? Also, I wonder whether the bug persists if we use the latest master branch but change
@stjepang I tried that change. Also, to answer your other questions: yes, I'm testing on x86_64 (Core i7 7920HQ); and yes, the prior commit I tested was d91c775.
@hawkw I used https://github.com/tokio-rs/tokio/tree/e964c4136c91c535fb2dfee0a3327dc5a6599903 to reproduce the problem in #449 (comment); the problem also exists on FreeBSD 11.2. On my computer, current master has it as well 😅. So I guess it might be related to the behavior of kqueue, maybe.
@SerhoLiu interesting! Thanks for the additional information. I'll keep looking into it.
Tests must be written to catch this bug in the future.
@stjepang It is always possible as
I ran the reproduction against current master a few more times, and it looks like (on my machine, at least) the issue does still exist, but it occurs intermittently. That would explain why I wasn't able to reproduce it consistently. @stjepang, it seems like the
I ran into this problem recently with tokio-0.1.16. We have two middleboxes that proxy traffic between each other. In this scenario, it is important for us to maintain half-closed state. However, futures are polled excessively when a connection is actually half-closed. The key trigger logic is inside PollEvented2's write method. Suppose someone has a simple future that only does `try_ready!(socket.write(buf))`. Then, when this future starts to return NotReady and the remote socket has shut down its write channel, the following will happen:

This will continue until a threshold is reached (the local task is run 32 times), at which point the worker will turn and call the reactor (epoll). But even epoll won't break the loop unless the underlying write buffer has been drained: HUP makes this behave like a spinlock.

To fix this, we may need to distinguish the RDHUP and HUP events, so that writes are not affected by a read shutdown on the other side. However, this also involves mio, which currently does not distinguish RDHUP from HUP (https://github.com/carllerche/mio/blob/master/src/sys/unix/epoll.rs#L231). I have no idea how to deal with this without breaking current users.

This has become a serious problem for us: it easily freezes a multi-core machine with only a handful of connections. I'd like to discuss possible solutions, and I'm more than happy to contribute a fix.
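The busy-loop described above can be sketched with a toy model (my own simulation, with made-up types; not tokio's actual code). If HUP both satisfies write-readiness and can never be cleared, the poll/write/clear cycle never parks:

```rust
// Toy readiness model: the WRITABLE bit can be cleared, HUP cannot.
const WRITABLE: u8 = 0b01;
const HUP: u8 = 0b10;

// Hypothetical stand-in for PollEvented2's readiness state.
struct FakePollEvented {
    readiness: u8,
}

impl FakePollEvented {
    // Mirrors a poll_write_ready that treats HUP as write-ready.
    fn poll_write_ready(&self) -> bool {
        self.readiness & (WRITABLE | HUP) != 0
    }
    // Mirrors clear_write_ready: clears WRITABLE, but HUP sticks.
    fn clear_write_ready(&mut self) {
        self.readiness &= !WRITABLE;
    }
}

fn main() {
    // Half-closed peer: HUP is set, and writes keep returning WouldBlock
    // because the kernel send buffer is full.
    let mut io = FakePollEvented { readiness: WRITABLE | HUP };

    let mut polls = 0;
    // Cap the loop the way a cooperative scheduler might (e.g. 32 task runs).
    while polls < 32 && io.poll_write_ready() {
        // write() returns WouldBlock here, so the task clears readiness
        // and expects to park until the next edge event...
        io.clear_write_ready();
        // ...but HUP is still set, so the task is immediately ready again.
        polls += 1;
    }

    // The task never parks: it spins until the scheduler's threshold.
    assert_eq!(polls, 32);
    println!("spun {} times without making progress", polls);
}
```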
Are you able to provide a failing test case?
Yes. I attached a ready-to-build tarball with a README inside. Let me know if anything is unclear. You may find a similar tarball in quininer/tokio-rustls#33, which I believe is closely related. That one uses TLS, so it's more complicated, but the condition that triggers the problem is the same: the client shuts down its write side, and the server-side PollEvented.write returns EAGAIN due to write pressure or packet loss.
@carllerche do you have any plan to look into this issue (again) recently? It's a showstopper for us right now, so I hope it can be dealt with as soon as possible. The root cause is pretty clear to us at this point; would you like me to add more details here or in a new ticket, such as the approaches we have tried to fix it?
Could you open a PR w/ a failing test case illustrating the problem? I think I understand, but it would be most helpful to have this as a minimized test case that can be committed to the repo.
The problem is that if a HUP is not treated as writable, then in some cases there is no signal telling write tasks to shut down. One strategy would be to treat a HUP as a write event only one time (to avoid the loop).
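One way to read that suggestion is as a latch: remember whether HUP has already been handed to a write task, and report it only once. A rough sketch (hypothetical types and names; not the fix that actually landed):

```rust
const WRITABLE: u8 = 0b01;
const HUP: u8 = 0b10;

// Hypothetical per-socket write-readiness state.
struct WriteReadiness {
    readiness: u8,
    hup_delivered: bool,
}

impl WriteReadiness {
    /// Report write readiness, delivering HUP at most once so a task that
    /// clears WRITABLE and keeps seeing WouldBlock can actually park.
    fn poll_write_ready(&mut self) -> bool {
        if self.readiness & WRITABLE != 0 {
            return true;
        }
        if self.readiness & HUP != 0 && !self.hup_delivered {
            self.hup_delivered = true; // latch: HUP wakes writers only once
            return true;
        }
        false
    }
}

fn main() {
    let mut w = WriteReadiness { readiness: HUP, hup_delivered: false };
    assert!(w.poll_write_ready());  // first poll: HUP delivered once
    assert!(!w.poll_write_ready()); // second poll: the task can now park
    println!("HUP delivered exactly once");
}
```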
Got it. I took "failing test case" as "some external program". I will create a PR for this. Thanks.
Currently, EPOLLRDHUP sets UnixReady::hup(). This is incorrect behavior, because higher-level libraries like tokio (correctly) assume that UnixReady::hup() is unclearable, since it signals that both the read and write halves are shut down. In reality, EPOLLRDHUP only means that the TCP stream has been half-closed, and such a half-closed stream can still be written to.

This will fix a current issue with tokio, which is that tokio infinitely attempts to write to a half-closed socket that's returning WouldBlock when its write buffer is full, an issue which manifests as excessive CPU usage. I think this may help some of the issues discussed in tokio-rs/tokio#449.

After this change, EOF will still be propagated correctly, because read-hangups also trigger read-readiness via EPOLLIN. However, if handling of EPOLLRDHUP is desired to be retained, I believe it should be implemented as another readiness kind on UnixReady, perhaps UnixReady::read_hup().

Possible concern of a breaking change: since it's not currently possible for a user of mio to differentiate between EPOLLHUP and EPOLLRDHUP, it must be that no users of mio currently do. There _may_ be applications that test the "health" of a socket by checking for UnixReady::hup(), which would previously trigger on EPOLLRDHUP but will no longer with this change. Such applications will change from considering a half-closed connection closed to considering it open. However, I still believe this change is a correction of the semantics of HUP, and the behavior such applications desired was already ambiguous.

This also fixes a similar issue with kqueue.
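The claim in the commit message above that "such a half-closed stream can still be written to" is easy to verify with the standard library alone (plain blocking sockets on loopback, no mio involved):

```rust
use std::io::{Read, Write};
use std::net::{Shutdown, TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    // After the client shuts down its write side (what the server's
    // reactor would observe as EPOLLRDHUP), the server can still write
    // and the client can still read.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    let mut client = TcpStream::connect(addr)?;
    let (mut server, _) = listener.accept()?;

    // Client half-closes: no more data will flow client -> server.
    client.shutdown(Shutdown::Write)?;

    // The server sees EOF on read...
    let mut buf = [0u8; 8];
    assert_eq!(server.read(&mut buf)?, 0);

    // ...but the server -> client direction still works.
    server.write_all(b"bye")?;
    drop(server); // fully close so the client read below terminates

    let mut got = Vec::new();
    client.read_to_end(&mut got)?;
    assert_eq!(got, b"bye");
    println!("half-closed stream was still writable");
    Ok(())
}
```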
This should be resolved by the upcoming release of mio: tokio-rs/mio#942.
On my machine (macOS 10.14.4), the problem is still present after updating mio to v0.6.17. I couldn't write a minimized test case, but you can follow the steps below to reproduce the problem:
```diff
diff --git a/src/sys/unix/kqueue.rs b/src/sys/unix/kqueue.rs
index 371e0cf..7179d95 100644
--- a/src/sys/unix/kqueue.rs
+++ b/src/sys/unix/kqueue.rs
@@ -319,6 +319,13 @@ impl Events {
             }
         }

+        for event in &self.events {
+            if event.readiness().is_hup() {
+                assert!(event.readiness().is_writable(), "EV_EOF and EVFILT_WRITE");
+                assert!(event.readiness().is_readable(), "EV_EOF with EVFILT_WRITE not read ready");
+            }
+        }
+
         ret
     }
```
As I said, EV_EOF with EVFILT_WRITE does not indicate EVFILT_READ.
Yep. Braden and I have talked about it, so epoll is good now.
@SerhoLiu Just because you don't get readable in the same call to poll doesn't mean the socket isn't readable. I'm not exactly sure what specific behavior you would like to see changed.
@carllerche OK, but the problem in #449 (comment) still exists; maybe it's not related to tokio-rs/mio#942 ...
The latest release of mio fixes this on kqueue platforms. |
Related question: tokio-rs/tokio-socks5#13

When the socket is `Writable | Hup`, `poll_read_ready` (https://github.com/tokio-rs/tokio/blob/master/tokio-reactor/src/poll_evented.rs#L520) returns `Ok(Ready(Hup))`, but `self.get_ref().read(buf)` returns `WouldBlock`. How should this situation be handled? Any help?