Shouldn't close on the Sink return Async::NotReady until the remote has responded with a close frame? #69
Comments
From trying this out in the library I use tokio-tungstenite from, I notice the following: if the remote end only tries to send, and never reads, it will never see the close frame, and thus never answer it. The code that triggers this is test code; in real-life code there is almost always a reader task that processes incoming messages in a loop, triggering a read wakeup when a new message arrives. It does mean, however, that such an implementation of close should always be combined with a timeout, just in case, to avoid the program hanging.
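For illustration, the timeout guard described above could be sketched like this. This is a std-only sketch, not a tokio-tungstenite API: `close_with_timeout` and its `poll_close` closure (standing in for polling the real close future) are invented names.

```rust
use std::time::{Duration, Instant};

/// Hypothetical helper: keep polling a close operation until it completes
/// or a deadline passes, so a peer that never reads (and therefore never
/// answers our close frame) cannot hang the program forever.
fn close_with_timeout(
    mut poll_close: impl FnMut() -> bool,
    timeout: Duration,
) -> Result<(), &'static str> {
    let deadline = Instant::now() + timeout;
    loop {
        if poll_close() {
            return Ok(()); // close handshake finished
        }
        if Instant::now() >= deadline {
            return Err("close handshake timed out");
        }
        std::thread::sleep(Duration::from_millis(1)); // back off before re-polling
    }
}

fn main() {
    // A "remote" that acknowledges the close on the third poll.
    let mut polls = 0;
    let ok = close_with_timeout(|| { polls += 1; polls >= 3 }, Duration::from_secs(1));
    assert_eq!(ok, Ok(()));

    // A remote that never acknowledges: we give up at the deadline.
    let err = close_with_timeout(|| false, Duration::from_millis(10));
    assert!(err.is_err());
}
```

In a real async application the same effect is usually achieved by racing the close future against a timer instead of sleeping in a loop, but the shape of the guarantee is the same: the close either completes or fails within a bounded time.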
Playing around with this a bit more, I notice that it's not possible to receive incoming messages anymore after sending a close frame. Since the remote might be sending a data message just as you send a close, it seems like the receiving end should remain functional until we receive an acknowledgement; that would avoid data loss. Since the send on the remote side did not return an error, and it happened before they acknowledged the close frame, it seems to me that tungstenite should not silently force data loss on the user. I can't really see a good reason to swallow incoming data while waiting for the close to be acknowledged. Client code can choose to ignore those messages if they want, but tungstenite shouldn't make that decision for them.
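The behaviour being argued for can be pictured as a small state machine: after we send a close frame we stay in a "closing" state where incoming data frames are still surfaced to the user, and only the incoming close frame (the acknowledgement) moves us to "closed". A rough, library-agnostic sketch, where the `Frame` and `State` types are invented for illustration and are not tungstenite's actual types:

```rust
/// Simplified frame type, for illustration only.
#[derive(Debug, PartialEq)]
enum Frame {
    Data(String),
    Close,
}

#[derive(Debug, PartialEq, Clone, Copy)]
enum State {
    Open,
    CloseSent, // we sent a close frame and are waiting for the acknowledgement
    Closed,
}

/// Feed one incoming frame through the state machine. Data frames are
/// still returned to the caller while we wait for the close to be
/// acknowledged, instead of being silently dropped.
fn on_incoming(state: &mut State, frame: Frame) -> Option<Frame> {
    match (*state, frame) {
        (State::Closed, _) => None, // connection already fully closed
        (_, Frame::Close) => {
            *state = State::Closed; // acknowledgement (or peer-initiated close)
            Some(Frame::Close)
        }
        (_, Frame::Data(d)) => Some(Frame::Data(d)), // delivered even in CloseSent
    }
}

fn main() {
    let mut state = State::CloseSent; // we just sent a close frame

    // The remote had a data message in flight: it must not be lost.
    assert_eq!(
        on_incoming(&mut state, Frame::Data("in flight".into())),
        Some(Frame::Data("in flight".into()))
    );

    // The acknowledgement arrives and completes the handshake.
    assert_eq!(on_incoming(&mut state, Frame::Close), Some(Frame::Close));
    assert_eq!(state, State::Closed);
}
```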
I think it would be problematic to implement this logic as that would mean that I believe in real use cases the users would not call
Yes, it was before we introduced
If you only have a
Right. Since WebSocket is a bidirectional connection, normally the user is interested in both sending messages and receiving. If only
Oh, that sounds bad, does it really behave so? I shortly checked the
I totally agree with you! If messages get lost in the described scenario, it's a bug which must be fixed. Were you able to reproduce this with logs?
I started checking your pull request and I see that indeed it may lose data. There is a remark in the RFC for this particular case though (https://tools.ietf.org/html/rfc6455#section-5.5.1).
But I do agree that it seems more logical to process the messages and not silently drop them in this case.
Thanks for looking into it. The remark from the RFC is why I said "The RFC is not specific about this". It doesn't state that an endpoint should continue processing incoming messages, but I think it makes sense as a default, and in any case it should be up to the app dev, not tungstenite, to decide this.
I actually started implementing this (in my lib that works on top of tokio-tungstenite) and will post a link here as soon as I have finished all the error handling and documentation. The reason I didn't try this for tokio-tungstenite is that I haven't looked into what it implies in futures 0.1. I have more experience with std futures, where it's a matter of returning Pending from close, storing the waker, and waking up the task in the reading part when a close frame is detected or the stream returns None. There is one point I have not yet considered: when client code does not split the reader and writer apart. In that case the await on the close would indeed just block forever, since the user won't be polling the reading part anymore. I'll probably just end up documenting that users should run close concurrently with the reader part, but I'll have to think about that a bit.
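With std futures, the mechanism described above (return Pending from close, stash the waker, wake when the close frame arrives) can be sketched in a few lines. Everything here — `CloseState`, `CloseFuture`, `on_close_frame` — is invented for illustration and is not tokio-tungstenite's actual implementation:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Shared flag set by the reading side once the remote's close frame
/// (or end-of-stream) is observed, plus the waker of the task awaiting close.
#[derive(Default)]
struct CloseState {
    acked: bool,
    waker: Option<Waker>,
}

/// Future returned by a hypothetical `close()`: it resolves only once
/// the remote has acknowledged the close handshake.
struct CloseFuture(Arc<Mutex<CloseState>>);

impl Future for CloseFuture {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut s = self.0.lock().unwrap();
        if s.acked {
            Poll::Ready(())
        } else {
            s.waker = Some(cx.waker().clone()); // store the waker, stay Pending
            Poll::Pending
        }
    }
}

/// Called from the reader task when it sees the remote's close frame.
fn on_close_frame(state: &Arc<Mutex<CloseState>>) {
    let mut s = state.lock().unwrap();
    s.acked = true;
    if let Some(w) = s.waker.take() {
        w.wake(); // wake the task blocked in `close().await`
    }
}

/// No-op waker, just so we can poll the future by hand in this example.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let state = Arc::new(Mutex::new(CloseState::default()));
    let mut fut = CloseFuture(state.clone());
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    // First poll: the remote hasn't acknowledged yet.
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Pending);

    // The reader task observes the remote's close frame...
    on_close_frame(&state);

    // ...and now the close future resolves.
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(()));
}
```

Note the caveat from the comment above: if the reader and writer are not split, awaiting this future on the same task that drives reads would block forever, because nothing would ever call `on_close_frame`.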
@application-developer-DA Ok, so I dug some more through the tungstenite source code and I now see that it handles keeping the underlying connection alive long enough. I did a writeup of what I found out so others can benefit from documentation rather than having to dig through the source code: snapview/tungstenite-rs#79.
Seems to have been somewhat discussed in #4.
`close` being async, it seems to me that it should only resolve once the remote has confirmed the close. This allows client code to await the close before dropping the underlying network connection, in order to close down properly. In an async-await version this would imply storing the waker and waking up the task waiting for the close to complete.