Unbuffered chunked streaming #324
Comments
That sounds interesting, and #129 (comment) seems related. It was always kind of clear that forking the reverse proxy would eventually happen, but I'm glad that for the most part I didn't have to. So my suggestion would be this:
@tommyvicananza I believe this is related to #655, which may solve your issue. It does not implement a configurable flush, but it does re-add the Flush method that was lost in a previous change.
Hi, we have the same issue (fabio:1.5.15-go1.15.5). Our use case is to stream (via SSE) all statuses of a given object. Because the buffer never fills, the content is never displayed. Updating proxy.flushinterval in fabio.properties has no effect. Is it possible to configure the buffer size used while streaming?
This is a serious problem when putting Fabio in front of HashiCorp Nomad: hashicorp/nomad#5404. If I access my Nomad instance directly, viewing the logs of any running job works fine, but if I view them through Fabio it doesn't work. What's the status of this issue, please?
I hit the same problem. @sbrl, per-route config for this would be ideal; would you be willing to submit it as a PR, @tommyvicananza? The workaround for now is to set. In the environment I'm working on, globally disabling buffering should be fine, but for many this will be very inefficient. Nomad does not trigger the SSE handling, as it uses content type "application/json".
Thanks, @james-masson! I've implemented that as a temporary fix. It seems to make things work in some scenarios where they didn't before, but not others (e.g. when there aren't many logs), so there's still work that needs doing here.
Does that mean that even implementing this fix won't completely solve the problem? If so, that would explain the behaviour I'm seeing.
I tried to set
Unbuffered chunked streaming
Fabio uses the Go ReverseProxy, which buffers the outgoing response to the client: it wraps the ResponseWriter in a buffered writer that holds all outgoing data in a 4 kB buffer.

As a workaround, an Accept: text/event-stream request header can be set to enable periodic flushing of that buffer. The header must already be present on the incoming request, because the flush interval has to be set in the ReverseProxy constructor. Setting the accept header is not suitable for us, because we fetch the chunked resource directly from the browser; it streams live data.
In addition, buffering up to 4 kB before forwarding the response increases the latency of every request.
That's why we modified the default ReverseProxy implementation and added a Flush() after each write to the response writer, which solves both problems for us. If you think it makes sense to adopt this change, we would be happy to open a pull request.