limit vertex size with 50000 to avoid huge grpc message making client buffer overflow #2456
Conversation
Force-pushed from 369fb30 to e52da28
This does not look correct. All items after 50000 seem to be sent separately. If your case is that you have a lot of log messages with small …
Not all items after 50000 are sent separately; it's every 50000 items that go out as a separate message. In this case, limiting the number of messages is more accurate than limiting the size of the log, and it may be better to limit both the number of messages and the size of the log at the same time.
Still, where is that 50000 coming from? Take the size of …
Similar to the 1M log data size, 50000 is an empirical value that keeps the message size under the safe line. I agree with you that vertexLog.Size() is logically much better, but it's expensive. Considering that this kind of scenario is relatively rare, either of the other two methods is enough; which one do you prefer?
I mean that there is no actual known limit now. Sometimes the limit might be 1M (e.g. if it is a single log) but if there are a lot of items it could be much higher (1M + 50K * empty record size). We should stick to one precise limit.
I think the server's sending buffer limit should be consistent with the client's receiving buffer limit (it's 16M now).
There is no need to set it very big as it also means that the client view becomes less responsive. I think 1MB is fine, just need to count it correctly. |
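To make the accounting concrete: below is a minimal sketch of size-based splitting under the 1MB budget discussed above, assuming the gogo/protobuf-generated controlapi.VertexLog (Msg payload field, Size() method). splitLogs is a hypothetical helper, not the PR's code; the fixed per-record overhead is measured once from an empty record instead of calling Size() on every entry, which the thread notes is expensive.

```go
package main

import (
	controlapi "github.com/moby/buildkit/api/services/control"
)

const maxLogChunkSize = 1024 * 1024 // the 1MB budget agreed on above

// splitLogs partitions log records into chunks whose estimated encoded size
// stays under maxLogChunkSize. Each record is counted as its payload plus the
// fixed overhead of an empty record, measured once up front.
func splitLogs(logs []*controlapi.VertexLog) [][]*controlapi.VertexLog {
	emptySize := (&controlapi.VertexLog{}).Size()
	var chunks [][]*controlapi.VertexLog
	var cur []*controlapi.VertexLog
	size := 0
	for _, l := range logs {
		recSize := len(l.Msg) + emptySize
		if size+recSize > maxLogChunkSize && len(cur) > 0 {
			chunks = append(chunks, cur)
			cur, size = nil, 0
		}
		cur = append(cur, l)
		size += recSize
	}
	if len(cur) > 0 {
		chunks = append(chunks, cur)
	}
	return chunks
}
```

This is an estimate rather than an exact wire size (it ignores varint length prefixes), but it gives the "one precise limit" behavior the review asks for at constant per-record cost.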
Force-pushed from 398e355 to c6a7be8
control/control.go
Outdated
@@ -325,6 +325,8 @@ func (c *Controller) Status(req *controlapi.StatusRequest, stream controlapi.Con
	eg.Go(func() error {
		return c.solver.Status(ctx, req.Ref, ch)
	})
	emptyLogVertex := controlapi.VertexLog{}
	emptyLogVertexSize := emptyLogVertex.Size()
you can make this global and move setting it to init()
fixed
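For illustration, what the suggestion amounts to, as a minimal sketch (same assumption of a gogo-generated Size() on controlapi.VertexLog):

```go
// Package-level; computed once at startup instead of on every Status call.
var emptyLogVertexSize int

func init() {
	emptyLogVertexSize = (&controlapi.VertexLog{}).Size()
}
```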
control/control.go
Outdated
	// avoid logs growing big and split apart if they do
	if logSize > 1024*1024 {
		ss.Vertexes = nil
		ss.Statuses = nil
		ss.Logs = ss.Logs[i+1:]
		retry = true
		break
	}else {
		retry = false
why this?
after the first big message, retry is always true, and there is an endless loop.
hmm, looks weird indeed. maybe the retry := false should be inside the for
fixed
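A sketch of the agreed shape: with retry declared inside the outer for, it is reset on every pass, so the else branch that caused the endless loop is no longer needed. Names mirror the diff above; toAPILog and sendStatus are hypothetical stand-ins for code elided from the hunk.

```go
package main

import (
	controlapi "github.com/moby/buildkit/api/services/control"
	"github.com/moby/buildkit/client"
	"google.golang.org/grpc"
)

// toAPILog is a hypothetical helper standing in for the conversion the real
// method does inline.
func toAPILog(v *client.VertexLog) *controlapi.VertexLog {
	return &controlapi.VertexLog{
		Vertex:    v.Vertex,
		Stream:    int64(v.Stream),
		Msg:       v.Data,
		Timestamp: v.Timestamp,
	}
}

// sendStatus streams one SolveStatus, splitting it into multiple messages
// whenever the accumulated log size crosses the 1MB budget.
func sendStatus(stream grpc.ServerStream, ss *client.SolveStatus) error {
	for {
		retry := false // reset on every pass; no else branch needed
		sr := controlapi.StatusResponse{}
		logSize := 0
		for i, v := range ss.Logs {
			sr.Logs = append(sr.Logs, toAPILog(v))
			logSize += len(v.Data) + emptyLogVertexSize
			// avoid logs growing big and split apart if they do
			if logSize > 1024*1024 {
				ss.Vertexes = nil // already sent; conversion elided in this sketch
				ss.Statuses = nil
				ss.Logs = ss.Logs[i+1:] // remainder goes in the next pass
				retry = true
				break
			}
		}
		if err := stream.SendMsg(&sr); err != nil {
			return err
		}
		if !retry {
			return nil
		}
	}
}
```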
control/control.go
Outdated
	// avoid logs growing big and split apart if they do
	if logSize > 1024*1024 {
		ss.Vertexes = nil
		ss.Statuses = nil
		ss.Logs = ss.Logs[i+1:]
		retry = true
		break
	}else {
doesn't look formatted.
fixed
Please squash the commits as well
limit vertex size with 50000 to avoid huge grpc message making client buffer overflow
Signed-off-by: sunchunming <[email protected]>
Force-pushed from 90d30bd to bb24010
Fixes the issue in #1539.
When running a Maven compile in a Dockerfile, the status gRPC response contains a lot of log vertexes (>200,000). The gRPC headers and other fields in the vertex messages push the gRPC message past 20M, which causes the receiving buffer in buildctl to overflow.
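For context, the client-side limit mentioned in the review is a grpc-go dial option rather than a growable buffer; a message over the limit fails the RPC. A minimal sketch using the standard google.golang.org/grpc API (the socket path is a placeholder; buildctl's real wiring differs):

```go
package main

import "google.golang.org/grpc"

// dial raises grpc-go's default 4MB receive limit to the 16MB mentioned in
// the review; any server message larger than that fails the RPC instead of
// being buffered.
func dial() (*grpc.ClientConn, error) {
	return grpc.Dial(
		"unix:///run/buildkit/buildkitd.sock",
		grpc.WithInsecure(),
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(16<<20)),
	)
}
```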