go/runtime/host/multi: Propagate consensus sync requests to next version #5433
Conversation
Force-pushed from 04b29ac to 7a9b9a6.
Codecov Report

@@            Coverage Diff             @@
##           master    #5433      +/-   ##
==========================================
- Coverage   66.96%   66.37%   -0.60%
==========================================
  Files         533      533
  Lines       56365    56375      +10
==========================================
- Hits        37746    37419     -327
- Misses      14236    14530     +294
- Partials     4383     4426      +43
Looks good. I would just reorder the lines to keep host := agg.active.host together with the if check, and host.Call together with its error handling.

We could also call the next host in a goroutine to speed things up (sketched after the snippet below). Not sure if it is needed, though.

Not part of this PR, but it looks like call could be removed from the retry. That would make it more obvious that the call to the next runtime is only made once, even if the active call is retried multiple times.
var (
	activeHost host.Runtime
	nextHost   host.Runtime
)
callFn := func() error {
	// Snapshot the active and next hosts under the read lock, so that
	// the actual calls can be made without holding it.
	agg.l.RLock()
	defer agg.l.RUnlock()

	if agg.active == nil {
		return ErrNoActiveVersion
	}
	activeHost = agg.active.host

	if agg.next != nil {
		nextHost = agg.next.host
	}
	return nil
}
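For illustration, a minimal sketch of how the extracted hosts could then be used outside the lock, with the next-version call dispatched in a goroutine as suggested above. This assumes callFn, agg, ctx, and body from the enclosing method; the retry wrapper is omitted and the error handling is illustrative, not the code from this PR.

// Sketch only, not part of this PR.
if err := callFn(); err != nil {
	return nil, err
}

// Propagate to the next version concurrently so it does not delay the
// active-version call. (The lifetime of ctx would need care here, since
// it may be cancelled once the active call returns.)
if nextHost != nil {
	go func() {
		if _, err := nextHost.Call(ctx, body); err != nil {
			agg.logger.Warn("failed to propagate runtime request to next version",
				"err", err,
			)
		}
	}()
}

// Call the active version and return its result.
return activeHost.Call(ctx, body)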
Force-pushed from 7a9b9a6 to 73417e6.
_, nextErr := nextHost.Call(ctx, body)
if nextErr != nil {
	agg.logger.Warn("failed to propagate runtime request to next version",
		"id", agg.ID(),
In another PR we could also update the logger, e.g. logging.GetLogger("runtime/host/multi").With("id", id). I guess the version is important here, but it can also be read from the logs.
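A minimal sketch of that suggestion, using oasis-core's logging package; the constructor name is hypothetical and only shows where the With binding would happen.

// Hypothetical helper: bind the runtime ID to the logger once, so that
// individual log calls no longer need to pass "id" explicitly.
func newAggregateLogger(id common.Namespace) *logging.Logger {
	return logging.GetLogger("runtime/host/multi").With("id", id)
}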
Previously, periodic consensus sync requests were not propagated to the next (i.e. upcoming) runtime version. This could result in the runtime's consensus view going stale, which would make its attestations too old, so they would be rejected during scheduling. Additionally, key manager update requests should also be propagated to ensure the runtime is ready immediately when activated, avoiding any potential race conditions.
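As a hedged sketch of the selection logic this describes, a predicate over the request body might look as follows. The protocol.Body field names here are assumptions for illustration and have not been verified against the actual protocol definitions.

// Sketch: which requests should also be sent to the next version.
// Field names on protocol.Body are assumptions, not verified.
func shouldPropagateToNext(body *protocol.Body) bool {
	switch {
	case body.RuntimeConsensusSyncRequest != nil:
		// Keep the next version's consensus view (and thus its
		// attestations) fresh.
		return true
	case body.RuntimeKeyManagerPolicyUpdateRequest != nil:
		// Keep key manager state ready so activation has no race.
		return true
	default:
		return false
	}
}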
Force-pushed from 73417e6 to 0ffa2f9.