Support maxUnavailable in rollout #65
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: kerthcet. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel.
/hold
Force-pushed from 31dd616 to 15ed653
Force-pushed from 317dc4b to f82172a
Main changes:
Other updates:
Another concern is about the size update, so I removed the rolling update for size changes for now and kept the field immutable as before. Please take a look, and sorry for the slow response; I was at KubeCon last week. cc @ahg-g @liurupeng
/hold cancel
@kerthcet we have someone working on the e2e test already, that work is in progress, thanks!
Force-pushed from a7e6bfc to a7db8d3
	return nil
}

// updates the condition of the leaderworkerset to either Progressing or Available.
func (r *LeaderWorkerSetReconciler) updateConditions(ctx context.Context, lws *leaderworkerset.LeaderWorkerSet) (bool, error) {
qq, will you add the lws.status.updatedReplica in the next PR?
For functionality it doesn't seem necessary, but for visibility it seems useful. Maybe I'll add it later, but it wasn't part of my original design.
TL;DR: I'll add the field.
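For context on the field under discussion, a rough sketch of where such a counter could live in the status. This is illustrative only; the type, field, and json tag names are assumptions, not the project's actual API:

	package v1

	import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	// LeaderWorkerSetStatus (illustrative excerpt, not the real definition).
	type LeaderWorkerSetStatus struct {
		// Conditions record whether the lws is Progressing or Available.
		Conditions []metav1.Condition `json:"conditions,omitempty"`
		// ReadyReplicas counts the groups whose leader and workers are ready.
		ReadyReplicas int32 `json:"readyReplicas,omitempty"`
		// UpdatedReplicas would count the groups already running the latest
		// leader/worker template revision, which is the visibility asked for here.
		UpdatedReplicas int32 `json:"updatedReplicas,omitempty"`
	}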
	templateHash := utils.LeaderWorkerTemplateHash(lws)
stop:
	for i := replicas - 1; i >= 0; i-- {
		for j := len(stsList.Items) - 1; j >= 0; j-- {
qq, why do we traverse the stsList from the last item to the first as well?
Because at first I thought the items would be ordered from the last index down to 0, which would speed up the calculation, but that turned out not to be the case; I just didn't change the implementation, out of laziness.
Anyway, I don't think it makes any difference? 😅
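For readers following the thread, here is the same fragment with comments spelling out why only the outer loop's direction matters. The loop body is elided and the hash comparison is inferred from the validator quoted later in this conversation, so treat it as a sketch rather than the exact implementation:

	templateHash := utils.LeaderWorkerTemplateHash(lws)
stop:
	// The outer loop walks group indices from the tail toward 0, which is what
	// matters for finding the rollout boundary.
	for i := replicas - 1; i >= 0; i-- {
		// The inner loop only searches stsList.Items for the statefulset backing
		// group i, so its traversal direction is irrelevant, as noted above.
		for j := len(stsList.Items) - 1; j >= 0; j-- {
			// ... match stsList.Items[j] to group i, then compare its
			// TemplateRevisionHashKey label against templateHash ...
		}
	}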
Thanks @kerthcet, this is a great implementation! Agree we should allow updating size in the next change (this change is already large enough). Regarding updating the size triggering a StatefulSet upgrade, could you elaborate more on this? When the upgrade happens, both the template hash label and the size annotation will change on the leader statefulset at the same time, right? Will the leader statefulSet start a rolling update again, and what is the behavior you saw?
This is really good, thanks Kante!
Force-pushed from e4473ca to f25bd10
/test pull-lws-test-integration-main
I think an immutable size is fine and a reasonable tradeoff to simplify the rolling update logic
Force-pushed from c6408ba to b906059
Force-pushed from 311c0f4 to 7ff79a2
Last 2 commits:
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
A couple of nits, otherwise this is good from my end!
test/e2e/e2e_test.go
	testing.UpdateWorkerTemplate(ctx, k8sClient, lws)

	// Wait for leaderWorkerSet ready again.
	testing.ExpectLeaderWorkerSetAvailable(ctx, k8sClient, lws, "All replicas are ready")
How do we ensure that this is checking the availability status after the update? Isn't it possible that we are still checking the previous status, before the lws got a chance to pick up the update and change the availability status to false?
Move this down, then we'll check the hash first to make sure we're validating the new revision.
I am not sure this will make a difference; it would be nice if we could check that the template hash changed, and then check that the lws is available.
I think we already do as you said, so the process is:
- update the lws
- ExpectValidLeaderStatefulSet will check that the template hash is already the updated one, see lws/test/testutils/validators.go, lines 121 to 124 in 59f5a76:
	hash := utils.LeaderWorkerTemplateHash(&lws)
	if sts.Labels[leaderworkerset.TemplateRevisionHashKey] != hash {
		return fmt.Errorf("mismatch template revision hash for leader statefulset, got: %s, want: %s", sts.Spec.Template.Labels[leaderworkerset.TemplateRevisionHashKey], hash)
	}
- ExpectLeaderWorkerSetAvailable validates that the lws is available.
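Put together, the intended ordering in the e2e test would look roughly like this; a sketch only, assuming the testutils helpers named above and approximating their signatures:

	// Trigger a rolling update by changing the worker template.
	testing.UpdateWorkerTemplate(ctx, k8sClient, lws)

	// First assert that the leader statefulset already carries the new template
	// revision hash, so the availability check below cannot pass against the
	// previous revision.
	testing.ExpectValidLeaderStatefulSet(ctx, k8sClient, lws)

	// Only then wait for the leaderWorkerSet to report Available again.
	testing.ExpectLeaderWorkerSetAvailable(ctx, k8sClient, lws, "All replicas are ready")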
Force-pushed from 75bc593 to 2b9da9c
All set @ahg-g
lgtm as well, I will wait for @ahg-g to approve
/label tide/merge-method-squash
Great work!
What type of PR is this?
/kind feature
What this PR does / why we need it
Which issue(s) this PR fixes
part of #3
Special notes for your reviewer
Does this PR introduce a user-facing change?