sync deployment replicas when it is controlled by hpa #4707
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##           master    #4707      +/-   ##
==========================================
+ Coverage   51.41%   51.68%   +0.27%
==========================================
  Files         250      247       -3
  Lines       24979    24777     -202
==========================================
- Hits        12842    12807      -35
+ Misses      11431    11264     -167
  Partials      706      706
Force-pushed from 2c47882 to 6c3063b (compare)
Review threads marked resolved (outdated) on:
- pkg/controllers/hpaautolabelretain/hpa_auto_label_retain_predicate.go
- pkg/controllers/hpaautolabelretain/hpa_auto_label_retain_controller.go (2 threads)
- pkg/controllers/deploymentreplicassyncer/deployment_replicas_syncer_controller.go (5 threads)
Force-pushed from 5ea3451 to 814d389 (compare)
This feature needs release notes. /assign @XiShanYongYe-Chang
/lgtm
Signed-off-by: chaosi-zju <[email protected]>
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED.
This pull-request has been approved by: RainbowMango. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
What type of PR is this?
/kind feature
What this PR does / why we need it:
As for issue #4058, the previous implementation cannot cover all cases, such as:

- If the user modifies `spec.replicas` in the template, the modification would be retained, causing the ready replicas to differ from the expected replicas when getting the deployment (e.g., the user gets the deployment and sees replicas ready/expect=2/2; then manually updates `spec.replicas` to 3; then gets the deployment again and sees replicas ready/expect=2/3; this display is confusing for the user).
- For the `Duplicated` schedule type, the `spec.replicas` in the resource template has a different meaning, so it is not suitable to do this kind of replicas sync.

So, I introduced a `deploymentReplicasSyncer` to replace the previous `hpaReplicasSyncer`:

- It only takes effect on Deployment (using native `hpa` is not the way we recommend, while `FederatedHPA` is what we recommend; however, a small number of existing users have already used Deployment this way and we have to support it, while other resources have not been used this way, so we hope those users will directly adopt the recommended way).
- Watching the `HPA` resource takes a detour and hides a lot of unpredictable results; it is more suitable to directly watch the `Deployment` resource. That is, replace `spec.replicas` with `status.replicas` in the template if the two are not equal (see the sketch after this list).

As for the expected system behavior, it can be concluded as the cases described in the special notes below.
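Below is a minimal sketch of that syncing idea, written against controller-runtime. The package name, the `Reconciler` type, and the `example.io/controlled-by-hpa` label check are illustrative assumptions for this sketch, not the PR's actual code.

```go
// Package deploymentreplicassyncer sketches the controller described
// above: keep a Deployment template's spec.replicas in sync with its
// aggregated status.replicas when an HPA controls the workload.
package deploymentreplicassyncer

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Reconciler is an illustrative reconciler, not the PR's actual type.
type Reconciler struct {
	client.Client
}

func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	deploy := &appsv1.Deployment{}
	if err := r.Get(ctx, req.NamespacedName, deploy); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Hypothetical marker: only sync deployments that an HPA controls.
	// The real controller determines this differently.
	if deploy.Labels["example.io/controlled-by-hpa"] != "true" {
		return ctrl.Result{}, nil
	}

	// Core idea from the description: if spec.replicas drifts from the
	// aggregated status.replicas, overwrite spec with status.
	if deploy.Spec.Replicas == nil || *deploy.Spec.Replicas != deploy.Status.Replicas {
		replicas := deploy.Status.Replicas
		deploy.Spec.Replicas = &replicas
		if err := r.Update(ctx, deploy); err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}

// SetupWithManager watches Deployments directly rather than detouring
// through HPA objects, per the design note above.
func (r *Reconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.Deployment{}).
		Complete(r)
}
```

Watching the Deployment itself keeps the reconcile loop independent of whichever HPA flavor drives the scaling.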
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
I would much appreciate everyone involved in this issue participating in the review: @XiShanYongYe-Chang @lxtywypc @chaunceyjiang
Feel free to propose a better resolution; however, it should generally satisfy the following cases (a toy sketch of these end states appears at the end of this description):

case 1) template (ready/expect=2/2), propagated to two clusters -> HPA scales the member cluster replicas to 4 in total -> template (ready/expect=4/4), even in scenarios such as HPA abnormality or member cluster power-down.

case 2) template (ready/expect=2/2), propagated to two clusters -> user updates `spec.replicas` in the template from 2 to 3 -> whatever the intermediate states are, the final state must be 2/2, and cannot be 2/3.

case 3) template (ready/expect=1/1), propagated to two clusters -> user updates `spec.replicas` in the template from 1 to 2 -> the final state should be 2/2, meaning the member cluster that previously had no deployment now runs one replica.

Does this PR introduce a user-facing change?:
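To make the target end states of the three cases above concrete, here is a toy, self-contained sketch. The single convergence rule it encodes (the template's `spec.replicas` follows the total replicas actually running across member clusters) is my reading of the description, and `convergedSpec` is a hypothetical helper, not code from this PR.

```go
package main

import "fmt"

// convergedSpec applies the rule "spec.replicas follows status.replicas
// when they differ": the template's expected count always converges to
// the total replicas actually running across member clusters.
func convergedSpec(memberTotal int32) int32 { return memberTotal }

func main() {
	// case 1: the HPA scales members to 4 in total -> template 4/4.
	fmt.Printf("case 1: ready/expect=%d/%d\n", 4, convergedSpec(4))
	// case 2: user edits the template 2->3, but members keep running 2 -> 2/2.
	fmt.Printf("case 2: ready/expect=%d/%d\n", 2, convergedSpec(2))
	// case 3: user edits the template 1->2 and the scheduler realizes it -> 2/2.
	fmt.Printf("case 3: ready/expect=%d/%d\n", 2, convergedSpec(2))
}
```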