Dependency resource propagation fails when the resource already exists in the member cluster #4410

Closed
CharlesQQ opened this issue Dec 13, 2023 · 11 comments · Fixed by #4418
Assignees
Labels: kind/feature (Categorizes issue or PR as related to a new feature)

Comments

@CharlesQQ
Member

CharlesQQ commented Dec 13, 2023

What happened:
A dependency resource such as a ConfigMap fails to propagate when the Deployment that mounts it as a volume is propagated and the ConfigMap has already been created in the member cluster.
(screenshot: propagation failure event for the ConfigMap)

The Deployment is matched by a PropagationPolicy, and its ResourceBinding has spec.conflictResolution: Overwrite:
(screenshots: the Deployment's ResourceBinding)

The ConfigMap has no PropagationPolicy of its own, and its ResourceBinding has spec.conflictResolution: Abort:
(screenshot: the ConfigMap's ResourceBinding)

What you expected to happen:
The dependency resource propagates successfully.

How to reproduce it (as minimally and precisely as possible):

  1. Create a ConfigMap in the member cluster manually.
  2. Create a Deployment and a PropagationPolicy on the control plane; the Deployment mounts the ConfigMap as a volume (see the manifest sketch below).
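
A minimal sketch of the manifests involved, assuming hypothetical names (`app-config`, `nginx`, `nginx-pp`, cluster `member1`) rather than the reporter's exact setup:

```yaml
# Step 1: created by hand in the member cluster beforehand.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
  namespace: default
data:
  key: value
---
# Step 2: created on the Karmada control plane.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                 # hypothetical name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: app-config    # the dependency that fails to propagate
---
# PropagationPolicy matching only the Deployment; the ConfigMap is
# picked up as a dependency via propagateDeps.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-pp
  namespace: default
spec:
  propagateDeps: true
  conflictResolution: Overwrite
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  placement:
    clusterAffinity:
      clusterNames:
      - member1               # hypothetical member cluster
```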

Anything else we need to know?:

Environment:

  • Karmada version: 1.8
  • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version):
  • Others:
@CharlesQQ added the kind/bug label on Dec 13, 2023
@CharlesQQ changed the title from "dependency resource rb field spec.conflictResolution=Abort even though main resource rb field spec.conflictResolution=Overwrite" to "Dependency resource propagation fails when the resource already exists in the member cluster" on Dec 13, 2023
@CharlesQQ
Member Author

After I modified the ConfigMap ResourceBinding's spec.conflictResolution to Overwrite, the ConfigMap propagated successfully.

@XiShanYongYe-Chang
Member

For resources that are currently propagated only as dependencies, if the resource has not previously been matched by a PropagationPolicy, the spec.conflictResolution field in its ResourceBinding defaults to Abort. Therefore, this behavior is currently as expected.
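
For reference, a trimmed sketch of what the attached ResourceBinding for such a dependency roughly looks like (field names per work.karmada.io/v1alpha2; the object and names are hypothetical, reusing the reproduction sketch above):

```yaml
apiVersion: work.karmada.io/v1alpha2
kind: ResourceBinding
metadata:
  name: app-config-configmap   # hypothetical; bindings are typically named <resource-name>-<kind>
  namespace: default
spec:
  conflictResolution: Abort    # default when the binding exists only for a dependency
  resource:
    apiVersion: v1
    kind: ConfigMap
    name: app-config
    namespace: default
```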

Hi @CharlesQQ, if these resources already exist in the member cluster, why do you need to forcefully propagate the dependent resources?

Let me ask for some ideas from others: @chaunceyjiang @GitHubxsy @RainbowMango

@chaunceyjiang
Member

Hi @CharlesQQ, if these resources already exist in the member cluster, why do you need to forcefully propagate the dependent resources?

Personally, I prefer to set the dependent resources to spec.conflictResolution: Overwrite. Semantically speaking, if A depends on B and A forcibly overwrites the resources of the member cluster, then B should also forcibly overwrite the resources of the member cluster. The reason is to keep the resources in the member clusters consistent with those in the control plane.

@CharlesQQ
Member Author

Hi @CharlesQQ, if these resources already exist in the member cluster, why do you need to forcefully propagate the dependent resources?

Because the ConfigMap's data might be changed through the API, those changes should be propagated to the member cluster.

@CharlesQQ
Member Author

Hi @CharlesQQ, if these resources already exist in the member cluster, why do you need to forcefully propagate the dependent resources?

Personally, I prefer to set the dependent resources to spec.conflictResolution: Overwrite. Semantically speaking, if A depends on B and A forcibly overwrites the resources of the member cluster, then B should also forcibly overwrite the resources of the member cluster. The reason is to keep the resources in the member clusters consistent with those in the control plane.

Agreed!

@chaunceyjiang
Member

Because the ConfigMap's data might be changed through the API, those changes should be propagated to the member cluster.

Makes sense.

@RainbowMango
Member

Personally, I prefer to set the dependent resources to spec.conflictResolution: Overwrite
I agree with @chaunceyjiang.

The ConflictResolution is declared on the PropagationPolicy, so it should apply to all matched resources.

@CharlesQQ Have you figured out the root cause?

@chaunceyjiang
Member

/assign

I will fix this issue; our environment uses this feature extensively.

@XiShanYongYe-Chang
Member

XiShanYongYe-Chang commented Dec 14, 2023

This issue seems more like a new requirement than a bug. Can we change its label to "feature"?

Here, it may be necessary to clarify a scenario. If the dependent resources have been propagated before and the conflictResolution policy is Abort, how do we want to handle it?

@chaunceyjiang
Member

Here, it may be necessary to clarify a scenario. If the dependent resources have been propagated before and the conflictResolution policy is Abort, how do we want to handle it?

Good question!

My suggestion is: if A depends on B, then the value of conflictResolution derived for B should be handled in two cases (see the sketch after this list).

  • Case one: if B's ResourceBinding is generated because B is selected by a PropagationPolicy of its own, then the value of B's RB conflictResolution should be consistent with B's PP. (Even if that PP's value is Abort, it doesn't matter, because B will always be propagated to the member cluster and remain consistent with the control plane.)
  • Case two: if B's ResourceBinding is generated because of A's dependency, then the value of B's RB conflictResolution should be consistent with A's PP.
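
A sketch of the intended outcome for the ConfigMap's ResourceBinding in each case, reusing the hypothetical names from the reproduction sketch above (trimmed to the relevant fields):

```yaml
# Case one: the ConfigMap is also selected by its own PropagationPolicy,
# so its binding keeps whatever that policy declares.
apiVersion: work.karmada.io/v1alpha2
kind: ResourceBinding
metadata:
  name: app-config-configmap
  namespace: default
spec:
  conflictResolution: Abort        # taken from the ConfigMap's own PP
  resource:
    apiVersion: v1
    kind: ConfigMap
    name: app-config
---
# Case two: the binding exists only because the Deployment depends on the
# ConfigMap, so it inherits the Deployment PP's value.
apiVersion: work.karmada.io/v1alpha2
kind: ResourceBinding
metadata:
  name: app-config-configmap
  namespace: default
spec:
  conflictResolution: Overwrite    # inherited from the Deployment's PP
  resource:
    apiVersion: v1
    kind: ConfigMap
    name: app-config
```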

@XiShanYongYe-Chang
Member

I agree with this point.
/cc @RainbowMango

I'll change the label.
/kind feature
/remove-kind bug
