
kube->kms s2s Policy is Being Deleted Too Early #738

Closed
stephaniegalang opened this issue Mar 11, 2024 · 6 comments
Labels
bug 🐞 Something isn't working internal-synced

Comments


stephaniegalang commented Mar 11, 2024

Upon cluster deletion, Terraform deletes the kms s2s policy before (or too soon after) the cluster is deleted. This breaks the kms key that was associated with the deleted cluster: if the kms s2s policy is deleted before IKS removes the cluster's association with the kms key, IKS cannot execute the removal and the key becomes un-deletable.

This appears to be happening after the #645 fix, where users are now able to provision clusters with kms keys.

The kms s2s policy MUST exist during cluster deletion. IKS may need a few minutes after cluster deletion to execute the removal of the association with the kms key using the kms s2s policy.

Some fix suggestions:

Note: I am a representative of the Key Protect service. I do not have any logs associated with this bug, and it is possible that this bug produces no error in Terraform.

Affected modules

Terraform CLI and Terraform provider versions

  • Terraform version:
  • Provider version:

Terraform output

Debug output

Expected behavior

When deleting an IKS cluster associated with a kms key, a s2s policy between the cluster and kms should be in place to allow IKS to remove the association with the kms key, allowing the key to be deleted.

Actual behavior

When deleting an IKS cluster associated with a kms key, the s2s policy between IKS and kms is removed. Without the s2s policy, IKS cannot remove the cluster's association with the kms key, and the kms key cannot be deleted until this association is removed.

Steps to reproduce (including links and screen captures)

  1. Use tf to create an IKS cluster associated with a kms key
  2. Use tf to delete IKS cluster from step 1
  3. In UI/API/CLI, attempt to delete kms key from step 1. Fails with HTTP 409 "The key cannot be deleted because it's protecting a cloud resource that has a retention policy"
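A minimal configuration along the lines of the steps above might look like the following sketch. The resource names, flavor, zone, and variable wiring are my own illustrative assumptions, not taken from the affected module:

```hcl
# Hypothetical minimal reproduction (all names and values are illustrative).
resource "ibm_resource_instance" "key_protect" {
  name     = "repro-kp"
  service  = "kms"
  plan     = "tiered-pricing"
  location = "us-south"
}

resource "ibm_kms_key" "root_key" {
  instance_id  = ibm_resource_instance.key_protect.guid
  key_name     = "repro-root-key"
  standard_key = false
}

resource "ibm_container_vpc_cluster" "cluster" {
  name         = "repro-cluster"
  vpc_id       = var.vpc_id
  flavor       = "bx2.4x16"
  worker_count = 2

  # Associates the cluster with the kms root key from step 1.
  kms_config {
    instance_id = ibm_resource_instance.key_protect.guid
    crk_id      = ibm_kms_key.root_key.key_id
  }

  zones {
    subnet_id = var.subnet_id
    name      = "us-south-1"
  }
}
```

Running `terraform destroy` on a configuration like this, then attempting to delete the root key, would reproduce the HTTP 409 described in step 3.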

Anything else



@stephaniegalang stephaniegalang added the bug 🐞 Something isn't working label Mar 11, 2024
stephaniegalang (Author) commented:

@Ak-sky @ocofaigh can you help with this?

ryan-cradick commented:

Is the deletion of the service authorization policy required here? The policy may have existed prior to TF attempting to create it. Looking at a few logs, it will likely take at least 5 minutes from the cluster deletion request before the key association is removed.

ocofaigh (Member) commented Mar 11, 2024

@ryan-cradick We are actually explicitly creating a more locked-down s2s auth policy here, so it's not going to delete one that was auto-created by kube. We found we had to do this because kube only auto-creates the auth policy for cluster data encryption; for boot volume encryption, the policy needs to exist before cluster creation, hence we need to create it explicitly.
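An explicitly created, narrowly scoped s2s policy of the kind described above could be sketched as follows (the target service name, role, and instance reference are my assumptions, not copied from the module):

```hcl
# Sketch: an explicit s2s authorization policy scoped to a single
# Key Protect instance, rather than the broad policy kube auto-creates.
resource "ibm_iam_authorization_policy" "cluster_to_kms" {
  source_service_name         = "containers-kubernetes"
  target_service_name         = "kms"
  target_resource_instance_id = ibm_resource_instance.key_protect.guid
  roles                       = ["Reader"]
}
```

Because this policy is created before the cluster, it is available for boot volume encryption at cluster creation time; the bug in this issue is about it also needing to outlive the cluster at destroy time.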


tyao117 commented Mar 13, 2024

This is under the assumption that the cluster is a VPC cluster.

Some suggestions:

I'm throwing out these suggestions; feel free to counter with alternatives if needed.


tyao117 commented Mar 13, 2024

After talking to @ocofaigh about all the proposed fixes: a `depends_on` will be used on the VPC IKS cluster resource block.
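For reference, Terraform destroys a resource before the resources it depends on, so declaring the dependency on the cluster keeps the auth policy alive until the cluster deletion completes. A sketch of the idea (resource names are illustrative assumptions):

```hcl
resource "ibm_container_vpc_cluster" "cluster" {
  # ... cluster arguments ...

  # Terraform destroys dependents first, so on `terraform destroy` the
  # cluster is deleted before the authorization policy, leaving the
  # policy in place while IKS removes the kms key association.
  depends_on = [ibm_iam_authorization_policy.cluster_to_kms]
}
```

Note that `depends_on` alone only orders the destroy operations; it relies on the cluster deletion call not returning until IKS no longer needs the policy (or on IKS tolerating the short gap), per the timing discussed above.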

ocofaigh (Member) commented:

@stephaniegalang A fix has been included in https://github.com/terraform-ibm-modules/terraform-ibm-landing-zone/releases/tag/v5.19.3 if you want to try it out. If you hit any further issues, please feel free to re-open.
