Implement non-caching, per-kustomization GC-client/statusPoller for cross-cluster kubeconfigs #135
Conversation
I don't think we should be doing any caching, re-creating the KubeConfig for each reconciliation is 💯
Yes, I think we should address this when we refactor the Impersonation.
@stealthybox can you please confirm that this works with CAPI (GC+HealthChecking)? If so, I would merge this without e2e tests and figure out later how to spin up a CAPI Kind provider in GitHub Actions. I want to get this out so that other people can test this with other CAPI providers. Before releasing this, we need to add a section to the API docs that explains how the KubeConfig field works and how it can be used from a CAPI management cluster.
(branch updated: 792ca91 → 67a06ff, then 0dc660a → e33e7a4)
I've verified that Apply, Prune/GC, HealthCheck, and Delete are all working reliably with CAPI. I've added some rather verbose docs for these fields, inline with the API, along with a new section to the documentation. The previous commit failed for some reason related to the CRD not being installed -- hopefully this one is green.
(branch updated: 38bf551 → dcb7c76, then dcb7c76 → ceb439d)
LGTM
Thanks @stealthybox 🏅
Fixes #127
First shot at this after familiarizing myself with all of the clientcmd and restclient types/methods.
Pending extensive testing w/ CAPI.
Clients are re-created from the KubeConfig SecretRef for each reconciliation of a particular Kustomization.
(Kubeconfigs such as the ones used for CAPA-managed EKS clusters are regularly refreshed behind the scenes.)
I chose not to create a cache for these clients since they only survive a single reconciliation.
We could instead maintain a map of NamespacedNames to restClients in the KustomizationReconciler.
This might be worthwhile and fairly simple -- let me know what you think.
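For context, here is a rough sketch of what the non-caching, per-reconciliation path could look like. This is an illustration, not the exact code in this PR: the helper name, the `value` Secret data key, and the library signatures (which assume controller-runtime/cli-utils versions from around this time) are all assumptions.

```go
// Sketch: build a throwaway client and statusPoller from the kubeconfig stored
// in a Secret next to the Kustomization. Nothing here is cached; the returned
// client/poller live only for a single reconciliation.
package remote

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
)

// NewRemoteClientAndPoller reads the kubeconfig Secret with the reconciler's own
// client (kubeReader) and returns a fresh client and statusPoller for the remote
// cluster. The "value" data key is an assumed convention for this sketch.
func NewRemoteClientAndPoller(ctx context.Context, kubeReader client.Client, secretName types.NamespacedName) (client.Client, *polling.StatusPoller, error) {
	var secret corev1.Secret
	if err := kubeReader.Get(ctx, secretName, &secret); err != nil {
		return nil, nil, fmt.Errorf("unable to read KubeConfig secret %s: %w", secretName, err)
	}

	kubeConfig, ok := secret.Data["value"]
	if !ok {
		return nil, nil, fmt.Errorf("KubeConfig secret %s has no 'value' key", secretName)
	}

	// clientcmd turns the raw kubeconfig bytes into a *rest.Config.
	restConfig, err := clientcmd.RESTConfigFromKubeConfig(kubeConfig)
	if err != nil {
		return nil, nil, err
	}

	restMapper, err := apiutil.NewDynamicRESTMapper(restConfig)
	if err != nil {
		return nil, nil, err
	}

	// Non-caching client for the remote cluster, used for Apply/GC/Delete.
	remoteClient, err := client.New(restConfig, client.Options{Mapper: restMapper})
	if err != nil {
		return nil, nil, err
	}

	// StatusPoller drives the health checks against the same remote cluster.
	statusPoller := polling.NewStatusPoller(remoteClient, restMapper)
	return remoteClient, statusPoller, nil
}
```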
Currently, when no KubeConfig is specified, we return the Reconciler's general client and statusPoller.
I expect to change this for security reasons in the future Impersonation patches, since HealthChecking
could be a form of cross-tenant information disclosure, and it could be possible to trick the Garbage Collector
into deleting things you'd otherwise not have access to.
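That fallback could look roughly like the sketch below; `clientsFor`, the `r.Client`/`r.StatusPoller` fields, and the `Spec.KubeConfig.SecretRef.Name` field path are illustrative assumptions rather than the PR's actual code.

```go
// Sketch of the current fallback: with no KubeConfig set, reuse the reconciler's
// own in-cluster client and statusPoller; otherwise build a short-lived pair for
// the remote cluster via the helper sketched above.
func (r *KustomizationReconciler) clientsFor(ctx context.Context, kustomization kustomizev1.Kustomization) (client.Client, *polling.StatusPoller, error) {
	if kustomization.Spec.KubeConfig == nil {
		// Default path: the controller's general client, which is the behavior
		// that may change in the future Impersonation patches.
		return r.Client, r.StatusPoller, nil
	}
	secretName := types.NamespacedName{
		Namespace: kustomization.GetNamespace(),
		Name:      kustomization.Spec.KubeConfig.SecretRef.Name,
	}
	return NewRemoteClientAndPoller(ctx, r.Client, secretName)
}
```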
I wonder how we can e2e test this?