RESTMapper doesn't update to reflect new CRDs #321
Comments
oh, meant to link to our workaround: openshift/cluster-network-operator#95
Yeah, I'd been meaning to port some of the code I wrote over to deal with resources added later. We'd probably want rate-limiting on the cache invalidation, but otherwise I'd be open to making that the default implementation. Watchable discovery would also be nice at some point :-/. Thus far, the problem is generally mitigated by the fact that your pod can just fail and restart, but there are other reasons why updating discovery information is nice. Feel free to send a PR.
/kind feature
We came across this problem recently and for the moment have created a RestMapper using the
The idea being that if any CRDs are loaded in after the first discovery, the
I was thinking something like a lazy discovery REST mapper, with a wrapper that invalidates on cache misses, but in a rate-limited way. You can use the DeferredDiscoveryRESTMapper with some custom wrapper around it to do the invalidation and rate limiting (since the DeferredDiscoveryRESTMapper won't actually invalidate on its own).
see: kubernetes-sigs/controller-runtime#321 Signed-off-by: Artiom Diomin <[email protected]>
* use dynamic client
* Re-initialize dynamic client to drop CRD cache (see: kubernetes-sigs/controller-runtime#321)
* Small review tweaks

Signed-off-by: Artiom Diomin <[email protected]>
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/help-wanted
I ran into a similar issue when starting a controller with a "Kind" Source that I knew was installed after the Manager client was created. I thought it was going to be a race condition, since it seems to be pulling from the cache (controller-runtime/pkg/source/source.go, in Start):
Looks like @danwinship's workaround is working for me; seems straightforward enough.
* Delete custom cache setup, no longer necessary
* Consolidate client usage and use dynamic discovery (see kubernetes-sigs/controller-runtime#321 — hat tip to @danwinship for openshift/cluster-network-operator#95). Fixes [bz1711373](https://bugzilla.redhat.com/show_bug.cgi?id=1711373).
* Plumb the cache through for future use
…d. (#663)

* Use DynamicRESTMapper to reload REST mappings when types are not found. We see some flakes because the default DiscoveryRESTMapper caches the REST mapping and never reloads it. This causes some races if types are not available when a client is initialized. Fixes #650. I copied this fix from the PR/issue here: kubernetes-sigs/controller-runtime#321
@DirectXMan12 I've just hit this issue in Cluster API. I added a new cached (and rate-limited) RESTMapper here: kubernetes-sigs/cluster-api@ee96c31. If you feel like that's a reasonable implementation, I'm happy to PR against controller-runtime as well.
We're just about ready to get #554 merged; that should solve your problem. Can you try that?
@DirectXMan12 Thanks, after a quick glance it looks very similar; it should work as well. I'm happy to switch once it's merged in controller-runtime.
* upstream issue kubernetes-sigs/controller-runtime#321 Signed-off-by: Artiom Diomin <[email protected]>
* Upgrade controller-runtime to v0.3.0
  * Upstream k8s libs are updated to the kubernetes-1.15.4 release tag
  * Fixed many API breaking changes
  * Logic regarding checking the version of CCM is removed
* Whitelist ICS license for dependencies
* Fix broken API in e2e tests
* Fix linter
* Removed HackIssue321InitDynamicClient hack (upstream issue kubernetes-sigs/controller-runtime#321)
* Return back accidentally removed initialization of dynamic client
* Revert "Removed HackIssue321InitDynamicClient hack" (this reverts commit 73710af)
* Better comment on the reason for pki.go's existence

Signed-off-by: Artiom Diomin <[email protected]>
Currently, if you use a controller-runtime Client to first create a CRD, and then create an instance of that CRD, you get an error along the lines of: no matches for kind "NetNamespace" in version "network.openshift.io/v1". This is because the client's RESTMapper is initialized at startup and never updated with information about newly-available resource kinds, so it can only work with the kinds that existed before it was created.

Our workaround for now is a hacked-up wrapper RESTMapper that watches for that error and, when it sees it, reloads its cached data and tries again. If you assume that that error will never occur other than in this case, then that seems like a plausible way to solve the problem. I could rework this into a PR for controller-runtime if you want. The only question would be whether it should replace the existing default RESTMapper, or whether it should be an alternative MapperProvider function, with users having to pick whether they want static vs. dynamic.

Or OTOH maybe the real fix should be at a lower level? I came across DeferredDiscoveryRESTMapper, which is similar to my wrapper but doesn't auto-invalidate on cache miss. We could make it do that, and then change controller-runtime to use it?
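The "watch for that error, reload, try again" workaround described above can be sketched in miniature as follows. Note that `dynamicMapper`, `discover`, and `isNoMatch` are hypothetical stand-ins for illustration: a real wrapper would implement `meta.RESTMapper`, detect the error with apimachinery's no-match check rather than string matching, and rediscover via a discovery client.

```go
package main

import (
	"fmt"
	"strings"
)

// discover stands in for a discovery pass that returns the kinds the API
// server currently knows about (kind -> resource).
type discover func() map[string]string

// dynamicMapper reloads its kind table whenever a lookup fails with a
// "no matches for kind" error, then retries once.
type dynamicMapper struct {
	rediscover discover
	kinds      map[string]string
}

// isNoMatch is a crude stand-in for detecting the no-match error.
func isNoMatch(err error) bool {
	return err != nil && strings.Contains(err.Error(), "no matches for kind")
}

func (d *dynamicMapper) lookup(kind string) (string, error) {
	if r, ok := d.kinds[kind]; ok {
		return r, nil
	}
	return "", fmt.Errorf("no matches for kind %q", kind)
}

func (d *dynamicMapper) RESTMapping(kind string) (string, error) {
	r, err := d.lookup(kind)
	if isNoMatch(err) {
		d.kinds = d.rediscover() // reload cached data and try again
		return d.lookup(kind)
	}
	return r, err
}

func main() {
	// server simulates the API server's known kinds.
	server := map[string]string{"Pod": "pods"}
	m := &dynamicMapper{rediscover: func() map[string]string {
		out := map[string]string{}
		for k, v := range server {
			out[k] = v
		}
		return out
	}}
	m.kinds = m.rediscover() // initial discovery, before the CRD exists

	server["NetNamespace"] = "netnamespaces" // CRD created after startup
	r, err := m.RESTMapping("NetNamespace")
	fmt.Println(r, err) // netnamespaces <nil>
}
```

As the comment above notes, this assumes the no-match error only ever means "my cache is stale"; if the kind genuinely doesn't exist, every lookup pays for a rediscovery, which is why the rate-limiting discussed earlier in the thread matters.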