Allow several controllers to watch the same resource #283
Comments
I would find this useful too. Right now in one of my controllers I monkey-patch the hardcoded annotation name. I would quite like to be able to set this in code rather than adding additional CLI flags, though.
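The monkey-patch mentioned above can be sketched roughly like this. A stand-in module is used instead of Kopf's real internals; the constant name `LAST_SEEN_ANNOTATION` and the idea of a patchable module-level constant are assumptions about Kopf's internal layout, which varies between versions:

```python
import types

# Stand-in for the Kopf-internal module that hardcodes the annotation name.
# The module and constant names are assumptions, not Kopf's public API.
lastseen = types.ModuleType("lastseen")
lastseen.LAST_SEEN_ANNOTATION = "kopf.zalando.org/last-handled-configuration"

# The monkey-patch: overwrite the constant before the operator starts, so that
# this controller writes its last-handled state under its own annotation key.
lastseen.LAST_SEEN_ANNOTATION = "team-a.kopf.zalando.org/last-handled-configuration"
```

The obvious drawback, as noted in the thread, is that this relies on private internals and has to be repeated (with a different key) in every controller that shares the resource.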
@Jc2k IMO this is still better as an 'external' thing (like a CLI flag or env var); otherwise you can't run the same code for several instances of the controller.
Related: #23
There will be another issue besides the annotations: Kopf also stores its handlers' statuses in the resource's status stanza. So, the suffix/prefix should also be applied to the status sections of Kopf, e.g. by storing them under a peering-specific key. As a little addition, it could be a good idea to move the last-handled-state from the annotations to the status as well. Besides, having this huge JSON structure in the annotations is sometimes annoying to see (originally, it was done so to mimic kubectl's last-applied-configuration annotation, which kubectl also uses for diff computation, basically for the same purpose as in Kopf). Both in the case of the status and the annotation, the old names should be supported at least for reading, so that the change is not breaking. Writing back to those old places can be done if it is easy to implement, to support a non-breaking roll-back of Kopf's versions (but only if the old place already exists there; ignore it if it is non-existent).
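To make the suggestion above concrete, here is a minimal sketch of how peering-prefixed storage keys could be derived for both the annotation and the status section. The function names and the `kopf-<peering>` status-key scheme are illustrative assumptions, not Kopf's actual implementation; only the default annotation name comes from the issue:

```python
from typing import Optional

PREFIX = "kopf.zalando.org"

def annotation_key(peering: Optional[str]) -> str:
    # Prefix the last-handled annotation with the peering name, unless default.
    if peering and peering != "default":
        return f"{peering}.{PREFIX}/last-handled-configuration"
    return f"{PREFIX}/last-handled-configuration"

def status_key(peering: Optional[str]) -> str:
    # Same idea for the status section where handler progress is kept.
    # "kopf-<peering>" is a hypothetical naming scheme, for illustration only.
    if peering and peering != "default":
        return f"kopf-{peering}"
    return "kopf"
```

With such a scheme, reading would still need to fall back to the unprefixed names for backward compatibility, as the comment above requests.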
This patch adds an -A / --annotate-with-peering flag to the `kopf run` command, also defaulting to the KOPF_RUN_ANNOTATE_WITH_PEERING env var. When set, the annotation key name used by Kopf to store its state is prepended with the peering object name used by this controller(s). In this way, several sets of Kopf-based controllers, each using their own peering object name, can watch the same resource while storing their state separately. Issue zalando-incubator#283
I wouldn't mind having this as well. I have controllers that can manage conflicts internally, but I want to use Kopf to watch for updates etc., since it's much more elegant than using the k8s client directly. Alternatively, if I could just hint to a handler that I didn't want it to annotate the object, but still have it break out the raw k8s event type (update, delete, create), that would still be nice. I notice that @pshchelo has some changes for this. Are they ready for a PR?
(This is a feature request, so forgive me for skipping the issue template.)
Currently it is not possible for several controllers to watch the same k8s resource, except with an `on.event` spy handler; but events are ephemeral in k8s, so we need a more robust solution. This is needed mostly to merely observe the state of some (cluster-shared) resource.
In our case this is several very stateful applications (each managed by a separate Kopf controller) that have to take some action on node status / metadata / annotation changes.
The problem boils down to the annotation that Kopf adds to the resource with the last-reacted-to state: the name of this annotation field is hardcoded (`kopf.zalando.org/last-handled-configuration`). Technically, we already have a run-time / configuration parameter that can distinguish different controllers: the peering object name (`kopf run -P ..`).
I propose to add this peering object name as a prefix to the annotation key that Kopf adds (in case it is not 'default'), like `<peering-name>.kopf.zalando.org/last-handled-configuration`. In this way, several sets of different controllers (each using a different peering object, which they almost always should do anyway) will be able to watch the same resource without stepping on each other's toes.
This however might pose a problem on upgrade; so if such a breaking change is to be avoided, we could add a separate CLI option to set this prefix name, or a CLI flag that tells Kopf to use the peering object name as the prefix for the annotation field name.