This repository has been archived by the owner on Sep 14, 2020. It is now read-only.
Run daemons only as long as they match the filtering criteria #342
What do these changes do?
Run daemons only as long as they match the filtering criteria, with the daemons' filters continuously re-evaluated. A daemon is stopped as soon as its resource stops matching the criteria, and re-spawned as soon as it matches again (any number of times).
Description
While implementing an operator for the EphemeralVolumeClaim resource (which is also Kopf's tutorial) with Kopf 0.27rc1, it became clear that the newly introduced daemons (#330) have ambiguous behaviour when combined with filters: they were spawned only on resource creation or operator startup, and were never re-evaluated, even when the criteria changed or the resource stopped matching them.
This problem did not exist for the regular short-running handlers, as they are selected anew each time a change/event happens and never live for long.
This PR gives daemons & timers with filters a clear and consistent behaviour:
Semantically, the daemon's filters define when the daemon should be running on a continuous basis, not only whether it should be spawned on creation/restart (and be ignored afterwards).
The spawning/stopping can happen due to either resource changes or criteria changes (though it is triggered only by resource changes/events).
For example, consider an operator:
Once an example object is created (with `spec.field == "value"`), the daemon is instantly spawned. Then, we can modify the object so that it mismatches the criteria (or we could modify the criteria and trigger an event on the resource):
kubectl patch -f examples/obj.yaml --type merge -p '{"spec": {"field": "other-value"}}'
The daemon will be stopped, as it no longer matches the criteria.
Then, we can revert the change:
kubectl patch -f examples/obj.yaml --type merge -p '{"spec": {"field": "value-123"}}'
The daemon will be spawned again, because it matches the criteria again.
And so on.
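The behaviour in the walkthrough above can be modelled with a purely illustrative sketch (this is not Kopf's actual internals): every watch-event re-evaluates the filters, and the daemon is spawned or stopped accordingly, any number of times:

```python
# A toy model of continuous filter evaluation (not Kopf's real code):
# each resource event re-checks the criteria and spawns/stops the daemon.

running: set = set()  # ids of the currently running daemons

def matches(body: dict) -> bool:
    # The same criterion as in the walkthrough: the field starts with "value".
    return str(body.get('field', '')).startswith('value')

def on_event(daemon_id: str, body: dict) -> None:
    if matches(body):
        running.add(daemon_id)      # spawn if not running yet
    else:
        running.discard(daemon_id)  # stop if currently running

on_event('d1', {'field': 'value'})        # created matching -> spawned
assert 'd1' in running
on_event('d1', {'field': 'other-value'})  # stops matching -> stopped
assert 'd1' not in running
on_event('d1', {'field': 'value-123'})    # matches again -> re-spawned
assert 'd1' in running
```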
Please also notice how the finalizer is added and removed: the resource is blocked from deletion as long as any daemons are running, and freed for deletion when none are. (This was part of the original implementation, now adjusted to fit this highly dynamic filtering.)
A little note: once the daemon exits of its own accord, i.e. without being terminated by the framework, this is considered an intentional termination, and the daemon will never be spawned again within the current operator process.
For cross-restart prevention, there is no dedicated syntax feature yet, but a simple trick achieves it in 2 extra lines of code (also documented in this PR).
Issues/PRs
Type of changes
Checklist
CONTRIBUTORS.txt