This repository has been archived by the owner on Sep 14, 2020. It is now read-only.

Run daemons only as long as they match the filtering criteria #342

Merged
merged 2 commits into zalando-incubator:master on Apr 7, 2020

Conversation

nolar
Contributor

@nolar nolar commented Apr 6, 2020

What do these changes do?

Run daemons only as long as they match the filtering criteria, making the daemon's filters continuously evaluated. Stop and re-spawn the daemons as soon as they stop or start matching the criteria (any number of times).

Description

While implementing an operator for the EphemeralVolumeClaim resource (which is also Kopf's tutorial) with Kopf 0.27rc1, it became clear that the newly introduced daemons (#330) have ambiguous behaviour when combined with filters: they were spawned only on resource creation or operator startup, and never re-evaluated, even when the criteria change or the resource stops matching them.

This problem did not exist with the regular short-run handlers: they were selected anew each time the changes/events happened and never existed for long.

This PR brings daemons & timers with filters to a clear and consistent behaviour:

  • Once the resource stops matching the daemon's criteria, the daemon is stopped too.
  • Once the resource starts matching the daemon's criteria again, the daemon is started again too.

Semantically, the daemon's filters define when the daemon should be running on a continuous basis, not only when it should be spawned on creation/restart (with the filters ignored afterwards).

The spawning/stopping can happen due to either resource changes or criteria changes (though it is only triggered by the resource's watch-events).


For example, consider an operator:

import asyncio
import kopf

def should_daemon_run(spec, **_):
    # The filtering criterion: re-evaluated on every watch-event of the resource.
    return spec.get('field', '').startswith('value')

@kopf.daemon('zalando.org', 'v1', 'kopfexamples', when=should_daemon_run, cancellation_timeout=1.0)
async def my_daemon(logger, **_):
    # Runs only while the resource matches the criterion; stopped/re-spawned otherwise.
    while True:
        await asyncio.sleep(5.0)
        logger.info("==> ping")

Once an example object is created (with spec.field == "value"), the daemon is spawned instantly:

[2020-04-06 21:53:19,192] kopf.objects         [DEBUG   ] [default/kopf-example-1] Adding the finalizer, thus preventing the actual deletion.
[2020-04-06 21:53:19,194] kopf.objects         [DEBUG   ] [default/kopf-example-1] Patching with: {'metadata': {'finalizers': ['kopf.zalando.org/KopfFinalizerMarker']}}
[2020-04-06 21:53:19,199] kopf.objects         [DEBUG   ] [default/kopf-example-1] Daemon 'my_daemon' is invoked.
[2020-04-06 21:53:19,319] kopf.objects         [DEBUG   ] [default/kopf-example-1] Handling cycle is finished, waiting for new changes since now.
[2020-04-06 21:53:24,205] kopf.objects         [INFO    ] [default/kopf-example-1] ==> ping
[2020-04-06 21:53:29,210] kopf.objects         [INFO    ] [default/kopf-example-1] ==> ping

Then, we can modify the object so that it no longer matches the criteria (or we could modify the criteria and trigger an event on the resource):

kubectl patch -f examples/obj.yaml --type merge -p '{"spec": {"field": "other-value"}}'

The daemon is stopped, as the resource no longer matches the criteria:

[2020-04-06 21:54:09,241] kopf.objects         [INFO    ] [default/kopf-example-1] ==> ping
[2020-04-06 21:54:12,023] kopf.objects         [DEBUG   ] [default/kopf-example-1] Removing the finalizer, as there are no handlers requiring it.
[2020-04-06 21:54:12,024] kopf.objects         [DEBUG   ] [default/kopf-example-1] Daemon 'my_daemon' is signalled to exit by force.
[2020-04-06 21:54:12,024] kopf.objects         [DEBUG   ] [default/kopf-example-1] Patching with: {'metadata': {'finalizers': []}}
[2020-04-06 21:54:12,027] kopf.objects         [WARNING ] [default/kopf-example-1] Daemon 'my_daemon' is cancelled. Will escalate.
[2020-04-06 21:54:12,038] kopf.objects         [DEBUG   ] [default/kopf-example-1] Sleeping was skipped because of the patch, 1.0 seconds left.
[2020-04-06 21:54:12,145] kopf.objects         [DEBUG   ] [default/kopf-example-1] Handling cycle is finished, waiting for new changes since now.

Then, we can revert the change:

kubectl patch -f examples/obj.yaml --type merge -p '{"spec": {"field": "value-123"}}'

The daemon will be spawned again, because it matches the criteria again:

[2020-04-06 21:55:05,378] kopf.objects         [DEBUG   ] [default/kopf-example-1] Adding the finalizer, thus preventing the actual deletion.
[2020-04-06 21:55:05,379] kopf.objects         [DEBUG   ] [default/kopf-example-1] Patching with: {'metadata': {'finalizers': ['kopf.zalando.org/KopfFinalizerMarker']}}
[2020-04-06 21:55:05,381] kopf.objects         [DEBUG   ] [default/kopf-example-1] Daemon 'my_daemon' is invoked.
[2020-04-06 21:55:05,503] kopf.objects         [DEBUG   ] [default/kopf-example-1] Handling cycle is finished, waiting for new changes since now.
[2020-04-06 21:55:10,382] kopf.objects         [INFO    ] [default/kopf-example-1] ==> ping
[2020-04-06 21:55:15,387] kopf.objects         [INFO    ] [default/kopf-example-1] ==> ping

And so on.
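
The same continuous evaluation applies to timers. A minimal sketch (illustrative names, mirroring the daemon example above rather than quoting this PR's code): the timer ticks only while the resource matches the criteria, and goes silent once it stops matching.

import kopf

def should_timer_run(spec, **_):
    # Same kind of criterion as for the daemon: re-evaluated on every watch-event.
    return spec.get('field', '').startswith('value')

@kopf.timer('zalando.org', 'v1', 'kopfexamples', interval=5.0, when=should_timer_run)
async def my_timer(logger, **_):
    logger.info("==> tick")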

Please also notice how the finalizer is added and removed: the resource stays blocked from deletion as long as any daemons are running, and is freed for deletion when none are running (this was part of the original implementation, now adjusted to fit this highly dynamic filtering).


A little note: once the daemon exits of its own accord, i.e. without being terminated by the framework, this is considered an intentional termination, and the daemon will never be spawned again within the current operator process.

There is currently no dedicated syntax for preventing the re-spawning across operator restarts, but a simple trick achieves it in 2 extra lines of code (also documented in this PR).
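
One possible shape of that trick (an illustrative sketch, not the exact snippet from this PR's docs; the annotation name is made up, and it assumes the patch kwarg is applied when the daemon exits, as with regular handlers): the daemon marks the resource before exiting voluntarily, and the filter skips resources carrying the mark, so the daemon is not re-spawned even after an operator restart.

import kopf

def not_done_yet(annotations, **_):
    # Skip resources already marked as processed by a previous daemon run.
    return annotations.get('my-op.example.com/done') != 'yes'

@kopf.daemon('zalando.org', 'v1', 'kopfexamples', when=not_done_yet)
async def run_once_ever(patch, **_):
    ...  # the actual work, done once per resource
    # The "2 extra lines": mark the resource so that the filter excludes it from now on.
    patch['metadata'] = {'annotations': {'my-op.example.com/done': 'yes'}}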

Issues/PRs

Issues: #19

Related: #330 #150 #271 #317 #122

Type of changes

  • New feature (non-breaking change which adds functionality)

Checklist

  • The code addresses only the mentioned problem, and this problem only
  • I think the code is well written
  • Unit tests for the changes exist
  • Documentation reflects the changes
  • If you provide code modification, please add yourself to CONTRIBUTORS.txt

nolar added 2 commits April 6, 2020 19:43
When the resource changes so that it does not match the filters,
stop the daemon.

When the resource changes so that it starts matching the filters,
start it (either for the first time or again if previously stopped).

This also covers the case when the criteria themselves are changed, but this is only applied on the next watch-event of the resource (i.e. whenever any activity is triggered at all).

Semantically, the daemon's filters define when the daemon should accompany the resource on a continuous basis, not only when it should be spawned initially: i.e., neither the criteria nor the resource are assumed to be constant.
@zincr

zincr bot commented Apr 6, 2020

🤖 zincr found 0 problems, 0 warnings

✅ Large Commits
✅ Approvals
✅ Specification
✅ Dependency Licensing

@nolar nolar added the enhancement New feature or request label Apr 6, 2020
@nolar nolar requested a review from mnarodovitch April 6, 2020 20:09
@nolar nolar merged commit 43fc69d into zalando-incubator:master Apr 7, 2020
@nolar nolar deleted the filtered-daemons branch April 7, 2020 12:06
@nolar nolar added this to the 0.27 milestone May 11, 2020