This repository has been archived by the owner on Sep 14, 2020. It is now read-only.

Settings (official and documented) #336

Merged
merged 11 commits on Mar 27, 2020
135 changes: 135 additions & 0 deletions docs/configuration.rst
@@ -0,0 +1,135 @@
=============
Configuration
=============

There are tools to configure some of kopf functionality, like asynchronous


"Tools" makes me think of things like cmdline programs. How about "It is possible to configure..."?

tasks behaviour and logging events.


Startup configuration
=====================

Every operator has its settings (even if there are more than one operator


grammar nit: I'd word it as "... even if there is more than one operator in the same process ..."

in the same processes, e.g. due to :doc:`embedding`). The settings affect
how the framework behaves in detail.

The settings can be modified in the startup handlers (see :doc:`startup`):

.. code-block:: python

    import kopf
    import logging

    @kopf.on.startup()
    def configure(settings: kopf.OperatorSettings, **_):
        settings.posting.level = logging.WARNING
        settings.watching.session_timeout = 1 * 60
        settings.watching.stream_timeout = 10 * 60

All the settings have reasonable defaults, so the configuration should be used
only for fine-tuning when and if necessary.

For more settings, see `kopf.OperatorSettings` and the :kwarg:`settings` kwarg.


Logging events
==============

``settings.posting`` controls which log messages are posted as
Kubernetes events. Use ``logging`` constants or integer values to set the level:
e.g., ``logging.WARNING``, ``logging.ERROR``, etc.
The default is ``logging.INFO``.

.. code-block:: python

    import logging
    import kopf

    @kopf.on.startup()
    def configure(settings: kopf.OperatorSettings, **_):
        settings.posting.level = logging.ERROR

Event-posting can be disabled completely (it is enabled by default):

.. code-block:: python

    import kopf

    @kopf.on.startup()
    def configure(settings: kopf.OperatorSettings, **_):
        settings.posting.enabled = False

.. note::

    These settings also affect `kopf.event` and related functions:
    `kopf.info`, `kopf.warn`, `kopf.exception`, etc. --
    even if they are called explicitly in the code.

    To avoid these settings affecting your code, post events
    directly with an API client library instead of the Kopf-provided toolkit.
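
For illustration, here is a hypothetical resource handler that posts events
explicitly (the resource kind and handler name are made up for this sketch).
With ``settings.posting.level = logging.ERROR`` as above, the ``kopf.info``
call would be filtered out, while ``kopf.exception`` would still be posted:

.. code-block:: python

    import kopf

    @kopf.on.create('zalando.org', 'v1', 'kopfexamples')
    def create_fn(body, **_):
        # Suppressed if settings.posting.level is above logging.INFO.
        kopf.info(body, reason='CreateStarted', message='Creation has started.')

        try:
            ...  # the actual creation logic
        except Exception as e:
            # Still posted even with settings.posting.level = logging.ERROR.
            kopf.exception(body, reason='CreateFailed', message=str(e))
            raise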


Synchronous handlers
====================

``settings.execution`` allows to set a number of synchronous workers used
and redefined the asyncio executor:


nit: "redefines", otherwise the sentence might first be parsed as "number of synchronous workers used and redefined".

Or even better, find a different word for "redefines" since that suggests the implementation of the executor is changed when I think it just gets reconfigured or maybe another object is instantiated with different parameters?

I would also change "a number" to "the number". With the former the sentence can be misunderstood as "some of the synchronous workers are somehow changed". (This is really grammar nitpicking. I expect most people who read this will know what it means as is.)


.. code-block:: python

    import kopf

    @kopf.on.startup()
    def configure(settings: kopf.OperatorSettings, **_):
        settings.execution.max_workers = 20


It is possible to replace the whole asyncio executor used
for synchronous handlers (see :doc:`async`).

Please note that handlers started in a previous executor will continue
and finish in their original executor. This includes the startup handler
itself. To avoid this, make the on-startup handler asynchronous:

.. code-block:: python

    import concurrent.futures
    import kopf

    @kopf.on.startup()
    async def configure(settings: kopf.OperatorSettings, **_):
        settings.execution.executor = concurrent.futures.ThreadPoolExecutor()


API timeouts
============

A few timeouts can be controlled when communicating with the Kubernetes API:

``settings.watching.session_timeout`` (seconds) is how long the session
with a watching request will exist before terminating from the **client** side.
The default is forever (``None``).

``settings.watching.stream_timeout`` (seconds) is how long the session


I would have never guessed that from the names. How about session_client_timeout and session_server_timeout?

with a watching request will exist before terminating from the **server** side.
The default is to let the server decide (``None``).

It makes no sense to set the client-side timeout shorter than the server-side
timeout, but this is left to the developers to decide.

The server-side timeouts are unpredictable: they can be 10 seconds or
10 minutes. Yet it feels wrong to assume any "good" values in a framework
(especially since it works without timeouts defined, just producing extra logs).

``settings.watching.retry_delay`` (seconds) is how long to sleep between
watching requests -- in order to prevent API flooding in case of errors
or disconnects. The default is 0.1 seconds (nearly instant, but without flooding).

.. code-block:: python

    import concurrent.futures
    import kopf

    @kopf.on.startup()
    def configure(settings: kopf.OperatorSettings, **_):
        settings.watching.stream_timeout = 10 * 60
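
A combined sketch, in case all three timeouts need tuning (the values below
are illustrative only, not recommendations):

.. code-block:: python

    import kopf

    @kopf.on.startup()
    def configure(settings: kopf.OperatorSettings, **_):
        settings.watching.session_timeout = 30 * 60  # client-side, seconds
        settings.watching.stream_timeout = 10 * 60   # server-side, seconds
        settings.watching.retry_delay = 1.0          # pause before reconnecting, seconds
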
37 changes: 0 additions & 37 deletions docs/configuring.rst

This file was deleted.

2 changes: 1 addition & 1 deletion docs/index.rst
@@ -43,7 +43,7 @@ Kopf: Kubernetes Operators Framework
shutdown
probing
authentication
configuring
configuration
peering

.. toctree::
16 changes: 16 additions & 0 deletions docs/kwargs.rst
@@ -38,6 +38,22 @@ in case of retries & errors -- i.e. of the first attempt.
in case of retries & errors -- i.e. since the first attempt.


.. kwarg:: settings

Operator configuration
======================

``settings`` is passed to activity handlers (but not to resource handlers).

It is an object with a predefined nested structure of containers with values,
which defines the operator's behaviour. See also: `kopf.OperatorSettings`.

It can be modified if needed (usually in the startup handlers). Every operator
(if there are more than one in the same process) has its own config.


nit: if there is more than one


See also: :doc:`configuration`.
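
A minimal sketch of an activity handler using this kwarg (the handler name
is arbitrary); it only tweaks the operator's own settings at startup:

.. code-block:: python

    import kopf

    @kopf.on.startup()
    def tune_settings(settings: kopf.OperatorSettings, **_):
        # Each operator in the same process gets its own settings object.
        settings.posting.enabled = True
        settings.execution.max_workers = 10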


Resource-related kwargs
=======================

9 changes: 6 additions & 3 deletions examples/09-testing/test_example_09.py
@@ -28,13 +28,16 @@ def obj_absent():
check=False, timeout=10, capture_output=True, shell=True)


def test_resource_lifecycle(mocker):
def test_resource_lifecycle():

# To prevent lengthy threads in the loop executor when the process exits.
mocker.patch('kopf.config.WatchersConfig.default_stream_timeout', 10)
settings = kopf.OperatorSettings()
settings.watching.stream_timeout = 10

# Run an operator and simulate some activity with the operated resource.
with kopf.testing.KopfRunner(['run', '--verbose', '--standalone', example_py], timeout=60) as runner:
with kopf.testing.KopfRunner(['run', '--verbose', '--standalone', example_py],
timeout=60, settings=settings) as runner:

subprocess.run(f"kubectl create -f {obj_yaml}",
shell=True, check=True, timeout=10, capture_output=True)
time.sleep(5) # give it some time to react
9 changes: 6 additions & 3 deletions examples/11-filtering-handlers/test_example_11.py
@@ -28,13 +28,16 @@ def obj_absent():
check=False, timeout=10, capture_output=True, shell=True)


def test_handler_filtering(mocker):
def test_handler_filtering():

# To prevent lengthy threads in the loop executor when the process exits.
mocker.patch('kopf.config.WatchersConfig.default_stream_timeout', 10)
settings = kopf.OperatorSettings()
settings.watching.stream_timeout = 10

# Run an operator and simulate some activity with the operated resource.
with kopf.testing.KopfRunner(['run', '--verbose', '--standalone', example_py]) as runner:
with kopf.testing.KopfRunner(['run', '--verbose', '--standalone', example_py],
settings=settings) as runner:

subprocess.run(f"kubectl create -f {obj_yaml}",
shell=True, check=True, timeout=10, capture_output=True)
time.sleep(5) # give it some time to react
18 changes: 12 additions & 6 deletions kopf/__init__.py
@@ -10,13 +10,15 @@
on, # as a separate name on the public namespace
)
from kopf.config import (
LOGLEVEL_INFO, # deprecated
LOGLEVEL_WARNING, # deprecated
LOGLEVEL_ERROR, # deprecated
LOGLEVEL_CRITICAL, # deprecated
EventsConfig, # deprecated
WorkersConfig, # deprecated
)
from kopf.engines.logging import (
configure,
LOGLEVEL_INFO,
LOGLEVEL_WARNING,
LOGLEVEL_ERROR,
LOGLEVEL_CRITICAL,
EventsConfig,
WorkersConfig,
)
from kopf.engines.posting import (
event,
@@ -62,6 +64,9 @@
build_object_reference,
build_owner_reference,
)
from kopf.structs.configuration import (
OperatorSettings,
)
from kopf.structs.credentials import (
LoginError,
ConnectionInfo,
@@ -121,4 +126,5 @@
'get_default_registry',
'set_default_registry',
'PRESENT', 'ABSENT',
'OperatorSettings',
]
12 changes: 10 additions & 2 deletions kopf/cli.py
@@ -5,9 +5,11 @@

import click

from kopf import config
from kopf.engines import logging
from kopf.engines import peering
from kopf.reactor import registries
from kopf.reactor import running
from kopf.structs import configuration
from kopf.structs import credentials
from kopf.structs import primitives
from kopf.utilities import loaders
@@ -19,6 +21,8 @@ class CLIControls:
ready_flag: Optional[primitives.Flag] = None
stop_flag: Optional[primitives.Flag] = None
vault: Optional[credentials.Vault] = None
registry: Optional[registries.OperatorRegistry] = None
settings: Optional[configuration.OperatorSettings] = None


def logging_options(fn: Callable[..., Any]) -> Callable[..., Any]:
@@ -28,7 +32,7 @@ def logging_options(fn: Callable[..., Any]) -> Callable[..., Any]:
@click.option('-q', '--quiet', is_flag=True)
@functools.wraps(fn) # to preserve other opts/args
def wrapper(verbose: bool, quiet: bool, debug: bool, *args: Any, **kwargs: Any) -> Any:
config.configure(debug=debug, verbose=verbose, quiet=quiet)
logging.configure(debug=debug, verbose=verbose, quiet=quiet)
return fn(*args, **kwargs)

return wrapper
@@ -64,6 +68,8 @@ def run(
liveness_endpoint: Optional[str],
) -> None:
""" Start an operator process and handle all the requests. """
if __controls.registry is not None:
registries.set_default_registry(__controls.registry)
loaders.preload(
paths=paths,
modules=modules,
@@ -74,6 +80,8 @@
priority=priority,
peering_name=peering_name,
liveness_endpoint=liveness_endpoint,
registry=__controls.registry,
settings=__controls.settings,
stop_flag=__controls.stop_flag,
ready_flag=__controls.ready_flag,
vault=__controls.vault,