Handle namespace deletion more gracefully in built-in controllers #84123
Conversation
Force-pushed from 524fc6a to cffe4b6.
Force-pushed from cffe4b6 to f7d6c67.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: smarterclayton. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Force-pushed from d619699 to ec9fdc4.
This PR may require API review. If so, when the changes are ready, complete the pre-review checklist and request an API review. Status of requested reviews is tracked in the API Review project.
…ting Clients should be able to identify when a namespace is being terminated and take special action such as backing off or giving up. Add a helper for getting the cause of an error and then add a special cause to the forbidden error that namespace lifecycle admission returns. We can't change the forbidden reason without potentially breaking older clients and so cause is the appropriate tool. Add `StatusCause` and `HasStatusCause` to the errors package to make checking for causes simpler. Add `NamespaceTerminatingCause` to the v1 API as a constant.
Avoid sending an event to the namespace that is being terminated, since it will be rejected.
In some scenarios the service account and token controllers can race with namespace deletion, causing a burst of errors as they attempt to recreate secrets being deleted. Instead, detect these errors and do not retry.
Instead of reporting an event or displaying an error, simply exit when the namespace is being terminated. This reduces the amount of controller churn on namespace shutdown. While we could technically exit the entire processing loop early for very large replica sets, we should wait for more evidence that is an issue before changing that logic substantially.
Instead of reporting an event or displaying an error, simply exit when the namespace is being terminated. This reduces the amount of controller churn on namespace shutdown. While we could technically exit the entire processing loop early for very large daemon sets, we should wait for more evidence that is an issue before changing that logic substantially.
Instead of reporting an event or displaying an error, simply exit when the namespace is being terminated. This reduces the amount of controller churn on namespace shutdown. While we could technically exit the entire processing loop early for very large jobs, we should wait for more evidence that is an issue before changing that logic substantially.
…sets Instead of reporting an event or displaying an error, simply exit when the namespace is being terminated. This reduces the amount of controller churn on namespace shutdown. Unlike other controllers, we drop the replica set create error very late (in the queue handleErr) in order to avoid changing the structure of the controller substantially.
Force-pushed from ec9fdc4 to bd92607.
/retest
/test pull-kubernetes-e2e-gce
/test pull-kubernetes-e2e-gce
/retest
/test pull-kubernetes-e2e-gce
/priority critical-urgent
/lgtm
/hold
for @tnozicka
```diff
@@ -213,7 +213,10 @@ func (c *ServiceAccountsController) syncNamespace(key string) error {
 			sa.Namespace = ns.Name

 			if _, err := c.client.CoreV1().ServiceAccounts(ns.Name).Create(&sa); err != nil && !apierrs.IsAlreadyExists(err) {
-				createFailures = append(createFailures, err)
+				// we can safely ignore terminating namespace errors
+				if !apierrs.HasStatusCause(err, v1.NamespaceTerminatingCause) {
+					createFailures = append(createFailures, err)
+				}
```
Nit: this could be placed next to `!apierrs.IsAlreadyExists(err)`
```go
// pod when the expectation expires.
return nil
if err != nil {
	if errors.HasStatusCause(err, v1.NamespaceTerminatingCause) {
```
Nit: not sure why you need two ifs if both return the same thing; one if with a comment covering both cases should be sufficient.
/lgtm
```diff
@@ -475,7 +475,7 @@ func (dc *DeploymentController) processNextWorkItem() bool {
 }

 func (dc *DeploymentController) handleErr(err error, key interface{}) {
-	if err == nil {
+	if err == nil || errors.HasStatusCause(err, v1.NamespaceTerminatingCause) {
```
This will work only for unwrapped errors, but we can make it better when we get Go 1.13.
I don't see the calls there wrapping the errors with context, so this should be good for now.
/hold cancel
Kubernetes workload controllers deal poorly with namespaces being terminated and often go into tight backoff loops during shutdown. Since namespace termination cannot be reversed, for the majority of controllers, when we detect this scenario we should eat / drop / exit the loop we are in and wait for more events.

A more sophisticated mechanism could remember that a namespace was being terminated and dequeue all events that impact that namespace, but that would also have to contend with races against a new namespace of the same name, and it is more invasive.

In general: detect the namespace terminating error and exit fast, avoiding spawning other events or errors (since event creation is itself another source of failure).
To make this possible, clients should be able to identify when a namespace is being terminated and take special action, such as backing off or giving up. Add a helper for getting the cause of an error, then add a special cause to the forbidden error that namespace lifecycle admission returns. We can't change the forbidden reason without potentially breaking older clients, so a cause is the appropriate tool.
This shows up in a lot of controllers during e2e runs. In theory this should take a ton of CPU pressure off the apiserver during e2e and speed up e2e runs.
/kind bug