
Consider allowing serving controllers to be run for a single namespace #2107

Closed
pmorie opened this issue Sep 28, 2018 · 14 comments
Labels
kind/feature, lifecycle/rotten

Comments

@pmorie
Member

pmorie commented Sep 28, 2018

We should consider allowing the knative serving controllers to run constrained to a single namespace.

Potential benefits of doing this:

  • Controllers may operate with reduced RBAC permissions (scoped to a specific namespace rather than to the whole cluster)
  • The impact of serving controller failures and bugs may be isolated to individual namespaces instead of affecting the whole cluster
  • Users can update Knative at their own pace
  • May simplify meeting performance goals if individual controllers have a smaller scope

Expected Behavior

  • I can run Knative Serving (and, by extension, Build) controllers in my namespace, operating only on my namespace

Actual Behavior

  • No way to do this now

I'm leaving the area label off because I don't know of an area this fits nicely into.

@knative-prow-robot added the kind/feature label Sep 28, 2018
@jcrossley3
Contributor

There are definitely some benefits to this, but from the FaaS developer's perspective we shift the cost of running the controller onto the user, i.e. they have to deploy the thing that runs their functions before they can run them. We also need to figure out whether that user is expected to install their own Istio and Knative Eventing.

@pmorie
Member Author

pmorie commented Oct 3, 2018

Chatted with @mattmoor about this and he mentioned another closely related use-case, which is allowing all Knative controllers to run in a single (arbitrary) namespace. That goal likely overlaps heavily with running controllers in an arbitrary namespace, scoped to work on only that namespace.

I originally thought Serving was a good choice for opening this issue, but after some thought the best place for some initial exploration seems to be Build, since it is a relatively simple, self-contained piece. I will open an issue there for notes about that exploration.

@zrss
Contributor

zrss commented Oct 8, 2018

cc

@zrss
Contributor

zrss commented Oct 9, 2018

Hi everyone (@pmorie, @jcrossley3, @mattmoor, @lichuqiang, and so on ...), do we have any progress on this topic? I have a use case with a k8s-on-k8s architecture (child clusters running on a parent cluster), where each child k8s cluster lives in a standalone namespace of the parent.

There is an easy way to implement this through Kubernetes Deployment namespace env injection, as my previous issue mentions (Decoupling serving namespace instead of fixed knative-serving):

```yaml
env:
# Downward API: inject the Pod's own namespace into the container.
- name: SERVING_POD_NAMESPACE
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.namespace
```
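
As a rough sketch (not the actual Knative implementation), a controller could then resolve its system namespace from that env var, falling back to today's hard-coded default:

```go
// Rough sketch, not the actual Knative code: resolve the controller's
// namespace from the env var injected above, falling back to the current
// fixed default.
package sample

import "os"

// controllerNamespace returns the namespace this controller should operate in.
func controllerNamespace() string {
	if ns := os.Getenv("SERVING_POD_NAMESPACE"); ns != "" {
		return ns
	}
	return "knative-serving" // today's fixed default
}
```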

@greghaynes
Contributor

I think this is a use case we need to support and is a great idea. A few questions:

  • Given the extra cost, do we see it as necessary to also support the "shared control plane" use case, i.e. managing N namespaces from a single controller namespace? In the API meeting I got the feeling that we do, but I wanted to make sure that gets called out.
  • Do we see "Users can update Knative at their own pace" as a primary goal here? To do this there are the non-namespaced-resource issues that have been mentioned in our API WG meeting, but I also think we will need to clearly separate user and system components at installation time. As a user I can't just run our existing release yaml/etc. from a namespace-restricted account, because cluster-wide resources are part of it. This also makes me think we'll want upgrade tests for this (testing version skew between user and system components) if we aim to support it.

@zrss
Contributor

zrss commented Oct 11, 2018

support the "shared control plane" use case as well? i.e. manage N namespaces from a single controller namespace

vote for this point too ~

@pmorie
Member Author

pmorie commented Oct 17, 2018

@jcrossley3 and I wrote down some of the initial entanglements we encountered when doing discovery for this concern on build:

knative/build#391 (comment)

@pmorie
Member Author

pmorie commented Oct 17, 2018

Given the extra cost, do we see it as necessary to also support the "shared control plane" use case, i.e. managing N namespaces from a single controller namespace? In the API meeting I got the feeling that we do, but I wanted to make sure that gets called out.

It sounded to me like that may be a use-case in the long term. We at Red Hat are more interested at this point in the use-case of controllers scoped to a single namespace. IMO this is a good tactical goal to work towards, and more actionable than controllers managing a set of namespaces. Allowing the controllers to run scoped to a single namespace will provide a lot of discovery for touchpoints with areas like installation, dev experience, RBAC, and testing, which will be plenty to think about without the additional complication of supporting the 'set of namespaces' use-case.

I suspect that the 'set of namespaces' use-case will have some gaps that need to be closed around RBAC, which may require changes in Kubernetes. For example, if a controller determines the namespaces to act on via label selection, how do we ensure that such a controller operates with dynamically determined least privilege? It is possible for a single identity to have RoleBindings in multiple namespaces, but each namespace must be explicitly configured with a distinct RoleBinding, afaik. In the set-of-namespaces use-case, to avoid granting the controller cluster-level permission on the resources it needs to work with, it would be best to be able to use a selector-type mechanism to grant permission on namespaces as well.
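
To make "a distinct RoleBinding per namespace" concrete, here is a rough client-go sketch (purely illustrative, not from the Knative release manifests; the ServiceAccount and ClusterRole names are hypothetical):

```go
// Rough sketch: bind one controller identity into each namespace it should
// manage, one RoleBinding per namespace. All names below are hypothetical.
package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Namespaces the controller should manage; in practice these might be
	// discovered via a label selector, which is exactly where the RBAC gap is.
	namespaces := []string{"team-a", "team-b"}

	for _, ns := range namespaces {
		rb := &rbacv1.RoleBinding{
			ObjectMeta: metav1.ObjectMeta{Name: "knative-serving-controller", Namespace: ns},
			Subjects: []rbacv1.Subject{{
				Kind:      rbacv1.ServiceAccountKind,
				Name:      "controller",      // hypothetical ServiceAccount name
				Namespace: "knative-serving", // namespace where the controller runs
			}},
			// Referencing a ClusterRole from a RoleBinding grants its rules
			// only within this RoleBinding's namespace.
			RoleRef: rbacv1.RoleRef{
				APIGroup: rbacv1.GroupName,
				Kind:     "ClusterRole",
				Name:     "knative-serving-namespaced-admin", // hypothetical ClusterRole
			},
		}
		if _, err := client.RbacV1().RoleBindings(ns).Create(context.TODO(), rb, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("bound controller identity in namespace", ns)
	}
}
```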

cc @jcrossley3

@philwinder

philwinder commented Jul 30, 2019

I would like to add my use cases to this feature request for posterity, originally in #4959.

  1. In some enterprise setups, RBAC implementations prevent "application teams" from accessing the cluster scope (among other things). Hence, we should allow Knative to work in single-namespace conditions, using Roles rather than ClusterRoles.

  2. Some applications require different Knative control planes for various reasons. It might be that there are just configuration differences, or maybe there is a requirement for better segmentation, or perhaps this is a multi-region k8s cluster and we want a different control plane for each region. Note that I'm using the term control plane to denote the Serving components. In other words, we should allow multiple Knative instances in a single cluster.

A similar issue for enabling this in istio is here: istio/istio#11977

Thanks.

@knative-housekeeping-robot

Issues go stale after 90 days of inactivity.
Mark the issue as fresh by adding the comment /remove-lifecycle stale.
Stale issues rot after an additional 30 days of inactivity and eventually close.
If this issue is safe to close now please do so by adding the comment /close.

Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra.

/lifecycle stale

@knative-prow-robot added the lifecycle/stale label Dec 26, 2019
@knative-housekeeping-robot

Stale issues rot after 30 days of inactivity.
Mark the issue as fresh by adding the comment /remove-lifecycle rotten.
Rotten issues close after an additional 30 days of inactivity.
If this issue is safe to close now please do so by adding the comment /close.

Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra.

/lifecycle rotten

@knative-prow-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 25, 2020
@mattmoor
Member

/lifecycle rotten

I think that we are pretty close here. Most Knative components can now run in arbitrary namespaces because of the system.Namespace() library/convention we have adopted everywhere. I haven't hit a single problem with running in any namespace with github.com/mattmoor/mink, so I think that aspect of the problem is solved.

We should also have a way to tell the injected informers to scope themselves to a single namespace, so I think the main things we are missing now are a flag to plumb that through, and testing to tell us what else we are really missing.
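
For reference, plain client-go already exposes this kind of scoping; a rough sketch with a hypothetical --namespace flag (this is not Knative's injection machinery) could look like:

```go
// Rough sketch with plain client-go: scope all shared informers created from
// one factory to a single namespace. The --namespace flag is hypothetical.
package main

import (
	"flag"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	namespace := flag.String("namespace", "", "If set, watch only this namespace instead of the whole cluster.")
	kubeconfig := flag.String("kubeconfig", clientcmd.RecommendedHomeFile, "Path to a kubeconfig file.")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	opts := []informers.SharedInformerOption{}
	if *namespace != "" {
		// Restricts every informer created from this factory to one namespace,
		// so LIST/WATCH only needs namespaced (Role-level) permissions.
		opts = append(opts, informers.WithNamespace(*namespace))
	}
	factory := informers.NewSharedInformerFactoryWithOptions(client, 10*time.Minute, opts...)

	stopCh := make(chan struct{})
	defer close(stopCh)

	// Example: a Deployment informer that now only sees the chosen namespace.
	_ = factory.Apps().V1().Deployments().Informer()
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
}
```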

Are there folks still interested in pursuing this? cc @pmorie

@knative-housekeeping-robot

Rotten issues close after 30 days of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh by adding the comment /remove-lifecycle rotten.

Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra.

/close

@knative-prow-robot
Contributor

@knative-housekeeping-robot: Closing this issue.

In response to this:

Rotten issues close after 30 days of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh by adding the comment /remove-lifecycle rotten.

Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

9 participants