Add user stories for ANP Egress Cluster control #86
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: tssurya. Approval is still needed from an approver for each of the modified files.
> 1. Since ANP is a new API, define if our current namespace and pod selectors consider host-networked pods or not? Or do we want to allow this discretion to the CNI plugins?
IMO we should avoid implementation-defined behavior as much as possible. A policy object should have a single well-defined meaning, which is the same regardless of network plugin.
The host-networked pod case is tricky because: (a) there are good use cases for being able to select host-network pods, but (b) most existing NetworkPolicy implementations are not able to distinguish traffic from different host-network pods so we can't just require it.
One possibility would be to add explicit `hostNetworkPodSelector` fields in addition to the regular `podSelector`, and then have clear semantics for rejecting the policy if the plugin can't implement it.
> IMO we should avoid implementation-defined behavior as much as possible. A policy object should have a single well-defined meaning, which is the same regardless of network plugin.

+1

> The host-networked pod case is tricky because: (a) there are good use cases for being able to select host-network pods, but (b) most existing NetworkPolicy implementations are not able to distinguish traffic from different host-network pods so we can't just require it.

True.

> One possibility would be to add explicit `hostNetworkPodSelector` fields in addition to the regular `podSelector`, and then have clear semantics for rejecting the policy if the plugin can't implement it.

Yeah, that's what I had in mind. Since we still want to give users the ability to select host-net pods, but also want clarity on what the API fields stand for (whether they include host-net or just regular pods), we could go with what you said: our regular `podSelector` will select only CNI pods, and we introduce a new `hostNetworkPodSelector` that can be used for specifying host-net pods. `NamespacedPeer`, `NamespacedPodPeer`, and `NamespacedPodSubject` can all be modified to include this field.
Fixed this up, let me know what you think @danwinship.
/cc @astoycos @Dyanngg
> 2. **As a** cluster administrator **I want** to explicitly allow traffic from cluster workloads to handpicked destinations outside the cluster and deny the rest **so that** I can guarantee cluster wide egress traffic guardrails irrespective of whatever network policies are defined by the
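The user story above (allow handpicked external destinations, deny the rest) could be expressed roughly as follows. This is a hypothetical sketch: the field names follow later drafts of the AdminNetworkPolicy API and were not settled at the time of this discussion, and the CIDR is a placeholder.

```yaml
# Illustrative only; not the final API shape.
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: restrict-egress-outside-cluster
spec:
  priority: 10
  subject:
    namespaces: {}            # all cluster workloads
  egress:
  - name: allow-handpicked-destinations
    action: Allow
    to:
    - networks:
      - 203.0.113.0/24        # example external destination
  - name: deny-other-external-traffic
    action: Deny
    to:
    - networks:
      - 0.0.0.0/0
```

Because egress rules are evaluated in order, the explicit `Allow` for the handpicked destination takes effect before the catch-all `Deny`.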
"guardrails" implies "deny" doesn't it? I forget what we call cluster-wide "allow" rules
(Hm... I guess it depends on what the rails are guarding...)
:D Maybe I can refactor this bit in a simpler way if it's confusing? What I really meant was: the first user story is "deny northbound", and the second user story is "allow northbound towards explicit destinations" on top of that deny.
Guardrails are deny, I guess; we should be using a word that implies the opposite of that...
> those workloads towards well defined destinations then those workloads should not be able to connect to any other external destinations than the explicitly allowed ones.
I think you need another criterion for the "guardrails" case?
I thought point 2 was the criterion for the "guardrails" case... :D I changed the word "guardrail"; PTAL at the second iteration to see if it makes sense.
Force-pushed from 95ce207 to 544d23b.
@Dyanngg @astoycos @danwinship and anyone interested: PTAL at the new revision and add your thoughts!
> 1. Clarify that the existing `podSelector` fields in the ANP API are for regular non-hostNetworked pods only. These don't apply to hostNetworked pods.
> 2. Add a new well defined optional field called `hostNetworkPodSelector` to both
Policy rules will be very tricky to enforce though, since hostnet pods share the same node IP...
Hmm, I agree with you, but at the L3/L4 level I am hoping that, because we don't have implicit denies in ANP, users will use this along with the port field to say what they want to allow (like 6443 for the k8s API server)? But that's at the peer level... at the subject level you are right. I need to think about whether we have strong use cases for allowing host-networked pods in the subject... but it is certainly something to look out for! If people accidentally block everything on this IP, that impacts the to/from traffic of the nodes...
Just to get this down in writing somewhere: it is theoretically possible to distinguish "traffic from hostNetwork pod A", "traffic from hostNetwork pod B", and "traffic from a node process" because each hostNetwork pod has its own cgroup and there are APIs (eg `iptables -m cgroup`) that can match based on the cgroup of the socket associated with a connection.
AFAIK no one has tried to do this in practice, and the `iptables-extensions` man page suggests that it only works on outbound packets, not inbound ones (though the `nft` man page does not have the same warning and actually gives an example using `input`, so maybe the iptables warning is out of date?)
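For reference, cgroup-based matching looks roughly like the following. This is an illustrative sketch only: the cgroup path shown is hypothetical (real paths depend on the node's cgroup driver and layout), and these rules require cgroup v2 and root privileges.

```shell
# iptables: match egress traffic whose socket belongs to a given cgroup
# (the -m cgroup --path match; hypothetical pod cgroup path shown).
iptables -A OUTPUT -m cgroup --path "kubepods.slice/kubepods-podabc123.slice" -j ACCEPT

# nftables equivalent using the socket cgroupv2 expression; the nft man
# page also shows this kind of match being used in an input chain.
nft add rule inet filter output socket cgroupv2 level 2 "kubepods.slice/kubepods-podabc123.slice" accept
```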
Thanks @danwinship! I am certainly learning a lot here! So it seems like this is "tricky" depending on the implementation, but not impossible to implement in the end...
From an API perspective these are egress stories: normal pods (to)-> host-networked pods, OR normal pods (to)-> external entities outside the cluster.
However, I mentioned adding support for host-networked pods (to)-> normal pods, which makes this an ingress use case depending on how we look at it, but in the end host-networked pods are in the cluster. Just wondering if there are asks or stories to add support for host-networked pods in the subject field.
Force-pushed from 544d23b to 5bbdeb4.
Signed-off-by: Surya Seetharaman <[email protected]>
Force-pushed from 5bbdeb4 to 05a85d3.
/close in favor of #117
@astoycos: Closed this PR.
Relates-to #28