
add the node's local IP address to OVS rules for EgressNetworkPolicy #14924

Merged: 1 commit, Jun 30, 2017

Conversation

@JacobTanenbaum (Contributor)

This change adds the node's local IP address to the OVS rules when using EgressNetworkPolicies to limit egress from the cluster. Adding the node's local IP allows DNS resolution when DNS is accessible on the node.

Bug 1458849

@eparis (Member) commented Jun 27, 2017

I'm guessing there is no way to allow only the IP+DNS port combo, huh?

@knobunc (Contributor) commented Jun 28, 2017

@eparis: This doesn't directly attempt to make DNS always work. What it does is consider the external IP of the node to be allowed (since the SDN addresses on the node are always allowed too). We are explicitly not looking at their resolv.conf and allowing access to the DNS servers. If the system has a DNS server that is not on the node, then they will need to add that to the network policy to allow access. But the installer sets up a local split-horizon dnsmasq, and we would recommend people do that if they are not using our installer anyway.
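
For illustration only, a rough sketch of what such a policy might look like, written against the osapi types this PR's diff references. The field names are assumed from origin's SDN API, and the DNS server address is hypothetical:

```go
// Hypothetical EgressNetworkPolicy: explicitly allow an off-node DNS server,
// then deny all other external traffic. Field names assumed from origin's
// pkg/sdn/api (osapi); 10.0.0.11 is a made-up address.
policy := osapi.EgressNetworkPolicy{
	Spec: osapi.EgressNetworkPolicySpec{
		Egress: []osapi.EgressNetworkPolicyRule{
			{
				Type: osapi.EgressNetworkPolicyRuleAllow,
				To:   osapi.EgressNetworkPolicyPeer{CIDRSelector: "10.0.0.11/32"},
			},
			{
				Type: osapi.EgressNetworkPolicyRuleDeny,
				To:   osapi.EgressNetworkPolicyPeer{CIDRSelector: "0.0.0.0/0"},
			},
		},
	},
}
```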

@knobunc (Contributor) left a comment

LGTM

But we should work out how to test this...

@knobunc (Contributor) commented Jun 28, 2017

@openshift/networking PTAL

[test][testextended][extended: networking]

@danwinship (Contributor) left a comment

> I'm guessing there is no way to allow only the IP+DNS port combo, huh?

We can, although we'd need two rules in that case, one with `tcp, tcp_dst=53` and one with `udp, udp_dst=53`, to allow both UDP and TCP DNS.

```diff
@@ -407,6 +407,7 @@ func (oc *ovsController) UpdateEgressNetworkPolicyRules(policies []osapi.EgressN
 	otx.AddFlow("table=100, reg0=%d, cookie=1, priority=65535, actions=drop", vnid)
 	otx.DeleteFlows("table=100, reg0=%d, cookie=0/1", vnid)
 
+	otx.AddFlow("table=100, reg0=%d, priority=%d, ip, nw_dst=%s actions=output:2", vnid, len(policies[0].Spec.Egress)+1, nodeIP)
```

You don't need a separate rule for each VNID. Just have a single rule (created in SetupOVS) with no reg0 match, with higher priority than any flow created by UpdateEgressNetworkPolicyRules. (An EgressNetworkPolicy can only have osapi.EgressNetworkPolicyMaxRules rules in it, so that plus 1 is high enough priority.)
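
A minimal sketch of that suggestion (this is the reviewer's proposal, not the code that was merged; it reuses `otx`, `nodeIP`, and `osapi.EgressNetworkPolicyMaxRules` from the diffs in this PR):

```go
// One flow, installed once in SetupOVS rather than per-VNID: no reg0 match, so it
// covers every tenant, at a priority strictly above anything that
// UpdateEgressNetworkPolicyRules can install (a policy holds at most
// EgressNetworkPolicyMaxRules rules, so MaxRules+1 is high enough).
otx.AddFlow("table=100, priority=%d, ip, nw_dst=%s, actions=output:2",
	osapi.EgressNetworkPolicyMaxRules+1, nodeIP)
```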

@eparis (Member) commented Jun 28, 2017

Seems like the two rules would be a good idea if it's feasible. It means the denial is only very specifically bypassed, instead of allowing you to talk to the kubelet on 10250 and stuff like that...

@danwinship (Contributor)

@eparis well, the other thing is that you can already always talk to the kubelet by using the SDN IP address (eg, 10.128.0.1) rather than the node's public IP address. So it's not really blocking much. (I should have remembered and mentioned this in the last comment...)

@eparis (Member) commented Jun 28, 2017

the blanket 'deny all the things' rule doesn't apply?

@danwinship (Contributor)

EgressNetworkPolicy only applies to external traffic, so it doesn't apply to traffic directed at the node's SDN IP address

@eparis (Member) commented Jun 28, 2017

I wonder how hard it would be to point pods at the node SDN address for DNS instead of at the actual node DNS, and/or whether that is a good idea. Not this PR's problem.

@danwinship (Contributor) left a comment

lgtm. please squash the two commits together though (plus the fix below)

```diff
@@ -177,6 +177,8 @@ func (oc *ovsController) SetupOVS(clusterNetworkCIDR, serviceNetworkCIDR, localS
 	// Table 100: egress network policy dispatch; edited by UpdateEgressNetworkPolicy()
 	// eg, "table=100, reg0=${tenant_id}, priority=2, ip, nw_dst=${external_cidr}, actions=drop"
 	otx.AddFlow("table=100, priority=0, actions=output:2")
+	otx.AddFlow("table=100, priority=%d,tcp,tcp_dst=53,nw_dst=%s,actions=output:2", osapi.EgressNetworkPolicyMaxRules+2, nodeIP)
+	otx.AddFlow("table=100, priority=%d,udp,udp_dst=53,nw_dst=%s,actions=output:2", osapi.EgressNetworkPolicyMaxRules+1, nodeIP)
```

You can give them the same priority; one specifies "tcp" and the other specifies "udp" so there's no packet that could match both flows, so they don't need distinct priorities.

this change adds the node's local IP address to the OVS rules when using
EgressNetworkPolicies to limit egress from the cluster. Adding the node's
local IP allows DNS resolution when DNS is accessible on the node.

bug 1458849

changelog:

 - changed the rules creation to SetupOVS()
 - made both udp and tcp rules the same priority
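
Putting the review suggestions together, a sketch (assembled from the diff and changelog above, not the verbatim final commit) of the two flows after the squash:

```go
// Both flows share one priority: a packet is either TCP or UDP, so no packet can
// match both, and distinct priorities are unnecessary.
otx.AddFlow("table=100, priority=%d,tcp,tcp_dst=53,nw_dst=%s,actions=output:2",
	osapi.EgressNetworkPolicyMaxRules+1, nodeIP)
otx.AddFlow("table=100, priority=%d,udp,udp_dst=53,nw_dst=%s,actions=output:2",
	osapi.EgressNetworkPolicyMaxRules+1, nodeIP)
```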
@danwinship (Contributor)

[merge]

@openshift-bot

Evaluated for origin merge up to 74f0baf

@JacobTanenbaum (Contributor, Author)

Flake #14043

@JacobTanenbaum (Contributor, Author)

Flake #14385

@JacobTanenbaum (Contributor, Author)

re[test]

@knobunc (Contributor) commented Jun 29, 2017

[test]

@knobunc (Contributor) commented Jun 29, 2017

[test] Last flake was an AWS provisioning error.

@JacobTanenbaum (Contributor, Author)

Flake #13945

@JacobTanenbaum (Contributor, Author)

[test]

@openshift-bot

Evaluated for origin test up to 74f0baf

@openshift-bot

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin/2838/) (Base Commit: 342ccff) (PR Branch Commit: 74f0baf)

@eparis (Member) commented Jun 30, 2017

[merge]
[testextended]

@openshift-bot

Evaluated for origin testextended up to 74f0baf

@openshift-bot

continuous-integration/openshift-jenkins/testextended SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended/799/) (Base Commit: b80c2fb) (PR Branch Commit: 74f0baf) (Extended Tests: networking)

@openshift-bot commented Jun 30, 2017

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/1184/) (Base Commit: 383de81) (PR Branch Commit: 74f0baf) (Image: devenv-rhel7_6413)

@openshift-bot merged commit faae9ac into openshift:master on Jun 30, 2017