This repository was archived by the owner on Apr 17, 2019. It is now read-only.

[ansible] Provide External Access to Cluster Addon Services #679

Closed
danehans opened this issue Mar 30, 2016 · 13 comments
Labels
area/ansible lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@danehans

Currently, the ansible project supports cluster addons [1] by default. Many of the addon services should be exposed outside the cluster, but the addon manifests do not provide a common way to externally expose services [2]. The ansible project should provide this mechanism. Here are a few options to achieve this goal:

  1. Use the contrib/service-loadbalancer project [3] in the kube-system namespace to manage each addon service that should be externally exposed.
  2. Use the nginx ingress controller project [4] in the kube-system namespace to manage each addon service that should be externally exposed.

In either case, a new ansible role should be created to manage this functionality.

[1] https://github.com/kubernetes/kubernetes/tree/master/cluster/addons
[2] kubernetes/kubernetes#23620
[3] https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
[4] https://github.com/kubernetes/contrib/tree/master/ingress/controllers

/cc: @rutsky @adamschaub @eparis

@rutsky
Contributor

rutsky commented Apr 7, 2016

@danehans @adamschaub @stephenrlouie any plans to implement this soon?

I'm currently exploring ways of exposing cluster addons to an external network. Have you already started anything to achieve this?

I want to try an Ingress controller, since it looks like a promising approach.

@danehans
Author

danehans commented Apr 7, 2016

@rutsky I was able to use https://github.com/kubernetes/contrib/tree/master/service-loadbalancer to expose addons. I have been in communication with NGINX, and they are interested in making https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx work. To make the nginx ingress controller work, we have two options: one is a long-term fix, the other a short-term workaround.

  1. The long-term fix: addons you're interested in exposing via Ingress must provide a way to configure the root URL of the HTTP service they serve. For example, the Grafana addon supports this: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml#L66-67. Configuring the same path in the addon's configuration and in the Ingress resource should then work without issues.
  2. The short-term fix: a workaround using NGINX on a per-addon basis; not very elegant, but it should be functional.
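
As a rough sketch of the long-term option (the namespace, service name, port, and path below are hypothetical placeholders; extensions/v1beta1 is the Ingress API group current at the time of this thread):

```yaml
# Sketch only: expose the Grafana addon under a sub-path via Ingress.
# The addon itself must be configured with the same root path (e.g. the
# Grafana root-URL setting referenced above), so the links it generates
# match the path the Ingress serves it under.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cluster-addons        # hypothetical name
  namespace: kube-system
spec:
  rules:
  - http:
      paths:
      - path: /monitoring/grafana/
        backend:
          serviceName: monitoring-grafana   # hypothetical service name
          servicePort: 80
```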

@pleshakov

If you're using the NGINX controllers from here, you can use this workaround gist to expose Grafana and the Dashboard.

Although Grafana supports configuring its ROOT_URL, there are still issues, so this workaround also applies to Grafana.

@bprashanth

This makes sense. How much more complicated is the feature set of re-write rules? ROOT_URL feels like a more general re-write policy that would be nicely exposed as:

    http:
      paths:
      - path: /dashboard/
        rewrite: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80

Unfortunately clouds don't support re-write yet, but it's such a common task for a proxy that I don't think we should wait. I don't want to handle this on a per addon basis (either controller per addon, or enabling it on an addon like grafana that supports ROOT_URL).

@pleshakov is there a more general argument for making ROOT_URL different from simple rewrite? I'd like to keep Ingress as cross-platform as possible, so unless there's a strong case to expose both a rewrite AND a root-url, I think we should just do rewrite.

@kubernetes/sig-network @aledbf @lavalamp

@bprashanth

In the short term, we could do this through an annotation that applies to all rules in the Ingress, and/or through a cmd line flag that applies to all ingresses satisfied by an ingress controller. We need a way to run multiple ingress controllers in a single cluster anyway (eg: one that watches only kube-system and takes --root-url and one that watches everything else and passes urls as-is).
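
A hedged sketch of what such an Ingress-wide annotation might look like (the annotation key below is invented for illustration; no name had been agreed on in this thread):

```yaml
# Hypothetical: one annotation that rewrites every matched path in this
# Ingress to the backend root, instead of a per-rule rewrite field.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube-system-addons    # hypothetical name
  namespace: kube-system
  annotations:
    ingress.example.com/rewrite-target: /   # invented key, for illustration
spec:
  rules:
  - http:
      paths:
      - path: /dashboard/
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
```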

@lavalamp
Contributor

@erictune and I were just talking about switching addon access to go through an Ingress, we actually have started a doc about this but not published it yet. Let's pick URL paths that are easy to compose (unlikely to collide). /components/dashboard/, /addons/dashboard/, or /ui/dashboard/ instead of just /dashboard/, for example. We're thinking right now that we want to install kube-apiserver at the root of this ingress, so that's got a number of paths you should avoid colliding with.

@pleshakov

@bprashanth If every addon could provide an option to configure the ROOT_URL of their HTTP service, exposing them via Ingress should work for existing Ingress controller implementations.

I think rewrite is a nice and useful feature. However, sometimes it does not work to simply send the request from the load balancer to the backend at the new URL. For example, an application may return hard-coded absolute links in its HTML pages. For such applications, to make them work, we inspect the returned HTML content and headers and fix the links. In the case of NGINX, this can be done by adding sub_filter rules, but those would differ depending on the application. It looks like the proxy in the kube-apiserver does this: https://github.com/kubernetes/kubernetes/blob/master/pkg/util/proxy/transport.go

@bprashanth

If every addon could provide an option to configure the ROOT_URL of their HTTP service, exposing them via Ingress should work for existing Ingress controller implementations.

Agreed, but that's just another piece we need to teach everyone writing a new cluster addon about. Given that we probably need some rewrite idiom, I think we can come up with something that doesn't require the backend to explicitly cooperate.

For example, an application may return hard-coded absolute links in its HTML pages. For such applications, to make them work, we inspect the returned HTML content and headers and fix the links.

Isn't that the common case? We'd just document that our rewrite option depends on how the ingress controller implements it: some may not handle it at all, some do a dumb rewrite, while those capable maintain a good user experience (eg: fixing links)

@pleshakov is there a case where > 30% of users would want fine-grained control over rewrites (eg: to say only rewrite, don't fix hard links) that can't be expressed through annotations (meaning it would be too hard to do at the Ingress scope, instead of the Ingress.Rule scope)?

@pleshakov

@bprashanth

Isn't that the common case? We'd just document that our rewrite option depends on how the ingress controller implements it: some may not handle it at all, some do a dumb rewrite, while those capable maintain a good user experience (eg: fixing links)

The common case is the dumb rewrite, but it should work for a lot of applications. It seems to work fine for the Dashboard and Grafana addons. Agreed, handling of any special cases would differ from one controller to another.

@pleshakov is there a case where > 30% of users would want fine-grained control over rewrites (eg: to say only rewrite, don't fix hard links) that can't be expressed through annotations (meaning it would be too hard to do at the Ingress scope, instead of the Ingress.Rule scope)?

In the case of the NGINX controller, if we let the user put NGINX configuration snippets into annotations on an Ingress resource, so that problems can be fixed by rewriting the response headers and body from the backend, that would be fine-grained control with which the user should be able to fix most problems. The snippets will differ depending on the application and will be inserted into the configuration file. It will look ugly, but hopefully will rarely be used.
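
A sketch of how such a snippet annotation might look for the dashboard case (the annotation key is invented for illustration; the sub_filter directives are ordinary NGINX config that rewrites absolute links in the returned HTML):

```yaml
# Hypothetical annotation carrying a raw NGINX snippet; the controller
# would insert it into the generated location block for this Ingress.
metadata:
  annotations:
    ingress.example.com/configuration-snippet: |
      # Fix hard-coded absolute links in the backend's HTML responses.
      sub_filter 'href="/' 'href="/dashboard/';
      sub_filter 'src="/'  'src="/dashboard/';
      sub_filter_once off;
```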

Another related and useful feature could be a redirect:

    http:
      paths:
      - path: /ui/
        redirect:
          path: /dashboard/
          type: permanent
      - path: /dashboard/
        rewrite: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80

@aledbf
Contributor

aledbf commented Jun 2, 2016

@danehans please check the rewrite example (gcr.io/google_containers/nginx-ingress-controller:0.7)
Any feedback or suggestions are welcome

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 15, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 14, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
