This repository was archived by the owner on Apr 17, 2019. It is now read-only.

Clarify docs around running nginx ingress controller without serviceaccounts #1639

Closed
sgsits opened this issue Aug 26, 2016 · 18 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@sgsits

sgsits commented Aug 26, 2016

I am running a Kubernetes cluster without TLS (no CA/tokens etc.).
I am unable to run nginx-ingress-controller due to the following error:

I0825 16:53:47.191547 1 main.go:99] Using build: https://github.com/bprashanth/contrib.git - git-b195d9b
F0825 16:53:47.191966 1 main.go:121] failed to create client: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

Why is config.go > InClusterConfig() forcing the presence of serviceaccount and CA files?
I should be able to run it in an insecure environment.

@aledbf
Contributor

aledbf commented Aug 26, 2016

@sgsits please check if your ingress controller image version is 0.8.3.
In the new version the connection was changed in #1467 to allow running in any environment without flags.
This is the workflow followed to connect to the api server.

@sgsits
Author

sgsits commented Aug 26, 2016

thanks

@sgsits sgsits closed this as completed Aug 26, 2016
@sgsits
Author

sgsits commented Aug 26, 2016

Now I am getting the following error.
(Since it assumes the master is running locally, how would I provide it the master URL?)

F0826 16:03:31.076649 1 main.go:121] no service with name default/default-http-backend found: Get http://localhost:8080/api/v1/namespaces/default/services/default-http-backend: dial tcp [::1]:8080: getsockopt: connection refused

@sgsits sgsits reopened this Aug 26, 2016
@aledbf
Contributor

aledbf commented Aug 26, 2016

@sgsits like this export KUBERNETES_MASTER=http://172.17.4.99:8080
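In a pod spec (rather than a shell export), the same setting could look like this fragment; the address is the example value from the comment above:

```yaml
# Fragment of the ingress controller pod/RC spec (illustrative).
env:
  - name: KUBERNETES_MASTER
    value: "http://172.17.4.99:8080"
```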

@sgsits
Author

sgsits commented Aug 26, 2016

I should have clarified this earlier: I am running the ingress controller in the Kubernetes cluster as a pod (using a replicationController). I am wondering why it is trying to find the master using localhost.

@aledbf
Contributor

aledbf commented Aug 26, 2016

I am wondering why it is trying to find Master using localhost.

One of the reasons could be that KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT are not set.
Please add --v=10 to increase the verbosity level and check the output.

Also provide more information about your cluster:

  • kubectl cluster-info output
  • where are you running the cluster
  • docker version
    etc.

@sgsits
Author

sgsits commented Aug 27, 2016

Thanks again.
I am running a 3-master, 3-node cluster:

  1. [app@sandbox-132869446-1-154153755 ~]$ kubectl cluster-info
    Kubernetes master is running at http://localhost:8080
    KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns

  2. Cluster is running in an on-premise custom VMware-based cloud

  3. docker version 1.11.2

I observed that the kubernetes service is only available on HTTPS:
[app@sandbox-132869446-1-154153755 ~]$ kubectl describe svc kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver,provider=kubernetes
Selector:
Type: ClusterIP
IP: 172.16.48.1
Port: https 443/TCP
Endpoints: 172.16.178.106:6443
Session Affinity: None
No events.

And inside my ingress controller it is not available on 8080. Below are the ENVs from within my ingress controller pod (NOTE: I have manually passed KUBERNETES_MASTER using pod envs):

[app@sandbox-132869446-1-154153755 ~]$ kubectl exec nginx-ingress-controller-belmg env | grep KUBER
KUBERNETES_MASTER=http://172.16.164.171:8080
KUBERNETES_PORT_443_TCP_ADDR=172.16.48.1
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_SERVICE_HOST=172.16.48.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://172.16.48.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://172.16.48.1:443

I am wondering how to make the kubernetes service available on HTTP (instead of HTTPS).

@aledbf
Contributor

aledbf commented Aug 27, 2016

KUBERNETES_MASTER=http://172.16.164.171:8080

This works only if you set --insecure-bind-address=0.0.0.0, which is not a good idea in production.

@aledbf
Contributor

aledbf commented Aug 27, 2016

I am wondering how to make kubernetes service available on HTTP (against HTTPS)

Can you share the reason for this?

@sgsits
Author

sgsits commented Aug 31, 2016

Apologies for the delay. There is no good reason to use non-TLS, but we are in phase 1 of implementation and have not yet achieved integration with our cert authority (using self-signed certs is not something we are allowed to do). This cluster is running internally, so for now we are testing without SSL.

Going back to my question: KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT are actually available in the env, so I am unable to understand the need to inject KUBERNETES_MASTER=http://172.16.164.171:8080.

Are you saying that nginx-ingress-controller will not be able to communicate with default kubernetes service unless the pod contains secrets in /var/run/secrets/kubernetes.io/serviceaccount/token?

I also tried using --insecure-bind-address=0.0.0.0 --insecure-port=8080;
however, the kubernetes service is still created only on https TCP/443:

[app@sandbox-132869446-1-155337960 ~]$ kubectl describe service kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver,provider=kubernetes
Selector:
Type: ClusterIP
IP: 172.16.48.1
Port: https 443/TCP
Endpoints: 172.16.179.79:6443
Session Affinity: None
No events.

Since I am going to spin up nginx-ingress-controller as an infrastructure pod, I don't want to hardcode/inject the master URL in it. How could I make it use the kubernetes service to reach the API server?

@aledbf
Contributor

aledbf commented Aug 31, 2016

I also tried using --insecure-bind-address=0.0.0.0 --insecure-port=8080
however the kubernetes service is still created only on https TCP/443

That is expected. The API server can listen on both ports.

Are you saying that nginx-ingress-controller will not be able to communicate with default kubernetes service unless the pod contains secrets in /var/run/secrets/kubernetes.io/serviceaccount/token?

No. If you enable service accounts, the token will be mounted in all pods, and the env vars KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT will be added.
Please check the doc accessing-the-api-from-a-pod

No good reason to use non TLS but we are in phase 1

That restriction will affect other parts like the DNS add-on (or any other pod that requires interaction with the API server).
Please consider using TLS, or disable SSL in the API server using --secure-port=0
(please do not choose this option; just enable SSL).

I don't want to hardcode/inject the Master URL in it.

This is not required if the cluster is configured correctly.

@suyogbarve

I was able to consume the kubernetes service (kubectl get svc kubernetes) from inside the pod after enabling serviceaccounts. However, I think non-TLS communication from inside a pod to the kubernetes clusterIP/service is no longer supported.

@bprashanth

Sounds like the issue was resolved; I'm only too happy to force people to use TLS.
We should clarify the docs so this is obvious, including amending any missing language around KUBERNETES_SERVICE_HOST/PORT (which caused the confusion in this bug).

@bprashanth bprashanth changed the title Error contrib/ingress/controllers/nginx/ Clarify docs around running nginx ingress controller without serviceaccounts Sep 6, 2016
@suyogbarve

Thanks @bprashanth. Would it be possible to share some details on production readiness for the nginx ingress controller? Is it ready to be deployed to a production environment? If not, are there any rough/tentative timelines?

@dcowden

dcowden commented Sep 26, 2016

Just in case someone comes here and has the same issue I did:

In my case, I had this behavior because of another k8s issue in which serviceaccounts are enabled, but the token is not populated: kubernetes/kubernetes#27973

To determine if you have this issue, look in your container and see if /var/run/secrets/kubernetes.io/serviceaccount/token is there. In my case, the serviceaccount directory existed (proving that I had enabled serviceaccounts correctly), but the token file was not there.
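A quick way to run that check from inside the container (the check_token helper is made up for illustration; in practice you could simply `kubectl exec <pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount`):

```shell
# check_token DIR: report whether the service-account token file is mounted
check_token() {
  if [ -f "$1/token" ]; then
    echo "token present"
  else
    echo "token missing"
  fi
}

# Inside the pod you would run:
check_token /var/run/secrets/kubernetes.io/serviceaccount
```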

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 17, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 16, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
