This repository has been archived by the owner on Dec 1, 2018. It is now read-only.

Cannot see other namespaces except, kube-system and default #1279

Closed
amalkasubasinghe opened this issue Sep 5, 2016 · 20 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. support

Comments

@amalkasubasinghe

Hi,

I setup heapster v1.2.0-beta.2 on kubernetes 1.3.2
everything is working fine and I can load the grafana graphs also.
When I load the "Pods" dashboard, under podname: I can see all the pods I have created in all namespaces. But under namespace it lists only the default and kube-system namespaces. Any idea why that is?

@DirectXMan12
Contributor

Do you see anything in the logs for the Heapster pod? Could you put logs from the Heapster container in some sort of pastebin-like area (a gist, pastebin, uploaded to an accessible server, etc) and post a link here so we can take a look and make sure there are no errors there?

@wangweihong

I ran into this error too. I am using heapster v1.2.0-beta.1 on kubernetes v1.4.0-alpha.2.
These are the logs from my Heapster container:

root@ubuntu192:~/heapster/deploy/kube-config/influxdb# docker logs e603
I0912 02:36:21.438528 1 heapster.go:69] /heapster --source=kubernetes:http://192.168.14.100:8080?inClusterConfig=false --sink=influxdb:http://monitoring-influxdb:8086
I0912 02:36:21.438625 1 heapster.go:70] Heapster version 1.2.0-beta.1
I0912 02:36:21.438652 1 configs.go:60] Using Kubernetes client with master "http://192.168.14.100:8080" and version v1
I0912 02:36:21.438669 1 configs.go:61] Using kubelet port 10255
E0912 02:38:28.777790 1 influxdb.go:217] issues while creating an InfluxDB sink: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp 12.18.70.52:8086: getsockopt: connection timed out, will retry on use
I0912 02:38:28.777838 1 influxdb.go:231] created influxdb sink with options: host:monitoring-influxdb:8086 user:root db:k8s
I0912 02:38:28.777882 1 heapster.go:99] Starting with InfluxDB Sink
I0912 02:38:28.777893 1 heapster.go:99] Starting with Metric Sink
I0912 02:38:28.789627 1 heapster.go:189] Starting heapster on port 8082
I0912 02:39:05.090404 1 influxdb.go:209] Created database "k8s" on influxDB server at "monitoring-influxdb:8086"

I deploy Heapster using the yaml files under heapster/deploy/kube-config/influxdb, and I changed the heapster-controller.yaml file like this:

    spec:
      containers:
      - name: heapster
        image: kubernetes/heapster:canary
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:http://192.168.14.100:8080?inClusterConfig=false
        - --sink=influxdb:http://monitoring-influxdb:8086

and exposed all the ports used by the service files.
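Note the "failed to ping InfluxDB server" error in the logs above: Heapster could not reach the monitoring-influxdb service at startup. A quick way to check connectivity before digging further is to hit InfluxDB's /ping endpoint from inside the cluster (a sketch; this assumes the service lives in the kube-system namespace, as in the stock manifests):

```shell
# Check the service actually has endpoints backing it:
kubectl get endpoints monitoring-influxdb --namespace=kube-system

# Hit InfluxDB's /ping endpoint from a throwaway pod;
# InfluxDB answers with HTTP 204 when it is healthy.
kubectl run influx-check --rm -i --restart=Never --image=busybox -- \
  wget -S -O- http://monitoring-influxdb.kube-system.svc:8086/ping
```

If the ping times out here too, the problem is service DNS or networking rather than Heapster itself.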

@DirectXMan12
Contributor

Can you run Heapster at a higher verbosity (add an additional argument of --v=5 in your Heapster RC)? That might show more of what's going on.

@wangweihong

wangweihong commented Sep 13, 2016

@DirectXMan12

I set the Heapster RC command like this:

    command:
    - /heapster
    - --source=kubernetes:http://192.168.14.100:8080?inClusterConfig=false
    - --sink=influxdb:http://monitoring-influxdb:8086
    - --v=5

These are the logs I got.

root@ubuntu192:~/heapster/deploy/kube-config# docker logs d358
I0913 00:38:07.718830       1 heapster.go:69] /heapster --source=kubernetes:http://192.168.14.100:8080?inClusterConfig=false --sink=influxdb:http://monitoring-influxdb:8086 --v=5
I0913 00:38:07.718931       1 heapster.go:70] Heapster version 1.2.0-beta.1
I0913 00:38:07.718958       1 configs.go:60] Using Kubernetes client with master "http://192.168.14.100:8080" and version v1
I0913 00:38:07.718974       1 configs.go:61] Using kubelet port 10255
I0913 00:38:07.724236       1 reflector.go:202] Starting reflector *api.Node (1h0m0s) from k8s.io/heapster/metrics/sources/kubelet/kubelet.go:339
I0913 00:38:07.724486       1 reflector.go:253] Listing and watching *api.Node from k8s.io/heapster/metrics/sources/kubelet/kubelet.go:339
E0913 00:40:15.017765       1 influxdb.go:217] issues while creating an InfluxDB sink: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp 12.18.26.20:8086: getsockopt: connection timed out, will retry on use
I0913 00:40:15.017811       1 influxdb.go:231] created influxdb sink with options: host:monitoring-influxdb:8086 user:root db:k8s
I0913 00:40:15.017841       1 heapster.go:99] Starting with InfluxDB Sink
I0913 00:40:15.017859       1 heapster.go:99] Starting with Metric Sink
I0913 00:40:15.018263       1 reflector.go:202] Starting reflector *api.Pod (1h0m0s) from k8s.io/heapster/metrics/heapster.go:272
I0913 00:40:15.018317       1 reflector.go:202] Starting reflector *api.Node (1h0m0s) from k8s.io/heapster/metrics/heapster.go:280
I0913 00:40:15.018502       1 reflector.go:253] Listing and watching *api.Pod from k8s.io/heapster/metrics/heapster.go:272
I0913 00:40:15.018597       1 reflector.go:253] Listing and watching *api.Node from k8s.io/heapster/metrics/heapster.go:280
I0913 00:40:15.018566       1 reflector.go:202] Starting reflector *api.Namespace (1h0m0s) from k8s.io/heapster/metrics/processors/namespace_based_enricher.go:84
I0913 00:40:15.018958       1 reflector.go:202] Starting reflector *api.Node (1h0m0s) from k8s.io/heapster/metrics/processors/node_autoscaling_enricher.go:96
I0913 00:40:15.019078       1 reflector.go:253] Listing and watching *api.Namespace from k8s.io/heapster/metrics/processors/namespace_based_enricher.go:84
I0913 00:40:15.019092       1 reflector.go:253] Listing and watching *api.Node from k8s.io/heapster/metrics/processors/node_autoscaling_enricher.go:96
I0913 00:40:15.028949       1 heapster.go:189] Starting heapster on port 8082
I0913 00:41:05.000324       1 manager.go:79] Scraping metrics start: 2016-09-13 00:40:00 +0000 UTC, end: 2016-09-13 00:41:00 +0000 UTC
I0913 00:41:05.012757       1 manager.go:98] Querying source: kubelet:192.168.14.99:10255
I0913 00:41:05.016719       1 manager.go:98] Querying source: kubelet:192.168.14.101:10255
I0913 00:41:05.020291       1 kubelet.go:232] successfully obtained stats for 1 containers
I0913 00:41:05.020796       1 manager.go:98] Querying source: kubelet:192.168.14.100:10255
I0913 00:41:05.022521       1 kubelet.go:232] successfully obtained stats for 1 containers
I0913 00:41:05.042467       1 kubelet.go:232] successfully obtained stats for 72 containers

@wangweihong

wangweihong commented Sep 13, 2016

When I select the default namespace, it shows some pods (not all) from the kube-system namespace, kubernetes-dashboard-* and kube-dns-*, instead of the real pods in the default namespace.

And this is what I got when I selected the default namespace:

I0913 00:57:05.000339 1 manager.go:79] Scraping metrics start: 2016-09-13 00:56:00 +0000 UTC, end: 2016-09-13 00:57:00 +0000 UTC
I0913 00:57:05.002737 1 manager.go:98] Querying source: kubelet:192.168.14.101:10255
I0913 00:57:05.004853 1 kubelet.go:232] successfully obtained stats for 1 containers
I0913 00:57:05.019721 1 manager.go:98] Querying source: kubelet:192.168.14.99:10255
I0913 00:57:05.023690 1 manager.go:98] Querying source: kubelet:192.168.14.100:10255
I0913 00:57:05.024931 1 kubelet.go:232] successfully obtained stats for 1 containers
I0913 00:57:05.083551 1 kubelet.go:232] successfully obtained stats for 72 containers
I0913 00:57:05.084075 1 manager.go:152] ScrapeMetrics: time: 83.58848ms size: 74
I0913 00:57:05.084097 1 manager.go:154] scrape bucket 0: 3
I0913 00:57:05.084102 1 manager.go:154] scrape bucket 1: 0
I0913 00:57:05.084105 1 manager.go:154] scrape bucket 2: 0
I0913 00:57:05.084108 1 manager.go:154] scrape bucket 3: 0
I0913 00:57:05.084112 1 manager.go:154] scrape bucket 4: 0
I0913 00:57:05.084115 1 manager.go:154] scrape bucket 5: 0
I0913 00:57:05.084124 1 manager.go:154] scrape bucket 6: 0
I0913 00:57:05.084127 1 manager.go:154] scrape bucket 7: 0
I0913 00:57:05.084137 1 manager.go:154] scrape bucket 8: 0
I0913 00:57:05.084140 1 manager.go:154] scrape bucket 9: 0
I0913 00:57:05.084144 1 manager.go:154] scrape bucket 10: 0
I0913 00:57:05.084555 1 manager.go:113] Pushing data to: Metric Sink
I0913 00:57:05.084568 1 manager.go:113] Pushing data to: InfluxDB Sink
I0913 00:57:05.084579 1 manager.go:116] Data push completed: Metric Sink
I0913 00:57:05.084582 1 manager.go:116] Data push completed: InfluxDB Sink
I0913 00:57:05.111846 1 influxdb.go:169] Exported 782 data to influxDB in 26.759744ms
I0913 00:57:14.027026 1 reflector.go:407] k8s.io/heapster/metrics/heapster.go:280: Watch close - api.Node total 177 items received

@DirectXMan12
Contributor

hmm... if you make the queries manually, do you see the incorrect information?

For instance /metrics/api/v1/model/namespaces, or /metrics/api/v1/model/namespaces/default/pods?

@wangweihong

root@ubuntu192:~/heapster/deploy/kube-config# kubectl cluster-info
Kubernetes master is running at http://192.168.14.100:8080
Heapster is running at http://192.168.14.100:8080/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at http://192.168.14.100:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
monitoring-grafana is running at http://192.168.14.100:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana

If you mean <Heapster>/metrics/api/v1/model/namespaces or <Heapster>/metrics/api/v1/model/namespaces/default/pods,
no, I can't get any info. Those URLs both return 404 page not found. Only http://192.168.14.100:8080/api/v1/proxy/namespaces/kube-system/services/heapster/metrics and http://192.168.14.100:8080/metrics return anything.
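For reference, Heapster serves its model API under its own /api/v1/model prefix, so through the apiserver proxy shown by kubectl cluster-info the queries would look roughly like this (a sketch; service and namespace names assume the stock kube-system deployment):

```shell
# Namespaces Heapster knows about, via the apiserver proxy:
curl http://192.168.14.100:8080/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/namespaces

# Pods Heapster has metrics for in the default namespace:
curl http://192.168.14.100:8080/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/namespaces/default/pods
```

If these return all the expected namespaces, Heapster itself is fine and the gap is in how the Grafana dashboard queries InfluxDB.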

@nicolasbelanger

+1 for this one.

I have a Kubernetes 1.3.7 cluster (using kops) on which I run heapster 1.1.0 and I have basically the same behaviour as @amalkasubasinghe and @wangweihong ... Can't see my 'custom' namespaces...

@piosz
Contributor

piosz commented Sep 23, 2016

First, could you please try using the Kubelet Summary API as the source?

--source=kubernetes.summary_api:''

@nicolasbelanger

That's what I did, actually; here is my deployment manifest...

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: null
  generation: 1
  labels:
    k8s-addon: monitoring-standalone.addons.k8s.io
    k8s-app: heapster
    version: v1.1.0
  name: heapster
  selfLink: /apis/extensions/v1beta1/namespaces//deployments/heapster
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: heapster
      version: v1.1.0
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: heapster
        version: v1.1.0
    spec:
      containers:
      - command:
        - /heapster
        - --source=kubernetes.summary_api:''
        - --sink=influxdb:http://monitoring-influxdb:8086
        - --v=5
        image: gcr.io/google_containers/heapster:v1.1.0
        imagePullPolicy: IfNotPresent
        name: heapster
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 300Mi
        terminationMessagePath: /dev/termination-log
      - command:
        - /pod_nanny
        - --cpu=80m
        - --extra-cpu=0.5m
        - --memory=140Mi
        - --extra-memory=4Mi
        - --threshold=5
        - --deployment=heapster-v1.1.0
        - --container=heapster
        - --poll-period=300000
        - --estimator=exponential
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: gcr.io/google_containers/addon-resizer:1.3
        imagePullPolicy: IfNotPresent
        name: heapster-nanny
        resources:
          limits:
            cpu: 50m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 100Mi
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}

@guiocavalcanti

I'm having the same problem using Kubernetes 1.8.7 on CoreOS.

@nicolasbelanger

Guys, it looks like it's only a matter of populating the namespace drop-down in the UI. You can type the namespace you want instead of picking from the pre-defined "default" and "kube-system", then search for the pod, and the data comes up. It would also be interesting to have the pod list filtered by namespace (worse, the pod list currently shows all pods, but no data comes up if the selected namespace does not match the namespace the pod is actually in).

@ricardovwag

ricardovwag commented Nov 5, 2016

Hello,

I noticed that the default dashboard has the refresh of the namespace dropdown query set to false/never, so it doesn't load the full namespace list or pick up new namespaces.

Have you checked whether this is also the source of your problem?

This can be solved by setting it to true via:

  • the dashboard UI itself - edit the dashboard template and then edit the namespace query
  • the dashboard file - search for "refresh": "false" under the dashboard template

However, I do have a question: is the namespace dropdown technically necessary? It doesn't help much, since the user has to manually match the namespace with the pod.

It would make more sense, in my opinion, to have one dashboard just for namespaces and another for pods. We will be trying this custom dashboard for our cluster.
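In dashboard JSON terms, the setting described above lives in the templating section. A sketch of what the namespace variable might look like after the fix (field names follow Grafana's dashboard JSON format; the datasource name and query text are illustrative, and "refresh": 1 corresponds to "On Dashboard Load" in the UI):

```json
{
  "templating": {
    "list": [
      {
        "name": "namespace",
        "type": "query",
        "datasource": "influxdb-datasource",
        "query": "SHOW TAG VALUES WITH KEY = \"namespace_name\"",
        "refresh": 1
      }
    ]
  }
}
```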

@piosz
Contributor

piosz commented Nov 7, 2016

@bryk for dashboard related questions

@Vernlium

@nicolasbelanger it works!

[screenshot: namespace text box]

Put your namespace in here.

@nicolasbelanger

@Vernlium It's funny, because I only found the fix/issue lately. For some reason the default datasource, in this case influxdb, is not selected in the variable definition. As soon as I set it, boom, all the namespaces are populated in the drop-down.

[screenshot: variable definition with the influxdb datasource selected]

@bg1szd

bg1szd commented Aug 17, 2017

@nicolasbelanger Thanks for your workaround. I hit the same issue, and with your workaround the non-default namespaces are now listed.

@abonillabeeche

As @nicolasbelanger said: edit the template, change the datasource to influxdb-datasource AND change the refresh option as well, or the namespaces won't show up. Then save.

@rootsongjc

@nicolasbelanger Thank you, it works! Set the grafana dashboard template datasource to influxdb-datasource and refresh to "On Dashboard Load", save the template, and refresh the browser. You will see the other namespaces.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 6, 2018