Describe the bug
![image](https://user-images.githubusercontent.com/26060287/118085262-b41b1d00-b3f4-11eb-8354-83356b1d79d4.png)
I use the prometheus receiver to scrape node_exporter metrics and expose them to a native Prometheus server through the prometheus exporter. The CPU usage shown on the node_exporter dashboard is inaccurate and differs significantly from the values obtained by scraping node_exporter directly with Prometheus.
In the graph above, the flat line comes from native Prometheus and the heavily jittering line comes from otelcol.
My environment is stable and the issue reproduces consistently.
Steps to reproduce
Use the prometheus receiver to scrape node_exporter metrics and expose them to a native Prometheus server through the prometheus exporter, then compare the CPU usage panels against a Prometheus instance that scrapes node_exporter directly.
What did you expect to see?
Correct CPU usage, matching the values reported when Prometheus scrapes node_exporter directly.
What did you see instead?
Incorrect CPU usage with severe jitter.
What version did you use?
Version: 0.26.0
What config did you use?
otelcol config:
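The otelcol configuration was not captured in this copy of the issue. As an illustrative assumption only, a minimal pipeline matching the description (prometheus receiver scraping node_exporter, prometheus exporter re-exposing the metrics) might look like the sketch below; the job name, scrape interval, and endpoints are placeholders, not the reporter's actual values:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        # Assumed job name and target; node_exporter's default port is 9100.
        - job_name: 'node'
          scrape_interval: 15s
          static_configs:
            - targets: ['localhost:9100']

exporters:
  prometheus:
    # Assumed endpoint where otelcol re-exposes the scraped metrics.
    endpoint: '0.0.0.0:8889'

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheus]
```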
native prometheus config:
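Likewise, the native Prometheus configuration was not captured. A sketch of a scrape config that queries both node_exporter directly and the otelcol prometheus exporter (endpoints assumed, matching the sketch above) could look like:

```yaml
scrape_configs:
  # Assumed direct scrape of node_exporter (the "flat" series).
  - job_name: 'node'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9100']
  # Assumed scrape of the otelcol prometheus exporter (the "jittering" series).
  - job_name: 'otelcol'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:8889']
```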
Environment
OS: CentOS Linux release 7.4.1708 (Core)