googlecloudmonitoringreceiver: DataPoints are out of order #35780

Open
andrewegel opened this issue Oct 14, 2024 · 4 comments
andrewegel commented Oct 14, 2024

Component(s)

receiver/googlecloudmonitoring

What happened?

Description

The Google API this receiver calls (https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list) states:

The points in each time series are currently returned in reverse time order (most recent to oldest).

(So newest -> oldest)

I believe this causes an issue with downstream processors such as the deltatocumulative processor, which expects datapoints in oldest -> newest order. Case in point: when I ran the OTel configuration included in this issue, the collector's internal telemetry showed otelcol_deltatocumulative_datapoints_dropped incrementing:

% curl -sq http://127.0.0.1:8888/metrics  | grep otelcol_deltatocumulative_datapoints_dropped
# HELP otelcol_deltatocumulative_datapoints_dropped number of datapoints dropped due to given 'reason'
# TYPE otelcol_deltatocumulative_datapoints_dropped counter
otelcol_deltatocumulative_datapoints_dropped{reason="older-start",service_instance_id="07d9c375-40fc-4576-a808-3d4da3deda7e",service_name="collector",service_version="development"} 4
otelcol_deltatocumulative_datapoints_dropped{reason="out-of-order",service_instance_id="07d9c375-40fc-4576-a808-3d4da3deda7e",service_name="collector",service_version="development"} 8

This also caused an issue further downstream in our metrics store (AMP), where the PromQL query rate(router_googleapis_com_nat_sent_bytes_count[$__rate_interval]) computed outright incorrect values, off by roughly a factor of 4 (about the number of datapoints that were dropped between scrapes).
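
If the receiver flipped the ordering before converting the API response into pdata, the downstream processors would see points in the expected order. A minimal sketch of what that could look like, assuming the receiver iterates monitoringpb.TimeSeries.Points directly (the surrounding conversion code is omitted):

import (
	"slices"

	"cloud.google.com/go/monitoring/apiv3/v2/monitoringpb"
)

// reversePoints flips the points of a ListTimeSeries response in place.
// The API documents that points come back most recent first, so after
// this call they are ordered oldest -> newest, which is what processors
// such as deltatocumulative expect.
func reversePoints(ts *monitoringpb.TimeSeries) {
	slices.Reverse(ts.Points)
}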

Steps to Reproduce

Use the sample config below and you will see that the debug exporter outputs metrics only for the last timestamp scraped. If you turn off the deltatocumulative processor, you will see datapoints exported like so (newest -> oldest):

NumberDataPoints #0
StartTimestamp: 2024-10-14 19:19:00 +0000 UTC
Timestamp: 2024-10-14 19:20:00 +0000 UTC
Value: 5163934
NumberDataPoints #1
StartTimestamp: 2024-10-14 19:18:00 +0000 UTC
Timestamp: 2024-10-14 19:19:00 +0000 UTC
Value: 5348459
NumberDataPoints #2
StartTimestamp: 2024-10-14 19:17:00 +0000 UTC
Timestamp: 2024-10-14 19:18:00 +0000 UTC
Value: 5623164
NumberDataPoints #3
StartTimestamp: 2024-10-14 19:16:00 +0000 UTC
Timestamp: 2024-10-14 19:17:00 +0000 UTC
Value: 5159819
NumberDataPoints #4
StartTimestamp: 2024-10-14 19:15:00 +0000 UTC
Timestamp: 2024-10-14 19:16:00 +0000 UTC
Value: 5729918
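
A quick way to confirm the ordering programmatically is to walk the exported datapoints and check that timestamps never decrease. A rough sketch against the pdata API (how you obtain the pmetric.NumberDataPointSlice from the scrape is up to you):

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

// checkAscending reports whether datapoints are in oldest -> newest order.
// With the current receiver behavior this returns false, because the
// Google API hands back points newest-first.
func checkAscending(dps pmetric.NumberDataPointSlice) bool {
	for i := 1; i < dps.Len(); i++ {
		if dps.At(i).Timestamp() < dps.At(i-1).Timestamp() {
			fmt.Printf("datapoint %d is older than datapoint %d\n", i, i-1)
			return false
		}
	}
	return true
}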

Expected Result

The same datapoints as above, but ordered from oldest -> newest.

Actual Result

Collector version

2295a09

Environment information

OpenTelemetry Collector configuration

exporters:
  debug:
    verbosity: detailed
    sampling_initial: 0
    sampling_thereafter: 1
receivers:
  googlecloudmonitoring:
    collection_interval: 5m
    project_id: <redacted>
    metrics_list:
    - metric_name: router.googleapis.com/nat/sent_bytes_count
processors:
  deltatocumulative:
    max_stale: 20m
service:
  telemetry:
    logs:
      level: DEBUG
      development: true
      encoding: console
      sampling:
        enabled: false
  pipelines:
    metrics:
      receivers:
      - googlecloudmonitoring
      processors:
      - deltatocumulative
      exporters:
      - debug

Log output

No response

Additional context

No response

andrewegel added the bug (Something isn't working) and needs triage (New item requiring triage) labels on Oct 14, 2024
github-actions bot (Contributor) commented:
Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot commented Jan 8, 2025

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label on Jan 8, 2025
andrewegel (Author) commented:

@dashpole @TylerHelmuth @abhishek-at-cloudwerx: Do any of you think this is a real bug?

dashpole (Contributor) commented:

This does sound like a real issue to me.

dashpole removed the Stale label on Jan 30, 2025