Component(s)
exporter/googlemanagedprometheus
What happened?
Description
I am getting this error:
One or more TimeSeries could not be written: timeSeries[54]: Value type for metric prometheus.googleapis.com/system_network_errors_total/counter conflicts with the existing value type (INT64).; timeSeries[53]: Value type for metric prometheus.googleapis.com/system_network_errors_total/counter conflicts with the existing value type (INT64).
Steps to Reproduce
I am using the Python OTEL instrumentation to generate this data. Here is the detailed debug output from the OTEL Collector:
[screenshot: detailed debug exporter output from the Collector for the system.network.errors datapoints]
Now, it seems to me that the data coming from the Python instrumentation contains a float64 value, and I am guessing that in Google Managed Prometheus the metric is already defined as Int64. Which is fine, because I figure I could just cast the float to an int and drop the decimals; this is a counter, so the decimals are irrelevant.
So here is what I tried: the transform/metrics processor in the configuration below, with set(sum, value_int) on the system.network.errors datapoints.
I don't think I understand OTTL and the OTEL Collector configuration well enough to implement this, and I couldn't find anything specific to what I want in the examples. Any assistance would be greatly appreciated.
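For what it is worth, this is roughly the kind of cast I was hoping to express. It is only a sketch of my best guess: the processor name transform/cast-network-errors is made up, and I am assuming (but have not verified) that value_double and value_int are valid paths in the datapoint context and that Int() is the right converter to truncate the decimals.

processors:
  transform/cast-network-errors:
    error_mode: ignore
    metric_statements:
      - context: datapoint
        conditions:
          - metric.name == "system.network.errors"
        statements:
          # assumes these datapoints always arrive as doubles, as the debug
          # output suggests; Int() should drop the fractional part
          - set(value_int, Int(value_double))

If something like this is valid, I assume it would replace my set(sum, value_int) attempt in the metrics pipeline below, but I have not confirmed that either.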
Expected Result
I would be happy with a way to cast the floating-point value to an integer value for each of the system.network.errors metric datapoints.
Actual Result
The export to Google Managed Prometheus keeps failing with the value type conflict error shown above. Please HELP!!!!!
Collector version
v0.112.0
Environment information
Environment
OS: Google Container OS and docker
OpenTelemetry Collector configuration
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  prometheus/scraper:
    config:
      global:
        scrape_interval: 30s
      scrape_configs:
        - job_name: metric-exporter
          scheme: https
          static_configs:
            - targets:
                - ${env:SOME_METRIC_EXPORTER}
  # prometheus/self-metrics:
  #   config:
  #     scrape_configs:
  #       - job_name: otel-self-metrics
  #         scrape_interval: 1m
  #         static_configs:
  #           - targets:
  #               - localhost:8888
  statsd:
    enable_metric_type: true
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "distribution"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "timing"
        observer_type: "summary"
processors:
  batch:
  memory_limiter:
    # drop metrics if memory usage gets too high
    check_interval: 10s
    limit_percentage: 95
    spike_limit_percentage: 20
  # automatically detect Cloud Run resource metadata
  resourcedetection:
    detectors:
      - env
      - gcp
  filter/self-metrics:
    error_mode: ignore
    metrics:
      include:
        match_type: strict
        metric_names:
          - otelcol_process_uptime
          - otelcol_process_memory_rss
          - otelcol_grpc_io_client_completed_rpcs
          - otelcol_googlecloudmonitoring_point_count
  filter/drop-metrics:
    error_mode: ignore
    metrics:
      metric:
        - name == "http.server.response.size"
        - name == "http.server.request.size"
        - name == "http.client.response.size"
        - name == "http.client.request.size"
        - name == "process.open_file_descriptor.count"
        - name == "process.runtime.cpython.cpu.utilization"
        - name == "process.runtime.cpython.thread_count"
        - name == "process.runtime.cpython.memory"
        - name == "http.client.duration"
  transform/metrics:
    error_mode: ignore
    metric_statements:
      - context: metric
        conditions:
          - resource.attributes["cloud.region"] == nil
        statements:
          - set(resource.attributes["cloud.region"], "${env:GCP_CLOUD_REGION}")
      - context: datapoint
        conditions:
          - metric.name == "system.network.errors"
        statements:
          - set(sum, value_int)
exporters:
  googlemanagedprometheus: # Note: this is intentionally left blank
  googlecloud: # Note: this is intentionally left blank
  debug:
    verbosity: detailed
extensions:
  health_check:
    endpoint: "0.0.0.0:${env:OTEL_HEALTHCHECK_PORT}"
service:
  extensions:
    - health_check
  telemetry:
    logs:
      level: debug
  pipelines:
    traces:
      receivers:
        - otlp
      processors:
        - memory_limiter
        - batch
      exporters:
        - googlecloud
    metrics:
      receivers:
        - otlp
        - prometheus/scraper
        - statsd
      processors:
        - memory_limiter
        - filter/drop-metrics
        - transform/metrics
        - batch
      exporters:
        - googlemanagedprometheus
        # leave this debug exporter commented out so it is
        # easier to enable and I don't forget about it
        - debug
    # metrics/self-metrics:
    #   exporters:
    #     - googlemanagedprometheus
    #   processors:
    #     - filter/self-metrics
    #     - memory_limiter
    #     - resourcedetection
    #     - batch
    #   receivers:
    #     - prometheus/self-metrics
Log output
No response
Additional context
No response