Hi, I'm using stackdriver-exporter v0.13.0, and only recently somebody noticed that https/backend_latencies has values in all buckets up to 4.410119471141699e+09, for example.

When I check the GCP Metrics Explorer, the backend latency doesn't show these buckets, so I guess something is broken between GCP Cloud Monitoring metrics and stackdriver-exporter.
I pulled the Cloud Monitoring metric directly and sampled the raw distribution values.
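Roughly, this is how the raw time series can be pulled with the Go Cloud Monitoring client (a minimal sketch: the project ID, metric-type filter, and time window below are placeholders rather than my exact query):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	monitoring "cloud.google.com/go/monitoring/apiv3/v2"
	"cloud.google.com/go/monitoring/apiv3/v2/monitoringpb"
	"google.golang.org/api/iterator"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	ctx := context.Background()
	client, err := monitoring.NewMetricClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	now := time.Now()
	it := client.ListTimeSeries(ctx, &monitoringpb.ListTimeSeriesRequest{
		Name:   "projects/my-project", // placeholder project ID
		Filter: `metric.type = "loadbalancing.googleapis.com/https/backend_latencies"`,
		Interval: &monitoringpb.TimeInterval{
			StartTime: timestamppb.New(now.Add(-10 * time.Minute)),
			EndTime:   timestamppb.New(now),
		},
		View: monitoringpb.ListTimeSeriesRequest_FULL,
	})
	for {
		ts, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range ts.GetPoints() {
			d := p.GetValue().GetDistributionValue()
			// bucket_counts is typically shorter than num_finite_buckets + 2;
			// trailing buckets are implicitly zero in the API response.
			fmt.Println(d.GetBucketOptions().GetExponentialBuckets(), d.GetBucketCounts())
		}
	}
}
```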
I cross-checked the documentation at https://cloud.google.com/monitoring/api/ref_v3/rest/v3/TypedValue#distribution and those buckets look fine there; the populated counts sit in the low range of num_finite_buckets.
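For context on where a key like 4.410119471141699e+09 comes from at all: per that Distribution reference, exponential bucket i has an upper bound of scale * growth_factor^i, so very large bucket keys are expected; the surprising part is that they carry counts. A tiny illustration, with made-up scale/growth values rather than the real backend_latencies bucket options:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Exponential bucket i has upper bound scale * growth_factor^i.
	// These scale/growth values are made up for illustration only.
	scale, growth := 1.0, 2.0
	for _, i := range []int{0, 10, 20, 32} {
		fmt.Printf("bucket %d upper bound: %g\n", i, scale*math.Pow(growth, float64(i)))
	}
	// bucket 32 upper bound: 4.294967296e+09, the same order of magnitude
	// as the 4.41e+09 bucket key seen in the exporter output.
}
```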
This leads to these lines in stackdriver-exporter:

stackdriver_exporter/collectors/monitoring_collector.go, lines 546 to 553 in be2625d
I suspect the `last` value from the previous calculation is now assigned to the rest of the buckets instead of 0, so I printed the buckets just to see their values and confirm the theory.
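To make that concrete, here is a simplified sketch of the cumulative-bucket pattern I mean (my own illustration, not the exporter's exact code; generateBuckets, bucketKeys, and bucketCounts are placeholder names):

```go
package main

import "fmt"

// generateBuckets illustrates how cumulative histogram buckets get filled:
// every bucket key receives the running total `last`, and keys beyond
// len(bucketCounts) reuse the final value of `last` rather than 0. That is
// correct for a cumulative histogram, but if `last` ever carried a value
// over from a previous calculation, all trailing buckets would inherit it.
func generateBuckets(bucketKeys []float64, bucketCounts []int64) map[float64]uint64 {
	buckets := map[float64]uint64{}
	var last uint64
	for i, key := range bucketKeys {
		if i < len(bucketCounts) {
			last += uint64(bucketCounts[i])
		}
		buckets[key] = last
	}
	return buckets
}

func main() {
	// Counts only cover the first few of many bucket keys, as in the
	// sampled backend_latencies distribution above.
	keys := []float64{1, 2, 4, 8, 16, 32}
	counts := []int64{3, 5, 1}
	for key, cumulative := range generateBuckets(keys, counts) {
		fmt.Printf("le=%g cumulative=%d\n", key, cumulative)
	}
}
```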
From our long-term TSDB (Thanos), the first data point that has this large bucket dates back to 2023-02-13 22:30:00 UTC, so I'm not sure what caused this issue with the bucket distribution.
Does anybody have the same issue? And is my suspicion correct?