
Inconsistent Metric Descriptions Between dockerstatsreceiver and podmanstatsreceiver Causing prometheusexporter Errors #35829

Open
bowling233 opened this issue Oct 16, 2024 · 1 comment
Labels: bug, needs triage, receiver/podman

Comments

@bowling233

Component(s)

receiver/podman

What happened?

I am using dockerstatsreceiver and podmanstatsreceiver simultaneously, but some of the metric descriptions are inconsistent between the two receivers. Since both emit metrics under the same names, the prometheusexporter reports errors for the conflicting help strings whenever the endpoint is scraped (see the log output for details).

The differences can be observed by comparing the metric descriptions documented for each receiver; a minimal reproduction of the failure mechanism is sketched below.
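
Under the hood, prometheusexporter serves all metrics from a single client_golang registry, and that registry requires every metric sharing a name to share one help string. The following standalone sketch reproduces the gather-time error outside the collector; the collector type and sample values are hypothetical, while the metric name and help strings are taken from the log below.

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// conflictingCollector stands in for two receivers that document the
// same metric name with different help strings.
type conflictingCollector struct{}

// Sending no Desc from Describe makes this an "unchecked" collector, so
// client_golang defers its consistency checks to gather time, which is
// the same point at which prometheusexporter reports them.
func (conflictingCollector) Describe(chan<- *prometheus.Desc) {}

func (conflictingCollector) Collect(ch chan<- prometheus.Metric) {
	ch <- prometheus.MustNewConstMetric(
		prometheus.NewDesc("container_memory_usage_total_bytes",
			"Memory usage of the container.", nil, nil),
		prometheus.GaugeValue, 1)
	ch <- prometheus.MustNewConstMetric(
		prometheus.NewDesc("container_memory_usage_total_bytes",
			"Memory usage of the container. This excludes the cache.", nil, nil),
		prometheus.GaugeValue, 2)
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(conflictingCollector{})
	_, err := reg.Gather()
	fmt.Println(err)
	// Output (abridged): collected metric container_memory_usage_total_bytes
	// has help "Memory usage of the container. This excludes the cache."
	// but should have "Memory usage of the container."
}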

Collector version

0.111.0

Environment information

Environment

OS: Debian 12

OpenTelemetry Collector configuration

receivers:
  docker_stats:
  podman_stats:

processors:
  # The pipeline below references these processors, but their definitions
  # were missing from the report; the settings shown here are assumed
  # minimal values, not the reporter's actual configuration.
  memory_limiter:
    check_interval: 1s
    limit_mib: 512
  batch:
  resourcedetection:
    detectors: [system]

exporters:
  prometheus:
    endpoint: localhost:9090
    send_timestamps: true
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers: [podman_stats, docker_stats]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [prometheus]
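
Until the two receivers' metadata is aligned upstream, the descriptions can be normalized inside the pipeline. Below is a minimal sketch using the transform processor; this is a workaround of my own, not part of the report, and it covers only the two metrics visible in the log excerpt (the remaining conflicts would need equivalent statements). The processor would also have to be added to the metrics pipeline ahead of the exporter.

processors:
  transform/align_descriptions:
    error_mode: ignore
    metric_statements:
      - context: metric
        statements:
          # Pin a single help string per metric name so the Prometheus
          # registry sees no conflict; the wording used here is one of the
          # two variants from the log.
          - set(description, "Memory usage of the container. This excludes the cache.") where name == "container.memory.usage.total"
          - set(description, "Bytes sent by the container.") where name == "container.network.io.usage.tx_bytes"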

Log output

2024-10-16T10:15:58.991Z   error   [email protected]/log.go:23   error gathering metrics: 12 error(s) occurred:
* collected metric container_memory_usage_total_bytes ... has help "Memory usage of the container." but should have "Memory usage of the container. This excludes the cache."
* collected metric container_network_io_usage_tx_bytes_total ... has help "Bytes sent." but should have "Bytes sent by the container."
......
   {"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*promLogger).Println
   github.com/open-telemetry/opentelemetry-collector-contrib/exporter/[email protected]/log.go:23
github.com/prometheus/client_golang/prometheus/promhttp.HandlerForTransactional.func1
   github.com/prometheus/client_golang@v…/prometheus/promhttp/http.go:169
net/http.HandlerFunc.ServeHTTP
   net/http/server.go:2220
net/http.(*ServeMux).ServeHTTP
   net/http/server.go:2747
go.opentelemetry.io/collector/config/confighttp.(*decompressor).ServeHTTP
   go.opentelemetry.io/collector/config/[email protected]/compression.go:168
go.opentelemetry.io/collector/config/confighttp.(*ServerConfig).ToServer.maxRequestBodySizeInterceptor.func2
   go.opentelemetry.io/collector/config/[email protected]/confighttp.go:553
net/http.HandlerFunc.ServeHTTP
   net/http/server.go:2220
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*middleware).serveHTTP
   go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@v…/handler.go:177
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.NewMiddleware.func1.1
   go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@v…/handler.go:65
net/http.HandlerFunc.ServeHTTP
   net/http/server.go:2220
go.opentelemetry.io/collector/config/confighttp.(*clientInfoHandler).ServeHTTP
   go.opentelemetry.io/collector/config/[email protected]/clientinfohandler.go:26
net/http.serverHandler.ServeHTTP
   net/http/server.go:3210
net/http.(*conn).serve
   net/http/server.go:2092

Additional context

No response

bowling233 added the bug and needs triage labels on Oct 16, 2024
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.
