diff --git a/CHANGELOG.md b/CHANGELOG.md
index a2a1dfa1aa87..7715824803ed 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,9 @@ internal API changes are not present.
 Main (unreleased)
 -----------------
 
+v0.37.0-rc.0 (2023-10-05)
+-------------------------
+
 ### Breaking changes
 
 - Set `retry_on_http_429` to `true` by default in the `queue_config` block in static mode's `remote_write`. (@wildum)
@@ -26,16 +29,13 @@ Main (unreleased)
   got replaced by the pair of `__meta_component_name` and `__meta_component_id`
   labels. (@tpaschalis)
 
+- Flow: Allow `prometheus.exporter.unix` to be specified multiple times and used in modules. As a result, every
+  `prometheus.exporter.unix` reference now requires a block label, for example `prometheus.exporter.unix "example"`. (@mattdurham)
+
 ### Features
 
 - New Grafana Agent Flow components:
 
-  - `otelcol.connector.spanlogs` creates logs from spans. It is the flow mode equivalent
-    to static mode's `automatic_logging` processor. (@ptodev)
-  - `otelcol.connector.servicegraph` creates service graph metrics from spans. It is the
-    flow mode equivalent to static mode's `service_graphs` processor. (@ptodev)
-  - `otelcol.processor.k8sattributes` adds Kubernetes metadata as resource attributes
-    to spans, logs, and metrics. (@acr92)
   - `discovery.consulagent` discovers scrape targets from Consul Agent. (@wildum)
   - `discovery.kuma` discovers scrape targets from the Kuma control plane. (@tpaschalis)
   - `discovery.linode` discovers scrape targets from the Linode API. (@captncraig)
@@ -46,13 +46,25 @@ Main (unreleased)
   - `discovery.serverset` discovers Serversets stored in Zookeeper. (@thampiotr)
   - `discovery.scaleway` discovers scrape targets from Scaleway virtual
     instances and bare-metal machines. (@rfratto)
+  - `faro.receiver` accepts Grafana Faro-formatted telemetry data over the
+    network and forwards it to other components. (@megumish, @rfratto)
   - `prometheus.exporter.azure` collects metrics from Azure. (@wildum)
   - `discovery.dockerswarm` discovers scrape targets from Docker Swarm. (@wildum)
+  - `otelcol.connector.servicegraph` creates service graph metrics from spans. It is the
+    flow mode equivalent to static mode's `service_graphs` processor. (@ptodev)
+  - `otelcol.connector.spanlogs` creates logs from spans. It is the flow mode equivalent
+    to static mode's `automatic_logging` processor. (@ptodev)
+  - `otelcol.processor.k8sattributes` adds Kubernetes metadata as resource attributes
+    to spans, logs, and metrics. (@acr92)
   - `otelcol.processor.probabilistic_sampler` samples logs and traces based on configuration options. (@mar4uk)
+  - `otelcol.processor.transform` transforms OTLP telemetry data using the
+    OpenTelemetry Transformation Language (OTTL). It is most commonly used
+    for transformations on attributes.
   - `remote.kubernetes.configmap` loads a configmap's data for use in other components (@captncraig)
   - `remote.kubernetes.secret` loads a secret's data for use in other components (@captncraig)
-  - `prometheus.exporter.agent` - scrape agent's metrics. (@hainenber)
-  - `prometheus.exporter.vsphere` - scrape vmware vsphere metrics. (@marctc)
+  - `prometheus.exporter.agent` exposes the agent's internal metrics. (@hainenber)
+  - `prometheus.exporter.cadvisor` exposes cAdvisor metrics. (@tpaschalis)
+  - `prometheus.exporter.vsphere` exposes VMware vSphere metrics. (@marctc)
 
 - Flow: allow the HTTP server to be configured with TLS in the config file using the new `http` config block.
  (@rfratto)
@@ -76,9 +89,13 @@ Main (unreleased)
   This will load all River files in the directory as a single configuration;
   component names must be unique across all loaded files. (@rfratto, @hainenber)
 
+- Added support for `static` configuration conversion in `grafana-agent convert` and `grafana-agent run` commands. (@erikbaranowski)
+
 - Flow: the `prometheus.scrape` component can now configure the scraping of
   Prometheus native histograms. (@tpaschalis)
 
+- Flow: the `prometheus.remote_write` component now supports SigV4 and AzureAD authentication. (@ptodev)
+
 ### Enhancements
 
 - Clustering: allow advertise interfaces to be configurable, with the possibility
   to select all available interfaces. (@wildum)
@@ -99,7 +116,7 @@ Main (unreleased)
 
 - Flow: add `randomization_factor` and `multiplier` to retry settings in
   `otelcol` components. (@rfratto)
- 
+
 - Add support for `windows_certificate_filter` under http tls config block. (@mattdurham)
 
 - Add `openstack` config converter to convert OpenStack yaml config (static mode) to river config (flow mode). (@wildum)
@@ -127,6 +144,22 @@ Main (unreleased)
 - Promtail converter will now treat `global positions configuration is not supported`
   as a Warning instead of Error. (@erikbaranowski)
 
+- Add new `agent_component_dependencies_wait_seconds` histogram metric and a dashboard panel
+  that measures how long components wait to be evaluated after a dependency is updated (@thampiotr)
+
+- Add additional endpoint to debug scrape configs generated inside `prometheus.operator.*` components (@captncraig)
+
+- Component evaluation is now performed in parallel, reducing the impact of
+  slow components that could otherwise block the entire telemetry pipeline.
+  The `agent_component_evaluation_seconds` metric now measures the evaluation time
+  of each node separately, instead of including all directly and indirectly
+  dependent nodes. (@thampiotr)
+
+- Update Prometheus dependency to v2.46.0. (@tpaschalis)
+
+- The `client_secret` config argument in the `otelcol.auth.oauth2` component is
+  now of type `secret` instead of type `string`. (@ptodev)
+
 ### Bugfixes
 
 - Fixed `otelcol.exporter.prometheus` label names for the `otel_scope_info`
@@ -142,6 +175,8 @@ Main (unreleased)
 - Fixed a bug where the BackOffLimit for the kubernetes tailer was always
   set to zero. (@anderssonw)
 
+- Fixed a bug where the Flow agent failed to load a `comment` statement in an `argument` block. (@hainenber)
+
 ### Other changes
 
 - Use Go 1.21.1 for builds. (@rfratto)
@@ -161,7 +196,7 @@ Main (unreleased)
 - Documentation updated to link discovery.http and prometheus.scrape advanced configs (@proffalken)
 
-- Bump SNMP exporter version to v0.23 (@marctc)
+- Bump SNMP exporter version to v0.24.1 (@marctc)
 
 - Switch to `IBM/sarama` module. (@hainenber)
 
@@ -171,6 +206,11 @@ Main (unreleased)
 - Migrate `Check Linux/Windows build image` to GitHub Actions. (@hainenber)
 
+- Documentation updated to correct the default path in the `prometheus.exporter.windows` `text_file` block (@timo1707)
+
+- Bump `redis_exporter` to v1.54.0 (@spartan0x117)
+
+
 v0.36.2 (2023-09-22)
 --------------------
diff --git a/cmd/internal/flowmode/cmd_run.go b/cmd/internal/flowmode/cmd_run.go
index a493635aca49..9b989a06dd66 100644
--- a/cmd/internal/flowmode/cmd_run.go
+++ b/cmd/internal/flowmode/cmd_run.go
@@ -120,7 +120,7 @@ depending on the nature of the reload error.
         StringVar(&r.clusterName, "cluster.name", r.clusterName, "The name of the cluster to join")
     cmd.Flags().
         BoolVar(&r.disableReporting, "disable-reporting", r.disableReporting, "Disable reporting of enabled components to Grafana.")
-    cmd.Flags().StringVar(&r.configFormat, "config.format", r.configFormat, "The format of the source file. Supported formats: 'flow', 'prometheus'.")
+    cmd.Flags().StringVar(&r.configFormat, "config.format", r.configFormat, fmt.Sprintf("The format of the source file. Supported formats: %s.", supportedFormatsList()))
     cmd.Flags().BoolVar(&r.configBypassConversionErrors, "config.bypass-conversion-errors", r.configBypassConversionErrors, "Enable bypassing errors when converting")
     return cmd
 }
diff --git a/component/all/all.go b/component/all/all.go
index b201065f3063..4d8d0fe8275d 100644
--- a/component/all/all.go
+++ b/component/all/all.go
@@ -30,6 +30,7 @@ import (
     _ "github.com/grafana/agent/component/discovery/serverset" // Import discovery.serverset
     _ "github.com/grafana/agent/component/discovery/triton"    // Import discovery.triton
     _ "github.com/grafana/agent/component/discovery/uyuni"     // Import discovery.uyuni
+    _ "github.com/grafana/agent/component/faro/receiver"       // Import faro.receiver
     _ "github.com/grafana/agent/component/local/file"          // Import local.file
     _ "github.com/grafana/agent/component/local/file_match"    // Import local.file_match
     _ "github.com/grafana/agent/component/loki/echo"           // Import loki.echo
@@ -81,6 +82,7 @@ import (
     _ "github.com/grafana/agent/component/otelcol/processor/probabilistic_sampler" // Import otelcol.processor.probabilistic_sampler
     _ "github.com/grafana/agent/component/otelcol/processor/span"                  // Import otelcol.processor.span
     _ "github.com/grafana/agent/component/otelcol/processor/tail_sampling"         // Import otelcol.processor.tail_sampling
+    _ "github.com/grafana/agent/component/otelcol/processor/transform"             // Import otelcol.processor.transform
     _ "github.com/grafana/agent/component/otelcol/receiver/jaeger"                 // Import otelcol.receiver.jaeger
     _ "github.com/grafana/agent/component/otelcol/receiver/kafka"                  // Import otelcol.receiver.kafka
     _ "github.com/grafana/agent/component/otelcol/receiver/loki"                   // Import otelcol.receiver.loki
@@ -92,6 +94,7 @@ import (
     _ "github.com/grafana/agent/component/prometheus/exporter/apache"     // Import prometheus.exporter.apache
     _ "github.com/grafana/agent/component/prometheus/exporter/azure"      // Import prometheus.exporter.azure
     _ "github.com/grafana/agent/component/prometheus/exporter/blackbox"   // Import prometheus.exporter.blackbox
+    _ "github.com/grafana/agent/component/prometheus/exporter/cadvisor"   // Import prometheus.exporter.cadvisor
     _ "github.com/grafana/agent/component/prometheus/exporter/cloudwatch" // Import prometheus.exporter.cloudwatch
     _ "github.com/grafana/agent/component/prometheus/exporter/consul"     // Import prometheus.exporter.consul
     _ "github.com/grafana/agent/component/prometheus/exporter/dnsmasq"    // Import prometheus.exporter.dnsmasq
diff --git a/component/common/loki/wal/wal.go b/component/common/loki/wal/wal.go
index 675ac52c35e2..384e98235e6d 100644
--- a/component/common/loki/wal/wal.go
+++ b/component/common/loki/wal/wal.go
@@ -37,7 +37,7 @@ type wrapper struct {
 func New(cfg Config, log log.Logger, registerer prometheus.Registerer) (WAL, error) {
     // TODO: We should fine-tune the WAL instantiated here to allow some buffering of written entries, but not written to disk
     // yet. This will account for the lack of buffering in the channel Writer exposes.
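     // NOTE: the change below tracks the Prometheus v2.46 wlog API (bumped in
     // this release), which replaced NewSize's boolean compress argument with
     // a CompressionType; wlog.CompressionNone keeps the previous uncompressed
     // behavior of the `false` argument.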
-    tsdbWAL, err := wlog.NewSize(log, registerer, cfg.Dir, wlog.DefaultSegmentSize, false)
+    tsdbWAL, err := wlog.NewSize(log, registerer, cfg.Dir, wlog.DefaultSegmentSize, wlog.CompressionNone)
     if err != nil {
         return nil, fmt.Errorf("failed to create tsdb WAL: %w", err)
     }
diff --git a/component/faro/receiver/arguments.go b/component/faro/receiver/arguments.go
new file mode 100644
index 000000000000..65fc6f29fb99
--- /dev/null
+++ b/component/faro/receiver/arguments.go
@@ -0,0 +1,93 @@
+package receiver
+
+import (
+    "time"
+
+    "github.com/alecthomas/units"
+    "github.com/grafana/agent/component/common/loki"
+    "github.com/grafana/agent/component/otelcol"
+    "github.com/grafana/river"
+    "github.com/grafana/river/rivertypes"
+)
+
+// Defaults for various arguments.
+var (
+    DefaultArguments = Arguments{
+        Server:     DefaultServerArguments,
+        SourceMaps: DefaultSourceMapsArguments,
+    }
+
+    DefaultServerArguments = ServerArguments{
+        Host:                  "127.0.0.1",
+        Port:                  12347,
+        RateLimiting:          DefaultRateLimitingArguments,
+        MaxAllowedPayloadSize: 5 * units.MiB,
+    }
+
+    DefaultRateLimitingArguments = RateLimitingArguments{
+        Enabled:   true,
+        Rate:      50,
+        BurstSize: 100,
+    }
+
+    DefaultSourceMapsArguments = SourceMapsArguments{
+        Download:            true,
+        DownloadFromOrigins: []string{"*"},
+        DownloadTimeout:     time.Second,
+    }
+)
+
+// Arguments configures the faro.receiver component.
+type Arguments struct {
+    LogLabels map[string]string `river:"extra_log_labels,attr,optional"`
+
+    Server     ServerArguments     `river:"server,block,optional"`
+    SourceMaps SourceMapsArguments `river:"sourcemaps,block,optional"`
+    Output     OutputArguments     `river:"output,block"`
+}
+
+var _ river.Defaulter = (*Arguments)(nil)
+
+// SetToDefault applies default settings.
+func (args *Arguments) SetToDefault() { *args = DefaultArguments }
+
+// ServerArguments configures the HTTP server where telemetry information will
+// be sent from Faro clients.
+type ServerArguments struct {
+    Host                  string            `river:"listen_address,attr,optional"`
+    Port                  int               `river:"listen_port,attr,optional"`
+    CORSAllowedOrigins    []string          `river:"cors_allowed_origins,attr,optional"`
+    APIKey                rivertypes.Secret `river:"api_key,attr,optional"`
+    MaxAllowedPayloadSize units.Base2Bytes  `river:"max_allowed_payload_size,attr,optional"`
+
+    RateLimiting RateLimitingArguments `river:"rate_limiting,block,optional"`
+}
+
+// RateLimitingArguments configures rate limiting for the HTTP server.
+type RateLimitingArguments struct {
+    Enabled   bool    `river:"enabled,attr,optional"`
+    Rate      float64 `river:"rate,attr,optional"`
+    BurstSize float64 `river:"burst_size,attr,optional"`
+}
+
+// SourceMapsArguments configures how faro.receiver will retrieve source
+// maps for transforming stack traces.
+type SourceMapsArguments struct {
+    Download            bool                `river:"download,attr,optional"`
+    DownloadFromOrigins []string            `river:"download_from_origins,attr,optional"`
+    DownloadTimeout     time.Duration       `river:"download_timeout,attr,optional"`
+    Locations           []LocationArguments `river:"location,block,optional"`
+}
+
+// LocationArguments specifies an individual location where source maps will be loaded.
+type LocationArguments struct {
+    Path               string `river:"path,attr"`
+    MinifiedPathPrefix string `river:"minified_path_prefix,attr"`
+}
+
+// OutputArguments configures where to send emitted logs and traces. Metrics
+// emitted by faro.receiver are instead registered with the component's
+// Prometheus registerer and exposed for scraping.
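+//
+// Each log entry is fanned out to every receiver listed in Logs, and each
+// trace payload to every consumer listed in Traces. A minimal River sketch
+// of the block (component labels are illustrative only):
+//
+//	output {
+//	    logs   = [loki.write.default.receiver]
+//	    traces = [otelcol.exporter.otlp.default.input]
+//	}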
+type OutputArguments struct {
+    Logs   []loki.LogsReceiver `river:"logs,attr,optional"`
+    Traces []otelcol.Consumer  `river:"traces,attr,optional"`
+}
diff --git a/component/faro/receiver/exporters.go b/component/faro/receiver/exporters.go
new file mode 100644
index 000000000000..731779372dde
--- /dev/null
+++ b/component/faro/receiver/exporters.go
@@ -0,0 +1,247 @@
+package receiver
+
+import (
+    "context"
+    "errors"
+    "sync"
+    "time"
+
+    "github.com/go-kit/log"
+    "github.com/go-kit/log/level"
+    "github.com/go-logfmt/logfmt"
+    "github.com/grafana/agent/component/common/loki"
+    "github.com/grafana/agent/component/faro/receiver/internal/payload"
+    "github.com/grafana/agent/component/otelcol"
+    "github.com/grafana/loki/pkg/logproto"
+    "github.com/prometheus/client_golang/prometheus"
+    "github.com/prometheus/common/model"
+)
+
+type exporter interface {
+    Name() string
+    Export(ctx context.Context, payload payload.Payload) error
+}
+
+//
+// Metrics
+//
+
+type metricsExporter struct {
+    totalLogs         prometheus.Counter
+    totalMeasurements prometheus.Counter
+    totalExceptions   prometheus.Counter
+    totalEvents       prometheus.Counter
+}
+
+var _ exporter = (*metricsExporter)(nil)
+
+func newMetricsExporter(reg prometheus.Registerer) *metricsExporter {
+    exp := &metricsExporter{
+        totalLogs: prometheus.NewCounter(prometheus.CounterOpts{
+            Name: "faro_receiver_logs_total",
+            Help: "Total number of ingested logs",
+        }),
+        totalMeasurements: prometheus.NewCounter(prometheus.CounterOpts{
+            Name: "faro_receiver_measurements_total",
+            Help: "Total number of ingested measurements",
+        }),
+        totalExceptions: prometheus.NewCounter(prometheus.CounterOpts{
+            Name: "faro_receiver_exceptions_total",
+            Help: "Total number of ingested exceptions",
+        }),
+        totalEvents: prometheus.NewCounter(prometheus.CounterOpts{
+            Name: "faro_receiver_events_total",
+            Help: "Total number of ingested events",
+        }),
+    }
+
+    reg.MustRegister(exp.totalLogs, exp.totalExceptions, exp.totalMeasurements, exp.totalEvents)
+
+    return exp
+}
+
+func (exp *metricsExporter) Name() string { return "receiver metrics exporter" }
+
+func (exp *metricsExporter) Export(ctx context.Context, p payload.Payload) error {
+    exp.totalExceptions.Add(float64(len(p.Exceptions)))
+    exp.totalLogs.Add(float64(len(p.Logs)))
+    exp.totalMeasurements.Add(float64(len(p.Measurements)))
+    exp.totalEvents.Add(float64(len(p.Events)))
+    return nil
+}
+
+//
+// Logs
+//
+
+type logsExporter struct {
+    log        log.Logger
+    sourceMaps sourceMapsStore
+
+    receiversMut sync.RWMutex
+    receivers    []loki.LogsReceiver
+
+    labelsMut sync.RWMutex
+    labels    model.LabelSet
+}
+
+var _ exporter = (*logsExporter)(nil)
+
+func newLogsExporter(log log.Logger, sourceMaps sourceMapsStore) *logsExporter {
+    return &logsExporter{
+        log:        log,
+        sourceMaps: sourceMaps,
+    }
+}
+
+// SetReceivers updates the set of logs receivers which will receive logs
+// emitted by the exporter.
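+// SetReceivers is safe to call concurrently with Export: the slice is
+// guarded by receiversMut, and each send re-reads the current value.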
+func (exp *logsExporter) SetReceivers(receivers []loki.LogsReceiver) {
+    exp.receiversMut.Lock()
+    defer exp.receiversMut.Unlock()
+
+    exp.receivers = receivers
+}
+
+func (exp *logsExporter) Name() string { return "logs exporter" }
+
+func (exp *logsExporter) Export(ctx context.Context, p payload.Payload) error {
+    meta := p.Meta.KeyVal()
+
+    var errs []error
+
+    // log events
+    for _, logItem := range p.Logs {
+        kv := logItem.KeyVal()
+        payload.MergeKeyVal(kv, meta)
+        errs = append(errs, exp.sendKeyValsToLogsPipeline(ctx, kv))
+    }
+
+    // exceptions
+    for _, exception := range p.Exceptions {
+        transformedException := transformException(exp.log, exp.sourceMaps, &exception, p.Meta.App.Release)
+        kv := transformedException.KeyVal()
+        payload.MergeKeyVal(kv, meta)
+        errs = append(errs, exp.sendKeyValsToLogsPipeline(ctx, kv))
+    }
+
+    // measurements
+    for _, measurement := range p.Measurements {
+        kv := measurement.KeyVal()
+        payload.MergeKeyVal(kv, meta)
+        errs = append(errs, exp.sendKeyValsToLogsPipeline(ctx, kv))
+    }
+
+    // events
+    for _, event := range p.Events {
+        kv := event.KeyVal()
+        payload.MergeKeyVal(kv, meta)
+        errs = append(errs, exp.sendKeyValsToLogsPipeline(ctx, kv))
+    }
+
+    return errors.Join(errs...)
+}
+
+func (exp *logsExporter) sendKeyValsToLogsPipeline(ctx context.Context, kv *payload.KeyVal) error {
+    // Grab the current value of exp.receivers so sendKeyValsToLogsPipeline
+    // doesn't block updating receivers.
+    exp.receiversMut.RLock()
+    var (
+        receivers = exp.receivers
+    )
+    exp.receiversMut.RUnlock()
+
+    line, err := logfmt.MarshalKeyvals(payload.KeyValToInterfaceSlice(kv)...)
+    if err != nil {
+        level.Error(exp.log).Log("msg", "failed to logfmt a frontend log event", "err", err)
+        return err
+    }
+
+    ent := loki.Entry{
+        Labels: exp.labelSet(),
+        Entry: logproto.Entry{
+            Timestamp: time.Now(),
+            Line:      string(line),
+        },
+    }
+
+    ctx, cancel := context.WithTimeout(ctx, 2*time.Second) // TODO(rfratto): potentially make this configurable
+    defer cancel()
+
+    for _, receiver := range receivers {
+        select {
+        case <-ctx.Done():
+            // err is nil at this point; return the context error so a
+            // timed-out send is not silently treated as success.
+            return ctx.Err()
+        case receiver.Chan() <- ent:
+            continue
+        }
+    }
+
+    return nil
+}
+
+func (exp *logsExporter) labelSet() model.LabelSet {
+    exp.labelsMut.RLock()
+    defer exp.labelsMut.RUnlock()
+    return exp.labels
+}
+
+func (exp *logsExporter) SetLabels(newLabels map[string]string) {
+    exp.labelsMut.Lock()
+    defer exp.labelsMut.Unlock()
+
+    ls := make(model.LabelSet, len(newLabels))
+    for k, v := range newLabels {
+        ls[model.LabelName(k)] = model.LabelValue(v)
+    }
+    exp.labels = ls
+}
+
+//
+// Traces
+//
+
+type tracesExporter struct {
+    log log.Logger
+
+    mut       sync.RWMutex
+    consumers []otelcol.Consumer
+}
+
+var _ exporter = (*tracesExporter)(nil)
+
+func newTracesExporter(log log.Logger) *tracesExporter {
+    return &tracesExporter{
+        log: log,
+    }
+}
+
+// SetConsumers updates the set of OTLP consumers which will receive traces
+// emitted by the exporter.
+func (exp *tracesExporter) SetConsumers(consumers []otelcol.Consumer) {
+    exp.mut.Lock()
+    defer exp.mut.Unlock()
+
+    exp.consumers = consumers
+}
+
+func (exp *tracesExporter) Name() string { return "traces exporter" }
+
+func (exp *tracesExporter) Export(ctx context.Context, p payload.Payload) error {
+    if p.Traces == nil {
+        return nil
+    }
+
+    var errs []error
+    for _, consumer := range exp.getTracesConsumers() {
+        errs = append(errs, consumer.ConsumeTraces(ctx, p.Traces.Traces))
+    }
+    return errors.Join(errs...)
+}
+
+func (exp *tracesExporter) getTracesConsumers() []otelcol.Consumer {
+    exp.mut.RLock()
+    defer exp.mut.RUnlock()
+
+    return exp.consumers
+}
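For orientation, a minimal exporter satisfying the `exporter` interface above
might look like this sketch (hypothetical, and assumes the same package
imports as exporters.go):

    // debugExporter logs payload sizes; a real exporter would forward them.
    type debugExporter struct{ log log.Logger }

    func (e *debugExporter) Name() string { return "debug exporter" }

    func (e *debugExporter) Export(ctx context.Context, p payload.Payload) error {
        level.Debug(e.log).Log("msg", "payload received",
            "logs", len(p.Logs), "exceptions", len(p.Exceptions),
            "measurements", len(p.Measurements), "events", len(p.Events))
        return nil
    }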
diff --git a/component/faro/receiver/exporters_test.go b/component/faro/receiver/exporters_test.go
new file mode 100644
index 000000000000..84acf4aa27a8
--- /dev/null
+++ b/component/faro/receiver/exporters_test.go
@@ -0,0 +1,55 @@
+package receiver
+
+import (
+    "context"
+    "strings"
+    "testing"
+
+    "github.com/grafana/agent/component/faro/receiver/internal/payload"
+    "github.com/prometheus/client_golang/prometheus"
+    promtestutil "github.com/prometheus/client_golang/prometheus/testutil"
+    "github.com/stretchr/testify/require"
+)
+
+var metricNames = []string{
+    "faro_receiver_logs_total",
+    "faro_receiver_measurements_total",
+    "faro_receiver_exceptions_total",
+    "faro_receiver_events_total",
+}
+
+func Test_metricsExporter_Export(t *testing.T) {
+    var (
+        reg = prometheus.NewRegistry()
+        exp = newMetricsExporter(reg)
+    )
+
+    expect := `
+# HELP faro_receiver_logs_total Total number of ingested logs
+# TYPE faro_receiver_logs_total counter
+faro_receiver_logs_total 2
+
+# HELP faro_receiver_measurements_total Total number of ingested measurements
+# TYPE faro_receiver_measurements_total counter
+faro_receiver_measurements_total 3
+
+# HELP faro_receiver_exceptions_total Total number of ingested exceptions
+# TYPE faro_receiver_exceptions_total counter
+faro_receiver_exceptions_total 4
+
+# HELP faro_receiver_events_total Total number of ingested events
+# TYPE faro_receiver_events_total counter
+faro_receiver_events_total 5
+`
+
+    p := payload.Payload{
+        Logs:         make([]payload.Log, 2),
+        Measurements: make([]payload.Measurement, 3),
+        Exceptions:   make([]payload.Exception, 4),
+        Events:       make([]payload.Event, 5),
+    }
+    require.NoError(t, exp.Export(context.Background(), p))
+
+    err := promtestutil.CollectAndCompare(reg, strings.NewReader(expect), metricNames...)
+    require.NoError(t, err)
+}
diff --git a/component/faro/receiver/handler.go b/component/faro/receiver/handler.go
new file mode 100644
index 000000000000..40aad51ef179
--- /dev/null
+++ b/component/faro/receiver/handler.go
@@ -0,0 +1,134 @@
+package receiver
+
+import (
+    "crypto/subtle"
+    "encoding/json"
+    "net/http"
+    "sync"
+    "time"
+
+    "github.com/go-kit/log"
+    "github.com/go-kit/log/level"
+    "github.com/grafana/agent/component/faro/receiver/internal/payload"
+    "github.com/prometheus/client_golang/prometheus"
+    "github.com/rs/cors"
+    "golang.org/x/time/rate"
+)
+
+const apiKeyHeader = "x-api-key"
+
+type handler struct {
+    log         log.Logger
+    rateLimiter *rate.Limiter
+    exporters   []exporter
+    errorsTotal *prometheus.CounterVec
+
+    argsMut sync.RWMutex
+    args    ServerArguments
+    cors    *cors.Cors
+}
+
+var _ http.Handler = (*handler)(nil)
+
+func newHandler(l log.Logger, reg prometheus.Registerer, exporters []exporter) *handler {
+    errorsTotal := prometheus.NewCounterVec(prometheus.CounterOpts{
+        Name: "faro_receiver_exporter_errors_total",
+        Help: "Total number of errors produced by a receiver exporter",
+    }, []string{"exporter"})
+    reg.MustRegister(errorsTotal)
+
+    return &handler{
+        log:         l,
+        rateLimiter: rate.NewLimiter(rate.Inf, 0),
+        exporters:   exporters,
+        errorsTotal: errorsTotal,
+    }
+}
+
+func (h *handler) Update(args ServerArguments) {
+    h.argsMut.Lock()
+    defer h.argsMut.Unlock()
+
+    h.args = args
+
+    if args.RateLimiting.Enabled {
+        // Updating the rate limit at time.Now() would start with empty token
+        // buckets. To allow requests to pass through immediately, backdate
+        // the time used to set the limit and burst so that both the normal
+        // rate and the burst capacity start filled.
+        t := time.Now().Add(-time.Duration(float64(time.Second) * args.RateLimiting.Rate * args.RateLimiting.BurstSize))
+
+        h.rateLimiter.SetLimitAt(t, rate.Limit(args.RateLimiting.Rate))
+        h.rateLimiter.SetBurstAt(t, int(args.RateLimiting.BurstSize))
+    } else {
+        // Set to infinite rate limit.
+        h.rateLimiter.SetLimit(rate.Inf)
+        h.rateLimiter.SetBurst(0) // 0 burst is ignored when using rate.Inf.
+    }
+
+    if len(args.CORSAllowedOrigins) > 0 {
+        h.cors = cors.New(cors.Options{
+            AllowedOrigins: args.CORSAllowedOrigins,
+            AllowedHeaders: []string{apiKeyHeader, "content-type"},
+        })
+    } else {
+        h.cors = nil // Disable cors.
+    }
+}
+
+func (h *handler) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
+    h.argsMut.RLock()
+    defer h.argsMut.RUnlock()
+
+    if h.cors != nil {
+        h.cors.ServeHTTP(rw, req, h.handleRequest)
+    } else {
+        h.handleRequest(rw, req)
+    }
+}
+
+func (h *handler) handleRequest(rw http.ResponseWriter, req *http.Request) {
+    if !h.rateLimiter.Allow() {
+        http.Error(rw, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
+        return
+    }
+
+    // If an API key is configured, ensure the request has a matching key.
+    if len(h.args.APIKey) > 0 {
+        apiHeader := req.Header.Get(apiKeyHeader)
+
+        if subtle.ConstantTimeCompare([]byte(apiHeader), []byte(h.args.APIKey)) != 1 {
+            http.Error(rw, "API key not provided or incorrect", http.StatusUnauthorized)
+            return
+        }
+    }
+
+    // Validate content length.
+    if h.args.MaxAllowedPayloadSize > 0 && req.ContentLength > int64(h.args.MaxAllowedPayloadSize) {
+        http.Error(rw, http.StatusText(http.StatusRequestEntityTooLarge), http.StatusRequestEntityTooLarge)
+        return
+    }
+
+    var p payload.Payload
+    if err := json.NewDecoder(req.Body).Decode(&p); err != nil {
+        http.Error(rw, err.Error(), http.StatusBadRequest)
+        return
+    }
+
+    var wg sync.WaitGroup
+    for _, exp := range h.exporters {
+        wg.Add(1)
+        go func(exp exporter) {
+            defer wg.Done()
+
+            if err := exp.Export(req.Context(), p); err != nil {
+                level.Error(h.log).Log("msg", "exporter failed with error", "exporter", exp.Name(), "err", err)
+                h.errorsTotal.WithLabelValues(exp.Name()).Inc()
+            }
+        }(exp)
+    }
+    wg.Wait()
+
+    rw.WriteHeader(http.StatusAccepted)
+    _, _ = rw.Write([]byte("ok"))
+}
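The token-bucket behavior configured above can be sanity-checked in isolation.
A minimal, runnable sketch (hypothetical values, not part of this change):

    package main

    import (
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // Mirrors handler.Update for Rate: 1, BurstSize: 2: the bucket starts
        // full, so two requests pass immediately and the third is rejected
        // until tokens refill at 1 per second.
        limiter := rate.NewLimiter(rate.Limit(1), 2)
        for i := 0; i < 3; i++ {
            fmt.Println(i, limiter.Allow()) // 0 true, 1 true, 2 false
        }
    }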
diff --git a/component/faro/receiver/handler_test.go b/component/faro/receiver/handler_test.go
new file mode 100644
index 000000000000..28bc53159795
--- /dev/null
+++ b/component/faro/receiver/handler_test.go
@@ -0,0 +1,289 @@
+package receiver
+
+import (
+    "context"
+    "errors"
+    "net/http"
+    "net/http/httptest"
+    "strings"
+    "testing"
+
+    "github.com/alecthomas/units"
+    "github.com/grafana/agent/component/faro/receiver/internal/payload"
+    "github.com/grafana/agent/pkg/util"
+    "github.com/prometheus/client_golang/prometheus"
+    "github.com/stretchr/testify/assert"
+    "github.com/stretchr/testify/require"
+)
+
+const emptyPayload = `{
+    "traces": {
+        "resourceSpans": []
+    },
+    "logs": [],
+    "exceptions": [],
+    "measurements": [],
+    "meta": {}
+}`
+
+func TestMultipleExportersAllSucceed(t *testing.T) {
+    var (
+        exporter1 = &testExporter{"exporter1", false, nil}
+        exporter2 = &testExporter{"exporter2", false, nil}
+
+        h = newHandler(
+            util.TestLogger(t),
+            prometheus.NewRegistry(),
+            []exporter{exporter1, exporter2},
+        )
+    )
+
+    req, err := http.NewRequest(http.MethodPost, "/collect", strings.NewReader(emptyPayload))
+    require.NoError(t, err)
+
+    rr := httptest.NewRecorder()
+    h.ServeHTTP(rr, req)
+
+    require.Equal(t, http.StatusAccepted, rr.Result().StatusCode)
+    require.Len(t, exporter1.payloads, 1)
+    require.Len(t, exporter2.payloads, 1)
+}
+
+func TestMultipleExportersOneFails(t *testing.T) {
+    var (
+        exporter1 = &testExporter{"exporter1", true, nil}
+        exporter2 = &testExporter{"exporter2", false, nil}
+
+        h = newHandler(
+            util.TestLogger(t),
+            prometheus.NewRegistry(),
+            []exporter{exporter1, exporter2},
+        )
+    )
+
+    req, err := http.NewRequest(http.MethodPost, "/collect", strings.NewReader(emptyPayload))
+    require.NoError(t, err)
+
+    rr := httptest.NewRecorder()
+    h.ServeHTTP(rr, req)
+
+    require.Equal(t, http.StatusAccepted, rr.Result().StatusCode)
+    require.Len(t, exporter1.payloads, 0)
+    require.Len(t, exporter2.payloads, 1)
+}
+
+func TestMultipleExportersAllFail(t *testing.T) {
+    var (
+        exporter1 = &testExporter{"exporter1", true, nil}
+        exporter2 = &testExporter{"exporter2", true, nil}
+
+        h = newHandler(
+            util.TestLogger(t),
+            prometheus.NewRegistry(),
+            []exporter{exporter1, exporter2},
+        )
+    )
+
+    req, err := http.NewRequest(http.MethodPost, "/collect", strings.NewReader(emptyPayload))
+    require.NoError(t, err)
+
+    rr := httptest.NewRecorder()
+    h.ServeHTTP(rr, req)
+
+    require.Equal(t, http.StatusAccepted, rr.Result().StatusCode)
+    require.Len(t, exporter1.payloads, 0)
+    require.Len(t, exporter2.payloads, 0)
+}
+
+func TestPayloadWithinLimit(t *testing.T) {
+    var (
+        exporter1 = &testExporter{"exporter1", false, nil}
+        exporter2 = &testExporter{"exporter2", false, nil}
+
+        h = newHandler(
+            util.TestLogger(t),
+            prometheus.NewRegistry(),
+            []exporter{exporter1, exporter2},
+        )
+    )
+
+    h.Update(ServerArguments{
+        MaxAllowedPayloadSize: units.Base2Bytes(len(emptyPayload)),
+    })
+
+    req, err := http.NewRequest(http.MethodPost, "/collect", strings.NewReader(emptyPayload))
+    require.NoError(t, err)
+
+    rr := httptest.NewRecorder()
+    h.ServeHTTP(rr, req)
+    require.Equal(t, http.StatusAccepted, rr.Result().StatusCode)
+    require.Len(t, exporter1.payloads, 1)
+    require.Len(t, exporter2.payloads, 1)
+}
+
+func TestPayloadTooLarge(t *testing.T) {
+    var (
+        exporter1 = &testExporter{"exporter1", false, nil}
+        exporter2 = &testExporter{"exporter2", false, nil}
+
+        h = newHandler(
+            util.TestLogger(t),
+            prometheus.NewRegistry(),
+            []exporter{exporter1, exporter2},
+        )
+    )
+
+    h.Update(ServerArguments{
+        MaxAllowedPayloadSize: units.Base2Bytes(len(emptyPayload) - 1),
+    })
+
+    req, err := http.NewRequest(http.MethodPost, "/collect", strings.NewReader(emptyPayload))
+    require.NoError(t, err)
+
+    rr := httptest.NewRecorder()
+    h.ServeHTTP(rr, req)
+    require.Equal(t, http.StatusRequestEntityTooLarge, rr.Result().StatusCode)
+    require.Len(t, exporter1.payloads, 0)
+    require.Len(t, exporter2.payloads, 0)
+}
+func TestMissingAPIKey(t *testing.T) {
+    var (
+        exporter1 = &testExporter{"exporter1", false, nil}
+        exporter2 = &testExporter{"exporter2", false, nil}
+
+        h = newHandler(
+            util.TestLogger(t),
+            prometheus.NewRegistry(),
+            []exporter{exporter1, exporter2},
+        )
+    )
+
+    h.Update(ServerArguments{
+        APIKey: "fakekey",
+    })
+
+    req, err := http.NewRequest(http.MethodPost, "/collect", strings.NewReader(emptyPayload))
+    require.NoError(t, err)
+
+    rr := httptest.NewRecorder()
+    h.ServeHTTP(rr, req)
+    require.Equal(t, http.StatusUnauthorized, rr.Result().StatusCode)
+    require.Len(t, exporter1.payloads, 0)
+    require.Len(t, exporter2.payloads, 0)
+}
+
+func TestInvalidAPIKey(t *testing.T) {
+    var (
+        exporter1 = &testExporter{"exporter1", false, nil}
+        exporter2 = &testExporter{"exporter2", false, nil}
+
+        h = newHandler(
+            util.TestLogger(t),
+            prometheus.NewRegistry(),
+            []exporter{exporter1, exporter2},
+        )
+    )
+
+    h.Update(ServerArguments{
+        APIKey: "fakekey",
+    })
+
+    req, err := http.NewRequest(http.MethodPost, "/collect", strings.NewReader(emptyPayload))
+    require.NoError(t, err)
+    req.Header.Set("x-api-key", "badkey")
+
+    rr := httptest.NewRecorder()
+    h.ServeHTTP(rr, req)
+    require.Equal(t, http.StatusUnauthorized, rr.Result().StatusCode)
+    require.Len(t, exporter1.payloads, 0)
+    require.Len(t, exporter2.payloads, 0)
+}
+
+func TestValidAPIKey(t *testing.T) {
+    var (
+        exporter1 = &testExporter{"exporter1", false, nil}
+        exporter2 = &testExporter{"exporter2", false, nil}
+
+        h = newHandler(
+            util.TestLogger(t),
+            prometheus.NewRegistry(),
+            []exporter{exporter1, exporter2},
+        )
+    )
+
+    h.Update(ServerArguments{
+        APIKey: "fakekey",
+    })
+
+    req, err := http.NewRequest(http.MethodPost, "/collect", strings.NewReader(emptyPayload))
+    require.NoError(t, err)
+    req.Header.Set("x-api-key", "fakekey")
+
+    rr := httptest.NewRecorder()
+    h.ServeHTTP(rr, req)
+    require.Equal(t, http.StatusAccepted, rr.Result().StatusCode)
+    require.Len(t, exporter1.payloads, 1)
+    require.Len(t, exporter2.payloads, 1)
+}
+
+func TestRateLimiter(t *testing.T) {
+    var (
+        exporter1 = &testExporter{"exporter1", false, nil}
+        exporter2 = &testExporter{"exporter2", false, nil}
+
+        h = newHandler(
+            util.TestLogger(t),
+            prometheus.NewRegistry(),
+            []exporter{exporter1, exporter2},
+        )
+    )
+
+    h.Update(ServerArguments{
+        RateLimiting: RateLimitingArguments{
+            Enabled:   true,
+            Rate:      1,
+            BurstSize: 2,
+        },
+    })
+
+    doRequest := func() *httptest.ResponseRecorder {
+        req, err := http.NewRequest(http.MethodPost, "/collect", strings.NewReader(emptyPayload))
+        require.NoError(t, err)
+
+        rr := httptest.NewRecorder()
+        h.ServeHTTP(rr, req)
+        return rr
+    }
+
+    reqs := make([]*httptest.ResponseRecorder, 5)
+    for i := range reqs {
+        reqs[i] = doRequest()
+    }
+
+    // Only 1 request is allowed per second, with a burst of 2; meaning the third
+    // request and beyond should be rejected.
+    assert.Equal(t, http.StatusAccepted, reqs[0].Result().StatusCode)
+    assert.Equal(t, http.StatusAccepted, reqs[1].Result().StatusCode)
+    assert.Equal(t, http.StatusTooManyRequests, reqs[2].Result().StatusCode)
+    assert.Equal(t, http.StatusTooManyRequests, reqs[3].Result().StatusCode)
+    assert.Equal(t, http.StatusTooManyRequests, reqs[4].Result().StatusCode)
+}
+
+type testExporter struct {
+    name     string
+    broken   bool
+    payloads []payload.Payload
+}
+
+func (te *testExporter) Name() string {
+    return te.name
+}
+
+func (te *testExporter) Export(ctx context.Context, payload payload.Payload) error {
+    if te.broken {
+        return errors.New("this exporter is broken")
+    }
+    te.payloads = append(te.payloads, payload)
+    return nil
+}
diff --git a/component/faro/receiver/internal/payload/payload.go b/component/faro/receiver/internal/payload/payload.go
new file mode 100644
index 000000000000..9bc05f8c5bba
--- /dev/null
+++ b/component/faro/receiver/internal/payload/payload.go
@@ -0,0 +1,418 @@
+package payload
+
+import (
+    "fmt"
+    "sort"
+    "strconv"
+    "strings"
+    "time"
+
+    "go.opentelemetry.io/collector/pdata/pcommon"
+    "go.opentelemetry.io/collector/pdata/ptrace"
+
+    "github.com/zeebo/xxh3"
+)
+
+// Payload is the body of the receiver request
+type Payload struct {
+    Exceptions   []Exception   `json:"exceptions,omitempty"`
+    Logs         []Log         `json:"logs,omitempty"`
+    Measurements []Measurement `json:"measurements,omitempty"`
+    Events       []Event       `json:"events,omitempty"`
+    Meta         Meta          `json:"meta,omitempty"`
+    Traces       *Traces       `json:"traces,omitempty"`
+}
+
+// Frame struct represents a single stacktrace frame
+type Frame struct {
+    Function string `json:"function,omitempty"`
+    Module   string `json:"module,omitempty"`
+    Filename string `json:"filename,omitempty"`
+    Lineno   int    `json:"lineno,omitempty"`
+    Colno    int    `json:"colno,omitempty"`
+}
+
+// String function converts a Frame into a human readable string
+func (frame Frame) String() string {
+    module := ""
+    if len(frame.Module) > 0 {
+        module = frame.Module + "|"
+    }
+    return fmt.Sprintf("\n  at %s (%s%s:%v:%v)", frame.Function, module, frame.Filename, frame.Lineno, frame.Colno)
+}
+
+// Stacktrace is a collection of Frames
+type Stacktrace struct {
+    Frames []Frame `json:"frames,omitempty"`
+}
+
+// Exception struct controls all the data regarding an exception
+type Exception struct {
+    Type       string           `json:"type,omitempty"`
+    Value      string           `json:"value,omitempty"`
+    Stacktrace *Stacktrace      `json:"stacktrace,omitempty"`
+    Timestamp  time.Time        `json:"timestamp"`
+    Trace      TraceContext     `json:"trace,omitempty"`
+    Context    ExceptionContext `json:"context,omitempty"`
+}
+
+// Message string is a concatenation of the Exception.Type and Exception.Value
+func (e Exception) Message() string {
+    return fmt.Sprintf("%s: %s", e.Type, e.Value)
+}
+
+// String is the string representation of an Exception
+func (e Exception) String() string {
+    var stacktrace = e.Message()
+    if e.Stacktrace != nil {
+        for _, frame := range e.Stacktrace.Frames {
+            stacktrace += frame.String()
+        }
+    }
+    return stacktrace
+}
+
+// KeyVal representation of the exception object
+func (e Exception) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    KeyValAdd(kv, "timestamp", e.Timestamp.String())
+    KeyValAdd(kv, "kind", "exception")
+    KeyValAdd(kv, "type", e.Type)
+    KeyValAdd(kv, "value", e.Value)
+    KeyValAdd(kv, "stacktrace", e.String())
+    KeyValAdd(kv, "hash", strconv.FormatUint(xxh3.HashString(e.Value), 10))
+    MergeKeyValWithPrefix(kv, KeyValFromMap(e.Context), "context_")
+    MergeKeyVal(kv, e.Trace.KeyVal())
+    return kv
+}
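+
+// Note: the "hash" value above is an xxh3 hash of the exception value,
+// which gives downstream consumers a cheap, stable key for grouping
+// identical client errors.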
+
+// ExceptionContext is a string to string map structure that
+// represents the context of an exception
+type ExceptionContext map[string]string
+
+// TraceContext holds trace id and span id associated to an entity (log, exception, measurement...).
+type TraceContext struct {
+    TraceID string `json:"trace_id"`
+    SpanID  string `json:"span_id"`
+}
+
+// KeyVal representation of the trace context object.
+func (tc TraceContext) KeyVal() *KeyVal {
+    retv := NewKeyVal()
+    KeyValAdd(retv, "traceID", tc.TraceID)
+    KeyValAdd(retv, "spanID", tc.SpanID)
+    return retv
+}
+
+// Traces wraps the otel traces model.
+type Traces struct {
+    ptrace.Traces
+}
+
+// UnmarshalJSON unmarshals Traces model.
+func (t *Traces) UnmarshalJSON(b []byte) error {
+    unmarshaler := &ptrace.JSONUnmarshaler{}
+    td, err := unmarshaler.UnmarshalTraces(b)
+    if err != nil {
+        return err
+    }
+    *t = Traces{td}
+    return nil
+}
+
+// MarshalJSON marshals Traces model to json.
+func (t Traces) MarshalJSON() ([]byte, error) {
+    marshaler := &ptrace.JSONMarshaler{}
+    return marshaler.MarshalTraces(t.Traces)
+}
+
+// SpanSlice unpacks Traces entity into a slice of Spans.
+func (t Traces) SpanSlice() []ptrace.Span {
+    spans := make([]ptrace.Span, 0)
+    rss := t.ResourceSpans()
+    for i := 0; i < rss.Len(); i++ {
+        rs := rss.At(i)
+        ilss := rs.ScopeSpans()
+        for j := 0; j < ilss.Len(); j++ {
+            s := ilss.At(j).Spans()
+            for si := 0; si < s.Len(); si++ {
+                spans = append(spans, s.At(si))
+            }
+        }
+    }
+    return spans
+}
+
+// SpanToKeyVal returns KeyVal representation of a Span.
+func SpanToKeyVal(s ptrace.Span) *KeyVal {
+    kv := NewKeyVal()
+    if s.StartTimestamp() > 0 {
+        KeyValAdd(kv, "timestamp", s.StartTimestamp().AsTime().String())
+    }
+    if s.EndTimestamp() > 0 {
+        KeyValAdd(kv, "end_timestamp", s.EndTimestamp().AsTime().String())
+    }
+    KeyValAdd(kv, "kind", "span")
+    KeyValAdd(kv, "traceID", s.TraceID().String())
+    KeyValAdd(kv, "spanID", s.SpanID().String())
+    KeyValAdd(kv, "span_kind", s.Kind().String())
+    KeyValAdd(kv, "name", s.Name())
+    KeyValAdd(kv, "parent_spanID", s.ParentSpanID().String())
+    s.Attributes().Range(func(k string, v pcommon.Value) bool {
+        KeyValAdd(kv, "attr_"+k, fmt.Sprintf("%v", v))
+        return true
+    })
+
+    return kv
+}
+
+// LogLevel is log level enum for incoming app logs
+type LogLevel string
+
+const (
+    // LogLevelTrace is "trace"
+    LogLevelTrace LogLevel = "trace"
+    // LogLevelDebug is "debug"
+    LogLevelDebug LogLevel = "debug"
+    // LogLevelInfo is "info"
+    LogLevelInfo LogLevel = "info"
+    // LogLevelWarning is "warning"
+    LogLevelWarning LogLevel = "warning"
+    // LogLevelError is "error"
+    LogLevelError LogLevel = "error"
+)
+
+// LogContext is a string to string map structure that
+// represents the context of a log message
+type LogContext map[string]string
+
+// Log struct controls the data that come into a Log message
+type Log struct {
+    Message   string       `json:"message,omitempty"`
+    LogLevel  LogLevel     `json:"level,omitempty"`
+    Context   LogContext   `json:"context,omitempty"`
+    Timestamp time.Time    `json:"timestamp"`
+    Trace     TraceContext `json:"trace,omitempty"`
+}
+
+// KeyVal representation of a Log object
+func (l Log) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    KeyValAdd(kv, "timestamp", l.Timestamp.String())
+    KeyValAdd(kv, "kind", "log")
+    KeyValAdd(kv, "message", l.Message)
+    KeyValAdd(kv, "level", string(l.LogLevel))
+    MergeKeyValWithPrefix(kv, KeyValFromMap(l.Context), "context_")
+    MergeKeyVal(kv, l.Trace.KeyVal())
+    return kv
+}
+
+// MeasurementContext is a string to string map structure that
+// represents the context of a measurement
+type MeasurementContext map[string]string
+
+// Measurement holds the data for user provided measurements
+type Measurement struct {
+    Values    map[string]float64 `json:"values,omitempty"`
+    Timestamp time.Time          `json:"timestamp,omitempty"`
+    Trace     TraceContext       `json:"trace,omitempty"`
+    Context   MeasurementContext `json:"context,omitempty"`
+}
+
+// KeyVal representation of the measurement object
+func (m Measurement) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+
+    KeyValAdd(kv, "timestamp", m.Timestamp.String())
+    KeyValAdd(kv, "kind", "measurement")
+
+    keys := make([]string, 0, len(m.Values))
+    for k := range m.Values {
+        keys = append(keys, k)
+    }
+    sort.Strings(keys)
+    for _, k := range keys {
+        KeyValAdd(kv, k, fmt.Sprintf("%f", m.Values[k]))
+    }
+    MergeKeyVal(kv, m.Trace.KeyVal())
+    MergeKeyValWithPrefix(kv, KeyValFromMap(m.Context), "context_")
+    return kv
+}
+
+// SDK holds metadata about the app agent that produced the event
+type SDK struct {
+    Name         string           `json:"name,omitempty"`
+    Version      string           `json:"version,omitempty"`
+    Integrations []SDKIntegration `json:"integrations,omitempty"`
+}
+
+// KeyVal produces key->value representation of Sdk metadata
+func (sdk SDK) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    KeyValAdd(kv, "name", sdk.Name)
+    KeyValAdd(kv, "version", sdk.Version)
+
+    if len(sdk.Integrations) > 0 {
+        integrations := make([]string, len(sdk.Integrations))
+
+        for i, integration := range sdk.Integrations {
+            integrations[i] = integration.String()
+        }
+
+        KeyValAdd(kv, "integrations", strings.Join(integrations, ","))
+    }
+
+    return kv
+}
+
+// SDKIntegration holds metadata about a plugin/integration on the app agent that collected and sent the event
+type SDKIntegration struct {
+    Name    string `json:"name,omitempty"`
+    Version string `json:"version,omitempty"`
+}
+
+func (i SDKIntegration) String() string {
+    return fmt.Sprintf("%s:%s", i.Name, i.Version)
+}
+
+// User holds metadata about the user related to an app event
+type User struct {
+    Email      string            `json:"email,omitempty"`
+    ID         string            `json:"id,omitempty"`
+    Username   string            `json:"username,omitempty"`
+    Attributes map[string]string `json:"attributes,omitempty"`
+}
+
+// KeyVal produces a key->value representation of User metadata
+func (u User) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    KeyValAdd(kv, "email", u.Email)
+    KeyValAdd(kv, "id", u.ID)
+    KeyValAdd(kv, "username", u.Username)
+    MergeKeyValWithPrefix(kv, KeyValFromMap(u.Attributes), "attr_")
+    return kv
+}
+
+// Meta holds metadata about an app event
+type Meta struct {
+    SDK     SDK     `json:"sdk,omitempty"`
+    App     App     `json:"app,omitempty"`
+    User    User    `json:"user,omitempty"`
+    Session Session `json:"session,omitempty"`
+    Page    Page    `json:"page,omitempty"`
+    Browser Browser `json:"browser,omitempty"`
+    View    View    `json:"view,omitempty"`
+}
+
+// KeyVal produces key->value representation of the app event metadata
+func (m Meta) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    MergeKeyValWithPrefix(kv, m.SDK.KeyVal(), "sdk_")
+    MergeKeyValWithPrefix(kv, m.App.KeyVal(), "app_")
+    MergeKeyValWithPrefix(kv, m.User.KeyVal(), "user_")
+    MergeKeyValWithPrefix(kv, m.Session.KeyVal(), "session_")
+    MergeKeyValWithPrefix(kv, m.Page.KeyVal(), "page_")
+    MergeKeyValWithPrefix(kv, m.Browser.KeyVal(), "browser_")
+    MergeKeyValWithPrefix(kv, m.View.KeyVal(), "view_")
+    return kv
+}
+
+// Session holds metadata about the browser session the event originates from
+type Session struct {
+    ID         string            `json:"id,omitempty"`
+    Attributes map[string]string `json:"attributes,omitempty"`
+}
+
+// KeyVal produces key->value representation of the Session metadata
+func (s Session) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    KeyValAdd(kv, "id", s.ID)
+    MergeKeyValWithPrefix(kv, KeyValFromMap(s.Attributes), "attr_")
+    return kv
+}
+
+// Page holds metadata about the web page the event originates from
+type Page struct {
+    ID         string            `json:"id,omitempty"`
+    URL        string            `json:"url,omitempty"`
+    Attributes map[string]string `json:"attributes,omitempty"`
+}
+
+// KeyVal produces key->val representation of Page metadata
+func (p Page) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    KeyValAdd(kv, "id", p.ID)
+    KeyValAdd(kv, "url", p.URL)
+    MergeKeyValWithPrefix(kv, KeyValFromMap(p.Attributes), "attr_")
+    return kv
+}
+
+// App holds metadata about the application the event originates from
+type App struct {
+    Name        string `json:"name,omitempty"`
+    Release     string `json:"release,omitempty"`
+    Version     string `json:"version,omitempty"`
+    Environment string `json:"environment,omitempty"`
+}
+
+// Event holds RUM event data
+type Event struct {
+    Name       string            `json:"name"`
+    Domain     string            `json:"domain,omitempty"`
+    Attributes map[string]string `json:"attributes,omitempty"`
+    Timestamp  time.Time         `json:"timestamp,omitempty"`
+    Trace      TraceContext      `json:"trace,omitempty"`
+}
+
+// KeyVal produces key->value representation of Event metadata
+func (e Event) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    KeyValAdd(kv, "timestamp", e.Timestamp.String())
+    KeyValAdd(kv, "kind", "event")
+    KeyValAdd(kv, "event_name", e.Name)
+    KeyValAdd(kv, "event_domain", e.Domain)
+    if e.Attributes != nil {
+        MergeKeyValWithPrefix(kv, KeyValFromMap(e.Attributes), "event_data_")
+    }
+    MergeKeyVal(kv, e.Trace.KeyVal())
+    return kv
+}
+
+// KeyVal produces key->value representation of App metadata
+func (a App) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    KeyValAdd(kv, "name", a.Name)
+    KeyValAdd(kv, "release", a.Release)
+    KeyValAdd(kv, "version", a.Version)
+    KeyValAdd(kv, "environment", a.Environment)
+    return kv
+}
+
+// Browser holds metadata about a client's browser
+type Browser struct {
+    Name    string `json:"name,omitempty"`
+    Version string `json:"version,omitempty"`
+    OS      string `json:"os,omitempty"`
+    Mobile  bool   `json:"mobile,omitempty"`
+}
+
+// KeyVal produces key->value representation of the Browser metadata
+func (b Browser) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    KeyValAdd(kv, "name", b.Name)
+    KeyValAdd(kv, "version", b.Version)
+    KeyValAdd(kv, "os", b.OS)
+    KeyValAdd(kv, "mobile", fmt.Sprintf("%v", b.Mobile))
+    return kv
+}
+
+// View holds metadata about a view
+type View struct {
+    Name string `json:"name,omitempty"`
+}
+
+// KeyVal produces key->value representation of the View metadata
+func (v View) KeyVal() *KeyVal {
+    kv := NewKeyVal()
+    KeyValAdd(kv, "name", v.Name)
+    return kv
+}
diff --git a/component/faro/receiver/internal/payload/payload_test.go b/component/faro/receiver/internal/payload/payload_test.go
new file mode 100644
index 000000000000..b219e3d8ba8f
--- /dev/null
+++ b/component/faro/receiver/internal/payload/payload_test.go
@@ -0,0 +1,141 @@
+package payload
+
+import (
+    "encoding/json"
+    "os"
+    "path/filepath"
+    "testing"
+    "time"
+
+    "github.com/stretchr/testify/require"
+)
+
+func loadTestData(t *testing.T, file string) []byte {
+    t.Helper()
+    // Safe to disable, this is a test.
+    // nolint:gosec
+    content, err := os.ReadFile(filepath.Join("../../testdata", file))
+    require.NoError(t, err, "expected to be able to read file")
+    require.True(t, len(content) > 0)
+    return content
+}
+
+func TestUnmarshalPayloadJSON(t *testing.T) {
+    content := loadTestData(t, "payload.json")
+    var payload Payload
+    err := json.Unmarshal(content, &payload)
+    require.NoError(t, err)
+
+    now, err := time.Parse("2006-01-02T15:04:05Z0700", "2021-09-30T10:46:17.680Z")
+    require.NoError(t, err)
+
+    require.Equal(t, Meta{
+        SDK: SDK{
+            Name:    "grafana-frontend-agent",
+            Version: "1.0.0",
+        },
+        App: App{
+            Name:        "testapp",
+            Release:     "0.8.2",
+            Version:     "abcdefg",
+            Environment: "production",
+        },
+        User: User{
+            Username:   "domasx2",
+            ID:         "123",
+            Email:      "geralt@kaermorhen.org",
+            Attributes: map[string]string{"foo": "bar"},
+        },
+        Session: Session{
+            ID:         "abcd",
+            Attributes: map[string]string{"time_elapsed": "100s"},
+        },
+        Page: Page{
+            URL: "https://example.com/page",
+        },
+        Browser: Browser{
+            Name:    "chrome",
+            Version: "88.12.1",
+            OS:      "linux",
+            Mobile:  false,
+        },
+        View: View{
+            Name: "foobar",
+        },
+    }, payload.Meta)
+
+    require.Len(t, payload.Exceptions, 1)
+    require.Len(t, payload.Exceptions[0].Stacktrace.Frames, 26)
+    require.Equal(t, "Error", payload.Exceptions[0].Type)
+    require.Equal(t, "Cannot read property 'find' of undefined", payload.Exceptions[0].Value)
+    require.EqualValues(t, ExceptionContext{"ReactError": "Annoying Error", "component": "ReactErrorBoundary"}, payload.Exceptions[0].Context)
+
+    require.Equal(t, []Log{
+        {
+            Message:  "opened pricing page",
+            LogLevel: LogLevelInfo,
+            Context: map[string]string{
+                "component": "AppRoot",
+                "page":      "Pricing",
+            },
+            Timestamp: now,
+            Trace: TraceContext{
+                TraceID: "abcd",
+                SpanID:  "def",
+            },
+        },
+        {
+            Message:  "loading price list",
+            LogLevel: LogLevelTrace,
+            Context: map[string]string{
+                "component": "AppRoot",
+                "page":      "Pricing",
+            },
+            Timestamp: now,
+            Trace: TraceContext{
+                TraceID: "abcd",
+                SpanID:  "ghj",
+            },
+        },
+    }, payload.Logs)
+
+    require.Equal(t, []Event{
+        {
+            Name:      "click_login_button",
+            Domain:    "frontend",
+            Timestamp: now,
+            Attributes: map[string]string{
+                "foo": "bar",
+                "one": "two",
+            },
+            Trace: TraceContext{
+                TraceID: "abcd",
+                SpanID:  "def",
+            },
+        },
+        {
+            Name:      "click_reset_password_button",
+            Timestamp: now,
+        },
+    }, payload.Events)
+
+    require.Len(t, payload.Measurements, 1)
+
+    require.Equal(t, []Measurement{
+        {
+            Values: map[string]float64{
+                "ttfp":  20.12,
+                "ttfcp": 22.12,
+                "ttfb":  14,
+            },
+            Timestamp: now,
+            Trace: TraceContext{
+                TraceID: "abcd",
+                SpanID:  "def",
+            },
+            Context: MeasurementContext{
+                "hello": "world",
+            },
+        },
+    }, payload.Measurements)
+}
diff --git a/component/faro/receiver/internal/payload/utils.go b/component/faro/receiver/internal/payload/utils.go
new file mode 100644
index 000000000000..dc6e58c90760
--- /dev/null
+++ b/component/faro/receiver/internal/payload/utils.go
@@ -0,0 +1,73 @@
+package payload
+
+import (
+    "fmt"
+    "sort"
+
+    om "github.com/wk8/go-ordered-map"
+)
+
+// KeyVal is an ordered map of string to interface
+type KeyVal = om.OrderedMap
+
+// NewKeyVal creates new empty KeyVal
+func NewKeyVal() *KeyVal {
+    return om.New()
+}
+
+// KeyValFromMap will instantiate KeyVal from a map[string]string
+func KeyValFromMap(m map[string]string) *KeyVal {
+    kv := NewKeyVal()
+    keys := make([]string, 0, len(m))
+    for k := range m {
+        keys = append(keys, k)
+    }
+    sort.Strings(keys)
+    for _, k := range keys {
+        KeyValAdd(kv, k, m[k])
+    }
+    return kv
+}
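+
+// For instance, KeyValFromMap(map[string]string{"b": "2", "a": "1"}) always
+// yields the pairs ("a","1"), ("b","2") in that order: keys are sorted before
+// insertion, which keeps the resulting logfmt output stable across payloads.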
+
+// MergeKeyVal will merge source in target
+func MergeKeyVal(target *KeyVal, source *KeyVal) {
+    for el := source.Oldest(); el != nil; el = el.Next() {
+        target.Set(el.Key, el.Value)
+    }
+}
+
+// MergeKeyValWithPrefix will merge source in target, adding a prefix to each key being merged in
+func MergeKeyValWithPrefix(target *KeyVal, source *KeyVal, prefix string) {
+    for el := source.Oldest(); el != nil; el = el.Next() {
+        target.Set(fmt.Sprintf("%s%s", prefix, el.Key), el.Value)
+    }
+}
+
+// KeyValAdd adds a key + value string pair to kv
+func KeyValAdd(kv *KeyVal, key string, value string) {
+    if len(value) > 0 {
+        kv.Set(key, value)
+    }
+}
+
+// KeyValToInterfaceSlice converts KeyVal to []interface{}, typically used for logging
+func KeyValToInterfaceSlice(kv *KeyVal) []interface{} {
+    slice := make([]interface{}, kv.Len()*2)
+    idx := 0
+    for el := kv.Oldest(); el != nil; el = el.Next() {
+        slice[idx] = el.Key
+        idx++
+        slice[idx] = el.Value
+        idx++
+    }
+    return slice
+}
+
+// KeyValToInterfaceMap converts KeyVal to map[string]interface
+func KeyValToInterfaceMap(kv *KeyVal) map[string]interface{} {
+    retv := make(map[string]interface{})
+    for el := kv.Oldest(); el != nil; el = el.Next() {
+        retv[fmt.Sprint(el.Key)] = el.Value
+    }
+    return retv
+}
diff --git a/component/faro/receiver/receiver.go b/component/faro/receiver/receiver.go
new file mode 100644
index 000000000000..838d8827c5a8
--- /dev/null
+++ b/component/faro/receiver/receiver.go
@@ -0,0 +1,232 @@
+package receiver
+
+import (
+    "context"
+    "fmt"
+    "sync"
+    "time"
+
+    "github.com/go-kit/log"
+    "github.com/go-kit/log/level"
+    "github.com/go-sourcemap/sourcemap"
+    "github.com/grafana/agent/component"
+)
+
+func init() {
+    component.Register(component.Registration{
+        Name: "faro.receiver",
+        Args: Arguments{},
+
+        Build: func(opts component.Options, args component.Arguments) (component.Component, error) {
+            return New(opts, args.(Arguments))
+        },
+    })
+}
+
+type Component struct {
+    log               log.Logger
+    handler           *handler
+    lazySourceMaps    *varSourceMapsStore
+    sourceMapsMetrics *sourceMapMetrics
+    serverMetrics     *serverMetrics
+
+    argsMut sync.RWMutex
+    args    Arguments
+
+    metrics *metricsExporter
+    logs    *logsExporter
+    traces  *tracesExporter
+
+    actorCh chan func(context.Context)
+
+    healthMut sync.RWMutex
+    health    component.Health
+}
+
+var _ component.HealthComponent = (*Component)(nil)
+
+func New(o component.Options, args Arguments) (*Component, error) {
+    var (
+        // The source maps store changes at runtime based on settings, so we create
+        // a lazy store to pass to the logs exporter.
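+        // The exporter keeps a stable reference to this wrapper while Update
+        // swaps the inner implementation via SetInner.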
+        varStore = &varSourceMapsStore{}
+
+        metrics = newMetricsExporter(o.Registerer)
+        logs    = newLogsExporter(log.With(o.Logger, "exporter", "logs"), varStore)
+        traces  = newTracesExporter(log.With(o.Logger, "exporter", "traces"))
+    )
+
+    c := &Component{
+        log: o.Logger,
+        handler: newHandler(
+            log.With(o.Logger, "subcomponent", "handler"),
+            o.Registerer,
+            []exporter{metrics, logs, traces},
+        ),
+        lazySourceMaps:    varStore,
+        sourceMapsMetrics: newSourceMapMetrics(o.Registerer),
+        serverMetrics:     newServerMetrics(o.Registerer),
+
+        metrics: metrics,
+        logs:    logs,
+        traces:  traces,
+
+        actorCh: make(chan func(context.Context), 1),
+    }
+
+    if err := c.Update(args); err != nil {
+        return nil, err
+    }
+    return c, nil
+}
+
+func (c *Component) Run(ctx context.Context) error {
+    var wg sync.WaitGroup
+    defer wg.Wait()
+
+    var (
+        cancelCurrentActor context.CancelFunc
+    )
+    defer func() {
+        if cancelCurrentActor != nil {
+            cancelCurrentActor()
+        }
+    }()
+
+    for {
+        select {
+        case <-ctx.Done():
+            return nil
+
+        case newActor := <-c.actorCh:
+            // Terminate old actor (if any), and wait for it to return.
+            if cancelCurrentActor != nil {
+                cancelCurrentActor()
+                wg.Wait()
+            }
+
+            // Run the new actor.
+            actorCtx, actorCancel := context.WithCancel(ctx)
+            cancelCurrentActor = actorCancel
+
+            wg.Add(1)
+            go func() {
+                defer wg.Done()
+                newActor(actorCtx)
+            }()
+        }
+    }
+}
+
+func (c *Component) Update(args component.Arguments) error {
+    newArgs := args.(Arguments)
+
+    c.argsMut.Lock()
+    c.args = newArgs
+    c.argsMut.Unlock()
+
+    c.logs.SetLabels(newArgs.LogLabels)
+
+    c.handler.Update(newArgs.Server)
+
+    c.lazySourceMaps.SetInner(newSourceMapsStore(
+        log.With(c.log, "subcomponent", "handler"),
+        newArgs.SourceMaps,
+        c.sourceMapsMetrics,
+        nil, // Use default HTTP client.
+        nil, // Use default FS implementation.
+    ))
+
+    c.logs.SetReceivers(newArgs.Output.Logs)
+    c.traces.SetConsumers(newArgs.Output.Traces)
+
+    // Create a new server actor to run.
+    makeNewServer := func(ctx context.Context) {
+        // NOTE(rfratto): we don't use newArgs here, since it's not guaranteed that
+        // our actor runs (we may be skipped for an existing scheduled function).
+        // Instead, we load the most recent args.
+
+        c.argsMut.RLock()
+        var (
+            args = c.args
+        )
+        c.argsMut.RUnlock()
+
+        srv := newServer(
+            log.With(c.log, "subcomponent", "server"),
+            args.Server,
+            c.serverMetrics,
+            c.handler,
+        )
+
+        // Reset health status.
+        c.setServerHealth(nil)
+
+        err := srv.Run(ctx)
+        if err != nil {
+            level.Error(c.log).Log("msg", "server exited with error", "err", err)
+            c.setServerHealth(err)
+        }
+    }
+
+    select {
+    case c.actorCh <- makeNewServer:
+        // Actor has been scheduled to run.
+    default:
+        // An actor is already scheduled to run. Don't do anything.
+    }
+
+    return nil
+}
+
+func (c *Component) setServerHealth(err error) {
+    c.healthMut.Lock()
+    defer c.healthMut.Unlock()
+
+    if err == nil {
+        c.health = component.Health{
+            Health:     component.HealthTypeHealthy,
+            Message:    "component is ready to receive telemetry over the network",
+            UpdateTime: time.Now(),
+        }
+    } else {
+        c.health = component.Health{
+            Health:     component.HealthTypeUnhealthy,
+            Message:    fmt.Sprintf("server has terminated: %s", err),
+            UpdateTime: time.Now(),
+        }
+    }
+}
+
+// CurrentHealth implements component.HealthComponent. It returns an unhealthy
+// status if the server has terminated.
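+// Health transitions are driven by the server actor: setServerHealth(nil) on
+// each (re)start and setServerHealth(err) when srv.Run returns an error.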
+func (c *Component) CurrentHealth() component.Health {
+    c.healthMut.RLock()
+    defer c.healthMut.RUnlock()
+    return c.health
+}
+
+type varSourceMapsStore struct {
+    mut   sync.RWMutex
+    inner sourceMapsStore
+}
+
+var _ sourceMapsStore = (*varSourceMapsStore)(nil)
+
+func (vs *varSourceMapsStore) GetSourceMap(sourceURL string, release string) (*sourcemap.Consumer, error) {
+    vs.mut.RLock()
+    defer vs.mut.RUnlock()
+
+    if vs.inner != nil {
+        return vs.inner.GetSourceMap(sourceURL, release)
+    }
+
+    return nil, fmt.Errorf("no sourcemap available")
+}
+
+func (vs *varSourceMapsStore) SetInner(inner sourceMapsStore) {
+    vs.mut.Lock()
+    defer vs.mut.Unlock()
+
+    vs.inner = inner
+}
diff --git a/component/faro/receiver/receiver_test.go b/component/faro/receiver/receiver_test.go
new file mode 100644
index 000000000000..45ffa25cfa35
--- /dev/null
+++ b/component/faro/receiver/receiver_test.go
@@ -0,0 +1,152 @@
+package receiver
+
+import (
+    "fmt"
+    "net/http"
+    "strings"
+    "sync"
+    "testing"
+    "time"
+
+    "github.com/grafana/agent/component/common/loki"
+    "github.com/grafana/agent/component/otelcol"
+    "github.com/grafana/agent/pkg/flow/componenttest"
+    "github.com/grafana/agent/pkg/util"
+    "github.com/grafana/loki/pkg/logproto"
+    "github.com/phayes/freeport"
+    "github.com/prometheus/common/model"
+    "github.com/stretchr/testify/require"
+)
+
+// Test performs an end-to-end test of the component.
+func Test(t *testing.T) {
+    ctx := componenttest.TestContext(t)
+
+    ctrl, err := componenttest.NewControllerFromID(
+        util.TestLogger(t),
+        "faro.receiver",
+    )
+    require.NoError(t, err)
+
+    freePort, err := freeport.GetFreePort()
+    require.NoError(t, err)
+
+    lr := newFakeLogsReceiver(t)
+
+    go func() {
+        err := ctrl.Run(ctx, Arguments{
+            LogLabels: map[string]string{
+                "foo": "bar",
+            },
+
+            Server: ServerArguments{
+                Host: "127.0.0.1",
+                Port: freePort,
+            },
+
+            Output: OutputArguments{
+                Logs:   []loki.LogsReceiver{lr},
+                Traces: []otelcol.Consumer{},
+            },
+        })
+        require.NoError(t, err)
+    }()
+
+    // Wait for the server to be running.
+    util.Eventually(t, func(t require.TestingT) {
+        resp, err := http.Get(fmt.Sprintf("http://localhost:%d/-/ready", freePort))
+        require.NoError(t, err)
+        defer resp.Body.Close()
+
+        require.Equal(t, http.StatusOK, resp.StatusCode)
+    })
+
+    // Send a sample payload to the server.
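+    // The /collect route is registered for POST and OPTIONS only (see
+    // server.go), so any other method would get 405 Method Not Allowed.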
+ resp, err := http.Post( + fmt.Sprintf("http://localhost:%d/collect", freePort), + "application/json", + strings.NewReader(`{ + "traces": { + "resourceSpans": [] + }, + "logs": [{ + "message": "hello, world", + "level": "info", + "context": {"env": "dev"}, + "timestamp": "2021-01-01T00:00:00Z", + "trace": { + "trace_id": "0", + "span_id": "0" + } + }], + "exceptions": [], + "measurements": [], + "meta": {} + }`), + ) + require.NoError(t, err) + defer resp.Body.Close() + + require.Equal(t, http.StatusAccepted, resp.StatusCode) + require.Len(t, lr.GetEntries(), 1) + + expect := loki.Entry{ + Labels: model.LabelSet{ + "foo": model.LabelValue("bar"), + }, + Entry: logproto.Entry{ + Line: `timestamp="2021-01-01 00:00:00 +0000 UTC" kind=log message="hello, world" level=info context_env=dev traceID=0 spanID=0 browser_mobile=false`, + }, + } + require.Equal(t, expect, lr.entries[0]) +} + +type fakeLogsReceiver struct { + ch chan loki.Entry + + entriesMut sync.RWMutex + entries []loki.Entry +} + +var _ loki.LogsReceiver = (*fakeLogsReceiver)(nil) + +func newFakeLogsReceiver(t *testing.T) *fakeLogsReceiver { + ctx := componenttest.TestContext(t) + + lr := &fakeLogsReceiver{ + ch: make(chan loki.Entry, 1), + } + + go func() { + defer close(lr.ch) + + select { + case <-ctx.Done(): + return + case ent := <-lr.Chan(): + + lr.entriesMut.Lock() + lr.entries = append(lr.entries, loki.Entry{ + Labels: ent.Labels, + Entry: logproto.Entry{ + Timestamp: time.Time{}, // Use consistent time for testing. + Line: ent.Line, + StructuredMetadata: ent.StructuredMetadata, + }, + }) + lr.entriesMut.Unlock() + } + }() + + return lr +} + +func (lr *fakeLogsReceiver) Chan() chan loki.Entry { + return lr.ch +} + +func (lr *fakeLogsReceiver) GetEntries() []loki.Entry { + lr.entriesMut.RLock() + defer lr.entriesMut.RUnlock() + return lr.entries +} diff --git a/component/faro/receiver/server.go b/component/faro/receiver/server.go new file mode 100644 index 000000000000..16d756a780bc --- /dev/null +++ b/component/faro/receiver/server.go @@ -0,0 +1,117 @@ +package receiver + +import ( + "context" + "fmt" + "net/http" + "time" + + "github.com/go-kit/log" + "github.com/go-kit/log/level" + "github.com/gorilla/mux" + "github.com/grafana/dskit/instrument" + "github.com/grafana/dskit/middleware" + "github.com/prometheus/client_golang/prometheus" +) + +type serverMetrics struct { + requestDuration *prometheus.HistogramVec + rxMessageSize *prometheus.HistogramVec + txMessageSize *prometheus.HistogramVec + inflightRequests *prometheus.GaugeVec +} + +func newServerMetrics(reg prometheus.Registerer) *serverMetrics { + m := &serverMetrics{ + requestDuration: prometheus.NewHistogramVec(prometheus.HistogramOpts{ + Name: "faro_receiver_request_duration_seconds", + Help: "Time (in seconds) spent serving HTTP requests.", + Buckets: instrument.DefBuckets, + }, []string{"method", "route", "status_code", "ws"}), + + rxMessageSize: prometheus.NewHistogramVec(prometheus.HistogramOpts{ + Name: "faro_receiver_request_message_bytes", + Help: "Size (in bytes) of messages received in the request.", + Buckets: middleware.BodySizeBuckets, + }, []string{"method", "route"}), + + txMessageSize: prometheus.NewHistogramVec(prometheus.HistogramOpts{ + Name: "faro_receiver_response_message_bytes", + Help: "Size (in bytes) of messages sent in response.", + Buckets: middleware.BodySizeBuckets, + }, []string{"method", "route"}), + + inflightRequests: prometheus.NewGaugeVec(prometheus.GaugeOpts{ + Name: "faro_receiver_inflight_requests", + Help: "Current number of 
inflight requests.",
+		}, []string{"method", "route"}),
+	}
+	reg.MustRegister(m.requestDuration, m.rxMessageSize, m.txMessageSize, m.inflightRequests)
+
+	return m
+}
+
+// server represents the HTTP server over which the receiver accepts telemetry
+// payloads. server is not dynamically updatable. To update server, shut down
+// the old server and start a new one.
+type server struct {
+	log     log.Logger
+	args    ServerArguments
+	handler http.Handler
+	metrics *serverMetrics
+}
+
+func newServer(l log.Logger, args ServerArguments, metrics *serverMetrics, h http.Handler) *server {
+	return &server{
+		log:     l,
+		args:    args,
+		handler: h,
+		metrics: metrics,
+	}
+}
+
+func (s *server) Run(ctx context.Context) error {
+	r := mux.NewRouter()
+	r.Handle("/collect", s.handler).Methods(http.MethodPost, http.MethodOptions)
+
+	r.HandleFunc("/-/ready", func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusOK)
+		_, _ = w.Write([]byte("ready"))
+	})
+
+	mw := middleware.Instrument{
+		RouteMatcher:     r,
+		Duration:         s.metrics.requestDuration,
+		RequestBodySize:  s.metrics.rxMessageSize,
+		ResponseBodySize: s.metrics.txMessageSize,
+		InflightRequests: s.metrics.inflightRequests,
+	}
+
+	srv := &http.Server{
+		Addr:    fmt.Sprintf("%s:%d", s.args.Host, s.args.Port),
+		Handler: mw.Wrap(r),
+	}
+
+	errCh := make(chan error, 1)
+	go func() {
+		level.Info(s.log).Log("msg", "starting server", "addr", srv.Addr)
+		errCh <- srv.ListenAndServe()
+	}()
+
+	select {
+	case <-ctx.Done():
+		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+		defer cancel()
+
+		level.Info(s.log).Log("msg", "terminating server")
+
+		if err := srv.Shutdown(ctx); err != nil {
+			level.Error(s.log).Log("msg", "failed to gracefully terminate server", "err", err)
+		}
+
+	case err := <-errCh:
+		return err
+	}
+
+	return nil
+}
diff --git a/component/faro/receiver/sourcemaps.go b/component/faro/receiver/sourcemaps.go
new file mode 100644
index 000000000000..49476c7efa4d
--- /dev/null
+++ b/component/faro/receiver/sourcemaps.go
@@ -0,0 +1,374 @@
+package receiver
+
+import (
+	"bytes"
+	"fmt"
+	"io"
+	"io/fs"
+	"net/http"
+	"net/url"
+	"os"
+	"path/filepath"
+	"regexp"
+	"strings"
+	"sync"
+	"text/template"
+
+	"github.com/go-kit/log"
+	"github.com/go-kit/log/level"
+	"github.com/go-sourcemap/sourcemap"
+	"github.com/grafana/agent/component/faro/receiver/internal/payload"
+	"github.com/minio/pkg/wildcard"
+	"github.com/prometheus/client_golang/prometheus"
+	"github.com/vincent-petithory/dataurl"
+)
+
+// sourceMapsStore is an interface for a sourcemap service capable of
+// transforming minified source locations to the original source location.
+type sourceMapsStore interface {
+	GetSourceMap(sourceURL string, release string) (*sourcemap.Consumer, error)
+}
+
+// Stub interfaces for easier mocking.
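+// Production code satisfies them with *http.Client and the os package (see
+// osFileService below); tests substitute in-memory fakes.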
+type ( + httpClient interface { + Get(url string) (*http.Response, error) + } + + fileService interface { + Stat(name string) (fs.FileInfo, error) + ReadFile(name string) ([]byte, error) + } +) + +type osFileService struct{} + +func (fs osFileService) Stat(name string) (fs.FileInfo, error) { return os.Stat(name) } +func (fs osFileService) ReadFile(name string) ([]byte, error) { return os.ReadFile(name) } + +type sourceMapMetrics struct { + cacheSize *prometheus.CounterVec + downloads *prometheus.CounterVec + fileReads *prometheus.CounterVec +} + +func newSourceMapMetrics(reg prometheus.Registerer) *sourceMapMetrics { + m := &sourceMapMetrics{ + cacheSize: prometheus.NewCounterVec(prometheus.CounterOpts{ + Name: "faro_receiver_sourcemap_cache_size", + Help: "number of items in source map cache, per origin", + }, []string{"origin"}), + downloads: prometheus.NewCounterVec(prometheus.CounterOpts{ + Name: "faro_receiver_sourcemap_downloads_total", + Help: "downloads by the source map service", + }, []string{"origin", "http_status"}), + fileReads: prometheus.NewCounterVec(prometheus.CounterOpts{ + Name: "faro_receiver_sourcemap_file_reads_total", + Help: "source map file reads from file system, by origin and status", + }, []string{"origin", "status"}), + } + + reg.MustRegister(m.cacheSize, m.downloads, m.fileReads) + return m +} + +type sourcemapFileLocation struct { + LocationArguments + pathTemplate *template.Template +} + +type sourceMapsStoreImpl struct { + log log.Logger + cli httpClient + fs fileService + args SourceMapsArguments + metrics *sourceMapMetrics + locs []*sourcemapFileLocation + + cacheMut sync.Mutex + cache map[string]*sourcemap.Consumer +} + +// newSourceMapStore creates an implementation of sourceMapsStore. The returned +// implementation is not dynamically updatable; create a new sourceMapsStore +// implementation if arguments change. +func newSourceMapsStore(log log.Logger, args SourceMapsArguments, metrics *sourceMapMetrics, cli httpClient, fs fileService) *sourceMapsStoreImpl { + // TODO(rfratto): it would be nice for this to be dynamically updatable, but + // that will require swapping out the http client (when the timeout changes) + // or to find a way to inject a download timeout without modifying the http + // client. + + if cli == nil { + cli = &http.Client{Timeout: args.DownloadTimeout} + } + if fs == nil { + fs = osFileService{} + } + + locs := []*sourcemapFileLocation{} + for _, loc := range args.Locations { + tpl, err := template.New(loc.Path).Parse(loc.Path) + if err != nil { + panic(err) // TODO(rfratto): why is this set to panic? + } + + locs = append(locs, &sourcemapFileLocation{ + LocationArguments: loc, + pathTemplate: tpl, + }) + } + + return &sourceMapsStoreImpl{ + log: log, + cli: cli, + fs: fs, + args: args, + cache: make(map[string]*sourcemap.Consumer), + metrics: metrics, + locs: locs, + } +} + +func (store *sourceMapsStoreImpl) GetSourceMap(sourceURL string, release string) (*sourcemap.Consumer, error) { + // TODO(rfratto): GetSourceMap is weak to transient errors, since it always + // caches the result, even when there's an error. This means that transient + // errors will be cached forever, preventing source maps from being retrieved. 
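+	// In the meantime, note that a nil consumer is cached on failure, so
+	// repeated lookups for the same sourceURL/release pair return quickly
+	// without retrying the lookup.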
+
+	store.cacheMut.Lock()
+	defer store.cacheMut.Unlock()
+
+	cacheKey := fmt.Sprintf("%s__%s", sourceURL, release)
+	if sm, ok := store.cache[cacheKey]; ok {
+		return sm, nil
+	}
+
+	content, sourceMapURL, err := store.getSourceMapContent(sourceURL, release)
+	if err != nil || content == nil {
+		store.cache[cacheKey] = nil
+		return nil, err
+	}
+
+	consumer, err := sourcemap.Parse(sourceMapURL, content)
+	if err != nil {
+		store.cache[cacheKey] = nil
+		level.Debug(store.log).Log("msg", "failed to parse source map", "url", sourceMapURL, "release", release, "err", err)
+		return nil, err
+	}
+	level.Info(store.log).Log("msg", "successfully parsed source map", "url", sourceMapURL, "release", release)
+	store.cache[cacheKey] = consumer
+	store.metrics.cacheSize.WithLabelValues(getOrigin(sourceURL)).Inc()
+	return consumer, nil
+}
+
+func (store *sourceMapsStoreImpl) getSourceMapContent(sourceURL string, release string) (content []byte, sourceMapURL string, err error) {
+	// Attempt to find the source map in the filesystem first.
+	for _, loc := range store.locs {
+		content, sourceMapURL, err = store.getSourceMapFromFileSystem(sourceURL, release, loc)
+		if content != nil || err != nil {
+			return content, sourceMapURL, err
+		}
+	}
+
+	// Attempt to download the sourcemap.
+	//
+	// TODO(rfratto): check if downloading is enabled.
+	if strings.HasPrefix(sourceURL, "http") && urlMatchesOrigins(sourceURL, store.args.DownloadFromOrigins) {
+		return store.downloadSourceMapContent(sourceURL)
+	}
+	return nil, "", nil
+}
+
+func (store *sourceMapsStoreImpl) getSourceMapFromFileSystem(sourceURL string, release string, loc *sourcemapFileLocation) (content []byte, sourceMapURL string, err error) {
+	if len(sourceURL) == 0 || !strings.HasPrefix(sourceURL, loc.MinifiedPathPrefix) || strings.HasSuffix(sourceURL, "/") {
+		return nil, "", nil
+	}
+
+	var rootPath bytes.Buffer
+
+	err = loc.pathTemplate.Execute(&rootPath, struct{ Release string }{Release: cleanFilePathPart(release)})
+	if err != nil {
+		return nil, "", err
+	}
+
+	pathParts := []string{rootPath.String()}
+	for _, part := range strings.Split(strings.TrimPrefix(strings.Split(sourceURL, "?")[0], loc.MinifiedPathPrefix), "/") {
+		if len(part) > 0 && part != "." && part != ".." {
+			pathParts = append(pathParts, part)
+		}
+	}
+	mapFilePath := filepath.Join(pathParts...) + ".map"
+
+	if _, err := store.fs.Stat(mapFilePath); err != nil {
+		store.metrics.fileReads.WithLabelValues(getOrigin(sourceURL), "not_found").Inc()
+		level.Debug(store.log).Log("msg", "source map not found on filesystem", "url", sourceURL, "file_path", mapFilePath)
+		return nil, "", nil
+	}
+	level.Debug(store.log).Log("msg", "source map found on filesystem", "url", sourceURL, "file_path", mapFilePath)
+
+	content, err = store.fs.ReadFile(mapFilePath)
+	if err != nil {
+		store.metrics.fileReads.WithLabelValues(getOrigin(sourceURL), "error").Inc()
+	} else {
+		store.metrics.fileReads.WithLabelValues(getOrigin(sourceURL), "ok").Inc()
+	}
+
+	return content, sourceURL, err
+}
+
+func (store *sourceMapsStoreImpl) downloadSourceMapContent(sourceURL string) (content []byte, resolvedSourceMapURL string, err error) {
+	level.Debug(store.log).Log("msg", "attempting to download source file", "url", sourceURL)
+
+	result, err := store.downloadFileContents(sourceURL)
+	if err != nil {
+		level.Debug(store.log).Log("msg", "failed to download source file", "url", sourceURL, "err", err)
+		return nil, "", err
+	}
+
+	match := reSourceMap.FindAllStringSubmatch(string(result), -1)
+	if len(match) == 0 {
+		level.Debug(store.log).Log("msg", "no source map url found in source", "url", sourceURL)
+		return nil, "", nil
+	}
+	sourceMapURL := match[len(match)-1][2]
+
+	// Inline sourcemap
+	if strings.HasPrefix(sourceMapURL, "data:") {
+		dataURL, err := dataurl.DecodeString(sourceMapURL)
+		if err != nil {
+			level.Debug(store.log).Log("msg", "failed to parse inline source map data url", "url", sourceURL, "err", err)
+			return nil, "", err
+		}
+
+		level.Info(store.log).Log("msg", "successfully parsed inline source map data url", "url", sourceURL)
+		return dataURL.Data, sourceURL + ".map", nil
+	}
+	// Remote sourcemap
+	resolvedSourceMapURL = sourceMapURL
+
+	// If the URL is relative, we need to attempt to resolve the absolute URL.
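+	// For example, a script at https://example.com/static/app.js whose last
+	// sourceMappingURL comment points at "app.js.map" resolves to
+	// https://example.com/static/app.js.map.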
+	if !strings.HasPrefix(resolvedSourceMapURL, "http") {
+		base, err := url.Parse(sourceURL)
+		if err != nil {
+			level.Debug(store.log).Log("msg", "failed to parse source URL", "url", sourceURL, "err", err)
+			return nil, "", err
+		}
+		relative, err := url.Parse(sourceMapURL)
+		if err != nil {
+			level.Debug(store.log).Log("msg", "failed to parse source map URL", "url", sourceURL, "sourceMapURL", sourceMapURL, "err", err)
+			return nil, "", err
+		}
+
+		resolvedSourceMapURL = base.ResolveReference(relative).String()
+		level.Debug(store.log).Log("msg", "resolved absolute source map URL", "url", sourceURL, "sourceMapURL", resolvedSourceMapURL)
+	}
+
+	level.Debug(store.log).Log("msg", "attempting to download source map file", "url", resolvedSourceMapURL)
+	result, err = store.downloadFileContents(resolvedSourceMapURL)
+	if err != nil {
+		level.Debug(store.log).Log("msg", "failed to download source map file", "url", resolvedSourceMapURL, "err", err)
+		return nil, "", err
+	}
+
+	return result, resolvedSourceMapURL, nil
+}
+
+func (store *sourceMapsStoreImpl) downloadFileContents(url string) ([]byte, error) {
+	resp, err := store.cli.Get(url)
+	if err != nil {
+		store.metrics.downloads.WithLabelValues(getOrigin(url), "?").Inc()
+		return nil, err
+	}
+	defer resp.Body.Close()
+
+	store.metrics.downloads.WithLabelValues(getOrigin(url), fmt.Sprint(resp.StatusCode)).Inc()
+	if resp.StatusCode != http.StatusOK {
+		return nil, fmt.Errorf("unexpected status %v", resp.StatusCode)
+	}
+
+	body, err := io.ReadAll(resp.Body)
+	if err != nil {
+		return nil, err
+	}
+	return body, nil
+}
+
+var reSourceMap = regexp.MustCompile("//[#@]\\s(source(?:Mapping)?URL)=\\s*(?P<url>\\S+)\r?\n?$")
+
+func getOrigin(URL string) string {
+	// TODO(rfratto): why are we parsing this every time? Let's parse it once.
+
+	parsed, err := url.Parse(URL)
+	if err != nil {
+		return "?" // TODO(rfratto): should invalid URLs be permitted?
+	}
+	return fmt.Sprintf("%s://%s", parsed.Scheme, parsed.Host)
+}
+
+// urlMatchesOrigins returns true if URL matches at least one of the origin
+// prefixes. Wildcards '*' and '?' are
supported +func urlMatchesOrigins(URL string, origins []string) bool { + for _, origin := range origins { + if origin == "*" || wildcard.Match(origin+"*", URL) { + return true + } + } + return false +} + +func cleanFilePathPart(x string) string { + return strings.TrimLeft(strings.ReplaceAll(strings.ReplaceAll(x, "\\", ""), "/", ""), ".") +} + +func transformException(log log.Logger, store sourceMapsStore, ex *payload.Exception, release string) *payload.Exception { + if ex.Stacktrace == nil { + return ex + } + + var frames []payload.Frame + for _, frame := range ex.Stacktrace.Frames { + mappedFrame, err := resolveSourceLocation(store, &frame, release) + if err != nil { + level.Error(log).Log("msg", "Error resolving stack trace frame source location", "err", err) + frames = append(frames, frame) + } else if mappedFrame != nil { + frames = append(frames, *mappedFrame) + } else { + frames = append(frames, frame) + } + } + + return &payload.Exception{ + Type: ex.Type, + Value: ex.Value, + Stacktrace: &payload.Stacktrace{Frames: frames}, + Timestamp: ex.Timestamp, + } +} + +func resolveSourceLocation(store sourceMapsStore, frame *payload.Frame, release string) (*payload.Frame, error) { + smap, err := store.GetSourceMap(frame.Filename, release) + if err != nil { + return nil, err + } + if smap == nil { + return nil, nil + } + + file, function, line, col, ok := smap.Source(frame.Lineno, frame.Colno) + if !ok { + return nil, nil + } + // unfortunately in many cases go-sourcemap fails to determine the original function name. + // not a big issue as long as file, line and column are correct + if len(function) == 0 { + function = "?" + } + return &payload.Frame{ + Filename: file, + Lineno: line, + Colno: col, + Function: function, + }, nil +} diff --git a/component/faro/receiver/sourcemaps_test.go b/component/faro/receiver/sourcemaps_test.go new file mode 100644 index 000000000000..63b1dedee179 --- /dev/null +++ b/component/faro/receiver/sourcemaps_test.go @@ -0,0 +1,528 @@ +package receiver + +import ( + "bytes" + "errors" + "io" + "io/fs" + "net/http" + "os" + "path/filepath" + "testing" + + "github.com/grafana/agent/component/faro/receiver/internal/payload" + "github.com/grafana/agent/pkg/util" + "github.com/prometheus/client_golang/prometheus" + "github.com/stretchr/testify/require" +) + +func Test_sourceMapsStoreImpl_DownloadSuccess(t *testing.T) { + var ( + logger = util.TestLogger(t) + + httpClient = &mockHTTPClient{ + responses: []struct { + *http.Response + error + }{ + {newResponseFromTestData(t, "foo.js"), nil}, + {newResponseFromTestData(t, "foo.js.map"), nil}, + }, + } + + store = newSourceMapsStore( + logger, + SourceMapsArguments{ + Download: true, + DownloadFromOrigins: []string{"*"}, + }, + newSourceMapMetrics(prometheus.NewRegistry()), + httpClient, + &mockFileService{}, + ) + ) + + expect := &payload.Exception{ + Stacktrace: &payload.Stacktrace{ + Frames: []payload.Frame{ + { + Colno: 37, + Filename: "/__parcel_source_root/demo/src/actions.ts", + Function: "?", + Lineno: 6, + }, + { + Colno: 2, + Filename: "/__parcel_source_root/demo/src/actions.ts", + Function: "?", + Lineno: 7, + }, + }, + }, + } + + actual := transformException(logger, store, mockException(), "123") + require.Equal(t, []string{"http://localhost:1234/foo.js", "http://localhost:1234/foo.js.map"}, httpClient.requests) + require.Equal(t, expect, actual) +} + +func Test_sourceMapsStoreImpl_DownloadError(t *testing.T) { + var ( + logger = util.TestLogger(t) + + httpClient = &mockHTTPClient{ + responses: []struct { + 
*http.Response + error + }{ + { + &http.Response{StatusCode: 500, Body: io.NopCloser(bytes.NewReader(nil))}, + nil, + }, + }, + } + + store = newSourceMapsStore( + logger, + SourceMapsArguments{ + Download: true, + DownloadFromOrigins: []string{"*"}, + }, + newSourceMapMetrics(prometheus.NewRegistry()), + httpClient, + &mockFileService{}, + ) + ) + + expect := mockException() + actual := transformException(logger, store, expect, "123") + require.Equal(t, []string{"http://localhost:1234/foo.js"}, httpClient.requests) + require.Equal(t, expect, actual) +} + +func Test_sourceMapsStoreImpl_DownloadHTTPOriginFiltering(t *testing.T) { + var ( + logger = util.TestLogger(t) + + httpClient = &mockHTTPClient{ + responses: []struct { + *http.Response + error + }{ + {newResponseFromTestData(t, "foo.js"), nil}, + {newResponseFromTestData(t, "foo.js.map"), nil}, + }, + } + + store = newSourceMapsStore( + logger, + SourceMapsArguments{ + Download: true, + DownloadFromOrigins: []string{"http://bar.com/"}, + }, + newSourceMapMetrics(prometheus.NewRegistry()), + httpClient, + &mockFileService{}, + ) + ) + + expect := &payload.Exception{ + Stacktrace: &payload.Stacktrace{ + Frames: []payload.Frame{ + { + Colno: 6, + Filename: "http://foo.com/foo.js", + Function: "eval", + Lineno: 5, + }, + { + Colno: 2, + Filename: "/__parcel_source_root/demo/src/actions.ts", + Function: "?", + Lineno: 7, + }, + }, + }, + } + + actual := transformException(logger, store, &payload.Exception{ + Stacktrace: &payload.Stacktrace{ + Frames: []payload.Frame{ + { + Colno: 6, + Filename: "http://foo.com/foo.js", + Function: "eval", + Lineno: 5, + }, + { + Colno: 5, + Filename: "http://bar.com/foo.js", + Function: "callUndefined", + Lineno: 6, + }, + }, + }, + }, "123") + + require.Equal(t, []string{"http://bar.com/foo.js", "http://bar.com/foo.js.map"}, httpClient.requests) + require.Equal(t, expect, actual) +} + +func Test_sourceMapsStoreImpl_ReadFromFileSystem(t *testing.T) { + var ( + logger = util.TestLogger(t) + + httpClient = &mockHTTPClient{} + + fileService = &mockFileService{ + files: map[string][]byte{ + filepath.FromSlash("/var/build/latest/foo.js.map"): loadTestData(t, "foo.js.map"), + filepath.FromSlash("/var/build/123/foo.js.map"): loadTestData(t, "foo.js.map"), + }, + } + + store = newSourceMapsStore( + logger, + SourceMapsArguments{ + Download: false, + Locations: []LocationArguments{ + { + MinifiedPathPrefix: "http://foo.com/", + Path: filepath.FromSlash("/var/build/latest/"), + }, + { + MinifiedPathPrefix: "http://bar.com/", + Path: filepath.FromSlash("/var/build/{{ .Release }}/"), + }, + }, + }, + newSourceMapMetrics(prometheus.NewRegistry()), + httpClient, + fileService, + ) + ) + + expect := &payload.Exception{ + Stacktrace: &payload.Stacktrace{ + Frames: []payload.Frame{ + { + Colno: 37, + Filename: "/__parcel_source_root/demo/src/actions.ts", + Function: "?", + Lineno: 6, + }, + { + Colno: 6, + Filename: "http://foo.com/bar.js", + Function: "eval", + Lineno: 5, + }, + { + Colno: 2, + Filename: "/__parcel_source_root/demo/src/actions.ts", + Function: "?", + Lineno: 7, + }, + { + Colno: 5, + Filename: "http://baz.com/foo.js", + Function: "callUndefined", + Lineno: 6, + }, + }, + }, + } + + actual := transformException(logger, store, &payload.Exception{ + Stacktrace: &payload.Stacktrace{ + Frames: []payload.Frame{ + { + Colno: 6, + Filename: "http://foo.com/foo.js", + Function: "eval", + Lineno: 5, + }, + { + Colno: 6, + Filename: "http://foo.com/bar.js", + Function: "eval", + Lineno: 5, + }, + { + Colno: 5, + 
Filename: "http://bar.com/foo.js", + Function: "callUndefined", + Lineno: 6, + }, + { + Colno: 5, + Filename: "http://baz.com/foo.js", + Function: "callUndefined", + Lineno: 6, + }, + }, + }, + }, "123") + + require.Equal(t, expect, actual) +} + +func Test_sourceMapsStoreImpl_ReadFromFileSystemAndDownload(t *testing.T) { + var ( + logger = util.TestLogger(t) + + httpClient = &mockHTTPClient{ + responses: []struct { + *http.Response + error + }{ + {newResponseFromTestData(t, "foo.js"), nil}, + {newResponseFromTestData(t, "foo.js.map"), nil}, + }, + } + + fileService = &mockFileService{ + files: map[string][]byte{ + filepath.FromSlash("/var/build/latest/foo.js.map"): loadTestData(t, "foo.js.map"), + }, + } + + store = newSourceMapsStore( + logger, + SourceMapsArguments{ + Download: true, + DownloadFromOrigins: []string{"*"}, + Locations: []LocationArguments{ + { + MinifiedPathPrefix: "http://foo.com/", + Path: filepath.FromSlash("/var/build/latest/"), + }, + }, + }, + newSourceMapMetrics(prometheus.NewRegistry()), + httpClient, + fileService, + ) + ) + + expect := &payload.Exception{ + Stacktrace: &payload.Stacktrace{ + Frames: []payload.Frame{ + { + Colno: 37, + Filename: "/__parcel_source_root/demo/src/actions.ts", + Function: "?", + Lineno: 6, + }, + { + Colno: 2, + Filename: "/__parcel_source_root/demo/src/actions.ts", + Function: "?", + Lineno: 7, + }, + }, + }, + } + + actual := transformException(logger, store, &payload.Exception{ + Stacktrace: &payload.Stacktrace{ + Frames: []payload.Frame{ + { + Colno: 6, + Filename: "http://foo.com/foo.js", + Function: "eval", + Lineno: 5, + }, + { + Colno: 5, + Filename: "http://bar.com/foo.js", + Function: "callUndefined", + Lineno: 6, + }, + }, + }, + }, "123") + + require.Equal(t, []string{filepath.FromSlash("/var/build/latest/foo.js.map")}, fileService.stats) + require.Equal(t, []string{filepath.FromSlash("/var/build/latest/foo.js.map")}, fileService.reads) + require.Equal(t, []string{"http://bar.com/foo.js", "http://bar.com/foo.js.map"}, httpClient.requests) + require.Equal(t, expect, actual) +} + +func Test_sourceMapsStoreImpl_FilepathSanitized(t *testing.T) { + var ( + logger = util.TestLogger(t) + + httpClient = &mockHTTPClient{} + fileService = &mockFileService{} + + store = newSourceMapsStore( + logger, + SourceMapsArguments{ + Download: false, + Locations: []LocationArguments{ + { + MinifiedPathPrefix: "http://foo.com/", + Path: filepath.FromSlash("/var/build/latest/"), + }, + }, + }, + newSourceMapMetrics(prometheus.NewRegistry()), + httpClient, + fileService, + ) + ) + + input := &payload.Exception{ + Stacktrace: &payload.Stacktrace{ + Frames: []payload.Frame{ + { + Colno: 6, + Filename: "http://foo.com/../../../etc/passwd", + Function: "eval", + Lineno: 5, + }, + }, + }, + } + + actual := transformException(logger, store, input, "123") + require.Equal(t, input, actual) +} + +func Test_urlMatchesOrigins(t *testing.T) { + tt := []struct { + name string + url string + origins []string + shouldMatch bool + }{ + { + name: "wildcard always matches", + url: "https://example.com/static/foo.js", + origins: []string{"https://foo.com/", "*"}, + shouldMatch: true, + }, + { + name: "exact matches", + url: "http://example.com/static/foo.js", + origins: []string{"https://foo.com/", "http://example.com/"}, + shouldMatch: true, + }, + { + name: "matches with subdomain wildcard", + url: "http://foo.bar.com/static/foo.js", + origins: []string{"https://foo.com/", "http://*.bar.com/"}, + shouldMatch: true, + }, + { + name: "no exact match", + url: 
"http://example.com/static/foo.js", + origins: []string{"https://foo.com/", "http://test.com/"}, + shouldMatch: false, + }, + { + name: "no exact match with subdomain wildcard", + url: "http://foo.bar.com/static/foo.js", + origins: []string{"https://foo.com/", "http://*.baz.com/"}, + shouldMatch: false, + }, + { + name: "matches with wildcard without protocol", + url: "http://foo.bar.com/static/foo.js", + origins: []string{"https://foo.com/", "*.bar.com/"}, + shouldMatch: true, + }, + } + + for _, tc := range tt { + t.Run(tc.name, func(t *testing.T) { + actual := urlMatchesOrigins(tc.url, tc.origins) + + if tc.shouldMatch { + require.True(t, actual, "expected %v to be matched from origin set %v", tc.url, tc.origins) + } else { + require.False(t, actual, "expected %v to not be matched from origin set %v", tc.url, tc.origins) + } + }) + } +} + +type mockHTTPClient struct { + responses []struct { + *http.Response + error + } + requests []string +} + +func (cl *mockHTTPClient) Get(url string) (resp *http.Response, err error) { + if len(cl.responses) > len(cl.requests) { + r := cl.responses[len(cl.requests)] + cl.requests = append(cl.requests, url) + return r.Response, r.error + } + return nil, errors.New("mockHTTPClient got more requests than expected") +} + +type mockFileService struct { + files map[string][]byte + stats []string + reads []string +} + +func (s *mockFileService) Stat(name string) (fs.FileInfo, error) { + s.stats = append(s.stats, name) + _, ok := s.files[name] + if !ok { + return nil, errors.New("file not found") + } + return nil, nil +} + +func (s *mockFileService) ReadFile(name string) ([]byte, error) { + s.reads = append(s.reads, name) + content, ok := s.files[name] + if ok { + return content, nil + } + return nil, errors.New("file not found") +} + +func newResponseFromTestData(t *testing.T, file string) *http.Response { + return &http.Response{ + Body: io.NopCloser(bytes.NewReader(loadTestData(t, file))), + StatusCode: 200, + } +} + +func mockException() *payload.Exception { + return &payload.Exception{ + Stacktrace: &payload.Stacktrace{ + Frames: []payload.Frame{ + { + Colno: 6, + Filename: "http://localhost:1234/foo.js", + Function: "eval", + Lineno: 5, + }, + { + Colno: 5, + Filename: "http://localhost:1234/foo.js", + Function: "callUndefined", + Lineno: 6, + }, + }, + }, + } +} + +func loadTestData(t *testing.T, file string) []byte { + t.Helper() + // Safe to disable, this is a test. 
+ // nolint:gosec + content, err := os.ReadFile(filepath.Join("testdata", file)) + require.NoError(t, err, "expected to be able to read file") + require.True(t, len(content) > 0) + return content +} diff --git a/component/faro/receiver/testdata/foo.js b/component/faro/receiver/testdata/foo.js new file mode 100644 index 000000000000..b38652a4eef6 --- /dev/null +++ b/component/faro/receiver/testdata/foo.js @@ -0,0 +1,39 @@ +function throwError() { + throw new Error('This is a thrown error'); +} +function callUndefined() { + // eslint-disable-next-line no-eval + eval('test();'); +} +function callConsole(method) { + // eslint-disable-next-line no-console + console[method](`This is a console ${method} message`); +} +function fetchError() { + fetch('http://localhost:12345', { + method: 'POST' + }); +} +function promiseReject() { + new Promise((_accept, reject)=>{ + reject('This is a rejected promise'); + }); +} +function fetchSuccess() { + fetch('http://localhost:1234'); +} +function sendCustomMetric() { + window.grafanaJavaScriptAgent.api.pushMeasurement({ + type: 'custom', + values: { + my_custom_metric: Math.random() + } + }); +} +window.addEventListener('load', ()=>{ + window.grafanaJavaScriptAgent.api.pushLog([ + 'Manual event from Home' + ]); +}); + +//# sourceMappingURL=foo.js.map diff --git a/component/faro/receiver/testdata/foo.js.map b/component/faro/receiver/testdata/foo.js.map new file mode 100644 index 000000000000..0cd49989742f --- /dev/null +++ b/component/faro/receiver/testdata/foo.js.map @@ -0,0 +1 @@ +{"mappings":"SAAS,UAAU,GAAG,CAAC;IACrB,KAAK,CAAC,GAAG,CAAC,KAAK,CAAC,CAAwB;AAC1C,CAAC;SAEQ,aAAa,GAAG,CAAC;IACxB,EAAmC,AAAnC,iCAAmC;IACnC,IAAI,CAAC,CAAS;AAChB,CAAC;SAEQ,WAAW,CAAC,MAAmD,EAAE,CAAC;IACzE,EAAsC,AAAtC,oCAAsC;IACtC,OAAO,CAAC,MAAM,GAAG,kBAAkB,EAAE,MAAM,CAAC,QAAQ;AACtD,CAAC;SAEQ,UAAU,GAAG,CAAC;IACrB,KAAK,CAAC,CAAwB,yBAAE,CAAC;QAC/B,MAAM,EAAE,CAAM;IAChB,CAAC;AACH,CAAC;SAEQ,aAAa,GAAG,CAAC;IACxB,GAAG,CAAC,OAAO,EAAE,OAAO,EAAE,MAAM,GAAK,CAAC;QAChC,MAAM,CAAC,CAA4B;IACrC,CAAC;AACH,CAAC;SAEQ,YAAY,GAAG,CAAC;IACvB,KAAK,CAAC,CAAuB;AAC/B,CAAC;SAEQ,gBAAgB,GAAG,CAAC;IAC1B,MAAM,CAAS,sBAAsB,CAAC,GAAG,CAAC,eAAe,CAAC,CAAC;QAC1D,IAAI,EAAE,CAAQ;QACd,MAAM,EAAE,CAAC;YACP,gBAAgB,EAAE,IAAI,CAAC,MAAM;QAC/B,CAAC;IACH,CAAC;AACH,CAAC;AAED,MAAM,CAAC,gBAAgB,CAAC,CAAM,WAAQ,CAAC;IACpC,MAAM,CAAS,sBAAsB,CAAC,GAAG,CAAC,OAAO,CAAC,CAAC;QAAA,CAAwB;IAAA,CAAC;AAC/E,CAAC","sources":["demo/src/actions.ts"],"sourcesContent":["function throwError() {\n throw new Error('This is a thrown error');\n}\n\nfunction callUndefined() {\n // eslint-disable-next-line no-eval\n eval('test();');\n}\n\nfunction callConsole(method: 'trace' | 'info' | 'log' | 'warn' | 'error') {\n // eslint-disable-next-line no-console\n console[method](`This is a console ${method} message`);\n}\n\nfunction fetchError() {\n fetch('http://localhost:12345', {\n method: 'POST',\n });\n}\n\nfunction promiseReject() {\n new Promise((_accept, reject) => {\n reject('This is a rejected promise');\n });\n}\n\nfunction fetchSuccess() {\n fetch('http://localhost:1234');\n}\n\nfunction sendCustomMetric() {\n (window as any).grafanaJavaScriptAgent.api.pushMeasurement({\n type: 'custom',\n values: {\n my_custom_metric: Math.random(),\n },\n });\n}\n\nwindow.addEventListener('load', () => {\n (window as any).grafanaJavaScriptAgent.api.pushLog(['Manual event from Home']);\n});\n"],"names":[],"version":3,"file":"index.28a7d598.js.map","sourceRoot":"/__parcel_source_root/"} \ No newline at end of file diff --git a/component/faro/receiver/testdata/payload.json 
b/component/faro/receiver/testdata/payload.json new file mode 100644 index 000000000000..5646a6b05294 --- /dev/null +++ b/component/faro/receiver/testdata/payload.json @@ -0,0 +1,330 @@ +{ + "logs": [ + { + "message": "opened pricing page", + "level": "info", + "context": { + "component": "AppRoot", + "page": "Pricing" + }, + "timestamp": "2021-09-30T10:46:17.680Z", + "trace": { + "trace_id": "abcd", + "span_id": "def" + } + }, + { + "message": "loading price list", + "level": "trace", + "context": { + "component": "AppRoot", + "page": "Pricing" + }, + "timestamp": "2021-09-30T10:46:17.680Z", + "trace": { + "trace_id": "abcd", + "span_id": "ghj" + } + } + ], + "exceptions": [ + { + "type": "Error", + "value": "Cannot read property 'find' of undefined", + "stacktrace": { + "frames": [ + { + "colno": 42, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "?", + "in_app": true, + "lineno": 8639 + }, + { + "colno": 9, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "dispatchAction", + "in_app": true, + "lineno": 268095 + }, + { + "colno": 13, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "scheduleUpdateOnFiber", + "in_app": true, + "lineno": 273726 + }, + { + "colno": 7, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "flushSyncCallbackQueue", + "in_app": true, + "lineno": 263362 + }, + { + "colno": 13, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "flushSyncCallbackQueueImpl", + "in_app": true, + "lineno": 263374 + }, + { + "colno": 14, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "runWithPriority$1", + "lineno": 263325 + }, + { + "colno": 16, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "unstable_runWithPriority", + "lineno": 291265 + }, + { + "colno": 30, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "?", + "lineno": 263379 + }, + { + "colno": 22, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "performSyncWorkOnRoot", + "lineno": 274126 + }, + { + "colno": 11, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "renderRootSync", + "lineno": 274509 + }, + { + "colno": 9, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "workLoopSync", + "lineno": 274543 + }, + { + "colno": 16, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "performUnitOfWork", + "lineno": 274606 + }, + { + "colno": 18, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "beginWork$1", + "in_app": true, + "lineno": 275746 + }, + { + "colno": 20, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "beginWork", + "lineno": 270944 + }, + { + "colno": 24, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "updateFunctionComponent", + "lineno": 269291 + }, + { + "colno": 22, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "renderWithHooks", + "lineno": 266969 + }, + { + "colno": 74, + "filename": "http://fe:3002/static/js/main.chunk.js", + "function": "?", + "in_app": true, + "lineno": 2600 + }, + { + "colno": 65, + "filename": "http://fe:3002/static/js/main.chunk.js", + "function": "useGetBooksQuery", + "lineno": 1299 + }, + { + "colno": 85, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "Module.useQuery", + "lineno": 8495 + }, + { + 
"colno": 83, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "useBaseQuery", + "in_app": true, + "lineno": 8656 + }, + { + "colno": 14, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "useDeepMemo", + "lineno": 8696 + }, + { + "colno": 55, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "?", + "lineno": 8657 + }, + { + "colno": 47, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "QueryData.execute", + "in_app": true, + "lineno": 7883 + }, + { + "colno": 23, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "QueryData.getExecuteResult", + "lineno": 7944 + }, + { + "colno": 19, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "QueryData._this.getQueryResult", + "lineno": 7790 + }, + { + "colno": 24, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "new ApolloError", + "in_app": true, + "lineno": 5164 + } + ] + }, + "timestamp": "2021-09-30T10:46:17.680Z", + "trace": { + "trace_id": "abcd", + "span_id": "def" + }, + "context": { + "component": "ReactErrorBoundary", + "ReactError": "Annoying Error" + } + } + ], + "measurements": [ + { + "values": { + "ttfp": 20.12, + "ttfcp": 22.12, + "ttfb": 14 + }, + "type": "page load", + "timestamp": "2021-09-30T10:46:17.680Z", + "trace": { + "trace_id": "abcd", + "span_id": "def" + }, + "context": { + "hello": "world" + } + } + ], + "events": [ + { + "name": "click_login_button", + "domain": "frontend", + "attributes": { + "foo": "bar", + "one": "two" + }, + "timestamp": "2021-09-30T10:46:17.680Z", + "trace": { + "trace_id": "abcd", + "span_id": "def" + } + }, + { + "name": "click_reset_password_button", + "timestamp": "2021-09-30T10:46:17.680Z" + } + ], + "meta": { + "sdk": { + "name": "grafana-frontend-agent", + "version": "1.0.0" + }, + "app": { + "name": "testapp", + "release": "0.8.2", + "version": "abcdefg", + "environment": "production" + }, + "user": { + "username": "domasx2", + "id": "123", + "email": "geralt@kaermorhen.org", + "attributes": { + "foo": "bar" + } + }, + "session": { + "id": "abcd", + "attributes": { + "time_elapsed": "100s" + } + }, + "page": { + "url": "https://example.com/page" + }, + "browser": { + "name": "chrome", + "version": "88.12.1", + "os": "linux", + "mobile": false + }, + "view": { + "name": "foobar" + } + }, + "traces": { + "resourceSpans": [ + { + "resource": { + "attributes": [ + { + "key": "host.name", + "value": { + "stringValue": "testHost" + } + } + ] + }, + "instrumentationLibrarySpans": [ + { + "instrumentationLibrary": { + "name": "name", + "version": "version" + }, + "spans": [ + { + "traceId": "", + "spanId": "", + "parentSpanId": "", + "name": "testSpan", + "status": {} + }, + { + "traceId": "", + "spanId": "", + "parentSpanId": "", + "name": "testSpan2", + "status": {} + } + ] + } + ] + } + ] + } +} diff --git a/component/faro/receiver/testdata/payload_2.json b/component/faro/receiver/testdata/payload_2.json new file mode 100644 index 000000000000..eb8b18e56585 --- /dev/null +++ b/component/faro/receiver/testdata/payload_2.json @@ -0,0 +1,393 @@ +{ + "logs": [ + { + "message": "opened pricing page", + "level": "info", + "context": { + "component": "AppRoot", + "page": "Pricing" + }, + "timestamp": "2021-09-30T10:46:17.680Z", + "trace": { + "trace_id": "abcd", + "span_id": "def" + } + }, + { + "message": "loading price list", + "level": "trace", + "context": { + "component": "AppRoot", + "page": 
"Pricing" + }, + "timestamp": "2021-09-30T10:46:17.680Z", + "trace": { + "trace_id": "abcd", + "span_id": "ghj" + } + } + ], + "exceptions": [ + { + "type": "Error", + "value": "Cannot read property 'find' of undefined", + "stacktrace": { + "frames": [ + { + "colno": 42, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "?", + "in_app": true, + "lineno": 8639 + }, + { + "colno": 9, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "dispatchAction", + "in_app": true, + "lineno": 268095 + }, + { + "colno": 13, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "scheduleUpdateOnFiber", + "in_app": true, + "lineno": 273726 + }, + { + "colno": 7, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "flushSyncCallbackQueue", + "in_app": true, + "lineno": 263362 + }, + { + "colno": 13, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "flushSyncCallbackQueueImpl", + "in_app": true, + "lineno": 263374 + }, + { + "colno": 14, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "runWithPriority$1", + "lineno": 263325 + }, + { + "colno": 16, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "unstable_runWithPriority", + "lineno": 291265 + }, + { + "colno": 30, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "?", + "lineno": 263379 + }, + { + "colno": 22, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "performSyncWorkOnRoot", + "lineno": 274126 + }, + { + "colno": 11, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "renderRootSync", + "lineno": 274509 + }, + { + "colno": 9, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "workLoopSync", + "lineno": 274543 + }, + { + "colno": 16, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "performUnitOfWork", + "lineno": 274606 + }, + { + "colno": 18, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "beginWork$1", + "in_app": true, + "lineno": 275746 + }, + { + "colno": 20, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "beginWork", + "lineno": 270944 + }, + { + "colno": 24, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "updateFunctionComponent", + "lineno": 269291 + }, + { + "colno": 22, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "renderWithHooks", + "lineno": 266969 + }, + { + "colno": 74, + "filename": "http://fe:3002/static/js/main.chunk.js", + "function": "?", + "in_app": true, + "lineno": 2600 + }, + { + "colno": 65, + "filename": "http://fe:3002/static/js/main.chunk.js", + "function": "useGetBooksQuery", + "lineno": 1299 + }, + { + "colno": 85, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "Module.useQuery", + "lineno": 8495 + }, + { + "colno": 83, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "useBaseQuery", + "in_app": true, + "lineno": 8656 + }, + { + "colno": 14, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "useDeepMemo", + "lineno": 8696 + }, + { + "colno": 55, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "?", + "lineno": 8657 + }, + { + "colno": 47, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "QueryData.execute", + "in_app": true, 
+ "lineno": 7883 + }, + { + "colno": 23, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "QueryData.getExecuteResult", + "lineno": 7944 + }, + { + "colno": 19, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "QueryData._this.getQueryResult", + "lineno": 7790 + }, + { + "colno": 24, + "filename": "http://fe:3002/static/js/vendors~main.chunk.js", + "function": "new ApolloError", + "in_app": true, + "lineno": 5164 + } + ] + }, + "timestamp": "2021-09-30T10:46:17.680Z", + "trace": { + "trace_id": "abcd", + "span_id": "def" + } + } + ], + "measurements": [ + { + "values": { + "ttfp": 20.12, + "ttfcp": 22.12, + "ttfb": 14 + }, + "type": "page load", + "timestamp": "2021-09-30T10:46:17.680Z", + "trace": { + "trace_id": "abcd", + "span_id": "def" + } + } + ], + "meta": { + "sdk": { + "name": "grafana-frontend-agent", + "version": "1.0.0" + }, + "app": { + "name": "testapp", + "release": "0.8.2", + "version": "abcdefg", + "environment": "production" + }, + "user": { + "username": "domasx2", + "attributes": { + "foo": "bar" + } + }, + "session": { + "id": "abcd", + "attributes": { + "time_elapsed": "100s" + } + }, + "page": { + "url": "https://example.com/page" + }, + "browser": { + "name": "chrome", + "version": "88.12.1", + "os": "linux", + "mobile": false + }, + "view": { + "name": "foobar" + } + }, + "traces": { + "resourceSpans": [ + { + "resource": { + "attributes": [ + { + "key": "service.name", + "value": { + "stringValue": "unknown_service" + } + }, + { + "key": "telemetry.sdk.language", + "value": { + "stringValue": "webjs" + } + }, + { + "key": "telemetry.sdk.name", + "value": { + "stringValue": "opentelemetry" + } + }, + { + "key": "telemetry.sdk.version", + "value": { + "stringValue": "1.0.1" + } + } + ], + "droppedAttributesCount": 0 + }, + "instrumentationLibrarySpans": [ + { + "spans": [ + { + "traceId": "2d6f18da2663c7e477df23d8a8ad95b7", + "spanId": "50e64e3fac969cbb", + "parentSpanId": "9d9da6529d56706c", + "name": "documentFetch", + "kind": 1, + "startTimeUnixNano": 1646228314336100000, + "endTimeUnixNano": 1646228314351000000, + "attributes": [ + { + "key": "component", + "value": { + "stringValue": "document-load" + } + }, + { + "key": "http.response_content_length", + "value": { + "intValue": 1326 + } + } + ], + "droppedAttributesCount": 0, + "events": [ + { + "timeUnixNano": 1646228314336100000, + "name": "fetchStart", + "attributes": [], + "droppedAttributesCount": 0 + }, + { + "timeUnixNano": 1646228314342000000, + "name": "domainLookupStart", + "attributes": [], + "droppedAttributesCount": 0 + }, + { + "timeUnixNano": 1646228314342000000, + "name": "domainLookupEnd", + "attributes": [], + "droppedAttributesCount": 0 + }, + { + "timeUnixNano": 1646228314342000000, + "name": "connectStart", + "attributes": [], + "droppedAttributesCount": 0 + }, + { + "timeUnixNano": 1646228314330100000, + "name": "secureConnectionStart", + "attributes": [], + "droppedAttributesCount": 0 + }, + { + "timeUnixNano": 1646228314342500000, + "name": "connectEnd", + "attributes": [], + "droppedAttributesCount": 0 + }, + { + "timeUnixNano": 1646228314342700000, + "name": "requestStart", + "attributes": [], + "droppedAttributesCount": 0 + }, + { + "timeUnixNano": 1646228314347000000, + "name": "responseStart", + "attributes": [], + "droppedAttributesCount": 0 + }, + { + "timeUnixNano": 1646228314351000000, + "name": "responseEnd", + "attributes": [], + "droppedAttributesCount": 0 + } + ], + "droppedEventsCount": 0, + "status": { + "code": 0 
+ }, + "links": [], + "droppedLinksCount": 0 + } + ], + "instrumentationLibrary": { + "name": "@opentelemetry/instrumentation-document-load", + "version": "0.27.1" + } + } + ] + } + ] + } +} diff --git a/component/module/file/file.go b/component/module/file/file.go index 93f1e7391738..0f0aa6c59876 100644 --- a/component/module/file/file.go +++ b/component/module/file/file.go @@ -11,6 +11,7 @@ import ( "github.com/grafana/agent/component/module" "github.com/grafana/agent/service/cluster" "github.com/grafana/agent/service/http" + otel_service "github.com/grafana/agent/service/otel" "github.com/grafana/river/rivertypes" ) @@ -19,7 +20,7 @@ func init() { Name: "module.file", Args: Arguments{}, Exports: module.Exports{}, - NeedsServices: []string{http.ServiceName, cluster.ServiceName}, + NeedsServices: []string{http.ServiceName, cluster.ServiceName, otel_service.ServiceName}, Build: func(opts component.Options, args component.Arguments) (component.Component, error) { return New(opts, args.(Arguments)) diff --git a/component/module/git/git.go b/component/module/git/git.go index d82ce44152e2..060cf2ce369e 100644 --- a/component/module/git/git.go +++ b/component/module/git/git.go @@ -15,6 +15,7 @@ import ( "github.com/grafana/agent/component/module/git/internal/vcs" "github.com/grafana/agent/service/cluster" "github.com/grafana/agent/service/http" + otel_service "github.com/grafana/agent/service/otel" ) func init() { @@ -22,7 +23,7 @@ func init() { Name: "module.git", Args: Arguments{}, Exports: module.Exports{}, - NeedsServices: []string{http.ServiceName, cluster.ServiceName}, + NeedsServices: []string{http.ServiceName, cluster.ServiceName, otel_service.ServiceName}, Build: func(opts component.Options, args component.Arguments) (component.Component, error) { return New(opts, args.(Arguments)) diff --git a/component/module/http/http.go b/component/module/http/http.go index 59891a4a69f4..8dfbc166dd21 100644 --- a/component/module/http/http.go +++ b/component/module/http/http.go @@ -11,6 +11,7 @@ import ( remote_http "github.com/grafana/agent/component/remote/http" "github.com/grafana/agent/service/cluster" http_service "github.com/grafana/agent/service/http" + otel_service "github.com/grafana/agent/service/otel" "github.com/grafana/river/rivertypes" ) @@ -19,7 +20,7 @@ func init() { Name: "module.http", Args: Arguments{}, Exports: module.Exports{}, - NeedsServices: []string{http_service.ServiceName, cluster.ServiceName}, + NeedsServices: []string{http_service.ServiceName, cluster.ServiceName, otel_service.ServiceName}, Build: func(opts component.Options, args component.Arguments) (component.Component, error) { return New(opts, args.(Arguments)) diff --git a/component/module/string/string.go b/component/module/string/string.go index 9b4ab305d666..bae1e50c6d39 100644 --- a/component/module/string/string.go +++ b/component/module/string/string.go @@ -7,6 +7,7 @@ import ( "github.com/grafana/agent/component/module" "github.com/grafana/agent/service/cluster" "github.com/grafana/agent/service/http" + otel_service "github.com/grafana/agent/service/otel" "github.com/grafana/river/rivertypes" ) @@ -15,7 +16,7 @@ func init() { Name: "module.string", Args: Arguments{}, Exports: module.Exports{}, - NeedsServices: []string{http.ServiceName, cluster.ServiceName}, + NeedsServices: []string{http.ServiceName, cluster.ServiceName, otel_service.ServiceName}, Build: func(opts component.Options, args component.Arguments) (component.Component, error) { return New(opts, args.(Arguments)) diff --git 
a/component/otelcol/auth/oauth2/oauth2.go b/component/otelcol/auth/oauth2/oauth2.go index 917ec12997ac..3396dca94d06 100644 --- a/component/otelcol/auth/oauth2/oauth2.go +++ b/component/otelcol/auth/oauth2/oauth2.go @@ -8,6 +8,7 @@ import ( "github.com/grafana/agent/component/otelcol" "github.com/grafana/agent/component/otelcol/auth" otel_service "github.com/grafana/agent/service/otel" + "github.com/grafana/river/rivertypes" "github.com/open-telemetry/opentelemetry-collector-contrib/extension/oauth2clientauthextension" otelcomponent "go.opentelemetry.io/collector/component" "go.opentelemetry.io/collector/config/configopaque" @@ -31,7 +32,7 @@ func init() { // Arguments configures the otelcol.auth.oauth2 component. type Arguments struct { ClientID string `river:"client_id,attr"` - ClientSecret string `river:"client_secret,attr"` + ClientSecret rivertypes.Secret `river:"client_secret,attr"` TokenURL string `river:"token_url,attr"` EndpointParams url.Values `river:"endpoint_params,attr,optional"` Scopes []string `river:"scopes,attr,optional"` diff --git a/component/otelcol/processor/transform/transform.go b/component/otelcol/processor/transform/transform.go new file mode 100644 index 000000000000..85f86e5ac1ca --- /dev/null +++ b/component/otelcol/processor/transform/transform.go @@ -0,0 +1,173 @@ +// Package transform provides an otelcol.processor.transform component. +package transform + +import ( + "fmt" + "strings" + + "github.com/grafana/agent/component" + "github.com/grafana/agent/component/otelcol" + "github.com/grafana/agent/component/otelcol/processor" + otel_service "github.com/grafana/agent/service/otel" + "github.com/mitchellh/mapstructure" + "github.com/open-telemetry/opentelemetry-collector-contrib/pkg/ottl" + "github.com/open-telemetry/opentelemetry-collector-contrib/processor/transformprocessor" + otelcomponent "go.opentelemetry.io/collector/component" + otelextension "go.opentelemetry.io/collector/extension" +) + +func init() { + component.Register(component.Registration{ + Name: "otelcol.processor.transform", + Args: Arguments{}, + Exports: otelcol.ConsumerExports{}, + NeedsServices: []string{otel_service.ServiceName}, + + Build: func(opts component.Options, args component.Arguments) (component.Component, error) { + fact := transformprocessor.NewFactory() + return processor.New(opts, fact, args.(Arguments)) + }, + }) +} + +type ContextID string + +const ( + Resource ContextID = "resource" + Scope ContextID = "scope" + Span ContextID = "span" + SpanEvent ContextID = "spanevent" + Metric ContextID = "metric" + DataPoint ContextID = "datapoint" + Log ContextID = "log" +) + +func (c *ContextID) UnmarshalText(text []byte) error { + str := ContextID(strings.ToLower(string(text))) + switch str { + case Resource, Scope, Span, SpanEvent, Metric, DataPoint, Log: + *c = str + return nil + default: + return fmt.Errorf("unknown context %v", str) + } +} + +type contextStatementsSlice []contextStatements + +type contextStatements struct { + Context ContextID `river:"context,attr"` + Statements []string `river:"statements,attr"` +} + +// Arguments configures the otelcol.processor.transform component. +type Arguments struct { + // ErrorMode determines how the processor reacts to errors that occur while processing a statement. 
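+	// The supported OTTL error modes are "ignore" (skip the statement that
+	// errored and continue) and "propagate" (return the error up the
+	// pipeline); the default is "propagate" (see DefaultArguments below).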
+ ErrorMode ottl.ErrorMode `river:"error_mode,attr,optional"` + TraceStatements contextStatementsSlice `river:"trace_statements,block,optional"` + MetricStatements contextStatementsSlice `river:"metric_statements,block,optional"` + LogStatements contextStatementsSlice `river:"log_statements,block,optional"` + + // Output configures where to send processed data. Required. + Output *otelcol.ConsumerArguments `river:"output,block"` +} + +var ( + _ processor.Arguments = Arguments{} +) + +// DefaultArguments holds default settings for Arguments. +var DefaultArguments = Arguments{ + ErrorMode: ottl.PropagateError, +} + +// SetToDefault implements river.Defaulter. +func (args *Arguments) SetToDefault() { + *args = DefaultArguments +} + +// Validate implements river.Validator. +func (args *Arguments) Validate() error { + otelArgs, err := args.convertImpl() + if err != nil { + return err + } + return otelArgs.Validate() +} + +func (stmts *contextStatementsSlice) convert() []interface{} { + if stmts == nil { + return nil + } + + res := make([]interface{}, 0, len(*stmts)) + + if len(*stmts) == 0 { + return res + } + + for _, stmt := range *stmts { + res = append(res, stmt.convert()) + } + return res +} + +func (args *contextStatements) convert() map[string]interface{} { + if args == nil { + return nil + } + + return map[string]interface{}{ + "context": args.Context, + "statements": args.Statements, + } +} + +// Convert implements processor.Arguments. +func (args Arguments) Convert() (otelcomponent.Config, error) { + return args.convertImpl() +} + +// convertImpl is a helper function which returns the real type of the config, +// instead of the otelcomponent.Config interface. +func (args Arguments) convertImpl() (*transformprocessor.Config, error) { + input := make(map[string]interface{}) + + input["error_mode"] = args.ErrorMode + + if len(args.TraceStatements) > 0 { + input["trace_statements"] = args.TraceStatements.convert() + } + + if len(args.MetricStatements) > 0 { + input["metric_statements"] = args.MetricStatements.convert() + } + + if len(args.LogStatements) > 0 { + input["log_statements"] = args.LogStatements.convert() + } + + var result transformprocessor.Config + err := mapstructure.Decode(input, &result) + + if err != nil { + return nil, err + } + + return &result, nil +} + +// Extensions implements processor.Arguments. +func (args Arguments) Extensions() map[otelcomponent.ID]otelextension.Extension { + return nil +} + +// Exporters implements processor.Arguments. +func (args Arguments) Exporters() map[otelcomponent.DataType]map[otelcomponent.ID]otelcomponent.Component { + return nil +} + +// NextConsumers implements processor.Arguments. 
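+// The returned consumers come from the required output block and receive the
+// transformed telemetry.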
+func (args Arguments) NextConsumers() *otelcol.ConsumerArguments { + return args.Output +} diff --git a/component/otelcol/processor/transform/transform_test.go b/component/otelcol/processor/transform/transform_test.go new file mode 100644 index 000000000000..9547d30b24ec --- /dev/null +++ b/component/otelcol/processor/transform/transform_test.go @@ -0,0 +1,549 @@ +package transform_test + +import ( + "testing" + + "github.com/grafana/agent/component/otelcol/processor/transform" + "github.com/grafana/river" + "github.com/mitchellh/mapstructure" + "github.com/open-telemetry/opentelemetry-collector-contrib/processor/transformprocessor" + "github.com/stretchr/testify/require" +) + +func TestArguments_UnmarshalRiver(t *testing.T) { + tests := []struct { + testName string + cfg string + expected map[string]interface{} + errorMsg string + }{ + { + testName: "Defaults", + cfg: ` + output {} + `, + expected: map[string]interface{}{ + "error_mode": "propagate", + }, + }, + { + testName: "IgnoreErrors", + cfg: ` + error_mode = "ignore" + output {} + `, + expected: map[string]interface{}{ + "error_mode": "ignore", + }, + }, + { + testName: "TransformIfFieldDoesNotExist", + cfg: ` + error_mode = "ignore" + trace_statements { + context = "span" + statements = [ + // Accessing a map with a key that does not exist will return nil. + "set(attributes[\"test\"], \"pass\") where attributes[\"test\"] == nil", + ] + } + output {} + `, + expected: map[string]interface{}{ + "error_mode": "ignore", + "trace_statements": []interface{}{ + map[string]interface{}{ + "context": "span", + "statements": []interface{}{ + `set(attributes["test"], "pass") where attributes["test"] == nil`, + }, + }, + }, + }, + }, + { + testName: "RenameAttribute1", + cfg: ` + error_mode = "ignore" + trace_statements { + context = "resource" + statements = [ + "set(attributes[\"namespace\"], attributes[\"k8s.namespace.name\"])", + "delete_key(attributes, \"k8s.namespace.name\")", + ] + } + output {} + `, + expected: map[string]interface{}{ + "error_mode": "ignore", + "trace_statements": []interface{}{ + map[string]interface{}{ + "context": "resource", + "statements": []interface{}{ + `set(attributes["namespace"], attributes["k8s.namespace.name"])`, + `delete_key(attributes, "k8s.namespace.name")`, + }, + }, + }, + }, + }, + { + testName: "RenameAttribute2", + cfg: ` + error_mode = "ignore" + trace_statements { + context = "resource" + statements = [ + "replace_all_patterns(attributes, \"key\", \"k8s\\\\.namespace\\\\.name\", \"namespace\")", + ] + } + output {} + `, + expected: map[string]interface{}{ + "error_mode": "ignore", + "trace_statements": []interface{}{ + map[string]interface{}{ + "context": "resource", + "statements": []interface{}{ + `replace_all_patterns(attributes, "key", "k8s\\.namespace\\.name", "namespace")`, + }, + }, + }, + }, + }, + { + testName: "CreateAttributeFromContentOfLogBody", + cfg: ` + error_mode = "ignore" + log_statements { + context = "log" + statements = [ + "set(attributes[\"body\"], body)", + ] + } + output {} + `, + expected: map[string]interface{}{ + "error_mode": "ignore", + "log_statements": []interface{}{ + map[string]interface{}{ + "context": "log", + "statements": []interface{}{ + `set(attributes["body"], body)`, + }, + }, + }, + }, + }, + { + testName: "CombineTwoAttributes", + cfg: ` + error_mode = "ignore" + trace_statements { + context = "resource" + statements = [ + // The Concat function combines any number of strings, separated by a delimiter. 
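+					// For example, Concat(["foo", "bar"], " ") evaluates to "foo bar".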
+ "set(attributes[\"test\"], Concat([attributes[\"foo\"], attributes[\"bar\"]], \" \"))", + ] + } + output {} + `, + expected: map[string]interface{}{ + "error_mode": "ignore", + "trace_statements": []interface{}{ + map[string]interface{}{ + "context": "resource", + "statements": []interface{}{ + `set(attributes["test"], Concat([attributes["foo"], attributes["bar"]], " "))`, + }, + }, + }, + }, + }, + { + testName: "ParseJsonLogs", + cfg: ` + error_mode = "ignore" + log_statements { + context = "log" + statements = [ + "merge_maps(cache, ParseJSON(body), \"upsert\") where IsMatch(body, \"^\\\\{\") ", + "set(attributes[\"attr1\"], cache[\"attr1\"])", + "set(attributes[\"attr2\"], cache[\"attr2\"])", + "set(attributes[\"nested.attr3\"], cache[\"nested\"][\"attr3\"])", + ] + } + output {} + `, + expected: map[string]interface{}{ + "error_mode": "ignore", + "log_statements": []interface{}{ + map[string]interface{}{ + "context": "log", + "statements": []interface{}{ + `merge_maps(cache, ParseJSON(body), "upsert") where IsMatch(body, "^\\{") `, + `set(attributes["attr1"], cache["attr1"])`, + `set(attributes["attr2"], cache["attr2"])`, + `set(attributes["nested.attr3"], cache["nested"]["attr3"])`, + }, + }, + }, + }, + }, + { + testName: "ManyStatements1", + cfg: ` + error_mode = "ignore" + trace_statements { + context = "resource" + statements = [ + "keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\", \"process.command_line\"])", + "replace_pattern(attributes[\"process.command_line\"], \"password\\\\=[^\\\\s]*(\\\\s?)\", \"password=***\")", + "limit(attributes, 100, [])", + "truncate_all(attributes, 4096)", + ] + } + trace_statements { + context = "span" + statements = [ + "set(status.code, 1) where attributes[\"http.path\"] == \"/health\"", + "set(name, attributes[\"http.route\"])", + "replace_match(attributes[\"http.target\"], \"/user/*/list/*\", \"/user/{userId}/list/{listId}\")", + "limit(attributes, 100, [])", + "truncate_all(attributes, 4096)", + ] + } + metric_statements { + context = "resource" + statements = [ + "keep_keys(attributes, [\"host.name\"])", + "truncate_all(attributes, 4096)", + ] + } + metric_statements { + context = "metric" + statements = [ + "set(description, \"Sum\") where type == \"Sum\"", + ] + } + metric_statements { + context = "datapoint" + statements = [ + "limit(attributes, 100, [\"host.name\"])", + "truncate_all(attributes, 4096)", + "convert_sum_to_gauge() where metric.name == \"system.processes.count\"", + "convert_gauge_to_sum(\"cumulative\", false) where metric.name == \"prometheus_metric\"", + ] + } + log_statements { + context = "resource" + statements = [ + "keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\"])", + ] + } + log_statements { + context = "log" + statements = [ + "set(severity_text, \"FAIL\") where body == \"request failed\"", + "replace_all_matches(attributes, \"/user/*/list/*\", \"/user/{userId}/list/{listId}\")", + "replace_all_patterns(attributes, \"value\", \"/account/\\\\d{4}\", \"/account/{accountId}\")", + "set(body, attributes[\"http.route\"])", + ] + } + output {} + `, + expected: map[string]interface{}{ + "error_mode": "ignore", + "trace_statements": []interface{}{ + map[string]interface{}{ + "context": "resource", + "statements": []interface{}{ + `keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"])`, + `replace_pattern(attributes["process.command_line"], "password\\=[^\\s]*(\\s?)", "password=***")`, + `limit(attributes, 100, 
[])`, + `truncate_all(attributes, 4096)`, + }, + }, + map[string]interface{}{ + "context": "span", + "statements": []interface{}{ + `set(status.code, 1) where attributes["http.path"] == "/health"`, + `set(name, attributes["http.route"])`, + `replace_match(attributes["http.target"], "/user/*/list/*", "/user/{userId}/list/{listId}")`, + `limit(attributes, 100, [])`, + `truncate_all(attributes, 4096)`, + }, + }, + }, + "metric_statements": []interface{}{ + map[string]interface{}{ + "context": "resource", + "statements": []interface{}{ + `keep_keys(attributes, ["host.name"])`, + `truncate_all(attributes, 4096)`, + }, + }, + map[string]interface{}{ + "context": "metric", + "statements": []interface{}{ + `set(description, "Sum") where type == "Sum"`, + }, + }, + map[string]interface{}{ + "context": "datapoint", + "statements": []interface{}{ + `limit(attributes, 100, ["host.name"])`, + `truncate_all(attributes, 4096)`, + `convert_sum_to_gauge() where metric.name == "system.processes.count"`, + `convert_gauge_to_sum("cumulative", false) where metric.name == "prometheus_metric"`, + }, + }, + }, + "log_statements": []interface{}{ + map[string]interface{}{ + "context": "resource", + "statements": []interface{}{ + `keep_keys(attributes, ["service.name", "service.namespace", "cloud.region"])`, + }, + }, + map[string]interface{}{ + "context": "log", + "statements": []interface{}{ + `set(severity_text, "FAIL") where body == "request failed"`, + `replace_all_matches(attributes, "/user/*/list/*", "/user/{userId}/list/{listId}")`, + `replace_all_patterns(attributes, "value", "/account/\\d{4}", "/account/{accountId}")`, + `set(body, attributes["http.route"])`, + }, + }, + }, + }, + }, + { + testName: "ManyStatements2", + cfg: ` + trace_statements { + context = "span" + statements = [ + "set(name, \"bear\") where attributes[\"http.path\"] == \"/animal\"", + "keep_keys(attributes, [\"http.method\", \"http.path\"])", + ] + } + trace_statements { + context = "resource" + statements = [ + "set(attributes[\"name\"], \"bear\")", + ] + } + metric_statements { + context = "datapoint" + statements = [ + "set(metric.name, \"bear\") where attributes[\"http.path\"] == \"/animal\"", + "keep_keys(attributes, [\"http.method\", \"http.path\"])", + ] + } + metric_statements { + context = "resource" + statements = [ + "set(attributes[\"name\"], \"bear\")", + ] + } + log_statements { + context = "log" + statements = [ + "set(body, \"bear\") where attributes[\"http.path\"] == \"/animal\"", + "keep_keys(attributes, [\"http.method\", \"http.path\"])", + ] + } + log_statements { + context = "resource" + statements = [ + "set(attributes[\"name\"], \"bear\")", + ] + } + output {} + `, + expected: map[string]interface{}{ + "error_mode": "propagate", + "trace_statements": []interface{}{ + map[string]interface{}{ + "context": "span", + "statements": []interface{}{ + `set(name, "bear") where attributes["http.path"] == "/animal"`, + `keep_keys(attributes, ["http.method", "http.path"])`, + }, + }, + map[string]interface{}{ + "context": "resource", + "statements": []interface{}{ + `set(attributes["name"], "bear")`, + }, + }, + }, + "metric_statements": []interface{}{ + map[string]interface{}{ + "context": "datapoint", + "statements": []interface{}{ + `set(metric.name, "bear") where attributes["http.path"] == "/animal"`, + `keep_keys(attributes, ["http.method", "http.path"])`, + }, + }, + map[string]interface{}{ + "context": "resource", + "statements": []interface{}{ + `set(attributes["name"], "bear")`, + }, + }, + }, + "log_statements": 
[]interface{}{ + map[string]interface{}{ + "context": "log", + "statements": []interface{}{ + `set(body, "bear") where attributes["http.path"] == "/animal"`, + `keep_keys(attributes, ["http.method", "http.path"])`, + }, + }, + map[string]interface{}{ + "context": "resource", + "statements": []interface{}{ + `set(attributes["name"], "bear")`, + }, + }, + }, + }, + }, + { + testName: "unknown_error_mode", + cfg: ` + error_mode = "test" + output {} + `, + errorMsg: `2:17: "test" unknown error mode test`, + }, + { + testName: "bad_syntax_log", + cfg: ` + log_statements { + context = "log" + statements = [ + "set(body, \"bear\" where attributes[\"http.path\"] == \"/animal\"", + "keep_keys(attributes, [\"http.method\", \"http.path\"])", + ] + } + output {} + `, + errorMsg: `unable to parse OTTL statement "set(body, \"bear\" where attributes[\"http.path\"] == \"/animal\"": statement has invalid syntax: 1:18: unexpected token "where" (expected ")" Key*)`, + }, + { + testName: "bad_syntax_metric", + cfg: ` + metric_statements { + context = "datapoint" + statements = [ + "set(name, \"bear\" where attributes[\"http.path\"] == \"/animal\"", + "keep_keys(attributes, [\"http.method\", \"http.path\"])", + ] + } + output {} + `, + errorMsg: `unable to parse OTTL statement "set(name, \"bear\" where attributes[\"http.path\"] == \"/animal\"": statement has invalid syntax: 1:18: unexpected token "where" (expected ")" Key*)`, + }, + { + testName: "bad_syntax_trace", + cfg: ` + trace_statements { + context = "span" + statements = [ + "set(name, \"bear\" where attributes[\"http.path\"] == \"/animal\"", + "keep_keys(attributes, [\"http.method\", \"http.path\"])", + ] + } + output {} + `, + errorMsg: `unable to parse OTTL statement "set(name, \"bear\" where attributes[\"http.path\"] == \"/animal\"": statement has invalid syntax: 1:18: unexpected token "where" (expected ")" Key*)`, + }, + { + testName: "unknown_function_log", + cfg: ` + log_statements { + context = "log" + statements = [ + "set(body, \"bear\") where attributes[\"http.path\"] == \"/animal\"", + "not_a_function(attributes, [\"http.method\", \"http.path\"])", + ] + } + output {} + `, + errorMsg: `unable to parse OTTL statement "not_a_function(attributes, [\"http.method\", \"http.path\"])": undefined function "not_a_function"`, + }, + { + testName: "unknown_function_metric", + cfg: ` + metric_statements { + context = "datapoint" + statements = [ + "set(metric.name, \"bear\") where attributes[\"http.path\"] == \"/animal\"", + "not_a_function(attributes, [\"http.method\", \"http.path\"])", + ] + } + output {} + `, + errorMsg: `unable to parse OTTL statement "not_a_function(attributes, [\"http.method\", \"http.path\"])": undefined function "not_a_function"`, + }, + { + testName: "unknown_function_trace", + cfg: ` + trace_statements { + context = "span" + statements = [ + "set(name, \"bear\") where attributes[\"http.path\"] == \"/animal\"", + "not_a_function(attributes, [\"http.method\", \"http.path\"])", + ] + } + output {} + `, + errorMsg: `unable to parse OTTL statement "not_a_function(attributes, [\"http.method\", \"http.path\"])": undefined function "not_a_function"`, + }, + { + testName: "unknown_context", + cfg: ` + trace_statements { + context = "test" + statements = [ + "set(name, \"bear\") where attributes[\"http.path\"] == \"/animal\"", + ] + } + output {} + `, + errorMsg: `3:15: "test" unknown context test`, + }, + } + + for _, tc := range tests { + t.Run(tc.testName, func(t *testing.T) { + var args transform.Arguments + err := 
river.Unmarshal([]byte(tc.cfg), &args) + if tc.errorMsg != "" { + require.ErrorContains(t, err, tc.errorMsg) + return + } + + require.NoError(t, err) + + actualPtr, err := args.Convert() + require.NoError(t, err) + + actual := actualPtr.(*transformprocessor.Config) + + var expectedCfg transformprocessor.Config + err = mapstructure.Decode(tc.expected, &expectedCfg) + require.NoError(t, err) + + // Validate the two configs + require.NoError(t, actual.Validate()) + require.NoError(t, expectedCfg.Validate()) + + // Compare the two configs + require.Equal(t, expectedCfg, *actual) + }) + } +} diff --git a/component/prometheus/exporter/cadvisor/cadvisor.go b/component/prometheus/exporter/cadvisor/cadvisor.go new file mode 100644 index 000000000000..79542dbf0087 --- /dev/null +++ b/component/prometheus/exporter/cadvisor/cadvisor.go @@ -0,0 +1,109 @@ +package cadvisor + +import ( + "time" + + "github.com/grafana/agent/component" + "github.com/grafana/agent/component/prometheus/exporter" + "github.com/grafana/agent/pkg/integrations" + "github.com/grafana/agent/pkg/integrations/cadvisor" +) + +func init() { + component.Register(component.Registration{ + Name: "prometheus.exporter.cadvisor", + Args: Arguments{}, + Exports: exporter.Exports{}, + NeedsServices: exporter.RequiredServices(), + Build: exporter.New(createExporter, "cadvisor"), + }) +} + +func createExporter(opts component.Options, args component.Arguments, defaultInstanceKey string) (integrations.Integration, string, error) { + a := args.(Arguments) + return integrations.NewIntegrationWithInstanceKey(opts.Logger, a.Convert(), defaultInstanceKey) +} + +// DefaultArguments holds non-zero default options for Arguments when it is +// unmarshaled from river. +var DefaultArguments = Arguments{ + StoreContainerLabels: true, + AllowlistedContainerLabels: []string{""}, + EnvMetadataAllowlist: []string{""}, + RawCgroupPrefixAllowlist: []string{""}, + ResctrlInterval: 0, + StorageDuration: 2 * time.Minute, + + ContainerdHost: "/run/containerd/containerd.sock", + ContainerdNamespace: "k8s.io", + + // TODO(@tpaschalis) Do we need the default cert/key/ca since tls is disabled by default? + DockerHost: "unix:///var/run/docker.sock", + UseDockerTLS: false, + DockerTLSCert: "cert.pem", + DockerTLSKey: "key.pem", + DockerTLSCA: "ca.pem", + + DockerOnly: false, +} + +// Arguments configures the prometheus.exporter.cadvisor component. 
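+//
+// A minimal River sketch of how this component could be wired up. This is
+// illustrative only: the attribute values are assumptions, and the
+// prometheus.remote_write.default component is assumed to be defined
+// elsewhere. Attribute names mirror the river struct tags below.
+//
+//	prometheus.exporter.cadvisor "example" {
+//	  docker_host      = "unix:///var/run/docker.sock"
+//	  storage_duration = "5m"
+//	}
+//
+//	prometheus.scrape "cadvisor" {
+//	  targets    = prometheus.exporter.cadvisor.example.targets
+//	  forward_to = [prometheus.remote_write.default.receiver]
+//	}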
+type Arguments struct { + StoreContainerLabels bool `river:"store_container_labels,attr,optional"` + AllowlistedContainerLabels []string `river:"allowlisted_container_labels,attr,optional"` + EnvMetadataAllowlist []string `river:"env_metadata_allowlist,attr,optional"` + RawCgroupPrefixAllowlist []string `river:"raw_cgroup_prefix_allowlist,attr,optional"` + PerfEventsConfig string `river:"perf_events_config,attr,optional"` + ResctrlInterval time.Duration `river:"resctrl_interval,attr,optional"` + DisabledMetrics []string `river:"disabled_metrics,attr,optional"` + EnabledMetrics []string `river:"enabled_metrics,attr,optional"` + StorageDuration time.Duration `river:"storage_duration,attr,optional"` + ContainerdHost string `river:"containerd_host,attr,optional"` + ContainerdNamespace string `river:"containerd_namespace,attr,optional"` + DockerHost string `river:"docker_host,attr,optional"` + UseDockerTLS bool `river:"use_docker_tls,attr,optional"` + DockerTLSCert string `river:"docker_tls_cert,attr,optional"` + DockerTLSKey string `river:"docker_tls_key,attr,optional"` + DockerTLSCA string `river:"docker_tls_ca,attr,optional"` + DockerOnly bool `river:"docker_only,attr,optional"` +} + +// SetToDefault implements river.Defaulter. +func (a *Arguments) SetToDefault() { + *a = DefaultArguments +} + +// Convert returns the upstream-compatible configuration struct. +func (a *Arguments) Convert() *cadvisor.Config { + if len(a.AllowlistedContainerLabels) == 0 { + a.AllowlistedContainerLabels = []string{""} + } + if len(a.RawCgroupPrefixAllowlist) == 0 { + a.RawCgroupPrefixAllowlist = []string{""} + } + if len(a.EnvMetadataAllowlist) == 0 { + a.EnvMetadataAllowlist = []string{""} + } + + cfg := &cadvisor.Config{ + StoreContainerLabels: a.StoreContainerLabels, + AllowlistedContainerLabels: a.AllowlistedContainerLabels, + EnvMetadataAllowlist: a.EnvMetadataAllowlist, + RawCgroupPrefixAllowlist: a.RawCgroupPrefixAllowlist, + PerfEventsConfig: a.PerfEventsConfig, + ResctrlInterval: int64(a.ResctrlInterval), // TODO(@tpaschalis) This is so that the cadvisor package can re-cast back to time.Duration. Can we make it use time.Duration directly instead? 
+ DisabledMetrics: a.DisabledMetrics, + EnabledMetrics: a.EnabledMetrics, + StorageDuration: a.StorageDuration, + Containerd: a.ContainerdHost, + ContainerdNamespace: a.ContainerdNamespace, + Docker: a.DockerHost, + DockerTLS: a.UseDockerTLS, + DockerTLSCert: a.DockerTLSCert, + DockerTLSKey: a.DockerTLSKey, + DockerTLSCA: a.DockerTLSCA, + DockerOnly: a.DockerOnly, + } + + return cfg +} diff --git a/component/prometheus/exporter/cadvisor/cadvisor_test.go b/component/prometheus/exporter/cadvisor/cadvisor_test.go new file mode 100644 index 000000000000..788ffb9094a2 --- /dev/null +++ b/component/prometheus/exporter/cadvisor/cadvisor_test.go @@ -0,0 +1,95 @@ +package cadvisor + +import ( + "testing" + "time" + + "github.com/grafana/agent/pkg/integrations/cadvisor" + "github.com/grafana/river" + "github.com/stretchr/testify/require" +) + +func TestUnmarshalRiver(t *testing.T) { + riverCfg := ` +store_container_labels = true +allowlisted_container_labels = ["label1", "label2"] +env_metadata_allowlist = ["env1", "env2"] +raw_cgroup_prefix_allowlist = ["prefix1", "prefix2"] +perf_events_config = "perf_events_config" +resctrl_interval = "1s" +disabled_metrics = ["metric1", "metric2"] +enabled_metrics = ["metric3", "metric4"] +storage_duration = "2s" +containerd_host = "containerd_host" +containerd_namespace = "containerd_namespace" +docker_host = "docker_host" +use_docker_tls = true +docker_tls_cert = "docker_tls_cert" +docker_tls_key = "docker_tls_key" +docker_tls_ca = "docker_tls_ca" +` + var args Arguments + err := river.Unmarshal([]byte(riverCfg), &args) + require.NoError(t, err) + expected := Arguments{ + StoreContainerLabels: true, + AllowlistedContainerLabels: []string{"label1", "label2"}, + EnvMetadataAllowlist: []string{"env1", "env2"}, + RawCgroupPrefixAllowlist: []string{"prefix1", "prefix2"}, + PerfEventsConfig: "perf_events_config", + ResctrlInterval: 1 * time.Second, + DisabledMetrics: []string{"metric1", "metric2"}, + EnabledMetrics: []string{"metric3", "metric4"}, + StorageDuration: 2 * time.Second, + ContainerdHost: "containerd_host", + ContainerdNamespace: "containerd_namespace", + DockerHost: "docker_host", + UseDockerTLS: true, + DockerTLSCert: "docker_tls_cert", + DockerTLSKey: "docker_tls_key", + DockerTLSCA: "docker_tls_ca", + } + require.Equal(t, expected, args) +} + +func TestConvert(t *testing.T) { + args := Arguments{ + StoreContainerLabels: true, + AllowlistedContainerLabels: []string{"label1", "label2"}, + EnvMetadataAllowlist: []string{"env1", "env2"}, + RawCgroupPrefixAllowlist: []string{"prefix1", "prefix2"}, + PerfEventsConfig: "perf_events_config", + ResctrlInterval: 1 * time.Second, + DisabledMetrics: []string{"metric1", "metric2"}, + EnabledMetrics: []string{"metric3", "metric4"}, + StorageDuration: 2 * time.Second, + ContainerdHost: "containerd_host", + ContainerdNamespace: "containerd_namespace", + DockerHost: "docker_host", + UseDockerTLS: true, + DockerTLSCert: "docker_tls_cert", + DockerTLSKey: "docker_tls_key", + DockerTLSCA: "docker_tls_ca", + } + + res := args.Convert() + expected := &cadvisor.Config{ + StoreContainerLabels: true, + AllowlistedContainerLabels: []string{"label1", "label2"}, + EnvMetadataAllowlist: []string{"env1", "env2"}, + RawCgroupPrefixAllowlist: []string{"prefix1", "prefix2"}, + PerfEventsConfig: "perf_events_config", + ResctrlInterval: int64(1 * time.Second), + DisabledMetrics: []string{"metric1", "metric2"}, + EnabledMetrics: []string{"metric3", "metric4"}, + StorageDuration: 2 * time.Second, + Containerd: "containerd_host", + 
ContainerdNamespace: "containerd_namespace", + Docker: "docker_host", + DockerTLS: true, + DockerTLSCert: "docker_tls_cert", + DockerTLSKey: "docker_tls_key", + DockerTLSCA: "docker_tls_ca", + } + require.Equal(t, expected, res) +} diff --git a/component/prometheus/exporter/redis/redis.go b/component/prometheus/exporter/redis/redis.go index 1d81d9c97f09..c6fc45822bf3 100644 --- a/component/prometheus/exporter/redis/redis.go +++ b/component/prometheus/exporter/redis/redis.go @@ -38,6 +38,7 @@ var DefaultArguments = Arguments{ SetClientName: true, CheckKeyGroupsBatchSize: 10000, MaxDistinctKeyGroups: 100, + ExportKeyValues: true, } type Arguments struct { @@ -61,6 +62,7 @@ type Arguments struct { CheckSingleKeys []string `river:"check_single_keys,attr,optional"` CheckStreams []string `river:"check_streams,attr,optional"` CheckSingleStreams []string `river:"check_single_streams,attr,optional"` + ExportKeyValues bool `river:"export_key_values,attr,optional"` CountKeys []string `river:"count_keys,attr,optional"` ScriptPath string `river:"script_path,attr,optional"` ScriptPaths []string `river:"script_paths,attr,optional"` @@ -116,6 +118,7 @@ func (a *Arguments) Convert() *redis_exporter.Config { CheckSingleKeys: strings.Join(a.CheckSingleKeys, ","), CheckStreams: strings.Join(a.CheckStreams, ","), CheckSingleStreams: strings.Join(a.CheckSingleStreams, ","), + ExportKeyValues: a.ExportKeyValues, CountKeys: strings.Join(a.CountKeys, ","), ScriptPath: scriptPath, ConnectionTimeout: a.ConnectionTimeout, diff --git a/component/prometheus/exporter/redis/redis_test.go b/component/prometheus/exporter/redis/redis_test.go index 39df02762d25..e69cd22f6d5c 100644 --- a/component/prometheus/exporter/redis/redis_test.go +++ b/component/prometheus/exporter/redis/redis_test.go @@ -11,33 +11,34 @@ import ( func TestRiverUnmarshal(t *testing.T) { riverConfig := ` - redis_addr = "localhost:6379" - redis_user = "redis_user" - redis_password_file = "/tmp/pass" - namespace = "namespace" - config_command = "TEST_CONFIG" - check_keys = ["key1*", "cache_*"] - check_key_groups = ["other_key%d+"] - check_key_groups_batch_size = 5000 - max_distinct_key_groups = 50 - check_single_keys = ["particular_key"] - check_streams = ["stream1*"] - check_single_streams = ["particular_stream"] - count_keys = ["count_key1", "count_key2"] - script_path = "/tmp/metrics-script.lua,/tmp/cooler-metrics-script.lua" - connection_timeout = "7s" - tls_client_key_file = "/tmp/client-key.pem" - tls_client_cert_file = "/tmp/client-cert.pem" - tls_ca_cert_file = "/tmp/ca-cert.pem" - set_client_name = false - is_tile38 = true - export_client_list = false - export_client_port = true - redis_metrics_only = false - ping_on_connect = true - incl_system_metrics = true - skip_tls_verification = false - is_cluster = true + redis_addr = "localhost:6379" + redis_user = "redis_user" + redis_password_file = "/tmp/pass" + namespace = "namespace" + config_command = "TEST_CONFIG" + check_keys = ["key1*", "cache_*"] + check_key_groups = ["other_key%d+"] + check_key_groups_batch_size = 5000 + max_distinct_key_groups = 50 + check_single_keys = ["particular_key"] + check_streams = ["stream1*"] + check_single_streams = ["particular_stream"] + export_key_values = false + count_keys = ["count_key1", "count_key2"] + script_path = "/tmp/metrics-script.lua,/tmp/cooler-metrics-script.lua" + connection_timeout = "7s" + tls_client_key_file = "/tmp/client-key.pem" + tls_client_cert_file = "/tmp/client-cert.pem" + tls_ca_cert_file = "/tmp/ca-cert.pem" + set_client_name = false 
+	is_tile38 = true
+	export_client_list = false
+	export_client_port = true
+	redis_metrics_only = false
+	ping_on_connect = true
+	incl_system_metrics = true
+	skip_tls_verification = false
+	is_cluster = true
 `
 	var args Arguments
 	err := river.Unmarshal([]byte(riverConfig), &args)
@@ -58,6 +59,7 @@ func TestRiverUnmarshal(t *testing.T) {
 		CheckStreams:       []string{"stream1*"},
 		CheckSingleStreams: []string{"particular_stream"},
+		ExportKeyValues:    false,
 		CountKeys:          []string{"count_key1", "count_key2"},
 		ScriptPath:         "/tmp/metrics-script.lua,/tmp/cooler-metrics-script.lua",
@@ -110,6 +112,7 @@ func TestRiverConvert(t *testing.T) {
 		CheckKeys:               []string{"key1*", "cache_*"},
 		CheckKeyGroups:          []string{"other_key%d+"},
 		CheckSingleKeys:         []string{"particular_key"},
+		ExportKeyValues:         false,
 		CountKeys:               []string{"count_key1", "count_key2"},
 		CheckKeyGroupsBatchSize: 5000,
 		MaxDistinctKeyGroups:    50,
@@ -144,6 +147,7 @@ func TestRiverConvert(t *testing.T) {
 		CheckKeys:               "key1*,cache_*",
 		CheckKeyGroups:          "other_key%d+",
 		CheckSingleKeys:         "particular_key",
+		ExportKeyValues:         false,
 		CountKeys:               "count_key1,count_key2",
 		CheckKeyGroupsBatchSize: 5000,
 		MaxDistinctKeyGroups:    50,
diff --git a/component/prometheus/exporter/unix/unix.go b/component/prometheus/exporter/unix/unix.go
index 24dd546c8eee..e33fa9a63136 100644
--- a/component/prometheus/exporter/unix/unix.go
+++ b/component/prometheus/exporter/unix/unix.go
@@ -11,7 +11,6 @@ func init() {
 		Name:          "prometheus.exporter.unix",
 		Args:          Arguments{},
 		Exports:       exporter.Exports{},
-		Singleton:     true,
 		NeedsServices: exporter.RequiredServices(),
 		Build:         exporter.New(createExporter, "unix"),
 	})
diff --git a/component/prometheus/exporter/windows/windows.go b/component/prometheus/exporter/windows/windows.go
index ee4b8333d61a..214302748aee 100644
--- a/component/prometheus/exporter/windows/windows.go
+++ b/component/prometheus/exporter/windows/windows.go
@@ -11,7 +11,6 @@ func init() {
 		Name:          "prometheus.exporter.windows",
 		Args:          Arguments{},
 		Exports:       exporter.Exports{},
-		Singleton:     false,
 		NeedsServices: exporter.RequiredServices(),
 		Build:         exporter.New(createExporter, "windows"),
 	})
diff --git a/component/prometheus/operator/common/component.go b/component/prometheus/operator/common/component.go
index feff191557b2..cc3e44768c57 100644
--- a/component/prometheus/operator/common/component.go
+++ b/component/prometheus/operator/common/component.go
@@ -3,6 +3,8 @@ package common
 import (
 	"context"
 	"fmt"
+	"net/http"
+	"strings"
 	"sync"
 	"time"
@@ -10,6 +12,7 @@ import (
 	"github.com/grafana/agent/component"
 	"github.com/grafana/agent/component/prometheus/operator"
 	"github.com/grafana/agent/service/cluster"
+	"gopkg.in/yaml.v3"
 )
 type Component struct {
@@ -143,3 +146,38 @@ func (c *Component) reportHealth(err error) {
 		}
 	}
 }
+
+func (c *Component) Handler() http.Handler {
+	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		// Very simple path handling: only responds to `/scrapeConfig/$NS/$NAME`.
+		c.mut.RLock()
+		man := c.manager
+		c.mut.RUnlock()
+		path := strings.Trim(r.URL.Path, "/")
+		parts := strings.Split(path, "/")
+		if man == nil || len(parts) != 3 || parts[0] != "scrapeConfig" {
+			w.WriteHeader(http.StatusNotFound)
+			return
+		}
+		ns := parts[1]
+		name := parts[2]
+		scs := man.getScrapeConfig(ns, name)
+		if len(scs) == 0 {
+			w.WriteHeader(http.StatusNotFound)
+			return
+		}
+		dat, err := yaml.Marshal(scs)
+		if err != nil {
+			// The status code must be set before the body is written;
+			// otherwise an implicit 200 has already been sent.
+			w.WriteHeader(http.StatusInternalServerError)
+			_, _ = w.Write([]byte(err.Error()))
+			return
+		}
+		if _, err := w.Write(dat); err != nil {
+			// Headers have already been sent at this point, so a 500 can
+			// no longer be returned to the client.
+			return
+		}
+	})
+} diff --git a/component/prometheus/operator/common/crdmanager.go b/component/prometheus/operator/common/crdmanager.go index 9c8926906a1d..7ce1aa368827 100644 --- a/component/prometheus/operator/common/crdmanager.go +++ b/component/prometheus/operator/common/crdmanager.go @@ -14,6 +14,7 @@ import ( "github.com/grafana/agent/component" "github.com/grafana/agent/component/prometheus" "github.com/grafana/agent/service/cluster" + "github.com/grafana/agent/service/http" "github.com/grafana/ckit/shard" "github.com/prometheus/common/model" "github.com/prometheus/prometheus/config" @@ -215,6 +216,17 @@ func (c *crdManager) DebugInfo() interface{} { return info } +func (c *crdManager) getScrapeConfig(ns, name string) []*config.ScrapeConfig { + prefix := fmt.Sprintf("%s/%s/%s", c.kind, ns, name) + matches := []*config.ScrapeConfig{} + for k, v := range c.scrapeConfigs { + if strings.HasPrefix(k, prefix) { + matches = append(matches, v) + } + } + return matches +} + // runInformers starts all the informers that are required to discover CRDs. func (c *crdManager) runInformers(restConfig *rest.Config, ctx context.Context) error { scheme := runtime.NewScheme() @@ -358,6 +370,11 @@ func (c *crdManager) addDebugInfo(ns string, name string, err error) { } else { debug.ReconcileError = "" } + if data, err := c.opts.GetServiceData(http.ServiceName); err == nil { + if hdata, ok := data.(http.Data); ok { + debug.ScrapeConfigsURL = fmt.Sprintf("%s%s/scrapeConfig/%s/%s", hdata.HTTPListenAddr, hdata.HTTPPathForComponent(c.opts.ID), ns, name) + } + } prefix := fmt.Sprintf("%s/%s/%s", c.kind, ns, name) c.debugInfo[prefix] = debug } @@ -400,13 +417,13 @@ func (c *crdManager) onAddPodMonitor(obj interface{}) { } func (c *crdManager) onUpdatePodMonitor(oldObj, newObj interface{}) { pm := oldObj.(*promopv1.PodMonitor) - c.clearConfigs("podMonitor", pm.Namespace, pm.Name) + c.clearConfigs(pm.Namespace, pm.Name) c.addPodMonitor(newObj.(*promopv1.PodMonitor)) } func (c *crdManager) onDeletePodMonitor(obj interface{}) { pm := obj.(*promopv1.PodMonitor) - c.clearConfigs("podMonitor", pm.Namespace, pm.Name) + c.clearConfigs(pm.Namespace, pm.Name) if err := c.apply(); err != nil { level.Error(c.logger).Log("name", pm.Name, "err", err, "msg", "error applying scrape configs after deleting "+c.kind) } @@ -450,13 +467,13 @@ func (c *crdManager) onAddServiceMonitor(obj interface{}) { } func (c *crdManager) onUpdateServiceMonitor(oldObj, newObj interface{}) { pm := oldObj.(*promopv1.ServiceMonitor) - c.clearConfigs("serviceMonitor", pm.Namespace, pm.Name) + c.clearConfigs(pm.Namespace, pm.Name) c.addServiceMonitor(newObj.(*promopv1.ServiceMonitor)) } func (c *crdManager) onDeleteServiceMonitor(obj interface{}) { pm := obj.(*promopv1.ServiceMonitor) - c.clearConfigs("serviceMonitor", pm.Namespace, pm.Name) + c.clearConfigs(pm.Namespace, pm.Name) if err := c.apply(); err != nil { level.Error(c.logger).Log("name", pm.Name, "err", err, "msg", "error applying scrape configs after deleting "+c.kind) } @@ -498,22 +515,22 @@ func (c *crdManager) onAddProbe(obj interface{}) { } func (c *crdManager) onUpdateProbe(oldObj, newObj interface{}) { pm := oldObj.(*promopv1.Probe) - c.clearConfigs("probe", pm.Namespace, pm.Name) + c.clearConfigs(pm.Namespace, pm.Name) c.addProbe(newObj.(*promopv1.Probe)) } func (c *crdManager) onDeleteProbe(obj interface{}) { pm := obj.(*promopv1.Probe) - c.clearConfigs("probe", pm.Namespace, pm.Name) + c.clearConfigs(pm.Namespace, pm.Name) if err := c.apply(); err != nil { level.Error(c.logger).Log("name", 
pm.Name, "err", err, "msg", "error applying scrape configs after deleting "+c.kind) } } -func (c *crdManager) clearConfigs(kind, ns, name string) { +func (c *crdManager) clearConfigs(ns, name string) { c.mut.Lock() defer c.mut.Unlock() - prefix := fmt.Sprintf("%s/%s/%s", kind, ns, name) + prefix := fmt.Sprintf("%s/%s/%s", c.kind, ns, name) for k := range c.discoveryConfigs { if strings.HasPrefix(k, prefix) { delete(c.discoveryConfigs, k) diff --git a/component/prometheus/operator/podmonitors/operator.go b/component/prometheus/operator/podmonitors/operator.go index a9895511ece4..c35c277acee0 100644 --- a/component/prometheus/operator/podmonitors/operator.go +++ b/component/prometheus/operator/podmonitors/operator.go @@ -5,13 +5,14 @@ import ( "github.com/grafana/agent/component/prometheus/operator" "github.com/grafana/agent/component/prometheus/operator/common" "github.com/grafana/agent/service/cluster" + "github.com/grafana/agent/service/http" ) func init() { component.Register(component.Registration{ Name: "prometheus.operator.podmonitors", Args: operator.Arguments{}, - NeedsServices: []string{cluster.ServiceName}, + NeedsServices: []string{cluster.ServiceName, http.ServiceName}, Build: func(opts component.Options, args component.Arguments) (component.Component, error) { return common.New(opts, args, common.KindPodMonitor) diff --git a/component/prometheus/operator/probes/probes.go b/component/prometheus/operator/probes/probes.go index 219b167119ce..e8d73ef10bf6 100644 --- a/component/prometheus/operator/probes/probes.go +++ b/component/prometheus/operator/probes/probes.go @@ -5,13 +5,14 @@ import ( "github.com/grafana/agent/component/prometheus/operator" "github.com/grafana/agent/component/prometheus/operator/common" "github.com/grafana/agent/service/cluster" + "github.com/grafana/agent/service/http" ) func init() { component.Register(component.Registration{ Name: "prometheus.operator.probes", Args: operator.Arguments{}, - NeedsServices: []string{cluster.ServiceName}, + NeedsServices: []string{cluster.ServiceName, http.ServiceName}, Build: func(opts component.Options, args component.Arguments) (component.Component, error) { return common.New(opts, args, common.KindProbe) diff --git a/component/prometheus/operator/servicemonitors/servicemonitors.go b/component/prometheus/operator/servicemonitors/servicemonitors.go index beb063352b58..9abc214b969d 100644 --- a/component/prometheus/operator/servicemonitors/servicemonitors.go +++ b/component/prometheus/operator/servicemonitors/servicemonitors.go @@ -5,13 +5,14 @@ import ( "github.com/grafana/agent/component/prometheus/operator" "github.com/grafana/agent/component/prometheus/operator/common" "github.com/grafana/agent/service/cluster" + "github.com/grafana/agent/service/http" ) func init() { component.Register(component.Registration{ Name: "prometheus.operator.servicemonitors", Args: operator.Arguments{}, - NeedsServices: []string{cluster.ServiceName}, + NeedsServices: []string{cluster.ServiceName, http.ServiceName}, Build: func(opts component.Options, args component.Arguments) (component.Component, error) { return common.New(opts, args, common.KindServiceMonitor) diff --git a/component/prometheus/operator/types.go b/component/prometheus/operator/types.go index 2696f0c4fce0..b40b0f2fe70c 100644 --- a/component/prometheus/operator/types.go +++ b/component/prometheus/operator/types.go @@ -71,8 +71,9 @@ type DebugInfo struct { } type DiscoveredResource struct { - Namespace string `river:"namespace,attr"` - Name string `river:"name,attr"` - 
LastReconcile time.Time `river:"last_reconcile,attr,optional"` - ReconcileError string `river:"reconcile_error,attr,optional"` + Namespace string `river:"namespace,attr"` + Name string `river:"name,attr"` + LastReconcile time.Time `river:"last_reconcile,attr,optional"` + ReconcileError string `river:"reconcile_error,attr,optional"` + ScrapeConfigsURL string `river:"scrape_configs_url,attr,optional"` } diff --git a/component/prometheus/receive_http/receive_http.go b/component/prometheus/receive_http/receive_http.go index eba12ecb9759..526488ce7fd6 100644 --- a/component/prometheus/receive_http/receive_http.go +++ b/component/prometheus/receive_http/receive_http.go @@ -59,7 +59,7 @@ func New(opts component.Options, args Arguments) (*Component, error) { c := &Component{ opts: opts, - handler: remote.NewWriteHandler(opts.Logger, fanout), + handler: remote.NewWriteHandler(opts.Logger, opts.Registerer, fanout), fanout: fanout, uncheckedCollector: uncheckedCollector, } diff --git a/component/prometheus/receive_http/receive_http_test.go b/component/prometheus/receive_http/receive_http_test.go index 980ff94ac647..ceef61a5c507 100644 --- a/component/prometheus/receive_http/receive_http_test.go +++ b/component/prometheus/receive_http/receive_http_test.go @@ -373,7 +373,7 @@ func request(ctx context.Context, rawRemoteWriteURL string, req *prompb.WriteReq } compressed := snappy.Encode(buf, buf) - return client.Store(ctx, compressed) + return client.Store(ctx, compressed, 0) } func testOptions(t *testing.T) component.Options { diff --git a/component/prometheus/remotewrite/types.go b/component/prometheus/remotewrite/types.go index 12665f02f3dd..39ef6a55a191 100644 --- a/component/prometheus/remotewrite/types.go +++ b/component/prometheus/remotewrite/types.go @@ -8,12 +8,16 @@ import ( types "github.com/grafana/agent/component/common/config" flow_relabel "github.com/grafana/agent/component/common/relabel" + "github.com/grafana/river/rivertypes" + "github.com/google/uuid" common "github.com/prometheus/common/config" "github.com/prometheus/common/model" + promsigv4 "github.com/prometheus/common/sigv4" "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/model/labels" "github.com/prometheus/prometheus/storage" + "github.com/prometheus/prometheus/storage/remote/azuread" ) // Defaults for config blocks. @@ -72,6 +76,8 @@ type EndpointOptions struct { QueueOptions *QueueOptions `river:"queue_config,block,optional"` MetadataOptions *MetadataOptions `river:"metadata_config,block,optional"` WriteRelabelConfigs []*flow_relabel.Config `river:"write_relabel_config,block,optional"` + SigV4 *SigV4Config `river:"sigv4,block,optional"` + AzureAD *AzureADConfig `river:"azuread,block,optional"` } // SetToDefault implements river.Defaulter. @@ -83,6 +89,14 @@ func (r *EndpointOptions) SetToDefault() { } } +func isAuthSetInHttpClientConfig(cfg *types.HTTPClientConfig) bool { + return cfg.BasicAuth != nil || + cfg.OAuth2 != nil || + cfg.Authorization != nil || + len(cfg.BearerToken) > 0 || + len(cfg.BearerTokenFile) > 0 +} + // Validate implements river.Validator. 
func (r *EndpointOptions) Validate() error {
 	// We must explicitly Validate because HTTPClientConfig is squashed and it won't run otherwise
@@ -92,6 +106,20 @@ func (r *EndpointOptions) Validate() error {
 		}
 	}
 
+	const tooManyAuthErr = "at most one of sigv4, azuread, basic_auth, oauth2, bearer_token & bearer_token_file must be configured"
+
+	if r.SigV4 != nil {
+		if r.AzureAD != nil || isAuthSetInHttpClientConfig(r.HTTPClientConfig) {
+			return fmt.Errorf(tooManyAuthErr)
+		}
+	}
+
+	if r.AzureAD != nil {
+		if r.SigV4 != nil || isAuthSetInHttpClientConfig(r.HTTPClientConfig) {
+			return fmt.Errorf(tooManyAuthErr)
+		}
+	}
+
 	if r.WriteRelabelConfigs != nil {
 		for _, relabelConfig := range r.WriteRelabelConfigs {
 			if err := relabelConfig.Validate(); err != nil {
@@ -216,7 +244,8 @@ func convertConfigs(cfg Arguments) (*config.Config, error) {
 			HTTPClientConfig: *rw.HTTPClientConfig.Convert(),
 			QueueConfig:      rw.QueueOptions.toPrometheusType(),
 			MetadataConfig:   rw.MetadataOptions.toPrometheusType(),
-			// TODO(rfratto): SigV4Config
+			SigV4Config:      rw.SigV4.toPrometheusType(),
+			AzureADConfig:    rw.AzureAD.toPrometheusType(),
 		})
 	}
@@ -236,3 +265,84 @@ func toLabels(in map[string]string) labels.Labels {
 	sort.Sort(res)
 	return res
 }
+
+// ManagedIdentityConfig is used to store managed identity config values.
+type ManagedIdentityConfig struct {
+	// ClientID is the clientId of the managed identity that is being used to authenticate.
+	ClientID string `river:"client_id,attr"`
+}
+
+func (m ManagedIdentityConfig) toPrometheusType() azuread.ManagedIdentityConfig {
+	return azuread.ManagedIdentityConfig{
+		ClientID: m.ClientID,
+	}
+}
+
+type AzureADConfig struct {
+	// ManagedIdentity is the managed identity that is being used to authenticate.
+	ManagedIdentity ManagedIdentityConfig `river:"managed_identity,block"`
+
+	// Cloud is the Azure cloud in which the service is running. Example: AzurePublic/AzureGovernment/AzureChina.
+	Cloud string `river:"cloud,attr,optional"`
+}
+
+func (a *AzureADConfig) Validate() error {
+	if a.Cloud != azuread.AzureChina && a.Cloud != azuread.AzureGovernment && a.Cloud != azuread.AzurePublic {
+		return fmt.Errorf("unsupported Azure cloud %q: must be one of %q, %q or %q", a.Cloud, azuread.AzurePublic, azuread.AzureGovernment, azuread.AzureChina)
+	}
+
+	_, err := uuid.Parse(a.ManagedIdentity.ClientID)
+	if err != nil {
+		return fmt.Errorf("the provided Azure Managed Identity client_id is invalid")
+	}
+
+	return nil
+}
+
+// SetToDefault implements river.Defaulter.
+func (a *AzureADConfig) SetToDefault() {
+	*a = AzureADConfig{
+		Cloud: azuread.AzurePublic,
+	}
+}
+
+func (a *AzureADConfig) toPrometheusType() *azuread.AzureADConfig {
+	if a == nil {
+		return nil
+	}
+
+	managedIdentity := a.ManagedIdentity.toPrometheusType()
+	return &azuread.AzureADConfig{
+		ManagedIdentity: &managedIdentity,
+		Cloud:           a.Cloud,
+	}
+}
+
+type SigV4Config struct {
+	Region    string            `river:"region,attr,optional"`
+	AccessKey string            `river:"access_key,attr,optional"`
+	SecretKey rivertypes.Secret `river:"secret_key,attr,optional"`
+	Profile   string            `river:"profile,attr,optional"`
+	RoleARN   string            `river:"role_arn,attr,optional"`
+}
+
+func (s *SigV4Config) Validate() error {
+	if (s.AccessKey == "") != (s.SecretKey == "") {
+		return fmt.Errorf("must provide both an AWS SigV4 access key and secret key when credentials are specified in the SigV4 config")
+	}
+	return nil
+}
+
+func (s *SigV4Config) toPrometheusType() *promsigv4.SigV4Config {
+	if s == nil {
+		return nil
+	}
+
+	return &promsigv4.SigV4Config{
+		Region:    s.Region,
+		AccessKey: s.AccessKey,
+		SecretKey: common.Secret(s.SecretKey),
+		Profile:   s.Profile,
+		RoleARN:   s.RoleARN,
+	}
+}
diff --git a/component/prometheus/remotewrite/types_test.go b/component/prometheus/remotewrite/types_test.go
index a8b721ac92f6..b9dc4970b990 100644
--- a/component/prometheus/remotewrite/types_test.go
+++ b/component/prometheus/remotewrite/types_test.go
@@ -1,61 +1,280 @@
 package remotewrite
 
 import (
+	"net/url"
 	"testing"
+	"time"
 
 	"github.com/grafana/river"
+	commonconfig "github.com/prometheus/common/config"
+	"github.com/prometheus/common/model"
+	"github.com/prometheus/common/sigv4"
+	"github.com/prometheus/prometheus/config"
+	"github.com/prometheus/prometheus/model/labels"
+	"github.com/prometheus/prometheus/model/relabel"
+	"github.com/prometheus/prometheus/storage/remote/azuread"
 	"github.com/stretchr/testify/require"
 )
 
+func expectedCfg(transform func(c *config.Config)) *config.Config {
+	// Initialize this with the default expected config.
+	res := &config.Config{
+		GlobalConfig: config.GlobalConfig{
+			ExternalLabels: labels.Labels{},
+		},
+		RemoteWriteConfigs: []*config.RemoteWriteConfig{
+			{
+				URL: &commonconfig.URL{
+					URL: &url.URL{
+						Scheme: "http",
+						Host:   "0.0.0.0:11111",
+						Path:   `/api/v1/write`,
+					},
+				},
+				RemoteTimeout:       model.Duration(30 * time.Second),
+				WriteRelabelConfigs: []*relabel.Config{},
+				SendExemplars:       true,
+				HTTPClientConfig: commonconfig.HTTPClientConfig{
+					FollowRedirects: true,
+					EnableHTTP2:     true,
+				},
+				QueueConfig: config.QueueConfig{
+					Capacity:          10000,
+					MaxShards:         50,
+					MinShards:         1,
+					MaxSamplesPerSend: 2000,
+					BatchSendDeadline: model.Duration(5 * time.Second),
+					MinBackoff:        model.Duration(30 * time.Millisecond),
+					MaxBackoff:        model.Duration(5 * time.Second),
+					RetryOnRateLimit:  true,
+				},
+				MetadataConfig: config.MetadataConfig{
+					Send:              true,
+					SendInterval:      model.Duration(1 * time.Minute),
+					MaxSamplesPerSend: 2000,
+				},
+			},
+		},
+	}
+
+	if transform != nil {
+		transform(res)
+	}
+	return res
+}
+
 func TestRiverConfig(t *testing.T) {
-	var exampleRiverConfig = `
-	external_labels = {
-		cluster = "local",
-	}
-
-	endpoint {
-		name = "test-url"
-		url = "http://0.0.0.0:11111/api/v1/write"
-		remote_timeout = "100ms"
-
-		queue_config {
-			batch_send_deadline = "100ms"
+	tests := []struct {
+		testName    string
+		cfg         string
+		expectedCfg *config.Config
+		errorMsg    string
+	}{
+		{
+			testName: "Endpoint_Defaults",
+			cfg: `
+			endpoint {
+				url = "http://0.0.0.0:11111/api/v1/write"
 			}
+			`,
+			expectedCfg: 
expectedCfg(nil), + }, + { + testName: "RelabelConfig", + cfg: ` + external_labels = { + cluster = "local", + } + + endpoint { + name = "test-url" + url = "http://0.0.0.0:11111/api/v1/write" + remote_timeout = "100ms" + + queue_config { + batch_send_deadline = "100ms" + } + + write_relabel_config { + source_labels = ["instance"] + target_label = "instance" + action = "lowercase" + } + }`, + expectedCfg: expectedCfg(func(c *config.Config) { + relabelCfg := &relabel.DefaultRelabelConfig + relabelCfg.SourceLabels = model.LabelNames{"instance"} + relabelCfg.TargetLabel = "instance" + relabelCfg.Action = "lowercase" + + c.GlobalConfig.ExternalLabels = labels.FromMap(map[string]string{ + "cluster": "local", + }) + c.RemoteWriteConfigs[0].Name = "test-url" + c.RemoteWriteConfigs[0].RemoteTimeout = model.Duration(100 * time.Millisecond) + c.RemoteWriteConfigs[0].QueueConfig.BatchSendDeadline = model.Duration(100 * time.Millisecond) + c.RemoteWriteConfigs[0].WriteRelabelConfigs = []*relabel.Config{ + relabelCfg, + } + }), + }, + { + testName: "AzureAD_Defaults", + cfg: ` + endpoint { + url = "http://0.0.0.0:11111/api/v1/write" + + azuread { + managed_identity { + client_id = "00000000-0000-0000-0000-000000000000" + } + } + }`, + expectedCfg: expectedCfg(func(c *config.Config) { + c.RemoteWriteConfigs[0].AzureADConfig = &azuread.AzureADConfig{ + Cloud: "AzurePublic", + ManagedIdentity: &azuread.ManagedIdentityConfig{ + ClientID: "00000000-0000-0000-0000-000000000000", + }, + } + }), + }, + { + testName: "AzureAD_Explicit", + cfg: ` + endpoint { + url = "http://0.0.0.0:11111/api/v1/write" - write_relabel_config { - source_labels = ["instance"] - target_label = "instance" - action = "lowercase" + azuread { + cloud = "AzureChina" + managed_identity { + client_id = "00000000-0000-0000-0000-000000000000" + } + } + }`, + expectedCfg: expectedCfg(func(c *config.Config) { + c.RemoteWriteConfigs[0].AzureADConfig = &azuread.AzureADConfig{ + Cloud: "AzureChina", + ManagedIdentity: &azuread.ManagedIdentityConfig{ + ClientID: "00000000-0000-0000-0000-000000000000", + }, + } + }), + }, + { + testName: "SigV4_Defaults", + cfg: ` + endpoint { + url = "http://0.0.0.0:11111/api/v1/write" + + sigv4 {} + }`, + expectedCfg: expectedCfg(func(c *config.Config) { + c.RemoteWriteConfigs[0].SigV4Config = &sigv4.SigV4Config{} + }), + }, + { + testName: "SigV4_Explicit", + cfg: ` + endpoint { + url = "http://0.0.0.0:11111/api/v1/write" + + sigv4 { + region = "us-east-1" + access_key = "example_access_key" + secret_key = "example_secret_key" + profile = "example_profile" + role_arn = "example_role_arn" + } + }`, + expectedCfg: expectedCfg(func(c *config.Config) { + c.RemoteWriteConfigs[0].SigV4Config = &sigv4.SigV4Config{ + Region: "us-east-1", + AccessKey: "example_access_key", + SecretKey: commonconfig.Secret("example_secret_key"), + Profile: "example_profile", + RoleARN: "example_role_arn", + } + }), + }, + { + testName: "TooManyAuth1", + cfg: ` + endpoint { + url = "http://0.0.0.0:11111/api/v1/write" + + sigv4 {} + bearer_token = "token" + }`, + errorMsg: "at most one of sigv4, azuread, basic_auth, oauth2, bearer_token & bearer_token_file must be configured", + }, + { + testName: "TooManyAuth2", + cfg: ` + endpoint { + url = "http://0.0.0.0:11111/api/v1/write" + + sigv4 {} + azuread { + managed_identity { + client_id = "00000000-0000-0000-0000-000000000000" + } + } + }`, + errorMsg: "at most one of sigv4, azuread, basic_auth, oauth2, bearer_token & bearer_token_file must be configured", + }, + { + testName: 
"BadAzureClientId", + cfg: ` + endpoint { + url = "http://0.0.0.0:11111/api/v1/write" + + azuread { + managed_identity { + client_id = "bad_client_id" + } + } + }`, + errorMsg: "the provided Azure Managed Identity client_id provided is invalid", + }, + { + // Make sure the squashed HTTPClientConfig Validate function is being utilized correctly + testName: "BadBearerConfig", + cfg: ` + external_labels = { + cluster = "local", } - } -` - var args Arguments - err := river.Unmarshal([]byte(exampleRiverConfig), &args) - require.NoError(t, err) -} + endpoint { + name = "test-url" + url = "http://0.0.0.0:11111/api/v1/write" + remote_timeout = "100ms" + bearer_token = "token" + bearer_token_file = "/path/to/file.token" + + queue_config { + batch_send_deadline = "100ms" + } + }`, + errorMsg: "at most one of bearer_token & bearer_token_file must be configured", + }, + } -func TestBadRiverConfig(t *testing.T) { - var exampleRiverConfig = ` - external_labels = { - cluster = "local", - } - - endpoint { - name = "test-url" - url = "http://0.0.0.0:11111/api/v1/write" - remote_timeout = "100ms" - bearer_token = "token" - bearer_token_file = "/path/to/file.token" - - queue_config { - batch_send_deadline = "100ms" + for _, tc := range tests { + t.Run(tc.testName, func(t *testing.T) { + var args Arguments + err := river.Unmarshal([]byte(tc.cfg), &args) + + if tc.errorMsg != "" { + require.ErrorContains(t, err, tc.errorMsg) + return } - } -` + require.NoError(t, err) + + promCfg, err := convertConfigs(args) + require.NoError(t, err) - // Make sure the squashed HTTPClientConfig Validate function is being utilized correctly - var args Arguments - err := river.Unmarshal([]byte(exampleRiverConfig), &args) - require.ErrorContains(t, err, "at most one of bearer_token & bearer_token_file must be configured") + require.Equal(t, tc.expectedCfg, promCfg) + }) + } } diff --git a/component/prometheus/scrape/scrape.go b/component/prometheus/scrape/scrape.go index a0f10935d8c3..1501f0ace870 100644 --- a/component/prometheus/scrape/scrape.go +++ b/component/prometheus/scrape/scrape.go @@ -54,7 +54,7 @@ type Arguments struct { // A set of query parameters with which the target is scraped. Params url.Values `river:"params,attr,optional"` // Whether to scrape a classic histogram that is also exposed as a native histogram. - ScrapeClassicHistograms bool `river:"scrape_classic_histogram,attr,optional"` + ScrapeClassicHistograms bool `river:"scrape_classic_histograms,attr,optional"` // How frequently to scrape the targets of this scrape config. ScrapeInterval time.Duration `river:"scrape_interval,attr,optional"` // The timeout for scraping targets of this config. 
diff --git a/component/pyroscope/ebpf/args.go b/component/pyroscope/ebpf/args.go index b43c81a0469e..e3f1be4f5f11 100644 --- a/component/pyroscope/ebpf/args.go +++ b/component/pyroscope/ebpf/args.go @@ -21,4 +21,5 @@ type Arguments struct { CacheRounds int `river:"cache_rounds,attr,optional"` CollectUserProfile bool `river:"collect_user_profile,attr,optional"` CollectKernelProfile bool `river:"collect_kernel_profile,attr,optional"` + Demangle string `river:"demangle,attr,optional"` } diff --git a/component/pyroscope/ebpf/ebpf_linux.go b/component/pyroscope/ebpf/ebpf_linux.go index 3b15d028c20b..6bdbe02d8ccf 100644 --- a/component/pyroscope/ebpf/ebpf_linux.go +++ b/component/pyroscope/ebpf/ebpf_linux.go @@ -17,6 +17,8 @@ import ( "github.com/grafana/pyroscope/ebpf/pprof" "github.com/grafana/pyroscope/ebpf/sd" "github.com/grafana/pyroscope/ebpf/symtab" + "github.com/grafana/pyroscope/ebpf/symtab/elf" + "github.com/ianlancetaylor/demangle" "github.com/oklog/run" ) @@ -81,6 +83,7 @@ func defaultArguments() Arguments { CollectUserProfile: true, CollectKernelProfile: true, TargetsOnly: true, + Demangle: "none", } } @@ -230,6 +233,10 @@ func convertSessionOptions(args Arguments, ms *metrics) ebpfspy.SessionOptions { CollectKernel: args.CollectKernelProfile, SampleRate: args.SampleRate, CacheOptions: symtab.CacheOptions{ + SymbolOptions: symtab.SymbolOptions{ + GoTableFallback: false, + DemangleOptions: convertDemangleOptions(args.Demangle), + }, PidCacheOptions: symtab.GCacheOptions{ Size: args.PidCacheSize, KeepRounds: args.CacheRounds, @@ -246,3 +253,18 @@ func convertSessionOptions(args Arguments, ms *metrics) ebpfspy.SessionOptions { }, } } + +func convertDemangleOptions(o string) []demangle.Option { + switch o { + case "none": + return elf.DemangleNone + case "simplified": + return elf.DemangleSimplified + case "templates": + return elf.DemangleTemplates + case "full": + return elf.DemangleFull + default: + return elf.DemangleNone + } +} diff --git a/component/registry.go b/component/registry.go index 4425b11b096b..f544bf8287f7 100644 --- a/component/registry.go +++ b/component/registry.go @@ -118,29 +118,6 @@ type Registration struct { // any number of underscores or alphanumeric ASCII characters. Name string - // A singleton component only supports one instance of itself across the - // whole process. Normally, multiple components of the same type may be - // created. - // - // The fully-qualified name of a component is the combination of River block - // name and all of its labels. Fully-qualified names must be unique across - // the process. Components which are *NOT* singletons automatically support - // user-supplied identifiers: - // - // // Fully-qualified names: remote.s3.object-a, remote.s3.object-b - // remote.s3 "object-a" { ... } - // remote.s3 "object-b" { ... } - // - // This allows for multiple instances of the same component to be defined. - // However, components registered as a singleton do not support user-supplied - // identifiers: - // - // node_exporter { ... } - // - // This prevents the user from defining multiple instances of node_exporter - // with different fully-qualified names. - Singleton bool - // An example Arguments value that the registered component expects to // receive as input. Components should provide the zero value of their // Arguments type here. 
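A minimal River sketch of the new demangle argument (illustrative only;
accepted values, per convertDemangleOptions above, are "none", "simplified",
"templates" and "full", with unknown values falling back to "none"; the
forward_to attribute and the pyroscope.write.default component are assumed):

    pyroscope.ebpf "example" {
      forward_to = [pyroscope.write.default.receiver]
      demangle   = "simplified"
    }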
diff --git a/converter/converter.go b/converter/converter.go index e53bde62b62e..1bf5857a5d3c 100644 --- a/converter/converter.go +++ b/converter/converter.go @@ -8,6 +8,7 @@ import ( "github.com/grafana/agent/converter/diag" "github.com/grafana/agent/converter/internal/prometheusconvert" "github.com/grafana/agent/converter/internal/promtailconvert" + "github.com/grafana/agent/converter/internal/staticconvert" ) // Input represents the type of config file being fed into the converter. @@ -18,11 +19,14 @@ const ( InputPrometheus Input = "prometheus" // InputPromtail indicates that the input file is a promtail YAML file. InputPromtail Input = "promtail" + // InputStatic indicates that the input file is a grafana agent static YAML file. + InputStatic Input = "static" ) var SupportedFormats = []string{ string(InputPrometheus), string(InputPromtail), + string(InputStatic), } // Convert generates a Grafana Agent Flow config given an input configuration @@ -43,6 +47,8 @@ func Convert(in []byte, kind Input) ([]byte, diag.Diagnostics) { return prometheusconvert.Convert(in) case InputPromtail: return promtailconvert.Convert(in) + case InputStatic: + return staticconvert.Convert(in) } var diags diag.Diagnostics diff --git a/converter/internal/prometheusconvert/remote_write.go b/converter/internal/prometheusconvert/remote_write.go index 6115a7e9e50b..3ecd1e51d795 100644 --- a/converter/internal/prometheusconvert/remote_write.go +++ b/converter/internal/prometheusconvert/remote_write.go @@ -8,7 +8,10 @@ import ( "github.com/grafana/agent/component/prometheus/remotewrite" "github.com/grafana/agent/converter/diag" "github.com/grafana/agent/converter/internal/common" + "github.com/grafana/river/rivertypes" + "github.com/prometheus/common/sigv4" prom_config "github.com/prometheus/prometheus/config" + "github.com/prometheus/prometheus/storage/remote/azuread" ) func appendPrometheusRemoteWrite(pb *prometheusBlocks, globalConfig prom_config.GlobalConfig, remoteWriteConfigs []*prom_config.RemoteWriteConfig, label string) *remotewrite.Exports { @@ -40,10 +43,6 @@ func appendPrometheusRemoteWrite(pb *prometheusBlocks, globalConfig prom_config. func validateRemoteWriteConfig(remoteWriteConfig *prom_config.RemoteWriteConfig) diag.Diagnostics { var diags diag.Diagnostics - if remoteWriteConfig.SigV4Config != nil { - diags.Add(diag.SeverityLevelError, "unsupported remote_write sigv4 config was provided") - } - diags.AddAll(ValidateHttpClientConfig(&remoteWriteConfig.HTTPClientConfig)) return diags } @@ -76,6 +75,8 @@ func getEndpointOptions(remoteWriteConfigs []*prom_config.RemoteWriteConfig) []* QueueOptions: toQueueOptions(&remoteWriteConfig.QueueConfig), MetadataOptions: toMetadataOptions(&remoteWriteConfig.MetadataConfig), WriteRelabelConfigs: ToFlowRelabelConfigs(remoteWriteConfig.WriteRelabelConfigs), + SigV4: toSigV4(remoteWriteConfig.SigV4Config), + AzureAD: toAzureAD(remoteWriteConfig.AzureADConfig), } endpoints = append(endpoints, endpoint) @@ -104,3 +105,32 @@ func toMetadataOptions(metadataConfig *prom_config.MetadataConfig) *remotewrite. MaxSamplesPerSend: metadataConfig.MaxSamplesPerSend, } } + +// toSigV4 converts a Prometheus SigV4 config to a River SigV4 config. 
+func toSigV4(sigv4Config *sigv4.SigV4Config) *remotewrite.SigV4Config { + if sigv4Config == nil { + return nil + } + + return &remotewrite.SigV4Config{ + Region: sigv4Config.Region, + AccessKey: sigv4Config.AccessKey, + SecretKey: rivertypes.Secret(sigv4Config.SecretKey), + Profile: sigv4Config.Profile, + RoleARN: sigv4Config.RoleARN, + } +} + +// toAzureAD converts a Prometheus AzureAD config to a River AzureAD config. +func toAzureAD(azureADConfig *azuread.AzureADConfig) *remotewrite.AzureADConfig { + if azureADConfig == nil { + return nil + } + + return &remotewrite.AzureADConfig{ + Cloud: azureADConfig.Cloud, + ManagedIdentity: remotewrite.ManagedIdentityConfig{ + ClientID: azureADConfig.ManagedIdentity.ClientID, + }, + } +} diff --git a/converter/internal/prometheusconvert/testdata/scrape.river b/converter/internal/prometheusconvert/testdata/scrape.river index 2e1ef3665e15..0ca26d333fb9 100644 --- a/converter/internal/prometheusconvert/testdata/scrape.river +++ b/converter/internal/prometheusconvert/testdata/scrape.river @@ -43,7 +43,7 @@ prometheus.remote_write "default" { } endpoint { - name = "remote-1" + name = "remote1" url = "http://remote-write-url1" queue_config { } @@ -69,4 +69,63 @@ prometheus.remote_write "default" { metadata_config { } } + + endpoint { + name = "remote3_sigv4_defaults" + url = "http://localhost:9012/api/prom/push" + + queue_config { } + + metadata_config { } + + sigv4 { } + } + + endpoint { + name = "remote4_sigv4_explicit" + url = "http://localhost:9012/api/prom/push" + + queue_config { } + + metadata_config { } + + sigv4 { + region = "us-east-1" + access_key = "fake_access_key" + secret_key = "fake_secret_key" + profile = "fake_profile" + role_arn = "fake_role_arn" + } + } + + endpoint { + name = "remote5_azuread_defaults" + url = "http://localhost:9012/api/prom/push" + + queue_config { } + + metadata_config { } + + azuread { + managed_identity { + client_id = "00000000-0000-0000-0000-000000000000" + } + } + } + + endpoint { + name = "remote6_azuread_explicit" + url = "http://localhost:9012/api/prom/push" + + queue_config { } + + metadata_config { } + + azuread { + managed_identity { + client_id = "00000000-0000-0000-0000-000000000000" + } + cloud = "AzureGovernment" + } + } } diff --git a/converter/internal/prometheusconvert/testdata/scrape.yaml b/converter/internal/prometheusconvert/testdata/scrape.yaml index 196e0686108c..d4b1e7e203c7 100644 --- a/converter/internal/prometheusconvert/testdata/scrape.yaml +++ b/converter/internal/prometheusconvert/testdata/scrape.yaml @@ -22,7 +22,7 @@ scrape_configs: - targets: ["localhost:9093"] remote_write: - - name: "remote-1" + - name: "remote1" url: "http://remote-write-url1" write_relabel_configs: - source_labels: [__address1__] @@ -30,4 +30,26 @@ remote_write: - source_labels: [__address2__] target_label: __param_target2 - name: "remote2" - url: "http://remote-write-url2" \ No newline at end of file + url: "http://remote-write-url2" + - name: "remote3_sigv4_defaults" + url: http://localhost:9012/api/prom/push + sigv4: {} + - name: "remote4_sigv4_explicit" + url: http://localhost:9012/api/prom/push + sigv4: + region: us-east-1 + access_key: fake_access_key + secret_key: fake_secret_key + profile: fake_profile + role_arn: fake_role_arn + - name: "remote5_azuread_defaults" + url: http://localhost:9012/api/prom/push + azuread: + managed_identity: + client_id: 00000000-0000-0000-0000-000000000000 + - name: "remote6_azuread_explicit" + url: http://localhost:9012/api/prom/push + azuread: + cloud: AzureGovernment + 
managed_identity: + client_id: 00000000-0000-0000-0000-000000000000 diff --git a/converter/internal/prometheusconvert/testdata/unsupported.diags b/converter/internal/prometheusconvert/testdata/unsupported.diags index 3efc82ee60ab..6ea410da9bc5 100644 --- a/converter/internal/prometheusconvert/testdata/unsupported.diags +++ b/converter/internal/prometheusconvert/testdata/unsupported.diags @@ -7,7 +7,6 @@ (Error) unsupported native_histogram_bucket_limit for scrape_configs (Error) unsupported storage config was provided (Error) unsupported tracing config was provided -(Error) unsupported remote_write sigv4 config was provided (Error) unsupported HTTP Client config proxy_from_environment was provided (Error) unsupported HTTP Client config max_version was provided (Error) unsupported remote_read config was provided \ No newline at end of file diff --git a/converter/internal/prometheusconvert/testdata/unsupported.river b/converter/internal/prometheusconvert/testdata/unsupported.river index e275ee15edb3..a1882ff1eb5a 100644 --- a/converter/internal/prometheusconvert/testdata/unsupported.river +++ b/converter/internal/prometheusconvert/testdata/unsupported.river @@ -19,9 +19,9 @@ prometheus.scrape "prometheus2" { targets = [{ __address__ = "localhost:9091", }] - forward_to = [prometheus.remote_write.default.receiver] - job_name = "prometheus2" - scrape_classic_histogram = true + forward_to = [prometheus.remote_write.default.receiver] + job_name = "prometheus2" + scrape_classic_histograms = true } prometheus.remote_write "default" { diff --git a/converter/internal/prometheusconvert/testdata/unsupported.yaml b/converter/internal/prometheusconvert/testdata/unsupported.yaml index 5536f2d08d22..bf677c030a39 100644 --- a/converter/internal/prometheusconvert/testdata/unsupported.yaml +++ b/converter/internal/prometheusconvert/testdata/unsupported.yaml @@ -51,7 +51,5 @@ remote_write: proxy_from_environment: true tls_config: max_version: TLS13 - sigv4: - region: us-east-1 - name: "remote2" url: "http://remote-write-url2" \ No newline at end of file diff --git a/converter/internal/staticconvert/internal/build/builder.go b/converter/internal/staticconvert/internal/build/builder.go index 8612ed4f1901..9e99a619d32f 100644 --- a/converter/internal/staticconvert/internal/build/builder.go +++ b/converter/internal/staticconvert/internal/build/builder.go @@ -11,6 +11,7 @@ import ( "github.com/grafana/agent/pkg/integrations/apache_http" "github.com/grafana/agent/pkg/integrations/azure_exporter" "github.com/grafana/agent/pkg/integrations/blackbox_exporter" + "github.com/grafana/agent/pkg/integrations/cadvisor" "github.com/grafana/agent/pkg/integrations/cloudwatch_exporter" int_config "github.com/grafana/agent/pkg/integrations/config" "github.com/grafana/agent/pkg/integrations/consul_exporter" @@ -126,6 +127,8 @@ func (b *IntegrationsV1ConfigBuilder) appendIntegrations() { exports = b.appendWindowsExporter(itg) case *azure_exporter.Config: exports = b.appendAzureExporter(itg) + case *cadvisor.Config: + exports = b.appendCadvisorExporter(itg) } if len(exports.Targets) > 0 { diff --git a/converter/internal/staticconvert/internal/build/cadvisor_exporter.go b/converter/internal/staticconvert/internal/build/cadvisor_exporter.go new file mode 100644 index 000000000000..bbb1877633c4 --- /dev/null +++ b/converter/internal/staticconvert/internal/build/cadvisor_exporter.go @@ -0,0 +1,47 @@ +package build + +import ( + "fmt" + "time" + + "github.com/grafana/agent/component/discovery" + 
"github.com/grafana/agent/component/prometheus/exporter/cadvisor" + "github.com/grafana/agent/converter/internal/common" + "github.com/grafana/agent/converter/internal/prometheusconvert" + cadvisor_integration "github.com/grafana/agent/pkg/integrations/cadvisor" +) + +func (b *IntegrationsV1ConfigBuilder) appendCadvisorExporter(config *cadvisor_integration.Config) discovery.Exports { + args := toCadvisorExporter(config) + compLabel := common.LabelForParts(b.globalCtx.LabelPrefix, config.Name()) + b.f.Body().AppendBlock(common.NewBlockWithOverride( + []string{"prometheus", "exporter", "cadvisor"}, + compLabel, + args, + )) + + return prometheusconvert.NewDiscoveryExports(fmt.Sprintf("prometheus.exporter.cadvisor.%s.targets", compLabel)) +} + +func toCadvisorExporter(config *cadvisor_integration.Config) *cadvisor.Arguments { + return &cadvisor.Arguments{ + + StoreContainerLabels: config.StoreContainerLabels, + AllowlistedContainerLabels: config.AllowlistedContainerLabels, + EnvMetadataAllowlist: config.EnvMetadataAllowlist, + RawCgroupPrefixAllowlist: config.RawCgroupPrefixAllowlist, + PerfEventsConfig: config.PerfEventsConfig, + ResctrlInterval: time.Duration(config.ResctrlInterval), + DisabledMetrics: config.DisabledMetrics, + EnabledMetrics: config.EnabledMetrics, + StorageDuration: config.StorageDuration, + ContainerdHost: config.Containerd, + ContainerdNamespace: config.ContainerdNamespace, + DockerHost: config.Docker, + UseDockerTLS: config.DockerTLS, + DockerTLSCert: config.DockerTLSCert, + DockerTLSKey: config.DockerTLSKey, + DockerTLSCA: config.DockerTLSCA, + DockerOnly: config.DockerOnly, + } +} diff --git a/converter/internal/staticconvert/internal/build/node_exporter.go b/converter/internal/staticconvert/internal/build/node_exporter.go index 30ee3bdfc32e..f0b76e0a273e 100644 --- a/converter/internal/staticconvert/internal/build/node_exporter.go +++ b/converter/internal/staticconvert/internal/build/node_exporter.go @@ -12,11 +12,11 @@ func (b *IntegrationsV1ConfigBuilder) appendNodeExporter(config *node_exporter.C args := toNodeExporter(config) b.f.Body().AppendBlock(common.NewBlockWithOverride( []string{"prometheus", "exporter", "unix"}, - "", + "default", args, )) - return prometheusconvert.NewDiscoveryExports("prometheus.exporter.unix.targets") + return prometheusconvert.NewDiscoveryExports("prometheus.exporter.unix.default.targets") } func toNodeExporter(config *node_exporter.Config) *unix.Arguments { diff --git a/converter/internal/staticconvert/internal/build/redis_exporter.go b/converter/internal/staticconvert/internal/build/redis_exporter.go index 41a5254ec5fb..9de8db64ebc4 100644 --- a/converter/internal/staticconvert/internal/build/redis_exporter.go +++ b/converter/internal/staticconvert/internal/build/redis_exporter.go @@ -40,6 +40,7 @@ func toRedisExporter(config *redis_exporter.Config) *redis.Arguments { CheckSingleKeys: splitByCommaNullOnEmpty(config.CheckSingleKeys), CheckStreams: splitByCommaNullOnEmpty(config.CheckStreams), CheckSingleStreams: splitByCommaNullOnEmpty(config.CheckSingleStreams), + ExportKeyValues: config.ExportKeyValues, CountKeys: splitByCommaNullOnEmpty(config.CountKeys), ScriptPath: config.ScriptPath, ScriptPaths: nil, diff --git a/converter/internal/staticconvert/testdata/integrations.river b/converter/internal/staticconvert/testdata/integrations.river index cd0413aff620..5ccc6ecf6240 100644 --- a/converter/internal/staticconvert/testdata/integrations.river +++ b/converter/internal/staticconvert/testdata/integrations.river @@ -73,10 +73,9 @@ 
prometheus.scrape "integrations_snmp" { } prometheus.exporter.azure "integrations_azure_exporter" { - subscriptions = ["subId"] - resource_type = "Microsoft.Dashboard/grafana" - metrics = ["HttpRequestCount"] - included_resource_tags = ["owner"] + subscriptions = ["subId"] + resource_type = "Microsoft.Dashboard/grafana" + metrics = ["HttpRequestCount"] } prometheus.scrape "integrations_azure_exporter" { @@ -91,6 +90,20 @@ prometheus.scrape "integrations_azure_exporter" { } } +prometheus.exporter.cadvisor "integrations_cadvisor" { } + +prometheus.scrape "integrations_cadvisor" { + targets = prometheus.exporter.cadvisor.integrations_cadvisor.targets + forward_to = [prometheus.remote_write.integrations.receiver] + job_name = "integrations/cadvisor" + + tls_config { + ca_file = "/something7.cert" + cert_file = "/something8.cert" + key_file = "/something9.cert" + } +} + prometheus.exporter.cloudwatch "integrations_cloudwatch_exporter" { sts_region = "us-east-2" fips_disabled = false @@ -358,10 +371,10 @@ prometheus.scrape "integrations_mysqld_exporter" { } } -prometheus.exporter.unix { } +prometheus.exporter.unix "default" { } discovery.relabel "integrations_node_exporter" { - targets = prometheus.exporter.unix.targets + targets = prometheus.exporter.unix.default.targets rule { source_labels = ["__address__"] @@ -468,7 +481,8 @@ prometheus.scrape "integrations_process_exporter" { } prometheus.exporter.redis "integrations_redis_exporter" { - redis_addr = "redis-2:6379" + redis_addr = "redis-2:6379" + export_key_values = false } discovery.relabel "integrations_redis_exporter" { diff --git a/converter/internal/staticconvert/testdata/integrations.yaml b/converter/internal/staticconvert/testdata/integrations.yaml index 806779c0db09..afa2ed1046c5 100644 --- a/converter/internal/staticconvert/testdata/integrations.yaml +++ b/converter/internal/staticconvert/testdata/integrations.yaml @@ -159,6 +159,7 @@ integrations: redis_exporter: enabled: true redis_addr: "redis-2:6379" + export_key_values: false relabel_configs: - source_labels: [__address__] target_label: instance @@ -198,3 +199,5 @@ integrations: resource_type: "Microsoft.Dashboard/grafana" metrics: - "HttpRequestCount" + cadvisor: + enabled: true diff --git a/converter/internal/staticconvert/testdata/prom_remote_write.river b/converter/internal/staticconvert/testdata/prom_remote_write.river index 82f6d23e15cb..df5a9848a234 100644 --- a/converter/internal/staticconvert/testdata/prom_remote_write.river +++ b/converter/internal/staticconvert/testdata/prom_remote_write.river @@ -42,3 +42,70 @@ prometheus.remote_write "metrics_test3" { metadata_config { } } } + +prometheus.remote_write "metrics_test4_sigv4_defaults" { + endpoint { + name = "test4_sigv4_defaults-c42e88" + url = "http://localhost:9012/api/prom/push" + + queue_config { } + + metadata_config { } + + sigv4 { } + } +} + +prometheus.remote_write "metrics_test5_sigv4_explicit" { + endpoint { + name = "test5_sigv4_explicit-050ad5" + url = "http://localhost:9012/api/prom/push" + + queue_config { } + + metadata_config { } + + sigv4 { + region = "us-east-1" + access_key = "fake_access_key" + secret_key = "fake_secret_key" + profile = "fake_profile" + role_arn = "fake_role_arn" + } + } +} + +prometheus.remote_write "metrics_test6_azuread_defaults" { + endpoint { + name = "test6_azuread_defaults-50e17f" + url = "http://localhost:9012/api/prom/push" + + queue_config { } + + metadata_config { } + + azuread { + managed_identity { + client_id = "00000000-0000-0000-0000-000000000000" + } + } + } +} + 
+prometheus.remote_write "metrics_test7_azuread_explicit" { + endpoint { + name = "test7_azuread_explicit-0f55f1" + url = "http://localhost:9012/api/prom/push" + + queue_config { } + + metadata_config { } + + azuread { + managed_identity { + client_id = "00000000-0000-0000-0000-000000000000" + } + cloud = "AzureGovernment" + } + } +} diff --git a/converter/internal/staticconvert/testdata/prom_remote_write.yaml b/converter/internal/staticconvert/testdata/prom_remote_write.yaml index e3f302ebd997..ef548902ccb6 100644 --- a/converter/internal/staticconvert/testdata/prom_remote_write.yaml +++ b/converter/internal/staticconvert/testdata/prom_remote_write.yaml @@ -16,4 +16,29 @@ metrics: - url: http://localhost:9012/api/prom/push queue_config: retry_on_http_429: false - \ No newline at end of file + - name: "test4_sigv4_defaults" + remote_write: + - url: http://localhost:9012/api/prom/push + sigv4: {} + - name: "test5_sigv4_explicit" + remote_write: + - url: http://localhost:9012/api/prom/push + sigv4: + region: us-east-1 + access_key: fake_access_key + secret_key: fake_secret_key + profile: fake_profile + role_arn: fake_role_arn + - name: "test6_azuread_defaults" + remote_write: + - url: http://localhost:9012/api/prom/push + azuread: + managed_identity: + client_id: 00000000-0000-0000-0000-000000000000 + - name: "test7_azuread_explicit" + remote_write: + - url: http://localhost:9012/api/prom/push + azuread: + cloud: AzureGovernment + managed_identity: + client_id: 00000000-0000-0000-0000-000000000000 diff --git a/converter/internal/staticconvert/testdata/unsupported.diags b/converter/internal/staticconvert/testdata/unsupported.diags index 1ca9ab48e046..1e3ec0f4f371 100644 --- a/converter/internal/staticconvert/testdata/unsupported.diags +++ b/converter/internal/staticconvert/testdata/unsupported.diags @@ -7,7 +7,6 @@ (Error) unsupported prefer_server_cipher_suites server config was provided. (Error) unsupported wal_directory metrics config was provided. use the run command flag --storage.path for Flow mode instead. (Error) unsupported integration agent was provided. -(Error) unsupported integration cadvisor was provided. (Warning) disabled integrations do nothing and are not included in the output: node_exporter. (Error) unsupported traces config was provided. (Error) unsupported agent_management config was provided. 
\ No newline at end of file diff --git a/converter/internal/staticconvert/testdata/unsupported.yaml b/converter/internal/staticconvert/testdata/unsupported.yaml index 5051d182277f..3a2c004c5238 100644 --- a/converter/internal/staticconvert/testdata/unsupported.yaml +++ b/converter/internal/staticconvert/testdata/unsupported.yaml @@ -23,8 +23,6 @@ metrics: integrations: agent: enabled: true - cadvisor: - enabled: true mssql: enabled: true scrape_integration: false @@ -61,4 +59,4 @@ logs: - name: log_config agent_management: - host: host_name \ No newline at end of file + host: host_name diff --git a/converter/internal/staticconvert/testdata_windows/integrations.river b/converter/internal/staticconvert/testdata_windows/integrations.river index ffb5c26e2858..d550abac368e 100644 --- a/converter/internal/staticconvert/testdata_windows/integrations.river +++ b/converter/internal/staticconvert/testdata_windows/integrations.river @@ -23,18 +23,8 @@ http { } prometheus.exporter.windows "integrations_windows_exporter" { - enabled_collectors = ["cpu", "cs", "logical_disk", "net", "os", "service", "system"] - - dfsr { - sources_enabled = ["connection", "folder", "volume"] - } - exchange { } - mssql { - enabled_classes = ["accessmethods", "availreplica", "bufman", "databases", "dbreplica", "genstats", "locks", "memmgr", "sqlstats", "sqlerrors", "transactions", "waitstats"] - } - network { exclude = ".+" } diff --git a/converter/internal/staticconvert/validate.go b/converter/internal/staticconvert/validate.go index acf67e1c4090..8f2eb027a93b 100644 --- a/converter/internal/staticconvert/validate.go +++ b/converter/internal/staticconvert/validate.go @@ -9,6 +9,7 @@ import ( "github.com/grafana/agent/pkg/integrations/apache_http" "github.com/grafana/agent/pkg/integrations/azure_exporter" "github.com/grafana/agent/pkg/integrations/blackbox_exporter" + "github.com/grafana/agent/pkg/integrations/cadvisor" "github.com/grafana/agent/pkg/integrations/cloudwatch_exporter" "github.com/grafana/agent/pkg/integrations/consul_exporter" "github.com/grafana/agent/pkg/integrations/dnsmasq_exporter" @@ -130,6 +131,7 @@ func validateIntegrations(integrationsConfig config.VersionedIntegrations) diag. case *statsd_exporter.Config: case *windows_exporter.Config: case *azure_exporter.Config: + case *cadvisor.Config: default: diags.Add(diag.SeverityLevelError, fmt.Sprintf("unsupported integration %s was provided.", itg.Name())) } } diff --git a/docs/developer/writing-flow-component-documentation.md b/docs/developer/writing-flow-component-documentation.md index 7a5b8914f6f6..ddaf6466e021 100644 --- a/docs/developer/writing-flow-component-documentation.md +++ b/docs/developer/writing-flow-component-documentation.md @@ -560,4 +560,15 @@ The [loki.source.podlogs][] component documentation needed to add an extra section to document the PodLogs CRD, since we do not yet have a way of documenting auxiliary artifacts which are related to a component. +### otelcol.processor.transform + +The [otelcol.processor.transform][] component documentation needed to add +an extra section about OTTL Contexts because there is no appropriate OTEL docs page +that we could link to. Currently this information is housed on the [transformprocessor][] +doc page, but because it contains yaml config for the Collector, users might get confused +about how this maps to River, so it is better not to link to it. In the future we could try to +move this information from [transformprocessor][] to the [OTTL Context][ottl context] doc.
+ [loki.source.podlogs]: ../sources/flow/reference/components/loki.source.podlogs.md +[otelcol.processor.transform]: ../sources/flow/reference/components/otelcol.processor.transform.md +[ottl context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/ottl/contexts/README.md \ No newline at end of file diff --git a/docs/sources/_index.md b/docs/sources/_index.md index 659f0249f4ae..a589b6545f2e 100644 --- a/docs/sources/_index.md +++ b/docs/sources/_index.md @@ -21,7 +21,7 @@ form programmable observability **pipelines** for telemetry collection, processing, and delivery. {{% admonition type="note" %}} -This page focuses mainly on [Flow mode]({{< relref "./flow" >}}), the Terraform-inspired variant of Grafana Agent. +This page focuses mainly on [Flow mode][], the Terraform-inspired variant of Grafana Agent. For information on other variants of Grafana Agent, refer to [Introduction to Grafana Agent]({{< relref "./about.md" >}}). {{% /admonition %}} @@ -56,8 +56,6 @@ Grafana Agent can collect, transform, and send data to: * **Batteries included**: Integrate with systems like MySQL, Kubernetes, and Apache to get telemetry that's immediately useful. -[UI]: {{< relref "./flow/monitoring/debugging.md#grafana-agent-flow-ui" >}} - ## Getting started * Choose a [variant][variants] of Grafana Agent to run. @@ -66,11 +64,6 @@ Grafana Agent can collect, transform, and send data to: * [Static mode Kubernetes operator][] * [Flow mode][] -[variants]: {{< relref "./about.md" >}} -[Static mode]: {{< relref "./static" >}} -[Static mode Kubernetes operator]: {{< relref "./operator" >}} -[Flow mode]: {{< relref "./flow" >}} - ## Supported platforms * Linux @@ -106,3 +99,20 @@ one minor release is moved. Patch and security releases may be created at any time. [Milestones]: https://github.com/grafana/agent/milestones + +{{% docs/reference %}} +[variants]: "/docs/agent/ -> /docs/agent//about" +[variants]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/about" + +[Static mode]: "/docs/agent/ -> /docs/agent//static" +[Static mode]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/static" + +[Static mode Kubernetes operator]: "/docs/agent/ -> /docs/agent//operator" +[Static mode Kubernetes operator]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/operator" + +[Flow mode]: "/docs/agent/ -> /docs/agent//flow" +[Flow mode]: "/docs/grafana-cloud/ -> /docs/agent//flow" + +[UI]: "/docs/agent/ -> /docs/agent//flow/monitoring/debugging.md#grafana-agent-flow-ui" +[UI]: "/docs/grafana-cloud/ -> /docs/agent//flow/monitoring/debugging.md#grafana-agent-flow-ui" +{{% /docs/reference %}} diff --git a/docs/sources/about.md b/docs/sources/about.md index a02585b56ebc..f61cd5e7f51b 100644 --- a/docs/sources/about.md +++ b/docs/sources/about.md @@ -19,13 +19,20 @@ such as Prometheus and OpenTelemetry. Grafana Agent is available in three different variants: -* [Static mode][]: The default, original variant of Grafana Agent. -* [Static mode Kubernetes operator][]: Variant which manages agents running in Static mode. -* [Flow mode][]: The newer, more flexible re-imagining variant of Grafana Agent. +- [Static mode][]: The default, original variant of Grafana Agent. +- [Static mode Kubernetes operator][]: Variant which manages agents running in Static mode. +- [Flow mode][]: The newer, more flexible re-imagining variant of Grafana Agent. 
-[Static mode]: {{< relref "./static/" >}} -[Static mode Kubernetes operator]: {{< relref "./operator/" >}} -[Flow mode]: {{< relref "./flow/" >}} +{{% docs/reference %}} +[Static mode]: "/docs/agent/ -> /docs/agent//static" +[Static mode]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/static" + +[Static mode Kubernetes operator]: "/docs/agent/ -> /docs/agent//operator" +[Static mode Kubernetes operator]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/operator" + +[Flow mode]: "/docs/agent/ -> /docs/agent//flow" +[Flow mode]: "/docs/grafana-cloud/ -> /docs/agent//flow" +{{% /docs/reference %}} ## Stability @@ -107,6 +114,10 @@ You should run Flow mode when: [BoringCrypto](https://pkg.go.dev/crypto/internal/boring) is an **EXPERIMENTAL** feature for building Grafana Agent binaries and images with BoringCrypto enabled. Builds and Docker images for Linux arm64/amd64 are made available. -[integrations]: {{< relref "./static/configuration/integrations/" >}} -[components]: {{< relref "./flow/reference/components/" >}} +{{% docs/reference %}} +[integrations]: "/docs/agent/ -> /docs/agent//static/configuration/integrations" +[integrations]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/static/configuration/integrations" +[components]: "/docs/agent/ -> /docs/agent//flow/reference/components" +[components]: "/docs/grafana-cloud/ -> /docs/agent//flow/reference/components" +{{% /docs/reference %}} diff --git a/docs/sources/flow/getting-started/migrating-from-prometheus.md b/docs/sources/flow/getting-started/migrating-from-prometheus.md index b6a2b73bc185..0417d3c27260 100644 --- a/docs/sources/flow/getting-started/migrating-from-prometheus.md +++ b/docs/sources/flow/getting-started/migrating-from-prometheus.md @@ -147,7 +147,9 @@ remote_write: password: PASSWORD ``` -The convert command takes the YAML file as input and outputs a River file. +The convert command takes the YAML file as input and outputs a [River][] file. + +[River]: {{< relref "../config-language/_index.md" >}} ```bash AGENT_MODE=flow; grafana-agent convert --source-format=prometheus --output=OUTPUT_CONFIG_PATH INPUT_CONFIG_PATH ``` @@ -187,3 +189,28 @@ prometheus.remote_write "default" { } } ``` + +## Limitations + +Configuration conversion is done on a best-effort basis. The Agent will issue +warnings or errors where the conversion cannot be performed. + +Once the configuration is converted, we recommend that you review +the newly created Flow mode configuration file and verify that it is correct +before starting to use it in a production environment. + +Furthermore, we recommend that you review the following checklist: + +* The following configurations are not available for conversion to Flow mode: + `rule_files`, `alerting`, `remote_read`, `storage`, and `tracing`. Any + additional unsupported features are returned as errors during conversion. +* Check if you are using any extra command line arguments with Prometheus that + are not present in your configuration file. For example, `--web.listen-address`. +* Metamonitoring metrics exposed by Flow mode usually match Prometheus + metamonitoring metrics but will use a different name. Make sure that you use + the new metric names, for example, in your alerts and dashboards queries. +* The logs produced by Grafana Agent will differ from those + produced by Prometheus. +* Grafana Agent exposes the [Grafana Agent Flow UI][].
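If the conversion stops on one of the unsupported Prometheus configurations listed above, a best-effort output can still be produced. As a sketch, this reuses the convert invocation shown earlier with the `--bypass-errors` flag that the convert command documents; the input and output paths remain placeholders:

```bash
AGENT_MODE=flow; grafana-agent convert --source-format=prometheus --bypass-errors --output=OUTPUT_CONFIG_PATH INPUT_CONFIG_PATH
```

Bypassed sections are not converted, so review the emitted warnings before relying on the output.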
+ +[Grafana Agent Flow UI]: {{< relref "../monitoring/debugging/#grafana-agent-flow-ui" >}} \ No newline at end of file diff --git a/docs/sources/flow/getting-started/migrating-from-promtail.md b/docs/sources/flow/getting-started/migrating-from-promtail.md index 7d3d597fcc09..09b91ec0d4ad 100644 --- a/docs/sources/flow/getting-started/migrating-from-promtail.md +++ b/docs/sources/flow/getting-started/migrating-from-promtail.md @@ -148,7 +148,9 @@ scrape_configs: __path__: /var/log/*.log ``` -The convert command takes the YAML file as input and outputs a River file. +The convert command takes the YAML file as input and outputs a [River][] file. + +[River]: {{< relref "../config-language/_index.md" >}} ```bash AGENT_MODE=flow; grafana-agent convert --source-format=promtail --output=OUTPUT_CONFIG_PATH INPUT_CONFIG_PATH ``` @@ -189,7 +191,7 @@ before starting to use it in a production environment. Furthermore, we recommend that you review the following checklist: * Check if you are using any extra command line arguments with Promtail which - are not present in your config file, for example, `-max-line-size` + are not present in your config file. For example, `-max-line-size`. * Check if you are setting any environment variables, whether [expanded in the config file][] itself or consumed directly by Promtail, such as `JAEGER_AGENT_HOST`. @@ -197,9 +199,9 @@ Furthermore, we recommend that you review the following checklist: Refer to the [loki.source.file][] documentation for more details. Check if you have any existing setup, for example, a Kubernetes Persistent Volume, that you must update to use the new positions file path. -* Metrics exposed by the Flow Mode usually match Promtail metrics but - will use a different name. Make sure that you use the new metric names, for example, - in your alerts and dashboards queries. +* Metamonitoring metrics exposed by Flow mode usually match Promtail + metamonitoring metrics but will use a different name. Make sure that you + use the new metric names, for example, in your alerts and dashboards queries. * Note that the logs produced by the Agent will differ from those produced by Promtail.
* Note that the Agent exposes the [Grafana Agent Flow UI][], which differs @@ -207,4 +209,4 @@ Furthermore, we recommend that you review the following checklist: [expanded in the config file]: /docs/loki/latest/clients/promtail/configuration/#use-environment-variables-in-the-configuration -[Grafana Agent Flow UI]: /docs/agent/latest/flow/monitoring/debugging/#grafana-agent-flow-ui +[Grafana Agent Flow UI]: {{< relref "../monitoring/debugging/#grafana-agent-flow-ui" >}} diff --git a/docs/sources/flow/getting-started/migrating-from-static.md b/docs/sources/flow/getting-started/migrating-from-static.md new file mode 100644 index 000000000000..b50297743edf --- /dev/null +++ b/docs/sources/flow/getting-started/migrating-from-static.md @@ -0,0 +1,320 @@ +--- +aliases: +- /docs/grafana-cloud/agent/flow/getting-started/migrating-from-static/ +- /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/migrating-from-static/ +- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/migrating-from-static/ +canonical: https://grafana.com/docs/agent/latest/flow/getting-started/migrating-from-static/ +description: Learn how to migrate your configuration from Grafana Agent Static mode to Flow mode +menuTitle: Migrate from Static mode to Flow mode +title: Migrate Grafana Agent from Static mode to Flow mode +weight: 340 +--- + +# Migrate Grafana Agent from Static mode to Flow mode + +The built-in Grafana Agent convert command can migrate your [Static][] mode +configuration to a Flow mode configuration. + +This topic describes how to: + +* Convert a Grafana Agent Static mode configuration to a Flow mode configuration. +* Run a Grafana Agent Static mode configuration natively using Grafana Agent Flow mode. + +[Static]: {{< relref "../../static/_index.md" >}} + +## Components used in this topic + +* [prometheus.scrape][] +* [prometheus.remote_write][] +* [local.file_match][] +* [loki.process][] +* [loki.source.file][] +* [loki.write][] + +[prometheus.scrape]: {{< relref "../reference/components/prometheus.scrape.md" >}} +[prometheus.remote_write]: {{< relref "../reference/components/prometheus.remote_write.md" >}} +[local.file_match]: {{< relref "../reference/components/local.file_match.md" >}} +[loki.process]: {{< relref "../reference/components/loki.process.md" >}} +[loki.source.file]: {{< relref "../reference/components/loki.source.file.md" >}} +[loki.write]: {{< relref "../reference/components/loki.write.md" >}} + +## Before you begin + +* You must have an existing Grafana Agent Static mode configuration. +* You must be familiar with the [Components][] concept in Grafana Agent Flow mode. + +[Components]: {{< relref "../concepts/components.md" >}} +[convert]: {{< relref "../reference/cli/convert.md" >}} +[run]: {{< relref "../reference/cli/run.md" >}} +[Start the agent]: {{< relref "../setup/start-agent.md" >}} +[Flow Debugging]: {{< relref "../monitoring/debugging.md" >}} +[debugging]: #debugging + +## Convert a Static mode configuration + +To fully migrate Grafana Agent from [Static][] mode to Flow mode, you must convert +your Static mode configuration into a Flow mode configuration. +This conversion will enable you to take full advantage of the many additional +features available in Grafana Agent Flow mode. + +> In this task, we will use the [convert][] CLI command to output a Flow mode +> configuration from a Static mode configuration. + +1. 
Open a terminal window and run the following command: + + ```bash + AGENT_MODE=flow; grafana-agent convert --source-format=static --output=OUTPUT_CONFIG_PATH INPUT_CONFIG_PATH + ``` + + Replace the following: + * `INPUT_CONFIG_PATH`: The full path to the [Static][] configuration. + * `OUTPUT_CONFIG_PATH`: The full path to output the flow configuration. + +1. [Start the Agent][] in Flow mode using the new Flow mode configuration + from `OUTPUT_CONFIG_PATH`. + +### Debugging + +1. If the convert command cannot convert a [Static] mode configuration, diagnostic + information is sent to `stderr`. You can use the `--bypass-errors` flag to + bypass any non-critical issues and output the Flow mode configuration + using a best-effort conversion. + + {{% admonition type="caution" %}}If you bypass the errors, the behavior of the converted configuration may not match the original [Static] mode configuration. Make sure you fully test the converted configuration before using it in a production environment.{{% /admonition %}} + + ```bash + AGENT_MODE=flow; grafana-agent convert --source-format=static --bypass-errors --output=OUTPUT_CONFIG_PATH INPUT_CONFIG_PATH + ``` + +1. You can use the `--report` flag to output a diagnostic report. + + ```bash + AGENT_MODE=flow; grafana-agent convert --source-format=static --report=OUTPUT_REPORT_PATH --output=OUTPUT_CONFIG_PATH INPUT_CONFIG_PATH + ``` + + * Replace `OUTPUT_REPORT_PATH` with the output path for the report. + + Using the [example](#example) Grafana Agent Static mode configuration below, the diagnostic + report provides the following information: + + ```plaintext + (Warning) global positions configuration is not supported - each Flow Mode's loki.source.file component has its own positions file in the component's data directory + (Warning) server.log_level is not supported - Flow mode components may produce different logs + (Warning) Please review your agent command line flags and ensure they are set in your Flow mode config file where necessary. + ``` + +## Run a Static mode configuration + +If you’re not ready to completely switch to a Flow mode configuration, you can run +Grafana Agent using your existing Grafana Agent Static mode configuration. +The `--config.format=static` flag tells Grafana Agent to convert your [Static] mode +configuration to Flow mode and load it directly without saving the new +configuration. This allows you to try Flow mode without modifying your existing +Grafana Agent Static mode configuration infrastructure. + +> In this task, we will use the [run][] CLI command to run Grafana Agent in Flow +> mode using a Static mode configuration. + +[Start the Agent][] in Flow mode and include the command line flag +`--config.format=static`. Your configuration file must be a valid [Static] +mode configuration file. + +### Debugging + +1. You can follow the convert CLI command [debugging][] instructions to generate + a diagnostic report. + +1. Refer to the Grafana Agent [Flow Debugging][] documentation for more information about + running Grafana Agent in Flow mode. + +1. If your [Static] mode configuration cannot be converted and loaded directly into + Grafana Agent, diagnostic information is sent to `stderr`. You can use the + `--config.bypass-conversion-errors` flag with `--config.format=static` to bypass any + non-critical issues and start the Agent. + + {{% admonition type="caution" %}}If you bypass the errors, the behavior of the converted configuration may not match the original Grafana Agent Static mode configuration.
Do not use this flag in a production environment.{{% /admonition %}} + +## Example + +This example demonstrates converting a [Static] mode configuration file to a Flow mode configuration file. + +The following [Static] mode configuration file provides the input for the conversion: + +```yaml +server: + log_level: info + +metrics: + global: + scrape_interval: 15s + remote_write: + - url: https://prometheus-us-central1.grafana.net/api/prom/push + basic_auth: + username: USERNAME + password: PASSWORD + configs: + - name: test + host_filter: false + scrape_configs: + - job_name: local-agent + static_configs: + - targets: ['127.0.0.1:12345'] + labels: + cluster: 'localhost' + +logs: + global: + file_watch_config: + min_poll_frequency: 1s + max_poll_frequency: 5s + positions_directory: /var/lib/agent/data-agent + configs: + - name: varlogs + scrape_configs: + - job_name: varlogs + static_configs: + - targets: + - localhost + labels: + job: varlogs + host: mylocalhost + __path__: /var/log/*.log + pipeline_stages: + - match: + selector: '{filename="/var/log/*.log"}' + stages: + - drop: + expression: '^[^0-9]{4}' + - regex: + expression: '^(?P<timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) \[(?P<level>[[:alpha:]]+)\] (?:\d+)\#(?:\d+): \*(?:\d+) (?P<message>.+)$' + - pack: + labels: + - level + clients: + - url: https://USER_ID:API_KEY@logs-prod3.grafana.net/loki/api/v1/push +``` + +The convert command takes the YAML file as input and outputs a [River][] file. + +[River]: {{< relref "../config-language/_index.md" >}} + +```bash +AGENT_MODE=flow; grafana-agent convert --source-format=static --output=OUTPUT_CONFIG_PATH INPUT_CONFIG_PATH +``` + +The new Flow mode configuration file looks like this: + +```river +prometheus.scrape "metrics_test_local_agent" { + targets = [{ + __address__ = "127.0.0.1:12345", + cluster = "localhost", + }] + forward_to = [prometheus.remote_write.metrics_test.receiver] + job_name = "local-agent" + scrape_interval = "15s" +} + +prometheus.remote_write "metrics_test" { + endpoint { + name = "test-a653a1" + url = "https://prometheus-us-central1.grafana.net/api/prom/push" + + basic_auth { + username = "USERNAME" + password = "PASSWORD" + } + + queue_config { } + + metadata_config { } + } +} + +local.file_match "logs_varlogs_varlogs" { + path_targets = [{ + __address__ = "localhost", + __path__ = "/var/log/*.log", + host = "mylocalhost", + job = "varlogs", + }] +} + +loki.process "logs_varlogs_varlogs" { + forward_to = [loki.write.logs_varlogs.receiver] + + stage.match { + selector = "{filename=\"/var/log/*.log\"}" + + stage.drop { + expression = "^[^0-9]{4}" + } + + stage.regex { + expression = "^(?P<timestamp>\\d{4}/\\d{2}/\\d{2} \\d{2}:\\d{2}:\\d{2}) \\[(?P<level>[[:alpha:]]+)\\] (?:\\d+)\\#(?:\\d+): \\*(?:\\d+) (?P<message>.+)$" + } + + stage.pack { + labels = ["level"] + ingest_timestamp = false + } + } +} + +loki.source.file "logs_varlogs_varlogs" { + targets = local.file_match.logs_varlogs_varlogs.targets + forward_to = [loki.process.logs_varlogs_varlogs.receiver] + + file_watch { + min_poll_frequency = "1s" + max_poll_frequency = "5s" + } +} + +loki.write "logs_varlogs" { + endpoint { + url = "https://USER_ID:API_KEY@logs-prod3.grafana.net/loki/api/v1/push" + } + external_labels = {} +} + +``` + +## Limitations + +Configuration conversion is done on a best-effort basis. The Agent will issue +warnings or errors where the conversion cannot be performed.
+ +Once the configuration is converted, we recommend that you review +the Flow mode configuration file, and verify that it is correct +before starting to use it in a production environment. + +Furthermore, we recommend that you review the following checklist: + +* The following configuration options are not available for conversion to Flow + mode: [Integrations next][], [Traces][], and [Agent Management][]. Any + additional unsupported features are returned as errors during conversion. +* There is no gRPC server to configure for Flow mode, so any non-default config + will show as unsupported during the conversion. +* Check if you are using any extra command line arguments with Static mode that + are not present in your configuration file. For example, `-server.http.address`. +* Check if you are using any environment variables in your [Static] mode configuration. + These will be evaluated during conversion and you may want to replace them + with the Flow Standard library [env] function after conversion. +* Review additional [Prometheus Limitations] for limitations specific to your + [Metrics] config. +* Review additional [Promtail Limitations] for limitations specific to your + [Logs] config. +* The logs produced by Grafana Agent Flow mode will differ from those + produced by Static mode. +* Grafana Agent exposes the [Grafana Agent Flow UI][]. + +[Integrations next]: {{< relref "../../static/configuration/integrations/integrations-next/_index.md" >}} +[Traces]: {{< relref "../../static/configuration/traces-config.md" >}} +[Agent Management]: {{< relref "../../static/configuration/agent-management.md" >}} +[env]: {{< relref "../reference/stdlib/env.md" >}} +[Prometheus Limitations]: {{< relref "migrating-from-prometheus.md/#limitations" >}} +[Promtail Limitations]: {{< relref "migrating-from-promtail.md/#limitations" >}} +[Metrics]: {{< relref "../../static/configuration/metrics-config.md" >}} +[Logs]: {{< relref "../../static/configuration/logs-config.md" >}} +[Grafana Agent Flow UI]: {{< relref "../monitoring/debugging/#grafana-agent-flow-ui" >}} diff --git a/docs/sources/flow/monitoring/controller_metrics.md b/docs/sources/flow/monitoring/controller_metrics.md index 79d1a0809022..0357acbe5085 100644 --- a/docs/sources/flow/monitoring/controller_metrics.md +++ b/docs/sources/flow/monitoring/controller_metrics.md @@ -32,9 +32,12 @@ The controller exposes the following metrics: * `agent_component_controller_running_components` (Gauge): The current number of running components by health. The health is represented in the `health_type` label. -* `agent_component_evaluation_seconds` (Histogram): The number of completed - graph evaluations performed by the component controller with how long they - took. +* `agent_component_evaluation_seconds` (Histogram): The time it takes to + evaluate components after one of their dependencies is updated. +* `agent_component_dependencies_wait_seconds` (Histogram): Time spent by + components waiting to be evaluated after one of their dependencies is updated. +* `agent_component_evaluation_queue_size` (Gauge): The current number of + component evaluations waiting to be performed. 
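As a minimal sketch of collecting the controller metrics listed above, the agent can scrape its own HTTP server. The default Flow mode listen address of `127.0.0.1:12345` and the `PROMETHEUS_REMOTE_WRITE_URL` placeholder are assumptions; adjust both for your environment:

```river
// Scrape the agent's own /metrics endpoint, which exposes the
// controller metrics described above, and forward the samples
// to a remote_write-compatible server.
prometheus.scrape "agent_self" {
  targets    = [{ __address__ = "127.0.0.1:12345" }]
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = PROMETHEUS_REMOTE_WRITE_URL
  }
}
```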
[component controller]: {{< relref "../concepts/component_controller.md" >}} [grafana-agent run]: {{< relref "../reference/cli/run.md" >}} diff --git a/docs/sources/flow/reference/cli/convert.md b/docs/sources/flow/reference/cli/convert.md index b20a0c3b9e37..cd976f108784 100644 --- a/docs/sources/flow/reference/cli/convert.md +++ b/docs/sources/flow/reference/cli/convert.md @@ -45,12 +45,13 @@ The following flags are supported: * `--report`, `-r`: The filepath and filename where the report is written. -* `--source-format`, `-f`: Required. The format of the source file. Supported formats: [prometheus], [promtail]. +* `--source-format`, `-f`: Required. The format of the source file. Supported formats: [prometheus], [promtail], [static]. * `--bypass-errors`, `-b`: Enable bypassing errors when converting. [prometheus]: #prometheus [promtail]: #promtail +[static]: #static [errors]: #errors ### Defaults @@ -80,6 +81,8 @@ This includes Prometheus features such as and many supported *_sd_configs. Unsupported features in a source config result in [errors]. +Refer to [Migrate from Prometheus to Grafana Agent Flow]({{< relref "../../getting-started/migrating-from-prometheus/" >}}) for a detailed migration guide. + ### Promtail Using the `--source-format=promtail` will convert the source configuration from @@ -91,3 +94,15 @@ are supported and can be converted to Grafana Agent Flow config. If you have unsupported features in a source configuration, you will receive [errors] when you convert to a flow configuration. The converter will also raise warnings for configuration options that may require your attention. + +Refer to [Migrate from Promtail to Grafana Agent Flow]({{< relref "../../getting-started/migrating-from-promtail/" >}}) for a detailed migration guide. + +### Static + +Using the `--source-format=static` will convert the source configuration from +Grafana Agent [Static]({{< relref "../../../static" >}}) mode to Flow mode configuration. + +If you have unsupported features in a Static mode source configuration, you will receive [errors][] when you convert to a Flow mode configuration. The converter will +also raise warnings for configuration options that may require your attention. + +Refer to [Migrate Grafana Agent from Static mode to Flow mode]({{< relref "../../getting-started/migrating-from-static/" >}}) for a detailed migration guide. \ No newline at end of file diff --git a/docs/sources/flow/reference/cli/run.md b/docs/sources/flow/reference/cli/run.md index 249bcdfc845e..715d97af25c0 100644 --- a/docs/sources/flow/reference/cli/run.md +++ b/docs/sources/flow/reference/cli/run.md @@ -63,7 +63,7 @@ The following flags are supported: * `--cluster.advertise-interfaces`: List of interfaces used to infer an address to advertise. Set to `all` to use all available network interfaces on the system. (default `"eth0,en0"`). * `--cluster.max-join-peers`: Number of peers to join from the discovered set (default `5`). * `--cluster.name`: Name to prevent nodes without this identifier from joining the cluster (default `""`). -* `--config.format`: The format of the source file. Supported formats: `flow`, `prometheus`, `promtail` (default `"flow"`). +* `--config.format`: The format of the source file. Supported formats: `flow`, `prometheus`, `promtail`, `static` (default `"flow"`). * `--config.bypass-conversion-errors`: Enable bypassing errors when converting (default `false`). 
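For example, a sketch combining the two flags above to run a Static mode file directly in Flow mode; the configuration path is a hypothetical placeholder:

```bash
AGENT_MODE=flow; grafana-agent run --config.format=static --config.bypass-conversion-errors /etc/grafana-agent/config.yaml
```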
[in-memory HTTP traffic]: {{< relref "../../concepts/component_controller.md#in-memory-traffic" >}} diff --git a/docs/sources/flow/reference/components/discovery.kubernetes.md b/docs/sources/flow/reference/components/discovery.kubernetes.md index 3de4c1366a49..e6b4bb64736f 100644 --- a/docs/sources/flow/reference/components/discovery.kubernetes.md +++ b/docs/sources/flow/reference/components/discovery.kubernetes.md @@ -458,3 +458,37 @@ Replace the following: - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - `USERNAME`: The username to use for authentication to the remote_write API. - `PASSWORD`: The password to use for authentication to the remote_write API. + +### Limit to only pods on the same node + +This example limits the search to pods on the same node as this Grafana Agent. This configuration could be useful if you are running the Agent as a DaemonSet: + +```river +discovery.kubernetes "k8s_pods" { + role = "pod" + selectors { + role = "pod" + field = "spec.nodeName=" + constants.hostname + } +} + +prometheus.scrape "demo" { + targets = discovery.kubernetes.k8s_pods.targets + forward_to = [prometheus.remote_write.demo.receiver] +} + +prometheus.remote_write "demo" { + endpoint { + url = PROMETHEUS_REMOTE_WRITE_URL + + basic_auth { + username = USERNAME + password = PASSWORD + } + } +} +``` +Replace the following: + - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. + - `USERNAME`: The username to use for authentication to the remote_write API. + - `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/faro.receiver.md b/docs/sources/flow/reference/components/faro.receiver.md new file mode 100644 index 000000000000..2cd7e898b123 --- /dev/null +++ b/docs/sources/flow/reference/components/faro.receiver.md @@ -0,0 +1,268 @@ +--- +aliases: +- /docs/grafana-cloud/agent/flow/reference/components/faro.receiver/ +- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/faro.receiver/ +- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/faro.receiver/ +canonical: https://grafana.com/docs/agent/latest/flow/reference/components/faro.receiver/ +title: faro.receiver +description: Learn about the faro.receiver +--- + +# faro.receiver + +`faro.receiver` accepts web application telemetry data from the [Grafana Faro Web SDK][faro-sdk] +and forwards it to other components for further processing. + +[faro-sdk]: https://github.com/grafana/faro-web-sdk + +## Usage + +```river +faro.receiver "LABEL" { + output { + logs = [LOKI_RECEIVERS] + traces = [OTELCOL_COMPONENTS] + } +} +``` + +## Arguments + +The following arguments are supported: + +Name | Type | Description | Default | Required +---- | ---- | ----------- | ------- | -------- +`extra_log_labels` | `map(string)` | Extra labels to attach to emitted log lines. | `{}` | no + +## Blocks + +The following blocks are supported inside the definition of `faro.receiver`: + +Hierarchy | Block | Description | Required +--------- | ----- | ----------- | -------- +server | [server][] | Configures the HTTP server. | no +server > rate_limiting | [rate_limiting][] | Configures rate limiting for the HTTP server. | no +sourcemaps | [sourcemaps][] | Configures sourcemap retrieval. | no +sourcemaps > location | [location][] | Configures on-disk location for sourcemap retrieval.
| no +output | [output][] | Configures where to send collected telemetry data. | yes + +[server]: #server-block +[rate_limiting]: #rate_limiting-block +[sourcemaps]: #sourcemaps-block +[location]: #location-block +[output]: #output-block + +### server block + +The `server` block configures the HTTP server managed by the `faro.receiver` +component. Clients using the [Grafana Faro Web SDK][faro-sdk] forward telemetry +data to this HTTP server for processing. + +Name | Type | Description | Default | Required +---- | ---- | ----------- | ------- | -------- +`listen_address` | `string` | Address to listen for HTTP traffic on. | `127.0.0.1` | no +`listen_port` | `number` | Port to listen for HTTP traffic on. | `12347` | no +`cors_allowed_origins` | `list(string)` | Origins for which cross-origin requests are permitted. | `[]` | no +`api_key` | `secret` | Optional API key to validate client requests with. | `""` | no +`max_allowed_payload_size` | `string` | Maximum size (in bytes) for client requests. | `"5MiB"` | no + +By default, telemetry data is only accepted from applications on the same local +network as the browser. To accept telemetry data from a wider set of clients, +modify the `listen_address` attribute to the IP address of the appropriate +network interface to use. + +The `cors_allowed_origins` argument determines what origins browser requests +may come from. The default value, `[]`, disables CORS support. To support +requests from all origins, set `cors_allowed_origins` to `["*"]`. The `*` +character indicates a wildcard. + +When the `api_key` argument is non-empty, client requests must have an HTTP +header called `X-API-Key` matching the value of the `api_key` argument. +Requests that are missing the header or have the wrong value are rejected with +an `HTTP 401 Unauthorized` status code. If the `api_key` argument is empty, no +authentication checks are performed, and the `X-API-Key` HTTP header is +ignored. + +### rate_limiting block + +The `rate_limiting` block configures rate limiting for client requests. + +Name | Type | Description | Default | Required +---- | ---- | ----------- | ------- | -------- +`enabled` | `bool` | Whether to enable rate limiting. | `true` | no +`rate` | `number` | Rate of allowed requests per second. | `50` | no +`burst_size` | `number` | Allowed burst size of requests. | `100` | no + +Rate limiting functions as a [token bucket algorithm][token-bucket], where +a bucket has a maximum capacity for up to `burst_size` requests and refills at a +rate of `rate` per second. + +Each HTTP request drains the capacity of the bucket by one. Once the bucket is +empty, HTTP requests are rejected with an `HTTP 429 Too Many Requests` status +code until the bucket has more available capacity. + +Configuring the `rate` argument determines how fast the bucket refills, and +configuring the `burst_size` argument determines how many requests can be +received in a burst before the bucket is empty and starts rejecting requests. + +[token-bucket]: https://en.wikipedia.org/wiki/Token_bucket + +### sourcemaps block + +The `sourcemaps` block configures how to retrieve sourcemaps. Sourcemaps are +then used to transform file and line information from minified code into the +file and line information from the original source code. + +Name | Type | Description | Default | Required +---- | ---- | ----------- | ------- | -------- +`download` | `bool` | Whether to download sourcemaps. | `true` | no +`download_from_origins` | `list(string)` | Which origins to download sourcemaps from. 
| `["*"]` | no +`download_timeout` | `duration` | Timeout when downloading sourcemaps. | `"1s"` | no + +When exceptions are sent to the `faro.receiver` component, it can download +sourcemaps from the web application. You can disable this behavior by setting +the `download` argument to `false`. + +The `download_from_origins` argument determines which origins a sourcemap may +be downloaded from. The origin is attached to the URL that a browser is sending +telemetry data from. The default value, `["*"]`, enables downloading sourcemaps +from all origins. The `*` character indicates a wildcard. + +By default, sourcemap downloads are subject to a timeout of `"1s"`, specified +by the `download_timeout` argument. Setting `download_timeout` to `"0s"` +disables timeouts. + +To retrieve sourcemaps from disk instead of the network, specify one or more +[`location` blocks][location]. When `location` blocks are provided, they are +checked first for sourcemaps before falling back to downloading. + +### location block + +The `location` block declares a location where sourcemaps are stored on the +filesystem. The `location` block can be specified multiple times to declare +multiple locations where sourcemaps are stored. + +Name | Type | Description | Default | Required +---- | ---- | ----------- | ------- | -------- +`path` | `string` | The path on disk where sourcemaps are stored. | | yes +`minified_path_prefix` | `string` | The prefix of the minified path sent from browsers. | | yes + +The `minified_path_prefix` argument determines the prefix of paths to +Javascript files, such as `http://example.com/`. The `path` argument then +determines where to find the sourcemap for the file. + +For example, given the following location block: + +``` +location { + path = "/var/my-app/build" + minified_path_prefix = "http://example.com/" +} +``` + +To look up the sourcemaps for a file hosted at `http://example.com/foo.js`, the +`faro.receiver` component will: + +1. Remove the minified path prefix to extract the path to the file (`foo.js`). +2. Search for that file path with a `.map` extension (`foo.js.map`) in `path` + (`/var/my-app/build/foo.js.map`). + +Optionally, the value for the `path` argument may contain `{{ .Release }}` as a +template value, such as `/var/my-app/{{ .Release }}/build`. The template value +will be replaced with the release value provided by the [Faro Web App SDK][faro-sdk]. + +### output block + +The `output` block specifies where to forward collected logs and traces. + +Name | Type | Description | Default | Required +---- | ---- | ----------- | ------- | -------- +`logs` | `list(LogsReceiver)` | A list of `loki` components to forward logs to. | `[]` | no +`traces` | `list(otelcol.Consumer)` | A list of `otelcol` components to forward traces to. | `[]` | no + +## Exported fields + +`faro.receiver` does not export any fields. + +## Component health + +`faro.receiver` is reported as unhealthy when the integrated server fails to +start. + +## Debug information + +`faro.receiver` does not expose any component-specific debug information. + +### Debug metrics + +`faro.receiver` exposes the following metrics for monitoring the component: + +* `faro_receiver_logs_total` (counter): Total number of ingested logs. +* `faro_receiver_measurements_total` (counter): Total number of ingested measurements. +* `faro_receiver_exceptions_total` (counter): Total number of ingested exceptions. +* `faro_receiver_events_total` (counter): Total number of ingested events. 
+* `faro_receiver_exporter_errors_total` (counter): Total number of errors produced by an internal exporter. +* `faro_receiver_request_duration_seconds` (histogram): Time (in seconds) spent serving HTTP requests. +* `faro_receiver_request_message_bytes` (histogram): Size (in bytes) of HTTP requests received from clients. +* `faro_receiver_response_message_bytes` (histogram): Size (in bytes) of HTTP responses sent to clients. +* `faro_receiver_inflight_requests` (gauge): Current number of inflight requests. +* `faro_receiver_sourcemap_cache_size` (counter): Number of items in sourcemap cache per origin. +* `faro_receiver_sourcemap_downloads_total` (counter): Total number of sourcemap downloads performed per origin and status. +* `faro_receiver_sourcemap_file_reads_total` (counter): Total number of sourcemap retrievals using the filesystem per origin and status. + +## Example + +```river +faro.receiver "default" { + server { + listen_address = "NETWORK_ADDRESS" + } + + sourcemaps { + location { + path = "PATH_TO_SOURCEMAPS" + minified_path_prefix = "WEB_APP_PREFIX" + } + } + + output { + logs = [loki.write.default.receiver] + traces = [otelcol.exporter.otlp.traces.input] + } +} + +loki.write "default" { + endpoint { + url = "https://LOKI_ADDRESS/api/v1/push" + } +} + +otelcol.exporter.otlp "traces" { + client { + endpoint = "OTLP_ADDRESS" + } +} +``` + +Replace the following: + +* `NETWORK_ADDRESS`: IP address of the network interface to listen to traffic + on. This IP address must be reachable by browsers using the web application + to instrument. + +* `PATH_TO_SOURCEMAPS`: Path on disk where sourcemaps are located. + +* `WEB_APP_PREFIX`: Prefix of the web application being instrumented. + +* `LOKI_ADDRESS`: Address of the Loki server to send logs to. + + * If authentication is required to send logs to the Loki server, refer to the + documentation of [loki.write][] for more information. + +* `OTLP_ADDRESS`: The address of the OTLP-compatible server to send traces to. + + * If authentication is required to send traces to the OTLP server, refer to the + documentation of [otelcol.exporter.otlp][] for more information. + +[loki.write]: {{< relref "./loki.write.md" >}} +[otelcol.exporter.otlp]: {{< relref "./otelcol.exporter.otlp.md" >}} diff --git a/docs/sources/flow/reference/components/otelcol.auth.oauth2.md b/docs/sources/flow/reference/components/otelcol.auth.oauth2.md index 429cab7a6c0c..b4f0cdd686e9 100644 --- a/docs/sources/flow/reference/components/otelcol.auth.oauth2.md +++ b/docs/sources/flow/reference/components/otelcol.auth.oauth2.md @@ -39,7 +39,7 @@ otelcol.auth.oauth2 "LABEL" { Name | Type | Description | Default | Required ---- | ---- | ----------- | ------- | -------- `client_id` | `string` | The client identifier issued to the client. | | yes -`client_secret` | `string` | The secret string associated with the client identifier. | | yes +`client_secret` | `secret` | The secret string associated with the client identifier. | | yes `token_url` | `string` | The server endpoint URL from which to get tokens. | | yes `endpoint_params` | `map(list(string))` | Additional parameters that are sent to the token endpoint. | `{}` | no `scopes` | `list(string)` | Requested permissions associated for the client.
| `[]` | no diff --git a/docs/sources/flow/reference/components/otelcol.processor.transform.md b/docs/sources/flow/reference/components/otelcol.processor.transform.md new file mode 100644 index 000000000000..60aaa3fee27b --- /dev/null +++ b/docs/sources/flow/reference/components/otelcol.processor.transform.md @@ -0,0 +1,575 @@ +--- +aliases: +- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.transform/ +- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.transform/ +- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.transform/ +canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.transform/ +labels: + stage: experimental +title: otelcol.processor.transform +description: Learn about otelcol.processor.transform +--- + +# otelcol.processor.transform + +{{< docs/shared lookup="flow/stability/experimental.md" source="agent" version="" >}} + +`otelcol.processor.transform` accepts telemetry data from other `otelcol` +components and modifies it using the [OpenTelemetry Transformation Language (OTTL)][OTTL]. +OTTL statements consist of [OTTL functions][], which act on paths. +A path is a reference to telemetry data, such as: +* Resource attributes. +* Instrumentation scope name. +* Span attributes. + +In addition to the [standard OTTL functions][OTTL functions], +there is also a set of metrics-only functions: +* [convert_sum_to_gauge][] +* [convert_gauge_to_sum][] +* [convert_summary_count_val_to_sum][] +* [convert_summary_sum_val_to_sum][] + +[OTTL][] statements can also contain constructs such as: +* [Booleans][OTTL booleans]: + * `not true` + * `not IsMatch(name, "http_.*")` +* [Boolean Expressions][OTTL boolean expressions] consisting of a `where` followed by one or more booleans: + * `set(attributes["whose_fault"], "ours") where attributes["http.status"] == 500` + * `set(attributes["whose_fault"], "theirs") where attributes["http.status"] == 400 or attributes["http.status"] == 404` +* [Math expressions][OTTL math expressions]: + * `1 + 1` + * `end_time_unix_nano - start_time_unix_nano` + * `sum([1, 2, 3, 4]) + (10 / 1) - 1` + +{{% admonition type="note" %}} +Some characters inside River strings [need to be escaped][river-strings] with a `\` character. +For example, the OTTL statement `set(description, "Sum") where type == "Sum"` +is written in River as `"set(description, \"Sum\") where type == \"Sum\""`. + +[river-strings]: {{< relref "../../config-language/expressions/types_and_values.md/#strings" >}} +{{% /admonition %}} + +{{% admonition type="note" %}} +`otelcol.processor.transform` is a wrapper over the upstream +OpenTelemetry Collector `transform` processor. If necessary, bug reports or feature requests +will be redirected to the upstream repository. +{{% /admonition %}} + +You can specify multiple `otelcol.processor.transform` components by giving them different labels. + +{{% admonition type="warning" %}} +`otelcol.processor.transform` allows you to modify all aspects of your telemetry. Some specific risks are given below, +but this is not an exhaustive list. It is important to understand your data before using this processor. + +- [Unsound Transformations][]: Transformations between metric data types are not defined in the [metrics data model][]. +To use these functions, you must understand the incoming data and know that it can be meaningfully converted +to a new metric data type or can be used to create new metrics.
+  - Although OTTL allows you to use the `set` function with `metric.data_type`,
+    its implementation in the transform processor is a [no-op][].
+    To modify a data type, you must use a specific function such as `convert_gauge_to_sum`.
+- [Identity Conflict][]: Transformation of metrics can potentially affect a metric's identity,
+  leading to an identity conflict. Be especially cautious when transforming a metric name and when reducing or changing
+  existing attributes. Adding new attributes is safe.
+- [Orphaned Telemetry][]: The processor allows you to modify `span_id`, `trace_id`, and `parent_span_id` for traces
+  and `span_id` and `trace_id` for logs. Modifying these fields could lead to orphaned spans or logs.
+
+[Unsound Transformations]: https://github.com/open-telemetry/opentelemetry-collector/blob/v0.85.0/docs/standard-warnings.md#unsound-transformations
+[Identity Conflict]: https://github.com/open-telemetry/opentelemetry-collector/blob/v0.85.0/docs/standard-warnings.md#identity-conflict
+[Orphaned Telemetry]: https://github.com/open-telemetry/opentelemetry-collector/blob/v0.85.0/docs/standard-warnings.md#orphaned-telemetry
+[no-op]: https://en.wikipedia.org/wiki/NOP_(code)
+[metrics data model]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/data-model.md
+{{% /admonition %}}
+
+## Usage
+
+```river
+otelcol.processor.transform "LABEL" {
+  output {
+    metrics = [...]
+    logs    = [...]
+    traces  = [...]
+  }
+}
+```
+
+## Arguments
+
+`otelcol.processor.transform` supports the following arguments:
+
+Name | Type | Description | Default | Required
+---- | ---- | ----------- | ------- | --------
+`error_mode` | `string` | How to react to errors if they occur while processing a statement. | `"propagate"` | no
+
+The supported values for `error_mode` are:
+* `ignore`: Ignore errors returned by statements and continue on to the next statement. This is the recommended mode.
+* `propagate`: Return the error up the pipeline. This will result in the payload being dropped from the Agent.
+
+## Blocks
+
+The following blocks are supported inside the definition of
+`otelcol.processor.transform`:
+
+Hierarchy | Block | Description | Required
+--------- | ----- | ----------- | --------
+trace_statements | [trace_statements][] | Statements which transform traces. | no
+metric_statements | [metric_statements][] | Statements which transform metrics. | no
+log_statements | [log_statements][] | Statements which transform logs. | no
+output | [output][] | Configures where to send received telemetry data. | yes
+
+[trace_statements]: #trace_statements-block
+[metric_statements]: #metric_statements-block
+[log_statements]: #log_statements-block
+[output]: #output-block
+
+[OTTL Context]: #ottl-context
+
+### trace_statements block
+
+The `trace_statements` block specifies statements which transform trace telemetry signals.
+Multiple `trace_statements` blocks can be specified.
+
+Name | Type | Description | Default | Required
+---- | ---- | ----------- | ------- | --------
+`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes
+`statements` | `list(string)` | A list of OTTL statements. | | yes
+
+The supported values for `context` are:
+* `resource`: Use when interacting only with OTLP resources (for example, resource attributes).
+* `scope`: Use when interacting only with OTLP instrumentation scope (for example, the name of the instrumentation scope).
+* `span`: Use when interacting only with OTLP spans.
+* `spanevent`: Use when interacting only with OTLP span events.
+
+Refer to [OTTL Context][] for more information about how to use contexts.
+
+### metric_statements block
+
+The `metric_statements` block specifies statements which transform metric telemetry signals.
+Multiple `metric_statements` blocks can be specified.
+
+Name | Type | Description | Default | Required
+---- | ---- | ----------- | ------- | --------
+`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes
+`statements` | `list(string)` | A list of OTTL statements. | | yes
+
+The supported values for `context` are:
+* `resource`: Use when interacting only with OTLP resources (for example, resource attributes).
+* `scope`: Use when interacting only with OTLP instrumentation scope (for example, the name of the instrumentation scope).
+* `metric`: Use when interacting only with individual OTLP metrics.
+* `datapoint`: Use when interacting only with individual OTLP metric data points.
+
+Refer to [OTTL Context][] for more information about how to use contexts.
+
+### log_statements block
+
+The `log_statements` block specifies statements which transform log telemetry signals.
+Multiple `log_statements` blocks can be specified.
+
+Name | Type | Description | Default | Required
+---- | ---- | ----------- | ------- | --------
+`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes
+`statements` | `list(string)` | A list of OTTL statements. | | yes
+
+The supported values for `context` are:
+* `resource`: Use when interacting only with OTLP resources (for example, resource attributes).
+* `scope`: Use when interacting only with OTLP instrumentation scope (for example, the name of the instrumentation scope).
+* `log`: Use when interacting only with OTLP logs.
+
+Refer to [OTTL Context][] for more information about how to use contexts.
+
+### OTTL Context
+
+Each context allows the transformation of its type of telemetry.
+For example, statements associated with a `resource` context will be able to transform the resource's
+`attributes` and `dropped_attributes_count`.
+
+Each type of `context` defines its own paths and enums specific to that context.
+Refer to the OpenTelemetry documentation for a list of paths and enums for each context:
+* [resource][OTTL resource context]
+* [scope][OTTL scope context]
+* [span][OTTL span context]
+* [spanevent][OTTL spanevent context]
+* [log][OTTL log context]
+* [metric][OTTL metric context]
+* [datapoint][OTTL datapoint context]
+
+Contexts __NEVER__ supply access to individual items "lower" in the protobuf definition.
+- This means statements associated to a `resource` __WILL NOT__ be able to access the underlying instrumentation scopes.
+- This means statements associated to a `scope` __WILL NOT__ be able to access the underlying telemetry slices (spans, metrics, or logs).
+- Similarly, statements associated to a `metric` __WILL NOT__ be able to access individual datapoints, but can access the entire datapoints slice.
+- Similarly, statements associated to a `span` __WILL NOT__ be able to access individual SpanEvents, but can access the entire SpanEvents slice.
+
+For practical purposes, this means that a context cannot make decisions on its telemetry based on telemetry "lower" in the structure.
+For example, __the following context statement is not possible__ because it attempts to use individual datapoint
+attributes in the condition of a statement associated to a `metric`:
+
+```river
+metric_statements {
+  context = "metric"
+  statements = [
+    "set(description, \"test passed\") where datapoints.attributes[\"test\"] == \"pass\"",
+  ]
+}
+```
+
+Contexts __ALWAYS__ supply access to the items "higher" in the protobuf definition that are associated to the telemetry being transformed.
+- This means that statements associated to a `datapoint` have access to a datapoint's metric, instrumentation scope, and resource.
+- This means that statements associated to a `spanevent` have access to a spanevent's span, instrumentation scope, and resource.
+- This means that statements associated to a `span`/`metric`/`log` have access to the telemetry's instrumentation scope and resource.
+- This means that statements associated to a `scope` have access to the scope's resource.
+
+For example, __the following context statement is possible__ because `datapoint` statements can access the datapoint's metric.
+
+```river
+metric_statements {
+  context = "datapoint"
+  statements = [
+    "set(metric.description, \"test passed\") where attributes[\"test\"] == \"pass\"",
+  ]
+}
+```
+
+The protobuf definitions for OTLP signals are maintained on GitHub:
+* [traces][traces protobuf]
+* [metrics][metrics protobuf]
+* [logs][logs protobuf]
+
+Whenever possible, associate your statements with the context that they intend to transform.
+The contexts are nested, and the higher-level contexts don't have to iterate through any of the
+contexts at a lower level. For example, although you can modify resource attributes associated to a
+span using the `span` context, it is more efficient to use the `resource` context.
+
+### output block
+
+{{< docs/shared lookup="flow/reference/components/output-block.md" source="agent" version="" >}}
+
+## Exported fields
+
+The following fields are exported and can be referenced by other components:
+
+Name | Type | Description
+---- | ---- | -----------
+`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.
+
+`input` accepts `otelcol.Consumer` data for any telemetry signal (metrics,
+logs, or traces).
+
+## Component health
+
+`otelcol.processor.transform` is only reported as unhealthy if given an invalid
+configuration.
+
+## Debug information
+
+`otelcol.processor.transform` does not expose any component-specific debug
+information.
+
+## Debug metrics
+
+`otelcol.processor.transform` does not expose any component-specific debug metrics.
+
+## Examples
+
+### Perform a transformation if an attribute does not exist
+
+This example sets the attribute `test` to `pass` if the attribute `test` does not exist.
+
+```river
+otelcol.processor.transform "default" {
+  error_mode = "ignore"
+
+  trace_statements {
+    context = "span"
+    statements = [
+      // Accessing a map with a key that does not exist will return nil.
+      "set(attributes[\"test\"], \"pass\") where attributes[\"test\"] == nil",
+    ]
+  }
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+    logs    = [otelcol.exporter.otlp.default.input]
+    traces  = [otelcol.exporter.otlp.default.input]
+  }
+}
+```
+
+Each `"` is [escaped][river-strings] with `\"` inside the River string.
+
+### Rename a resource attribute
+
+There are two ways to rename an attribute key.
+One way is to set a new attribute and delete the old one:
+
+```river
+otelcol.processor.transform "default" {
+  error_mode = "ignore"
+
+  trace_statements {
+    context = "resource"
+    statements = [
+      "set(attributes[\"namespace\"], attributes[\"k8s.namespace.name\"])",
+      "delete_key(attributes, \"k8s.namespace.name\")",
+    ]
+  }
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+    logs    = [otelcol.exporter.otlp.default.input]
+    traces  = [otelcol.exporter.otlp.default.input]
+  }
+}
+```
+
+Another way is to update the key using regular expressions:
+
+```river
+otelcol.processor.transform "default" {
+  error_mode = "ignore"
+
+  trace_statements {
+    context = "resource"
+    statements = [
+      "replace_all_patterns(attributes, \"key\", \"k8s\\\\.namespace\\\\.name\", \"namespace\")",
+    ]
+  }
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+    logs    = [otelcol.exporter.otlp.default.input]
+    traces  = [otelcol.exporter.otlp.default.input]
+  }
+}
+```
+
+Some values in the River string are [escaped][river-strings]:
+* `\` is escaped with `\\`
+* `"` is escaped with `\"`
+
+### Create an attribute from the contents of a log body
+
+This example sets the attribute `body` to the value of the log body:
+
+```river
+otelcol.processor.transform "default" {
+  error_mode = "ignore"
+
+  log_statements {
+    context = "log"
+    statements = [
+      "set(attributes[\"body\"], body)",
+    ]
+  }
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+    logs    = [otelcol.exporter.otlp.default.input]
+    traces  = [otelcol.exporter.otlp.default.input]
+  }
+}
+```
+
+Each `"` is [escaped][river-strings] with `\"` inside the River string.
+
+### Combine two attributes
+
+This example sets the attribute `test` to the value of attributes `service.name` and `service.version` combined.
+
+```river
+otelcol.processor.transform "default" {
+  error_mode = "ignore"
+
+  trace_statements {
+    context = "resource"
+    statements = [
+      // The Concat function combines any number of strings, separated by a delimiter.
+      "set(attributes[\"test\"], Concat([attributes[\"service.name\"], attributes[\"service.version\"]], \" \"))",
+    ]
+  }
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+    logs    = [otelcol.exporter.otlp.default.input]
+    traces  = [otelcol.exporter.otlp.default.input]
+  }
+}
+```
+
+Each `"` is [escaped][river-strings] with `\"` inside the River string.
+
+### Parsing JSON logs
+
+Given the following JSON body:
+
+```json
+{
+  "name": "log",
+  "attr1": "example value 1",
+  "attr2": "example value 2",
+  "nested": {
+    "attr3": "example value 3"
+  }
+}
+```
+
+You can add specific fields as attributes on the log:
+
+```river
+otelcol.processor.transform "default" {
+  error_mode = "ignore"
+
+  log_statements {
+    context = "log"
+
+    statements = [
+      // Parse body as JSON and merge the resulting map with the cache map, ignoring non-JSON bodies.
+      // cache is a field exposed by OTTL that is a temporary storage place for complex operations.
+      "merge_maps(cache, ParseJSON(body), \"upsert\") where IsMatch(body, \"^\\\\{\") ",
+
+      // Set attributes using the values merged into cache.
+      // If the attribute doesn't exist in cache then nothing happens.
+      "set(attributes[\"attr1\"], cache[\"attr1\"])",
+      "set(attributes[\"attr2\"], cache[\"attr2\"])",
+
+      // To access nested maps you can chain index ([]) operations.
+      // If nested or attr3 do not exist in cache then nothing happens.
+      "set(attributes[\"nested.attr3\"], cache[\"nested\"][\"attr3\"])",
+    ]
+  }
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+    logs    = [otelcol.exporter.otlp.default.input]
+    traces  = [otelcol.exporter.otlp.default.input]
+  }
+}
+```
+
+Some values in the River strings are [escaped][river-strings]:
+* `\` is escaped with `\\`
+* `"` is escaped with `\"`
+
+### Various transformations of attributes and status codes
+
+This example takes advantage of context efficiency by grouping transformations
+with the context they intend to transform.
+
+```river
+otelcol.receiver.otlp "default" {
+  http {}
+  grpc {}
+
+  output {
+    metrics = [otelcol.processor.transform.default.input]
+    logs    = [otelcol.processor.transform.default.input]
+    traces  = [otelcol.processor.transform.default.input]
+  }
+}
+
+otelcol.processor.transform "default" {
+  error_mode = "ignore"
+
+  trace_statements {
+    context = "resource"
+    statements = [
+      "keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\", \"process.command_line\"])",
+      "replace_pattern(attributes[\"process.command_line\"], \"password\\\\=[^\\\\s]*(\\\\s?)\", \"password=***\")",
+      "limit(attributes, 100, [])",
+      "truncate_all(attributes, 4096)",
+    ]
+  }
+
+  trace_statements {
+    context = "span"
+    statements = [
+      "set(status.code, 1) where attributes[\"http.path\"] == \"/health\"",
+      "set(name, attributes[\"http.route\"])",
+      "replace_match(attributes[\"http.target\"], \"/user/*/list/*\", \"/user/{userId}/list/{listId}\")",
+      "limit(attributes, 100, [])",
+      "truncate_all(attributes, 4096)",
+    ]
+  }
+
+  metric_statements {
+    context = "resource"
+    statements = [
+      "keep_keys(attributes, [\"host.name\"])",
+      "truncate_all(attributes, 4096)",
+    ]
+  }
+
+  metric_statements {
+    context = "metric"
+    statements = [
+      "set(description, \"Sum\") where type == \"Sum\"",
+    ]
+  }
+
+  metric_statements {
+    context = "datapoint"
+    statements = [
+      "limit(attributes, 100, [\"host.name\"])",
+      "truncate_all(attributes, 4096)",
+      "convert_sum_to_gauge() where metric.name == \"system.processes.count\"",
+      "convert_gauge_to_sum(\"cumulative\", false) where metric.name == \"prometheus_metric\"",
+    ]
+  }
+
+  log_statements {
+    context = "resource"
+    statements = [
+      "keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\"])",
+    ]
+  }
+
+  log_statements {
+    context = "log"
+    statements = [
+      "set(severity_text, \"FAIL\") where body == \"request failed\"",
+      "replace_all_matches(attributes, \"/user/*/list/*\", \"/user/{userId}/list/{listId}\")",
+      "replace_all_patterns(attributes, \"value\", \"/account/\\\\d{4}\", \"/account/{accountId}\")",
+      "set(body, attributes[\"http.route\"])",
+    ]
+  }
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+    logs    = [otelcol.exporter.otlp.default.input]
+    traces  = [otelcol.exporter.otlp.default.input]
+  }
+}
+
+otelcol.exporter.otlp "default" {
+  client {
+    endpoint = env("OTLP_ENDPOINT")
+  }
+}
+```
+
+Some values in the River strings are [escaped][river-strings]:
+* `\` is escaped with `\\`
+* `"` is escaped with `\"`
+
+[river-strings]: {{< relref "../../config-language/expressions/types_and_values.md/#strings" >}}
+
+[traces protobuf]: https://github.com/open-telemetry/opentelemetry-proto/blob/v0.17.0/opentelemetry/proto/trace/v1/trace.proto
+[metrics protobuf]: https://github.com/open-telemetry/opentelemetry-proto/blob/v0.17.0/opentelemetry/proto/metrics/v1/metrics.proto
+[logs protobuf]:
https://github.com/open-telemetry/opentelemetry-proto/blob/v0.17.0/opentelemetry/proto/logs/v1/logs.proto + + +[OTTL]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.85.0/pkg/ottl/README.md +[OTTL functions]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.85.0/pkg/ottl/ottlfuncs/README.md +[convert_sum_to_gauge]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/processor/transformprocessor#convert_sum_to_gauge +[convert_gauge_to_sum]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/processor/transformprocessor#convert_gauge_to_sum +[convert_summary_count_val_to_sum]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/processor/transformprocessor#convert_summary_count_val_to_sum +[convert_summary_sum_val_to_sum]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/processor/transformprocessor#convert_summary_sum_val_to_sum +[OTTL booleans]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/pkg/ottl#booleans +[OTTL math expressions]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/pkg/ottl#math-expressions +[OTTL boolean expressions]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/pkg/ottl#boolean-expressions +[OTTL resource context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.85.0/pkg/ottl/contexts/ottlresource/README.md +[OTTL scope context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.85.0/pkg/ottl/contexts/ottlscope/README.md +[OTTL span context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.85.0/pkg/ottl/contexts/ottlspan/README.md +[OTTL spanevent context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.85.0/pkg/ottl/contexts/ottlspanevent/README.md +[OTTL metric context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.85.0/pkg/ottl/contexts/ottlmetric/README.md +[OTTL datapoint context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.85.0/pkg/ottl/contexts/ottldatapoint/README.md +[OTTL log context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.85.0/pkg/ottl/contexts/ottllog/README.md diff --git a/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md b/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md new file mode 100644 index 000000000000..e9c26d21caff --- /dev/null +++ b/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md @@ -0,0 +1,126 @@ +--- +aliases: +- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.cadvisor/ +- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.cadvisor/ +- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.cadvisor/ +canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.cadvisor/ +title: prometheus.exporter.cadvisor +description: Learn about the prometheus.exporter.cadvisor +--- + +# prometheus.exporter.cadvisor +The `prometheus.exporter.cadvisor` component exposes container metrics using +[cAdvisor](https://github.com/google/cadvisor). 
+ +## Usage + +```river +prometheus.exporter.cadvisor "LABEL" { +} +``` + +## Arguments +The following arguments can be used to configure the exporter's behavior. +All arguments are optional. Omitted fields take their default values. + +Name | Type | Description | Default | Required +---- | ---- | ----------- | ------- | -------- +`store_container_labels` | `bool` | Whether to convert container labels and environment variables into labels on Prometheus metrics for each container. | `true` | no +`allowlisted_container_labels` | `list(string)` | Allowlist of container labels to convert to Prometheus labels. | `[]` | no +`env_metadata_allowlist` | `list(string)` | Allowlist of environment variable keys matched with a specified prefix that needs to be collected for containers. | `[]` | no +`raw_cgroup_prefix_allowlist` | `list(string)` | List of cgroup path prefixes that need to be collected, even when docker_only is specified. | `[]` | no +`perf_events_config` | `string` | Path to a JSON file containing the configuration of perf events to measure. | `""` | no +`resctrl_interval` | `duration` | Interval to update resctrl mon groups. | `0` | no +`disabled_metrics` | `list(string)` | List of metrics to be disabled which, if set, overrides the default disabled metrics. | (see below) | no +`enabled_metrics` | `list(string)` | List of metrics to be enabled which, if set, overrides disabled_metrics. | `[]` | no +`storage_duration` | `duration` | Length of time to keep data stored in memory. | `2m` | no +`containerd_host` | `string` | Containerd endpoint. | `/run/containerd/containerd.sock` | no +`containerd_namespace` | `string` | Containerd namespace. | `k8s.io` | no +`docker_host` | `string` | Docker endpoint. | `unix:///var/run/docker.sock` | no +`use_docker_tls` | `bool` | Use TLS to connect to docker. | `false` | no +`docker_tls_cert` | `string` | Path to client certificate for TLS connection to docker. | `cert.pem` | no +`docker_tls_key` | `string` | Path to private key for TLS connection to docker. | `key.pem` | no +`docker_tls_ca` | `string` | Path to a trusted CA for TLS connection to docker. | `ca.pem` | no +`docker_only` | `bool` | Only report docker containers in addition to root stats. | `false` | no + +For `allowlisted_container_labels` to take effect, `store_container_labels` must be set to `false`. + +`env_metadata_allowlist` is only supported for containerd and Docker runtimes. + +If `perf_events_config` is not set, measurement of perf events is disabled. + +A `resctrl_interval` of `0` disables updating mon groups. + +The values for `enabled_metrics` and `disabled_metrics` do not correspond to +Prometheus metrics, but to kinds of metrics that should (or shouldn't) be +exposed. The full list of values that can be used is +``` +"cpu", "sched", "percpu", "memory", "memory_numa", "cpuLoad", "diskIO", "disk", +"network", "tcp", "advtcp", "udp", "app", "process", "hugetlb", "perf_event", +"referenced_memory", "cpu_topology", "resctrl", "cpuset", "oom_event" +``` + +By default the following metric kinds are disabled: `"memory_numa", "tcp", "udp", "advtcp", "process", "hugetlb", "referenced_memory", "cpu_topology", "resctrl", "cpuset"` + +## Blocks + +The `prometheus.exporter.cadvisor` component does not support any blocks, and is configured +fully through arguments. 
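+
+As an illustration of this argument-only configuration, the following sketch
+pairs `store_container_labels` and `allowlisted_container_labels` as described
+above. The container label key used here is only a hypothetical example:
+
+```river
+prometheus.exporter.cadvisor "label_allowlist" {
+  // Allowlisting only takes effect when store_container_labels is false.
+  store_container_labels       = false
+  allowlisted_container_labels = ["io.kubernetes.pod.name"]
+}
+```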
+
+## Exported fields
+
+{{< docs/shared lookup="flow/reference/components/exporter-component-exports.md" source="agent" version="" >}}
+
+## Component health
+
+`prometheus.exporter.cadvisor` is only reported as unhealthy if given
+an invalid configuration. In those cases, exported fields retain their last
+healthy values.
+
+## Debug information
+
+`prometheus.exporter.cadvisor` does not expose any component-specific
+debug information.
+
+## Debug metrics
+
+`prometheus.exporter.cadvisor` does not expose any component-specific
+debug metrics.
+
+## Example
+
+This example uses a [`prometheus.scrape` component][scrape] to collect metrics
+from `prometheus.exporter.cadvisor`:
+
+```river
+prometheus.exporter.cadvisor "example" {
+  docker_host = "unix:///var/run/docker.sock"
+
+  storage_duration = "5m"
+}
+
+// Configure a prometheus.scrape component to collect cadvisor metrics.
+prometheus.scrape "scraper" {
+  targets    = prometheus.exporter.cadvisor.example.targets
+  forward_to = [prometheus.remote_write.demo.receiver]
+}
+
+prometheus.remote_write "demo" {
+  endpoint {
+    url = PROMETHEUS_REMOTE_WRITE_URL
+
+    basic_auth {
+      username = USERNAME
+      password = PASSWORD
+    }
+  }
+}
+```
+
+Replace the following:
+
+- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- `USERNAME`: The username to use for authentication to the remote_write API.
+- `PASSWORD`: The password to use for authentication to the remote_write API.
+
+[scrape]: {{< relref "./prometheus.scrape.md" >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.github.md b/docs/sources/flow/reference/components/prometheus.exporter.github.md
index 61b6460a9d08..aab5f2ceb7dd 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.github.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.github.md
@@ -11,7 +11,7 @@ description: Learn about prometheus.exporter.github
 # prometheus.exporter.github
 The `prometheus.exporter.github` component embeds
-[github_exporter](https://github.com/infinityworks/github-exporter) for collecting statistics from GitHub.
+[github_exporter](https://github.com/githubexporter/github-exporter) for collecting statistics from GitHub.
 
 ## Usage
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.redis.md b/docs/sources/flow/reference/components/prometheus.exporter.redis.md
index 39e5eca72921..7c310c801e6d 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.redis.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.redis.md
@@ -42,6 +42,7 @@ Omitted fields take their default values.
 | `check_single_keys` | `list(string)` | List of single keys to export value and length/size. | | no |
 | `check_streams` | `list(string)` | List of stream-patterns to export info about streams, groups, and consumers to search for with SCAN. | | no |
 | `check_single_streams` | `list(string)` | List of single streams to export info about streams, groups, and consumers. | | no |
+| `export_key_values` | `bool` | Whether to export key values as labels when using `check_keys` or `check_single_keys`. | `true` | no |
 | `count_keys` | `list(string)` | List of individual keys to export counts for. | | no |
 | `script_path` | `string` | Path to Lua Redis script for collecting extra metrics. | | no |
 | `script_paths` | `list(string)` | List of paths to Lua Redis scripts for collecting extra metrics.
| | no | diff --git a/docs/sources/flow/reference/components/prometheus.exporter.unix.md b/docs/sources/flow/reference/components/prometheus.exporter.unix.md index 4eb5aad000f6..74bea7db7459 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.unix.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.unix.md @@ -18,9 +18,6 @@ The `node_exporter` itself is comprised of various _collectors_, which can be enabled and disabled at will. For more information on collectors, refer to the [`collectors-list`](#collectors-list) section. -The `prometheus.exporter.unix` component can only appear once per -configuration file, and a block label must not be passed to it. - ## Usage ```river @@ -381,11 +378,11 @@ This example uses a [`prometheus.scrape` component][scrape] to collect metrics from `prometheus.exporter.unix`: ```river -prometheus.exporter.unix { } +prometheus.exporter.unix "demo" { } // Configure a prometheus.scrape component to collect unix metrics. prometheus.scrape "demo" { - targets = prometheus.exporter.unix.targets + targets = prometheus.exporter.unix.demo.targets forward_to = [prometheus.remote_write.demo.receiver] } diff --git a/docs/sources/flow/reference/components/prometheus.exporter.windows.md b/docs/sources/flow/reference/components/prometheus.exporter.windows.md index 069e0f734700..98bd096a3329 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.windows.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.windows.md @@ -180,7 +180,7 @@ For a server name to be included, it must match the regular expression specified ### text_file block Name | Type | Description | Default | Required ---- |----------| ----------- | ------- | -------- -`text_file_directory` | `string` | The directory containing the files to be ingested. | `C:\Program Files\windows_exporter\textfile_inputs` | no +`text_file_directory` | `string` | The directory containing the files to be ingested. | `C:\Program Files\Grafana Agent Flow\textfile_inputs` | no When `text_file_directory` is set, only files with the extension `.prom` inside the specified directory are read. Each `.prom` file found must end with an empty line feed to work properly. diff --git a/docs/sources/flow/reference/components/prometheus.remote_write.md b/docs/sources/flow/reference/components/prometheus.remote_write.md index 28b877538c52..e3b8c9b7e3fe 100644 --- a/docs/sources/flow/reference/components/prometheus.remote_write.md +++ b/docs/sources/flow/reference/components/prometheus.remote_write.md @@ -54,6 +54,9 @@ endpoint > basic_auth | [basic_auth][] | Configure basic_auth for authenticating endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +endpoint > sigv4 | [sigv4][] | Configure AWS Signature Verification 4 for authenticating to the endpoint. | no +endpoint > azuread | [azuread][] | Configure AzureAD for authenticating to the endpoint. | no +endpoint > azuread > managed_identity | [managed_identity][] | Configure Azure user-assigned managed identity. | yes endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no endpoint > queue_config | [queue_config][] | Configuration for how metrics are batched before sending. 
| no endpoint > metadata_config | [metadata_config][] | Configuration for how metric metadata is sent. | no @@ -68,6 +71,9 @@ basic_auth` refers to a `basic_auth` block defined inside an [basic_auth]: #basic_auth-block [authorization]: #authorization-block [oauth2]: #oauth2-block +[sigv4]: #sigv4-block +[azuread]: #azuread-block +[managed_identity]: #managed_identity-block [tls_config]: #tls_config-block [queue_config]: #queue_config-block [metadata_config]: #metadata_config-block @@ -101,6 +107,8 @@ Name | Type | Description | Default | Required - [`basic_auth` block][basic_auth]. - [`authorization` block][authorization]. - [`oauth2` block][oauth2]. + - [`sigv4` block][sigv4]. + - [`azuread` block][azuread]. When multiple `endpoint` blocks are provided, metrics are concurrently sent to all configured locations. Each endpoint has a _queue_ which is used to read metrics @@ -128,6 +136,18 @@ metrics fails. {{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +### sigv4 block + +{{< docs/shared lookup="flow/reference/components/sigv4-block.md" source="agent" version="" >}} + +### azuread block + +{{< docs/shared lookup="flow/reference/components/azuread-block.md" source="agent" version="" >}} + +### managed_identity block + +{{< docs/shared lookup="flow/reference/components/managed_identity-block.md" source="agent" version="" >}} + ### tls_config block {{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} @@ -361,3 +381,8 @@ prometheus.remote_write "default" { `prometheus.remote_write` uses [snappy](https://en.wikipedia.org/wiki/Snappy_(compression)) for compression. Any labels that start with `__` will be removed before sending to the endpoint. + +## Data retention + +{{< docs/shared source="agent" lookup="/wal-data-retention.md" version="" >}} + diff --git a/docs/sources/flow/reference/components/prometheus.scrape.md b/docs/sources/flow/reference/components/prometheus.scrape.md index cd196eacd66d..5a0a2aba8b33 100644 --- a/docs/sources/flow/reference/components/prometheus.scrape.md +++ b/docs/sources/flow/reference/components/prometheus.scrape.md @@ -51,7 +51,7 @@ Name | Type | Description | Default | Required `honor_labels` | `bool` | Indicator whether the scraped metrics should remain unmodified. | `false` | no `honor_timestamps` | `bool` | Indicator whether the scraped timestamps should be respected. | `true` | no `params` | `map(list(string))` | A set of query parameters with which the target is scraped. | | no -`scrape_classic_histogram` | `bool` | Whether to scrape a classic histogram that is also exposed as a native histogram. | `false` | no +`scrape_classic_histograms` | `bool` | Whether to scrape a classic histogram that is also exposed as a native histogram. | `false` | no `scrape_interval` | `duration` | How frequently to scrape the targets of this scrape config. | `"60s"` | no `scrape_timeout` | `duration` | The timeout for scraping targets of this config. | `"10s"` | no `metrics_path` | `string` | The HTTP resource path on which to fetch metrics from targets. | `/metrics` | no @@ -234,7 +234,7 @@ processed. When the target is behaving normally, the `up` metric is set to To enable scraping of Prometheus' native histograms over gRPC, the `enable_protobuf_negotiation` must be set to true. 
The -`scrape_classic_histogram` argument controls whether the component should also +`scrape_classic_histograms` argument controls whether the component should also scrape the 'classic' histogram equivalent of a native histogram, if it is present. diff --git a/docs/sources/flow/reference/components/pyroscope.ebpf.md b/docs/sources/flow/reference/components/pyroscope.ebpf.md index b902d60079ac..fb086500e4de 100644 --- a/docs/sources/flow/reference/components/pyroscope.ebpf.md +++ b/docs/sources/flow/reference/components/pyroscope.ebpf.md @@ -41,18 +41,20 @@ You can use the following arguments to configure a `pyroscope.ebpf`. Only the `forward_to` and `targets` fields are required. Omitted fields take their default values. -| Name | Type | Description | Default | Required | -|---------------------------|--------------------------|--------------------------------------------------------------|---------|----------| -| `targets` | `list(map(string))` | List of targets to group profiles by container id | | yes | -| `forward_to` | `list(ProfilesReceiver)` | List of receivers to send collected profiles to. | | yes | -| `collect_interval` | `duration` | How frequently to collect profiles | `15s` | no | -| `sample_rate` | `int` | How many times per second to collect profile samples | 97 | no | -| `pid_cache_size` | `int` | The size of the pid -> proc symbols table LRU cache | 32 | no | -| `build_id_cache_size` | `int` | The size of the elf file build id -> symbols table LRU cache | 64 | no | -| `same_file_cache_size` | `int` | The size of the elf file -> symbols table LRU cache | 8 | no | -| `container_id_cache_size` | `int` | The size of the pid -> container ID table LRU cache | 1024 | no | -| `collect_user_profile` | `bool` | A flag to enable/disable collection of userspace profiles | true | no | -| `collect_kernel_profile` | `bool` | A flag to enable/disable collection of kernelspace profiles | true | no | +| Name | Type | Description | Default | Required | +|---------------------------|--------------------------|-------------------------------------------------------------------------------------|---------|----------| +| `targets` | `list(map(string))` | List of targets to group profiles by container id | | yes | +| `forward_to` | `list(ProfilesReceiver)` | List of receivers to send collected profiles to. | | yes | +| `collect_interval` | `duration` | How frequently to collect profiles | `15s` | no | +| `sample_rate` | `int` | How many times per second to collect profile samples | 97 | no | +| `pid_cache_size` | `int` | The size of the pid -> proc symbols table LRU cache | 32 | no | +| `build_id_cache_size` | `int` | The size of the elf file build id -> symbols table LRU cache | 64 | no | +| `same_file_cache_size` | `int` | The size of the elf file -> symbols table LRU cache | 8 | no | +| `container_id_cache_size` | `int` | The size of the pid -> container ID table LRU cache | 1024 | no | +| `collect_user_profile` | `bool` | A flag to enable/disable collection of userspace profiles | true | no | +| `collect_kernel_profile` | `bool` | A flag to enable/disable collection of kernelspace profiles | true | no | +| `demangle` | `string` | C++ demangle mode. 
Available options are: `none`, `simplified`, `templates`, `full` | `none` | no |
+
 ## Exported fields
diff --git a/docs/sources/flow/release-notes.md b/docs/sources/flow/release-notes.md
index fb0ab1f221bb..8d4ba736dce3 100644
--- a/docs/sources/flow/release-notes.md
+++ b/docs/sources/flow/release-notes.md
@@ -62,6 +62,23 @@ to `true` and controlled via the `include_scope_labels` argument.
 `name` to `otel_scope_name` and `version` to `otel_scope_version`. This is now
 correct with the OTLP Instrumentation Scope specification.
 
+### Breaking change: `prometheus.exporter.unix` now requires a label
+
+Previously, the exporter was a singleton and did not require a label. The
+exporter can now be used multiple times and needs a label.
+
+Old configuration example:
+
+```river
+prometheus.exporter.unix { /* ... */ }
+```
+
+New configuration example:
+
+```river
+prometheus.exporter.unix "example" { /* ... */ }
+```
+
 ## v0.36
 
 ### Breaking change: The default value of `retry_on_http_429` is changed to `true` for the `queue_config` in `prometheus.remote_write`
diff --git a/docs/sources/flow/setup/deploy-agent.md b/docs/sources/flow/setup/deploy-agent.md
new file mode 100644
index 000000000000..fb372d8d14b0
--- /dev/null
+++ b/docs/sources/flow/setup/deploy-agent.md
@@ -0,0 +1,14 @@
+---
+aliases:
+- /docs/grafana-cloud/agent/flow/setup/deploy-agent/
+- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/deploy-agent/
+- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/deploy-agent/
+canonical: https://grafana.com/docs/agent/latest/flow/setup/deploy-agent/
+description: Learn about possible deployment topologies for Grafana Agent
+menuTitle: Deploy Grafana Agent
+title: Grafana Agent deployment topologies
+weight: 900
+---
+
+{{< docs/shared source="agent" lookup="/deploy-agent.md" version="" >}}
+
diff --git a/docs/sources/operator/deploy-agent-operator-resources.md b/docs/sources/operator/deploy-agent-operator-resources.md
index 6c7bf9467540..cb92b9b87edb 100644
--- a/docs/sources/operator/deploy-agent-operator-resources.md
+++ b/docs/sources/operator/deploy-agent-operator-resources.md
@@ -62,7 +62,7 @@ To deploy the `GrafanaAgent` resource:
       labels:
         app: grafana-agent
     spec:
-      image: grafana/agent:v0.36.2
+      image: grafana/agent:v0.37.0-rc.0
       integrations:
         selector:
           matchLabels:
diff --git a/docs/sources/operator/getting-started.md b/docs/sources/operator/getting-started.md
index f148d399b297..6717d0079e66 100644
--- a/docs/sources/operator/getting-started.md
+++ b/docs/sources/operator/getting-started.md
@@ -79,7 +79,7 @@ To install Agent Operator:
       serviceAccountName: grafana-agent-operator
       containers:
       - name: operator
-        image: grafana/agent-operator:v0.36.2
+        image: grafana/agent-operator:v0.37.0-rc.0
         args:
         - --kubelet-service=default/kubelet
 ---
diff --git a/docs/sources/operator/helm-getting-started.md b/docs/sources/operator/helm-getting-started.md
index 6fdab1983359..56317949a5ed 100644
--- a/docs/sources/operator/helm-getting-started.md
+++ b/docs/sources/operator/helm-getting-started.md
@@ -54,7 +54,7 @@ To install the Agent Operator Helm chart:
    ```bash
    helm install my-release grafana/grafana-agent-operator -f values.yaml -n my-namespace
    ```
-   You can find a list of configurable template parameters in the [Helm chart repository](/grafana/helm-charts/blob/main/charts/agent-operator/values.yaml).
+   You can find a list of configurable template parameters in the [Helm chart repository](https://github.com/grafana/helm-charts/blob/main/charts/agent-operator/values.yaml).
 
 1. Once you've successfully deployed the Helm release, confirm that Agent Operator is up and running:
diff --git a/docs/sources/operator/release-notes.md b/docs/sources/operator/release-notes.md
index b83f3a50aab6..28eac0947108 100644
--- a/docs/sources/operator/release-notes.md
+++ b/docs/sources/operator/release-notes.md
@@ -19,16 +19,19 @@ The release notes provide information about deprecations and breaking changes
 in
 
 For a complete list of changes to Grafana Agent, with links to pull requests and
 related issues when available, refer to the
 [Changelog](https://github.com/grafana/agent/blob/main/CHANGELOG.md).
 
-{{% admonition type="note" %}}
-These release notes are specific to the Static mode Kubernetes Operator.
-Other release notes for the different Grafana Agent variants are contained on separate pages:
-
-* [Static mode release notes][release-notes-static]
-* [Flow mode release notes][release-notes-flow]
-
-[release-notes-static]: {{< relref "../static/release-notes.md" >}}
-[release-notes-flow]: {{< relref "../flow/release-notes.md" >}}
-{{% /admonition %}}
+> **Note:** These release notes are specific to the Static mode Kubernetes Operator.
+> Other release notes for the different Grafana Agent variants are contained on separate pages:
+>
+> - [Static mode release notes][release-notes-static]
+> - [Flow mode release notes][release-notes-flow]
+
+{{% docs/reference %}}
+[release-notes-static]: "/docs/agent/ -> /docs/agent//static/release-notes"
+[release-notes-static]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/static/release-notes"
+
+[release-notes-flow]: "/docs/agent/ -> /docs/agent//flow/release-notes"
+[release-notes-flow]: "/docs/grafana-cloud/ -> /docs/agent//flow/release-notes"
+{{% /docs/reference %}}
 
 ## v0.33
diff --git a/docs/sources/shared/deploy-agent.md b/docs/sources/shared/deploy-agent.md
new file mode 100644
index 000000000000..5a6618215d89
--- /dev/null
+++ b/docs/sources/shared/deploy-agent.md
@@ -0,0 +1,124 @@
+---
+aliases:
+- /docs/agent/shared/deploy-agent/
+- /docs/grafana-cloud/agent/shared/deploy-agent/
+- /docs/grafana-cloud/monitor-infrastructure/agent/shared/deploy-agent/
+- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/deploy-agent/
+canonical: https://grafana.com/docs/agent/latest/shared/deploy-agent/
+description: Shared content, deployment topologies for Grafana Agent
+headless: true
+---
+
+# Deploying Grafana Agent
+
+Grafana Agent is a flexible, vendor-neutral telemetry collector. This
+flexibility means that Grafana Agent doesn’t enforce a specific deployment topology
+but can work in multiple scenarios.
+
+This page lists common topologies used for deployments of Grafana Agent, when
+to consider using each topology, issues you may run into, and scaling
+considerations.
+
+## As a centralized collection service
+Deploying Grafana Agent as a centralized service is recommended for
+collecting application telemetry. This topology allows you to use a smaller number of agents to
+coordinate service discovery, collection, and remote writing.
+
+![centralized-collection](/media/docs/agent/agent-topologies/centralized-collection.png)
+
+Using this topology requires deploying the Agent on separate infrastructure,
+and making sure that agents can discover and reach these applications over the
+network.
The main predictor for the size of the agent is the number of active +metrics series it is scraping; a rule of thumb is approximately 10 KB of memory for each +series. We recommend you start looking towards horizontal scaling around the 1 million +active series mark. + +### Using Kubernetes StatefulSets +Deploying Grafana Agent as a StatefulSet is the recommended option for metrics +collection. +The persistent pod identifiers make it possible to consistently match volumes +with pods so that you can use them for the WAL directory. + +You can also use a Kubernetes deployment in cases where persistent storage is not required, such as a traces-only pipeline. + +### Pros +* Straightforward scaling using [clustering][] or [hashmod sharding][] +* Minimizes the “noisy neighbor” effect +* Easy to meta-monitor + +### Cons +* Requires running on separate infrastructure + +### Use for +* Scalable telemetry collection + +### Don’t use for +* Host-level metrics and logs + +## As a host daemon +Deploying one Grafana Agent per machine is required for collecting +machine-level metrics and logs, such as node_exporter hardware and network +metrics or journald system logs. + +![daemonset](/media/docs/agent/agent-topologies/daemonset.png) + +Each Grafana Agent requires you to open an outgoing connection for each remote endpoint +it’s shipping data to. This can lead to NAT port exhaustion on the egress +infrastructure. Each egress IP can support up to (65535 - 1024 = 64511) +outgoing connections on different ports. So, if all agents are shipping metrics +and log data, an egress IP can support up to 32,255 agents. + +### Using Kubernetes DaemonSets +The simplest use case of the host daemon topology is a Kubernetes DaemonSet, +and it is required for node-level observability (for example cAdvisor metrics) and +collecting pod logs. + +### Pros +* Doesn’t require running on separate infrastructure +* Typically leads to smaller-sized agents +* Lower network latency to instrumented applications + +### Cons +* Requires planning a process for provisioning Grafana Agent on new machines, as well as keeping configuration up to date to avoid configuration drift +* Not possible to scale agents independently when using Kubernetes DaemonSets +* Scaling the topology can strain external APIs (like service discovery) and network infrastructure (like firewalls, proxy servers, and egress points) + +### Use for +* Collecting machine-level metrics and logs (for example, node_exporter hardware metrics, Kubernetes pod logs) + +### Don’t use for +* Scenarios where Grafana Agent grows so large it can become a noisy neighbor +* Collecting an unpredictable amount of telemetry + +## As a container sidecar +Deploying Grafana Agent as a container sidecar is only recommended for +short-lived applications or specialized agent deployments. + +![daemonset](/media/docs/agent/agent-topologies/sidecar.png) + +### Using Kubernetes pod sidecars +In a Kubernetes environment, the sidecar model consists of deploying Grafana Agent +as an extra container on the pod. The pod’s controller, network configuration, +enabled capabilities, and available resources are shared between the actual +application and the sidecar agent. 
+ +### Pros +* Doesn’t require running on separate infrastructure +* Straightforward networking with partner applications + +### Cons +* Doesn’t scale separately +* Makes resource consumption harder to monitor and predict +* Agents do not have a life cycle of their own, making it harder to reason about things like recovering from network outages + +### Use for +* Serverless services +* Job/batch applications that work with a push model +* Air-gapped applications that can’t be otherwise reached over the network + +### Don’t use for +* Long-lived applications +* Scenarios where the agent size grows so large it can become a noisy neighbor + +[hashmod sharding]: {{< relref "../static/operation-guide/_index.md" >}} +[clustering]: {{< relref "../flow/concepts/clustering.md" >}} diff --git a/docs/sources/shared/flow/reference/components/azuread-block.md b/docs/sources/shared/flow/reference/components/azuread-block.md new file mode 100644 index 000000000000..b1ecc1e03c3a --- /dev/null +++ b/docs/sources/shared/flow/reference/components/azuread-block.md @@ -0,0 +1,19 @@ +--- +aliases: +- /docs/agent/shared/flow/reference/components/azuread-block/ +- /docs/grafana-cloud/agent/shared/flow/reference/components/azuread-block/ +- /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/azuread-block/ +- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/azuread-block/ +canonical: https://grafana.com/docs/agent/latest/shared/flow/reference/components/azuread-block/ +description: Shared content, azuread block +headless: true +--- + +Name | Type | Description | Default | Required +---- | ---- | ----------- | ------- | -------- +`cloud` | `string` | The Azure Cloud. | `"AzurePublic"` | no + +The supported values for `cloud` are: +* `"AzurePublic"` +* `"AzureChina"` +* `"AzureGovernment"` \ No newline at end of file diff --git a/docs/sources/shared/flow/reference/components/managed_identity-block.md b/docs/sources/shared/flow/reference/components/managed_identity-block.md new file mode 100644 index 000000000000..bf4dd4aa6831 --- /dev/null +++ b/docs/sources/shared/flow/reference/components/managed_identity-block.md @@ -0,0 +1,22 @@ +--- +aliases: +- /docs/agent/shared/flow/reference/components/managed_identity-block/ +- /docs/grafana-cloud/agent/shared/flow/reference/components/managed_identity-block/ +- /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/managed_identity-block/ +- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/managed_identity-block/ +canonical: https://grafana.com/docs/agent/latest/shared/flow/reference/components/managed_identity-block/ +description: Shared content, managed_identity block +headless: true +--- + +Name | Type | Description | Default | Required +---- | ---- | ----------- | ------- | -------- +`client_id` | `string` | Client ID of the managed identity which is used to authenticate. 
| | yes
+
+`client_id` should be a valid [UUID][] in one of the supported formats:
+* `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`
+* `urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`
+* Microsoft encoding: `{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}`
+* Raw hex encoding: `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
+
+[UUID]: https://en.wikipedia.org/wiki/Universally_unique_identifier
\ No newline at end of file
diff --git a/docs/sources/shared/flow/reference/components/sigv4-block.md b/docs/sources/shared/flow/reference/components/sigv4-block.md
new file mode 100644
index 000000000000..65a67da4e3a5
--- /dev/null
+++ b/docs/sources/shared/flow/reference/components/sigv4-block.md
@@ -0,0 +1,24 @@
+---
+aliases:
+- /docs/agent/shared/flow/reference/components/sigv4-block/
+- /docs/grafana-cloud/agent/shared/flow/reference/components/sigv4-block/
+- /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/sigv4-block/
+- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/sigv4-block/
+canonical: https://grafana.com/docs/agent/latest/shared/flow/reference/components/sigv4-block/
+description: Shared content, sigv4 block
+headless: true
+---
+
+Name | Type | Description | Default | Required
+---- | ---- | ----------- | ------- | --------
+`region` | `string` | AWS region. | | no
+`access_key` | `string` | AWS API access key. | | no
+`secret_key` | `secret` | AWS API secret key. | | no
+`profile` | `string` | Named AWS profile used to authenticate. | | no
+`role_arn` | `string` | AWS Role ARN, an alternative to using AWS API keys. | | no
+
+If `region` is left blank, the region from the default credentials chain is used.
+
+If `access_key` is left blank, the environment variable `AWS_ACCESS_KEY_ID` is used.
+
+If `secret_key` is left blank, the environment variable `AWS_SECRET_ACCESS_KEY` is used.
diff --git a/docs/sources/shared/wal-data-retention.md b/docs/sources/shared/wal-data-retention.md
new file mode 100644
index 000000000000..5ff90ae03301
--- /dev/null
+++ b/docs/sources/shared/wal-data-retention.md
@@ -0,0 +1,78 @@
+---
+aliases:
+- /docs/agent/shared/wal-data-retention/
+- /docs/grafana-cloud/agent/shared/wal-data-retention/
+- /docs/grafana-cloud/monitor-infrastructure/agent/shared/wal-data-retention/
+- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/wal-data-retention/
+canonical: https://grafana.com/docs/agent/latest/shared/wal-data-retention/
+description: Shared content, information about data retention in the WAL
+headless: true
+---
+
+The `prometheus.remote_write` component uses a Write Ahead Log (WAL) to prevent
+data loss during network outages. The component buffers the received metrics in
+a WAL for each configured endpoint. The queue shards can use the WAL after the
+network outage is resolved and flush the buffered metrics to the endpoints.
+
+The WAL records metrics in 128 MB files called segments. To avoid having a WAL
+that grows on-disk indefinitely, the component _truncates_ its segments on a
+set interval.
+
+On each truncation, the WAL deletes references to series that are no longer
+present and also _checkpoints_ roughly the oldest two thirds of the segments
+(rounded down to the nearest integer) written to it since the last truncation
+period. A checkpoint means that the WAL only keeps track of the unique
+identifier for each existing metrics series, and can no longer use the samples
+for remote writing. If that data has not yet been pushed to the remote
+endpoint, it is lost.
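+
+For example, if six segments have been written since the last truncation, the
+next truncation checkpoints the oldest four of them (two thirds of six), and
+any of their samples that have not yet been sent to the remote endpoint are
+lost.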
+
+This behavior dictates the data retention for the `prometheus.remote_write`
+component. It also means that it is impossible to directly correlate data
+retention to the age of the data itself, as the truncation logic works on
+_segments_, not on the samples themselves. This makes data retention less
+predictable when the component receives an inconsistent rate of data.
+
+The [WAL block][] in Flow mode and the [metrics config][] in Static mode
+contain some configurable parameters that can be used to control the tradeoff
+between memory usage, disk usage, and data retention.
+
+The `truncate_frequency` or `wal_truncate_frequency` parameter configures the
+interval at which truncations happen. A lower value leads to reduced memory
+usage, but also provides less resiliency to long outages.
+
+When a WAL clean-up starts, the most recently successfully sent timestamp is
+used to determine how much data is safe to remove from the WAL.
+The `min_keepalive_time` or `min_wal_time` controls the minimum age of samples
+considered for removal. No samples more recent than `min_keepalive_time` are
+removed. The `max_keepalive_time` or `max_wal_time` controls the maximum age of
+samples that can be kept in the WAL. Samples older than
+`max_keepalive_time` are forcibly removed.
+
+### In cases of `remote_write` outages
+When the remote write endpoint is unreachable over a period of time, the most
+recent successfully sent timestamp is not updated. The
+`min_keepalive_time` and `max_keepalive_time` arguments control the age range
+of data kept in the WAL.
+
+If the remote write outage is longer than the `max_keepalive_time` parameter,
+then the WAL is truncated, and the oldest data is lost.
+
+### In cases of intermittent `remote_write` outages
+If the remote write endpoint is intermittently reachable, the most recent
+successfully sent timestamp is updated whenever the connection is successful.
+A successful connection updates the series' comparison with
+`min_keepalive_time` and triggers a truncation on the next `truncate_frequency`
+interval which checkpoints two thirds of the segments (rounded down to the
+nearest integer) written since the previous truncation.
+
+### In cases of falling behind
+If the queue shards cannot flush data quickly enough to keep
+up with the most recent data buffered in the WAL, we say that the
+component is 'falling behind'.
+It's not unusual for the component to temporarily fall behind 2 or 3 scrape intervals.
+If the component falls behind more than one third of the data written since the
+last truncate interval, it is possible for the truncate loop to checkpoint data
+before it has been pushed to the `remote_write` endpoint.
+
+[WAL block]: {{< relref "../flow/reference/components/prometheus.remote_write.md/#wal-block" >}}
+[metrics config]: {{< relref "../static/configuration/metrics-config.md" >}}
diff --git a/docs/sources/static/configuration/agent-management.md b/docs/sources/static/configuration/agent-management.md
index 5200064d6d69..d9af4706d70e 100644
--- a/docs/sources/static/configuration/agent-management.md
+++ b/docs/sources/static/configuration/agent-management.md
@@ -70,6 +70,9 @@ agent_management:
   # Whether to use labels from the label management service. If enabled, labels from the API supersede the ones configured in the agent.
   label_management_enabled: <bool> | default = false
 
+  # Whether to accept HTTP 304 Not Modified responses from the API server. If enabled, the agent will use the cached configuration if the API server responds with HTTP 304 Not Modified. You can set this argument to `false` for debugging or testing.
+  accept_http_not_modified: | default = true
+
   # A unique ID for the agent, which is used to identify the agent.
   agent_id: 
```
diff --git a/docs/sources/static/configuration/create-config-file.md b/docs/sources/static/configuration/create-config-file.md
index 98d8b7e62dc6..25e774a64d69 100644
--- a/docs/sources/static/configuration/create-config-file.md
+++ b/docs/sources/static/configuration/create-config-file.md
@@ -186,5 +186,5 @@ integrations:
 {{% docs/reference %}}
 [configure]: "/docs/agent/ -> /docs/agent//static/configuration"
-[configure]: "/docs/grafana-cloud/ -> ./configuration"
+[configure]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/static/configuration"
 {{% /docs/reference %}}
diff --git a/docs/sources/static/configuration/flags.md b/docs/sources/static/configuration/flags.md
index 99e67288fac9..8d54ade73e73 100644
--- a/docs/sources/static/configuration/flags.md
+++ b/docs/sources/static/configuration/flags.md
@@ -146,9 +146,11 @@ YAML configuration when the `-server.http.tls-enabled` flag is used.
 {{% docs/reference %}}
 [retrieving]: "/docs/agent/ -> /docs/agent//static/configuration#remote-configuration-experimental"
-[retrieving]: "/docs/grafana-cloud/ -> ./configuration#remote-configuration-experimental"
+[retrieving]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/static/configuration#remote-configuration-experimental"
+
 [revamp]: "/docs/agent/ -> /docs/agent//static/configuration/integrations/integrations-next/"
-[revamp]: "/docs/grafana-cloud/ -> ./integrations/integrations-next/"
+[revamp]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/static/configuration/integrations/integrations-next"
+
 [management]: "/docs/agent/ -> /docs/agent//static/configuration/agent-management"
-[management]: "/docs/grafana-cloud/ -> ./agent-management"
+[management]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/monitor-infrastructure/agent/static/configuration/agent-management"
 {{% /docs/reference %}}
diff --git a/docs/sources/static/configuration/integrations/github-exporter-config.md b/docs/sources/static/configuration/integrations/github-exporter-config.md
index 750849b8b484..20f2027d78a0 100644
--- a/docs/sources/static/configuration/integrations/github-exporter-config.md
+++ b/docs/sources/static/configuration/integrations/github-exporter-config.md
@@ -10,7 +10,7 @@ description: Learn about github_exporter_config
 
 The `github_exporter_config` block configures the `github_exporter` integration, which is an embedded version of
-[`github_exporter`](https://github.com/infinityworks/github-exporter). This allows for the collection of metrics from the GitHub api.
+[`github_exporter`](https://github.com/githubexporter/github-exporter). This allows for the collection of metrics from the GitHub API.
 
 We strongly recommend that you configure a separate authentication token for the Agent, and give it only the strictly mandatory security privileges necessary for monitoring your repositories, as per the [official documentation](https://docs.github.com/en/rest/reference/permissions-required-for-github-apps).
diff --git a/docs/sources/static/configuration/integrations/node-exporter-config.md b/docs/sources/static/configuration/integrations/node-exporter-config.md index a3be4a248e4f..25b2c3ff34ca 100644 --- a/docs/sources/static/configuration/integrations/node-exporter-config.md +++ b/docs/sources/static/configuration/integrations/node-exporter-config.md @@ -30,7 +30,7 @@ docker run \ -v "/proc:/host/proc:ro,rslave" \ -v /tmp/agent:/etc/agent \ -v /path/to/config.yaml:/etc/agent-config/agent.yaml \ - grafana/agent:v0.36.2 \ + grafana/agent:v0.37.0-rc.0 \ --config.file=/etc/agent-config/agent.yaml ``` @@ -70,7 +70,7 @@ metadata: name: agent spec: containers: - - image: grafana/agent:v0.36.2 + - image: grafana/agent:v0.37.0-rc.0 name: agent args: - --config.file=/etc/agent-config/agent.yaml diff --git a/docs/sources/static/configuration/integrations/process-exporter-config.md b/docs/sources/static/configuration/integrations/process-exporter-config.md index 944d65803ce7..d7de37e534c1 100644 --- a/docs/sources/static/configuration/integrations/process-exporter-config.md +++ b/docs/sources/static/configuration/integrations/process-exporter-config.md @@ -22,7 +22,7 @@ docker run \ -v "/proc:/proc:ro" \ -v /tmp/agent:/etc/agent \ -v /path/to/config.yaml:/etc/agent-config/agent.yaml \ - grafana/agent:v0.36.2 \ + grafana/agent:v0.37.0-rc.0 \ --config.file=/etc/agent-config/agent.yaml ``` @@ -39,7 +39,7 @@ metadata: name: agent spec: containers: - - image: grafana/agent:v0.36.2 + - image: grafana/agent:v0.37.0-rc.0 name: agent args: - --config.file=/etc/agent-config/agent.yaml diff --git a/docs/sources/static/configuration/integrations/redis-exporter-config.md b/docs/sources/static/configuration/integrations/redis-exporter-config.md index f19bcc05c5db..cf098a2d9826 100644 --- a/docs/sources/static/configuration/integrations/redis-exporter-config.md +++ b/docs/sources/static/configuration/integrations/redis-exporter-config.md @@ -117,6 +117,9 @@ Full reference of options: # Comma separated list of single streams to export info about streams, groups and consumers. [check_single_streams: ] + # Whether to export key values as labels when using `check_keys` or `check_single_keys`. + [export_key_values: | default = true] + # Comma separated list of individual keys to export counts for. 
[count_keys: ] diff --git a/docs/sources/static/configuration/metrics-config.md b/docs/sources/static/configuration/metrics-config.md index bc61eabfd820..70926e003791 100644 --- a/docs/sources/static/configuration/metrics-config.md +++ b/docs/sources/static/configuration/metrics-config.md @@ -340,6 +340,10 @@ remote_write: > * [`scrape_config`](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#scrape_config) > * [`remote_write`](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#remote_write) +## Data retention + +{{< docs/shared source="agent" lookup="/wal-data-retention.md" version="" >}} + {{% docs/reference %}} [scrape]: "/docs/agent/ -> /docs/agent//static/configuration/scraping-service" [scrape]: "/docs/grafana-cloud/ -> ./scraping-service" diff --git a/docs/sources/static/release-notes.md b/docs/sources/static/release-notes.md index 6cbf64156f07..f78df5ccde06 100644 --- a/docs/sources/static/release-notes.md +++ b/docs/sources/static/release-notes.md @@ -24,8 +24,12 @@ For a complete list of changes to Grafana Agent, with links to pull requests and {{% docs/reference %}} [release-notes-operator]: "/docs/agent/ -> /docs/agent//operator/release-notes" [release-notes-operator]: "/docs/grafana-cloud/ -> ../operator/release-notes" + [release-notes-flow]: "/docs/agent/ -> /docs/agent//flow/release-notes" -[release-notes-flow]: "/docs/grafana-cloud/ -> ../flow/release-notes" +[release-notes-flow]: "/docs/grafana-cloud/ -> /docs/agent//flow/release-notes" + +[Modules]: "/docs/agent/ -> /docs/agent//flow/concepts/modules" +[Modules]: "/docs/grafana-cloud/ -> /docs/agent//flow/concepts/modules" {{% /docs/reference %}} ## v0.37 @@ -140,7 +144,7 @@ See [Module and Auth Split Migration](https://github.com/prometheus/snmp_exporte ### Removal of Dynamic Configuration The experimental feature Dynamic Configuration has been removed. The use case of dynamic configuration will be replaced -with [Modules]({{< relref "../flow/concepts/modules" >}}) in Grafana Agent Flow. +with [Modules][] in Grafana Agent Flow. 
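+
+As a minimal sketch of the Modules workflow (the file path and the
+`scrape_interval` argument below are hypothetical examples, not part of any
+shipped module):
+
+```river
+// Load a reusable pipeline from a River file on disk.
+module.file "example" {
+  filename = "/etc/agent/modules/pipeline.river"
+
+  // Values passed to the module's `argument` blocks; this argument name is
+  // hypothetical.
+  arguments {
+    scrape_interval = "60s"
+  }
+}
+```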
### Breaking change: Removed and renamed tracing metrics
diff --git a/docs/sources/static/set-up/deploy-agent.md b/docs/sources/static/set-up/deploy-agent.md
new file mode 100644
index 000000000000..4d4bc23002e8
--- /dev/null
+++ b/docs/sources/static/set-up/deploy-agent.md
@@ -0,0 +1,10 @@
+---
+canonical: https://grafana.com/docs/agent/latest/static/set-up/deploy-agent/
+description: Learn how to deploy Grafana Agent in different topologies
+menuTitle: Deploy static mode
+title: Deploy Grafana Agent in static mode
+weight: 300
+---
+
+{{< docs/shared source="agent" lookup="/deploy-agent.md" version="" >}}
+
diff --git a/docs/sources/static/set-up/install/install-agent-binary.md b/docs/sources/static/set-up/install/install-agent-binary.md
index 8d53d4a7e5ae..fb679436b9c1 100644
--- a/docs/sources/static/set-up/install/install-agent-binary.md
+++ b/docs/sources/static/set-up/install/install-agent-binary.md
@@ -54,7 +54,7 @@ To download the Grafana Agent as a standalone binary, perform the following step
 [linux]: "/docs/agent/ -> /docs/agent//static/set-up/install/install-agent-linux"
 [linux]: "/docs/grafana-cloud/ -> ./install-agent-linux"
 [macos]: "/docs/agent/ -> /docs/agent//static/set-up/install/install-agent-macos"
-[macos]: "/docs/grafana-cloud/ -> ./install-agent-mac-os"
+[macos]: "/docs/grafana-cloud/ -> ./install-agent-macos"
 [windows]: "/docs/agent/ -> /docs/agent//static/set-up/install/install-agent-on-windows"
 [windows]: "/docs/grafana-cloud/ -> ./install-agent-on-windows"
 [start]: "/docs/agent/ -> /docs/agent//static/set-up/start-agent#standalone-binary"
diff --git a/docs/sources/static/set-up/install/install-agent-docker.md b/docs/sources/static/set-up/install/install-agent-docker.md
index 41029ab77198..715b2f18a870 100644
--- a/docs/sources/static/set-up/install/install-agent-docker.md
+++ b/docs/sources/static/set-up/install/install-agent-docker.md
@@ -34,7 +34,7 @@ To run a Grafana Agent Docker container on Linux, run the following command in a
 docker run \
   -v WAL_DATA_DIRECTORY:/etc/agent/data \
   -v CONFIG_FILE_PATH:/etc/agent/agent.yaml \
-  grafana/agent:v0.36.2
+  grafana/agent:v0.37.0-rc.0
 ```
 
 Replace `CONFIG_FILE_PATH` with the configuration file path on your Linux host system.
@@ -51,7 +51,7 @@ To run a Grafana Agent Docker container on Windows, run the following command in
 docker run ^
   -v WAL_DATA_DIRECTORY:C:\etc\grafana-agent\data ^
   -v CONFIG_FILE_PATH:C:\etc\grafana-agent ^
-  grafana/agent:v0.36.2-windows
+  grafana/agent:v0.37.0-rc.0-windows
 ```
 
 Replace the following:
diff --git a/go.mod b/go.mod
index 2e75bc6025ab..ce7abbae2856 100644
--- a/go.mod
+++ b/go.mod
@@ -56,12 +56,13 @@ require (
 	github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2
 	github.com/grafana/dskit v0.0.0-20230829141140-06955c011ffd
 	github.com/grafana/go-gelf/v2 v2.0.1
-	github.com/grafana/loki v1.6.2-0.20230927083715-42fba5b19183 // k169 branch
+	// Loki main commit where the Prometheus dependency matches ours.
TODO(@tpaschalis) Update to kXYZ branch once it's available + github.com/grafana/loki v1.6.2-0.20231004111112-07cbef92268a github.com/grafana/pyroscope-go/godeltaprof v0.1.3 github.com/grafana/pyroscope/api v0.2.0 - github.com/grafana/pyroscope/ebpf v0.2.1 + github.com/grafana/pyroscope/ebpf v0.2.3 github.com/grafana/regexp v0.0.0-20221123153739-15dc172cd2db - github.com/grafana/river v0.1.2-0.20230830200459-0ff21cf610eb + github.com/grafana/river v0.1.2-0.20231003183959-75f893ffa7df github.com/grafana/snowflake-prometheus-exporter v0.0.0-20221213150626-862cad8e9538 github.com/grafana/tail v0.0.0-20230510142333-77b18831edf0 github.com/grafana/vmware_exporter v0.0.4-beta @@ -82,7 +83,6 @@ require ( github.com/hashicorp/vault/api/auth/userpass v0.2.0 github.com/heroku/x v0.0.61 github.com/iamseth/oracledb_exporter v0.0.0-20230504204552-f801dc432dcf - github.com/infinityworks/github-exporter v0.0.0-20210802160115-284088c21e7d github.com/influxdata/go-syslog/v3 v3.0.1-0.20210608084020-ac565dc76ba6 github.com/jaegertracing/jaeger v1.48.0 github.com/jmespath/go-jmespath v0.4.0 @@ -97,10 +97,10 @@ require ( github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f github.com/ncabatoff/process-exporter v0.7.10 github.com/nerdswords/yet-another-cloudwatch-exporter v0.54.0 - github.com/ohler55/ojg v1.19.2 // indirect + github.com/ohler55/ojg v1.19.3 // indirect github.com/oklog/run v1.1.0 github.com/olekukonko/tablewriter v0.0.5 - github.com/oliver006/redis_exporter v1.51.0 + github.com/oliver006/redis_exporter v1.54.0 github.com/open-telemetry/opentelemetry-collector-contrib/connector/servicegraphconnector v0.85.0 github.com/open-telemetry/opentelemetry-collector-contrib/connector/spanmetricsconnector v0.85.0 github.com/open-telemetry/opentelemetry-collector-contrib/exporter/jaegerexporter v0.85.0 @@ -123,6 +123,7 @@ require ( github.com/open-telemetry/opentelemetry-collector-contrib/processor/spanmetricsprocessor v0.85.0 github.com/open-telemetry/opentelemetry-collector-contrib/processor/spanprocessor v0.85.0 github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor v0.85.0 + github.com/open-telemetry/opentelemetry-collector-contrib/processor/transformprocessor v0.85.0 github.com/open-telemetry/opentelemetry-collector-contrib/receiver/jaegerreceiver v0.85.0 github.com/open-telemetry/opentelemetry-collector-contrib/receiver/kafkareceiver v0.85.0 github.com/open-telemetry/opentelemetry-collector-contrib/receiver/opencensusreceiver v0.85.0 @@ -151,9 +152,9 @@ require ( github.com/prometheus/memcached_exporter v0.13.0 github.com/prometheus/mysqld_exporter v0.14.0 github.com/prometheus/node_exporter v1.6.0 - github.com/prometheus/procfs v0.11.0 + github.com/prometheus/procfs v0.11.1 github.com/prometheus/prometheus v1.99.0 - github.com/prometheus/snmp_exporter v0.23.0 + github.com/prometheus/snmp_exporter v0.24.1 github.com/prometheus/statsd_exporter v0.22.8 github.com/richardartoul/molecule v1.0.1-0.20221107223329-32cfee06a052 github.com/rs/cors v1.10.0 @@ -301,7 +302,7 @@ require ( github.com/aws/aws-sdk-go-v2/service/ssooidc v1.15.5 // indirect github.com/aws/aws-sdk-go-v2/service/sts v1.21.5 // indirect github.com/aws/smithy-go v1.14.2 // indirect - github.com/beevik/ntp v0.3.0 // indirect + github.com/beevik/ntp v1.3.0 // indirect github.com/beorn7/perks v1.0.1 // indirect github.com/blang/semver v3.5.2-0.20180723201105-3c1074078d32+incompatible // indirect github.com/blang/semver/v4 v4.0.0 // indirect @@ -343,7 +344,7 @@ require ( 
github.com/efficientgo/tools/core v0.0.0-20220817170617-6c25e3b627dd // indirect github.com/elastic/go-sysinfo v1.8.1 // indirect github.com/elastic/go-windows v1.0.1 // indirect - github.com/ema/qdisc v0.0.0-20230120214811-5b708f463de3 // indirect + github.com/ema/qdisc v1.0.0 // indirect github.com/emicklei/go-restful/v3 v3.10.2 // indirect github.com/emirpasic/gods v1.12.0 // indirect github.com/envoyproxy/go-control-plane v0.11.1 // indirect @@ -427,7 +428,6 @@ require ( github.com/hashicorp/vault/sdk v0.5.1 // indirect github.com/hashicorp/vic v1.5.1-0.20190403131502-bbfe86ec9443 // indirect github.com/hashicorp/yamux v0.0.0-20190923154419-df201c70410d // indirect - github.com/hetznercloud/hcloud-go v1.45.1 // indirect github.com/hodgesds/perf-utils v0.7.0 // indirect github.com/huandu/xstrings v1.3.3 // indirect github.com/iancoleman/strcase v0.3.0 // indirect @@ -456,7 +456,7 @@ require ( github.com/josharian/native v1.1.0 // indirect github.com/joyent/triton-go v0.0.0-20180628001255-830d2b111e62 // indirect github.com/jpillora/backoff v1.0.0 // indirect - github.com/jsimonetti/rtnetlink v1.3.2 // indirect + github.com/jsimonetti/rtnetlink v1.3.5 // indirect github.com/karrick/godirwalk v1.17.0 // indirect github.com/kevinburke/ssh_config v1.1.0 // indirect github.com/klauspost/asmfmt v1.3.2 // indirect @@ -476,11 +476,11 @@ require ( github.com/mattn/go-runewidth v0.0.14 // indirect github.com/mattn/go-xmlrpc v0.0.3 // indirect github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect - github.com/mdlayher/ethtool v0.0.0-20221212131811-ba3b4bc2e02c // indirect - github.com/mdlayher/genetlink v1.3.1 // indirect + github.com/mdlayher/ethtool v0.1.0 // indirect + github.com/mdlayher/genetlink v1.3.2 // indirect github.com/mdlayher/netlink v1.7.2 // indirect github.com/mdlayher/socket v0.4.1 // indirect - github.com/mdlayher/wifi v0.0.0-20220330172155-a44c70b6d3c8 // indirect + github.com/mdlayher/wifi v0.1.0 // indirect github.com/microsoft/go-mssqldb v0.19.0 // indirect github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8 // indirect github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3 // indirect @@ -515,7 +515,7 @@ require ( github.com/open-telemetry/opentelemetry-collector-contrib/pkg/translator/zipkin v0.85.0 // indirect github.com/opencontainers/go-digest v1.0.0 // indirect github.com/opencontainers/image-spec v1.1.0-rc4 // indirect - github.com/opencontainers/runc v1.1.5 // indirect + github.com/opencontainers/runc v1.1.9 // indirect github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect github.com/opencontainers/selinux v1.11.0 // indirect github.com/openzipkin/zipkin-go v0.4.2 // indirect @@ -621,12 +621,17 @@ require ( sigs.k8s.io/structured-merge-diff/v4 v4.3.0 // indirect ) +require github.com/ianlancetaylor/demangle v0.0.0-20230524184225-eabc099b10ab + +require github.com/githubexporter/github-exporter v0.0.0-20230925090839-9e31cd0e7721 + require ( dario.cat/mergo v1.0.0 // indirect github.com/Shopify/sarama v1.38.1 // indirect github.com/Workiva/go-datastructures v1.1.0 // indirect github.com/drone/envsubst v1.0.3 // indirect github.com/google/gnostic-models v0.6.8 // indirect + github.com/hetznercloud/hcloud-go/v2 v2.0.0 // indirect github.com/julienschmidt/httprouter v1.3.0 // indirect github.com/knadh/koanf/v2 v2.0.1 // indirect github.com/leoluk/perflib_exporter v0.2.0 // indirect @@ -666,11 +671,13 @@ replace ( k8s.io/klog/v2 => github.com/simonpasquier/klog-gokit/v3 v3.3.0 ) -// TODO(rfratto): remove replace directive once: +// 
TODO(tpaschalis): remove replace directive once:
+//
+// * There is a release of Prometheus which contains
+//   prometheus/prometheus#12677 and prometheus/prometheus#12729.
+// We use the last v1-related tag, as the replace statement does not work for v2
+// tags without the v2 suffix in the module root.
+replace github.com/prometheus/prometheus => github.com/grafana/prometheus v1.8.2-0.20231003113207-17e15326a784 // grafana:prometheus:v0.46.0-retry-improvements
 
 replace gopkg.in/yaml.v2 => github.com/rfratto/go-yaml v0.0.0-20211119180816-77389c3526dc
 
@@ -690,8 +697,10 @@ replace (
 // TODO(rfratto): remove forks when changes are merged upstream
 replace (
-	// Upstream seems to be inactive, see https://github.com/grafana/agent/issues/1845
-	github.com/infinityworks/github-exporter => github.com/grafana/github-exporter v0.0.0-20230418063919-fa34e926116a
+	// TODO(tpaschalis) this is to remove the global instantiation of plugins
+	// and allow non-singleton components.
+	// https://github.com/grafana/cadvisor/tree/grafana-v0.47-noglobals
+	github.com/google/cadvisor => github.com/grafana/cadvisor v0.0.0-20230927082732-0d72868a513e
 
 	// TODO(mattdurham): this is so you can debug on Windows; when the PR is merged into perflib, you can use that
 	// and eventually remove it if windows_exporter shifts to it. https://github.com/leoluk/perflib_exporter/pull/43
@@ -701,6 +710,9 @@ replace (
 	// TODO(mattdurham): this is to allow defaults to propagate properly.
 	github.com/prometheus-community/windows_exporter => github.com/grafana/windows_exporter v0.15.1-0.20230612134738-fdb3ba7accd8
 	github.com/prometheus/mysqld_exporter => github.com/grafana/mysqld_exporter v0.12.2-0.20201015182516-5ac885b2d38a
+
+	// Replace node_exporter with a custom fork for multi usage. https://github.com/prometheus/node_exporter/pull/2812
+	github.com/prometheus/node_exporter => github.com/grafana/node_exporter v0.18.1-grafana-r01.0.20231004161416-702318429731
 )
 
 // Excluding fixes a conflict in test packages and allows "go mod tidy" to run.
diff --git a/go.sum b/go.sum index 83537dcaf8af..13fba5d7d3b5 100644 --- a/go.sum +++ b/go.sum @@ -28,9 +28,6 @@ cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aD cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI= cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4= cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc= -cloud.google.com/go v0.99.0/go.mod h1:w0Xx2nLzqWJPuozYQX+hFfCSI8WioryfRDzkoI/Y2ZA= -cloud.google.com/go v0.100.2/go.mod h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w99A= -cloud.google.com/go v0.102.0/go.mod h1:oWcCzKlqJ5zgHQt9YsaeTY9KzIvjyy0ArmiBUgpQ+nc= cloud.google.com/go v0.110.6 h1:8uYAkj3YHTP/1iwReuHPxLSbdcyc+dSBbzFMrVwDR6Q= cloud.google.com/go v0.110.6/go.mod h1:+EYjdK8e5RME/VY/qLCAtuyALQ9q67dvuum8i+H5xsI= cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= @@ -39,20 +36,12 @@ cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvf cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg= cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc= cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ= -cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTBXtfbBFow= -cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM= -cloud.google.com/go/compute v1.5.0/go.mod h1:9SMHyhJlzhlkJqrPAc839t2BZFTSk6Jdj6mkzQJeu0M= -cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz/FMzPu0s= -cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU= -cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U= cloud.google.com/go/compute v1.23.0 h1:tP41Zoavr8ptEqaW6j+LQOnyBBhO7OkOMAGrgLopTwY= cloud.google.com/go/compute v1.23.0/go.mod h1:4tCnrn48xsqlwSAiLf1HXMQk8CONslYbdiEZc9FEIbM= -cloud.google.com/go/compute/metadata v0.2.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k= cloud.google.com/go/compute/metadata v0.2.4-0.20230617002413-005d2dfb6b68 h1:aRVqY1p2IJaBGStWMsQMpkAa83cPkCDLl80eOj0Rbz4= cloud.google.com/go/compute/metadata v0.2.4-0.20230617002413-005d2dfb6b68/go.mod h1:1a3eRNYX12fs5UABBIXS8HXVvQbX9hRB/RkEBPORpe8= cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= -cloud.google.com/go/iam v0.3.0/go.mod h1:XzJPvDayI+9zsASAFO68Hk07u3z+f+JrT2xXNdp4bnY= cloud.google.com/go/iam v1.1.1 h1:lW7fzj15aVIXYHREOqjRBV9PsH0Z6u8Y46a1YGvQP4Y= cloud.google.com/go/iam v1.1.1/go.mod h1:A5avdyVL2tCppe4unb0951eI9jreack+RJ0/d+KUZOU= cloud.google.com/go/kms v1.15.0 h1:xYl5WEaSekKYN5gGRyhjvZKM22GVBBCzegGNVPy+aIs= @@ -69,7 +58,6 @@ cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohl cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs= cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0= cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo= -cloud.google.com/go/storage v1.22.1/go.mod h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y= code.cloudfoundry.org/clock v1.0.0/go.mod h1:QD9Lzhd/ux6eNQVUDVRJX/RKTigpewimNYBi7ivZKY8= collectd.org v0.3.0/go.mod h1:A/8DzQBkF6abtvrT2j/AU/4tiBgJWYyh0y/oB/4MlWE= 
contrib.go.opencensus.io/exporter/prometheus v0.4.2 h1:sqfsYl5GIY/L570iT+l93ehxaWJs2/OwXtiWwew3oAg= @@ -119,7 +107,6 @@ github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0/go.mod h1:2e8rMJtl2+ github.com/Azure/azure-storage-queue-go v0.0.0-20181215014128-6ed74e755687/go.mod h1:K6am8mT+5iFXgingS9LUc7TmbsW6XBw3nxaRyaMyWc8= github.com/Azure/go-amqp v0.12.6/go.mod h1:qApuH6OFTSKZFmCOxccvAv5rLizBQf4v8pRmG138DPo= github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8= -github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E= github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 h1:L/gRVlceqvL25UVaW/CKtUDjefjrs0SPonmDGUVOYP0= github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E= github.com/Azure/go-autorest v10.7.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= @@ -218,7 +205,6 @@ github.com/Microsoft/ApplicationInsights-Go v0.4.2/go.mod h1:CukZ/G66zxXtI+h/VcV github.com/Microsoft/go-winio v0.4.3/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA= github.com/Microsoft/go-winio v0.4.9/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA= github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA= -github.com/Microsoft/go-winio v0.4.15/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw= github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0= github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84= github.com/Microsoft/go-winio v0.5.1/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84= @@ -268,7 +254,6 @@ github.com/afex/hystrix-go v0.0.0-20180502004556-fa1af6a1f4f5/go.mod h1:SkGFH1ia github.com/ajstarks/svgo v0.0.0-20180226025133-644b8db467af/go.mod h1:K08gAheRH3/J6wwsYMMT4xOr94bZjxIelGM0+d/wbFw= github.com/alecthomas/assert/v2 v2.2.2 h1:Z/iVC0xZfWTaFNE6bA3z07T86hd45Xe2eLt6WVy2bbk= github.com/alecthomas/assert/v2 v2.2.2/go.mod h1:pXcQ2Asjp247dahGEmsZ6ru0UVwnkhktn7S0bBDLxvQ= -github.com/alecthomas/kingpin/v2 v2.3.1/go.mod h1:oYL5vtsvEHZGHxU7DMp32Dvx+qL+ptGn6lWaot2vCNE= github.com/alecthomas/kingpin/v2 v2.3.2 h1:H0aULhgmSzN8xQ3nX1uxtdlTHYoPLu5AhHxWrKI6ocU= github.com/alecthomas/kingpin/v2 v2.3.2/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE= github.com/alecthomas/participle/v2 v2.0.0 h1:Fgrq+MbuSsJwIkw3fEj9h75vDP0Er5JzepJ0/HNHv0g= @@ -329,7 +314,6 @@ github.com/aws/aws-sdk-go v1.25.41/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpi github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/aws/aws-sdk-go v1.30.27/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0= github.com/aws/aws-sdk-go v1.34.34/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48= -github.com/aws/aws-sdk-go v1.35.24/go.mod h1:tlPOdRjfxPBpNIwqDj61rmsnA85v9jc0Ps9+muhnW+k= github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= github.com/aws/aws-sdk-go v1.38.68/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= github.com/aws/aws-sdk-go v1.45.2 h1:hTong9YUklQKqzrGk3WnKABReb5R8GjbG4Y6dEQfjnk= @@ -418,8 +402,8 @@ github.com/aws/smithy-go v1.14.2 h1:MJU9hqBGbvWZdApzpvoF2WAIJDbtjK2NDJSiJP7HblQ= github.com/aws/smithy-go v1.14.2/go.mod h1:Tg+OJXh4MB2R/uN61Ko2f6hTZwB/ZYGOtib8J3gBHzA= github.com/aybabtme/iocontrol v0.0.0-20150809002002-ad15bcfc95a0 h1:0NmehRCgyk5rljDQLKUO+cRJCnduDyn11+zGZIc9Z48= 
github.com/aybabtme/iocontrol v0.0.0-20150809002002-ad15bcfc95a0/go.mod h1:6L7zgvqo0idzI7IO8de6ZC051AfXb5ipkIJ7bIA2tGA= -github.com/beevik/ntp v0.3.0 h1:xzVrPrE4ziasFXgBVBZJDP0Wg/KpMwk2KHJ4Ba8GrDw= -github.com/beevik/ntp v0.3.0/go.mod h1:hIHWr+l3+/clUnF44zdK+CWW7fO8dR5cIylAQ76NRpg= +github.com/beevik/ntp v1.3.0 h1:/w5VhpW5BGKS37vFm1p9oVk/t4HnnkKZAZIubHM6F7Q= +github.com/beevik/ntp v1.3.0/go.mod h1:vD6h1um4kzXpqmLTuu0cCLcC+NfvC0IC+ltmEDA8E78= github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM= github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA= github.com/benbjohnson/clock v1.3.5 h1:VvXlSJBzZpA/zum6Sj74hxwYI2DIxRWuNIoXAzHZz5o= @@ -483,7 +467,6 @@ github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWR github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= github.com/cilium/ebpf v0.6.2/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs= -github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA= github.com/cilium/ebpf v0.11.0 h1:V8gS/bTCCjX9uUnkUFUpPsksM8n1lXBAvHcpiFk1X2Y= github.com/cilium/ebpf v0.11.0/go.mod h1:WE7CZAnqOL2RouJ4f1uyNhqr2P4CCvXFIqdRDUgWsVs= github.com/circonus-labs/circonus-gometrics v0.0.0-20161109192337-d17a8420c36e/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag= @@ -498,12 +481,8 @@ github.com/cloudflare/golz4 v0.0.0-20150217214814-ef862a3cdc58/go.mod h1:EOBUe0h github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI= github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4 h1:/inchEIKaYC1Akx+H+gqO04wryn5h75LSazbRlnya1k= github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cockroachdb/apd v1.1.0 h1:3LFP3629v+1aKXU5Q37mxmRxX/pIu1nijXydLShEq5I= @@ -522,7 +501,6 @@ github.com/containerd/continuity v0.0.0-20181203112020-004b46473808/go.mod h1:GL github.com/containerd/continuity v0.0.0-20190827140505-75bee3e2ccb6/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y= github.com/containerd/continuity v0.4.1 h1:wQnVrjIyQ8vhU2sgOiL5T07jo+ouqc2bnKsv5/EqGhU= github.com/containerd/continuity v0.4.1/go.mod h1:F6PTNCKepoxEaXLQp3wDAjygEnImnZ/7o4JzpodfroQ= -github.com/containerd/ttrpc v1.1.0/go.mod h1:XX4ZTnoOId4HklF4edwc4DcqskFZuvXB1Evzy5KFQpQ= github.com/containerd/ttrpc v1.2.2 h1:9vqZr0pxwOF5koz6N0N3kJ0zDHokrcPxIR/ZR2YFtOs= github.com/containerd/ttrpc 
v1.2.2/go.mod h1:sIT6l32Ph/H9cvnJsfXM5drIVzTr5A2flTf1G5tYZak= github.com/containerd/typeurl v1.0.2 h1:Chlt8zIieDbzQFzXzAeBEF92KhExuE4p9p92/QmY7aY= @@ -554,7 +532,6 @@ github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ github.com/creack/pty v1.1.18 h1:n56/Zwd5o6whRC5PMGretI4IdRLlmBXYNjScPaBgsbY= github.com/creack/pty v1.1.18/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4= github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4= -github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4= github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg= github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4= github.com/cyriltovena/cloudflare-go v0.27.1-0.20211118103540-ff77400bcb93 h1:PEBeRA25eDfHWkXNJs0HOnMhjIuKMcxKg/Z3VeuoRbU= @@ -603,12 +580,10 @@ github.com/docker/cli v20.10.11+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hH github.com/docker/cli v23.0.3+incompatible h1:Zcse1DuDqBdgI7OQDV8Go7b83xLgfhW1eza4HfEdxpY= github.com/docker/cli v23.0.3+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8= github.com/docker/distribution v2.6.0-rc.1.0.20170726174610-edc3ab29cdff+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= -github.com/docker/distribution v2.8.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= github.com/docker/distribution v2.8.2+incompatible h1:T3de5rq0dB1j30rp0sA2rER+m322EBzniBPB6ZIzuh8= github.com/docker/distribution v2.8.2+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= github.com/docker/docker v17.12.0-ce-rc1.0.20200916142827-bd33bbf0497b+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/docker v20.10.7+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= -github.com/docker/docker v20.10.21+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/docker v24.0.5+incompatible h1:WmgcE4fxyI6EEXxBRxsHnZXrO1pQ3smi0k/jho4HLeY= github.com/docker/docker v24.0.5+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/go-connections v0.3.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec= @@ -655,8 +630,8 @@ github.com/elastic/go-windows v1.0.1/go.mod h1:FoVvqWSun28vaDQPbj2Elfc0JahhPB7WQ github.com/elazarl/go-bindata-assetfs v0.0.0-20160803192304-e1a2a7ec64b0/go.mod h1:v+YaWX3bdea5J/mo8dSETolEo7R71Vk1u8bnjau5yw4= github.com/elazarl/goproxy v0.0.0-20170405201442-c4fc26588b6e/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc= github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc= -github.com/ema/qdisc v0.0.0-20230120214811-5b708f463de3 h1:Jrl8sD8wO34+EE1dV2vhOXrqFAZa/FILDnZRaV28+cw= -github.com/ema/qdisc v0.0.0-20230120214811-5b708f463de3/go.mod h1:FhIc0fLYi7f+lK5maMsesDqwYojIOh3VfRs8EVd5YJQ= +github.com/ema/qdisc v1.0.0 h1:EHLG08FVRbWLg8uRICa3xzC9Zm0m7HyMHfXobWFnXYg= +github.com/ema/qdisc v1.0.0/go.mod h1:FhIc0fLYi7f+lK5maMsesDqwYojIOh3VfRs8EVd5YJQ= github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs= github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs= github.com/emicklei/go-restful/v3 v3.10.2 h1:hIovbnmBTLjHXkqEBUz3HGpXZdM7ZrE9fJIZIqlJLqE= @@ -673,7 +648,6 @@ github.com/envoyproxy/go-control-plane 
v0.9.9-0.20201210154907-fd9021fe5dad/go.m github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk= github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ= github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0= -github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE= github.com/envoyproxy/go-control-plane v0.11.1 h1:wSUXTlLfiAQRWs2F+p+EKOY9rUyis1MyGqJ2DIk5HpM= github.com/envoyproxy/go-control-plane v0.11.1/go.mod h1:uhMcXKCQMEJHiAb0w+YGefQLaTEw+YhGluxZkrTmD0g= github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= @@ -730,6 +704,8 @@ github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9 github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v1.0.1-0.20190212211648-25d852aebe32/go.mod h1:GIjDIg/heH5DOkXY3YJ/wNhfHsQHoXGjl8G8amsYQ1I= +github.com/githubexporter/github-exporter v0.0.0-20230925090839-9e31cd0e7721 h1:QiyzldQ6CkWDD7sqSiPa0k++1xv48RMuoQi5pXoupa4= +github.com/githubexporter/github-exporter v0.0.0-20230925090839-9e31cd0e7721/go.mod h1:q49R4E4fu+HqGnSSSFpAuJIMm8DV5YNhKBW/Ke9SBPE= github.com/gliderlabs/ssh v0.2.2 h1:6zsha5zo/TWhRhwqCD3+EarCAgZ2yN28ipRnGPnwkI0= github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0= github.com/glinton/ping v0.1.4-0.20200311211934-5ac87da8cd96/go.mod h1:uY+1eqFUyotrQxF1wYFNtMeHp/swbYRsoGzfcPZ8x3o= @@ -876,7 +852,6 @@ github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MG github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 h1:ZpnhV/YsD2/4cESfV5+Hoeu/iUR3ruzNvZ+yQfO03a0= github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4= github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= -github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk= github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= github.com/gofrs/uuid v2.1.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM= @@ -959,8 +934,6 @@ github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Z github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.1.2 h1:xf4v41cLI2Z6FxbKm+8Bu+m8ifhj15JuZ9sa0jZCMUU= github.com/google/btree v1.1.2/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4= -github.com/google/cadvisor v0.47.0 h1:gZYmiMh5lolrxE/KoxpZPKJjZ8Bet0UjeXRilIOme1g= -github.com/google/cadvisor v0.47.0/go.mod h1:8Y5Q1eUA/uA5ud05l/3WF+209ni+Uf3qmKr4TTiXftM= github.com/google/dnsmasq_exporter v0.2.1-0.20230620100026-44b14480804a h1:cnKkC6FdLEg+Bg30BIrxSbRm+ktT+Dg0GUXsViJDMOQ= github.com/google/dnsmasq_exporter v0.2.1-0.20230620100026-44b14480804a/go.mod h1:0UipONgYNVYq/tP7xau4Kr5Xlv7jcb9te+sOSDjylnQ= github.com/google/flatbuffers v23.5.26+incompatible h1:M9dgRyhJemaM4Sw8+66GHBu8ioaQmyPLg1b8VwK5WJg= @@ -1033,23 +1006,17 @@ github.com/google/uuid v1.2.0/go.mod 
h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+ github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.3.1 h1:KjJaJ9iWZ3jOFZIf1Lqf4laDRCasjl0BCmnEGxkdLb4= github.com/google/uuid v1.3.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/googleapis/enterprise-certificate-proxy v0.0.0-20220520183353-fd19c99a87aa/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8= github.com/googleapis/enterprise-certificate-proxy v0.2.5 h1:UR4rDjcgpgEnqpIEvkiqTYKBCKLNmlge2eVjoZfySzM= github.com/googleapis/enterprise-certificate-proxy v0.2.5/go.mod h1:RxW0N9901Cko1VOCW3SXCpWP+mlIEkk2tP7jnHy9a3w= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0= -github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM= -github.com/googleapis/gax-go/v2 v2.2.0/go.mod h1:as02EH8zWkzwUoLbBaFeQ+arQaj/OthfcblKl4IGNaM= -github.com/googleapis/gax-go/v2 v2.3.0/go.mod h1:b8LNqSzNabLiUpXKkY7HAR5jr6bIT99EXz9pXxye9YM= -github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK9wbMD5+iXC6c= github.com/googleapis/gax-go/v2 v2.12.0 h1:A+gCJKdRfqXkr+BIRGtZLibNXf0m1f9E4HG56etFpas= github.com/googleapis/gax-go/v2 v2.12.0/go.mod h1:y+aIqrI5eb1YGMVJfuV3185Ts/D7qKpsEkdD5+I6QGU= github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY= github.com/googleapis/gnostic v0.1.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY= github.com/googleapis/gnostic v0.2.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY= github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg= -github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4= github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g= github.com/gopcua/opcua v0.1.12/go.mod h1:a6QH4F9XeODklCmWuvaOdL8v9H0d73CEKUHWVZLQyE8= github.com/gophercloud/gophercloud v0.0.0-20180828235145-f29afc2cceca/go.mod h1:3WdhXV3rUYy9p6AUW8d94kr+HS62Y4VL9mBnFxsD8q4= @@ -1077,41 +1044,45 @@ github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/ad github.com/gosnmp/gosnmp v1.36.0 h1:1Si+MImHcKIqFc3/kJEs2LOULP1nlFKlzPFyrMOk5Qk= github.com/gosnmp/gosnmp v1.36.0/go.mod h1:iLcZxN2MxKhH0jPQDVMZaSNypw1ykqVi27O79koQj6w= github.com/gotestyourself/gotestyourself v2.2.0+incompatible/go.mod h1:zZKM6oeNM8k+FRljX1mnzVYeS8wiGgQyvST1/GafPbY= +github.com/grafana/cadvisor v0.0.0-20230927082732-0d72868a513e h1:hCYDh2cmnNFAjwcMlrSuptDZqjXUc3he4h61/xL/ANY= +github.com/grafana/cadvisor v0.0.0-20230927082732-0d72868a513e/go.mod h1:XjiOCFjmxXIWwauV5p39Mr2Yxlpyk72uKQH1UZvd4fQ= github.com/grafana/ckit v0.0.0-20230906125525-c046c99a5c04 h1:tG8Qxq4dN1WqakMmsPaxaH4+OQhYg5HVsarw5acLBX8= github.com/grafana/ckit v0.0.0-20230906125525-c046c99a5c04/go.mod h1:HOnDIbkxfvVlDM5FBujt0uawGLfdpdTeqE7fIwfBmQk= github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2 h1:qhugDMdQ4Vp68H0tp/0iN17DM2ehRo1rLEdOFe/gB8I= github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2/go.mod h1:w/aiO1POVIeXUQyl0VQSZjl5OAGDTL5aX+4v0RA1tcw= github.com/grafana/dskit v0.0.0-20230829141140-06955c011ffd h1:RHZuBHWNS2HRJ5XhQK7cKP11EMMJPtJO2xKvQ+ws+PU= 
github.com/grafana/dskit v0.0.0-20230829141140-06955c011ffd/go.mod h1:3u7fr4hmOhuUL9Yc1QP/oa3za73kxvqJnRJH4BA5fOM= -github.com/grafana/github-exporter v0.0.0-20230418063919-fa34e926116a h1:pvi8kX3TWSQcdHNFDxUOyItgUqebu3vtbA2L/tMYX7w= -github.com/grafana/github-exporter v0.0.0-20230418063919-fa34e926116a/go.mod h1:6XoOvFDTfk3aqGaOLHLxoWiZNx4zHobApOhKc3oHF/g= github.com/grafana/go-gelf/v2 v2.0.1 h1:BOChP0h/jLeD+7F9mL7tq10xVkDG15he3T1zHuQaWak= github.com/grafana/go-gelf/v2 v2.0.1/go.mod h1:lexHie0xzYGwCgiRGcvZ723bSNyNI8ZRD4s0CLobh90= github.com/grafana/gocql v0.0.0-20200605141915-ba5dc39ece85/go.mod h1:crI9WX6p0IhrqB+DqIUHulRW853PaNFf7o4UprV//3I= github.com/grafana/gomemcache v0.0.0-20230316202710-a081dae0aba9 h1:WB3bGH2f1UN6jkd6uAEWfHB8OD7dKJ0v2Oo6SNfhpfQ= github.com/grafana/gomemcache v0.0.0-20230316202710-a081dae0aba9/go.mod h1:PGk3RjYHpxMM8HFPhKKo+vve3DdlPUELZLSDEFehPuU= -github.com/grafana/loki v1.6.2-0.20230927083715-42fba5b19183 h1:klVM7g0Tl31QSd9dvypCe5XAQ49AM+acGAH2DwXdllA= -github.com/grafana/loki v1.6.2-0.20230927083715-42fba5b19183/go.mod h1:hiGUcnMCZ86m122rzu8xySnRiGDQ8TzPowVmCvWZAJ4= +github.com/grafana/loki v1.6.2-0.20231004111112-07cbef92268a h1:lvSHlNONeo/H+aWRk86QEfBpRDCEX1yoqpsCK0Tys+g= +github.com/grafana/loki v1.6.2-0.20231004111112-07cbef92268a/go.mod h1:a5c5ZTC6FNufKkvF8NeDAb2nCWJpgkVDrejmV+O9hac= github.com/grafana/loki/pkg/push v0.0.0-20230904153656-e4cc2a4f5ec8 h1:yQK/dX7WBva5QvITvmIcbv4boLwSo65a8zjuZcucnko= github.com/grafana/loki/pkg/push v0.0.0-20230904153656-e4cc2a4f5ec8/go.mod h1:5ll3An1wAxYejo6aM04+3/lc6N4joYVYLY5U+Z4O6vI= github.com/grafana/mysqld_exporter v0.12.2-0.20201015182516-5ac885b2d38a h1:D5NSR64/6xMXnSFD9y1m1DPYIcBcHvtfeuI9/M/0qtI= github.com/grafana/mysqld_exporter v0.12.2-0.20201015182516-5ac885b2d38a/go.mod h1:rjb/swXiCWLlC3gWlyugy/xEOZioF5PclbB8sf/9p/Q= +github.com/grafana/node_exporter v0.18.1-grafana-r01.0.20231004161416-702318429731 h1:vyyIYY2sLpmgFIckJ1vSO/oYkvB0thDF6UiFYp5PThM= +github.com/grafana/node_exporter v0.18.1-grafana-r01.0.20231004161416-702318429731/go.mod h1:vOZxEzxm0nZmuNqjtIfvtmvdRtJik9POmcN5mQVLf5E= github.com/grafana/opentelemetry-collector v0.4.1-0.20230925123210-ef4435f79a8a h1:co7JnXySBilXDBu0hhw3furJZLJg9SFojos1PMO+lW4= github.com/grafana/opentelemetry-collector v0.4.1-0.20230925123210-ef4435f79a8a/go.mod h1:jcETa0UJmwkDSyhkOTwQi8rgie1M3TjsIO98KeGM2vk= github.com/grafana/perflib_exporter v0.1.1-0.20230511173423-6166026bd090 h1:Ko80Xcl7xo1eYqkqLUb9AVVCLGVmuQp2jOV69hEEeZw= github.com/grafana/perflib_exporter v0.1.1-0.20230511173423-6166026bd090/go.mod h1:MinSWm88jguXFFrGsP56PtleUb4Qtm4tNRH/wXNXRTI= github.com/grafana/postgres_exporter v0.8.1-0.20210722175051-db35d7c2f520 h1:HnFWqxhoSF3WC7sKAdMZ+SRXvHLVZlZ3sbQjuUlTqkw= github.com/grafana/postgres_exporter v0.8.1-0.20210722175051-db35d7c2f520/go.mod h1:+HPXgiOV0InDHcZ2jNijL1SOKvo0eEPege5fQA0+ICI= +github.com/grafana/prometheus v1.8.2-0.20231003113207-17e15326a784 h1:B8AAFKq7WQPUYdoGwWjgxFARn5XodYlKJNHCYUopah8= +github.com/grafana/prometheus v1.8.2-0.20231003113207-17e15326a784/go.mod h1:10L5IJE5CEsjee1FnOcVswYXlPIscDWWt3IJ2UDYrz4= github.com/grafana/pyroscope-go/godeltaprof v0.1.3 h1:eunWpv1B3Z7ZK9o4499EmQGlY+CsDmSZ4FbxjRx37uk= github.com/grafana/pyroscope-go/godeltaprof v0.1.3/go.mod h1:1HSPtjU8vLG0jE9JrTdzjgFqdJ/VgN7fvxBNq3luJko= github.com/grafana/pyroscope/api v0.2.0 h1:TzOxL0s6SiaLEy944ZAKgHcx/JDRJXu4O8ObwkqR6p4= github.com/grafana/pyroscope/api v0.2.0/go.mod h1:nhH+xai9cYFgs6lMy/+L0pKj0d5yCMwji/QAiQFCP+U= -github.com/grafana/pyroscope/ebpf v0.2.1 
h1:OE/i5NMsM8cuD9ibRhC2Ms7wz1t5vW1Ijb9UkmTWMSE= -github.com/grafana/pyroscope/ebpf v0.2.1/go.mod h1:KTvAJ+C8PFW2C0aI0KzSUhsqslQCcWvw3eECQkuvfSQ= +github.com/grafana/pyroscope/ebpf v0.2.3 h1:OH7Un2x0UN998U85by4vyvImHs6mkFTo45SnO+PjHdk= +github.com/grafana/pyroscope/ebpf v0.2.3/go.mod h1:NO9mIMKewDuohQlYaj2Q0v3miUmREjGpadz8RuA76Jw= github.com/grafana/regexp v0.0.0-20221123153739-15dc172cd2db h1:7aN5cccjIqCLTzedH7MZzRZt5/lsAHch6Z3L2ZGn5FA= github.com/grafana/regexp v0.0.0-20221123153739-15dc172cd2db/go.mod h1:M5qHK+eWfAv8VR/265dIuEpL3fNfeC21tXXp9itM24A= -github.com/grafana/river v0.1.2-0.20230830200459-0ff21cf610eb h1:hOblg36rOTgGIOp7A3+53OtlXqq0iNnI9qDcOn7fPfQ= -github.com/grafana/river v0.1.2-0.20230830200459-0ff21cf610eb/go.mod h1:F8rcwfPL98xF/m+OSCFH7gJyLPYMIadXCVfY+/OcqEI= +github.com/grafana/river v0.1.2-0.20231003183959-75f893ffa7df h1:jgguNFI2lCCOD49W1WS+TYHhN5KL5aTXTbn2N14tQVs= +github.com/grafana/river v0.1.2-0.20231003183959-75f893ffa7df/go.mod h1:icSidCSHYXJUYy6TjGAi/D+X7FsP7Gc7cxvBUIwYMmY= github.com/grafana/smimesign v0.2.1-0.20220408144937-2a5adf3481d3 h1:UPkAxuhlAcRmJT3/qd34OMTl+ZU7BLLfOO2+NXBlJpY= github.com/grafana/smimesign v0.2.1-0.20220408144937-2a5adf3481d3/go.mod h1:iZiiwNT4HbtGRVqCQu7uJPEZCuEE5sfSSttcnePkDl4= github.com/grafana/snowflake-prometheus-exporter v0.0.0-20221213150626-862cad8e9538 h1:tkT0yha3JzB5S5VNjfY4lT0cJAe20pU8XGt3Nuq73rM= @@ -1300,8 +1271,8 @@ github.com/hashicorp/yamux v0.0.0-20190923154419-df201c70410d h1:W+SIwDdl3+jXWei github.com/hashicorp/yamux v0.0.0-20190923154419-df201c70410d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM= github.com/heroku/x v0.0.61 h1:yfoAAtnFWSFZj+UlS+RZL/h8QYEp1R4wHVEg0G+Hwh4= github.com/heroku/x v0.0.61/go.mod h1:C7xYbpMdond+s6L5VpniDUSVPRwm3kZum1o7XiD5ZHk= -github.com/hetznercloud/hcloud-go v1.45.1 h1:nl0OOklFfQT5J6AaNIOhl5Ruh3fhmGmhvZEqHbibVuk= -github.com/hetznercloud/hcloud-go v1.45.1/go.mod h1:aAUGxSfSnB8/lVXHNEDxtCT1jykaul8kqjD7f5KQXF8= +github.com/hetznercloud/hcloud-go/v2 v2.0.0 h1:Sg1DJ+MAKvbYAqaBaq9tPbwXBS2ckPIaMtVdUjKu+4g= +github.com/hetznercloud/hcloud-go/v2 v2.0.0/go.mod h1:4iUG2NG8b61IAwNx6UsMWQ6IfIf/i1RsG0BbsKAyR5Q= github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= github.com/hjson/hjson-go/v4 v4.0.0/go.mod h1:KaYt3bTw3zhBjYqnXkYywcYctk0A2nxeEFTse3rH13E= @@ -1318,6 +1289,8 @@ github.com/iancoleman/strcase v0.3.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47 github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w= +github.com/ianlancetaylor/demangle v0.0.0-20230524184225-eabc099b10ab h1:BA4a7pe6ZTd9F8kXETBoijjFJ/ntaa//1wiH9BZu4zU= +github.com/ianlancetaylor/demangle v0.0.0-20230524184225-eabc099b10ab/go.mod h1:gx7rwoVhcfuVKG5uya9Hs3Sxj7EIvldVofAWIUtGouw= github.com/illumos/go-kstat v0.0.0-20210513183136-173c9b0a9973 h1:hk4LPqXIY/c9XzRbe7dA6qQxaT6Axcbny0L/G5a4owQ= github.com/illumos/go-kstat v0.0.0-20210513183136-173c9b0a9973/go.mod h1:PoK3ejP3LJkGTzKqRlpvCIFas3ncU02v8zzWDW+g0FY= github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA= @@ -1441,8 +1414,8 @@ github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2E 
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4= github.com/jsimonetti/rtnetlink v0.0.0-20190606172950-9527aa82566a/go.mod h1:Oz+70psSo5OFh8DBl0Zv2ACw7Esh6pPUphlvZG9x7uw= github.com/jsimonetti/rtnetlink v0.0.0-20200117123717-f846d4f6c1f4/go.mod h1:WGuG/smIU4J/54PblvSbh+xvCZmpJnFgr3ds6Z55XMQ= -github.com/jsimonetti/rtnetlink v1.3.2 h1:dcn0uWkfxycEEyNy0IGfx3GrhQ38LH7odjxAghimsVI= -github.com/jsimonetti/rtnetlink v1.3.2/go.mod h1:BBu4jZCpTjP6Gk0/wfrO8qcqymnN3g0hoFqObRmUo6U= +github.com/jsimonetti/rtnetlink v1.3.5 h1:hVlNQNRlLDGZz31gBPicsG7Q53rnlsz1l1Ix/9XlpVA= +github.com/jsimonetti/rtnetlink v1.3.5/go.mod h1:0LFedyiTkebnd43tE4YAkWGIq9jQphow4CcwxaT2Y00= github.com/json-iterator/go v0.0.0-20180612202835-f2b4162afba3/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.5/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= @@ -1587,17 +1560,16 @@ github.com/mattn/go-sqlite3 v1.9.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOq github.com/mattn/go-xmlrpc v0.0.3 h1:Y6WEMLEsqs3RviBrAa1/7qmbGB7DVD3brZIbqMbQdGY= github.com/mattn/go-xmlrpc v0.0.3/go.mod h1:mqc2dz7tP5x5BKlCahN/n+hs7OSZKJkS9JsHNBRlrxA= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= -github.com/matttproud/golang_protobuf_extensions v1.0.2/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4= github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo= github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4= github.com/maxatome/go-testdeep v1.12.0 h1:Ql7Go8Tg0C1D/uMMX59LAoYK7LffeJQ6X2T04nTH68g= github.com/maxatome/go-testdeep v1.12.0/go.mod h1:lPZc/HAcJMP92l7yI6TRz1aZN5URwUBUAfUNvrclaNM= github.com/mdlayher/apcupsd v0.0.0-20200608131503-2bf01da7bf1b/go.mod h1:WYK/Z/aXq9cbMFIL5ihcA4sX/r/3/WCas/Qvs/2fXcA= -github.com/mdlayher/ethtool v0.0.0-20221212131811-ba3b4bc2e02c h1:Y7LoKqIgD7vmqJ7+6ZVnADuwUO+m3tGXbf2lK0OvjIw= -github.com/mdlayher/ethtool v0.0.0-20221212131811-ba3b4bc2e02c/go.mod h1:i0nPbE+sL2G3OtdIb9SXxW/T4UiAwh6rxPW7zcuX+KQ= +github.com/mdlayher/ethtool v0.1.0 h1:XAWHsmKhyPOo42qq/yTPb0eFBGUKKTR1rE0dVrWVQ0Y= +github.com/mdlayher/ethtool v0.1.0/go.mod h1:fBMLn2UhfRGtcH5ZFjr+6GUiHEjZsItFD7fSn7jbZVQ= github.com/mdlayher/genetlink v1.0.0/go.mod h1:0rJ0h4itni50A86M2kHcgS85ttZazNt7a8H2a2cw0Gc= -github.com/mdlayher/genetlink v1.3.1 h1:roBiPnual+eqtRkKX2Jb8UQN5ZPWnhDCGj/wR6Jlz2w= -github.com/mdlayher/genetlink v1.3.1/go.mod h1:uaIPxkWmGk753VVIzDtROxQ8+T+dkHqOI0vB1NA9S/Q= +github.com/mdlayher/genetlink v1.3.2 h1:KdrNKe+CTu+IbZnm/GVUMXSqBBLqcGpRDa0xkQy56gw= +github.com/mdlayher/genetlink v1.3.2/go.mod h1:tcC3pkCrPUGIKKsCsp0B3AdaaKuHtaxoJRz3cc+528o= github.com/mdlayher/netlink v0.0.0-20190409211403-11939a169225/go.mod h1:eQB3mZE4aiYnlUsyGGCOpPETfdQq4Jhsgf1fk3cwQaA= github.com/mdlayher/netlink v1.0.0/go.mod h1:KxeJAFOFLG6AjpyDkQ/iIhxygIUKD+vcwqcnu43w/+M= github.com/mdlayher/netlink v1.1.0/go.mod h1:H4WCitaheIsdF9yOYu8CFmCgQthAPIWZmcKp9uZHgmY= @@ -1605,8 +1577,8 @@ github.com/mdlayher/netlink v1.7.2 h1:/UtM3ofJap7Vl4QWCPDGXY8d3GIY2UGSDbK+QWmY8/ github.com/mdlayher/netlink v1.7.2/go.mod h1:xraEF7uJbxLhc5fpHL4cPe221LI2bdttWlU+ZGLfQSw= github.com/mdlayher/socket v0.4.1 h1:eM9y2/jlbs1M615oshPQOHZzj6R6wMT7bX5NPiQvn2U= github.com/mdlayher/socket v0.4.1/go.mod h1:cAqeGjoufqdxWkD7DkpyS+wcefOtmu5OQ8KuoJGIReA= 
-github.com/mdlayher/wifi v0.0.0-20220330172155-a44c70b6d3c8 h1:/HCRFfpoICSWHvNrJ356VO4opd9dg/LaU7m8Tzdf39c= -github.com/mdlayher/wifi v0.0.0-20220330172155-a44c70b6d3c8/go.mod h1:IqdtNfemiXr50M8tnxLWSFdZKZ9vcI1Mgt0oTrCIS7A= +github.com/mdlayher/wifi v0.1.0 h1:y8wYRUXwok5CtUZOXT3egghYesX0O79E3ALl+SIDm9Q= +github.com/mdlayher/wifi v0.1.0/go.mod h1:+gBYnZAMcUKHSFzMJXwlz7tLsEHgwDJ9DJCefhJM+gI= github.com/metalmatze/signal v0.0.0-20210307161603-1c9aa721a97a h1:0usWxe5SGXKQovz3p+BiQ81Jy845xSMu2CWKuXsXuUM= github.com/metalmatze/signal v0.0.0-20210307161603-1c9aa721a97a/go.mod h1:3OETvrxfELvGsU2RoGGWercfeZ4bCL3+SOwzIWtJH/Q= github.com/microsoft/go-mssqldb v0.19.0 h1:LMRSgLcNMF8paPX14xlyQBmBH+jnFylPsYpVZf86eHM= @@ -1664,13 +1636,11 @@ github.com/moby/patternmatcher v0.5.0 h1:YCZgJOeULcxLw1Q+sVR636pmS7sPEn1Qo2iAN6M github.com/moby/patternmatcher v0.5.0/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc= github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c= github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A= -github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU= github.com/moby/sys/mountinfo v0.6.2 h1:BzJjoreD5BMFNmD9Rus6gdd1pLuecOFPt8wC+Vygl78= github.com/moby/sys/mountinfo v0.6.2/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI= github.com/moby/sys/sequential v0.5.0 h1:OPvI35Lzn9K04PBbCLW0g4LcFAJgHsvXsRyewg5lXtc= github.com/moby/sys/sequential v0.5.0/go.mod h1:tH2cOOs5V9MlPiXcQzRC+eEyab644PWKGRYaaV5ZZlo= github.com/moby/term v0.0.0-20201216013528-df9cb8a40635/go.mod h1:FBS0z0QWA44HXygs7VXDUOGoN/1TV3RuWkLO04am3wc= -github.com/moby/term v0.0.0-20221205130635-1aeaba878587/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y= github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0= github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= @@ -1732,8 +1702,8 @@ github.com/npillmayer/nestext v0.1.3/go.mod h1:h2lrijH8jpicr25dFY+oAJLyzlya6jhnu github.com/nsqio/go-nsq v1.0.7/go.mod h1:XP5zaUs3pqf+Q71EqUJs3HYfBIqfK6G83WQMdNN+Ito= github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE= github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU= -github.com/ohler55/ojg v1.19.2 h1:XvMP9oF6jo3T4eUyCsDwi3YFv0bGyQ4PvYBSMF9LiBM= -github.com/ohler55/ojg v1.19.2/go.mod h1:uHcD1ErbErC27Zhb5Df2jUjbseLLcmOCo6oxSr3jZxo= +github.com/ohler55/ojg v1.19.3 h1:rFmEc33aZOvlwb7tibAmwVGEiPfMZkgvurK0YDDr1HI= +github.com/ohler55/ojg v1.19.3/go.mod h1:uHcD1ErbErC27Zhb5Df2jUjbseLLcmOCo6oxSr3jZxo= github.com/oklog/oklog v0.3.2/go.mod h1:FCV+B7mhrz4o+ueLpx+KqkyXRGMWOYEvfiXtdGtbWGs= github.com/oklog/run v0.0.0-20180308005104-6934b124db28/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA= github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA= @@ -1744,8 +1714,8 @@ github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo= github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= -github.com/oliver006/redis_exporter v1.51.0 h1:nhRpeawfiraIiYprVY1l8iaiXjQn6fqG24G9R7QoXNA= -github.com/oliver006/redis_exporter 
v1.51.0/go.mod h1:DygrV+nXovoyQv2KDb454PG33ktpu0BoLr5bUeNT64A= +github.com/oliver006/redis_exporter v1.54.0 h1:Z60r78IW+OqHz0v8Ocfg8Nz7wKz7NB9+eJa+EkQQicQ= +github.com/oliver006/redis_exporter v1.54.0/go.mod h1:izaCCnJvn8Js8L6ZZ3ZnXZrkw7xTKI7y0TKKRnTZe1g= github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= @@ -1833,6 +1803,8 @@ github.com/open-telemetry/opentelemetry-collector-contrib/processor/spanprocesso github.com/open-telemetry/opentelemetry-collector-contrib/processor/spanprocessor v0.85.0/go.mod h1:Rp+79qY7tJEgja2kHdWkWS0WDHP4+KRPfHmttCjbwDo= github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor v0.85.0 h1:R4oQpH2bRx6UEGDdv15S2GkkkDFRbjXLO1b8fcjS8Lg= github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor v0.85.0/go.mod h1:WuIjXo1AF9FWkG5HVSnm5koj27SEoVmENH3YVDNR1x4= +github.com/open-telemetry/opentelemetry-collector-contrib/processor/transformprocessor v0.85.0 h1:8bhHzQpYe0zFANlQU+yRvhqsVFMRpaqtBVlHzyPqHkc= +github.com/open-telemetry/opentelemetry-collector-contrib/processor/transformprocessor v0.85.0/go.mod h1:D1VNHbfUdVgtWiXUwr6OBetOl+TVDsZ8PkC5Fji1AAc= github.com/open-telemetry/opentelemetry-collector-contrib/receiver/jaegerreceiver v0.85.0 h1:1oQGK9OibOxvtJcnCdB6+r6jGpzSp3DrcoeaVgCwP28= github.com/open-telemetry/opentelemetry-collector-contrib/receiver/jaegerreceiver v0.85.0/go.mod h1:GNnx5ftrhf2bBSVIUzvfAkXBBjHfUYvmUOuR7mJWUOE= github.com/open-telemetry/opentelemetry-collector-contrib/receiver/kafkareceiver v0.85.0 h1:yDfzy7NPgjWptm/wmbLunsbV4L6YzY1fwXY5i++afFU= @@ -1853,15 +1825,12 @@ github.com/opencontainers/image-spec v1.1.0-rc4 h1:oOxKUJWnFC4YGHCCMNql1x4YaDfYB github.com/opencontainers/image-spec v1.1.0-rc4/go.mod h1:X4pATf0uXsnn3g5aiGIsVnJBR4mxhKzfwmvK/B2NTm8= github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U= github.com/opencontainers/runc v1.0.2/go.mod h1:aTaHFFwQXuA71CiyxOdFFIorAoemI04suvGRQFzWTD0= -github.com/opencontainers/runc v1.1.4/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg= -github.com/opencontainers/runc v1.1.5 h1:L44KXEpKmfWDcS02aeGm8QNTFXTo2D+8MYGDIJ/GDEs= -github.com/opencontainers/runc v1.1.5/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg= +github.com/opencontainers/runc v1.1.9 h1:XR0VIHTGce5eWPkaPesqTBrhW2yAcaraWfsEalNwQLM= +github.com/opencontainers/runc v1.1.9/go.mod h1:CbUumNnWCuTGFukNXahoo/RFBZvDAgRh/smNYNOhA50= github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0= -github.com/opencontainers/runtime-spec v1.0.3-0.20220909204839-494a5a6aca78/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0= github.com/opencontainers/runtime-spec v1.1.0-rc.1 h1:wHa9jroFfKGQqFHj0I1fMRKLl0pfj+ynAqBxo3v6u9w= github.com/opencontainers/runtime-spec v1.1.0-rc.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0= github.com/opencontainers/selinux v1.8.2/go.mod h1:MUIHuUEvKB1wtJjQdOyYRgOnLD2xAPP8dBsCoU0KuF8= -github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI= github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU= github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec= github.com/openshift/api 
v0.0.0-20210521075222-e273a339932a/go.mod h1:izBmoXbUu3z5kUa4FjZhvekTsyzIWiOoaIgJiZBBMQs= @@ -1981,8 +1950,6 @@ github.com/prometheus/client_golang v1.11.1/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqr github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY= github.com/prometheus/client_golang v1.12.2/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY= github.com/prometheus/client_golang v1.13.0/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ= -github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y= -github.com/prometheus/client_golang v1.15.1/go.mod h1:e9yaBhRPU2pPNsZwE+JdQl0KEt1N9XgF6zxWmaC0xOk= github.com/prometheus/client_golang v1.16.0 h1:yk/hx9hDbrGHovbci4BY+pRMfSuuat626eFsHb7tmT8= github.com/prometheus/client_golang v1.16.0/go.mod h1:Zsulrv/L9oM40tJ7T815tM89lFEugiJ9HzIqaAx4LKc= github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= @@ -1992,7 +1959,6 @@ github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1: github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w= github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUooCfx1yqY= github.com/prometheus/client_model v0.4.0/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU= github.com/prometheus/common v0.0.0-20180326160409-38c53a9f4bfc/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= @@ -2009,8 +1975,6 @@ github.com/prometheus/common v0.31.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+ github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls= github.com/prometheus/common v0.35.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA= github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA= -github.com/prometheus/common v0.38.0/go.mod h1:MBXfmBQZrK5XpbCkjofnXs96LD2QQ7fEq4C0xjC/yec= -github.com/prometheus/common v0.42.0/go.mod h1:xBwqVerjNdUDjgODMpudtOMwlOwf2SaTr1yjz4b7Zbc= github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY= github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY= github.com/prometheus/common/sigv4 v0.1.0 h1:qoVebwtwwEhS85Czm2dSROY5fTo2PAPEVdDeppTwGX4= @@ -2024,8 +1988,6 @@ github.com/prometheus/exporter-toolkit v0.10.1-0.20230714054209-2f4150c63f97 h1: github.com/prometheus/exporter-toolkit v0.10.1-0.20230714054209-2f4150c63f97/go.mod h1:LoBCZeRh+5hX+fSULNyFnagYlQG/gBsyA/deNzROkq8= github.com/prometheus/memcached_exporter v0.13.0 h1:d246RYODFCXy39XA8S2PBrqp5jLCSvl9b4KsYspDCHk= github.com/prometheus/memcached_exporter v0.13.0/go.mod h1:fp7Wk6v0RFijeP3Syvd1TShBSJoCG5iFfvPdi5dCMEU= -github.com/prometheus/node_exporter v1.6.0 h1:TKPvENRy8/yhKQWf862ssecaT9kQ1jnW9mQDtTzTIAU= -github.com/prometheus/node_exporter v1.6.0/go.mod h1:+zK+m9vwxu19JHl/kVVmixdCT6fWWHlmcOUHDFpkt0Y= github.com/prometheus/procfs v0.0.0-20180408092902-8b1c2da0d56d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= 
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= @@ -2039,13 +2001,10 @@ github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4O github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA= github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA= github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4= -github.com/prometheus/procfs v0.9.0/go.mod h1:+pB4zwohETzFnmlpe6yd2lSc+0/46IYZRB/chUwxUZY= -github.com/prometheus/procfs v0.11.0 h1:5EAgkfkMl659uZPbe9AS2N68a7Cc1TJbPEuGzFuRbyk= -github.com/prometheus/procfs v0.11.0/go.mod h1:nwNm2aOCAYw8uTR/9bWRREkZFxAUcWzPHWJq+XBB/FM= -github.com/prometheus/prometheus v0.45.0 h1:O/uG+Nw4kNxx/jDPxmjsSDd+9Ohql6E7ZSY1x5x/0KI= -github.com/prometheus/prometheus v0.45.0/go.mod h1:jC5hyO8ItJBnDWGecbEucMyXjzxGv9cxsxsjS9u5s1w= -github.com/prometheus/snmp_exporter v0.23.0 h1:v+NUGGSj2a8QaLC4+cWAlTNWoI0qEZ8cEuJmtpVhsew= -github.com/prometheus/snmp_exporter v0.23.0/go.mod h1:vdODeAhHaSbDD2B4yngJgkWkAB393LuCGJUbK8AFfFU= +github.com/prometheus/procfs v0.11.1 h1:xRC8Iq1yyca5ypa9n1EZnWZkt7dwcoRPQwX/5gwaUuI= +github.com/prometheus/procfs v0.11.1/go.mod h1:eesXgaPo1q7lBpVMoMy0ZOFTth9hBn4W/y0/p/ScXhY= +github.com/prometheus/snmp_exporter v0.24.1 h1:AihTbJHurMo8bjtjJde8U+4gMEvpvYvT21Xbd4SzJgY= +github.com/prometheus/snmp_exporter v0.24.1/go.mod h1:j6uIGkdR0DXvKn7HJtSkeDj//UY0sWmdd6XhvdBjln0= github.com/prometheus/statsd_exporter v0.22.7/go.mod h1:N/TevpjkIh9ccs6nuzY3jQn9dFqnUakOjnEuMPJJJnI= github.com/prometheus/statsd_exporter v0.22.8 h1:Qo2D9ZzaQG+id9i5NYNGmbf1aa/KxKbB9aKfMS+Yib0= github.com/prometheus/statsd_exporter v0.22.8/go.mod h1:/DzwbTEaFTE0Ojz5PqcSk6+PFHOPWGxdXVr6yC8eFOM= @@ -2073,7 +2032,6 @@ github.com/rogpeppe/go-internal v1.2.2/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFR github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc= github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE= -github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M= github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA= github.com/rs/cors v1.10.0 h1:62NOS1h+r8p1mW6FM0FSB0exioXLhd/sh15KpjWBZ+8= @@ -2189,7 +2147,6 @@ github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/viper v1.16.0 h1:rGGH0XDZhdUOryiDWjmIvUSWpbNqisK8Wk0Vyefw8hc= github.com/spf13/viper v1.16.0/go.mod h1:yg78JgCJcbrQOvV9YLXgkLaZqUidkY9K+Dd1FofRzQg= -github.com/steinfletcher/apitest v1.3.8/go.mod h1:LOVbGzWvWCiiVE4PZByfhRnA5L00l5uZQEx403xQ4K8= github.com/streadway/amqp v0.0.0-20180528204448-e5adc2ada8b8/go.mod h1:1WNBiOZtZQLpVAyu0iTduoJL9hEsMloAK5XWrtW0xdY= github.com/streadway/amqp v0.0.0-20190404075320-75d898a42a94/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw= github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw= @@ -2314,7 +2271,6 @@ github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHo github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod 
h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= -github.com/xhit/go-str2duration v1.2.0/go.mod h1:3cPSlfZlUHVlneIVfePFWcJZsuwf+P1v2SRTV4cUmp4= github.com/xhit/go-str2duration/v2 v2.1.0 h1:lxklc02Drh6ynqX+DdPyp5pCKLUQpRT8bp8Ydu2Bstc= github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= @@ -2539,6 +2495,7 @@ golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0 golang.org/x/crypto v0.0.0-20220829220503-c86fa9a7ed90/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.3.0/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4= golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58= +golang.org/x/crypto v0.10.0/go.mod h1:o4eNf7Ede1fv+hwOwZsTHl9EsPFO6q6ZvYR8vYfY45I= golang.org/x/crypto v0.13.0 h1:mvySKfSWJ+UKUii46M40LOvyWfN0s2U+46/jDd0e6Ck= golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc= golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= @@ -2585,6 +2542,7 @@ golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc= golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -2654,18 +2612,14 @@ golang.org/x/net v0.0.0-20211029224645-99673261e6eb/go.mod h1:9nx3DQGgdP8bBQD5qx golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= -golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= -golang.org/x/net v0.0.0-20220805013720-a33c5aa5df48/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk= golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= -golang.org/x/net v0.4.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE= golang.org/x/net v0.6.0/go.mod 
h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.11.0/go.mod h1:2L/ixqYpgIVXmeoSA/4Lu7BzTG4KIyPIryS4IsOd1oQ= golang.org/x/net v0.15.0 h1:ugBLEUaxABaB5AJqW9enI0ACdci2RUd4eP51NTBvuJ8= golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk= golang.org/x/oauth2 v0.0.0-20170807180024-9a379c6b3e95/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= @@ -2684,13 +2638,7 @@ golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE= -golang.org/x/oauth2 v0.0.0-20220722155238-128564f6959c/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg= -golang.org/x/oauth2 v0.5.0/go.mod h1:9/XBHVqLaWO3/BRHs5jbpYCnOZVjj5V0ndyaAM7KB4I= golang.org/x/oauth2 v0.11.0 h1:vPL4xzxBM4niKCW6g9whtaWVXTJf1U5e4aZxxFx/gbU= golang.org/x/oauth2 v0.11.0/go.mod h1:LdF7O/8bLR/qWK9DrpXmbHLTouvRHK0SgJl0GmDBchk= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -2814,27 +2762,16 @@ golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211031064116-611d5d643895/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys 
v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220224120231-95c6836cb0e7/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220328115105-d36c6a25d886/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220708085239-5a0f0661e09d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= @@ -2843,10 +2780,11 @@ golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.12.0 h1:CM0HF96J0hcLAwsHPJZjfdNzs0gftsLfgKt57wWHJ0o= golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= @@ -2856,8 +2794,9 @@ golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9sn golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= -golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.9.0/go.mod h1:M6DEAAIenWoTxdKrOltXcmDY3rSplQUkrvaDU5FcQyo= golang.org/x/term v0.12.0 h1:/ZfYdc3zq+q02Rv9vGqTeSItdzZTSNDmfTi0mBAuidU= golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -2873,8 +2812,9 @@ 
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.10.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k= golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= @@ -2964,6 +2904,7 @@ golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= golang.org/x/tools v0.12.0 h1:YW6HUoUmYBpwSgyaGaZq1fHjrBjX1rlpZ54T6mu2kss= golang.org/x/tools v0.12.0/go.mod h1:Sc0INKfu04TlqNoRA1hgpFZbhYXHPr4V5DzpSBTPqQM= golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -2972,9 +2913,6 @@ golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8T golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8= -golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8= golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk= golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8= golang.zx2c4.com/wireguard v0.0.20200121/go.mod h1:P2HsVp8SKwZEufsnezXZA4GRX/T49/HlU7DGuelXsU4= @@ -3016,18 +2954,7 @@ google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNe google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU= google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k= google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE= -google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE= google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI= -google.golang.org/api v0.61.0/go.mod h1:xQRti5UdCmoCEqFxcz93fTl338AVqDgyaDRuOZ3hg9I= -google.golang.org/api v0.63.0/go.mod h1:gs4ij2ffTRXwuzzgJl/56BdwJaA194ijkfn++9tDuPo= -google.golang.org/api v0.67.0/go.mod h1:ShHKP8E60yPsKNw/w8w+VYaj9H6buA5UqDp8dhbQZ6g= -google.golang.org/api v0.70.0/go.mod 
h1:Bs4ZM2HGifEvXwd50TtW70ovgJffJYw2oRCOFU/SkfA= -google.golang.org/api v0.71.0/go.mod h1:4PyU6e6JogV1f9eA4voyrTY2batOLdgZ5qZ5HOCc4j8= -google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRRyDs= -google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA= -google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw= -google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg= -google.golang.org/api v0.84.0/go.mod h1:NTsGnUFJMYROtiquksZHBWtHfeMC7iYthki7Eq3pa8o= google.golang.org/api v0.139.0 h1:A1TrCPgMmOiYu0AiNkvQIpIx+D8blHTDcJ5EogkP7LI= google.golang.org/api v0.139.0/go.mod h1:CVagp6Eekz9CjGZ718Z+sloknzkDJE7Vc1Ckj9+viBk= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= @@ -3086,7 +3013,6 @@ google.golang.org/genproto v0.0.0-20210226172003-ab064af71705/go.mod h1:FWY/as6D google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20210329143202-679c6ae281ee/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A= google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A= google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0= @@ -3102,29 +3028,7 @@ google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEc google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= -google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20211221195035-429b39de9b1c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20220207164111-0872dc986b00/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20220218161850-94dd64e39d7c/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI= -google.golang.org/genproto v0.0.0-20220222213610-43724f9ea8cf/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI= -google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI= -google.golang.org/genproto 
v0.0.0-20220310185008-1973136f34c6/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI= -google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb/go.mod h1:hAL49I2IFola2sVEjAn7MEwsja0xp51I0tlGAf9hz4E= -google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo= -google.golang.org/genproto v0.0.0-20220413183235-5e96e2839df9/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo= -google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo= -google.golang.org/genproto v0.0.0-20220421151946-72621c1f0bd3/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo= -google.golang.org/genproto v0.0.0-20220429170224-98d788798c3e/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo= -google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4= -google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4= -google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4= -google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA= -google.golang.org/genproto v0.0.0-20220616135557-88e70c0c3a90/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA= google.golang.org/genproto v0.0.0-20230803162519-f966b187b2e5 h1:L6iMMGrtzgHsWofoFcihmDEMYeDR9KN/ThbPWGrh++g= google.golang.org/genproto v0.0.0-20230803162519-f966b187b2e5/go.mod h1:oH/ZOT02u4kWEp7oYBGYFFkCdKS/uYR9Z7+0/xuuFp8= google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d h1:DoPTO70H+bcDXcd39vOqb2viZxgqeBeSGtZ55yZU4/Q= @@ -3166,14 +3070,7 @@ google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQ google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE= google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE= google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34= -google.golang.org/grpc v1.40.1/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34= google.golang.org/grpc v1.41.0/go.mod h1:U3l9uK9J0sini8mHphKoXyaqDA/8VyGnDee1zzIUK6k= -google.golang.org/grpc v1.44.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU= -google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11+0rQ= -google.golang.org/grpc v1.46.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk= -google.golang.org/grpc v1.46.2/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk= -google.golang.org/grpc v1.47.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk= -google.golang.org/grpc v1.51.0/go.mod h1:wgNDFcnuBGmxLKI/qn4T+m5BtEBYXJPvibbUPsAIPww= google.golang.org/grpc v1.58.0 h1:32JY8YpPMSR45K+c3o6b8VL73V+rR8k+DeMIr4vRH8o= google.golang.org/grpc v1.58.0/go.mod h1:tgX3ZQDlNJGU96V6yHh1T/JeoBQ2TXdr43YbYSsCJk0= google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw= @@ -3192,7 +3089,6 @@ google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQ google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= -google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= 
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8= google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U= @@ -3301,7 +3197,6 @@ k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 h1:LyMgNKD2P8Wn1iAwQU5Ohx k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9/go.mod h1:wZK2AVp1uHCp4VamDVgBP2COHZjqD1T68Rf0CM3YjSM= k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew= k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= -k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20230711102312-30195339c3c7 h1:ZgnF1KZsYxWIifwSNZFZgNtWE89WI5yiP5WwlfDoIyc= k8s.io/utils v0.0.0-20230711102312-30195339c3c7/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= modernc.org/httpfs v1.0.0/go.mod h1:BSkfoMUcahSijQD5J/Vu4UMOxzmEf5SNRwyXC4PJBEw= diff --git a/operations/agent-flow-mixin/dashboards/controller.libsonnet b/operations/agent-flow-mixin/dashboards/controller.libsonnet index 9d7d706ab24a..1dcfcd3c9ab3 100644 --- a/operations/agent-flow-mixin/dashboards/controller.libsonnet +++ b/operations/agent-flow-mixin/dashboards/controller.libsonnet @@ -174,9 +174,9 @@ local filename = 'agent-flow-controller.json'; ]) ), - // Graph evaluation rate + // Component evaluation rate ( - panel.new(title='Graph evaluation rate', type='timeseries') { + panel.new(title='Component evaluation rate', type='timeseries') { fieldConfig: { defaults: { custom: { @@ -188,7 +188,7 @@ local filename = 'agent-flow-controller.json'; } + panel.withUnit('ops') + panel.withDescription(||| - The frequency in which the component graph gets updated. + The frequency at which components get updated. |||) + panel.withPosition({ x: 0, y: 12, w: 8, h: 10 }) + panel.withMultiTooltip() + @@ -199,15 +199,15 @@ local filename = 'agent-flow-controller.json'; ]) ), - // Graph evaluation time + // Component evaluation time ( - panel.new(title='Graph evaluation time', type='timeseries') + + panel.new(title='Component evaluation time', type='timeseries') + panel.withUnit('s') + panel.withDescription(||| - The percentiles for how long it takes to complete a graph evaluation. + The percentiles for how long it takes to complete component evaluations. - Graph evaluations must complete for components to have the latest - arguments. The longer graph evaluations take, the slower it will be to + Component evaluations must complete for components to have the latest + arguments. The longer the evaluations take, the slower it will be to reconcile the state of components. If evaluation is taking too long, consider sharding your components to @@ -233,11 +233,11 @@ local filename = 'agent-flow-controller.json'; ]) ), - // Graph evaluation histogram + // Component evaluation histogram ( - panel.newHeatmap('Graph evaluation histogram') + + panel.newHeatmap('Component evaluation histogram') + panel.withDescription(||| - Detailed histogram view of how long graph evaluations take. + Detailed histogram view of how long component evaluations take. The goal is to design your config so that evaluations take as little time as possible; under 100ms is a good goal. 
@@ -251,5 +251,24 @@ local filename = 'agent-flow-controller.json'; ), ]) ), + + // Component dependency wait time histogram + ( + panel.newHeatmap('Component dependency wait histogram') + + panel.withDescription(||| + Detailed histogram of how long components wait to be evaluated after their dependency is updated. + + The goal is to design your config so that most of the time components do not + queue for long; under 10ms is a good goal. + |||) + + panel.withPosition({ x: 0, y: 22, w: 8, h: 10 }) + + panel.withQueries([ + panel.newQuery( + expr='sum by (le) (increase(agent_component_dependencies_wait_seconds_bucket{cluster="$cluster", namespace="$namespace"}[$__rate_interval]))', + format='heatmap', + legendFormat='{{le}}', + ), + ]) + ), ]), } diff --git a/pkg/agentctl/waltools/walstats_test.go b/pkg/agentctl/waltools/walstats_test.go index f3a9dea7eaac..cb0a0e42b09c 100644 --- a/pkg/agentctl/waltools/walstats_test.go +++ b/pkg/agentctl/waltools/walstats_test.go @@ -51,7 +51,7 @@ func setupTestWAL(t *testing.T) string { walDir := t.TempDir() reg := prometheus.NewRegistry() - w, err := wlog.NewSize(log.NewNopLogger(), reg, filepath.Join(walDir, "wal"), wlog.DefaultSegmentSize, true) + w, err := wlog.NewSize(log.NewNopLogger(), reg, filepath.Join(walDir, "wal"), wlog.DefaultSegmentSize, wlog.CompressionSnappy) require.NoError(t, err) defer w.Close() diff --git a/pkg/config/agentmanagement.go b/pkg/config/agentmanagement.go index e05298e635aa..1f50ccc639d4 100644 --- a/pkg/config/agentmanagement.go +++ b/pkg/config/agentmanagement.go @@ -25,12 +25,24 @@ const ( apiPath = "/agent-management/api/agent/v2" labelManagementEnabledHeader = "X-LabelManagementEnabled" agentIDHeader = "X-AgentID" + agentNamespaceVersionHeader = "X-AgentNamespaceVersion" + agentInfoVersionHeader = "X-AgentInfoVersion" + acceptNotModifiedHeader = "X-AcceptHTTPNotModified" +) + +var ( + agentInfoVersion string + agentNamespaceVersion string + defaultRemoteConfiguration = RemoteConfiguration{ + AcceptHTTPNotModified: true, + } ) type remoteConfigProvider interface { GetCachedRemoteConfig() ([]byte, error) CacheRemoteConfig(remoteConfigBytes []byte) error FetchRemoteConfig() ([]byte, error) + GetPollingInterval() time.Duration } type remoteConfigHTTPProvider struct { @@ -138,6 +150,16 @@ func (r remoteConfigHTTPProvider) FetchRemoteConfig() ([]byte, error) { labelManagementEnabledHeader: "1", agentIDHeader: r.InitialConfig.RemoteConfiguration.AgentID, } + + if agentNamespaceVersion != "" { + remoteOpts.headers[agentNamespaceVersionHeader] = agentNamespaceVersion + } + if agentInfoVersion != "" { + remoteOpts.headers[agentInfoVersionHeader] = agentInfoVersion + } + if r.InitialConfig.RemoteConfiguration.AcceptHTTPNotModified { + remoteOpts.headers[acceptNotModifiedHeader] = "1" + } } url, err := r.InitialConfig.fullUrl() @@ -149,23 +171,51 @@ func (r remoteConfigHTTPProvider) FetchRemoteConfig() ([]byte, error) { return nil, fmt.Errorf("error reading remote config: %w", err) } - bb, err := rc.retrieve() + bb, headers, err := rc.retrieve() + + // If the server returns a 304, return it and the caller will handle it. 
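+ // (A 304 is only expected when the agent advertised acceptNotModifiedHeader;
+ // getRemoteConfig reacts to notModifiedError by loading the cached config.)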
+ var nme notModifiedError + if errors.Is(err, nme) { + return nil, nme + } + if err != nil { return nil, fmt.Errorf("error retrieving remote config: %w", err) } + + nsVersion := headers.Get(agentNamespaceVersionHeader) + infoVersion := headers.Get(agentInfoVersionHeader) + if nsVersion != "" && infoVersion != "" { + agentNamespaceVersion = nsVersion + agentInfoVersion = infoVersion + } + return bb, nil } +func (r remoteConfigHTTPProvider) GetPollingInterval() time.Duration { + return r.InitialConfig.PollingInterval +} + type labelMap map[string]string type RemoteConfiguration struct { Labels labelMap `yaml:"labels"` LabelManagementEnabled bool `yaml:"label_management_enabled"` + AcceptHTTPNotModified bool `yaml:"accept_http_not_modified"` AgentID string `yaml:"agent_id"` Namespace string `yaml:"namespace"` CacheLocation string `yaml:"cache_location"` } +// UnmarshalYAML implements the yaml.Unmarshaler interface, applying defaults before unmarshaling +func (rc *RemoteConfiguration) UnmarshalYAML(unmarshal func(interface{}) error) error { + // Apply defaults + *rc = defaultRemoteConfiguration + type plain RemoteConfiguration + return unmarshal((*plain)(rc)) +} + type AgentManagementConfig struct { Enabled bool `yaml:"-"` // Derived from enable-features=agent-management Host string `yaml:"host"` @@ -181,12 +231,27 @@ type AgentManagementConfig struct { // error will be returned. func getRemoteConfig(expandEnvVars bool, configProvider remoteConfigProvider, log *server.Logger, fs *flag.FlagSet, retry bool) (*Config, error) { remoteConfigBytes, err := configProvider.FetchRemoteConfig() + if errors.Is(err, notModifiedError{}) { + level.Info(log).Log("msg", "remote config has not changed since last fetch, using cached copy") + remoteConfigBytes, err = configProvider.GetCachedRemoteConfig() + } if err != nil { var retryAfterErr retryAfterError if errors.As(err, &retryAfterErr) && retry { - level.Error(log).Log("msg", "received retry-after from API, sleeping and falling back to cache", "retry-after", retryAfterErr.retryAfter) - time.Sleep(retryAfterErr.retryAfter) - return getRemoteConfig(expandEnvVars, configProvider, log, fs, false) + // If the server tells us to retry after a period longer than our polling interval, + // the agent sleeps for the duration given by the retry-after header. + // + // If the retry-after period is shorter than the polling interval, the agent simply + // falls back to the cache and keeps polling at the usual interval, effectively skipping + // this poll. + if retryAfterErr.retryAfter > configProvider.GetPollingInterval() { + level.Info(log).Log("msg", "received retry-after from API, sleeping and falling back to cache", "retry-after", retryAfterErr.retryAfter) + time.Sleep(retryAfterErr.retryAfter) + } else { + level.Info(log).Log("msg", "received retry-after from API, falling back to cache", "retry-after", retryAfterErr.retryAfter) + } + // Return the cached config, as this is the last known good config and a config must be returned here. 
+ return getCachedRemoteConfig(expandEnvVars, configProvider, fs, log) } level.Error(log).Log("msg", "could not fetch from API, falling back to cache", "err", err) return getCachedRemoteConfig(expandEnvVars, configProvider, fs, log) diff --git a/pkg/config/agentmanagement_test.go b/pkg/config/agentmanagement_test.go index 03cfd44eb33d..9ca5b453f75f 100644 --- a/pkg/config/agentmanagement_test.go +++ b/pkg/config/agentmanagement_test.go @@ -48,6 +48,10 @@ func (t *testRemoteConfigProvider) CacheRemoteConfig(r []byte) error { return nil } +func (t *testRemoteConfigProvider) GetPollingInterval() time.Duration { + return t.InitialConfig.PollingInterval +} + var validAgentManagementConfig = AgentManagementConfig{ Enabled: true, Host: "localhost:1234", @@ -69,6 +73,21 @@ var validAgentManagementConfig = AgentManagementConfig{ var cachedConfig = []byte(`{"base_config":"","snippets":[]}`) +func TestUnmarshalDefault(t *testing.T) { + cfg := `host: "localhost:1234" +protocol: "https" +polling_interval: "1m" +remote_configuration: + namespace: "test_namespace"` + var am AgentManagementConfig + err := yaml.Unmarshal([]byte(cfg), &am) + assert.NoError(t, err) + assert.True(t, am.RemoteConfiguration.AcceptHTTPNotModified) + assert.Equal(t, "https", am.Protocol) + assert.Equal(t, time.Minute, am.PollingInterval) + assert.Equal(t, "test_namespace", am.RemoteConfiguration.Namespace) +} + func TestValidateValidConfig(t *testing.T) { assert.NoError(t, validAgentManagementConfig.Validate()) } @@ -551,9 +570,8 @@ func TestGetCachedConfig_RetryAfter(t *testing.T) { assert.NoError(t, err) assert.False(t, testProvider.didCacheRemoteConfig) - // check that FetchRemoteConfig was called twice on the TestProvider: - // 1 call for the initial attempt, a second for the retry - assert.Equal(t, 2, testProvider.fetchRemoteConfigCallCount) + // check that FetchRemoteConfig was called only once on the TestProvider + assert.Equal(t, 1, testProvider.fetchRemoteConfigCallCount) // the cached config should have been retrieved once, on the second // attempt to fetch the remote config diff --git a/pkg/config/config.go b/pkg/config/config.go index 968b3267dd48..c11493dc90b9 100644 --- a/pkg/config/config.go +++ b/pkg/config/config.go @@ -330,7 +330,7 @@ func LoadRemote(url string, expandEnvVars bool, c *Config) error { if rc == nil { return LoadFile(url, expandEnvVars, c) } - bb, err := rc.retrieve() + bb, _, err := rc.retrieve() if err != nil { return fmt.Errorf("error retrieving remote config: %w", err) } diff --git a/pkg/config/remote_config.go b/pkg/config/remote_config.go index aec0d9abe3df..9b272553cb51 100644 --- a/pkg/config/remote_config.go +++ b/pkg/config/remote_config.go @@ -26,7 +26,7 @@ type remoteOpts struct { // remoteProvider interface should be implemented by config providers type remoteProvider interface { - retrieve() ([]byte, error) + retrieve() ([]byte, http.Header, error) } // newRemoteProvider constructs a new remote configuration provider. 
The rawURL is parsed @@ -94,11 +94,17 @@ func (r retryAfterError) Error() string { return fmt.Sprintf("server indicated to retry after %s", r.retryAfter) } +type notModifiedError struct{} + +func (n notModifiedError) Error() string { + return "server indicated no changes" +} + // retrieve implements remoteProvider and fetches the config -func (p httpProvider) retrieve() ([]byte, error) { +func (p httpProvider) retrieve() ([]byte, http.Header, error) { req, err := http.NewRequest(http.MethodGet, p.myURL.String(), nil) if err != nil { - return nil, fmt.Errorf("error creating request: %w", err) + return nil, nil, fmt.Errorf("error creating request: %w", err) } for header, headerVal := range p.headers { req.Header.Set(header, headerVal) @@ -106,7 +112,7 @@ func (p httpProvider) retrieve() ([]byte, error) { response, err := p.httpClient.Do(req) if err != nil { instrumentation.InstrumentRemoteConfigFetchError() - return nil, fmt.Errorf("request failed: %w", err) + return nil, nil, fmt.Errorf("request failed: %w", err) } defer response.Body.Close() @@ -115,21 +121,25 @@ func (p httpProvider) retrieve() ([]byte, error) { if response.StatusCode == http.StatusTooManyRequests || response.StatusCode == http.StatusServiceUnavailable { retryAfter := response.Header.Get("Retry-After") if retryAfter == "" { - return nil, fmt.Errorf("server indicated to retry, but no Retry-After header was provided") + return nil, nil, fmt.Errorf("server indicated to retry, but no Retry-After header was provided") } retryAfterDuration, err := time.ParseDuration(retryAfter) if err != nil { - return nil, fmt.Errorf("server indicated to retry, but Retry-After header was not a valid duration: %w", err) + return nil, nil, fmt.Errorf("server indicated to retry, but Retry-After header was not a valid duration: %w", err) } - return nil, retryAfterError{retryAfter: retryAfterDuration} + return nil, nil, retryAfterError{retryAfter: retryAfterDuration} + } + + if response.StatusCode == http.StatusNotModified { + return nil, nil, notModifiedError{} } if response.StatusCode/100 != 2 { - return nil, fmt.Errorf("error fetching config: status code: %d", response.StatusCode) + return nil, nil, fmt.Errorf("error fetching config: status code: %d", response.StatusCode) } bb, err := io.ReadAll(response.Body) if err != nil { - return nil, err + return nil, nil, err } - return bb, nil + return bb, response.Header, nil } diff --git a/pkg/config/remote_config_test.go b/pkg/config/remote_config_test.go index fe4a022a6076..f8b5b046ce1a 100644 --- a/pkg/config/remote_config_test.go +++ b/pkg/config/remote_config_test.go @@ -12,6 +12,8 @@ import ( "github.com/stretchr/testify/require" ) +const configPath = "/agent.yml" + func TestRemoteConfigHTTP(t *testing.T) { testCfg := ` metrics: @@ -20,7 +22,7 @@ metrics: ` svr := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - if r.URL.Path == "/agent.yml" { + if r.URL.Path == configPath { _, _ = w.Write([]byte(testCfg)) } })) @@ -31,7 +33,15 @@ metrics: w.WriteHeader(http.StatusUnauthorized) return } - if r.URL.Path == "/agent.yml" { + if r.URL.Path == configPath { + _, _ = w.Write([]byte(testCfg)) + } + })) + + svrWithHeaders := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if r.URL.Path == configPath { + w.Header().Add("X-Test-Header", "test") + w.Header().Add("X-Other-Header", "test2") _, _ = w.Write([]byte(testCfg)) } })) @@ -55,10 +65,11 @@ metrics: opts *remoteOpts } tests := []struct { - name string - args args - want []byte - 
wantErr bool + name string + args args + want []byte + wantErr bool + wantHeaders map[string][]string }{ { name: "httpScheme config", @@ -111,6 +122,18 @@ metrics: want: nil, wantErr: true, }, + { + name: "response headers are returned", + args: args{ + rawURL: fmt.Sprintf("%s/agent.yml", svrWithHeaders.URL), + }, + want: []byte(testCfg), + wantErr: false, + wantHeaders: map[string][]string{ + "X-Test-Header": {"test"}, + "X-Other-Header": {"test2"}, + }, + }, } for _, tt := range tests { @@ -121,9 +144,12 @@ metrics: return } assert.NoError(t, err) - bb, err := rc.retrieve() + bb, header, err := rc.retrieve() assert.NoError(t, err) assert.Equal(t, string(tt.want), string(bb)) + for k, v := range tt.wantHeaders { + assert.Equal(t, v, header[k]) + } }) } } diff --git a/pkg/flow/flow.go b/pkg/flow/flow.go index a47eefea6711..9b4d43be73b9 100644 --- a/pkg/flow/flow.go +++ b/pkg/flow/flow.go @@ -52,6 +52,7 @@ import ( "github.com/go-kit/log/level" "github.com/grafana/agent/pkg/flow/internal/controller" + "github.com/grafana/agent/pkg/flow/internal/worker" "github.com/grafana/agent/pkg/flow/logging" "github.com/grafana/agent/pkg/flow/tracing" "github.com/grafana/agent/service" @@ -124,6 +125,7 @@ func New(o Options) *Flow { Options: o, ModuleRegistry: newModuleRegistry(), IsModule: false, // We are creating a new root controller. + WorkerPool: worker.NewDefaultWorkerPool(), }) } @@ -135,6 +137,8 @@ type controllerOptions struct { ComponentRegistry controller.ComponentRegistry // Custom component registry used in tests. ModuleRegistry *moduleRegistry // Where to register created modules. IsModule bool // Whether this controller is for a module. + // A worker pool to evaluate components asynchronously. A default one will be created if this is nil. + WorkerPool worker.Pool } // newController creates a new, unstarted Flow controller with a specific @@ -142,8 +146,9 @@ type controllerOptions struct { // given modReg. func newController(o controllerOptions) *Flow { var ( - log = o.Logger - tracer = o.Tracer + log = o.Logger + tracer = o.Tracer + workerPool = o.WorkerPool ) if tracer == nil { @@ -155,6 +160,11 @@ func newController(o controllerOptions) *Flow { } } + if workerPool == nil { + level.Info(log).Log("msg", "no worker pool provided, creating a default pool", "controller", o.ControllerID) + workerPool = worker.NewDefaultWorkerPool() + } + f := &Flow{ log: log, tracer: tracer, @@ -192,6 +202,7 @@ func newController(o controllerOptions) *Flow { DataPath: o.DataPath, ID: id, ServiceMap: serviceMap.FilterByName(availableServices), + WorkerPool: workerPool, }) }, GetServiceData: func(name string) (interface{}, error) { @@ -206,6 +217,7 @@ func newController(o controllerOptions) *Flow { Services: o.Services, Host: f, ComponentRegistry: o.ComponentRegistry, + WorkerPool: workerPool, }) return f @@ -214,8 +226,8 @@ func newController(o controllerOptions) *Flow { // Run starts the Flow controller, blocking until the provided context is // canceled. Run must only be called once. func (f *Flow) Run(ctx context.Context) { - defer f.sched.Close() - defer f.loader.Cleanup() + defer func() { _ = f.sched.Close() }() + defer f.loader.Cleanup(!f.opts.IsModule) defer level.Debug(f.log).Log("msg", "flow controller exiting") for { @@ -224,19 +236,11 @@ func (f *Flow) Run(ctx context.Context) { return case <-f.updateQueue.Chan(): - // We need to pop _everything_ from the queue and evaluate each of them. - // If we only pop a single element, other components may sit waiting for - // evaluation forever. 
- for { - updated := f.updateQueue.TryDequeue() - if updated == nil { - break - } - - level.Debug(f.log).Log("msg", "handling component with updated state", "node_id", updated.NodeID()) - f.loader.EvaluateDependencies(updated) - } - + // Evaluate all components that have been updated. Sending the entire batch together improves + // throughput: when two updated components share a dependency, the shared node is evaluated once + // instead of being picked up by the worker pool for the first update and enqueued again for the second. + all := f.updateQueue.DequeueAll() + f.loader.EvaluateDependencies(ctx, all) case <-f.loadFinished: level.Info(f.log).Log("msg", "scheduling loaded components and services") diff --git a/pkg/flow/flow_services_test.go b/pkg/flow/flow_services_test.go index a527b474cbbf..128b1267e180 100644 --- a/pkg/flow/flow_services_test.go +++ b/pkg/flow/flow_services_test.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/agent/component" "github.com/grafana/agent/pkg/flow/internal/controller" - "github.com/grafana/agent/pkg/flow/internal/testcomponents" // Import test components + "github.com/grafana/agent/pkg/flow/internal/testcomponents" "github.com/grafana/agent/pkg/flow/internal/testservices" "github.com/grafana/agent/pkg/util" "github.com/grafana/agent/service" @@ -16,6 +16,7 @@ import ( ) func TestServices(t *testing.T) { + defer verifyNoGoroutineLeaks(t) ctx, cancel := context.WithCancel(context.Background()) defer cancel() @@ -45,6 +46,7 @@ func TestServices(t *testing.T) { } func TestServices_Configurable(t *testing.T) { + defer verifyNoGoroutineLeaks(t) type ServiceOptions struct { Name string `river:"name,attr"` } @@ -98,6 +100,7 @@ func TestServices_Configurable(t *testing.T) { // arguments is configured properly even when it is not defined in the config // file. 
func TestServices_Configurable_Optional(t *testing.T) { + defer verifyNoGoroutineLeaks(t) type ServiceOptions struct { Name string `river:"name,attr,optional"` } @@ -140,6 +143,7 @@ func TestServices_Configurable_Optional(t *testing.T) { } func TestFlow_GetServiceConsumers(t *testing.T) { + defer verifyNoGoroutineLeaks(t) var ( svcA = &testservices.Fake{ DefinitionFunc: func() service.Definition { @@ -163,6 +167,7 @@ func TestFlow_GetServiceConsumers(t *testing.T) { opts.Services = append(opts.Services, svcA, svcB) ctrl := New(opts) + defer cleanUpController(ctrl) require.NoError(t, ctrl.LoadSource(makeEmptyFile(t), nil)) expectConsumers := []service.Consumer{{ @@ -174,6 +179,7 @@ func TestFlow_GetServiceConsumers(t *testing.T) { } func TestFlow_GetServiceConsumers_Modules(t *testing.T) { + defer verifyNoGoroutineLeaks(t) ctx, cancel := context.WithCancel(context.Background()) defer cancel() @@ -244,6 +250,7 @@ func TestFlow_GetServiceConsumers_Modules(t *testing.T) { } func TestComponents_Using_Services(t *testing.T) { + defer verifyNoGoroutineLeaks(t) ctx, cancel := context.WithCancel(context.Background()) defer cancel() @@ -327,6 +334,7 @@ func TestComponents_Using_Services(t *testing.T) { } func TestComponents_Using_Services_In_Modules(t *testing.T) { + defer verifyNoGoroutineLeaks(t) ctx, cancel := context.WithCancel(context.Background()) defer cancel() diff --git a/pkg/flow/flow_test.go b/pkg/flow/flow_test.go index f99e191237da..590f97a424f1 100644 --- a/pkg/flow/flow_test.go +++ b/pkg/flow/flow_test.go @@ -1,6 +1,7 @@ package flow import ( + "context" "os" "testing" @@ -10,6 +11,7 @@ import ( "github.com/grafana/agent/pkg/flow/internal/testcomponents" "github.com/grafana/agent/pkg/flow/logging" "github.com/stretchr/testify/require" + "go.uber.org/goleak" ) var testFile = ` @@ -31,7 +33,9 @@ var testFile = ` ` func TestController_LoadSource_Evaluation(t *testing.T) { + defer verifyNoGoroutineLeaks(t) ctrl := New(testOptions(t)) + defer cleanUpController(ctrl) // Use testFile from graph_builder_test.go. f, err := ParseSource(t.Name(), []byte(testFile)) @@ -71,3 +75,24 @@ func testOptions(t *testing.T) Options { Reg: nil, } } + +func cleanUpController(ctrl *Flow) { + // To avoid leaking goroutines and clean-up, we need to run and shut down the controller. 
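+ // Cancelling the context right away is enough: Run still executes its deferred
+ // clean-up (closing the scheduler and cleaning up the loader) before returning.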
+ ctx, cancel := context.WithCancel(context.Background()) + done := make(chan struct{}) + go func() { + ctrl.Run(ctx) + close(done) + }() + cancel() + <-done +} + +func verifyNoGoroutineLeaks(t *testing.T) { + t.Helper() + goleak.VerifyNone( + t, + goleak.IgnoreTopFunction("go.opencensus.io/stats/view.(*worker).start"), + goleak.IgnoreTopFunction("go.opentelemetry.io/otel/sdk/trace.(*batchSpanProcessor).processQueue"), + ) +} diff --git a/pkg/flow/flow_updates_test.go b/pkg/flow/flow_updates_test.go new file mode 100644 index 000000000000..90960f6eef8e --- /dev/null +++ b/pkg/flow/flow_updates_test.go @@ -0,0 +1,381 @@ +package flow + +import ( + "context" + "testing" + "time" + + "github.com/grafana/agent/pkg/flow/internal/testcomponents" + "github.com/grafana/agent/pkg/flow/internal/worker" + "github.com/stretchr/testify/require" +) + +func TestController_Updates(t *testing.T) { + defer verifyNoGoroutineLeaks(t) + + // Simple pipeline with a minimal lag + config := ` + testcomponents.count "inc" { + frequency = "10ms" + max = 10 + } + + testcomponents.passthrough "inc_dep_1" { + input = testcomponents.count.inc.count + lag = "1ms" + } + + testcomponents.passthrough "inc_dep_2" { + input = testcomponents.passthrough.inc_dep_1.output + lag = "1ms" + } + + testcomponents.summation "sum" { + input = testcomponents.passthrough.inc_dep_2.output + } +` + + ctrl := newTestController(t) + + // Use testUpdatesFile from graph_builder_test.go. + f, err := ParseSource(t.Name(), []byte(config)) + require.NoError(t, err) + require.NotNil(t, f) + + err = ctrl.LoadSource(f, nil) + require.NoError(t, err) + + ctx, cancel := context.WithCancel(context.Background()) + done := make(chan struct{}) + go func() { + ctrl.Run(ctx) + close(done) + }() + defer func() { + cancel() + <-done + }() + + // Wait for the updates to propagate + require.Eventually(t, func() bool { + _, out := getFields(t, ctrl.loader.Graph(), "testcomponents.summation.sum") + return out.(testcomponents.SummationExports).LastAdded == 10 + }, 3*time.Second, 10*time.Millisecond) + + in, out := getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_1") + require.Equal(t, "10", in.(testcomponents.PassthroughConfig).Input) + require.Equal(t, "10", out.(testcomponents.PassthroughExports).Output) + + in, out = getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_2") + require.Equal(t, "10", in.(testcomponents.PassthroughConfig).Input) + require.Equal(t, "10", out.(testcomponents.PassthroughExports).Output) + + in, _ = getFields(t, ctrl.loader.Graph(), "testcomponents.summation.sum") + require.Equal(t, 10, in.(testcomponents.SummationConfig).Input) +} + +func TestController_Updates_WithQueueFull(t *testing.T) { + defer verifyNoGoroutineLeaks(t) + + // Simple pipeline with a minimal lag with one node having 3 direct dependencies and one misbehaving node. 
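+ // The misbehaving node below lags 100ms per update; with a single worker and a
+ // queue of size one (see the worker pool below), many submissions must be retried.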
+ config := ` + testcomponents.count "inc" { + frequency = "10ms" + max = 10 + } + + testcomponents.passthrough "misbehaving_slow" { + input = testcomponents.count.inc.count + lag = "100ms" + } + + testcomponents.passthrough "inc_dep_1" { + input = testcomponents.count.inc.count + lag = "1ms" + } + + testcomponents.passthrough "inc_dep_2" { + input = testcomponents.count.inc.count + lag = "1ms" + } + + testcomponents.passthrough "inc_dep_3" { + input = testcomponents.count.inc.count + lag = "1ms" + } + + testcomponents.summation "sum" { + input = testcomponents.passthrough.inc_dep_3.output + } +` + + ctrl := newController(controllerOptions{ + Options: testOptions(t), + ModuleRegistry: newModuleRegistry(), + IsModule: false, + // The small number of workers and small queue means that a lot of updates will need to be retried. + WorkerPool: worker.NewShardedWorkerPool(1, 1), + }) + + // Use testUpdatesFile from graph_builder_test.go. + f, err := ParseSource(t.Name(), []byte(config)) + require.NoError(t, err) + require.NotNil(t, f) + + err = ctrl.LoadSource(f, nil) + require.NoError(t, err) + + ctx, cancel := context.WithCancel(context.Background()) + + done := make(chan struct{}) + go func() { + ctrl.Run(ctx) + close(done) + }() + defer func() { + cancel() + <-done + }() + + // Wait for the updates to propagate + require.Eventually(t, func() bool { + _, out := getFields(t, ctrl.loader.Graph(), "testcomponents.summation.sum") + return out.(testcomponents.SummationExports).LastAdded == 10 + }, 3*time.Second, 10*time.Millisecond) + + in, _ := getFields(t, ctrl.loader.Graph(), "testcomponents.summation.sum") + require.Equal(t, 10, in.(testcomponents.SummationConfig).Input) + + in, out := getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_3") + require.Equal(t, "10", in.(testcomponents.PassthroughConfig).Input) + require.Equal(t, "10", out.(testcomponents.PassthroughExports).Output) + + // The dep_2 is independent of sum and dep_3, so we check for it with eventually. + require.Eventually(t, func() bool { + _, out := getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_2") + return out.(testcomponents.PassthroughExports).Output == "10" + }, 3*time.Second, 10*time.Millisecond) + + // Similar for dep_1 + require.Eventually(t, func() bool { + _, out := getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_1") + return out.(testcomponents.PassthroughExports).Output == "10" + }, 3*time.Second, 10*time.Millisecond) +} + +func TestController_Updates_WithLag(t *testing.T) { + defer verifyNoGoroutineLeaks(t) + + // Simple pipeline with some lag + config := ` + testcomponents.count "inc" { + frequency = "10ms" + max = 10 + } + + testcomponents.passthrough "inc_dep_1" { + input = testcomponents.count.inc.count + lag = "10ms" + } + + testcomponents.passthrough "inc_dep_2" { + input = testcomponents.passthrough.inc_dep_1.output + lag = "10ms" + } + + testcomponents.summation "sum" { + input = testcomponents.passthrough.inc_dep_2.output + } +` + + ctrl := newTestController(t) + + // Use testUpdatesFile from graph_builder_test.go. 
+ f, err := ParseSource(t.Name(), []byte(config)) + require.NoError(t, err) + require.NotNil(t, f) + + err = ctrl.LoadSource(f, nil) + require.NoError(t, err) + + ctx, cancel := context.WithCancel(context.Background()) + done := make(chan struct{}) + go func() { + ctrl.Run(ctx) + close(done) + }() + defer func() { + cancel() + <-done + }() + + // Wait for the updates to propagate + require.Eventually(t, func() bool { + _, out := getFields(t, ctrl.loader.Graph(), "testcomponents.summation.sum") + return out.(testcomponents.SummationExports).LastAdded == 10 + }, 3*time.Second, 10*time.Millisecond) + + in, out := getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_1") + require.Equal(t, "10", in.(testcomponents.PassthroughConfig).Input) + require.Equal(t, "10", out.(testcomponents.PassthroughExports).Output) + + in, out = getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_2") + require.Equal(t, "10", in.(testcomponents.PassthroughConfig).Input) + require.Equal(t, "10", out.(testcomponents.PassthroughExports).Output) + + in, _ = getFields(t, ctrl.loader.Graph(), "testcomponents.summation.sum") + require.Equal(t, 10, in.(testcomponents.SummationConfig).Input) +} + +func TestController_Updates_WithOtherLaggingPipeline(t *testing.T) { + defer verifyNoGoroutineLeaks(t) + + // Another pipeline exists with a significant lag. + config := ` + testcomponents.count "inc" { + frequency = "10ms" + max = 10 + } + + testcomponents.passthrough "inc_dep_1" { + input = testcomponents.count.inc.count + lag = "1ms" + } + + testcomponents.passthrough "inc_dep_2" { + input = testcomponents.passthrough.inc_dep_1.output + lag = "1ms" + } + + testcomponents.summation "sum" { + input = testcomponents.passthrough.inc_dep_2.output + } + + testcomponents.count "inc_2" { + frequency = "10ms" + max = 10 + } + + testcomponents.passthrough "inc_dep_slow" { + input = testcomponents.count.inc_2.count + lag = "500ms" + } +` + + ctrl := newTestController(t) + + // Use testUpdatesFile from graph_builder_test.go. + f, err := ParseSource(t.Name(), []byte(config)) + require.NoError(t, err) + require.NotNil(t, f) + + err = ctrl.LoadSource(f, nil) + require.NoError(t, err) + + ctx, cancel := context.WithCancel(context.Background()) + done := make(chan struct{}) + go func() { + ctrl.Run(ctx) + close(done) + }() + defer func() { + cancel() + <-done + }() + + // Wait for the updates to propagate + require.Eventually(t, func() bool { + _, out := getFields(t, ctrl.loader.Graph(), "testcomponents.summation.sum") + return out.(testcomponents.SummationExports).LastAdded == 10 + }, 2*time.Second, 10*time.Millisecond) + + in, out := getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_1") + require.Equal(t, "10", in.(testcomponents.PassthroughConfig).Input) + require.Equal(t, "10", out.(testcomponents.PassthroughExports).Output) + + in, out = getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_2") + require.Equal(t, "10", in.(testcomponents.PassthroughConfig).Input) + require.Equal(t, "10", out.(testcomponents.PassthroughExports).Output) + + in, _ = getFields(t, ctrl.loader.Graph(), "testcomponents.summation.sum") + require.Equal(t, 10, in.(testcomponents.SummationConfig).Input) +} + +func TestController_Updates_WithLaggingComponent(t *testing.T) { + defer verifyNoGoroutineLeaks(t) + + // Part of the pipeline has a significant lag. 
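+	// The slow branch (inc_dep_slow, 500ms lag) hangs off the same counter as the
+	// fast branch; with per-node parallel evaluation it must not hold back the
+	// summation pipeline, which should still reach 10 within the timeout.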
+ config := ` + testcomponents.count "inc" { + frequency = "10ms" + max = 10 + } + + testcomponents.passthrough "inc_dep_1" { + input = testcomponents.count.inc.count + lag = "1ms" + } + + testcomponents.passthrough "inc_dep_2" { + input = testcomponents.passthrough.inc_dep_1.output + lag = "1ms" + } + + testcomponents.summation "sum" { + input = testcomponents.passthrough.inc_dep_2.output + } + + testcomponents.passthrough "inc_dep_slow" { + input = testcomponents.count.inc.count + lag = "500ms" + } +` + + ctrl := newTestController(t) + + // Use testUpdatesFile from graph_builder_test.go. + f, err := ParseSource(t.Name(), []byte(config)) + require.NoError(t, err) + require.NotNil(t, f) + + err = ctrl.LoadSource(f, nil) + require.NoError(t, err) + + ctx, cancel := context.WithCancel(context.Background()) + done := make(chan struct{}) + go func() { + ctrl.Run(ctx) + close(done) + }() + defer func() { + cancel() + <-done + }() + + // Wait for the updates to propagate + require.Eventually(t, func() bool { + _, out := getFields(t, ctrl.loader.Graph(), "testcomponents.summation.sum") + return out.(testcomponents.SummationExports).LastAdded == 10 + }, 2*time.Second, 10*time.Millisecond) + + in, out := getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_1") + require.Equal(t, "10", in.(testcomponents.PassthroughConfig).Input) + require.Equal(t, "10", out.(testcomponents.PassthroughExports).Output) + + in, out = getFields(t, ctrl.loader.Graph(), "testcomponents.passthrough.inc_dep_2") + require.Equal(t, "10", in.(testcomponents.PassthroughConfig).Input) + require.Equal(t, "10", out.(testcomponents.PassthroughExports).Output) + + in, _ = getFields(t, ctrl.loader.Graph(), "testcomponents.summation.sum") + require.Equal(t, 10, in.(testcomponents.SummationConfig).Input) +} + +func newTestController(t *testing.T) *Flow { + return newController(controllerOptions{ + Options: testOptions(t), + ModuleRegistry: newModuleRegistry(), + IsModule: false, + // Make sure that we have consistent number of workers for tests to make them deterministic. + WorkerPool: worker.NewShardedWorkerPool(4, 100), + }) +} diff --git a/pkg/flow/internal/controller/loader.go b/pkg/flow/internal/controller/loader.go index ec7d6aa76435..9544f60534e6 100644 --- a/pkg/flow/internal/controller/loader.go +++ b/pkg/flow/internal/controller/loader.go @@ -10,16 +10,16 @@ import ( "github.com/go-kit/log" "github.com/go-kit/log/level" "github.com/grafana/agent/pkg/flow/internal/dag" + "github.com/grafana/agent/pkg/flow/internal/worker" "github.com/grafana/agent/pkg/flow/tracing" "github.com/grafana/agent/service" + "github.com/grafana/dskit/backoff" "github.com/grafana/river/ast" "github.com/grafana/river/diag" "github.com/hashicorp/go-multierror" "go.opentelemetry.io/otel/attribute" "go.opentelemetry.io/otel/codes" "go.opentelemetry.io/otel/trace" - - _ "github.com/grafana/agent/pkg/flow/internal/testcomponents" // Include test components ) // The Loader builds and evaluates ComponentNodes from River blocks. @@ -30,6 +30,12 @@ type Loader struct { services []service.Service host service.Host componentReg ComponentRegistry + workerPool worker.Pool + // backoffConfig is used to backoff when an updated component's dependencies cannot be submitted to worker + // pool for evaluation in EvaluateDependencies, because the queue is full. This is an unlikely scenario, but when + // it happens we should avoid retrying too often to give other goroutines a chance to progress. 
Having a backoff + // also prevents log spamming with errors. + backoffConfig backoff.Config mut sync.RWMutex graph *dag.Graph @@ -51,6 +57,7 @@ type LoaderOptions struct { Services []service.Service // Services to load into the DAG. Host service.Host // Service host (when running services). ComponentRegistry ComponentRegistry // Registry to search for components. + WorkerPool worker.Pool // Worker pool to use for async tasks. } // NewLoader creates a new Loader. Components built by the Loader will be built @@ -74,6 +81,14 @@ func NewLoader(opts LoaderOptions) *Loader { services: services, host: host, componentReg: reg, + workerPool: opts.WorkerPool, + + // This is a reasonable default which should work for most cases. If a component is completely stuck, we would + // retry and log an error every 10 seconds, at most. + backoffConfig: backoff.Config{ + MinBackoff: 1 * time.Millisecond, + MaxBackoff: 10 * time.Second, + }, graph: &dag.Graph{}, originalGraph: &dag.Graph{}, @@ -227,8 +242,11 @@ func (l *Loader) Apply(args map[string]any, componentBlocks []*ast.BlockStmt, co return diags } -// Cleanup unregisters any existing metrics. -func (l *Loader) Cleanup() { +// Cleanup unregisters any existing metrics and optionally stops the worker pool. +func (l *Loader) Cleanup(stopWorkerPool bool) { + if stopWorkerPool { + l.workerPool.Stop() + } if l.globals.Registerer == nil { return } @@ -440,17 +458,7 @@ func (l *Loader) populateComponentNodes(g *dag.Graph, componentBlocks []*ast.Blo continue } - if registration.Singleton && block.Label != "" { - diags.Add(diag.Diagnostic{ - Severity: diag.SeverityLevelError, - Message: fmt.Sprintf("Component %q does not support labels", componentName), - StartPos: block.LabelPos.Position(), - EndPos: block.LabelPos.Add(len(block.Label) + 1).Position(), - }) - continue - } - - if !registration.Singleton && block.Label == "" { + if block.Label == "" { diags.Add(diag.Diagnostic{ Severity: diag.SeverityLevelError, Message: fmt.Sprintf("Component %q must have a label", componentName), @@ -460,16 +468,6 @@ func (l *Loader) populateComponentNodes(g *dag.Graph, componentBlocks []*ast.Blo continue } - if registration.Singleton && l.isModule() { - diags.Add(diag.Diagnostic{ - Severity: diag.SeverityLevelError, - Message: fmt.Sprintf("Component %q is a singleton and unsupported inside a module", componentName), - StartPos: block.NamePos.Position(), - EndPos: block.NamePos.Add(len(componentName) - 1).Position(), - }) - continue - } - // Create a new component c = NewComponentNode(l.globals, registration, block) } @@ -564,80 +562,135 @@ func (l *Loader) OriginalGraph() *dag.Graph { return l.originalGraph.Clone() } -// EvaluateDependencies re-evaluates components which depend directly or -// indirectly on c. EvaluateDependencies should be called whenever a component -// updates its exports. -// -// The provided parentContext can be used to provide global variables and -// functions to components. A child context will be constructed from the parent -// to expose values of other components. -func (l *Loader) EvaluateDependencies(c *ComponentNode) { +// EvaluateDependencies sends components which depend directly on components in updatedNodes for evaluation to the +// workerPool. It should be called whenever components update their exports. 
+// It is beneficial to call EvaluateDependencies with a batch of components, as it will enqueue the entire batch before +// the worker pool starts to evaluate them, resulting in smaller number of total evaluations when +// node updates are frequent. If the worker pool's queue is full, EvaluateDependencies will retry with a backoff until +// it succeeds or until the ctx is cancelled. +func (l *Loader) EvaluateDependencies(ctx context.Context, updatedNodes []*ComponentNode) { + if len(updatedNodes) == 0 { + return + } tracer := l.tracer.Tracer("") + spanCtx, span := tracer.Start(context.Background(), "SubmitDependantsForEvaluation", trace.WithSpanKind(trace.SpanKindInternal)) + span.SetAttributes(attribute.Int("originators_count", len(updatedNodes))) + span.SetStatus(codes.Ok, "dependencies submitted for evaluation") + defer span.End() + + l.cm.controllerEvaluation.Set(1) + defer l.cm.controllerEvaluation.Set(0) l.mut.RLock() defer l.mut.RUnlock() - l.cm.controllerEvaluation.Set(1) - defer l.cm.controllerEvaluation.Set(0) - start := time.Now() + dependenciesToParentsMap := make(map[dag.Node]*ComponentNode) + for _, parent := range updatedNodes { + // Make sure we're in-sync with the current exports of parent. + l.cache.CacheExports(parent.ID(), parent.Exports()) + // We collect all nodes directly incoming to parent. + _ = dag.WalkIncomingNodes(l.graph, parent, func(n dag.Node) error { + dependenciesToParentsMap[n] = parent + return nil + }) + } - spanCtx, span := tracer.Start(context.Background(), "GraphEvaluatePartial", trace.WithSpanKind(trace.SpanKindInternal)) - span.SetAttributes(attribute.String("initiator", c.NodeID())) + // Submit all dependencies for asynchronous evaluation. + // During evaluation, if a node's exports change, Flow will add it to updated nodes queue (controller.Queue) and + // the Flow controller will call EvaluateDependencies on it again. This results in a concurrent breadth-first + // traversal of the nodes that need to be evaluated. + for n, parent := range dependenciesToParentsMap { + dependantCtx, span := tracer.Start(spanCtx, "SubmitForEvaluation", trace.WithSpanKind(trace.SpanKindInternal)) + span.SetAttributes(attribute.String("node_id", n.NodeID())) + span.SetAttributes(attribute.String("originator_id", parent.NodeID())) + + // Submit for asynchronous evaluation with retries and backoff. Don't use range variables in the closure. + var ( + nodeRef, parentRef = n, parent + retryBackoff = backoff.New(ctx, l.backoffConfig) + err error + ) + for retryBackoff.Ongoing() { + err = l.workerPool.SubmitWithKey(nodeRef.NodeID(), func() { + l.concurrentEvalFn(nodeRef, dependantCtx, tracer, parentRef) + }) + if err != nil { + level.Error(l.log).Log( + "msg", "failed to submit node for evaluation - the agent is likely overloaded "+ + "and cannot keep up with evaluating components - will retry", + "err", err, + "node_id", n.NodeID(), + "originator_id", parent.NodeID(), + "retries", retryBackoff.NumRetries(), + ) + retryBackoff.Wait() + } else { + break + } + } + span.SetAttributes(attribute.Int("retries", retryBackoff.NumRetries())) + if err != nil { + span.SetStatus(codes.Error, err.Error()) + } else { + span.SetStatus(codes.Ok, "node submitted for evaluation") + } + span.End() + } + + // Report queue size metric. + l.cm.evaluationQueueSize.Set(float64(l.workerPool.QueueSize())) +} + +// concurrentEvalFn returns a function that evaluates a node and updates the cache. This function can be submitted to +// a worker pool for asynchronous evaluation. 
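+// spanCtx links the evaluation back to the submission span created in
+// EvaluateDependencies, and parent is the node whose export change caused n to be
+// re-evaluated; parent's lastUpdateTime is used to measure how long n waited.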
+func (l *Loader) concurrentEvalFn(n dag.Node, spanCtx context.Context, tracer trace.Tracer, parent *ComponentNode) { + start := time.Now() + l.cm.dependenciesWaitTime.Observe(time.Since(parent.lastUpdateTime.Load()).Seconds()) + _, span := tracer.Start(spanCtx, "EvaluateNode", trace.WithSpanKind(trace.SpanKindInternal)) + span.SetAttributes(attribute.String("node_id", n.NodeID())) defer span.End() - logger := log.With(l.log, "trace_id", span.SpanContext().TraceID()) - level.Info(logger).Log("msg", "starting partial graph evaluation") defer func() { - span.SetStatus(codes.Ok, "") - duration := time.Since(start) - level.Info(logger).Log("msg", "finished partial graph evaluation", "duration", duration) + level.Info(l.log).Log("msg", "finished node evaluation", "node_id", n.NodeID(), "duration", duration) l.cm.componentEvaluationTime.Observe(duration.Seconds()) }() - // Make sure we're in-sync with the current exports of c. - l.cache.CacheExports(c.ID(), c.Exports()) - - _ = dag.WalkReverse(l.graph, []dag.Node{c}, func(n dag.Node) error { - if n == c { - // Skip over the starting component; the starting component passed to - // EvaluateDependencies had its exports changed and none of its input - // arguments will need re-evaluation. - return nil - } + var err error + switch n := n.(type) { + case BlockNode: + ectx := l.cache.BuildContext() + evalErr := n.Evaluate(ectx) - _, span := tracer.Start(spanCtx, "EvaluateNode", trace.WithSpanKind(trace.SpanKindInternal)) - span.SetAttributes(attribute.String("node_id", n.NodeID())) - defer span.End() - - start := time.Now() - defer func() { - level.Info(logger).Log("msg", "finished node evaluation", "node_id", n.NodeID(), "duration", time.Since(start)) - }() - - var err error + // Only obtain loader lock after we have evaluated the node, allowing for concurrent evaluation. + l.mut.RLock() + err = l.postEvaluate(l.log, n, evalErr) - switch n := n.(type) { - case BlockNode: - err = l.evaluate(logger, n) - if exp, ok := n.(*ExportConfigNode); ok { - l.cache.CacheModuleExportValue(exp.Label(), exp.Value()) - } + // Additional post-evaluation steps necessary for module exports. + if exp, ok := n.(*ExportConfigNode); ok { + l.cache.CacheModuleExportValue(exp.Label(), exp.Value()) } - - // We only use the error for updating the span status; we don't return the - // error because we want to evaluate as many nodes as we can. - if err != nil { - span.SetStatus(codes.Error, err.Error()) + if l.globals.OnExportsChange != nil && l.cache.ExportChangeIndex() != l.moduleExportIndex { + // Upgrade to write lock to update the module exports. + l.mut.RUnlock() + l.mut.Lock() + defer l.mut.Unlock() + // Check if the update still needed after obtaining the write lock and perform it. + if l.cache.ExportChangeIndex() != l.moduleExportIndex { + l.globals.OnExportsChange(l.cache.CreateModuleExports()) + l.moduleExportIndex = l.cache.ExportChangeIndex() + } } else { - span.SetStatus(codes.Ok, "") + // No need to upgrade to write lock, just release the read lock. 
+ l.mut.RUnlock() } - return nil - }) + } - if l.globals.OnExportsChange != nil && l.cache.ExportChangeIndex() != l.moduleExportIndex { - l.globals.OnExportsChange(l.cache.CreateModuleExports()) - l.moduleExportIndex = l.cache.ExportChangeIndex() + // We only use the error for updating the span status + if err != nil { + span.SetStatus(codes.Error, err.Error()) + } else { + span.SetStatus(codes.Ok, "node successfully evaluated") } } @@ -646,7 +699,12 @@ func (l *Loader) EvaluateDependencies(c *ComponentNode) { func (l *Loader) evaluate(logger log.Logger, bn BlockNode) error { ectx := l.cache.BuildContext() err := bn.Evaluate(ectx) + return l.postEvaluate(logger, bn, err) +} +// postEvaluate is called after a node has been evaluated. It updates the caches and logs any errors. +// mut must be held when calling postEvaluate. +func (l *Loader) postEvaluate(logger log.Logger, bn BlockNode, err error) error { switch c := bn.(type) { case *ComponentNode: // Always update the cache both the arguments and exports, since both might @@ -658,6 +716,8 @@ func (l *Loader) evaluate(logger log.Logger, bn BlockNode) error { if c.Optional() { l.cache.CacheModuleArgument(c.Label(), c.Default()) } else { + // NOTE: this masks the previous evaluation error, but we treat a missing module arguments as + // a more important error to address. err = fmt.Errorf("missing required argument %q to module", c.Label()) } } diff --git a/pkg/flow/internal/controller/loader_test.go b/pkg/flow/internal/controller/loader_test.go index c806a3ad35c8..2351218d642b 100644 --- a/pkg/flow/internal/controller/loader_test.go +++ b/pkg/flow/internal/controller/loader_test.go @@ -15,6 +15,8 @@ import ( "github.com/prometheus/client_golang/prometheus" "github.com/stretchr/testify/require" "go.opentelemetry.io/otel/trace" + + _ "github.com/grafana/agent/pkg/flow/internal/testcomponents" // Include test components ) func TestLoader(t *testing.T) { @@ -175,19 +177,6 @@ func TestLoader(t *testing.T) { diags := applyFromContent(t, l, []byte(invalidFile), nil) require.Error(t, diags.ErrorOrNil()) }) - - t.Run("Handling of singleton component labels", func(t *testing.T) { - invalidFile := ` - testcomponents.tick { - } - testcomponents.singleton "first" { - } - ` - l := controller.NewLoader(newLoaderOptions()) - diags := applyFromContent(t, l, []byte(invalidFile), nil) - require.ErrorContains(t, diags[0], `Component "testcomponents.tick" must have a label`) - require.ErrorContains(t, diags[1], `Component "testcomponents.singleton" does not support labels`) - }) } // TestScopeWithFailingComponent is used to ensure that the scope is filled out, even if the component diff --git a/pkg/flow/internal/controller/metrics.go b/pkg/flow/internal/controller/metrics.go index 196557a3940c..ff6c427d7e96 100644 --- a/pkg/flow/internal/controller/metrics.go +++ b/pkg/flow/internal/controller/metrics.go @@ -8,6 +8,8 @@ import ( type controllerMetrics struct { controllerEvaluation prometheus.Gauge componentEvaluationTime prometheus.Histogram + dependenciesWaitTime prometheus.Histogram + evaluationQueueSize prometheus.Gauge } // newControllerMetrics inits the metrics for the components controller @@ -27,17 +29,35 @@ func newControllerMetrics(id string) *controllerMetrics { ConstLabels: map[string]string{"controller_id": id}, }, ) + cm.dependenciesWaitTime = prometheus.NewHistogram( + prometheus.HistogramOpts{ + Name: "agent_component_dependencies_wait_seconds", + Help: "Time spent by components waiting to be evaluated after their dependency is updated.", + 
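+			// No explicit Buckets are configured, so prometheus.DefBuckets apply.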
ConstLabels: map[string]string{"controller_id": id}, + }, + ) + + cm.evaluationQueueSize = prometheus.NewGauge(prometheus.GaugeOpts{ + Name: "agent_component_evaluation_queue_size", + Help: "Tracks the number of components waiting to be evaluated in the worker pool", + ConstLabels: map[string]string{"controller_id": id}, + }) + return cm } func (cm *controllerMetrics) Collect(ch chan<- prometheus.Metric) { cm.componentEvaluationTime.Collect(ch) cm.controllerEvaluation.Collect(ch) + cm.dependenciesWaitTime.Collect(ch) + cm.evaluationQueueSize.Collect(ch) } func (cm *controllerMetrics) Describe(ch chan<- *prometheus.Desc) { cm.componentEvaluationTime.Describe(ch) - cm.componentEvaluationTime.Describe(ch) + cm.controllerEvaluation.Describe(ch) + cm.dependenciesWaitTime.Describe(ch) + cm.evaluationQueueSize.Describe(ch) } type controllerCollector struct { diff --git a/pkg/flow/internal/controller/node_component.go b/pkg/flow/internal/controller/node_component.go index e734787f96f1..cab7109141a0 100644 --- a/pkg/flow/internal/controller/node_component.go +++ b/pkg/flow/internal/controller/node_component.go @@ -91,6 +91,7 @@ type ComponentNode struct { exportsType reflect.Type moduleController ModuleController OnComponentUpdate func(cn *ComponentNode) // Informs controller that we need to reevaluate + lastUpdateTime atomic.Time mut sync.RWMutex block *ast.BlockStmt // Current River block to derive args from @@ -98,8 +99,6 @@ type ComponentNode struct { managed component.Component // Inner managed component args component.Arguments // Evaluated arguments for the managed component - doingEval atomic.Bool - // NOTE(rfratto): health and exports have their own mutex because they may be // set asynchronously while mut is still being held (i.e., when calling Evaluate // and the managed component immediately creates new exports) @@ -266,9 +265,6 @@ func (cn *ComponentNode) evaluate(scope *vm.Scope) error { cn.mut.Lock() defer cn.mut.Unlock() - cn.doingEval.Store(true) - defer cn.doingEval.Store(false) - argsPointer := cn.reg.CloneArguments() if err := cn.eval.Evaluate(scope, argsPointer); err != nil { return fmt.Errorf("decoding River: %w", err) @@ -390,18 +386,9 @@ func (cn *ComponentNode) setExports(e component.Exports) { } cn.exportsMut.Unlock() - if cn.doingEval.Load() { - // Optimization edge case: some components supply exports when they're - // being evaluated. - // - // Since components that are being evaluated will always cause their - // dependencies to also be evaluated, there's no reason to call - // onExportsChange here. - return - } - if changed { // Inform the controller that we have new exports. 
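+		// Record when the exports changed; the loader observes the delta between
+		// this timestamp and the start of each dependent's evaluation as
+		// agent_component_dependencies_wait_seconds.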
+ cn.lastUpdateTime.Store(time.Now()) cn.OnComponentUpdate(cn) } } diff --git a/pkg/flow/internal/controller/node_config_argument.go b/pkg/flow/internal/controller/node_config_argument.go index aadfbb6b560b..5b979503f805 100644 --- a/pkg/flow/internal/controller/node_config_argument.go +++ b/pkg/flow/internal/controller/node_config_argument.go @@ -36,8 +36,9 @@ func NewArgumentConfigNode(block *ast.BlockStmt, globals ComponentGlobals) *Argu } type argumentBlock struct { - Optional bool `river:"optional,attr,optional"` - Default any `river:"default,attr,optional"` + Optional bool `river:"optional,attr,optional"` + Default any `river:"default,attr,optional"` + Comment string `river:"comment,attr,optional"` } // Evaluate implements BlockNode and updates the arguments for the managed config block diff --git a/pkg/flow/internal/controller/queue.go b/pkg/flow/internal/controller/queue.go index acbe6980c8d9..a8cd1b5bae05 100644 --- a/pkg/flow/internal/controller/queue.go +++ b/pkg/flow/internal/controller/queue.go @@ -1,23 +1,27 @@ package controller -import "sync" +import ( + "sync" +) -// Queue is an unordered queue of components. +// Queue is a thread-safe, insertion-ordered set of components. // // Queue is intended for tracking components that have updated their Exports // for later reevaluation. type Queue struct { - mut sync.Mutex - queued map[*ComponentNode]struct{} + mut sync.Mutex + queuedSet map[*ComponentNode]struct{} + queuedOrder []*ComponentNode updateCh chan struct{} } -// NewQueue returns a new unordered component queue. +// NewQueue returns a new queue. func NewQueue() *Queue { return &Queue{ - updateCh: make(chan struct{}, 1), - queued: make(map[*ComponentNode]struct{}), + updateCh: make(chan struct{}, 1), + queuedSet: make(map[*ComponentNode]struct{}), + queuedOrder: make([]*ComponentNode, 0), } } @@ -26,7 +30,14 @@ func NewQueue() *Queue { func (q *Queue) Enqueue(c *ComponentNode) { q.mut.Lock() defer q.mut.Unlock() - q.queued[c] = struct{}{} + + // Skip if already queued. + if _, ok := q.queuedSet[c]; ok { + return + } + + q.queuedOrder = append(q.queuedOrder, c) + q.queuedSet[c] = struct{}{} select { case q.updateCh <- struct{}{}: default: @@ -36,16 +47,14 @@ func (q *Queue) Enqueue(c *ComponentNode) { // Chan returns a channel which is written to when the queue is non-empty. func (q *Queue) Chan() <-chan struct{} { return q.updateCh } -// TryDequeue dequeues a randomly queued component. TryDequeue will return nil -// if the queue is empty. -func (q *Queue) TryDequeue() *ComponentNode { +// DequeueAll removes all components from the queue and returns them. 
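+// Components come back in the order they were first enqueued; both the ordering
+// slice and the dedup set are reset, leaving the queue empty.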
+func (q *Queue) DequeueAll() []*ComponentNode { q.mut.Lock() defer q.mut.Unlock() - for c := range q.queued { - delete(q.queued, c) - return c - } + all := q.queuedOrder + q.queuedOrder = make([]*ComponentNode, 0) + q.queuedSet = make(map[*ComponentNode]struct{}) - return nil + return all } diff --git a/pkg/flow/internal/controller/queue_test.go b/pkg/flow/internal/controller/queue_test.go index 2e36292c85d3..c93fb14ef8fc 100644 --- a/pkg/flow/internal/controller/queue_test.go +++ b/pkg/flow/internal/controller/queue_test.go @@ -2,15 +2,90 @@ package controller import ( "testing" + "time" "github.com/stretchr/testify/require" + "go.uber.org/atomic" ) func TestEnqueueDequeue(t *testing.T) { tn := &ComponentNode{} q := NewQueue() q.Enqueue(tn) - require.Lenf(t, q.queued, 1, "queue should be 1") - fn := q.TryDequeue() - require.True(t, fn == tn) + require.Lenf(t, q.queuedSet, 1, "queue should be 1") + all := q.DequeueAll() + require.Len(t, all, 1) + require.True(t, all[0] == tn) + require.Len(t, q.queuedSet, 0) +} + +func TestDequeue_Empty(t *testing.T) { + q := NewQueue() + require.Len(t, q.queuedSet, 0) + require.Len(t, q.DequeueAll(), 0) +} + +func TestDequeue_InOrder(t *testing.T) { + c1, c2, c3 := &ComponentNode{}, &ComponentNode{}, &ComponentNode{} + q := NewQueue() + q.Enqueue(c1) + q.Enqueue(c2) + q.Enqueue(c3) + require.Len(t, q.queuedSet, 3) + all := q.DequeueAll() + require.Len(t, all, 3) + require.Len(t, q.queuedSet, 0) + require.Same(t, c1, all[0]) + require.Same(t, c2, all[1]) + require.Same(t, c3, all[2]) +} + +func TestDequeue_NoDuplicates(t *testing.T) { + c1, c2 := &ComponentNode{}, &ComponentNode{} + q := NewQueue() + q.Enqueue(c1) + q.Enqueue(c1) + q.Enqueue(c2) + q.Enqueue(c1) + q.Enqueue(c2) + q.Enqueue(c1) + require.Len(t, q.queuedSet, 2) + all := q.DequeueAll() + require.Len(t, all, 2) + require.Len(t, q.queuedSet, 0) + require.Same(t, c1, all[0]) + require.Same(t, c2, all[1]) +} + +func TestEnqueue_ChannelNotification(t *testing.T) { + c1 := &ComponentNode{} + q := NewQueue() + + notificationsCount := atomic.Int32{} + waiting := make(chan struct{}) + testDone := make(chan struct{}) + defer close(testDone) + go func() { + waiting <- struct{}{} + for { + select { + case <-testDone: + return + case <-q.Chan(): + all := q.DequeueAll() + notificationsCount.Add(int32(len(all))) + } + } + }() + + // Make sure the consumer is waiting + <-waiting + + // Write 10 items to the queue and make sure we get notified + for i := 1; i <= 10; i++ { + q.Enqueue(c1) + require.Eventually(t, func() bool { + return notificationsCount.Load() == int32(i) + }, 3*time.Second, 5*time.Millisecond) + } } diff --git a/pkg/flow/internal/dag/walk.go b/pkg/flow/internal/dag/walk.go index 5151d347bc5e..eda98f5fbf69 100644 --- a/pkg/flow/internal/dag/walk.go +++ b/pkg/flow/internal/dag/walk.go @@ -41,40 +41,13 @@ func Walk(g *Graph, start []Node, fn WalkFunc) error { return nil } -// WalkReverse performs a depth-first walk of incoming edges for all nodes in -// start, invoking the provided fn for each node. Walk returns the error -// returned by fn. -// -// Nodes unreachable from start will not be passed to fn. -func WalkReverse(g *Graph, start []Node, fn WalkFunc) error { - var ( - visited = make(nodeSet) - unchecked = make([]Node, 0, len(start)) - ) - - // Prefill the set of unchecked nodes with our start set. - unchecked = append(unchecked, start...) 
- - // Iterate through unchecked nodes, visiting each in turn and adding incoming - // edges to the unchecked list until all reachable nodes have been processed. - for len(unchecked) > 0 { - check := unchecked[len(unchecked)-1] - unchecked = unchecked[:len(unchecked)-1] - - if visited.Has(check) { - continue - } - visited.Add(check) - - if err := fn(check); err != nil { +// WalkIncomingNodes walks all the nodes that have a direct, incoming edge to start. +func WalkIncomingNodes(g *Graph, start Node, fn WalkFunc) error { + for n := range g.inEdges[start] { + if err := fn(n); err != nil { return err } - - for n := range g.inEdges[check] { - unchecked = append(unchecked, n) - } } - return nil } diff --git a/pkg/flow/internal/testcomponents/count.go b/pkg/flow/internal/testcomponents/count.go new file mode 100644 index 000000000000..88e37af793ce --- /dev/null +++ b/pkg/flow/internal/testcomponents/count.go @@ -0,0 +1,99 @@ +package testcomponents + +import ( + "context" + "fmt" + "sync" + "time" + + "github.com/go-kit/log" + "github.com/go-kit/log/level" + "github.com/grafana/agent/component" + "go.uber.org/atomic" +) + +func init() { + component.Register(component.Registration{ + Name: "testcomponents.count", + Args: CountConfig{}, + Exports: CountExports{}, + + Build: func(opts component.Options, args component.Arguments) (component.Component, error) { + return NewCount(opts, args.(CountConfig)) + }, + }) +} + +type CountConfig struct { + Frequency time.Duration `river:"frequency,attr"` + Max int `river:"max,attr"` +} + +type CountExports struct { + Count int `river:"count,attr,optional"` +} + +type Count struct { + opts component.Options + log log.Logger + count atomic.Int32 + + cfgMut sync.Mutex + cfg CountConfig +} + +func NewCount(o component.Options, cfg CountConfig) (*Count, error) { + t := &Count{opts: o, log: o.Logger} + if err := t.Update(cfg); err != nil { + return nil, err + } + return t, nil +} + +var ( + _ component.Component = (*Count)(nil) +) + +func (t *Count) Run(ctx context.Context) error { + for { + select { + case <-ctx.Done(): + return nil + case <-time.After(t.getNextCount()): + t.cfgMut.Lock() + maxCount := t.cfg.Max + t.cfgMut.Unlock() + + currentCount := t.count.Load() + if maxCount == 0 || currentCount < int32(maxCount) { + if t.count.CompareAndSwap(currentCount, currentCount+1) { + level.Info(t.log).Log("msg", "incremented count", "count", currentCount+1) + t.opts.OnStateChange(CountExports{Count: int(currentCount + 1)}) + } else { + level.Info(t.log).Log("msg", "failed to increment count", "count", currentCount) + } + } + } + } +} + +func (t *Count) getNextCount() time.Duration { + t.cfgMut.Lock() + defer t.cfgMut.Unlock() + return t.cfg.Frequency +} + +// Update implements Component. 
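+// Update validates the new config and swaps it in under cfgMut; the Run loop
+// picks up a changed frequency on its next tick via getNextCount.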
+func (t *Count) Update(args component.Arguments) error { + t.cfgMut.Lock() + defer t.cfgMut.Unlock() + + cfg := args.(CountConfig) + if cfg.Frequency == 0 { + return fmt.Errorf("frequency must not be 0") + } + + level.Info(t.log).Log("msg", "setting count frequency", "freq", cfg.Frequency) + t.cfg = cfg + return nil +} diff --git a/pkg/flow/internal/testcomponents/passthrough.go b/pkg/flow/internal/testcomponents/passthrough.go index c23bcc4e7eee..eaa0a175d825 100644 --- a/pkg/flow/internal/testcomponents/passthrough.go +++ b/pkg/flow/internal/testcomponents/passthrough.go @@ -2,6 +2,7 @@ package testcomponents import ( "context" + "time" "github.com/go-kit/log" "github.com/go-kit/log/level" @@ -22,7 +23,8 @@ func init() { // PassthroughConfig configures the testcomponents.passthrough component. type PassthroughConfig struct { - Input string `river:"input,attr"` + Input string `river:"input,attr"` + Lag time.Duration `river:"lag,attr,optional"` } // PassthroughExports describes exported fields for the @@ -62,6 +64,11 @@ func (t *Passthrough) Run(ctx context.Context) error { func (t *Passthrough) Update(args component.Arguments) error { c := args.(PassthroughConfig) + if c.Lag != 0 { + level.Info(t.log).Log("msg", "sleeping for lag", "lag", c.Lag) + time.Sleep(c.Lag) + } + level.Info(t.log).Log("msg", "passing through value", "value", c.Input) t.opts.OnStateChange(PassthroughExports{Output: c.Input}) return nil diff --git a/pkg/flow/internal/testcomponents/singleton.go b/pkg/flow/internal/testcomponents/singleton.go deleted file mode 100644 index 3dcf8a1d0e94..000000000000 --- a/pkg/flow/internal/testcomponents/singleton.go +++ /dev/null @@ -1,59 +0,0 @@ -package testcomponents - -import ( - "context" - - "github.com/go-kit/log" - "github.com/grafana/agent/component" -) - -func init() { - component.Register(component.Registration{ - Name: "testcomponents.singleton", - Args: SingletonArguments{}, - Exports: SingletonExports{}, - Singleton: true, - - Build: func(opts component.Options, args component.Arguments) (component.Component, error) { - return NewSingleton(opts, args.(SingletonArguments)) - }, - }) -} - -// SingletonArguments configures the testcomponents.singleton component. -type SingletonArguments struct{} - -// SingletonExports describes exported fields for the -// testcomponents.singleton component. -type SingletonExports struct{} - -// Singleton implements the testcomponents.singleton component, which is a -// no-op component. -type Singleton struct { - opts component.Options - log log.Logger -} - -// NewSingleton creates a new singleton component. -func NewSingleton(o component.Options, cfg SingletonArguments) (*Singleton, error) { - t := &Singleton{opts: o, log: o.Logger} - if err := t.Update(cfg); err != nil { - return nil, err - } - return t, nil -} - -var ( - _ component.Component = (*Passthrough)(nil) -) - -// Run implements Component. -func (t *Singleton) Run(ctx context.Context) error { - <-ctx.Done() - return nil -} - -// Update implements Component. 
-func (t *Singleton) Update(args component.Arguments) error { - return nil -} diff --git a/pkg/flow/internal/testcomponents/sumation.go b/pkg/flow/internal/testcomponents/sumation.go new file mode 100644 index 000000000000..88995733e9d9 --- /dev/null +++ b/pkg/flow/internal/testcomponents/sumation.go @@ -0,0 +1,66 @@ +package testcomponents + +import ( + "context" + + "github.com/go-kit/log" + "github.com/go-kit/log/level" + "github.com/grafana/agent/component" + "go.uber.org/atomic" +) + +func init() { + component.Register(component.Registration{ + Name: "testcomponents.summation", + Args: SummationConfig{}, + Exports: SummationExports{}, + + Build: func(opts component.Options, args component.Arguments) (component.Component, error) { + return NewSummation(opts, args.(SummationConfig)) + }, + }) +} + +type SummationConfig struct { + Input int `river:"input,attr"` +} + +type SummationExports struct { + Sum int `river:"sum,attr"` + LastAdded int `river:"last_added,attr"` +} + +type Summation struct { + opts component.Options + log log.Logger + sum atomic.Int32 +} + +// NewSummation creates a new summation component. +func NewSummation(o component.Options, cfg SummationConfig) (*Summation, error) { + t := &Summation{opts: o, log: o.Logger} + if err := t.Update(cfg); err != nil { + return nil, err + } + return t, nil +} + +var ( + _ component.Component = (*Summation)(nil) +) + +// Run implements Component. +func (t *Summation) Run(ctx context.Context) error { + <-ctx.Done() + return nil +} + +// Update implements Component. +func (t *Summation) Update(args component.Arguments) error { + c := args.(SummationConfig) + newSum := int(t.sum.Add(int32(c.Input))) + + level.Info(t.log).Log("msg", "updated sum", "value", newSum, "input", c.Input) + t.opts.OnStateChange(SummationExports{Sum: newSum, LastAdded: c.Input}) + return nil +} diff --git a/pkg/flow/internal/worker/worker_pool.go b/pkg/flow/internal/worker/worker_pool.go new file mode 100644 index 000000000000..b5ff904b2024 --- /dev/null +++ b/pkg/flow/internal/worker/worker_pool.go @@ -0,0 +1,130 @@ +package worker + +import ( + "fmt" + "hash/fnv" + "math/rand" + "runtime" + "sync" +) + +type Pool interface { + // Stop stops the worker pool. It does not wait to drain any internal queues, but it does wait for the currently + // running tasks to complete. It must only be called once. + Stop() + // Submit submits a function to be executed by the worker pool on a random worker. Error is returned if the pool + // is unable to accept extra work. + Submit(func()) error + // SubmitWithKey submits a function to be executed by the worker pool, ensuring that only one + // job with given key can be queued at the time. Adding a job with a key that is already queued is a no-op (even if + // the submitted function is different). Error is returned if the pool is unable to accept extra work - the caller + // can decide how to handle this situation. + SubmitWithKey(string, func()) error + // QueueSize returns the number of tasks currently queued. + QueueSize() int +} + +// shardedWorkerPool is a Pool that distributes work across a fixed number of workers, using a hash of the key to +// determine which worker to use. This, to an extent, defends the pool from a slow task hogging all the workers. 
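+// For example, keying submissions by component ID collapses a burst of updates
+// to one component into a single queued evaluation (a usage sketch; the key
+// below is made up):
+//
+//	pool := worker.NewShardedWorkerPool(4, 100)
+//	defer pool.Stop()
+//	_ = pool.SubmitWithKey("testcomponents.count.inc", func() { /* evaluate */ })
+//	_ = pool.SubmitWithKey("testcomponents.count.inc", func() { /* skipped: same key still queued */ })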
+type shardedWorkerPool struct { + workersCount int + workQueues []chan func() + quit chan struct{} + allStopped sync.WaitGroup + + lock sync.Mutex + set map[string]struct{} +} + +var _ Pool = (*shardedWorkerPool)(nil) + +func NewDefaultWorkerPool() Pool { + return NewShardedWorkerPool(runtime.NumCPU(), 1024) +} + +// NewShardedWorkerPool creates a new worker pool with the given number of workers and queue size for each worker. +// The queued tasks are sharded across the workers using a hash of the key. The pool is automatically started and +// ready to accept work. To prevent resource leak, Stop() must be called when the pool is no longer needed. +func NewShardedWorkerPool(workersCount int, queueSize int) Pool { + if workersCount <= 0 { + panic(fmt.Sprintf("workersCount must be positive, got %d", workersCount)) + } + queues := make([]chan func(), workersCount) + for i := 0; i < workersCount; i++ { + queues[i] = make(chan func(), queueSize) + } + pool := &shardedWorkerPool{ + workersCount: workersCount, + workQueues: queues, + quit: make(chan struct{}), + set: make(map[string]struct{}), + } + pool.start() + return pool +} + +func (w *shardedWorkerPool) Submit(f func()) error { + return w.SubmitWithKey(fmt.Sprintf("%d", rand.Int()), f) +} + +func (w *shardedWorkerPool) SubmitWithKey(key string, f func()) error { + wrapped := func() { + // NOTE: we intentionally remove from the queue before executing the function. This means that while a task is + // executing, another task with the same key can be added to the queue, potentially even by the task itself. + w.lock.Lock() + delete(w.set, key) + w.lock.Unlock() + + f() + } + queue := w.workQueues[w.workerFor(key)] + + w.lock.Lock() + defer w.lock.Unlock() + if _, exists := w.set[key]; exists { + return nil // only queue one job for given key + } + + select { + case queue <- wrapped: + w.set[key] = struct{}{} + return nil + default: + return fmt.Errorf("worker queue is full") + } +} + +func (w *shardedWorkerPool) QueueSize() int { + w.lock.Lock() + defer w.lock.Unlock() + return len(w.set) +} + +func (w *shardedWorkerPool) Stop() { + close(w.quit) + w.allStopped.Wait() +} + +func (w *shardedWorkerPool) start() { + for i := 0; i < w.workersCount; i++ { + queue := w.workQueues[i] + w.allStopped.Add(1) + go func() { + defer w.allStopped.Done() + for { + select { + case <-w.quit: + return + case f := <-queue: + f() + } + } + }() + } +} + +func (w *shardedWorkerPool) workerFor(s string) int { + h := fnv.New32a() + _, _ = h.Write([]byte(s)) + return int(h.Sum32()) % w.workersCount +} diff --git a/pkg/flow/internal/worker/worker_pool_test.go b/pkg/flow/internal/worker/worker_pool_test.go new file mode 100644 index 000000000000..27b5c24b7ebc --- /dev/null +++ b/pkg/flow/internal/worker/worker_pool_test.go @@ -0,0 +1,197 @@ +package worker + +import ( + "testing" + "time" + + "github.com/stretchr/testify/require" + "go.uber.org/atomic" + "go.uber.org/goleak" +) + +func TestWorkerPool(t *testing.T) { + t.Run("worker pool", func(t *testing.T) { + t.Run("should start and stop cleanly", func(t *testing.T) { + defer goleak.VerifyNone(t) + pool := NewShardedWorkerPool(4, 1) + require.Equal(t, 0, pool.QueueSize()) + defer pool.Stop() + }) + + t.Run("should reject invalid worker count", func(t *testing.T) { + defer goleak.VerifyNone(t) + + require.Panics(t, func() { + NewShardedWorkerPool(0, 0) + }) + + require.Panics(t, func() { + NewShardedWorkerPool(-1, 0) + }) + }) + + t.Run("should reject invalid buffer size", func(t *testing.T) { + defer 
goleak.VerifyNone(t) + require.Panics(t, func() { + NewShardedWorkerPool(1, -1) + }) + }) + + t.Run("should process a single task", func(t *testing.T) { + defer goleak.VerifyNone(t) + done := make(chan struct{}) + pool := NewShardedWorkerPool(4, 1) + defer pool.Stop() + + err := pool.Submit(func() { + done <- struct{}{} + }) + require.NoError(t, err) + select { + case <-done: + return + case <-time.After(3 * time.Second): + t.Fatal("timeout waiting for task to be processed") + return + } + }) + + t.Run("should process a single task with key", func(t *testing.T) { + defer goleak.VerifyNone(t) + done := make(chan struct{}) + pool := NewShardedWorkerPool(4, 1) + defer pool.Stop() + + err := pool.SubmitWithKey("testKey", func() { + done <- struct{}{} + }) + require.NoError(t, err) + select { + case <-done: + return + case <-time.After(3 * time.Second): + t.Fatal("timeout waiting for task to be processed") + return + } + }) + + t.Run("should not queue duplicated keys", func(t *testing.T) { + defer goleak.VerifyNone(t) + pool := NewShardedWorkerPool(4, 10) + defer pool.Stop() + tasksDone := atomic.Int32{} + + // First task will block the worker + blockFirstTask := make(chan struct{}) + firstTaskRunning := make(chan struct{}) + err := pool.SubmitWithKey("k1", func() { + firstTaskRunning <- struct{}{} + <-blockFirstTask + tasksDone.Inc() + }) + require.NoError(t, err) + + // Wait for the first task to be running already and blocking the worker + <-firstTaskRunning + require.Equal(t, 0, pool.QueueSize()) + + // Second task will be queued + err = pool.SubmitWithKey("k1", func() { + tasksDone.Inc() + }) + require.NoError(t, err) + require.Equal(t, 1, pool.QueueSize()) + + // Third task will be skipped, as we already have k1 in the queue + err = pool.SubmitWithKey("k1", func() { + tasksDone.Inc() + }) + require.NoError(t, err) + + // No tasks done yet as we're blocking the first task + require.Equal(t, int32(0), tasksDone.Load()) + + // After we unblock first task, two tasks should get done + blockFirstTask <- struct{}{} + require.Eventually(t, func() bool { + return tasksDone.Load() == 2 + }, 3*time.Second, 5*time.Millisecond) + require.Equal(t, 0, pool.QueueSize()) + + // No more tasks should be done, verify again with some delay + time.Sleep(100 * time.Millisecond) + require.Equal(t, int32(2), tasksDone.Load()) + }) + + t.Run("should concurrently process for different keys", func(t *testing.T) { + defer goleak.VerifyNone(t) + pool := NewShardedWorkerPool(4, 10) + defer pool.Stop() + tasksDone := atomic.Int32{} + + // First task will block the worker + blockFirstTask := make(chan struct{}) + firstTaskRunning := make(chan struct{}) + err := pool.SubmitWithKey("k1", func() { + firstTaskRunning <- struct{}{} + <-blockFirstTask + tasksDone.Inc() + }) + require.NoError(t, err) + + // Wait for the first task to be running already and blocking the worker + <-firstTaskRunning + + // Second and third tasks will complete as it has a key that will hash to a different shard + err = pool.SubmitWithKey("k2", func() { tasksDone.Inc() }) + require.NoError(t, err) + + err = pool.SubmitWithKey("k3", func() { tasksDone.Inc() }) + require.NoError(t, err) + + // Ensure the k2 and k3 tasks are done + require.Eventually(t, func() bool { + return tasksDone.Load() == 2 + }, 3*time.Second, 5*time.Millisecond) + + // After we unblock first task, it should get done as well + blockFirstTask <- struct{}{} + require.Eventually(t, func() bool { + return tasksDone.Load() == 3 + }, 3*time.Second, 5*time.Millisecond) + }) + + 
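+		// With a single worker and a queue capacity of one, one running task plus
+		// one queued task saturates the pool, so submitting a third distinct key
+		// must fail.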
t.Run("should reject when queue is full", func(t *testing.T) { + defer goleak.VerifyNone(t) + // Pool with one worker and queue size of 1 - all work goes to one queue + pool := NewShardedWorkerPool(1, 1) + defer pool.Stop() + tasksDone := atomic.Int32{} + + // First task will block the worker + blockFirstTask := make(chan struct{}) + firstTaskRunning := make(chan struct{}) + err := pool.SubmitWithKey("k1", func() { + firstTaskRunning <- struct{}{} + <-blockFirstTask + tasksDone.Inc() + }) + require.NoError(t, err) + defer func() { blockFirstTask <- struct{}{} }() + + // Wait for the first task to be running already and blocking the worker + <-firstTaskRunning + require.Equal(t, 0, pool.QueueSize()) + + // Second task will be queued + err = pool.SubmitWithKey("k2", func() { tasksDone.Inc() }) + require.NoError(t, err) + require.Equal(t, 1, pool.QueueSize()) + + // Third task cannot be accepted, because the queue is full + err = pool.SubmitWithKey("k3", func() { tasksDone.Inc() }) + require.ErrorContains(t, err, "queue is full") + require.Equal(t, 1, pool.QueueSize()) + }) + }) +} diff --git a/pkg/flow/module.go b/pkg/flow/module.go index ef9ffa81b4e9..f0687b8873a7 100644 --- a/pkg/flow/module.go +++ b/pkg/flow/module.go @@ -8,6 +8,7 @@ import ( "github.com/grafana/agent/component" "github.com/grafana/agent/pkg/flow/internal/controller" + "github.com/grafana/agent/pkg/flow/internal/worker" "github.com/grafana/agent/pkg/flow/logging" "github.com/grafana/agent/pkg/flow/tracing" "github.com/grafana/river/scanner" @@ -103,6 +104,7 @@ func newModule(o *moduleOptions) *module { IsModule: true, ModuleRegistry: o.ModuleRegistry, ComponentRegistry: o.ComponentRegistry, + WorkerPool: o.WorkerPool, Options: Options{ ControllerID: o.ID, Tracer: o.Tracer, @@ -171,4 +173,8 @@ type moduleControllerOptions struct { // ServiceMap is a map of services which can be used in the module // controller. ServiceMap controller.ServiceMap + + // WorkerPool is a worker pool that can be used to run tasks asynchronously. A default pool will be created if this + // is nil. + WorkerPool worker.Pool } diff --git a/pkg/flow/module_caching_test.go b/pkg/flow/module_caching_test.go new file mode 100644 index 000000000000..3d7eb6f3448d --- /dev/null +++ b/pkg/flow/module_caching_test.go @@ -0,0 +1,192 @@ +package flow_test + +// This file contains tests which verify that the Flow controller correctly updates and caches modules' arguments +// and exports in presence of multiple components. + +import ( + "context" + "os" + "strconv" + "testing" + "time" + + "github.com/grafana/agent/component" + "github.com/grafana/agent/pkg/flow" + "github.com/grafana/agent/pkg/flow/internal/testcomponents" + "github.com/grafana/agent/pkg/flow/logging" + "github.com/grafana/agent/service" + cluster_service "github.com/grafana/agent/service/cluster" + http_service "github.com/grafana/agent/service/http" + otel_service "github.com/grafana/agent/service/otel" + "github.com/stretchr/testify/require" + "go.uber.org/goleak" + + _ "github.com/grafana/agent/component/module/string" +) + +func TestUpdates_EmptyModule(t *testing.T) { + defer verifyNoGoroutineLeaks(t) + + // There's an empty module in the config below, but the pipeline we test for propagation is not affected by it. 
+ config := ` + module.string "test" { + content = "" + } + + testcomponents.count "inc" { + frequency = "10ms" + max = 10 + } + + testcomponents.passthrough "inc_dep_1" { + input = testcomponents.count.inc.count + lag = "1ms" + } + + testcomponents.passthrough "inc_dep_2" { + input = testcomponents.passthrough.inc_dep_1.output + lag = "1ms" + } + + testcomponents.summation "sum" { + input = testcomponents.passthrough.inc_dep_2.output + } +` + + ctrl := flow.New(testOptions(t)) + f, err := flow.ParseSource(t.Name(), []byte(config)) + require.NoError(t, err) + require.NotNil(t, f) + + err = ctrl.LoadSource(f, nil) + require.NoError(t, err) + + ctx, cancel := context.WithCancel(context.Background()) + done := make(chan struct{}) + go func() { + ctrl.Run(ctx) + close(done) + }() + defer func() { + cancel() + <-done + }() + + require.Eventually(t, func() bool { + export := getExport[testcomponents.SummationExports](t, ctrl, "", "testcomponents.summation.sum") + return export.LastAdded == 10 + }, 3*time.Second, 10*time.Millisecond) +} + +func TestUpdates_ThroughModule(t *testing.T) { + // We use this module in a Flow config below. + module := ` + argument "input" { + optional = false + } + + testcomponents.passthrough "pt" { + input = argument.input.value + lag = "1ms" + } + + export "output" { + value = testcomponents.passthrough.pt.output + } +` + + // We send the count increments via module and to the summation component and verify that the updates propagate. + config := ` + testcomponents.count "inc" { + frequency = "10ms" + max = 10 + } + + module.string "test" { + content = ` + strconv.Quote(module) + ` + arguments { + input = testcomponents.count.inc.count + } + } + + testcomponents.summation "sum" { + input = module.string.test.exports.output + } +` + + ctrl := flow.New(testOptions(t)) + f, err := flow.ParseSource(t.Name(), []byte(config)) + require.NoError(t, err) + require.NotNil(t, f) + + err = ctrl.LoadSource(f, nil) + require.NoError(t, err) + + ctx, cancel := context.WithCancel(context.Background()) + done := make(chan struct{}) + go func() { + ctrl.Run(ctx) + close(done) + }() + defer func() { + cancel() + <-done + }() + + require.Eventually(t, func() bool { + export := getExport[testcomponents.SummationExports](t, ctrl, "", "testcomponents.summation.sum") + return export.LastAdded == 10 + }, 3*time.Second, 10*time.Millisecond) +} + +func testOptions(t *testing.T) flow.Options { + t.Helper() + s, err := logging.New(os.Stderr, logging.DefaultOptions) + require.NoError(t, err) + + clusterService, err := cluster_service.New(cluster_service.Options{ + Log: s, + EnableClustering: false, + NodeName: "test-node", + AdvertiseAddress: "127.0.0.1:80", + }) + require.NoError(t, err) + + otelService := otel_service.New(s) + require.NotNil(t, otelService) + + return flow.Options{ + Logger: s, + DataPath: t.TempDir(), + Reg: nil, + Services: []service.Service{ + http_service.New(http_service.Options{}), + clusterService, + otelService, + }, + } +} + +func getExport[T any](t *testing.T, ctrl *flow.Flow, moduleId string, nodeId string) T { + t.Helper() + info, err := ctrl.GetComponent(component.ID{ + ModuleID: moduleId, + LocalID: nodeId, + }, component.InfoOptions{ + GetHealth: true, + GetArguments: true, + GetExports: true, + GetDebugInfo: true, + }) + require.NoError(t, err) + return info.Exports.(T) +} + +func verifyNoGoroutineLeaks(t *testing.T) { + t.Helper() + goleak.VerifyNone( + t, + goleak.IgnoreTopFunction("go.opencensus.io/stats/view.(*worker).start"), + 
goleak.IgnoreTopFunction("go.opentelemetry.io/otel/sdk/trace.(*batchSpanProcessor).processQueue"), + ) +} diff --git a/pkg/flow/module_test.go b/pkg/flow/module_test.go index 6edeedb0d966..d877bc4ba51c 100644 --- a/pkg/flow/module_test.go +++ b/pkg/flow/module_test.go @@ -7,6 +7,7 @@ import ( "time" "github.com/grafana/agent/component" + "github.com/grafana/agent/pkg/flow/internal/worker" "github.com/grafana/agent/pkg/flow/logging" "github.com/prometheus/client_golang/prometheus" "github.com/stretchr/testify/require" @@ -25,6 +26,13 @@ const argumentConfig = ` default = "default_value" }` +const argumentWithFullOptsConfig = ` + argument "foo" { + comment = "description of foo" + optional = true + default = "default_value" + }` + const exportStringConfig = ` export "username" { value = "bob" @@ -96,19 +104,25 @@ func TestModule(t *testing.T) { exportModuleContent: exportStringConfig + exportDummy, expectedExports: []string{"username", "dummy"}, }, + { + name: "Argument block with comment is parseable", + exportModuleContent: argumentWithFullOptsConfig, + }, } for _, tc := range tt { t.Run(tc.name, func(t *testing.T) { + defer verifyNoGoroutineLeaks(t) mc := newModuleController(testModuleControllerOptions(t)).(*moduleController) + // modules do not clean up their own worker pool as we normally use a shared one from the root controller + defer mc.o.WorkerPool.Stop() tm := &testModule{ content: tc.argumentModuleContent + tc.exportModuleContent, args: tc.args, opts: component.Options{ModuleController: mc}, } - ctx := context.Background() - ctx, cnc := context.WithTimeout(ctx, 1*time.Second) + ctx, cnc := context.WithTimeout(context.Background(), 1*time.Second) defer cnc() err := tm.Run(ctx) if tc.expectedErrorContains == "" { @@ -125,7 +139,9 @@ func TestModule(t *testing.T) { } func TestArgsNotInModules(t *testing.T) { + defer verifyNoGoroutineLeaks(t) f := New(testOptions(t)) + defer cleanUpController(f) fl, err := ParseSource("test", []byte("argument \"arg\"{}")) require.NoError(t, err) err = f.LoadSource(fl, nil) @@ -133,7 +149,9 @@ func TestArgsNotInModules(t *testing.T) { } func TestExportsNotInModules(t *testing.T) { + defer verifyNoGoroutineLeaks(t) f := New(testOptions(t)) + defer cleanUpController(f) fl, err := ParseSource("test", []byte("export \"arg\"{ value = 1}")) require.NoError(t, err) err = f.LoadSource(fl, nil) @@ -141,6 +159,7 @@ func TestExportsNotInModules(t *testing.T) { } func TestExportsWhenNotUsed(t *testing.T) { + defer verifyNoGoroutineLeaks(t) f := New(testOptions(t)) content := " export \\\"username\\\" { value = 1 } \\n export \\\"dummy\\\" { value = 2 } " fullContent := "test.module \"t1\" { content = \"" + content + "\" }" @@ -160,7 +179,10 @@ func TestExportsWhenNotUsed(t *testing.T) { } func TestIDList(t *testing.T) { - nc := newModuleController(testModuleControllerOptions(t)) + defer verifyNoGoroutineLeaks(t) + o := testModuleControllerOptions(t) + defer o.WorkerPool.Stop() + nc := newModuleController(o) require.Len(t, nc.ModuleIDs(), 0) _, err := nc.NewModule("t1", nil) @@ -173,7 +195,10 @@ func TestIDList(t *testing.T) { } func TestIDCollision(t *testing.T) { - nc := newModuleController(testModuleControllerOptions(t)) + defer verifyNoGoroutineLeaks(t) + o := testModuleControllerOptions(t) + defer o.WorkerPool.Stop() + nc := newModuleController(o) m, err := nc.NewModule("t1", nil) require.NoError(t, err) require.NotNil(t, m) @@ -183,7 +208,9 @@ func TestIDCollision(t *testing.T) { } func TestIDRemoval(t *testing.T) { + defer verifyNoGoroutineLeaks(t) opts := 
testModuleControllerOptions(t) + defer opts.WorkerPool.Stop() opts.ID = "test" nc := newModuleController(opts) m, err := nc.NewModule("t1", func(exports map[string]any) {}) @@ -209,6 +236,7 @@ func testModuleControllerOptions(t *testing.T) *moduleControllerOptions { DataPath: t.TempDir(), Reg: prometheus.NewRegistry(), ModuleRegistry: newModuleRegistry(), + WorkerPool: worker.NewShardedWorkerPool(1, 100), } } diff --git a/pkg/flow/source_test.go b/pkg/flow/source_test.go index abbc8cb407a5..fa79c8c1e9e1 100644 --- a/pkg/flow/source_test.go +++ b/pkg/flow/source_test.go @@ -61,6 +61,7 @@ func TestParseSource_Defaults(t *testing.T) { } func TestParseSources_DuplicateComponent(t *testing.T) { + defer verifyNoGoroutineLeaks(t) content := ` logging { format = "json" @@ -87,6 +88,7 @@ func TestParseSources_DuplicateComponent(t *testing.T) { }) require.NoError(t, err) ctrl := New(testOptions(t)) + defer cleanUpController(ctrl) err = ctrl.LoadSource(s, nil) diagErrs, ok := err.(diag.Diagnostics) require.True(t, ok) @@ -94,6 +96,7 @@ func TestParseSources_DuplicateComponent(t *testing.T) { } func TestParseSources_UniqueComponent(t *testing.T) { + defer verifyNoGoroutineLeaks(t) content := ` logging { format = "json" @@ -116,6 +119,7 @@ func TestParseSources_UniqueComponent(t *testing.T) { }) require.NoError(t, err) ctrl := New(testOptions(t)) + defer cleanUpController(ctrl) err = ctrl.LoadSource(s, nil) require.NoError(t, err) } diff --git a/pkg/integrations/cadvisor/cadvisor.go b/pkg/integrations/cadvisor/cadvisor.go index 0b7fd7d09473..b0c854e692de 100644 --- a/pkg/integrations/cadvisor/cadvisor.go +++ b/pkg/integrations/cadvisor/cadvisor.go @@ -25,12 +25,10 @@ import ( // Register container providers "github.com/google/cadvisor/container/containerd" - _ "github.com/google/cadvisor/container/containerd/install" // register containerd container plugin - _ "github.com/google/cadvisor/container/crio/install" // register crio container plugin + "github.com/google/cadvisor/container/crio" "github.com/google/cadvisor/container/docker" - _ "github.com/google/cadvisor/container/docker/install" // register docker container plugin "github.com/google/cadvisor/container/raw" - _ "github.com/google/cadvisor/container/systemd/install" // register systemd container plugin + "github.com/google/cadvisor/container/systemd" ) // Matching the default disabled set from cadvisor - https://github.com/google/cadvisor/blob/3c6e3093c5ca65c57368845ddaea2b4ca6bc0da8/cmd/cadvisor.go#L78-L93 @@ -86,22 +84,22 @@ func New(logger log.Logger, c *Config) (integrations.Integration, error) { // Do gross global configs. This works, so long as there is only one instance of the cAdvisor integration // per host. 
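+	// (Most of the global mutation goes away below: per-plugin Options structs are
+	// built and handed to manager.New instead of setting cadvisor's package-level
+	// flag variables; only the klog logger remains global.)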
-	// klog
 	klog.SetLogger(c.logger)
-
-	// Containerd
-	containerd.ArgContainerdEndpoint = &c.Containerd
-	containerd.ArgContainerdNamespace = &c.ContainerdNamespace
-
-	// Docker
-	docker.ArgDockerEndpoint = &c.Docker
-	docker.ArgDockerTLS = &c.DockerTLS
-	docker.ArgDockerCert = &c.DockerTLSCert
-	docker.ArgDockerKey = &c.DockerTLSKey
-	docker.ArgDockerCA = &c.DockerTLSCA
-
-	// Raw
-	raw.DockerOnly = &c.DockerOnly
+	plugins := map[string]container.Plugin{
+		"containerd": containerd.NewPluginWithOptions(&containerd.Options{
+			ContainerdEndpoint:  c.Containerd,
+			ContainerdNamespace: c.ContainerdNamespace,
+		}),
+		"crio": crio.NewPlugin(),
+		"docker": docker.NewPluginWithOptions(&docker.Options{
+			DockerEndpoint: c.Docker,
+			DockerTLS:      c.DockerTLS,
+			DockerCert:     c.DockerTLSCert,
+			DockerKey:      c.DockerTLSKey,
+			DockerCA:       c.DockerTLSCA,
+		}),
+		"systemd": systemd.NewPlugin(),
+	}
 
 	// Only using in-memory storage, with no backup storage for cadvisor stats
 	memoryStorage := memory.New(c.StorageDuration, []storage.StorageDriver{})
@@ -115,7 +113,10 @@ func New(logger log.Logger, c *Config) (integrations.Integration, error) {
 		return nil, fmt.Errorf("unable to determine included metrics: %w", err)
 	}
 
-	rm, err := manager.New(memoryStorage, sysFs, manager.HousekeepingConfigFlags, includedMetrics, &collectorHTTPClient, c.RawCgroupPrefixAllowlist, c.EnvMetadataAllowlist, c.PerfEventsConfig, time.Duration(c.ResctrlInterval))
+	rawOpts := raw.Options{
+		DockerOnly: c.DockerOnly,
+	}
+	rm, err := manager.New(plugins, memoryStorage, sysFs, manager.HousekeepingConfigFlags, includedMetrics, &collectorHTTPClient, c.RawCgroupPrefixAllowlist, c.EnvMetadataAllowlist, c.PerfEventsConfig, time.Duration(c.ResctrlInterval), rawOpts)
 	if err != nil {
 		return nil, fmt.Errorf("failed to create a manager: %w", err)
 	}
diff --git a/pkg/integrations/cadvisor/common.go b/pkg/integrations/cadvisor/common.go
index ecf40e61aacd..1ff7bc658f1b 100644
--- a/pkg/integrations/cadvisor/common.go
+++ b/pkg/integrations/cadvisor/common.go
@@ -53,7 +53,7 @@ type Config struct {
 	PerfEventsConfig string `yaml:"perf_events_config,omitempty"`
 
 	// ResctrlInterval resctrl mon groups updating interval. Zero value disables updating mon groups.
-	ResctrlInterval int `yaml:"resctrl_interval,omitempty"`
+	ResctrlInterval int64 `yaml:"resctrl_interval,omitempty"`
 
 	// DisableMetrics list of `metrics` to be disabled.
 	DisabledMetrics []string `yaml:"disabled_metrics,omitempty"`
diff --git a/pkg/integrations/github_exporter/github_exporter.go b/pkg/integrations/github_exporter/github_exporter.go
index 07af8f4fd2d1..580375b78c0a 100644
--- a/pkg/integrations/github_exporter/github_exporter.go
+++ b/pkg/integrations/github_exporter/github_exporter.go
@@ -4,13 +4,13 @@ import (
 	"fmt"
 	"net/url"
 
+	gh_config "github.com/githubexporter/github-exporter/config"
+	"github.com/githubexporter/github-exporter/exporter"
 	"github.com/go-kit/log"
 	"github.com/go-kit/log/level"
 	"github.com/grafana/agent/pkg/integrations"
 	integrations_v2 "github.com/grafana/agent/pkg/integrations/v2"
 	"github.com/grafana/agent/pkg/integrations/v2/metricsutils"
-	gh_config "github.com/infinityworks/github-exporter/config"
-	"github.com/infinityworks/github-exporter/exporter"
 	config_util "github.com/prometheus/common/config"
 )
diff --git a/pkg/integrations/node_exporter/config.go b/pkg/integrations/node_exporter/config.go
index 6bb1f8281f01..ff16bf966b3b 100644
--- a/pkg/integrations/node_exporter/config.go
+++ b/pkg/integrations/node_exporter/config.go
@@ -7,12 +7,12 @@ import (
 	"strings"
 	"time"
 
-	"github.com/alecthomas/kingpin/v2"
 	"github.com/go-kit/log"
 	"github.com/grafana/agent/pkg/integrations"
 	integrations_v2 "github.com/grafana/agent/pkg/integrations/v2"
 	"github.com/grafana/agent/pkg/integrations/v2/metricsutils"
 	"github.com/grafana/dskit/flagext"
+	"github.com/prometheus/node_exporter/collector"
 	"github.com/prometheus/procfs"
 )
@@ -265,10 +265,8 @@ func init() {
 	integrations_v2.RegisterLegacy(&Config{}, integrations_v2.TypeSingleton, metricsutils.Shim)
 }
 
-// MapConfigToNodeExporterFlags takes in a node_exporter Config and converts
-// it to the set of flags that node_exporter usually expects when running as a
-// separate binary.
-func MapConfigToNodeExporterFlags(c *Config) (accepted []string, ignored []string) {
+func (c *Config) mapConfigToNodeConfig() *collector.NodeCollectorConfig {
+	validCollectors := make(map[string]bool)
 	collectors := make(map[string]CollectorState, len(Collectors))
 	for k, v := range Collectors {
 		collectors[k] = v
@@ -290,8 +288,12 @@ func MapConfigToNodeExporterFlags(c *Config) (accepted []string, ignored []strin
 				collectors[k] = CollectorStateDisabled
 			}
 		}
+	} else {
+		// This gets the collectors enabled by default, as passed in via register.
+		for k, v := range collector.GetDefaults() {
+			collectors[k] = CollectorState(v)
+		}
 	}
-
 	// Explicitly disable/enable specific collectors
 	for _, c := range c.DisableCollectors {
 		collectors[c] = CollectorStateDisabled
@@ -300,209 +302,232 @@
 		collectors[c] = CollectorStateEnabled
 	}
 
-	DisableUnavailableCollectors(collectors)
+	for k, v := range collectors {
+		validCollectors[k] = bool(v)
+	}
+
+	// This removes any collectors not available on the platform.
+	availableCollectors := collector.GetAvailableCollectors()
+	for name := range validCollectors {
+		var found bool
+		for _, availableName := range availableCollectors {
+			if name != availableName {
+				continue
+			}
+			found = true
+			break
+		}
+		if !found {
+			delete(validCollectors, name)
+		}
+	}
+
+	// blankString is a hack to emulate the behavior of kingpin, where node_exporter checks for blank string against a pointer
+	// without first checking the validity of the pointer.
+	// TODO change node_exporter to check for nil first.
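+	// Sharing the address of a single zero value across many unused option
+	// fields is assumed safe because node_exporter only reads through these
+	// pointers and never writes back to them.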
+	blankString := ""
+	blankBool := false
+	blankInt := 0
-	var flags flags
-	flags.accepted = append(flags.accepted, MapCollectorsToFlags(collectors)...)
+	cfg := &collector.NodeCollectorConfig{}
-	flags.add(
-		"--path.procfs", c.ProcFSPath,
-		"--path.sysfs", c.SysFSPath,
-		"--path.rootfs", c.RootFSPath,
-		"--path.udev.data", c.UdevDataPath,
-	)
+	// It is safe to set all of these configs, since only the collectors that are enabled are used.
-	if collectors[CollectorBCache] {
-		flags.addBools(map[*bool]string{
-			&c.BcachePriorityStats: "collector.bcache.priorityStats",
-		})
+	cfg.Path = collector.PathConfig{
+		ProcPath:     &c.ProcFSPath,
+		SysPath:      &c.SysFSPath,
+		RootfsPath:   &c.RootFSPath,
+		UdevDataPath: &c.UdevDataPath,
 	}
-	if collectors[CollectorCPU] {
-		flags.addBools(map[*bool]string{
-			&c.CPUEnableCPUGuest: "collector.cpu.guest",
-			&c.CPUEnableCPUInfo:  "collector.cpu.info",
-		})
-		flags.add("--collector.cpu.info.flags-include", c.CPUFlagsInclude)
-		flags.add("--collector.cpu.info.bugs-include", c.CPUBugsInclude)
+	cfg.Bcache = collector.BcacheConfig{
+		PriorityStats: &c.BcachePriorityStats,
 	}
-	if collectors[CollectorDiskstats] {
-		if c.DiskStatsDeviceInclude != "" {
-			flags.add("--collector.diskstats.device-include", c.DiskStatsDeviceInclude)
-		} else {
-			flags.add("--collector.diskstats.device-exclude", c.DiskStatsDeviceExclude)
+	cfg.CPU = collector.CPUConfig{
+		EnableCPUGuest: &c.CPUEnableCPUGuest,
+		EnableCPUInfo:  &c.CPUEnableCPUInfo,
+		BugsInclude:    &c.CPUBugsInclude,
+		FlagsInclude:   &c.CPUFlagsInclude,
+	}
+	if c.DiskStatsDeviceInclude != "" {
+		cfg.DiskstatsDeviceFilter = collector.DiskstatsDeviceFilterConfig{
+			DeviceInclude:    &c.DiskStatsDeviceInclude,
+			OldDeviceExclude: &blankString,
+			DeviceExclude:    &blankString,
+			DeviceExcludeSet: false,
+		}
+	} else {
+		cfg.DiskstatsDeviceFilter = collector.DiskstatsDeviceFilterConfig{
+			DeviceExclude:    &c.DiskStatsDeviceExclude,
+			DeviceExcludeSet: true,
+			OldDeviceExclude: &blankString,
+			DeviceInclude:    &blankString,
 		}
 	}
-	if collectors[CollectorEthtool] {
-		flags.add("--collector.ethtool.device-include", c.EthtoolDeviceInclude)
-		flags.add("--collector.ethtool.device-exclude", c.EthtoolDeviceExclude)
-		flags.add("--collector.ethtool.metrics-include", c.EthtoolMetricsInclude)
+	cfg.Ethtool = collector.EthtoolConfig{
+		DeviceInclude:   &c.EthtoolDeviceInclude,
+		DeviceExclude:   &c.EthtoolDeviceExclude,
+		IncludedMetrics: &c.EthtoolMetricsInclude,
 	}
-	if collectors[CollectorFilesystem] {
-		flags.add(
-			"--collector.filesystem.mount-timeout", c.FilesystemMountTimeout.String(),
-			"--collector.filesystem.mount-points-exclude", c.FilesystemMountPointsExclude,
-			"--collector.filesystem.fs-types-exclude", c.FilesystemFSTypesExclude,
-		)
+	cfg.Filesystem = collector.FilesystemConfig{
+		MountPointsExclude:     &c.FilesystemMountPointsExclude,
+		MountPointsExcludeSet:  true,
+		MountTimeout:           &c.FilesystemMountTimeout,
+		FSTypesExclude:         &c.FilesystemFSTypesExclude,
+		FSTypesExcludeSet:      true,
+		OldFSTypesExcluded:     &blankString,
+		OldMountPointsExcluded: &blankString,
+		StatWorkerCount:        &blankInt,
 	}
-	if collectors[CollectorIPVS] {
-		flags.add("--collector.ipvs.backend-labels", strings.Join(c.IPVSBackendLabels, ","))
+	var joinedLabels string
+	if len(c.IPVSBackendLabels) > 0 {
+		joinedLabels = strings.Join(c.IPVSBackendLabels, ",")
+		cfg.IPVS = collector.IPVSConfig{
+			Labels:    &joinedLabels,
+			LabelsSet: true,
+		}
+	} else {
+		cfg.IPVS = collector.IPVSConfig{
+			Labels:    &joinedLabels,
+			LabelsSet: false,
+		}
 	}
-	if collectors[CollectorNetclass] {
-		flags.addBools(map[*bool]string{
-			&c.NetclassIgnoreInvalidSpeedDevice: "collector.netclass.ignore-invalid-speed",
-		})
-
-		flags.add("--collector.netclass.ignored-devices", c.NetclassIgnoredDevices)
+	cfg.NetClass = collector.NetClassConfig{
+		IgnoredDevices: &c.NetclassIgnoredDevices,
+		InvalidSpeed:   &c.NetclassIgnoreInvalidSpeedDevice,
+		Netlink:        &blankBool,
+		RTNLWithStats:  &blankBool,
 	}
-	if collectors[CollectorNetdev] {
-		flags.addBools(map[*bool]string{
-			&c.NetdevAddressInfo: "collector.netdev.address-info",
-		})
-
-		flags.add(
-			"--collector.netdev.device-include", c.NetdevDeviceInclude,
-			"--collector.netdev.device-exclude", c.NetdevDeviceExclude,
-		)
+	cfg.NetDev = collector.NetDevConfig{
+		DeviceInclude:    &c.NetdevDeviceInclude,
+		DeviceExclude:    &c.NetdevDeviceExclude,
+		AddressInfo:      &c.NetdevAddressInfo,
+		OldDeviceInclude: &blankString,
+		OldDeviceExclude: &blankString,
+		Netlink:          &blankBool,
+		DetailedMetrics:  &blankBool,
 	}
-	if collectors[CollectorNetstat] {
-		flags.add("--collector.netstat.fields", c.NetstatFields)
+	cfg.NetStat = collector.NetStatConfig{
+		Fields: &c.NetstatFields,
 	}
-	if collectors[CollectorNTP] {
-		flags.add(
-			"--collector.ntp.server", c.NTPServer,
-			"--collector.ntp.protocol-version", fmt.Sprintf("%d", c.NTPProtocolVersion),
-			"--collector.ntp.ip-ttl", fmt.Sprintf("%d", c.NTPIPTTL),
-			"--collector.ntp.max-distance", c.NTPMaxDistance.String(),
-			"--collector.ntp.local-offset-tolerance", c.NTPLocalOffsetTolerance.String(),
-		)
-
-		flags.addBools(map[*bool]string{
-			&c.NTPServerIsLocal: "collector.ntp.server-is-local",
-		})
+	defaultPort := 123
+	cfg.NTP = collector.NTPConfig{
+		Server:          &c.NTPServer,
+		ServerPort:      &defaultPort,
+		ProtocolVersion: &c.NTPProtocolVersion,
+		IPTTL:           &c.NTPIPTTL,
+		MaxDistance:     &c.NTPMaxDistance,
+		OffsetTolerance: &c.NTPLocalOffsetTolerance,
+		ServerIsLocal:   &c.NTPServerIsLocal,
 	}
-	if collectors[CollectorPerf] {
-		flags.add("--collector.perf.cpus", c.PerfCPUS)
-
-		for _, tp := range c.PerfTracepoint {
-			flags.add("--collector.perf.tracepoint", tp)
-		}
-
-		flags.addBools(map[*bool]string{
-			&c.PerfDisableHardwareProfilers: "collector.perf.disable-hardware-profilers",
-			&c.PerfDisableSoftwareProfilers: "collector.perf.disable-software-profilers",
-			&c.PerfDisableCacheProfilers:    "collector.perf.disable-cache-profilers",
-		})
-
-		for _, hwp := range c.PerfHardwareProfilers {
-			flags.add("--collector.perf.hardware-profilers", hwp)
-		}
-		for _, swp := range c.PerfSoftwareProfilers {
-			flags.add("--collector.perf.software-profilers", swp)
-		}
-		for _, cp := range c.PerfCacheProfilers {
-			flags.add("--collector.perf.cache-profilers", cp)
-		}
+	cfg.Perf = collector.PerfConfig{
+		CPUs:           &c.PerfCPUS,
+		Tracepoint:     flagSliceToStringSlice(c.PerfTracepoint),
+		NoHwProfiler:   &c.PerfDisableHardwareProfilers,
+		HwProfiler:     flagSliceToStringSlice(c.PerfHardwareProfilers),
+		NoSwProfiler:   &c.PerfDisableSoftwareProfilers,
+		SwProfiler:     flagSliceToStringSlice(c.PerfSoftwareProfilers),
+		NoCaProfiler:   &c.PerfDisableCacheProfilers,
+		CaProfilerFlag: flagSliceToStringSlice(c.PerfCacheProfilers),
 	}
-	if collectors[CollectorPowersuppply] {
-		flags.add("--collector.powersupply.ignored-supplies", c.PowersupplyIgnoredSupplies)
+	cfg.PowerSupplyClass = collector.PowerSupplyClassConfig{
+		IgnoredPowerSupplies: &c.PowersupplyIgnoredSupplies,
 	}
-	if collectors[CollectorRunit] {
-		flags.add("--collector.runit.servicedir", c.RunitServiceDir)
+	cfg.Runit = collector.RunitConfig{
+		ServiceDir: &c.RunitServiceDir,
 	}
-	if collectors[CollectorSupervisord] {
-		flags.add("--collector.supervisord.url", c.SupervisordURL)
+	cfg.Supervisord = collector.SupervisordConfig{
+		URL: &c.SupervisordURL,
 	}
-	if collectors[CollectorSysctl] {
-		for _, numValue := range c.SysctlInclude {
-			flags.add("--collector.sysctl.include", numValue)
-		}
+	cfg.Sysctl = collector.SysctlConfig{
+		Include:     flagSliceToStringSlice(c.SysctlInclude),
+		IncludeInfo: flagSliceToStringSlice(c.SysctlIncludeInfo),
+	}
-		for _, stringValue := range c.SysctlIncludeInfo {
-			flags.add("--collector.sysctl.include-info", stringValue)
-		}
+	cfg.Systemd = collector.SystemdConfig{
+		UnitInclude:            &c.SystemdUnitInclude,
+		UnitIncludeSet:         true,
+		UnitExclude:            &c.SystemdUnitExclude,
+		UnitExcludeSet:         true,
+		EnableTaskMetrics:      &c.SystemdEnableTaskMetrics,
+		EnableRestartsMetrics:  &c.SystemdEnableRestartsMetrics,
+		EnableStartTimeMetrics: &c.SystemdEnableStartTimeMetrics,
+		OldUnitExclude:         &blankString,
+		OldUnitInclude:         &blankString,
+		Private:                &blankBool,
 	}
-	if collectors[CollectorSystemd] {
-		flags.add(
-			"--collector.systemd.unit-include", c.SystemdUnitInclude,
-			"--collector.systemd.unit-exclude", c.SystemdUnitExclude,
-		)
+	cfg.Tapestats = collector.TapestatsConfig{
+		IgnoredDevices: &c.TapestatsIgnoredDevices,
+	}
-		flags.addBools(map[*bool]string{
-			&c.SystemdEnableTaskMetrics:      "collector.systemd.enable-task-metrics",
-			&c.SystemdEnableRestartsMetrics:  "collector.systemd.enable-restarts-metrics",
-			&c.SystemdEnableStartTimeMetrics: "collector.systemd.enable-start-time-metrics",
-		})
+	cfg.TextFile = collector.TextFileConfig{
+		Directory: &c.TextfileDirectory,
 	}
-	if collectors[CollectorTapestats] {
-		flags.add("--collector.tapestats.ignored-devices", c.TapestatsIgnoredDevices)
+	cfg.VmStat = collector.VmStatConfig{
+		Fields: &c.VMStatFields,
 	}
-	if collectors[CollectorTextfile] {
-		flags.add("--collector.textfile.directory", c.TextfileDirectory)
+	cfg.Arp = collector.ArpConfig{
+		DeviceInclude: &blankString,
+		DeviceExclude: &blankString,
+		Netlink:       &blankBool,
 	}
-	if collectors[CollectorVMStat] {
-		flags.add("--collector.vmstat.fields", c.VMStatFields)
+	cfg.Stat = collector.StatConfig{
+		Softirq: &blankBool,
 	}
-	return flags.accepted, flags.ignored
-}
+	cfg.HwMon = collector.HwMonConfig{
+		ChipInclude: &blankString,
+		ChipExclude: &blankString,
+	}
-type flags struct {
-	accepted []string
-	ignored  []string
-}
+	cfg.Qdisc = collector.QdiscConfig{
+		Fixtures:         &blankString,
+		DeviceInclude:    &blankString,
+		OldDeviceInclude: &blankString,
+		DeviceExclude:    &blankString,
+		OldDeviceExclude: &blankString,
+	}
-// add pushes new flags as key value pairs. If the flag isn't registered with kingpin,
-// it will be ignored.
-func (f *flags) add(kvp ...string) {
-	if (len(kvp) % 2) != 0 {
-		panic("missing value for added flag")
 	}
-	for i := 0; i < len(kvp); i += 2 {
-		key := kvp[i+0]
-		value := kvp[i+1]
+	cfg.Wifi = collector.WifiConfig{
+		Fixtures: &blankString,
 	}
-		rawFlag := strings.TrimPrefix(key, "--")
-		if kingpin.CommandLine.GetFlag(rawFlag) == nil {
-			f.ignored = append(f.ignored, rawFlag)
-			continue
-		}
+	cfg.Collectors = validCollectors
-		f.accepted = append(f.accepted, key, value)
-	}
+	return cfg
 }
-func (f *flags) addBools(m map[*bool]string) {
-	for setting, key := range m {
-		// The flag might not exist on this platform, so skip it if it's not
-		// defined.
-		if kingpin.CommandLine.GetFlag(key) == nil {
-			f.ignored = append(f.ignored, key)
-			continue
-		}
-
-		if *setting {
-			f.accepted = append(f.accepted, "--"+key)
-		} else {
-			f.accepted = append(f.accepted, "--no-"+key)
-		}
-	}
+func flagSliceToStringSlice(fl flagext.StringSlice) *[]string {
+	sl := make([]string, len(fl))
+	copy(sl, fl)
+	return &sl
 }
diff --git a/pkg/integrations/node_exporter/node_exporter.go b/pkg/integrations/node_exporter/node_exporter.go
index d4f261b780eb..6e9e8dd3ce2f 100644
--- a/pkg/integrations/node_exporter/node_exporter.go
+++ b/pkg/integrations/node_exporter/node_exporter.go
@@ -7,9 +7,7 @@ import (
 	"fmt"
 	"net/http"
 	"sort"
-	"strings"
 
-	"github.com/alecthomas/kingpin/v2"
 	"github.com/go-kit/log"
 	"github.com/go-kit/log/level"
 	"github.com/grafana/agent/pkg/build"
@@ -31,23 +29,8 @@ type Integration struct {
 
 // New creates a new node_exporter integration.
 func New(log log.Logger, c *Config) (*Integration, error) {
-	// NOTE(rfratto): this works as long as node_exporter is the only thing using
-	// kingpin across the codebase. node_exporter may need a PR eventually to pass
-	// in a custom kingpin application or expose methods to explicitly enable/disable
-	// collectors that we can use instead of this command line hack.
-	flags, _ := MapConfigToNodeExporterFlags(c)
-	level.Debug(log).Log("msg", "initializing node_exporter with flags converted from agent config", "flags", strings.Join(flags, " "))
-
-	for _, warn := range c.UnmarshalWarnings {
-		level.Warn(log).Log("msg", warn)
-	}
-
-	_, err := kingpin.CommandLine.Parse(flags)
-	if err != nil {
-		return nil, fmt.Errorf("failed to parse flags for generating node_exporter configuration: %w", err)
-	}
-
-	nc, err := collector.NewNodeCollector(log)
+	cfg := c.mapConfigToNodeConfig()
+	nc, err := collector.NewNodeCollector(cfg, log)
 	if err != nil {
 		return nil, fmt.Errorf("failed to create node_exporter: %w", err)
 	}
diff --git a/pkg/integrations/node_exporter/node_exporter_test.go b/pkg/integrations/node_exporter/node_exporter_test.go
index b6b1bdb056bb..3fe58233cf8d 100644
--- a/pkg/integrations/node_exporter/node_exporter_test.go
+++ b/pkg/integrations/node_exporter/node_exporter_test.go
@@ -6,16 +6,11 @@ import (
 	"io"
 	"net/http"
 	"net/http/httptest"
-	"runtime"
 	"testing"
 
-	"github.com/alecthomas/kingpin/v2"
 	"github.com/go-kit/log"
-	"github.com/go-kit/log/level"
 	"github.com/gorilla/mux"
-	"github.com/grafana/agent/pkg/util"
 	"github.com/prometheus/prometheus/model/textparse"
-	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
@@ -69,62 +64,3 @@ func TestNodeExporter(t *testing.T) {
 		require.NoError(t, err)
 	}
 }
-
-// TestFTestNodeExporter_IgnoredFlags ensures that flags don't get ignored for
-// misspellings.
-func TestNodeExporter_IgnoredFlags(t *testing.T) {
-	l := util.TestLogger(t)
-	cfg := DefaultConfig
-
-	// Enable all collectors except perf
-	cfg.SetCollectors = make([]string, 0, len(Collectors))
-	for c := range Collectors {
-		cfg.SetCollectors = append(cfg.SetCollectors, c)
-	}
-	cfg.DisableCollectors = []string{CollectorPerf}
-
-	_, ignored := MapConfigToNodeExporterFlags(&cfg)
-	var expect []string
-
-	switch runtime.GOOS {
-	case "darwin":
-		expect = []string{
-			"collector.cpu.info",
-			"collector.cpu.guest",
-			"collector.cpu.info.flags-include",
-			"collector.cpu.info.bugs-include",
-			"collector.filesystem.mount-timeout",
-		}
-	}
-
-	if !assert.ElementsMatch(t, expect, ignored) {
-		level.Debug(l).Log("msg", "printing available flags")
-		for _, flag := range kingpin.CommandLine.Model().Flags {
-			level.Debug(l).Log("flag", flag.Name, "hidden", flag.Hidden)
-		}
-	}
-}
-
-// TestFlags makes sure that boolean flags and some known non-boolean flags
-// work as expected
-func TestFlags(t *testing.T) {
-	var f flags
-	f.add("--path.rootfs", "/")
-	require.Equal(t, []string{"--path.rootfs", "/"}, f.accepted)
-
-	// Set up booleans to use as pointers
-	var (
-		truth = true
-
-		// You know, the opposite of truth?
-		falth = false
-	)
-
-	f = flags{}
-	f.addBools(map[*bool]string{&truth: "collector.textfile"})
-	require.Equal(t, []string{"--collector.textfile"}, f.accepted)
-
-	f = flags{}
-	f.addBools(map[*bool]string{&falth: "collector.textfile"})
-	require.Equal(t, []string{"--no-collector.textfile"}, f.accepted)
-}
diff --git a/pkg/integrations/redis_exporter/redis_exporter.go b/pkg/integrations/redis_exporter/redis_exporter.go
index b836a2169ee2..a1baacfd4590 100644
--- a/pkg/integrations/redis_exporter/redis_exporter.go
+++ b/pkg/integrations/redis_exporter/redis_exporter.go
@@ -27,6 +27,7 @@ var DefaultConfig = Config{
 	SetClientName:           true,
 	CheckKeyGroupsBatchSize: 10000,
 	MaxDistinctKeyGroups:    100,
+	ExportKeyValues:         true,
 }
 
 // Config controls the redis_exporter integration.
@@ -51,6 +52,7 @@ type Config struct {
 	CheckSingleKeys    string `yaml:"check_single_keys,omitempty"`
 	CheckStreams       string `yaml:"check_streams,omitempty"`
 	CheckSingleStreams string `yaml:"check_single_streams,omitempty"`
+	ExportKeyValues    bool   `yaml:"export_key_values,omitempty"`
 	CountKeys          string `yaml:"count_keys,omitempty"`
 	ScriptPath         string `yaml:"script_path,omitempty"`
 	ConnectionTimeout  time.Duration `yaml:"connection_timeout,omitempty"`
@@ -73,29 +75,30 @@
 // we marshal the yaml into Config and then create the re.Options from that.
 func (c Config) GetExporterOptions() re.Options {
 	return re.Options{
-		User:                  c.RedisUser,
-		Password:              string(c.RedisPassword),
-		Namespace:             c.Namespace,
-		ConfigCommandName:     c.ConfigCommand,
-		CheckKeys:             c.CheckKeys,
-		CheckKeysBatchSize:    c.CheckKeyGroupsBatchSize,
-		CheckKeyGroups:        c.CheckKeyGroups,
-		CheckSingleKeys:       c.CheckSingleKeys,
-		CheckStreams:          c.CheckStreams,
-		CheckSingleStreams:    c.CheckSingleStreams,
-		CountKeys:             c.CountKeys,
-		InclSystemMetrics:     c.InclSystemMetrics,
-		InclConfigMetrics:     false,
-		RedactConfigMetrics:   true,
-		SkipTLSVerification:   c.SkipTLSVerification,
-		SetClientName:         c.SetClientName,
-		IsTile38:              c.IsTile38,
-		IsCluster:             c.IsCluster,
-		ExportClientList:      c.ExportClientList,
-		ExportClientsInclPort: c.ExportClientPort,
-		ConnectionTimeouts:    c.ConnectionTimeout,
-		RedisMetricsOnly:      c.RedisMetricsOnly,
-		PingOnConnect:         c.PingOnConnect,
+		User:                      c.RedisUser,
+		Password:                  string(c.RedisPassword),
+		Namespace:                 c.Namespace,
+		ConfigCommandName:         c.ConfigCommand,
+		CheckKeys:                 c.CheckKeys,
+		CheckKeysBatchSize:        c.CheckKeyGroupsBatchSize,
+		CheckKeyGroups:            c.CheckKeyGroups,
+		CheckSingleKeys:           c.CheckSingleKeys,
+		CheckStreams:              c.CheckStreams,
+		CheckSingleStreams:        c.CheckSingleStreams,
+		DisableExportingKeyValues: !c.ExportKeyValues,
+		CountKeys:                 c.CountKeys,
+		InclSystemMetrics:         c.InclSystemMetrics,
+		InclConfigMetrics:         false,
+		RedactConfigMetrics:       true,
+		SkipTLSVerification:       c.SkipTLSVerification,
+		SetClientName:             c.SetClientName,
+		IsTile38:                  c.IsTile38,
+		IsCluster:                 c.IsCluster,
+		ExportClientList:          c.ExportClientList,
+		ExportClientsInclPort:     c.ExportClientPort,
+		ConnectionTimeouts:        c.ConnectionTimeout,
+		RedisMetricsOnly:          c.RedisMetricsOnly,
+		PingOnConnect:             c.PingOnConnect,
 	}
 }
diff --git a/pkg/integrations/redis_exporter/redis_exporter_test.go b/pkg/integrations/redis_exporter/redis_exporter_test.go
index c14ed6e57c92..196f9bcfc255 100644
--- a/pkg/integrations/redis_exporter/redis_exporter_test.go
+++ b/pkg/integrations/redis_exporter/redis_exporter_test.go
@@ -9,6 +9,7 @@ import (
 	"testing"
 
 	"github.com/grafana/agent/pkg/config"
+	"gopkg.in/yaml.v2"
 
 	"github.com/go-kit/log"
 	"github.com/gorilla/mux"
@@ -211,6 +212,17 @@ integrations:
 	config.CheckSecret(t, stringCfg, "secret_password")
 }
 
+func TestConfig_DefaultExportKeyValues(t *testing.T) {
+	stringCfg := `
+enabled: true
+redis_addr: localhost:6379`
+
+	var config Config
+	err := yaml.Unmarshal([]byte(stringCfg), &config)
+	require.NoError(t, err)
+	require.True(t, config.ExportKeyValues)
+}
+
 func matchMetricNames(names map[string]bool, p textparse.Parser) {
 	for name := range names {
 		metricName, _ := p.Help()
diff --git a/pkg/integrations/snmp_exporter/snmp.go b/pkg/integrations/snmp_exporter/snmp.go
index c4d54832b1c7..f746d7ae726b 100644
--- a/pkg/integrations/snmp_exporter/snmp.go
+++ b/pkg/integrations/snmp_exporter/snmp.go
@@ -13,6 +13,14 @@ import (
 	snmp_config "github.com/prometheus/snmp_exporter/config"
 )
 
+const (
+	namespace = "snmp"
+	// This is the default value for snmp.module-concurrency in snmp_exporter.
+	// For now we set it to 1 as we don't support multi-module handling.
+	// More info: https://github.com/prometheus/snmp_exporter#multi-module-handling
+	concurrency = 1
+)
+
 type snmpHandler struct {
 	cfg     *Config
 	snmpCfg *snmp_config.Config
@@ -79,14 +87,13 @@ func Handler(w http.ResponseWriter, r *http.Request, logger log.Logger, snmpCfg
 		http.Error(w, "'walk_params' parameter must only be specified once", http.StatusBadRequest)
 		return
 	}
-
 	if walkParams != "" {
 		zeroRetries := 0
 		if wp, ok := wParams[walkParams]; ok {
 			if wp.MaxRepetitions != 0 {
 				module.WalkParams.MaxRepetitions = wp.MaxRepetitions
 			}
-			if wp.Retries != &zeroRetries {
+			if wp.Retries != nil && wp.Retries != &zeroRetries {
 				module.WalkParams.Retries = wp.Retries
 			}
 			if wp.Timeout != 0 {
@@ -100,11 +107,13 @@
 	} else {
 		logger = log.With(logger, "module", moduleName, "target", target)
 	}
+	var nmodules []*collector.NamedModule
+	nmodules = append(nmodules, collector.NewNamedModule(moduleName, module))
 
 	level.Debug(logger).Log("msg", "Starting scrape")
 	start := time.Now()
 	registry := prometheus.NewRegistry()
-	c := collector.New(r.Context(), target, auth, module, logger, registry)
+	c := collector.New(r.Context(), target, authName, auth, nmodules, logger, NewSNMPMetrics(registry), concurrency)
 	registry.MustRegister(c)
 	// Delegate http serving to Prometheus client library, which will call collector.Collect.
 	h := promhttp.HandlerFor(registry, promhttp.HandlerOpts{})
diff --git a/pkg/integrations/snmp_exporter/snmp_exporter.go b/pkg/integrations/snmp_exporter/snmp_exporter.go
index 699b86eb5628..dce69dd87855 100644
--- a/pkg/integrations/snmp_exporter/snmp_exporter.go
+++ b/pkg/integrations/snmp_exporter/snmp_exporter.go
@@ -11,6 +11,9 @@ import (
 	"github.com/grafana/agent/pkg/integrations"
 	"github.com/grafana/agent/pkg/integrations/config"
 	snmp_common "github.com/grafana/agent/pkg/integrations/snmp_exporter/common"
+	"github.com/prometheus/client_golang/prometheus"
+	"github.com/prometheus/client_golang/prometheus/promauto"
+	"github.com/prometheus/snmp_exporter/collector"
 	snmp_config "github.com/prometheus/snmp_exporter/config"
 )
@@ -97,7 +100,7 @@ func New(log log.Logger, c *Config) (integrations.Integration, error) {
 func LoadSNMPConfig(snmpConfigFile string, snmpCfg *snmp_config.Config) (*snmp_config.Config, error) {
 	var err error
 	if snmpConfigFile != "" {
-		snmpCfg, err = snmp_config.LoadFile(snmpConfigFile)
+		snmpCfg, err = snmp_config.LoadFile([]string{snmpConfigFile})
 		if err != nil {
 			return nil, fmt.Errorf("failed to load snmp config from file %v: %w", snmpConfigFile, err)
 		}
@@ -112,6 +115,49 @@
 	return snmpCfg, nil
 }
 
+func NewSNMPMetrics(reg prometheus.Registerer) collector.Metrics {
+	buckets := prometheus.ExponentialBuckets(0.0001, 2, 15)
+	return collector.Metrics{
+		SNMPCollectionDuration: promauto.With(reg).NewHistogramVec(
+			prometheus.HistogramOpts{
+				Namespace: namespace,
+				Name:      "collection_duration_seconds",
+				Help:      "Duration of collections by the SNMP exporter",
+			},
+			[]string{"module"},
+		),
+		SNMPUnexpectedPduType: promauto.With(reg).NewCounter(
+			prometheus.CounterOpts{
+				Namespace: namespace,
+				Name:      "unexpected_pdu_type_total",
+				Help:      "Unexpected Go types in a PDU.",
+			},
+		),
+		SNMPDuration: promauto.With(reg).NewHistogram(
+			prometheus.HistogramOpts{
+				Namespace: namespace,
+				Name:      "packet_duration_seconds",
+				Help:      "A histogram of latencies for SNMP packets.",
+				Buckets:   buckets,
+			},
+		),
+		SNMPPackets: promauto.With(reg).NewCounter(
+			prometheus.CounterOpts{
+				Namespace: namespace,
+				Name:      "packets_total",
+				Help:      "Number of SNMP packets sent, including retries.",
+			},
+		),
+		SNMPRetries: promauto.With(reg).NewCounter(
+			prometheus.CounterOpts{
+				Namespace: namespace,
+				Name:      "packet_retries_total",
+				Help:      "Number of SNMP packet retries.",
+			},
+		),
+	}
+}
+
 // Integration is the SNMP integration. The integration scrapes metrics
 // from the host Linux-based system.
 type Integration struct {
diff --git a/pkg/metrics/http.go b/pkg/metrics/http.go
index 434996428609..6f7f673066ad 100644
--- a/pkg/metrics/http.go
+++ b/pkg/metrics/http.go
@@ -151,7 +151,7 @@ func (a *Agent) PushMetricsHandler(w http.ResponseWriter, r *http.Request) {
 		return
 	}
 
-	handler := remote.NewWriteHandler(a.logger, managedInstance)
+	handler := remote.NewWriteHandler(a.logger, a.reg, managedInstance)
 	handler.ServeHTTP(w, r)
 }
diff --git a/pkg/metrics/wal/wal.go b/pkg/metrics/wal/wal.go
index 724b08e1cad3..13c0b441d437 100644
--- a/pkg/metrics/wal/wal.go
+++ b/pkg/metrics/wal/wal.go
@@ -145,7 +145,7 @@ type Storage struct {
 
 // NewStorage makes a new Storage.
 func NewStorage(logger log.Logger, registerer prometheus.Registerer, path string) (*Storage, error) {
-	w, err := wlog.NewSize(logger, registerer, SubDirectory(path), wlog.DefaultSegmentSize, true)
+	w, err := wlog.NewSize(logger, registerer, SubDirectory(path), wlog.DefaultSegmentSize, wlog.CompressionSnappy)
 	if err != nil {
 		return nil, err
 	}
diff --git a/pkg/operator/defaults.go b/pkg/operator/defaults.go
index 911597238284..b67671a3ad77 100644
--- a/pkg/operator/defaults.go
+++ b/pkg/operator/defaults.go
@@ -2,7 +2,7 @@ package operator
 
 // Supported versions of the Grafana Agent.
 var (
-	DefaultAgentVersion   = "v0.36.2"
+	DefaultAgentVersion   = "v0.37.0-rc.0"
 	DefaultAgentBaseImage = "grafana/agent"
 	DefaultAgentImage     = DefaultAgentBaseImage + ":" + DefaultAgentVersion
 )
diff --git a/production/kubernetes/agent-bare.yaml b/production/kubernetes/agent-bare.yaml
index 3e2837faacf4..44f122059d8f 100644
--- a/production/kubernetes/agent-bare.yaml
+++ b/production/kubernetes/agent-bare.yaml
@@ -83,7 +83,7 @@ spec:
           valueFrom:
             fieldRef:
               fieldPath: spec.nodeName
-        image: grafana/agent:v0.36.2
+        image: grafana/agent:v0.37.0-rc.0
         imagePullPolicy: IfNotPresent
         name: grafana-agent
         ports:
diff --git a/production/kubernetes/agent-loki.yaml b/production/kubernetes/agent-loki.yaml
index cb75ca2ba2f0..291dba300eb2 100644
--- a/production/kubernetes/agent-loki.yaml
+++ b/production/kubernetes/agent-loki.yaml
@@ -65,7 +65,7 @@ spec:
           valueFrom:
             fieldRef:
               fieldPath: spec.nodeName
-        image: grafana/agent:v0.36.2
+        image: grafana/agent:v0.37.0-rc.0
         imagePullPolicy: IfNotPresent
         name: grafana-agent-logs
         ports:
diff --git a/production/kubernetes/agent-traces.yaml b/production/kubernetes/agent-traces.yaml
index 00c4341ca6e7..2bfb103cef27 100644
--- a/production/kubernetes/agent-traces.yaml
+++ b/production/kubernetes/agent-traces.yaml
@@ -114,7 +114,7 @@ spec:
           valueFrom:
             fieldRef:
               fieldPath: spec.nodeName
-        image: grafana/agent:v0.36.2
+        image: grafana/agent:v0.37.0-rc.0
         imagePullPolicy: IfNotPresent
         name: grafana-agent-traces
         ports:
diff --git a/production/kubernetes/build/lib/version.libsonnet b/production/kubernetes/build/lib/version.libsonnet
index e348a90ef97a..29b779e4f518 100644
--- a/production/kubernetes/build/lib/version.libsonnet
+++ b/production/kubernetes/build/lib/version.libsonnet
@@ -1 +1 @@
-'grafana/agent:v0.36.2'
+'grafana/agent:v0.37.0-rc.0'
diff --git a/production/kubernetes/build/templates/operator/main.jsonnet b/production/kubernetes/build/templates/operator/main.jsonnet
index 4d345b1b3a9a..06888ded5fa0 100644
--- a/production/kubernetes/build/templates/operator/main.jsonnet
+++ b/production/kubernetes/build/templates/operator/main.jsonnet
@@ -23,8 +23,8 @@ local ksm = import 'kube-state-metrics/kube-state-metrics.libsonnet';
   local this = self,
 
   _images:: {
-    agent: 'grafana/agent:v0.36.2',
-    agent_operator: 'grafana/agent-operator:v0.36.2',
+    agent: 'grafana/agent:v0.37.0-rc.0',
+    agent_operator: 'grafana/agent-operator:v0.37.0-rc.0',
     ksm: 'registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.5.0',
   },
diff --git a/production/kubernetes/install-bare.sh b/production/kubernetes/install-bare.sh
index f17f581101c3..0708d04272d9 100644
--- a/production/kubernetes/install-bare.sh
+++ b/production/kubernetes/install-bare.sh
@@ -25,7 +25,7 @@ check_installed() {
 check_installed curl
 check_installed envsubst
 
-MANIFEST_BRANCH=v0.36.2
+MANIFEST_BRANCH=v0.37.0-rc.0
 MANIFEST_URL=${MANIFEST_URL:-https://raw.githubusercontent.com/grafana/agent/${MANIFEST_BRANCH}/production/kubernetes/agent-bare.yaml}
 NAMESPACE=${NAMESPACE:-default}
diff --git a/production/operator/templates/agent-operator.yaml b/production/operator/templates/agent-operator.yaml
index 694694c2789d..fe58f901a198 100644
--- a/production/operator/templates/agent-operator.yaml
+++ b/production/operator/templates/agent-operator.yaml
@@ -372,7 +372,7 @@ spec:
       containers:
       - args:
         - --kubelet-service=default/kubelet
-        image: grafana/agent-operator:v0.36.2
+        image: grafana/agent-operator:v0.37.0-rc.0
        imagePullPolicy: IfNotPresent
         name: grafana-agent-operator
       serviceAccount: grafana-agent-operator
@@ -436,7 +436,7 @@ metadata:
   name: grafana-agent
   namespace: ${NAMESPACE}
 spec:
-  image: grafana/agent:v0.36.2
+  image: grafana/agent:v0.37.0-rc.0
   integrations:
     selector:
       matchLabels:
diff --git a/production/tanka/grafana-agent/v1/main.libsonnet b/production/tanka/grafana-agent/v1/main.libsonnet
index de231755a17c..942c92a75763 100644
--- a/production/tanka/grafana-agent/v1/main.libsonnet
+++ b/production/tanka/grafana-agent/v1/main.libsonnet
@@ -15,8 +15,8 @@ local service = k.core.v1.service;
 (import './lib/traces.libsonnet') +
 {
   _images:: {
-    agent: 'grafana/agent:v0.36.2',
-    agentctl: 'grafana/agentctl:v0.36.2',
+    agent: 'grafana/agent:v0.37.0-rc.0',
+    agentctl: 'grafana/agentctl:v0.37.0-rc.0',
   },
 
   // new creates a new DaemonSet deployment of the grafana-agent. By default,
diff --git a/production/tanka/grafana-agent/v2/internal/base.libsonnet b/production/tanka/grafana-agent/v2/internal/base.libsonnet
index 9ee6015812de..9f59b14ee6db 100644
--- a/production/tanka/grafana-agent/v2/internal/base.libsonnet
+++ b/production/tanka/grafana-agent/v2/internal/base.libsonnet
@@ -11,8 +11,8 @@ function(name='grafana-agent', namespace='') {
   local this = self,
 
   _images:: {
-    agent: 'grafana/agent:v0.36.2',
-    agentctl: 'grafana/agentctl:v0.36.2',
+    agent: 'grafana/agent:v0.37.0-rc.0',
+    agentctl: 'grafana/agentctl:v0.37.0-rc.0',
   },
   _config:: {
     name: name,
diff --git a/production/tanka/grafana-agent/v2/internal/syncer.libsonnet b/production/tanka/grafana-agent/v2/internal/syncer.libsonnet
index 56b243e440dd..08fd9d874a06 100644
--- a/production/tanka/grafana-agent/v2/internal/syncer.libsonnet
+++ b/production/tanka/grafana-agent/v2/internal/syncer.libsonnet
@@ -14,7 +14,7 @@ function(
 ) {
   local _config = {
     api: error 'api must be set',
-    image: 'grafana/agentctl:v0.36.2',
+    image: 'grafana/agentctl:v0.37.0-rc.0',
     schedule: '*/5 * * * *',
     configs: [],
   } + config,