diff --git a/CHANGELOG.md b/CHANGELOG.md
index a12ef2ad3048..2414b662c539 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,12 +10,11 @@ internal API changes are not present.
Main (unreleased)
-----------------
-### Security fixes
+### Breaking changes
-- Fixes following vulnerabilities (@hainenber)
- - [GO-2023-2409](https://github.com/advisories/GHSA-mhpq-9638-x6pw)
- - [GO-2023-2412](https://github.com/advisories/GHSA-7ww5-4wqc-m92c)
- - [CVE-2023-49568](https://github.com/advisories/GHSA-mw99-9chc-xw7r)
+- Prohibit the configuration of services within modules. (@wildum)
+
+- For `otelcol.exporter` components, change the default value of `disable_high_cardinality_metrics` to `true`. (@ptodev)
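+
+  Users who depend on the previous behavior can set the value back explicitly. A
+  hedged sketch in River (the component label and endpoint are illustrative):
+
+  ```river
+  otelcol.exporter.otlp "default" {
+    client {
+      endpoint = "tempo:4317"
+    }
+
+    // Restore the pre-change default.
+    debug_metrics {
+      disable_high_cardinality_metrics = false
+    }
+  }
+  ```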
### Features
@@ -23,8 +22,12 @@ Main (unreleased)
- A new `pyroscope.java` component for profiling Java processes using async-profiler. (@korniltsev)
+- A new `otelcol.processor.resourcedetection` component which inserts resource attributes
+  into OTLP telemetry based on the host on which Grafana Agent is running. (@ptodev)
+
### Enhancements
+- Include line numbers in profiles produced by the `pyroscope.java` component. (@korniltsev)
- Add an option to the windows static mode installer for expanding environment vars in the yaml config. (@erikbaranowski)
- Add authentication support to `loki.source.awsfirehose` (@sberz)
@@ -33,6 +36,21 @@ Main (unreleased)
- Expose `physical_disk` collector from `windows_exporter` v0.24.0 to
Flow configuration. (@hainenber)
+- Renamed Grafana Agent Mixin's "prometheus.remote_write" dashboard to
+ "Prometheus Components" and added charts for `prometheus.scrape` success rate
+ and duration metrics. (@thampiotr)
+
+- Removed `ClusterLamportClockDrift` and `ClusterLamportClockStuck` alerts from
+ Grafana Agent Mixin to focus on alerting on symptoms. (@thampiotr)
+
+- Increased clustering alert periods to 10 minutes to improve the
+ signal-to-noise ratio in Grafana Agent Mixin. (@thampiotr)
+
+- `mimir.rules.kubernetes` has a new `prometheus_http_prefix` argument to configure
+  the path prefix used when connecting to Mimir's Prometheus-compatible API. (@hainenber)
+
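+  A minimal sketch of the new argument in River (the address and prefix values
+  are illustrative; the default prefix is `/prometheus`):
+
+  ```river
+  mimir.rules.kubernetes "default" {
+    address = "http://mimir:8080"
+
+    // Override when Mimir is served behind a non-default path prefix.
+    prometheus_http_prefix = "/mimir/prometheus"
+  }
+  ```
+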
+- The `service_name` label is inferred from discovery meta labels in `pyroscope.java`. (@korniltsev)
+
### Bugfixes
- Fix an issue in `remote.s3` where the exported content of an object would be an empty string if `remote.s3` failed to fully retrieve
@@ -43,6 +61,14 @@ Main (unreleased)
- Fix a duplicate metrics registration panic when sending metrics to an static
mode metric instance's write handler. (@tpaschalis)
+- Fix issue causing duplicate logs when a docker target is restarted. (@captncraig)
+
+- Fix an issue where blocks having the same type and the same label across
+ modules could result in missed updates. (@thampiotr)
+
+- Fix an issue with static integrations-next marshaling where non-singletons
+  would cause `/-/config` to fail to marshal. (@erikbaranowski)
+
### Other changes
- Removed support for Windows 2012 in line with Microsoft end of life. (@mattdurham)
@@ -53,6 +79,32 @@ Main (unreleased)
- Use Go 1.21.6 for builds. (@hainenber)
+v0.39.2 (2024-01-31)
+--------------------
+
+### Bugfixes
+
+- Fix an error introduced in v0.39.0 that prevented remote write to Amazon Managed Prometheus. (@captncraig)
+
+- The Static-to-Flow converter now returns an error when `scrape_integration` is set
+  to `true` but no `remote_write` is defined. (@erikbaranowski)
+
+
+v0.39.1 (2024-01-19)
+--------------------
+
+### Security fixes
+
+- Fix the following vulnerabilities (@hainenber)
+ - [GO-2023-2409](https://github.com/advisories/GHSA-mhpq-9638-x6pw)
+ - [GO-2023-2412](https://github.com/advisories/GHSA-7ww5-4wqc-m92c)
+ - [CVE-2023-49568](https://github.com/advisories/GHSA-mw99-9chc-xw7r)
+
+### Bugfixes
+
+- Fix an issue where the Windows Agent Flow installer would hang and then crash. (@mattdurham)
+
+
v0.39.0 (2024-01-09)
--------------------
diff --git a/cmd/grafana-agent-operator/DEVELOPERS.md b/cmd/grafana-agent-operator/DEVELOPERS.md
index 9c2453e1f9f9..58f7be9ae8d5 100644
--- a/cmd/grafana-agent-operator/DEVELOPERS.md
+++ b/cmd/grafana-agent-operator/DEVELOPERS.md
@@ -74,7 +74,7 @@ running.
### Apply the CRDs
Generated CRDs used by the operator can be found in [the Production
-folder](../../production/operator/crds). Deploy them from the root of the
+folder](../../operations/agent-static-operator/crds). Deploy them from the root of the
repository with:
```
diff --git a/cmd/internal/flowmode/cmd_run.go b/cmd/internal/flowmode/cmd_run.go
index c8618b928b85..20cb8fb2ab2e 100644
--- a/cmd/internal/flowmode/cmd_run.go
+++ b/cmd/internal/flowmode/cmd_run.go
@@ -360,7 +360,7 @@ func getEnabledComponentsFunc(f *flow.Flow) func() map[string]interface{} {
components := component.GetAllComponents(f, component.InfoOptions{})
componentNames := map[string]struct{}{}
for _, c := range components {
- componentNames[c.Registration.Name] = struct{}{}
+ componentNames[c.ComponentName] = struct{}{}
}
return map[string]interface{}{"enabled-components": maps.Keys(componentNames)}
}
diff --git a/component/all/all.go b/component/all/all.go
index 437a7a07e59b..0bf3da725bbf 100644
--- a/component/all/all.go
+++ b/component/all/all.go
@@ -82,6 +82,7 @@ import (
_ "github.com/grafana/agent/component/otelcol/processor/k8sattributes" // Import otelcol.processor.k8sattributes
_ "github.com/grafana/agent/component/otelcol/processor/memorylimiter" // Import otelcol.processor.memory_limiter
_ "github.com/grafana/agent/component/otelcol/processor/probabilistic_sampler" // Import otelcol.processor.probabilistic_sampler
+ _ "github.com/grafana/agent/component/otelcol/processor/resourcedetection" // Import otelcol.processor.resourcedetection
_ "github.com/grafana/agent/component/otelcol/processor/span" // Import otelcol.processor.span
_ "github.com/grafana/agent/component/otelcol/processor/tail_sampling" // Import otelcol.processor.tail_sampling
_ "github.com/grafana/agent/component/otelcol/processor/transform" // Import otelcol.processor.transform
diff --git a/component/component_provider.go b/component/component_provider.go
index 90454b5b04c3..630961d8f6db 100644
--- a/component/component_provider.go
+++ b/component/component_provider.go
@@ -93,8 +93,8 @@ type Info struct {
// this component depends on, or is depended on by, respectively.
References, ReferencedBy []string
- Registration Registration // Component registration.
- Health Health // Current component health.
+ ComponentName string // Name of the component.
+ Health Health // Current component health.
Arguments Arguments // Current arguments value of the component.
Exports Exports // Current exports value of the component.
@@ -157,7 +157,7 @@ func (info *Info) MarshalJSON() ([]byte, error) {
}
return json.Marshal(&componentDetailJSON{
- Name: info.Registration.Name,
+ Name: info.ComponentName,
Type: "block",
ModuleID: info.ID.ModuleID,
LocalID: info.ID.LocalID,
diff --git a/component/loki/source/docker/internal/dockertarget/target.go b/component/loki/source/docker/internal/dockertarget/target.go
index b410d42b9cf2..25acdefa5e57 100644
--- a/component/loki/source/docker/internal/dockertarget/target.go
+++ b/component/loki/source/docker/internal/dockertarget/target.go
@@ -219,6 +219,7 @@ func (t *Target) process(r io.Reader, logStreamLset model.LabelSet) {
// labels (e.g. duplicated and relabeled), but this shouldn't be the
// case anyway.
t.positions.Put(positions.CursorKey(t.containerName), t.labelsStr, ts.Unix())
+ t.since = ts.Unix()
}
}
diff --git a/component/loki/source/docker/internal/dockertarget/target_test.go b/component/loki/source/docker/internal/dockertarget/target_test.go
index a2d2053e2c9a..979f15ffb751 100644
--- a/component/loki/source/docker/internal/dockertarget/target_test.go
+++ b/component/loki/source/docker/internal/dockertarget/target_test.go
@@ -9,7 +9,6 @@ import (
"net/http"
"net/http/httptest"
"os"
- "sort"
"strings"
"testing"
"time"
@@ -24,6 +23,7 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/model/relabel"
+ "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -31,7 +31,13 @@ func TestDockerTarget(t *testing.T) {
h := func(w http.ResponseWriter, r *http.Request) {
switch path := r.URL.Path; {
case strings.HasSuffix(path, "/logs"):
- dat, err := os.ReadFile("testdata/flog.log")
+ var filePath string
+ if strings.Contains(r.URL.RawQuery, "since=0") {
+ filePath = "testdata/flog.log"
+ } else {
+ filePath = "testdata/flog_after_restart.log"
+ }
+ dat, err := os.ReadFile(filePath)
require.NoError(t, err)
_, err = w.Write(dat)
require.NoError(t, err)
@@ -76,15 +82,6 @@ func TestDockerTarget(t *testing.T) {
require.NoError(t, err)
tgt.StartIfNotRunning()
- require.Eventually(t, func() bool {
- return len(entryHandler.Received()) >= 5
- }, 5*time.Second, 100*time.Millisecond)
-
- received := entryHandler.Received()
- sort.Slice(received, func(i, j int) bool {
- return received[i].Timestamp.Before(received[j].Timestamp)
- })
-
expectedLines := []string{
"5.3.69.55 - - [09/Dec/2021:09:15:02 +0000] \"HEAD /brand/users/clicks-and-mortar/front-end HTTP/2.0\" 503 27087",
"101.54.183.185 - - [09/Dec/2021:09:15:03 +0000] \"POST /next-generation HTTP/1.0\" 416 11468",
@@ -92,9 +89,49 @@ func TestDockerTarget(t *testing.T) {
"28.104.242.74 - - [09/Dec/2021:09:15:03 +0000] \"PATCH /value-added/cultivate/systems HTTP/2.0\" 405 11843",
"150.187.51.54 - satterfield1852 [09/Dec/2021:09:15:03 +0000] \"GET /incentivize/deliver/innovative/cross-platform HTTP/1.1\" 301 13032",
}
- actualLines := make([]string, 0, 5)
- for _, entry := range received[:5] {
- actualLines = append(actualLines, entry.Line)
+
+ assert.EventuallyWithT(t, func(c *assert.CollectT) {
+ assertExpectedLog(c, entryHandler, expectedLines)
+ }, 5*time.Second, 100*time.Millisecond, "Expected log lines were not found within the time limit.")
+
+ tgt.Stop()
+ entryHandler.Clear()
+ // restart target to simulate container restart
+ tgt.StartIfNotRunning()
+ expectedLinesAfterRestart := []string{
+ "243.115.12.215 - - [09/Dec/2023:09:16:57 +0000] \"DELETE /morph/exploit/granular HTTP/1.0\" 500 26468",
+ "221.41.123.237 - - [09/Dec/2023:09:16:57 +0000] \"DELETE /user-centric/whiteboard HTTP/2.0\" 205 22487",
+ "89.111.144.144 - - [09/Dec/2023:09:16:57 +0000] \"DELETE /open-source/e-commerce HTTP/1.0\" 401 11092",
+ "62.180.191.187 - - [09/Dec/2023:09:16:57 +0000] \"DELETE /cultivate/integrate/technologies HTTP/2.0\" 302 12979",
+ "156.249.2.192 - - [09/Dec/2023:09:16:57 +0000] \"POST /revolutionize/mesh/metrics HTTP/2.0\" 401 5297",
+ }
+ assert.EventuallyWithT(t, func(c *assert.CollectT) {
+ assertExpectedLog(c, entryHandler, expectedLinesAfterRestart)
+ }, 5*time.Second, 100*time.Millisecond, "Expected log lines after restart were not found within the time limit.")
+}
+
+// assertExpectedLog will verify that all expectedLines were received, in any order, without duplicates.
+func assertExpectedLog(c *assert.CollectT, entryHandler *fake.Client, expectedLines []string) {
+ logLines := entryHandler.Received()
+ testLogLines := make(map[string]int)
+ for _, l := range logLines {
+ if containsString(expectedLines, l.Line) {
+			testLogLines[l.Line]++
+ }
+ }
+ // assert that all log lines were received
+ assert.Len(c, testLogLines, len(expectedLines))
+ // assert that there are no duplicated log lines
+ for _, v := range testLogLines {
+ assert.Equal(c, v, 1)
+ }
+}
+
+func containsString(slice []string, str string) bool {
+ for _, item := range slice {
+ if item == str {
+ return true
+ }
}
- require.ElementsMatch(t, actualLines, expectedLines)
+ return false
}
diff --git a/component/loki/source/docker/internal/dockertarget/testdata/flog_after_restart.log b/component/loki/source/docker/internal/dockertarget/testdata/flog_after_restart.log
new file mode 100644
index 000000000000..59afb576805e
Binary files /dev/null and b/component/loki/source/docker/internal/dockertarget/testdata/flog_after_restart.log differ
diff --git a/component/mimir/rules/kubernetes/rules.go b/component/mimir/rules/kubernetes/rules.go
index 016a888d9104..14765a865095 100644
--- a/component/mimir/rules/kubernetes/rules.go
+++ b/component/mimir/rules/kubernetes/rules.go
@@ -261,10 +261,11 @@ func (c *Component) init() error {
httpClient := c.args.HTTPClientConfig.Convert()
c.mimirClient, err = mimirClient.New(c.log, mimirClient.Config{
- ID: c.args.TenantID,
- Address: c.args.Address,
- UseLegacyRoutes: c.args.UseLegacyRoutes,
- HTTPClientConfig: *httpClient,
+ ID: c.args.TenantID,
+ Address: c.args.Address,
+ UseLegacyRoutes: c.args.UseLegacyRoutes,
+ PrometheusHTTPPrefix: c.args.PrometheusHTTPPrefix,
+ HTTPClientConfig: *httpClient,
}, c.metrics.mimirClientTiming)
if err != nil {
return err
diff --git a/component/mimir/rules/kubernetes/types.go b/component/mimir/rules/kubernetes/types.go
index d8e2445e5bf2..390a4f6a4124 100644
--- a/component/mimir/rules/kubernetes/types.go
+++ b/component/mimir/rules/kubernetes/types.go
@@ -11,6 +11,7 @@ type Arguments struct {
Address string `river:"address,attr"`
TenantID string `river:"tenant_id,attr,optional"`
UseLegacyRoutes bool `river:"use_legacy_routes,attr,optional"`
+ PrometheusHTTPPrefix string `river:"prometheus_http_prefix,attr,optional"`
HTTPClientConfig config.HTTPClientConfig `river:",squash"`
SyncInterval time.Duration `river:"sync_interval,attr,optional"`
MimirNameSpacePrefix string `river:"mimir_namespace_prefix,attr,optional"`
@@ -23,6 +24,7 @@ var DefaultArguments = Arguments{
SyncInterval: 30 * time.Second,
MimirNameSpacePrefix: "agent",
HTTPClientConfig: config.DefaultHTTPClientConfig,
+ PrometheusHTTPPrefix: "/prometheus",
}
// SetToDefault implements river.Defaulter.
diff --git a/component/module/git/git.go b/component/module/git/git.go
index dfe17ef2cb4a..607fcd4577a6 100644
--- a/component/module/git/git.go
+++ b/component/module/git/git.go
@@ -12,7 +12,7 @@ import (
"github.com/go-kit/log"
"github.com/grafana/agent/component"
"github.com/grafana/agent/component/module"
- "github.com/grafana/agent/component/module/git/internal/vcs"
+ "github.com/grafana/agent/internal/vcs"
"github.com/grafana/agent/pkg/flow/logging/level"
)
diff --git a/component/otelcol/config_debug_metrics.go b/component/otelcol/config_debug_metrics.go
index ca8575bee6de..f387f64cbfdf 100644
--- a/component/otelcol/config_debug_metrics.go
+++ b/component/otelcol/config_debug_metrics.go
@@ -7,7 +7,7 @@ type DebugMetricsArguments struct {
// DefaultDebugMetricsArguments holds default settings for DebugMetricsArguments.
var DefaultDebugMetricsArguments = DebugMetricsArguments{
- DisableHighCardinalityMetrics: false,
+ DisableHighCardinalityMetrics: true,
}
// SetToDefault implements river.Defaulter.
diff --git a/component/otelcol/config_k8s.go b/component/otelcol/config_k8s.go
new file mode 100644
index 000000000000..b20407fd41fb
--- /dev/null
+++ b/component/otelcol/config_k8s.go
@@ -0,0 +1,35 @@
+package otelcol
+
+import "fmt"
+
+const (
+ KubernetesAPIConfig_AuthType_None = "none"
+ KubernetesAPIConfig_AuthType_ServiceAccount = "serviceAccount"
+ KubernetesAPIConfig_AuthType_KubeConfig = "kubeConfig"
+ KubernetesAPIConfig_AuthType_TLS = "tls"
+)
+
+// KubernetesAPIConfig contains options relevant to connecting to the K8s API
+type KubernetesAPIConfig struct {
+ // How to authenticate to the K8s API server. This can be one of `none`
+ // (for no auth), `serviceAccount` (to use the standard service account
+ // token provided to the agent pod), or `kubeConfig` to use credentials
+ // from `~/.kube/config`.
+ AuthType string `river:"auth_type,attr,optional"`
+
+ // When using auth_type `kubeConfig`, override the current context.
+ Context string `river:"context,attr,optional"`
+}
+
+// Validate returns an error if the config is invalid.
+func (c *KubernetesAPIConfig) Validate() error {
+ switch c.AuthType {
+ case KubernetesAPIConfig_AuthType_None,
+ KubernetesAPIConfig_AuthType_ServiceAccount,
+ KubernetesAPIConfig_AuthType_KubeConfig,
+ KubernetesAPIConfig_AuthType_TLS:
+ return nil
+ default:
+ return fmt.Errorf("invalid auth_type %q", c.AuthType)
+ }
+}
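
The validation above is a plain switch over the accepted `auth_type` strings. A
standalone sketch mirroring that logic (`validateAuthType` is a hypothetical
helper for illustration; the real method is `KubernetesAPIConfig.Validate` in
`component/otelcol/config_k8s.go`):

```go
package main

import "fmt"

// validateAuthType mirrors KubernetesAPIConfig.Validate: it accepts exactly
// the four supported auth modes and rejects anything else.
func validateAuthType(authType string) error {
	switch authType {
	case "none", "serviceAccount", "kubeConfig", "tls":
		return nil
	default:
		return fmt.Errorf("invalid auth_type %q", authType)
	}
}

func main() {
	for _, at := range []string{"none", "serviceAccount", "kubeConfig", "tls", "bogus"} {
		fmt.Printf("%-15s -> %v\n", at, validateAuthType(at))
	}
}
```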
diff --git a/component/otelcol/exporter/loadbalancing/loadbalancing.go b/component/otelcol/exporter/loadbalancing/loadbalancing.go
index 3455318fef38..d4b8a87cf5f6 100644
--- a/component/otelcol/exporter/loadbalancing/loadbalancing.go
+++ b/component/otelcol/exporter/loadbalancing/loadbalancing.go
@@ -59,7 +59,8 @@ var (
Protocol: Protocol{
OTLP: DefaultOTLPConfig,
},
- RoutingKey: "traceID",
+ RoutingKey: "traceID",
+ DebugMetrics: otelcol.DefaultDebugMetricsArguments,
}
DefaultOTLPConfig = OtlpConfig{
diff --git a/component/otelcol/exporter/loadbalancing/loadbalancing_test.go b/component/otelcol/exporter/loadbalancing/loadbalancing_test.go
index 5e528dd373a3..abc37bc1703d 100644
--- a/component/otelcol/exporter/loadbalancing/loadbalancing_test.go
+++ b/component/otelcol/exporter/loadbalancing/loadbalancing_test.go
@@ -4,6 +4,7 @@ import (
"testing"
"time"
+ "github.com/grafana/agent/component/otelcol"
"github.com/grafana/agent/component/otelcol/exporter/loadbalancing"
"github.com/grafana/river"
"github.com/open-telemetry/opentelemetry-collector-contrib/exporter/loadbalancingexporter"
@@ -268,3 +269,83 @@ func TestConfigConversion(t *testing.T) {
})
}
}
+
+func TestDebugMetricsConfig(t *testing.T) {
+ tests := []struct {
+ testName string
+ agentCfg string
+ expected otelcol.DebugMetricsArguments
+ }{
+ {
+ testName: "default",
+ agentCfg: `
+ resolver {
+ static {
+ hostnames = ["endpoint-1"]
+ }
+ }
+ protocol {
+ otlp {
+ client {}
+ }
+ }
+ `,
+ expected: otelcol.DebugMetricsArguments{
+ DisableHighCardinalityMetrics: true,
+ },
+ },
+ {
+ testName: "explicit_false",
+ agentCfg: `
+ resolver {
+ static {
+ hostnames = ["endpoint-1"]
+ }
+ }
+ protocol {
+ otlp {
+ client {}
+ }
+ }
+ debug_metrics {
+ disable_high_cardinality_metrics = false
+ }
+ `,
+ expected: otelcol.DebugMetricsArguments{
+ DisableHighCardinalityMetrics: false,
+ },
+ },
+ {
+ testName: "explicit_true",
+ agentCfg: `
+ resolver {
+ static {
+ hostnames = ["endpoint-1"]
+ }
+ }
+ protocol {
+ otlp {
+ client {}
+ }
+ }
+ debug_metrics {
+ disable_high_cardinality_metrics = true
+ }
+ `,
+ expected: otelcol.DebugMetricsArguments{
+ DisableHighCardinalityMetrics: true,
+ },
+ },
+ }
+
+ for _, tc := range tests {
+ t.Run(tc.testName, func(t *testing.T) {
+ var args loadbalancing.Arguments
+ require.NoError(t, river.Unmarshal([]byte(tc.agentCfg), &args))
+ _, err := args.Convert()
+ require.NoError(t, err)
+
+ require.Equal(t, tc.expected, args.DebugMetricsConfig())
+ })
+ }
+}
diff --git a/component/otelcol/exporter/logging/logging.go b/component/otelcol/exporter/logging/logging.go
index 13d12fbf312e..3156309ab7cf 100644
--- a/component/otelcol/exporter/logging/logging.go
+++ b/component/otelcol/exporter/logging/logging.go
@@ -41,6 +41,7 @@ var DefaultArguments = Arguments{
Verbosity: configtelemetry.LevelNormal,
SamplingInitial: 2,
SamplingThereafter: 500,
+ DebugMetrics: otelcol.DefaultDebugMetricsArguments,
}
// SetToDefault implements river.Defaulter.
diff --git a/component/otelcol/exporter/otlp/otlp.go b/component/otelcol/exporter/otlp/otlp.go
index 7ca10d2c2c0b..f473c4722571 100644
--- a/component/otelcol/exporter/otlp/otlp.go
+++ b/component/otelcol/exporter/otlp/otlp.go
@@ -43,10 +43,11 @@ var _ exporter.Arguments = Arguments{}
// DefaultArguments holds default values for Arguments.
var DefaultArguments = Arguments{
- Timeout: otelcol.DefaultTimeout,
- Queue: otelcol.DefaultQueueArguments,
- Retry: otelcol.DefaultRetryArguments,
- Client: DefaultGRPCClientArguments,
+ Timeout: otelcol.DefaultTimeout,
+ Queue: otelcol.DefaultQueueArguments,
+ Retry: otelcol.DefaultRetryArguments,
+ Client: DefaultGRPCClientArguments,
+ DebugMetrics: otelcol.DefaultDebugMetricsArguments,
}
// SetToDefault implements river.Defaulter.
diff --git a/component/otelcol/exporter/otlp/otlp_test.go b/component/otelcol/exporter/otlp/otlp_test.go
index 9c256ab94ba2..13bd8e56883d 100644
--- a/component/otelcol/exporter/otlp/otlp_test.go
+++ b/component/otelcol/exporter/otlp/otlp_test.go
@@ -143,3 +143,62 @@ func createTestTraces() ptrace.Traces {
}
return data
}
+
+func TestDebugMetricsConfig(t *testing.T) {
+ tests := []struct {
+ testName string
+ agentCfg string
+ expected otelcol.DebugMetricsArguments
+ }{
+ {
+ testName: "default",
+ agentCfg: `
+ client {
+ endpoint = "tempo-xxx.grafana.net/tempo:443"
+ }
+ `,
+ expected: otelcol.DebugMetricsArguments{
+ DisableHighCardinalityMetrics: true,
+ },
+ },
+ {
+ testName: "explicit_false",
+ agentCfg: `
+ client {
+ endpoint = "tempo-xxx.grafana.net/tempo:443"
+ }
+ debug_metrics {
+ disable_high_cardinality_metrics = false
+ }
+ `,
+ expected: otelcol.DebugMetricsArguments{
+ DisableHighCardinalityMetrics: false,
+ },
+ },
+ {
+ testName: "explicit_true",
+ agentCfg: `
+ client {
+ endpoint = "tempo-xxx.grafana.net/tempo:443"
+ }
+ debug_metrics {
+ disable_high_cardinality_metrics = true
+ }
+ `,
+ expected: otelcol.DebugMetricsArguments{
+ DisableHighCardinalityMetrics: true,
+ },
+ },
+ }
+
+ for _, tc := range tests {
+ t.Run(tc.testName, func(t *testing.T) {
+ var args otlp.Arguments
+ require.NoError(t, river.Unmarshal([]byte(tc.agentCfg), &args))
+ _, err := args.Convert()
+ require.NoError(t, err)
+
+ require.Equal(t, tc.expected, args.DebugMetricsConfig())
+ })
+ }
+}
diff --git a/component/otelcol/exporter/otlphttp/otlphttp.go b/component/otelcol/exporter/otlphttp/otlphttp.go
index 0508ec2e6289..b8d3aeaf6956 100644
--- a/component/otelcol/exporter/otlphttp/otlphttp.go
+++ b/component/otelcol/exporter/otlphttp/otlphttp.go
@@ -48,9 +48,10 @@ var _ exporter.Arguments = Arguments{}
// DefaultArguments holds default values for Arguments.
var DefaultArguments = Arguments{
- Queue: otelcol.DefaultQueueArguments,
- Retry: otelcol.DefaultRetryArguments,
- Client: DefaultHTTPClientArguments,
+ Queue: otelcol.DefaultQueueArguments,
+ Retry: otelcol.DefaultRetryArguments,
+ Client: DefaultHTTPClientArguments,
+ DebugMetrics: otelcol.DefaultDebugMetricsArguments,
}
// SetToDefault implements river.Defaulter.
diff --git a/component/otelcol/exporter/otlphttp/otlphttp_test.go b/component/otelcol/exporter/otlphttp/otlphttp_test.go
index 64e6328b2fb5..6a2449db6204 100644
--- a/component/otelcol/exporter/otlphttp/otlphttp_test.go
+++ b/component/otelcol/exporter/otlphttp/otlphttp_test.go
@@ -114,3 +114,62 @@ func createTestTraces() ptrace.Traces {
}
return data
}
+
+func TestDebugMetricsConfig(t *testing.T) {
+ tests := []struct {
+ testName string
+ agentCfg string
+ expected otelcol.DebugMetricsArguments
+ }{
+ {
+ testName: "default",
+ agentCfg: `
+ client {
+ endpoint = "http://tempo:4317"
+ }
+ `,
+ expected: otelcol.DebugMetricsArguments{
+ DisableHighCardinalityMetrics: true,
+ },
+ },
+ {
+ testName: "explicit_false",
+ agentCfg: `
+ client {
+ endpoint = "http://tempo:4317"
+ }
+ debug_metrics {
+ disable_high_cardinality_metrics = false
+ }
+ `,
+ expected: otelcol.DebugMetricsArguments{
+ DisableHighCardinalityMetrics: false,
+ },
+ },
+ {
+ testName: "explicit_true",
+ agentCfg: `
+ client {
+ endpoint = "http://tempo:4317"
+ }
+ debug_metrics {
+ disable_high_cardinality_metrics = true
+ }
+ `,
+ expected: otelcol.DebugMetricsArguments{
+ DisableHighCardinalityMetrics: true,
+ },
+ },
+ }
+
+ for _, tc := range tests {
+ t.Run(tc.testName, func(t *testing.T) {
+ var args otlphttp.Arguments
+ require.NoError(t, river.Unmarshal([]byte(tc.agentCfg), &args))
+ _, err := args.Convert()
+ require.NoError(t, err)
+
+ require.Equal(t, tc.expected, args.DebugMetricsConfig())
+ })
+ }
+}
diff --git a/component/otelcol/processor/processortest/compare_signals.go b/component/otelcol/processor/processortest/compare_signals.go
new file mode 100644
index 000000000000..3fdc52cad1e1
--- /dev/null
+++ b/component/otelcol/processor/processortest/compare_signals.go
@@ -0,0 +1,46 @@
+package processortest
+
+import (
+ "testing"
+
+ "github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatatest/plogtest"
+ "github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatatest/pmetrictest"
+ "github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatatest/ptracetest"
+ "github.com/stretchr/testify/require"
+ "go.opentelemetry.io/collector/pdata/plog"
+ "go.opentelemetry.io/collector/pdata/pmetric"
+ "go.opentelemetry.io/collector/pdata/ptrace"
+)
+
+func CompareMetrics(t *testing.T, expected, actual pmetric.Metrics) {
+ err := pmetrictest.CompareMetrics(
+ expected,
+ actual,
+ pmetrictest.IgnoreResourceMetricsOrder(),
+ pmetrictest.IgnoreMetricDataPointsOrder(),
+ pmetrictest.IgnoreMetricsOrder(),
+ pmetrictest.IgnoreScopeMetricsOrder(),
+ pmetrictest.IgnoreSummaryDataPointValueAtQuantileSliceOrder(),
+ pmetrictest.IgnoreTimestamp(),
+ pmetrictest.IgnoreStartTimestamp(),
+ )
+ require.NoError(t, err)
+}
+
+func CompareLogs(t *testing.T, expected, actual plog.Logs) {
+ err := plogtest.CompareLogs(
+ expected,
+ actual,
+ )
+ require.NoError(t, err)
+}
+
+func CompareTraces(t *testing.T, expected, actual ptrace.Traces) {
+ err := ptracetest.CompareTraces(
+ expected,
+ actual,
+ ptracetest.IgnoreResourceSpansOrder(),
+ ptracetest.IgnoreScopeSpansOrder(),
+ )
+ require.NoError(t, err)
+}
diff --git a/component/otelcol/processor/processortest/compare_signals_test.go b/component/otelcol/processor/processortest/compare_signals_test.go
new file mode 100644
index 000000000000..609b1754354c
--- /dev/null
+++ b/component/otelcol/processor/processortest/compare_signals_test.go
@@ -0,0 +1,36 @@
+package processortest
+
+import (
+ "testing"
+
+ "go.opentelemetry.io/collector/pdata/pmetric"
+ "go.opentelemetry.io/collector/pdata/ptrace"
+)
+
+func Test_ScopeMetricsOrder(t *testing.T) {
+ metric1 := pmetric.NewMetrics()
+ metric1_res := metric1.ResourceMetrics().AppendEmpty()
+ metric1_res.ScopeMetrics().AppendEmpty().Scope().SetName("scope1")
+ metric1_res.ScopeMetrics().AppendEmpty().Scope().SetName("scope2")
+
+ metric2 := pmetric.NewMetrics()
+ metric2_res := metric2.ResourceMetrics().AppendEmpty()
+ metric2_res.ScopeMetrics().AppendEmpty().Scope().SetName("scope2")
+ metric2_res.ScopeMetrics().AppendEmpty().Scope().SetName("scope1")
+
+ CompareMetrics(t, metric1, metric2)
+}
+
+func Test_ScopeSpansAttributesOrder(t *testing.T) {
+ trace1 := ptrace.NewTraces()
+ trace1_span_attr := trace1.ResourceSpans().AppendEmpty().ScopeSpans().AppendEmpty().Scope().Attributes()
+ trace1_span_attr.PutStr("key1", "val1")
+ trace1_span_attr.PutStr("key2", "val2")
+
+ trace2 := ptrace.NewTraces()
+ trace2_span_attr := trace2.ResourceSpans().AppendEmpty().ScopeSpans().AppendEmpty().Scope().Attributes()
+ trace2_span_attr.PutStr("key2", "val2")
+ trace2_span_attr.PutStr("key1", "val1")
+
+ CompareTraces(t, trace1, trace2)
+}
diff --git a/component/otelcol/processor/processortest/processortest.go b/component/otelcol/processor/processortest/processortest.go
index 0298f8e9250b..e9a99ec65024 100644
--- a/component/otelcol/processor/processortest/processortest.go
+++ b/component/otelcol/processor/processortest/processortest.go
@@ -75,16 +75,16 @@ func TestRunProcessor(c ProcessorRunConfig) {
//
type traceToLogSignal struct {
- logCh chan plog.Logs
- inputTrace ptrace.Traces
- expectedOuutputLog plog.Logs
+ logCh chan plog.Logs
+ inputTrace ptrace.Traces
+ expectedOutputLog plog.Logs
}
func NewTraceToLogSignal(inputJson string, expectedOutputJson string) Signal {
return &traceToLogSignal{
- logCh: make(chan plog.Logs),
- inputTrace: CreateTestTraces(inputJson),
- expectedOuutputLog: CreateTestLogs(expectedOutputJson),
+ logCh: make(chan plog.Logs),
+ inputTrace: CreateTestTraces(inputJson),
+ expectedOutputLog: CreateTestLogs(expectedOutputJson),
}
}
@@ -101,10 +101,8 @@ func (s traceToLogSignal) CheckOutput(t *testing.T) {
select {
case <-time.After(time.Second):
require.FailNow(t, "failed waiting for logs")
- case tr := <-s.logCh:
- trStr := marshalLogs(tr)
- expStr := marshalLogs(s.expectedOuutputLog)
- require.JSONEq(t, expStr, trStr)
+ case actualLog := <-s.logCh:
+ CompareLogs(t, s.expectedOutputLog, actualLog)
}
}
@@ -113,17 +111,17 @@ func (s traceToLogSignal) CheckOutput(t *testing.T) {
//
type traceToMetricSignal struct {
- metricCh chan pmetric.Metrics
- inputTrace ptrace.Traces
- expectedOuutputMetric pmetric.Metrics
+ metricCh chan pmetric.Metrics
+ inputTrace ptrace.Traces
+ expectedOutputMetric pmetric.Metrics
}
// Any timestamps inside expectedOutputJson should be set to 0.
func NewTraceToMetricSignal(inputJson string, expectedOutputJson string) Signal {
return &traceToMetricSignal{
- metricCh: make(chan pmetric.Metrics),
- inputTrace: CreateTestTraces(inputJson),
- expectedOuutputMetric: CreateTestMetrics(expectedOutputJson),
+ metricCh: make(chan pmetric.Metrics),
+ inputTrace: CreateTestTraces(inputJson),
+ expectedOutputMetric: CreateTestMetrics(expectedOutputJson),
}
}
@@ -135,57 +133,6 @@ func (s traceToMetricSignal) ConsumeInput(ctx context.Context, consumer otelcol.
return consumer.ConsumeTraces(ctx, s.inputTrace)
}
-// Set the timestamp of all data points to 0.
-// This helps avoid flaky tests due to timestamps.
-func setMetricTimestampToZero(metrics pmetric.Metrics) {
- // Loop over all resource metrics
- for i := 0; i < metrics.ResourceMetrics().Len(); i++ {
- rm := metrics.ResourceMetrics().At(i)
- // Loop over all metric scopes.
- for j := 0; j < rm.ScopeMetrics().Len(); j++ {
- sm := rm.ScopeMetrics().At(j)
- // Loop over all metrics.
- for k := 0; k < sm.Metrics().Len(); k++ {
- m := sm.Metrics().At(k)
- switch m.Type() {
- case pmetric.MetricTypeSum:
- // Loop over all data points.
- for l := 0; l < m.Sum().DataPoints().Len(); l++ {
- // Set the timestamp to 0 to avoid flaky tests.
- dp := m.Sum().DataPoints().At(l)
- dp.SetTimestamp(0)
- dp.SetStartTimestamp(0)
- }
- case pmetric.MetricTypeGauge:
- // Loop over all data points.
- for l := 0; l < m.Gauge().DataPoints().Len(); l++ {
- // Set the timestamp to 0 to avoid flaky tests.
- dp := m.Gauge().DataPoints().At(l)
- dp.SetTimestamp(0)
- dp.SetStartTimestamp(0)
- }
- case pmetric.MetricTypeHistogram:
- // Loop over all data points.
- for l := 0; l < m.Histogram().DataPoints().Len(); l++ {
- // Set the timestamp to 0 to avoid flaky tests.
- dp := m.Histogram().DataPoints().At(l)
- dp.SetTimestamp(0)
- dp.SetStartTimestamp(0)
- }
- case pmetric.MetricTypeSummary:
- // Loop over all data points.
- for l := 0; l < m.Summary().DataPoints().Len(); l++ {
- // Set the timestamp to 0 to avoid flaky tests.
- dp := m.Summary().DataPoints().At(l)
- dp.SetTimestamp(0)
- dp.SetStartTimestamp(0)
- }
- }
- }
- }
- }
-}
-
// Wait for the component to finish and check its output.
func (s traceToMetricSignal) CheckOutput(t *testing.T) {
// Set the timeout to a few seconds so that all components have finished.
@@ -196,14 +143,8 @@ func (s traceToMetricSignal) CheckOutput(t *testing.T) {
select {
case <-time.After(timeout):
require.FailNow(t, "failed waiting for metrics")
- case tr := <-s.metricCh:
- setMetricTimestampToZero(tr)
- trStr := marshalMetrics(tr)
-
- expStr := marshalMetrics(s.expectedOuutputMetric)
- // Set a field from the json to an empty string to avoid flaky tests containing timestamps.
-
- require.JSONEq(t, expStr, trStr)
+ case actualMetric := <-s.metricCh:
+ CompareMetrics(t, s.expectedOutputMetric, actualMetric)
}
}
@@ -212,16 +153,16 @@ func (s traceToMetricSignal) CheckOutput(t *testing.T) {
//
type traceSignal struct {
- traceCh chan ptrace.Traces
- inputTrace ptrace.Traces
- expectedOuutputTrace ptrace.Traces
+ traceCh chan ptrace.Traces
+ inputTrace ptrace.Traces
+ expectedOutputTrace ptrace.Traces
}
func NewTraceSignal(inputJson string, expectedOutputJson string) Signal {
return &traceSignal{
- traceCh: make(chan ptrace.Traces),
- inputTrace: CreateTestTraces(inputJson),
- expectedOuutputTrace: CreateTestTraces(expectedOutputJson),
+ traceCh: make(chan ptrace.Traces),
+ inputTrace: CreateTestTraces(inputJson),
+ expectedOutputTrace: CreateTestTraces(expectedOutputJson),
}
}
@@ -238,10 +179,8 @@ func (s traceSignal) CheckOutput(t *testing.T) {
select {
case <-time.After(time.Second):
require.FailNow(t, "failed waiting for traces")
- case tr := <-s.traceCh:
- trStr := marshalTraces(tr)
- expStr := marshalTraces(s.expectedOuutputTrace)
- require.JSONEq(t, expStr, trStr)
+ case actualTrace := <-s.traceCh:
+ CompareTraces(t, s.expectedOutputTrace, actualTrace)
}
}
@@ -256,15 +195,6 @@ func CreateTestTraces(traceJson string) ptrace.Traces {
return data
}
-func marshalTraces(trace ptrace.Traces) string {
- marshaler := &ptrace.JSONMarshaler{}
- data, err := marshaler.MarshalTraces(trace)
- if err != nil {
- panic(err)
- }
- return string(data)
-}
-
// makeTracesOutput returns ConsumerArguments which will forward traces to the
// provided channel.
func makeTracesOutput(ch chan ptrace.Traces) *otelcol.ConsumerArguments {
@@ -289,16 +219,16 @@ func makeTracesOutput(ch chan ptrace.Traces) *otelcol.ConsumerArguments {
//
type logSignal struct {
- logCh chan plog.Logs
- inputLog plog.Logs
- expectedOuutputLog plog.Logs
+ logCh chan plog.Logs
+ inputLog plog.Logs
+ expectedOutputLog plog.Logs
}
func NewLogSignal(inputJson string, expectedOutputJson string) Signal {
return &logSignal{
- logCh: make(chan plog.Logs),
- inputLog: CreateTestLogs(inputJson),
- expectedOuutputLog: CreateTestLogs(expectedOutputJson),
+ logCh: make(chan plog.Logs),
+ inputLog: CreateTestLogs(inputJson),
+ expectedOutputLog: CreateTestLogs(expectedOutputJson),
}
}
@@ -315,10 +245,8 @@ func (s logSignal) CheckOutput(t *testing.T) {
select {
case <-time.After(time.Second):
require.FailNow(t, "failed waiting for logs")
- case tr := <-s.logCh:
- trStr := marshalLogs(tr)
- expStr := marshalLogs(s.expectedOuutputLog)
- require.JSONEq(t, expStr, trStr)
+ case actualLog := <-s.logCh:
+ CompareLogs(t, s.expectedOutputLog, actualLog)
}
}
@@ -352,30 +280,21 @@ func CreateTestLogs(logJson string) plog.Logs {
return data
}
-func marshalLogs(log plog.Logs) string {
- marshaler := &plog.JSONMarshaler{}
- data, err := marshaler.MarshalLogs(log)
- if err != nil {
- panic(err)
- }
- return string(data)
-}
-
//
// Metrics
//
type metricSignal struct {
- metricCh chan pmetric.Metrics
- inputMetric pmetric.Metrics
- expectedOuutputMetric pmetric.Metrics
+ metricCh chan pmetric.Metrics
+ inputMetric pmetric.Metrics
+ expectedOutputMetric pmetric.Metrics
}
func NewMetricSignal(inputJson string, expectedOutputJson string) Signal {
return &metricSignal{
- metricCh: make(chan pmetric.Metrics),
- inputMetric: CreateTestMetrics(inputJson),
- expectedOuutputMetric: CreateTestMetrics(expectedOutputJson),
+ metricCh: make(chan pmetric.Metrics),
+ inputMetric: CreateTestMetrics(inputJson),
+ expectedOutputMetric: CreateTestMetrics(expectedOutputJson),
}
}
@@ -392,10 +311,8 @@ func (s metricSignal) CheckOutput(t *testing.T) {
select {
case <-time.After(time.Second):
require.FailNow(t, "failed waiting for logs")
- case tr := <-s.metricCh:
- trStr := marshalMetrics(tr)
- expStr := marshalMetrics(s.expectedOuutputMetric)
- require.JSONEq(t, expStr, trStr)
+ case actualMetric := <-s.metricCh:
+ CompareMetrics(t, s.expectedOutputMetric, actualMetric)
}
}
@@ -428,12 +345,3 @@ func CreateTestMetrics(metricJson string) pmetric.Metrics {
}
return data
}
-
-func marshalMetrics(metrics pmetric.Metrics) string {
- marshaler := &pmetric.JSONMarshaler{}
- data, err := marshaler.MarshalMetrics(metrics)
- if err != nil {
- panic(err)
- }
- return string(data)
-}
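The deleted helpers above marshaled telemetry to JSON and compared the strings, zeroing data-point timestamps first to avoid flakiness; the PR replaces them with shared `CompareMetrics`/`CompareTraces`/`CompareLogs` helpers. A minimal sketch of the underlying idea, using a simplified stand-in type rather than the real pdata API (`dataPoint`, `compareMetrics`, and `zeroTimestamps` are illustrative names, not part of the codebase):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// dataPoint is a simplified stand-in for a pdata metric data point.
type dataPoint struct {
	Value          float64 `json:"value"`
	Timestamp      uint64  `json:"timestamp"`
	StartTimestamp uint64  `json:"startTimestamp"`
}

// zeroTimestamps clears the fields that vary between test runs so that
// two otherwise-identical payloads compare equal.
func zeroTimestamps(dps []dataPoint) {
	for i := range dps {
		dps[i].Timestamp = 0
		dps[i].StartTimestamp = 0
	}
}

// compareMetrics normalizes both sides, marshals them to JSON, and
// reports whether the serialized forms are identical.
func compareMetrics(expected, actual []dataPoint) bool {
	zeroTimestamps(expected)
	zeroTimestamps(actual)
	expJSON, _ := json.Marshal(expected)
	actJSON, _ := json.Marshal(actual)
	return reflect.DeepEqual(expJSON, actJSON)
}

func main() {
	exp := []dataPoint{{Value: 1}}
	act := []dataPoint{{Value: 1, Timestamp: 12345, StartTimestamp: 99}}
	fmt.Println(compareMetrics(exp, act)) // prints "true"
}
```

Centralizing this in one comparison helper removes the per-signal marshal functions and the copy-pasted timestamp-zeroing loops the diff deletes.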
diff --git a/component/otelcol/processor/resourcedetection/internal/aws/ec2/config.go b/component/otelcol/processor/resourcedetection/internal/aws/ec2/config.go
new file mode 100644
index 000000000000..9b715eac4a12
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/aws/ec2/config.go
@@ -0,0 +1,72 @@
+package ec2
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "ec2"
+
+// Config defines user-specified configurations unique to the EC2 detector.
+type Config struct {
+ // Tags is a list of regular expressions matching EC2 instance tag keys that
+ // users want to add as resource attributes to processed data.
+ Tags []string `river:"tags,attr,optional"`
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ CloudAccountID: rac.ResourceAttributeConfig{Enabled: true},
+ CloudAvailabilityZone: rac.ResourceAttributeConfig{Enabled: true},
+ CloudPlatform: rac.ResourceAttributeConfig{Enabled: true},
+ CloudProvider: rac.ResourceAttributeConfig{Enabled: true},
+ CloudRegion: rac.ResourceAttributeConfig{Enabled: true},
+ HostID: rac.ResourceAttributeConfig{Enabled: true},
+ HostImageID: rac.ResourceAttributeConfig{Enabled: true},
+ HostName: rac.ResourceAttributeConfig{Enabled: true},
+ HostType: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "tags": append([]string{}, args.Tags...),
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config to enable and disable resource attributes.
+type ResourceAttributesConfig struct {
+ CloudAccountID rac.ResourceAttributeConfig `river:"cloud.account.id,block,optional"`
+ CloudAvailabilityZone rac.ResourceAttributeConfig `river:"cloud.availability_zone,block,optional"`
+ CloudPlatform rac.ResourceAttributeConfig `river:"cloud.platform,block,optional"`
+ CloudProvider rac.ResourceAttributeConfig `river:"cloud.provider,block,optional"`
+ CloudRegion rac.ResourceAttributeConfig `river:"cloud.region,block,optional"`
+ HostID rac.ResourceAttributeConfig `river:"host.id,block,optional"`
+ HostImageID rac.ResourceAttributeConfig `river:"host.image.id,block,optional"`
+ HostName rac.ResourceAttributeConfig `river:"host.name,block,optional"`
+ HostType rac.ResourceAttributeConfig `river:"host.type,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "cloud.account.id": r.CloudAccountID.Convert(),
+ "cloud.availability_zone": r.CloudAvailabilityZone.Convert(),
+ "cloud.platform": r.CloudPlatform.Convert(),
+ "cloud.provider": r.CloudProvider.Convert(),
+ "cloud.region": r.CloudRegion.Convert(),
+ "host.id": r.HostID.Convert(),
+ "host.image.id": r.HostImageID.Convert(),
+ "host.name": r.HostName.Convert(),
+ "host.type": r.HostType.Convert(),
+ }
+}
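Every detector in this PR follows the same shape: a `ResourceAttributesConfig` of per-attribute `Enabled` toggles, defaults applied through `river.Defaulter`, and a `Convert` that produces the map form the upstream OTel config expects. A self-contained sketch of that pattern (the `attributeConfig` and `ec2Attributes` types here are simplified stand-ins, not the real `rac` package):

```go
package main

import "fmt"

// attributeConfig mirrors the per-attribute enable/disable toggle used by
// each detector's resource_attributes block.
type attributeConfig struct {
	Enabled bool
}

func (a attributeConfig) convert() map[string]interface{} {
	return map[string]interface{}{"enabled": a.Enabled}
}

// ec2Attributes is a two-field stand-in for a detector's attribute set.
type ec2Attributes struct {
	CloudAccountID attributeConfig
	HostName       attributeConfig
}

// setToDefault mimics river.Defaulter: attributes start enabled, and users
// only write blocks for the toggles they want to change.
func (c *ec2Attributes) setToDefault() {
	*c = ec2Attributes{
		CloudAccountID: attributeConfig{Enabled: true},
		HostName:       attributeConfig{Enabled: true},
	}
}

// convert flattens the struct into the key/value map the upstream
// resourcedetection processor configuration consumes.
func (c ec2Attributes) convert() map[string]interface{} {
	return map[string]interface{}{
		"cloud.account.id": c.CloudAccountID.convert(),
		"host.name":        c.HostName.convert(),
	}
}

func main() {
	var cfg ec2Attributes
	cfg.setToDefault()
	cfg.HostName.Enabled = false // a user override of one toggle
	fmt.Println(cfg.convert())
}
```

Because `SetToDefault` replaces the whole struct before user blocks are decoded, omitted attributes keep their defaults while explicitly written blocks override them.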
diff --git a/component/otelcol/processor/resourcedetection/internal/aws/ecs/config.go b/component/otelcol/processor/resourcedetection/internal/aws/ecs/config.go
new file mode 100644
index 000000000000..1532bd376567
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/aws/ecs/config.go
@@ -0,0 +1,86 @@
+package ecs
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "ecs"
+
+type Config struct {
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ AwsEcsClusterArn: rac.ResourceAttributeConfig{Enabled: true},
+ AwsEcsLaunchtype: rac.ResourceAttributeConfig{Enabled: true},
+ AwsEcsTaskArn: rac.ResourceAttributeConfig{Enabled: true},
+ AwsEcsTaskFamily: rac.ResourceAttributeConfig{Enabled: true},
+ AwsEcsTaskRevision: rac.ResourceAttributeConfig{Enabled: true},
+ AwsLogGroupArns: rac.ResourceAttributeConfig{Enabled: true},
+ AwsLogGroupNames: rac.ResourceAttributeConfig{Enabled: true},
+ AwsLogStreamArns: rac.ResourceAttributeConfig{Enabled: true},
+ AwsLogStreamNames: rac.ResourceAttributeConfig{Enabled: true},
+ CloudAccountID: rac.ResourceAttributeConfig{Enabled: true},
+ CloudAvailabilityZone: rac.ResourceAttributeConfig{Enabled: true},
+ CloudPlatform: rac.ResourceAttributeConfig{Enabled: true},
+ CloudProvider: rac.ResourceAttributeConfig{Enabled: true},
+ CloudRegion: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args *Config) Convert() map[string]interface{} {
+ if args == nil {
+ return nil
+ }
+
+ return map[string]interface{}{
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for ecs resource attributes.
+type ResourceAttributesConfig struct {
+ AwsEcsClusterArn rac.ResourceAttributeConfig `river:"aws.ecs.cluster.arn,block,optional"`
+ AwsEcsLaunchtype rac.ResourceAttributeConfig `river:"aws.ecs.launchtype,block,optional"`
+ AwsEcsTaskArn rac.ResourceAttributeConfig `river:"aws.ecs.task.arn,block,optional"`
+ AwsEcsTaskFamily rac.ResourceAttributeConfig `river:"aws.ecs.task.family,block,optional"`
+ AwsEcsTaskRevision rac.ResourceAttributeConfig `river:"aws.ecs.task.revision,block,optional"`
+ AwsLogGroupArns rac.ResourceAttributeConfig `river:"aws.log.group.arns,block,optional"`
+ AwsLogGroupNames rac.ResourceAttributeConfig `river:"aws.log.group.names,block,optional"`
+ AwsLogStreamArns rac.ResourceAttributeConfig `river:"aws.log.stream.arns,block,optional"`
+ AwsLogStreamNames rac.ResourceAttributeConfig `river:"aws.log.stream.names,block,optional"`
+ CloudAccountID rac.ResourceAttributeConfig `river:"cloud.account.id,block,optional"`
+ CloudAvailabilityZone rac.ResourceAttributeConfig `river:"cloud.availability_zone,block,optional"`
+ CloudPlatform rac.ResourceAttributeConfig `river:"cloud.platform,block,optional"`
+ CloudProvider rac.ResourceAttributeConfig `river:"cloud.provider,block,optional"`
+ CloudRegion rac.ResourceAttributeConfig `river:"cloud.region,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "aws.ecs.cluster.arn": r.AwsEcsClusterArn.Convert(),
+ "aws.ecs.launchtype": r.AwsEcsLaunchtype.Convert(),
+ "aws.ecs.task.arn": r.AwsEcsTaskArn.Convert(),
+ "aws.ecs.task.family": r.AwsEcsTaskFamily.Convert(),
+ "aws.ecs.task.revision": r.AwsEcsTaskRevision.Convert(),
+ "aws.log.group.arns": r.AwsLogGroupArns.Convert(),
+ "aws.log.group.names": r.AwsLogGroupNames.Convert(),
+ "aws.log.stream.arns": r.AwsLogStreamArns.Convert(),
+ "aws.log.stream.names": r.AwsLogStreamNames.Convert(),
+ "cloud.account.id": r.CloudAccountID.Convert(),
+ "cloud.availability_zone": r.CloudAvailabilityZone.Convert(),
+ "cloud.platform": r.CloudPlatform.Convert(),
+ "cloud.provider": r.CloudProvider.Convert(),
+ "cloud.region": r.CloudRegion.Convert(),
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/aws/eks/config.go b/component/otelcol/processor/resourcedetection/internal/aws/eks/config.go
new file mode 100644
index 000000000000..6290180b3086
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/aws/eks/config.go
@@ -0,0 +1,46 @@
+package eks
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "eks"
+
+type Config struct {
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ CloudPlatform: rac.ResourceAttributeConfig{Enabled: true},
+ CloudProvider: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for eks resource attributes.
+type ResourceAttributesConfig struct {
+ CloudPlatform rac.ResourceAttributeConfig `river:"cloud.platform,block,optional"`
+ CloudProvider rac.ResourceAttributeConfig `river:"cloud.provider,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "cloud.platform": r.CloudPlatform.Convert(),
+ "cloud.provider": r.CloudProvider.Convert(),
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/aws/elasticbeanstalk/config.go b/component/otelcol/processor/resourcedetection/internal/aws/elasticbeanstalk/config.go
new file mode 100644
index 000000000000..dd670372cee7
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/aws/elasticbeanstalk/config.go
@@ -0,0 +1,55 @@
+package elasticbeanstalk
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "elasticbeanstalk"
+
+type Config struct {
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ CloudPlatform: rac.ResourceAttributeConfig{Enabled: true},
+ CloudProvider: rac.ResourceAttributeConfig{Enabled: true},
+ DeploymentEnvironment: rac.ResourceAttributeConfig{Enabled: true},
+ ServiceInstanceID: rac.ResourceAttributeConfig{Enabled: true},
+ ServiceVersion: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for elastic_beanstalk resource attributes.
+type ResourceAttributesConfig struct {
+ CloudPlatform rac.ResourceAttributeConfig `river:"cloud.platform,block,optional"`
+ CloudProvider rac.ResourceAttributeConfig `river:"cloud.provider,block,optional"`
+ DeploymentEnvironment rac.ResourceAttributeConfig `river:"deployment.environment,block,optional"`
+ ServiceInstanceID rac.ResourceAttributeConfig `river:"service.instance.id,block,optional"`
+ ServiceVersion rac.ResourceAttributeConfig `river:"service.version,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "cloud.platform": r.CloudPlatform.Convert(),
+ "cloud.provider": r.CloudProvider.Convert(),
+ "deployment.environment": r.DeploymentEnvironment.Convert(),
+ "service.instance.id": r.ServiceInstanceID.Convert(),
+ "service.version": r.ServiceVersion.Convert(),
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/aws/lambda/config.go b/component/otelcol/processor/resourcedetection/internal/aws/lambda/config.go
new file mode 100644
index 000000000000..19a4cc7b4e80
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/aws/lambda/config.go
@@ -0,0 +1,67 @@
+package lambda
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "lambda"
+
+type Config struct {
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ AwsLogGroupNames: rac.ResourceAttributeConfig{Enabled: true},
+ AwsLogStreamNames: rac.ResourceAttributeConfig{Enabled: true},
+ CloudPlatform: rac.ResourceAttributeConfig{Enabled: true},
+ CloudProvider: rac.ResourceAttributeConfig{Enabled: true},
+ CloudRegion: rac.ResourceAttributeConfig{Enabled: true},
+ FaasInstance: rac.ResourceAttributeConfig{Enabled: true},
+ FaasMaxMemory: rac.ResourceAttributeConfig{Enabled: true},
+ FaasName: rac.ResourceAttributeConfig{Enabled: true},
+ FaasVersion: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for lambda resource attributes.
+type ResourceAttributesConfig struct {
+ AwsLogGroupNames rac.ResourceAttributeConfig `river:"aws.log.group.names,block,optional"`
+ AwsLogStreamNames rac.ResourceAttributeConfig `river:"aws.log.stream.names,block,optional"`
+ CloudPlatform rac.ResourceAttributeConfig `river:"cloud.platform,block,optional"`
+ CloudProvider rac.ResourceAttributeConfig `river:"cloud.provider,block,optional"`
+ CloudRegion rac.ResourceAttributeConfig `river:"cloud.region,block,optional"`
+ FaasInstance rac.ResourceAttributeConfig `river:"faas.instance,block,optional"`
+ FaasMaxMemory rac.ResourceAttributeConfig `river:"faas.max_memory,block,optional"`
+ FaasName rac.ResourceAttributeConfig `river:"faas.name,block,optional"`
+ FaasVersion rac.ResourceAttributeConfig `river:"faas.version,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "aws.log.group.names": r.AwsLogGroupNames.Convert(),
+ "aws.log.stream.names": r.AwsLogStreamNames.Convert(),
+ "cloud.platform": r.CloudPlatform.Convert(),
+ "cloud.provider": r.CloudProvider.Convert(),
+ "cloud.region": r.CloudRegion.Convert(),
+ "faas.instance": r.FaasInstance.Convert(),
+ "faas.max_memory": r.FaasMaxMemory.Convert(),
+ "faas.name": r.FaasName.Convert(),
+ "faas.version": r.FaasVersion.Convert(),
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/azure/aks/config.go b/component/otelcol/processor/resourcedetection/internal/azure/aks/config.go
new file mode 100644
index 000000000000..4501c4e33a6f
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/azure/aks/config.go
@@ -0,0 +1,46 @@
+package aks
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "aks"
+
+type Config struct {
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ CloudPlatform: rac.ResourceAttributeConfig{Enabled: true},
+ CloudProvider: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for aks resource attributes.
+type ResourceAttributesConfig struct {
+ CloudPlatform rac.ResourceAttributeConfig `river:"cloud.platform,block,optional"`
+ CloudProvider rac.ResourceAttributeConfig `river:"cloud.provider,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "cloud.platform": r.CloudPlatform.Convert(),
+ "cloud.provider": r.CloudProvider.Convert(),
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/azure/config.go b/component/otelcol/processor/resourcedetection/internal/azure/config.go
new file mode 100644
index 000000000000..05e612d1d2d0
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/azure/config.go
@@ -0,0 +1,70 @@
+package azure
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "azure"
+
+type Config struct {
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ AzureResourcegroupName: rac.ResourceAttributeConfig{Enabled: true},
+ AzureVMName: rac.ResourceAttributeConfig{Enabled: true},
+ AzureVMScalesetName: rac.ResourceAttributeConfig{Enabled: true},
+ AzureVMSize: rac.ResourceAttributeConfig{Enabled: true},
+ CloudAccountID: rac.ResourceAttributeConfig{Enabled: true},
+ CloudPlatform: rac.ResourceAttributeConfig{Enabled: true},
+ CloudProvider: rac.ResourceAttributeConfig{Enabled: true},
+ CloudRegion: rac.ResourceAttributeConfig{Enabled: true},
+ HostID: rac.ResourceAttributeConfig{Enabled: true},
+ HostName: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for azure resource attributes.
+type ResourceAttributesConfig struct {
+ AzureResourcegroupName rac.ResourceAttributeConfig `river:"azure.resourcegroup.name,block,optional"`
+ AzureVMName rac.ResourceAttributeConfig `river:"azure.vm.name,block,optional"`
+ AzureVMScalesetName rac.ResourceAttributeConfig `river:"azure.vm.scaleset.name,block,optional"`
+ AzureVMSize rac.ResourceAttributeConfig `river:"azure.vm.size,block,optional"`
+ CloudAccountID rac.ResourceAttributeConfig `river:"cloud.account.id,block,optional"`
+ CloudPlatform rac.ResourceAttributeConfig `river:"cloud.platform,block,optional"`
+ CloudProvider rac.ResourceAttributeConfig `river:"cloud.provider,block,optional"`
+ CloudRegion rac.ResourceAttributeConfig `river:"cloud.region,block,optional"`
+ HostID rac.ResourceAttributeConfig `river:"host.id,block,optional"`
+ HostName rac.ResourceAttributeConfig `river:"host.name,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "azure.resourcegroup.name": r.AzureResourcegroupName.Convert(),
+ "azure.vm.name": r.AzureVMName.Convert(),
+ "azure.vm.scaleset.name": r.AzureVMScalesetName.Convert(),
+ "azure.vm.size": r.AzureVMSize.Convert(),
+ "cloud.account.id": r.CloudAccountID.Convert(),
+ "cloud.platform": r.CloudPlatform.Convert(),
+ "cloud.provider": r.CloudProvider.Convert(),
+ "cloud.region": r.CloudRegion.Convert(),
+ "host.id": r.HostID.Convert(),
+ "host.name": r.HostName.Convert(),
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/consul/config.go b/component/otelcol/processor/resourcedetection/internal/consul/config.go
new file mode 100644
index 000000000000..4cc2e9b5beb3
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/consul/config.go
@@ -0,0 +1,94 @@
+package consul
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+ "github.com/grafana/river/rivertypes"
+ "go.opentelemetry.io/collector/config/configopaque"
+)
+
+const Name = "consul"
+
+// The struct requires no user-specified fields by default, as the Consul agent's
+// default configuration will be provided to the API client.
+// See `consul.go#NewDetector` for more information.
+type Config struct {
+ // Address is the address of the Consul server
+ Address string `river:"address,attr,optional"`
+
+ // Datacenter to use. If not provided, the default agent datacenter is used.
+ Datacenter string `river:"datacenter,attr,optional"`
+
+ // Token is used to provide a per-request ACL token which overrides the
+ // agent's default (empty) token. Token is only required if
+ // [Consul's ACL System](https://www.consul.io/docs/security/acl/acl-system)
+ // is enabled.
+ Token rivertypes.Secret `river:"token,attr,optional"`
+
+ // TokenFile is not necessary in River because users can use the local.file
+ // Flow component instead.
+ //
+ // TokenFile string `river:"token_file"`
+
+ // Namespace is the name of the namespace to send along for the request
+ // when no other Namespace is present in the QueryOptions
+ Namespace string `river:"namespace,attr,optional"`
+
+ // Allowlist of [Consul Metadata](https://www.consul.io/docs/agent/options#node_meta)
+ // keys to use as resource attributes.
+ MetaLabels []string `river:"meta,attr,optional"`
+
+ // ResourceAttributes configuration for Consul detector
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ CloudRegion: rac.ResourceAttributeConfig{Enabled: true},
+ HostID: rac.ResourceAttributeConfig{Enabled: true},
+ HostName: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ //TODO(ptodev): Change the OTel Collector's "meta" param to be a slice instead of a map.
+ var metaLabels map[string]string
+ if args.MetaLabels != nil {
+ metaLabels = make(map[string]string, len(args.MetaLabels))
+ for _, label := range args.MetaLabels {
+ metaLabels[label] = ""
+ }
+ }
+
+ return map[string]interface{}{
+ "address": args.Address,
+ "datacenter": args.Datacenter,
+ "token": configopaque.String(args.Token),
+ "namespace": args.Namespace,
+ "meta": metaLabels,
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for consul resource attributes.
+type ResourceAttributesConfig struct {
+ CloudRegion rac.ResourceAttributeConfig `river:"cloud.region,block,optional"`
+ HostID rac.ResourceAttributeConfig `river:"host.id,block,optional"`
+ HostName rac.ResourceAttributeConfig `river:"host.name,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "cloud.region": r.CloudRegion.Convert(),
+ "host.id": r.HostID.Convert(),
+ "host.name": r.HostName.Convert(),
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/docker/config.go b/component/otelcol/processor/resourcedetection/internal/docker/config.go
new file mode 100644
index 000000000000..f8c1bdc39b82
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/docker/config.go
@@ -0,0 +1,46 @@
+package docker
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "docker"
+
+type Config struct {
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ HostName: rac.ResourceAttributeConfig{Enabled: true},
+ OsType: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for docker resource attributes.
+type ResourceAttributesConfig struct {
+ HostName rac.ResourceAttributeConfig `river:"host.name,block,optional"`
+ OsType rac.ResourceAttributeConfig `river:"os.type,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "host.name": r.HostName.Convert(),
+ "os.type": r.OsType.Convert(),
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/gcp/config.go b/component/otelcol/processor/resourcedetection/internal/gcp/config.go
new file mode 100644
index 000000000000..76395828a97c
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/gcp/config.go
@@ -0,0 +1,91 @@
+package gcp
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "gcp"
+
+type Config struct {
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ CloudAccountID: rac.ResourceAttributeConfig{Enabled: true},
+ CloudAvailabilityZone: rac.ResourceAttributeConfig{Enabled: true},
+ CloudPlatform: rac.ResourceAttributeConfig{Enabled: true},
+ CloudProvider: rac.ResourceAttributeConfig{Enabled: true},
+ CloudRegion: rac.ResourceAttributeConfig{Enabled: true},
+ FaasID: rac.ResourceAttributeConfig{Enabled: true},
+ FaasInstance: rac.ResourceAttributeConfig{Enabled: true},
+ FaasName: rac.ResourceAttributeConfig{Enabled: true},
+ FaasVersion: rac.ResourceAttributeConfig{Enabled: true},
+ GcpCloudRunJobExecution: rac.ResourceAttributeConfig{Enabled: true},
+ GcpCloudRunJobTaskIndex: rac.ResourceAttributeConfig{Enabled: true},
+ GcpGceInstanceHostname: rac.ResourceAttributeConfig{Enabled: false},
+ GcpGceInstanceName: rac.ResourceAttributeConfig{Enabled: false},
+ HostID: rac.ResourceAttributeConfig{Enabled: true},
+ HostName: rac.ResourceAttributeConfig{Enabled: true},
+ HostType: rac.ResourceAttributeConfig{Enabled: true},
+ K8sClusterName: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for gcp resource attributes.
+type ResourceAttributesConfig struct {
+ CloudAccountID rac.ResourceAttributeConfig `river:"cloud.account.id,block,optional"`
+ CloudAvailabilityZone rac.ResourceAttributeConfig `river:"cloud.availability_zone,block,optional"`
+ CloudPlatform rac.ResourceAttributeConfig `river:"cloud.platform,block,optional"`
+ CloudProvider rac.ResourceAttributeConfig `river:"cloud.provider,block,optional"`
+ CloudRegion rac.ResourceAttributeConfig `river:"cloud.region,block,optional"`
+ FaasID rac.ResourceAttributeConfig `river:"faas.id,block,optional"`
+ FaasInstance rac.ResourceAttributeConfig `river:"faas.instance,block,optional"`
+ FaasName rac.ResourceAttributeConfig `river:"faas.name,block,optional"`
+ FaasVersion rac.ResourceAttributeConfig `river:"faas.version,block,optional"`
+ GcpCloudRunJobExecution rac.ResourceAttributeConfig `river:"gcp.cloud_run.job.execution,block,optional"`
+ GcpCloudRunJobTaskIndex rac.ResourceAttributeConfig `river:"gcp.cloud_run.job.task_index,block,optional"`
+ GcpGceInstanceHostname rac.ResourceAttributeConfig `river:"gcp.gce.instance.hostname,block,optional"`
+ GcpGceInstanceName rac.ResourceAttributeConfig `river:"gcp.gce.instance.name,block,optional"`
+ HostID rac.ResourceAttributeConfig `river:"host.id,block,optional"`
+ HostName rac.ResourceAttributeConfig `river:"host.name,block,optional"`
+ HostType rac.ResourceAttributeConfig `river:"host.type,block,optional"`
+ K8sClusterName rac.ResourceAttributeConfig `river:"k8s.cluster.name,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "cloud.account.id": r.CloudAccountID.Convert(),
+ "cloud.availability_zone": r.CloudAvailabilityZone.Convert(),
+ "cloud.platform": r.CloudPlatform.Convert(),
+ "cloud.provider": r.CloudProvider.Convert(),
+ "cloud.region": r.CloudRegion.Convert(),
+ "faas.id": r.FaasID.Convert(),
+ "faas.instance": r.FaasInstance.Convert(),
+ "faas.name": r.FaasName.Convert(),
+ "faas.version": r.FaasVersion.Convert(),
+ "gcp.cloud_run.job.execution": r.GcpCloudRunJobExecution.Convert(),
+ "gcp.cloud_run.job.task_index": r.GcpCloudRunJobTaskIndex.Convert(),
+ "gcp.gce.instance.hostname": r.GcpGceInstanceHostname.Convert(),
+ "gcp.gce.instance.name": r.GcpGceInstanceName.Convert(),
+ "host.id": r.HostID.Convert(),
+ "host.name": r.HostName.Convert(),
+ "host.type": r.HostType.Convert(),
+ "k8s.cluster.name": r.K8sClusterName.Convert(),
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/heroku/config.go b/component/otelcol/processor/resourcedetection/internal/heroku/config.go
new file mode 100644
index 000000000000..6e7681269abb
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/heroku/config.go
@@ -0,0 +1,64 @@
+package heroku
+
+import (
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "heroku"
+
+type Config struct {
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ CloudProvider: rac.ResourceAttributeConfig{Enabled: true},
+ HerokuAppID: rac.ResourceAttributeConfig{Enabled: true},
+ HerokuDynoID: rac.ResourceAttributeConfig{Enabled: true},
+ HerokuReleaseCommit: rac.ResourceAttributeConfig{Enabled: true},
+ HerokuReleaseCreationTimestamp: rac.ResourceAttributeConfig{Enabled: true},
+ ServiceInstanceID: rac.ResourceAttributeConfig{Enabled: true},
+ ServiceName: rac.ResourceAttributeConfig{Enabled: true},
+ ServiceVersion: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for heroku resource attributes.
+type ResourceAttributesConfig struct {
+ CloudProvider rac.ResourceAttributeConfig `river:"cloud.provider,block,optional"`
+ HerokuAppID rac.ResourceAttributeConfig `river:"heroku.app.id,block,optional"`
+ HerokuDynoID rac.ResourceAttributeConfig `river:"heroku.dyno.id,block,optional"`
+ HerokuReleaseCommit rac.ResourceAttributeConfig `river:"heroku.release.commit,block,optional"`
+ HerokuReleaseCreationTimestamp rac.ResourceAttributeConfig `river:"heroku.release.creation_timestamp,block,optional"`
+ ServiceInstanceID rac.ResourceAttributeConfig `river:"service.instance.id,block,optional"`
+ ServiceName rac.ResourceAttributeConfig `river:"service.name,block,optional"`
+ ServiceVersion rac.ResourceAttributeConfig `river:"service.version,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "cloud.provider": r.CloudProvider.Convert(),
+ "heroku.app.id": r.HerokuAppID.Convert(),
+ "heroku.dyno.id": r.HerokuDynoID.Convert(),
+ "heroku.release.commit": r.HerokuReleaseCommit.Convert(),
+ "heroku.release.creation_timestamp": r.HerokuReleaseCreationTimestamp.Convert(),
+ "service.instance.id": r.ServiceInstanceID.Convert(),
+ "service.name": r.ServiceName.Convert(),
+ "service.version": r.ServiceVersion.Convert(),
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/k8snode/config.go b/component/otelcol/processor/resourcedetection/internal/k8snode/config.go
new file mode 100644
index 000000000000..8d47362eecb6
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/k8snode/config.go
@@ -0,0 +1,75 @@
+package k8snode
+
+import (
+ "github.com/grafana/agent/component/otelcol"
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "kubernetes_node"
+
+type Config struct {
+ KubernetesAPIConfig otelcol.KubernetesAPIConfig `river:",squash"`
+	// NodeFromEnvVar can be used to extract the node name from an
+	// environment variable. The value must be the name of the environment
+	// variable. This is useful when the node that the Agent will run on
+	// cannot be predicted. In such cases, the Kubernetes downward API can
+	// be used to expose the node name to each pod as an environment
+	// variable, and the detector can then read that value.
+	//
+	// For example, the node name can be passed to each Agent with the
+	// downward API as follows:
+	//
+	//   env:
+	//     - name: K8S_NODE_NAME
+	//       valueFrom:
+	//         fieldRef:
+	//           fieldPath: spec.nodeName
+	//
+	// NodeFromEnvVar can then be set to `K8S_NODE_NAME` so that the
+	// detector reports the node that the Agent is running on.
+	//
+	// For more information on the downward API, see:
+	// https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
+ NodeFromEnvVar string `river:"node_from_env_var,attr,optional"`
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ KubernetesAPIConfig: otelcol.KubernetesAPIConfig{
+ AuthType: otelcol.KubernetesAPIConfig_AuthType_None,
+ },
+ NodeFromEnvVar: "K8S_NODE_NAME",
+ ResourceAttributes: ResourceAttributesConfig{
+ K8sNodeName: rac.ResourceAttributeConfig{Enabled: true},
+ K8sNodeUID: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (c *Config) SetToDefault() {
+ *c = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+		// TODO: KubernetesAPIConfig is squashed; is there a better way to convert it?
+ "auth_type": args.KubernetesAPIConfig.AuthType,
+ "context": args.KubernetesAPIConfig.Context,
+ "node_from_env_var": args.NodeFromEnvVar,
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for k8snode resource attributes.
+type ResourceAttributesConfig struct {
+ K8sNodeName rac.ResourceAttributeConfig `river:"k8s.node.name,block,optional"`
+ K8sNodeUID rac.ResourceAttributeConfig `river:"k8s.node.uid,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "k8s.node.name": r.K8sNodeName.Convert(),
+ "k8s.node.uid": r.K8sNodeUID.Convert(),
+ }
+}
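
Based on the schema above, a `kubernetes_node` detector could be configured roughly as follows. This is a sketch, not authoritative documentation; it assumes the pod spec exposes `spec.nodeName` as `K8S_NODE_NAME` via the downward API, as described in the `NodeFromEnvVar` comment:

```river
otelcol.processor.resourcedetection "default" {
  detectors = ["kubernetes_node"]

  kubernetes_node {
    // Matches the environment variable injected by the downward API.
    node_from_env_var = "K8S_NODE_NAME"

    resource_attributes {
      k8s.node.name { enabled = true }
      k8s.node.uid  { enabled = false }
    }
  }

  output {}
}
```
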
diff --git a/component/otelcol/processor/resourcedetection/internal/openshift/config.go b/component/otelcol/processor/resourcedetection/internal/openshift/config.go
new file mode 100644
index 000000000000..362cd9bff459
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/openshift/config.go
@@ -0,0 +1,68 @@
+package openshift
+
+import (
+ "github.com/grafana/agent/component/otelcol"
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "openshift"
+
+// Config can contain user-specified inputs to overwrite default values.
+// See `openshift.go#NewDetector` for more information.
+type Config struct {
+	// Address is the address of the OpenShift API server.
+	Address string `river:"address,attr,optional"`
+
+	// Token is used to authenticate against the OpenShift API server.
+	Token string `river:"token,attr,optional"`
+
+	// TLSSettings contains the TLS configuration specific to the client
+	// connection used to communicate with the OpenShift API.
+ TLSSettings otelcol.TLSClientArguments `river:"tls,block,optional"`
+
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ ResourceAttributes: ResourceAttributesConfig{
+ CloudPlatform: rac.ResourceAttributeConfig{Enabled: true},
+ CloudProvider: rac.ResourceAttributeConfig{Enabled: true},
+ CloudRegion: rac.ResourceAttributeConfig{Enabled: true},
+ K8sClusterName: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (args *Config) SetToDefault() {
+ *args = DefaultArguments
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "address": args.Address,
+ "token": args.Token,
+ "tls": args.TLSSettings.Convert(),
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for openshift resource attributes.
+type ResourceAttributesConfig struct {
+ CloudPlatform rac.ResourceAttributeConfig `river:"cloud.platform,block,optional"`
+ CloudProvider rac.ResourceAttributeConfig `river:"cloud.provider,block,optional"`
+ CloudRegion rac.ResourceAttributeConfig `river:"cloud.region,block,optional"`
+ K8sClusterName rac.ResourceAttributeConfig `river:"k8s.cluster.name,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "cloud.platform": r.CloudPlatform.Convert(),
+ "cloud.provider": r.CloudProvider.Convert(),
+ "cloud.region": r.CloudRegion.Convert(),
+ "k8s.cluster.name": r.K8sClusterName.Convert(),
+ }
+}
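
As a sketch of how the block above maps to River configuration (the address and token values are placeholders, not real endpoints or credentials):

```river
otelcol.processor.resourcedetection "default" {
  detectors = ["openshift"]

  openshift {
    // Placeholder values; in practice these point at the cluster's
    // API server and a service account token.
    address = "https://api.example.openshift.com:6443"
    token   = "REDACTED"

    resource_attributes {
      cloud.region     { enabled = true }
      k8s.cluster.name { enabled = true }
    }
  }

  output {}
}
```
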
diff --git a/component/otelcol/processor/resourcedetection/internal/resource_attribute_config/resource_attribute_config.go b/component/otelcol/processor/resourcedetection/internal/resource_attribute_config/resource_attribute_config.go
new file mode 100644
index 000000000000..ff5540a2f539
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/resource_attribute_config/resource_attribute_config.go
@@ -0,0 +1,12 @@
+package resource_attribute_config
+
+// ResourceAttributeConfig configures whether a resource attribute is enabled.
+type ResourceAttributeConfig struct {
+ Enabled bool `river:"enabled,attr"`
+}
+
+func (r ResourceAttributeConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "enabled": r.Enabled,
+ }
+}
diff --git a/component/otelcol/processor/resourcedetection/internal/system/config.go b/component/otelcol/processor/resourcedetection/internal/system/config.go
new file mode 100644
index 000000000000..82e25cb45e97
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/internal/system/config.go
@@ -0,0 +1,95 @@
+package system
+
+import (
+ "fmt"
+
+ rac "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/resource_attribute_config"
+ "github.com/grafana/river"
+)
+
+const Name = "system"
+
+// Config defines user-specified configurations unique to the system detector.
+type Config struct {
+	// HostnameSources is a priority list of sources from which the hostname
+	// will be fetched. If fetching the hostname from one source fails,
+	// the next source in the list is tried.
+	HostnameSources []string `river:"hostname_sources,attr,optional"`
+
+ ResourceAttributes ResourceAttributesConfig `river:"resource_attributes,block,optional"`
+}
+
+// DefaultArguments holds default settings for Config.
+var DefaultArguments = Config{
+ HostnameSources: []string{"dns", "os"},
+ ResourceAttributes: ResourceAttributesConfig{
+ HostArch: rac.ResourceAttributeConfig{Enabled: false},
+ HostCPUCacheL2Size: rac.ResourceAttributeConfig{Enabled: false},
+ HostCPUFamily: rac.ResourceAttributeConfig{Enabled: false},
+ HostCPUModelID: rac.ResourceAttributeConfig{Enabled: false},
+ HostCPUModelName: rac.ResourceAttributeConfig{Enabled: false},
+ HostCPUStepping: rac.ResourceAttributeConfig{Enabled: false},
+ HostCPUVendorID: rac.ResourceAttributeConfig{Enabled: false},
+ HostID: rac.ResourceAttributeConfig{Enabled: false},
+ HostName: rac.ResourceAttributeConfig{Enabled: true},
+ OsDescription: rac.ResourceAttributeConfig{Enabled: false},
+ OsType: rac.ResourceAttributeConfig{Enabled: true},
+ },
+}
+
+var _ river.Defaulter = (*Config)(nil)
+
+// SetToDefault implements river.Defaulter.
+func (c *Config) SetToDefault() {
+ *c = DefaultArguments
+}
+
+// Validate returns an error if any of the configured hostname sources
+// is invalid.
+func (cfg *Config) Validate() error {
+ for _, hostnameSource := range cfg.HostnameSources {
+ switch hostnameSource {
+ case "os", "dns", "cname", "lookup":
+ // Valid option - nothing to do
+ default:
+ return fmt.Errorf("invalid hostname source: %s", hostnameSource)
+ }
+ }
+ return nil
+}
+
+func (args Config) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "hostname_sources": args.HostnameSources,
+ "resource_attributes": args.ResourceAttributes.Convert(),
+ }
+}
+
+// ResourceAttributesConfig provides config for system resource attributes.
+type ResourceAttributesConfig struct {
+ HostArch rac.ResourceAttributeConfig `river:"host.arch,block,optional"`
+ HostCPUCacheL2Size rac.ResourceAttributeConfig `river:"host.cpu.cache.l2.size,block,optional"`
+ HostCPUFamily rac.ResourceAttributeConfig `river:"host.cpu.family,block,optional"`
+ HostCPUModelID rac.ResourceAttributeConfig `river:"host.cpu.model.id,block,optional"`
+ HostCPUModelName rac.ResourceAttributeConfig `river:"host.cpu.model.name,block,optional"`
+ HostCPUStepping rac.ResourceAttributeConfig `river:"host.cpu.stepping,block,optional"`
+ HostCPUVendorID rac.ResourceAttributeConfig `river:"host.cpu.vendor.id,block,optional"`
+ HostID rac.ResourceAttributeConfig `river:"host.id,block,optional"`
+ HostName rac.ResourceAttributeConfig `river:"host.name,block,optional"`
+ OsDescription rac.ResourceAttributeConfig `river:"os.description,block,optional"`
+ OsType rac.ResourceAttributeConfig `river:"os.type,block,optional"`
+}
+
+func (r ResourceAttributesConfig) Convert() map[string]interface{} {
+ return map[string]interface{}{
+ "host.arch": r.HostArch.Convert(),
+ "host.cpu.cache.l2.size": r.HostCPUCacheL2Size.Convert(),
+ "host.cpu.family": r.HostCPUFamily.Convert(),
+ "host.cpu.model.id": r.HostCPUModelID.Convert(),
+ "host.cpu.model.name": r.HostCPUModelName.Convert(),
+ "host.cpu.stepping": r.HostCPUStepping.Convert(),
+ "host.cpu.vendor.id": r.HostCPUVendorID.Convert(),
+ "host.id": r.HostID.Convert(),
+ "host.name": r.HostName.Convert(),
+ "os.description": r.OsDescription.Convert(),
+ "os.type": r.OsType.Convert(),
+ }
+}
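
A minimal sketch of the system detector configured through this schema, restricting hostname lookup to the OS and enabling `host.id` on top of the defaults:

```river
otelcol.processor.resourcedetection "default" {
  detectors = ["system"]

  system {
    // Only sources accepted by Validate: "os", "dns", "cname", "lookup".
    hostname_sources = ["os"]

    resource_attributes {
      host.id   { enabled = true }
      host.name { enabled = true }
      os.type   { enabled = true }
    }
  }

  output {}
}
```
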
diff --git a/component/otelcol/processor/resourcedetection/resourcedetection.go b/component/otelcol/processor/resourcedetection/resourcedetection.go
new file mode 100644
index 000000000000..806d72c9d2e5
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/resourcedetection.go
@@ -0,0 +1,247 @@
+package resourcedetection
+
+import (
+ "fmt"
+ "time"
+
+ "github.com/grafana/agent/component"
+ "github.com/grafana/agent/component/otelcol"
+ "github.com/grafana/agent/component/otelcol/processor"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/aws/ec2"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/aws/ecs"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/aws/eks"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/aws/elasticbeanstalk"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/aws/lambda"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/azure"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/azure/aks"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/consul"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/docker"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/gcp"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/heroku"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/k8snode"
+ kubernetes_node "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/k8snode"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/openshift"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/system"
+ "github.com/grafana/river"
+ "github.com/mitchellh/mapstructure"
+ "github.com/open-telemetry/opentelemetry-collector-contrib/processor/resourcedetectionprocessor"
+ otelcomponent "go.opentelemetry.io/collector/component"
+ otelextension "go.opentelemetry.io/collector/extension"
+)
+
+func init() {
+ component.Register(component.Registration{
+ Name: "otelcol.processor.resourcedetection",
+ Args: Arguments{},
+ Exports: otelcol.ConsumerExports{},
+
+ Build: func(opts component.Options, args component.Arguments) (component.Component, error) {
+ fact := resourcedetectionprocessor.NewFactory()
+ return processor.New(opts, fact, args.(Arguments))
+ },
+ })
+}
+
+// Arguments configures the otelcol.processor.resourcedetection component.
+type Arguments struct {
+ // Detectors is an ordered list of named detectors that should be
+ // run to attempt to detect resource information.
+ Detectors []string `river:"detectors,attr,optional"`
+
+ // Override indicates whether any existing resource attributes
+ // should be overridden or preserved. Defaults to true.
+ Override bool `river:"override,attr,optional"`
+
+	// DetectorConfig holds settings specific to individual detectors.
+	DetectorConfig DetectorConfig `river:",squash"`
+
+	// Timeout is the timeout for resource detection. Defaults to 5s.
+	Timeout time.Duration `river:"timeout,attr,optional"`
+	// Client otelcol.HTTPClientArguments `river:",squash"`
+	// TODO: Decide whether to uncomment the line above and remove Timeout.
+	// Upstream only uses the full HTTP client settings for EC2 detection via
+	// ClientFromContext, which is a niche use case, so for now only Timeout
+	// is exposed in the Agent. If the remaining settings are exposed later,
+	// the documentation would have to note that they only apply to that
+	// specific use case.
+
+ // Output configures where to send processed data. Required.
+ Output *otelcol.ConsumerArguments `river:"output,block"`
+}
+
+// DetectorConfig contains user-specified configurations for individual detectors.
+type DetectorConfig struct {
+ // EC2Config contains user-specified configurations for the EC2 detector
+ EC2Config ec2.Config `river:"ec2,block,optional"`
+
+ // ECSConfig contains user-specified configurations for the ECS detector
+ ECSConfig ecs.Config `river:"ecs,block,optional"`
+
+ // EKSConfig contains user-specified configurations for the EKS detector
+ EKSConfig eks.Config `river:"eks,block,optional"`
+
+ // Elasticbeanstalk contains user-specified configurations for the elasticbeanstalk detector
+ ElasticbeanstalkConfig elasticbeanstalk.Config `river:"elasticbeanstalk,block,optional"`
+
+ // Lambda contains user-specified configurations for the lambda detector
+ LambdaConfig lambda.Config `river:"lambda,block,optional"`
+
+ // Azure contains user-specified configurations for the azure detector
+ AzureConfig azure.Config `river:"azure,block,optional"`
+
+ // Aks contains user-specified configurations for the aks detector
+ AksConfig aks.Config `river:"aks,block,optional"`
+
+ // ConsulConfig contains user-specified configurations for the Consul detector
+ ConsulConfig consul.Config `river:"consul,block,optional"`
+
+ // DockerConfig contains user-specified configurations for the docker detector
+ DockerConfig docker.Config `river:"docker,block,optional"`
+
+ // GcpConfig contains user-specified configurations for the gcp detector
+ GcpConfig gcp.Config `river:"gcp,block,optional"`
+
+ // HerokuConfig contains user-specified configurations for the heroku detector
+ HerokuConfig heroku.Config `river:"heroku,block,optional"`
+
+ // SystemConfig contains user-specified configurations for the System detector
+ SystemConfig system.Config `river:"system,block,optional"`
+
+ // OpenShift contains user-specified configurations for the Openshift detector
+ OpenShiftConfig openshift.Config `river:"openshift,block,optional"`
+
+ // KubernetesNode contains user-specified configurations for the K8SNode detector
+ KubernetesNodeConfig kubernetes_node.Config `river:"kubernetes_node,block,optional"`
+}
+
+var (
+ _ processor.Arguments = Arguments{}
+ _ river.Validator = (*Arguments)(nil)
+ _ river.Defaulter = (*Arguments)(nil)
+)
+
+// DefaultArguments holds default settings for Arguments.
+var DefaultArguments = Arguments{
+ Detectors: []string{"env"},
+ Override: true,
+ Timeout: 5 * time.Second,
+ DetectorConfig: DetectorConfig{
+ EC2Config: ec2.DefaultArguments,
+ ECSConfig: ecs.DefaultArguments,
+ EKSConfig: eks.DefaultArguments,
+ ElasticbeanstalkConfig: elasticbeanstalk.DefaultArguments,
+ LambdaConfig: lambda.DefaultArguments,
+ AzureConfig: azure.DefaultArguments,
+ AksConfig: aks.DefaultArguments,
+ ConsulConfig: consul.DefaultArguments,
+ DockerConfig: docker.DefaultArguments,
+ GcpConfig: gcp.DefaultArguments,
+ HerokuConfig: heroku.DefaultArguments,
+ SystemConfig: system.DefaultArguments,
+ OpenShiftConfig: openshift.DefaultArguments,
+ KubernetesNodeConfig: kubernetes_node.DefaultArguments,
+ },
+}
+
+// SetToDefault implements river.Defaulter.
+func (args *Arguments) SetToDefault() {
+ *args = DefaultArguments
+}
+
+// Validate implements river.Validator.
+func (args *Arguments) Validate() error {
+ if len(args.Detectors) == 0 {
+ return fmt.Errorf("at least one detector must be specified")
+ }
+
+ for _, detector := range args.Detectors {
+ switch detector {
+ case "env",
+ ec2.Name,
+ ecs.Name,
+ eks.Name,
+ elasticbeanstalk.Name,
+ lambda.Name,
+ azure.Name,
+ aks.Name,
+ consul.Name,
+ docker.Name,
+ gcp.Name,
+ heroku.Name,
+ system.Name,
+ openshift.Name,
+ k8snode.Name:
+ // Valid option - nothing to do
+ default:
+ return fmt.Errorf("invalid detector: %s", detector)
+ }
+ }
+
+ return nil
+}
+
+func (args Arguments) ConvertDetectors() []string {
+ if args.Detectors == nil {
+ return nil
+ }
+
+ res := make([]string, 0, len(args.Detectors))
+ for _, detector := range args.Detectors {
+ switch detector {
+ case k8snode.Name:
+ res = append(res, "k8snode")
+ default:
+ res = append(res, detector)
+ }
+ }
+ return res
+}
+
+// Convert implements processor.Arguments.
+func (args Arguments) Convert() (otelcomponent.Config, error) {
+ input := make(map[string]interface{})
+
+ input["detectors"] = args.ConvertDetectors()
+ input["override"] = args.Override
+ input["timeout"] = args.Timeout
+
+ input["ec2"] = args.DetectorConfig.EC2Config.Convert()
+ input["ecs"] = args.DetectorConfig.ECSConfig.Convert()
+ input["eks"] = args.DetectorConfig.EKSConfig.Convert()
+ input["elasticbeanstalk"] = args.DetectorConfig.ElasticbeanstalkConfig.Convert()
+ input["lambda"] = args.DetectorConfig.LambdaConfig.Convert()
+ input["azure"] = args.DetectorConfig.AzureConfig.Convert()
+ input["aks"] = args.DetectorConfig.AksConfig.Convert()
+ input["consul"] = args.DetectorConfig.ConsulConfig.Convert()
+ input["docker"] = args.DetectorConfig.DockerConfig.Convert()
+ input["gcp"] = args.DetectorConfig.GcpConfig.Convert()
+ input["heroku"] = args.DetectorConfig.HerokuConfig.Convert()
+ input["system"] = args.DetectorConfig.SystemConfig.Convert()
+ input["openshift"] = args.DetectorConfig.OpenShiftConfig.Convert()
+ input["k8snode"] = args.DetectorConfig.KubernetesNodeConfig.Convert()
+
+ var result resourcedetectionprocessor.Config
+ err := mapstructure.Decode(input, &result)
+
+ if err != nil {
+ return nil, err
+ }
+
+ return &result, nil
+}
+
+// Extensions implements processor.Arguments.
+func (args Arguments) Extensions() map[otelcomponent.ID]otelextension.Extension {
+ return nil
+}
+
+// Exporters implements processor.Arguments.
+func (args Arguments) Exporters() map[otelcomponent.DataType]map[otelcomponent.ID]otelcomponent.Component {
+ return nil
+}
+
+// NextConsumers implements processor.Arguments.
+func (args Arguments) NextConsumers() *otelcol.ConsumerArguments {
+ return args.Output
+}
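
Putting the arguments together, a sketch of a component instance that runs two detectors with a non-default timeout (the values here are illustrative only):

```river
otelcol.processor.resourcedetection "default" {
  // Override controls whether detected attributes replace resource
  // attributes already present on the telemetry; it defaults to true.
  detectors = ["env", "system"]
  timeout   = "10s"

  system {
    hostname_sources = ["os"]
  }

  output {}
}
```
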
diff --git a/component/otelcol/processor/resourcedetection/resourcedetection_test.go b/component/otelcol/processor/resourcedetection/resourcedetection_test.go
new file mode 100644
index 000000000000..6fbbf0280e06
--- /dev/null
+++ b/component/otelcol/processor/resourcedetection/resourcedetection_test.go
@@ -0,0 +1,1527 @@
+package resourcedetection_test
+
+import (
+ "testing"
+ "time"
+
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/aws/ec2"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/aws/ecs"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/aws/eks"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/aws/elasticbeanstalk"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/aws/lambda"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/azure"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/azure/aks"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/consul"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/docker"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/gcp"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/heroku"
+ kubernetes_node "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/k8snode"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/openshift"
+ "github.com/grafana/agent/component/otelcol/processor/resourcedetection/internal/system"
+ "github.com/grafana/river"
+ "github.com/mitchellh/mapstructure"
+ "github.com/open-telemetry/opentelemetry-collector-contrib/processor/resourcedetectionprocessor"
+ "github.com/stretchr/testify/require"
+)
+
+func TestArguments_UnmarshalRiver(t *testing.T) {
+ tests := []struct {
+ testName string
+ cfg string
+ expected map[string]interface{}
+ errorMsg string
+ }{
+ {
+ testName: "err_no_detector",
+ cfg: `
+ detectors = []
+ output {}
+ `,
+ errorMsg: "at least one detector must be specified",
+ },
+ {
+ testName: "invalid_detector",
+ cfg: `
+ detectors = ["non-existent-detector"]
+ output {}
+ `,
+ errorMsg: "invalid detector: non-existent-detector",
+ },
+ {
+ testName: "invalid_detector_and_all_valid_ones",
+ cfg: `
+ detectors = ["non-existent-detector2", "env", "ec2", "ecs", "eks", "elasticbeanstalk", "lambda", "azure", "aks", "consul", "docker", "gcp", "heroku", "system", "openshift", "kubernetes_node"]
+ output {}
+ `,
+ errorMsg: "invalid detector: non-existent-detector2",
+ },
+ {
+ testName: "all_detectors_with_defaults",
+ cfg: `
+ detectors = ["env", "ec2", "ecs", "eks", "elasticbeanstalk", "lambda", "azure", "aks", "consul", "docker", "gcp", "heroku", "system", "openshift", "kubernetes_node"]
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"env", "ec2", "ecs", "eks", "elasticbeanstalk", "lambda", "azure", "aks", "consul", "docker", "gcp", "heroku", "system", "openshift", "k8snode"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "default_detector",
+ cfg: `
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"env"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "ec2_defaults",
+ cfg: `
+ detectors = ["ec2"]
+ ec2 {
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"ec2"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": map[string]interface{}{
+ "tags": []string{},
+ "resource_attributes": map[string]interface{}{
+ "cloud.account.id": map[string]interface{}{"enabled": true},
+ "cloud.availability_zone": map[string]interface{}{"enabled": true},
+ "cloud.platform": map[string]interface{}{"enabled": true},
+ "cloud.provider": map[string]interface{}{"enabled": true},
+ "cloud.region": map[string]interface{}{"enabled": true},
+ "host.id": map[string]interface{}{"enabled": true},
+ "host.image.id": map[string]interface{}{"enabled": true},
+ "host.name": map[string]interface{}{"enabled": true},
+ "host.type": map[string]interface{}{"enabled": true},
+ },
+ },
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "ec2_defaults_empty_resource_attributes",
+ cfg: `
+ detectors = ["ec2"]
+ ec2 {
+ resource_attributes {}
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"ec2"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": map[string]interface{}{
+ "tags": []string{},
+ "resource_attributes": map[string]interface{}{
+ "cloud.account.id": map[string]interface{}{"enabled": true},
+ "cloud.availability_zone": map[string]interface{}{"enabled": true},
+ "cloud.platform": map[string]interface{}{"enabled": true},
+ "cloud.provider": map[string]interface{}{"enabled": true},
+ "cloud.region": map[string]interface{}{"enabled": true},
+ "host.id": map[string]interface{}{"enabled": true},
+ "host.image.id": map[string]interface{}{"enabled": true},
+ "host.name": map[string]interface{}{"enabled": true},
+ "host.type": map[string]interface{}{"enabled": true},
+ },
+ },
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "ec2_explicit",
+ cfg: `
+ detectors = ["ec2"]
+ ec2 {
+ tags = ["^tag1$", "^tag2$", "^label.*$"]
+ resource_attributes {
+ cloud.account.id { enabled = true }
+ cloud.availability_zone { enabled = true }
+ cloud.platform { enabled = true }
+ cloud.provider { enabled = true }
+ cloud.region { enabled = true }
+ host.id { enabled = true }
+ host.image.id { enabled = false }
+ host.name { enabled = false }
+ host.type { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"ec2"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": map[string]interface{}{
+ "tags": []string{"^tag1$", "^tag2$", "^label.*$"},
+ "resource_attributes": map[string]interface{}{
+ "cloud.account.id": map[string]interface{}{"enabled": true},
+ "cloud.availability_zone": map[string]interface{}{"enabled": true},
+ "cloud.platform": map[string]interface{}{"enabled": true},
+ "cloud.provider": map[string]interface{}{"enabled": true},
+ "cloud.region": map[string]interface{}{"enabled": true},
+ "host.id": map[string]interface{}{"enabled": true},
+ "host.image.id": map[string]interface{}{"enabled": false},
+ "host.name": map[string]interface{}{"enabled": false},
+ "host.type": map[string]interface{}{"enabled": false},
+ },
+ },
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "ecs_defaults",
+ cfg: `
+ detectors = ["ecs"]
+ ecs {
+ resource_attributes {
+ aws.ecs.cluster.arn { enabled = true }
+ aws.ecs.launchtype { enabled = true }
+ aws.ecs.task.arn { enabled = true }
+ aws.ecs.task.family { enabled = true }
+ aws.ecs.task.revision { enabled = true }
+ aws.log.group.arns { enabled = true }
+ aws.log.group.names { enabled = false }
+ // aws.log.stream.arns { enabled = true }
+ // aws.log.stream.names { enabled = true }
+ // cloud.account.id { enabled = true }
+ // cloud.availability_zone { enabled = true }
+ // cloud.platform { enabled = true }
+ // cloud.provider { enabled = true }
+ // cloud.region { enabled = true }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"ecs"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ecs": map[string]interface{}{
+ "tags": []string{},
+ "resource_attributes": map[string]interface{}{
+ "aws.ecs.cluster.arn": map[string]interface{}{"enabled": true},
+ "aws.ecs.launchtype": map[string]interface{}{"enabled": true},
+ "aws.ecs.task.arn": map[string]interface{}{"enabled": true},
+ "aws.ecs.task.family": map[string]interface{}{"enabled": true},
+ "aws.ecs.task.revision": map[string]interface{}{"enabled": true},
+ "aws.log.group.arns": map[string]interface{}{"enabled": true},
+ "aws.log.group.names": map[string]interface{}{"enabled": false},
+ "aws.log.stream.arns": map[string]interface{}{"enabled": true},
+ "aws.log.stream.names": map[string]interface{}{"enabled": true},
+ "cloud.account.id": map[string]interface{}{"enabled": true},
+ "cloud.availability_zone": map[string]interface{}{"enabled": true},
+ "cloud.platform": map[string]interface{}{"enabled": true},
+ "cloud.provider": map[string]interface{}{"enabled": true},
+ "cloud.region": map[string]interface{}{"enabled": true},
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "ecs_explicit",
+ cfg: `
+ detectors = ["ecs"]
+ ecs {
+ resource_attributes {
+ aws.ecs.cluster.arn { enabled = true }
+ aws.ecs.launchtype { enabled = true }
+ aws.ecs.task.arn { enabled = true }
+ aws.ecs.task.family { enabled = true }
+ aws.ecs.task.revision { enabled = true }
+ aws.log.group.arns { enabled = true }
+ aws.log.group.names { enabled = false }
+ // aws.log.stream.arns { enabled = true }
+ // aws.log.stream.names { enabled = true }
+ // cloud.account.id { enabled = true }
+ // cloud.availability_zone { enabled = true }
+ // cloud.platform { enabled = true }
+ // cloud.provider { enabled = true }
+ // cloud.region { enabled = true }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"ecs"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ecs": map[string]interface{}{
+ "tags": []string{},
+ "resource_attributes": map[string]interface{}{
+ "aws.ecs.cluster.arn": map[string]interface{}{"enabled": true},
+ "aws.ecs.launchtype": map[string]interface{}{"enabled": true},
+ "aws.ecs.task.arn": map[string]interface{}{"enabled": true},
+ "aws.ecs.task.family": map[string]interface{}{"enabled": true},
+ "aws.ecs.task.revision": map[string]interface{}{"enabled": true},
+ "aws.log.group.arns": map[string]interface{}{"enabled": true},
+ "aws.log.group.names": map[string]interface{}{"enabled": false},
+ "aws.log.stream.arns": map[string]interface{}{"enabled": true},
+ "aws.log.stream.names": map[string]interface{}{"enabled": true},
+ "cloud.account.id": map[string]interface{}{"enabled": true},
+ "cloud.availability_zone": map[string]interface{}{"enabled": true},
+ "cloud.platform": map[string]interface{}{"enabled": true},
+ "cloud.provider": map[string]interface{}{"enabled": true},
+ "cloud.region": map[string]interface{}{"enabled": true},
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "eks_defaults",
+ cfg: `
+ detectors = ["eks"]
+ eks {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"eks"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "eks": map[string]interface{}{
+ "tags": []string{},
+ "resource_attributes": map[string]interface{}{
+ "cloud.platform": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.provider": map[string]interface{}{
+ "enabled": true,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "eks_explicit",
+ cfg: `
+ detectors = ["eks"]
+ eks {
+ resource_attributes {
+ cloud.platform { enabled = true }
+ cloud.provider { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"eks"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "eks": map[string]interface{}{
+ "tags": []string{},
+ "resource_attributes": map[string]interface{}{
+ "cloud.platform": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.provider": map[string]interface{}{
+ "enabled": false,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "azure_defaults",
+ cfg: `
+ detectors = ["azure"]
+ azure {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"azure"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "azure": map[string]interface{}{
+ "tags": []string{},
+ "resource_attributes": map[string]interface{}{
+ "azure.resourcegroup.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "azure.vm.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "azure.vm.scaleset.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "azure.vm.size": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.account.id": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.platform": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.provider": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.region": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.id": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.name": map[string]interface{}{
+ "enabled": true,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "azure_explicit",
+ cfg: `
+ detectors = ["azure"]
+ azure {
+ resource_attributes {
+ azure.resourcegroup.name { enabled = true }
+ azure.vm.name { enabled = true }
+ azure.vm.scaleset.name { enabled = true }
+ azure.vm.size { enabled = true }
+ cloud.account.id { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"azure"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "azure": map[string]interface{}{
+ "tags": []string{},
+ "resource_attributes": map[string]interface{}{
+ "azure.resourcegroup.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "azure.vm.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "azure.vm.scaleset.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "azure.vm.size": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.account.id": map[string]interface{}{
+ "enabled": false,
+ },
+ "cloud.platform": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.provider": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.region": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.id": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.name": map[string]interface{}{
+ "enabled": true,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "aks_defaults",
+ cfg: `
+ detectors = ["aks"]
+ aks {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"aks"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "aks": map[string]interface{}{
+ "tags": []string{},
+ "resource_attributes": map[string]interface{}{
+ "cloud.platform": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.provider": map[string]interface{}{
+ "enabled": true,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "aks_explicit",
+ cfg: `
+ detectors = ["aks"]
+ aks {
+ resource_attributes {
+ cloud.platform { enabled = true }
+ cloud.provider { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"aks"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "aks": map[string]interface{}{
+ "tags": []string{},
+ "resource_attributes": map[string]interface{}{
+ "cloud.platform": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.provider": map[string]interface{}{
+ "enabled": false,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "gcp_defaults",
+ cfg: `
+ detectors = ["gcp"]
+ gcp {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"gcp"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "gcp_explicit",
+ cfg: `
+ detectors = ["gcp"]
+ gcp {
+ resource_attributes {
+ cloud.account.id { enabled = true }
+ cloud.availability_zone { enabled = true }
+ cloud.platform { enabled = true }
+ cloud.provider { enabled = true }
+ cloud.region { enabled = false }
+ faas.id { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"gcp"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "gcp": map[string]interface{}{
+ "resource_attributes": map[string]interface{}{
+ "cloud.account.id": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.availability_zone": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.platform": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.provider": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.region": map[string]interface{}{
+ "enabled": false,
+ },
+ "faas.id": map[string]interface{}{
+ "enabled": false,
+ },
+ "faas.instance": map[string]interface{}{
+ "enabled": true,
+ },
+ "faas.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "faas.version": map[string]interface{}{
+ "enabled": true,
+ },
+ "gcp.cloud_run.job.execution": map[string]interface{}{
+ "enabled": true,
+ },
+ "gcp.cloud_run.job.task_index": map[string]interface{}{
+ "enabled": true,
+ },
+ "gcp.gce.instance.hostname": map[string]interface{}{
+ "enabled": false,
+ },
+ "gcp.gce.instance.name": map[string]interface{}{
+ "enabled": false,
+ },
+ "host.id": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.type": map[string]interface{}{
+ "enabled": true,
+ },
+ "k8s.cluster.name": map[string]interface{}{
+ "enabled": true,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "docker_defaults",
+ cfg: `
+ detectors = ["docker"]
+ docker {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"docker"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "docker_explicit",
+ cfg: `
+ detectors = ["docker"]
+ docker {
+ resource_attributes {
+ host.name { enabled = true }
+ os.type { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"docker"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "docker": map[string]interface{}{
+ "resource_attributes": map[string]interface{}{
+ "host.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "os.type": map[string]interface{}{
+ "enabled": false,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "lambda_defaults",
+ cfg: `
+ detectors = ["lambda"]
+ lambda {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"lambda"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "lambda_explicit",
+ cfg: `
+ detectors = ["lambda"]
+ lambda {
+ resource_attributes {
+ aws.log.group.names { enabled = true }
+ aws.log.stream.names { enabled = true }
+ cloud.platform { enabled = true }
+ cloud.provider { enabled = false }
+ cloud.region { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"lambda"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "lambda": map[string]interface{}{
+ "resource_attributes": map[string]interface{}{
+ "aws.log.group.names": map[string]interface{}{
+ "enabled": true,
+ },
+ "aws.log.stream.names": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.platform": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.provider": map[string]interface{}{
+ "enabled": false,
+ },
+ "cloud.region": map[string]interface{}{
+ "enabled": false,
+ },
+ "faas.instance": map[string]interface{}{
+ "enabled": true,
+ },
+ "faas.max_memory": map[string]interface{}{
+ "enabled": true,
+ },
+ "faas.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "faas.version": map[string]interface{}{
+ "enabled": true,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "elasticbeanstalk_defaults",
+ cfg: `
+ detectors = ["elasticbeanstalk"]
+ elasticbeanstalk {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"elasticbeanstalk"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "elasticbeanstalk_explicit",
+ cfg: `
+ detectors = ["elasticbeanstalk"]
+ elasticbeanstalk {
+ resource_attributes {
+ cloud.platform { enabled = true }
+ cloud.provider { enabled = true }
+ deployment.environment { enabled = true }
+ service.instance.id { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"elasticbeanstalk"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "elasticbeanstalk": map[string]interface{}{
+ "resource_attributes": map[string]interface{}{
+ "cloud.platform": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.provider": map[string]interface{}{
+ "enabled": true,
+ },
+ "deployment.environment": map[string]interface{}{
+ "enabled": true,
+ },
+ "service.instance.id": map[string]interface{}{
+ "enabled": false,
+ },
+ "service.version": map[string]interface{}{
+ "enabled": true,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "consul_defaults",
+ cfg: `
+ detectors = ["consul"]
+ consul {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"consul"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "consul_explicit",
+ cfg: `
+ detectors = ["consul"]
+ consul {
+ address = "localhost:8500"
+ datacenter = "dc1"
+ token = "secret_token"
+ namespace = "test_namespace"
+ meta = ["test"]
+ resource_attributes {
+ cloud.region { enabled = false }
+ host.id { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"consul"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "consul": map[string]interface{}{
+ "address": "localhost:8500",
+ "datacenter": "dc1",
+ "token": "secret_token",
+ "namespace": "test_namespace",
+ "meta": map[string]string{"test": ""},
+ "resource_attributes": map[string]interface{}{
+ "cloud.region": map[string]interface{}{
+ "enabled": false,
+ },
+ "host.id": map[string]interface{}{
+ "enabled": false,
+ },
+ "host.name": map[string]interface{}{
+ "enabled": true,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "heroku_defaults",
+ cfg: `
+ detectors = ["heroku"]
+ heroku {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"heroku"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "heroku_explicit",
+ cfg: `
+ detectors = ["heroku"]
+ heroku {
+ resource_attributes {
+ cloud.provider { enabled = true }
+ heroku.app.id { enabled = true }
+ heroku.dyno.id { enabled = true }
+ heroku.release.commit { enabled = true }
+ heroku.release.creation_timestamp { enabled = false }
+ service.instance.id { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"heroku"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "heroku": map[string]interface{}{
+ "resource_attributes": map[string]interface{}{
+ "cloud.provider": map[string]interface{}{
+ "enabled": true,
+ },
+ "heroku.app.id": map[string]interface{}{
+ "enabled": true,
+ },
+ "heroku.dyno.id": map[string]interface{}{
+ "enabled": true,
+ },
+ "heroku.release.commit": map[string]interface{}{
+ "enabled": true,
+ },
+ "heroku.release.creation_timestamp": map[string]interface{}{
+ "enabled": false,
+ },
+ "service.instance.id": map[string]interface{}{
+ "enabled": false,
+ },
+ "service.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "service.version": map[string]interface{}{
+ "enabled": true,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "kubernetes_node_defaults",
+ cfg: `
+ detectors = ["kubernetes_node"]
+ kubernetes_node {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"k8snode"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "kubernetes_node_explicit",
+ cfg: `
+ detectors = ["kubernetes_node"]
+ kubernetes_node {
+ auth_type = "kubeConfig"
+ context = "fake_ctx"
+ node_from_env_var = "MY_CUSTOM_VAR"
+ resource_attributes {
+ k8s.node.name { enabled = true }
+ k8s.node.uid { enabled = false }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"k8snode"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "k8snode": map[string]interface{}{
+ "auth_type": "kubeConfig",
+ "context": "fake_ctx",
+ "node_from_env_var": "MY_CUSTOM_VAR",
+ "resource_attributes": map[string]interface{}{
+ "k8s.node.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "k8s.node.uid": map[string]interface{}{
+ "enabled": false,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "system_invalid_hostname_source",
+ cfg: `
+ detectors = ["system"]
+ system {
+ hostname_sources = ["asdf"]
+ resource_attributes { }
+ }
+ output {}
+ `,
+ errorMsg: "invalid hostname source: asdf",
+ },
+ {
+ testName: "system_defaults",
+ cfg: `
+ detectors = ["system"]
+ system {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"system"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "system_explicit",
+ cfg: `
+ detectors = ["system"]
+ system {
+ hostname_sources = ["cname","lookup"]
+ resource_attributes {
+ host.arch { enabled = true }
+ host.cpu.cache.l2.size { enabled = true }
+ host.cpu.family { enabled = true }
+ host.cpu.model.id { enabled = true }
+ host.cpu.model.name { enabled = true }
+ host.cpu.stepping { enabled = true }
+ host.cpu.vendor.id { enabled = false }
+ host.id { enabled = false }
+ host.name { enabled = false }
+ // os.description { enabled = false }
+ // os.type { enabled = true }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"system"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "system": map[string]interface{}{
+ "hostname_sources": []string{"cname", "lookup"},
+ "resource_attributes": map[string]interface{}{
+ "host.arch": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.cpu.cache.l2.size": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.cpu.family": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.cpu.model.id": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.cpu.model.name": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.cpu.stepping": map[string]interface{}{
+ "enabled": true,
+ },
+ "host.cpu.vendor.id": map[string]interface{}{
+ "enabled": false,
+ },
+ "host.id": map[string]interface{}{
+ "enabled": false,
+ },
+ "host.name": map[string]interface{}{
+ "enabled": false,
+ },
+ "os.description": map[string]interface{}{
+ "enabled": false,
+ },
+ "os.type": map[string]interface{}{
+ "enabled": true,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "openshift_default",
+ cfg: `
+ detectors = ["openshift"]
+ openshift {}
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"openshift"},
+ "timeout": 5 * time.Second,
+ "override": true,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "openshift_explicit",
+ cfg: `
+ detectors = ["openshift"]
+ timeout = "7s"
+ override = false
+ openshift {
+ address = "127.0.0.1:4444"
+ token = "some_token"
+ tls {
+ insecure = true
+ }
+ resource_attributes {
+ cloud.platform {
+ enabled = true
+ }
+ cloud.provider {
+ enabled = true
+ }
+ cloud.region {
+ enabled = false
+ }
+ k8s.cluster.name {
+ enabled = false
+ }
+ }
+ }
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"openshift"},
+ "timeout": 7 * time.Second,
+ "override": false,
+ "openshift": map[string]interface{}{
+ "address": "127.0.0.1:4444",
+ "token": "some_token",
+ "tls": map[string]interface{}{
+ "insecure": true,
+ },
+ "resource_attributes": map[string]interface{}{
+ "cloud.platform": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.provider": map[string]interface{}{
+ "enabled": true,
+ },
+ "cloud.region": map[string]interface{}{
+ "enabled": false,
+ },
+ "k8s.cluster.name": map[string]interface{}{
+ "enabled": false,
+ },
+ },
+ },
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ {
+ testName: "env",
+ cfg: `
+ detectors = ["env"]
+ timeout = "7s"
+ override = false
+ output {}
+ `,
+ expected: map[string]interface{}{
+ "detectors": []string{"env"},
+ "timeout": 7 * time.Second,
+ "override": false,
+ "ec2": ec2.DefaultArguments.Convert(),
+ "ecs": ecs.DefaultArguments.Convert(),
+ "eks": eks.DefaultArguments.Convert(),
+ "elasticbeanstalk": elasticbeanstalk.DefaultArguments.Convert(),
+ "lambda": lambda.DefaultArguments.Convert(),
+ "azure": azure.DefaultArguments.Convert(),
+ "aks": aks.DefaultArguments.Convert(),
+ "consul": consul.DefaultArguments.Convert(),
+ "docker": docker.DefaultArguments.Convert(),
+ "gcp": gcp.DefaultArguments.Convert(),
+ "heroku": heroku.DefaultArguments.Convert(),
+ "system": system.DefaultArguments.Convert(),
+ "openshift": openshift.DefaultArguments.Convert(),
+ "k8snode": kubernetes_node.DefaultArguments.Convert(),
+ },
+ },
+ }
+
+ for _, tc := range tests {
+ t.Run(tc.testName, func(t *testing.T) {
+ var args resourcedetection.Arguments
+ err := river.Unmarshal([]byte(tc.cfg), &args)
+ if tc.errorMsg != "" {
+ require.ErrorContains(t, err, tc.errorMsg)
+ return
+ }
+
+ require.NoError(t, err)
+
+ actualPtr, err := args.Convert()
+ require.NoError(t, err)
+
+ actual := actualPtr.(*resourcedetectionprocessor.Config)
+
+ var expected resourcedetectionprocessor.Config
+ err = mapstructure.Decode(tc.expected, &expected)
+ require.NoError(t, err)
+
+ require.Equal(t, expected, *actual)
+ })
+ }
+}
diff --git a/component/pyroscope/ebpf/args.go b/component/pyroscope/ebpf/args.go
index facf9129d6ba..c4c444b917f2 100644
--- a/component/pyroscope/ebpf/args.go
+++ b/component/pyroscope/ebpf/args.go
@@ -10,8 +10,6 @@ import (
type Arguments struct {
ForwardTo []pyroscope.Appendable `river:"forward_to,attr"`
Targets []discovery.Target `river:"targets,attr,optional"`
- DefaultTarget discovery.Target `river:"default_target,attr,optional"` // undocumented, keeping it until we have other sd
- TargetsOnly bool `river:"targets_only,attr,optional"` // undocumented, keeping it until we have other sd
CollectInterval time.Duration `river:"collect_interval,attr,optional"`
SampleRate int `river:"sample_rate,attr,optional"`
PidCacheSize int `river:"pid_cache_size,attr,optional"`
diff --git a/component/pyroscope/ebpf/ebpf_linux.go b/component/pyroscope/ebpf/ebpf_linux.go
index b8b1afbecf59..8d201ac488f1 100644
--- a/component/pyroscope/ebpf/ebpf_linux.go
+++ b/component/pyroscope/ebpf/ebpf_linux.go
@@ -82,7 +82,6 @@ func defaultArguments() Arguments {
CacheRounds: 3,
CollectUserProfile: true,
CollectKernelProfile: true,
- TargetsOnly: true,
Demangle: "none",
PythonEnabled: true,
}
@@ -226,8 +225,7 @@ func targetsOptionFromArgs(args Arguments) sd.TargetsOptions {
}
return sd.TargetsOptions{
Targets: targets,
- DefaultTarget: sd.DiscoveryTarget(args.DefaultTarget),
- TargetsOnly: args.TargetsOnly,
+ TargetsOnly: true,
ContainerCacheSize: args.ContainerIDCacheSize,
}
}
diff --git a/component/pyroscope/java/asprof/asprof_linux_amd64.go b/component/pyroscope/java/asprof/asprof_linux_amd64.go
index 6b7f0a6f74ca..7d405539cda6 100644
--- a/component/pyroscope/java/asprof/asprof_linux_amd64.go
+++ b/component/pyroscope/java/asprof/asprof_linux_amd64.go
@@ -6,7 +6,7 @@ import (
_ "embed"
)
-//go:embed async-profiler-3.0-ea-linux-x64.tar.gz
+//go:embed async-profiler-3.0-linux-x64.tar.gz
var embededArchiveData []byte
// asprof
diff --git a/component/pyroscope/java/asprof/asprof_linux_arm64.go b/component/pyroscope/java/asprof/asprof_linux_arm64.go
index ce55bdb7ffe8..e6978f02b995 100644
--- a/component/pyroscope/java/asprof/asprof_linux_arm64.go
+++ b/component/pyroscope/java/asprof/asprof_linux_arm64.go
@@ -6,7 +6,7 @@ import (
_ "embed"
)
-//go:embed async-profiler-3.0-ea-linux-arm64.tar.gz
+//go:embed async-profiler-3.0-linux-arm64.tar.gz
var embededArchiveData []byte
// asprof
diff --git a/component/pyroscope/java/asprof/async-profiler-3.0-ea-linux-arm64.tar.gz b/component/pyroscope/java/asprof/async-profiler-3.0-ea-linux-arm64.tar.gz
deleted file mode 100644
index 425600954162..000000000000
Binary files a/component/pyroscope/java/asprof/async-profiler-3.0-ea-linux-arm64.tar.gz and /dev/null differ
diff --git a/component/pyroscope/java/asprof/async-profiler-3.0-ea-linux-x64.tar.gz b/component/pyroscope/java/asprof/async-profiler-3.0-ea-linux-x64.tar.gz
deleted file mode 100644
index a9b70fdf87d9..000000000000
Binary files a/component/pyroscope/java/asprof/async-profiler-3.0-ea-linux-x64.tar.gz and /dev/null differ
diff --git a/component/pyroscope/java/asprof/async-profiler-3.0-linux-arm64.tar.gz b/component/pyroscope/java/asprof/async-profiler-3.0-linux-arm64.tar.gz
new file mode 100644
index 000000000000..fcab1a963d7a
Binary files /dev/null and b/component/pyroscope/java/asprof/async-profiler-3.0-linux-arm64.tar.gz differ
diff --git a/component/pyroscope/java/asprof/async-profiler-3.0-linux-x64.tar.gz b/component/pyroscope/java/asprof/async-profiler-3.0-linux-x64.tar.gz
new file mode 100644
index 000000000000..c4386b482792
Binary files /dev/null and b/component/pyroscope/java/asprof/async-profiler-3.0-linux-x64.tar.gz differ
diff --git a/component/pyroscope/java/loop.go b/component/pyroscope/java/loop.go
index aee4b8554770..918e97751563 100644
--- a/component/pyroscope/java/loop.go
+++ b/component/pyroscope/java/loop.go
@@ -152,6 +152,10 @@ func (p *profilingLoop) push(jfrBytes []byte, startTime time.Time, endTime time.
for _, l := range jfrpprofPyroscope.Labels(target, profiles.JFREvent, req.Metric, "", spyName) {
ls.Set(l.Name, l.Value)
}
+ if ls.Get(labelServiceName) == "" {
+ ls.Set(labelServiceName, inferServiceName(target))
+ }
+
profile, err := req.Profile.MarshalVT()
if err != nil {
_ = l.Log("msg", "failed to marshal profile", "err", err)
diff --git a/component/pyroscope/java/target.go b/component/pyroscope/java/target.go
new file mode 100644
index 000000000000..25a1defebd54
--- /dev/null
+++ b/component/pyroscope/java/target.go
@@ -0,0 +1,35 @@
+package java
+
+import (
+ "fmt"
+
+ "github.com/grafana/agent/component/discovery"
+)
+
+const (
+ labelServiceName = "service_name"
+ labelServiceNameK8s = "__meta_kubernetes_pod_annotation_pyroscope_io_service_name"
+)
+
+func inferServiceName(target discovery.Target) string {
+ k8sServiceName := target[labelServiceNameK8s]
+ if k8sServiceName != "" {
+ return k8sServiceName
+ }
+ k8sNamespace := target["__meta_kubernetes_namespace"]
+ k8sContainer := target["__meta_kubernetes_pod_container_name"]
+ if k8sNamespace != "" && k8sContainer != "" {
+ return fmt.Sprintf("java/%s/%s", k8sNamespace, k8sContainer)
+ }
+ dockerContainer := target["__meta_docker_container_name"]
+ if dockerContainer != "" {
+ return dockerContainer
+ }
+ if swarmService := target["__meta_dockerswarm_container_label_service_name"]; swarmService != "" {
+ return swarmService
+ }
+ if swarmService := target["__meta_dockerswarm_service_name"]; swarmService != "" {
+ return swarmService
+ }
+ return "unspecified"
+}
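The fallback chain in `inferServiceName` above can be exercised in isolation. The sketch below reimplements it against a plain label map (a stand-in for `discovery.Target`) to show the precedence order: the `pyroscope.io/service_name` pod annotation wins, then `java/<namespace>/<container>`, then the Docker container name, then the two Swarm labels, with `"unspecified"` as the final default:

```go
package main

import "fmt"

// Target stands in for discovery.Target, a map of discovery meta labels.
type Target map[string]string

// inferServiceName mirrors the fallback chain from the component source.
func inferServiceName(target Target) string {
	if v := target["__meta_kubernetes_pod_annotation_pyroscope_io_service_name"]; v != "" {
		return v
	}
	ns := target["__meta_kubernetes_namespace"]
	container := target["__meta_kubernetes_pod_container_name"]
	if ns != "" && container != "" {
		return fmt.Sprintf("java/%s/%s", ns, container)
	}
	if v := target["__meta_docker_container_name"]; v != "" {
		return v
	}
	if v := target["__meta_dockerswarm_container_label_service_name"]; v != "" {
		return v
	}
	if v := target["__meta_dockerswarm_service_name"]; v != "" {
		return v
	}
	return "unspecified"
}

func main() {
	// Kubernetes target without the annotation: derived name.
	fmt.Println(inferServiceName(Target{
		"__meta_kubernetes_namespace":          "prod",
		"__meta_kubernetes_pod_container_name": "billing",
	})) // java/prod/billing

	// No recognized labels at all: the default.
	fmt.Println(inferServiceName(Target{})) // unspecified
}
```

Note the same Swarm fallbacks are added to `component/pyroscope/scrape/target.go` in this diff, so scrape-discovered and java-attached targets resolve service names consistently.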
diff --git a/component/pyroscope/scrape/target.go b/component/pyroscope/scrape/target.go
index 703a93dd63be..736d75b43f78 100644
--- a/component/pyroscope/scrape/target.go
+++ b/component/pyroscope/scrape/target.go
@@ -430,5 +430,11 @@ func inferServiceName(lset labels.Labels) string {
if dockerContainer != "" {
return dockerContainer
}
+ if swarmService := lset.Get("__meta_dockerswarm_container_label_service_name"); swarmService != "" {
+ return swarmService
+ }
+ if swarmService := lset.Get("__meta_dockerswarm_service_name"); swarmService != "" {
+ return swarmService
+ }
return "unspecified"
}
diff --git a/converter/internal/staticconvert/internal/build/builder.go b/converter/internal/staticconvert/internal/build/builder.go
index 58fedf6225c2..dadc4ae3fd96 100644
--- a/converter/internal/staticconvert/internal/build/builder.go
+++ b/converter/internal/staticconvert/internal/build/builder.go
@@ -204,6 +204,10 @@ func (b *IntegrationsConfigBuilder) appendExporter(commonConfig *int_config.Comm
RemoteWriteConfigs: b.cfg.Integrations.ConfigV1.PrometheusRemoteWrite,
}
+ if len(b.cfg.Integrations.ConfigV1.PrometheusRemoteWrite) == 0 {
+ b.diags.Add(diag.SeverityLevelError, "The converter does not support handling integrations which are not connected to a remote_write.")
+ }
+
jobNameToCompLabelsFunc := func(jobName string) string {
return b.jobNameToCompLabel(jobName)
}
diff --git a/converter/internal/staticconvert/testdata/integrations_no_rw.diags b/converter/internal/staticconvert/testdata/integrations_no_rw.diags
new file mode 100644
index 000000000000..1f0d463ede34
--- /dev/null
+++ b/converter/internal/staticconvert/testdata/integrations_no_rw.diags
@@ -0,0 +1,2 @@
+(Error) The converter does not support handling integrations which are not connected to a remote_write.
+(Warning) Please review your agent command line flags and ensure they are set in your Flow mode config file where necessary.
\ No newline at end of file
diff --git a/converter/internal/staticconvert/testdata/integrations_no_rw.yaml b/converter/internal/staticconvert/testdata/integrations_no_rw.yaml
new file mode 100644
index 000000000000..76e4848e56b5
--- /dev/null
+++ b/converter/internal/staticconvert/testdata/integrations_no_rw.yaml
@@ -0,0 +1,4 @@
+integrations:
+ node_exporter:
+ scrape_integration: true
+ enabled: true
\ No newline at end of file
diff --git a/docs/generator/links_to_types.go b/docs/generator/links_to_types.go
index 867654e1648d..8de89bfd1321 100644
--- a/docs/generator/links_to_types.go
+++ b/docs/generator/links_to_types.go
@@ -38,12 +38,10 @@ func (l *LinksToTypesGenerator) Generate() (string, error) {
}
note := `
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
`
return heading + acceptingSection + outputSection + note, nil
diff --git a/docs/make-docs b/docs/make-docs
index 25176a37f051..d5d861ca83b4 100755
--- a/docs/make-docs
+++ b/docs/make-docs
@@ -6,7 +6,13 @@
# [Semantic versioning](https://semver.org/) is used to help the reader identify the significance of changes.
# Changes are relevant to this script and the support docs.mk GNU Make interface.
#
-
+# ## 5.2.0 (2024-01-18)
+#
+# ### Changed
+#
+# - Updated `make vale` to use latest Vale style and configuration.
+# - Updated `make vale` to use platform appropriate image.
+#
# ## 5.1.2 (2023-11-08)
#
# ### Added
@@ -704,14 +710,14 @@ case "${image}" in
"${PODMAN}" run \
--init \
--interactive \
- --platform linux/amd64 \
--rm \
+ --workdir /etc/vale \
--tty \
${volumes} \
"${DOCS_IMAGE}" \
"--minAlertLevel=${VALE_MINALERTLEVEL}" \
- --config=/etc/vale/.vale.ini \
- --output=line \
+ '--glob=*.md' \
+ --output=/etc/vale/rdjsonl.tmpl \
/hugo/content/docs | sed "s#$(proj_dst "${proj}")#sources#"
;;
*)
diff --git a/docs/rfcs/0000-template.md b/docs/rfcs/0000-template.md
index bbc01019c3ab..c565ea04e584 100644
--- a/docs/rfcs/0000-template.md
+++ b/docs/rfcs/0000-template.md
@@ -3,4 +3,3 @@
* Date: YYYY-MM-DD
* Author: Full Name (@github_username)
* PR: [grafana/agent#XXXX](https://github.com/grafana/agent/pull/XXXX)
-* Status: Draft
diff --git a/docs/rfcs/0001-designing-in-the-open.md b/docs/rfcs/0001-designing-in-the-open.md
index 8f73f5d7a8d2..7419b060a375 100644
--- a/docs/rfcs/0001-designing-in-the-open.md
+++ b/docs/rfcs/0001-designing-in-the-open.md
@@ -3,7 +3,6 @@
* Date: 2021-11-02
* Author: Robert Fratto (@rfratto)
* PR: [grafana/agent#1055](https://github.com/grafana/agent/pull/1055)
-* Status: Implemented
## Summary
diff --git a/docs/rfcs/0002-integrations-in-operator.md b/docs/rfcs/0002-integrations-in-operator.md
index 8003606d3d46..ed54d40de6bd 100644
--- a/docs/rfcs/0002-integrations-in-operator.md
+++ b/docs/rfcs/0002-integrations-in-operator.md
@@ -3,7 +3,6 @@
* Date: 2022-01-04
* Author: Robert Fratto (@rfratto)
* PR: [grafana/agent#1224](https://github.com/grafana/agent/pull/1224)
-* Status: Draft
## Background
diff --git a/docs/rfcs/0003-new-metrics-subsystem.md b/docs/rfcs/0003-new-metrics-subsystem.md
index 961cd983e14a..336c0e4cc475 100644
--- a/docs/rfcs/0003-new-metrics-subsystem.md
+++ b/docs/rfcs/0003-new-metrics-subsystem.md
@@ -3,7 +3,7 @@
* Date: 2021-11-29
* Author: Robert Fratto (@rfratto)
* PR: [grafana/agent#1140](https://github.com/grafana/agent/pull/1140)
-* Status: Draft
+* Status: Abandoned
## Background
diff --git a/docs/rfcs/0004-agent-flow.md b/docs/rfcs/0004-agent-flow.md
index db061af7a16b..3c1052e926ad 100644
--- a/docs/rfcs/0004-agent-flow.md
+++ b/docs/rfcs/0004-agent-flow.md
@@ -1,14 +1,13 @@
-# This provided the basis for Agent Flow, and though not all the concepts/ideas will make it into flow, it is good to have the historical context for why we started down this path.
+# This provided the basis for Agent Flow, and though not all the concepts/ideas will make it into flow, it is good to have the historical context for why we started down this path.
-# Agent Flow - Agent Utilizing Components
+# Agent Flow - Agent Utilizing Components
* Date: 2022-03-30
* Author: Matt Durham (@mattdurham)
-* PRs:
- * [grafana/agent#1538](https://github.com/grafana/agent/pull/1538) - Problem Statement
+* PRs:
+ * [grafana/agent#1538](https://github.com/grafana/agent/pull/1538) - Problem Statement
* [grafana/agent#1546](https://github.com/grafana/agent/pull/1546) - Messages and Expressions
-* Status: Draft
## Overarching Problem Statement
@@ -17,7 +16,7 @@ The Agents configuration and onboarding is difficult to use. Viewing the effect
## Description
-Agent Flow is intended to solve real world needs that the Grafana Agent team have identified in conversations with users and developers.
+Agent Flow is intended to solve real world needs that the Grafana Agent team have identified in conversations with users and developers.
These broadly include:
@@ -32,13 +31,13 @@ These broadly include:
- Lack of understanding how telemetry data moves through agent
- Other systems use pipeline/extensions to allow users to understand how data moves through the system
-# 1. Introduction and Goals
+# 1. Introduction and Goals
-This design document outlines Agent Flow, a system for describing a programmable pipeline for telemetry data.
+This design document outlines Agent Flow, a system for describing a programmable pipeline for telemetry data.
Agent Flow refers to both the execution, configuration and visual configurator of data flow.
-### Goals
+### Goals
* Allow users to more easily understand the impact of their configuration
* Allow users to collect integration metrics across a set of agents
@@ -55,43 +54,43 @@ Agent Flow refers to both the execution, configuration and visual configurator o
At a high level, Agent Flow:
-* Breaks apart the existing hierarchical configuration file into reusable components
+* Breaks apart the existing hierarchical configuration file into reusable components
* Allows components to be connected, resulting in a programmable pipeline of telemetry data
-This document considers three potential approaches to allow users to connect components together:
+This document considers three potential approaches to allow users to connect components together:
-1. Message passing (i.e., an actor model)
+1. Message passing (i.e., an actor model)
2. Expressions (i.e., directly referencing the output of another component)
-3. A hybrid of both messages and expressions
+3. A hybrid of both messages and expressions
-The Flow Should in general resemble a flowchart or node graph. The data flow diagram would conceptually look like the below, with each node being composable and connecting with other nodes.
+The Flow Should in general resemble a flowchart or node graph. The data flow diagram would conceptually look like the below, with each node being composable and connecting with other nodes.
```
-┌─────────────────────────┐ ┌──────────────────┐ ┌─────────────────────┐ ┌───────────────────┐
-│ │ ┌─────▶│ Target Filter │─────────▶│ Redis Integration │──────▶│ Metric Filter │──┐
-│ │ │ └──────────────────┘ └─────────────────────┘ └───────────────────┘ │
-│ Service Discovery │──────┤ │
-│ │ │ │
-│ │ │ │
-└─────────────────────────┘ │ ┌─────────────────┐ ┌──────────────────────┐ ┌────────┘
- ├─────▶│ Target Filter │──────────▶│ MySQL Integrations │───────────┐ │
- │ └─────────────────┘ └──────────────────────┘ │ │
- │ │ │
- │ ┌─────────────────┐ ┌─────────────┐ │ │
+┌─────────────────────────┐ ┌──────────────────┐ ┌─────────────────────┐ ┌───────────────────┐
+│ │ ┌─────▶│ Target Filter │─────────▶│ Redis Integration │──────▶│ Metric Filter │──┐
+│ │ │ └──────────────────┘ └─────────────────────┘ └───────────────────┘ │
+│ Service Discovery │──────┤ │
+│ │ │ │
+│ │ │ │
+└─────────────────────────┘ │ ┌─────────────────┐ ┌──────────────────────┐ ┌────────┘
+ ├─────▶│ Target Filter │──────────▶│ MySQL Integrations │───────────┐ │
+ │ └─────────────────┘ └──────────────────────┘ │ │
+ │ │ │
+ │ ┌─────────────────┐ ┌─────────────┐ │ │
└──────▶│ Target Filter │─────────────▶│ Scraper │─────────────┐ │ │ ┌────────────────┐
└─────────────────┘ └─────────────┘ └──┴┬───────┴─▶│ Remote Write │
│ └────────────────┘
- │
- │
-┌──────────────────────────┐ │
-│ Remote Write Receiver │─────┐ ┌───────────────────────┐ │
-└──────────────────────────┘ │ ┌────▶│ Metric Transformer │─────────┘
- │ │ └───────────────────────┘
- │ │
-┌─────────────────────────┐ │ ┌────────────────────┐ │
-│ HTTP Receiver │──────┴─────▶│ Metric Filter │────┘ ┌──────────────────────────────────┐
-└─────────────────────────┘ └────────────────────┘ │ Global and Server Settings │
- └──────────────────────────────────┘
+ │
+ │
+┌──────────────────────────┐ │
+│ Remote Write Receiver │─────┐ ┌───────────────────────┐ │
+└──────────────────────────┘ │ ┌────▶│ Metric Transformer │─────────┘
+ │ │ └───────────────────────┘
+ │ │
+┌─────────────────────────┐ │ ┌────────────────────┐ │
+│ HTTP Receiver │──────┴─────▶│ Metric Filter │────┘ ┌──────────────────────────────────┐
+└─────────────────────────┘ └────────────────────┘ │ Global and Server Settings │
+ └──────────────────────────────────┘
```
**Note: Consider all examples pseudoconfig**
@@ -107,14 +106,14 @@ Expression based is writing expressions that allow referencing other components
**Cons**
* Harder for users to wire things together
- * References to components are more complex, which may be harder to understand
+ * References to components are more complex, which may be harder to understand
* Harder to build a GUI for
* Every field of a component is potentially dynamic, making it harder to represent visually
## 2.2 Message Based
-Message based is where components have no knowledge of other components and information is passed strictly via input and output streams.
+Message based is where components have no knowledge of other components and information is passed strictly via input and output streams.
**Pros**
@@ -122,7 +121,7 @@ Message based is where components have no knowledge of other components and info
* Easier to build a GUI for
* Inputs and Outputs are well defined and less granular
* Connections are made by connecting two components directly, compared to expressions which connect subsets of a component's output
-* References between components are no more than strings, making the text-based representation language agnostic (e.g., it could be YAML, JSON, or any language)
+* References between components are no more than strings, making the text-based representation language agnostic (e.g., it could be YAML, JSON, or any language)
**Cons**
@@ -130,16 +129,16 @@ Message based is where components have no knowledge of other components and info
* Larger type system needed
* More structured to keep the amount of types down
-Messages require a more rigid type structure to minimize the number of total components.
+Messages require a more rigid type structure to minimize the number of total components.
For example, it would be preferable to have a single `Credential` type that can be emitted by an s3, Vault, or Consul component. These components would then need to set a field that marks their output as a specific kind of Credential (such as Basic Auth or Bearer Auth).
If, instead, you had multiple Credential types, like `MySQLCredentials` and `RedisCredentials`, you would have the following components:
-* Vault component for MySQL credentials
-* Vault component for Redis credentials
-* S3 component for MySQL credentials
-* S3 component for Redis credentials
+* Vault component for MySQL credentials
+* Vault component for Redis credentials
+* S3 component for MySQL credentials
+* S3 component for Redis credentials
* (and so on)
## 2.3 Hybrid
@@ -157,10 +156,10 @@ discovery "mysql_pods" {
integration "mysql" {
- # Create one mysql integration for every element in the array here
+ # Create one mysql integration for every element in the array here
for_each = discovery.mysql_pods.targets
- # Each spawned mysql integration has its data_source_name derived from
+ # Each spawned mysql integration has its data_source_name derived from
# the address label of the input target.
data_source_name = "root@(${each.labels["__address__"]})"
}
diff --git a/docs/rfcs/0005-river.md b/docs/rfcs/0005-river.md
index 8f4e3e12299b..3fa82a5f7eb2 100644
--- a/docs/rfcs/0005-river.md
+++ b/docs/rfcs/0005-river.md
@@ -3,7 +3,6 @@
* Date: 2022-06-27
* Author: Robert Fratto (@rfratto), Matt Durham (@mattdurham)
* PR: [grafana/agent#1839](https://github.com/grafana/agent/pull/1839)
-* Status: Draft
## Summary
diff --git a/docs/rfcs/0006-clustering.md b/docs/rfcs/0006-clustering.md
index b6c08b2bc210..b29070410e47 100644
--- a/docs/rfcs/0006-clustering.md
+++ b/docs/rfcs/0006-clustering.md
@@ -3,7 +3,6 @@
* Date: 2023-03-02
* Author: Paschalis Tsilias (@tpaschalis)
* PR: [grafana/agent#3151](https://github.com/grafana/agent/pull/3151)
-* Status: Draft
## Summary - Background
We routinely run agents with 1-10 million active series; we regularly see
@@ -98,7 +97,7 @@ presented in the next section.
## Use cases
In the first iteration of agent clustering, we would like to start with the
following use-cases. These two are distinct in the way that they make use of
-scheduling.
+scheduling.
The first one makes sure that we have a way of notifying components of cluster
changes and calling their Update method and continuously re-evaluate ownership
@@ -112,9 +111,9 @@ it is scraping/reading logs from. Components that use the Flow concept of a
“target” as their Arguments should be able to distribute the target load
between themselves. To do that we can introduce a layer of abstraction over
the Targets definition that can interact with the Sharder provided by the
-clusterer and provide a simple API, for example:
+clusterer and provide a simple API, for example:
```go
-type Targets interface {
+type Targets interface {
Get() []Target
}
```
@@ -136,9 +135,9 @@ I propose that we start with the following set of components that make use of
this functionality: prometheus.scrape, loki.source.file,
loki.source.kubernetes, and pyroscope.scrape.
-Here’s how the configuration for a component could look like:
+Here’s how the configuration for a component could look like:
```river
-prometheus.scrape "pods" {
+prometheus.scrape "pods" {
clustering {
node_updates = true
}
@@ -200,7 +199,7 @@ information.
On a more practical note, we’ll have to choose how components might use to
opt-in to the component scheduling.
-For example, we could implement either:
+For example, we could implement either:
* Implicitly adding a new Argument block that is implicitly present by default on
_all_ components:
```
diff --git a/docs/rfcs/0006-future-of-agent-operator.md b/docs/rfcs/0006-future-of-agent-operator.md
index e0ed4bef9304..3a5c3d2e5611 100644
--- a/docs/rfcs/0006-future-of-agent-operator.md
+++ b/docs/rfcs/0006-future-of-agent-operator.md
@@ -3,7 +3,6 @@
* Date: 2022-08-17
* Author: Craig Peterson (@captncraig)
* PR: [grafana/agent#2046](https://github.com/grafana/agent/pull/2046)
-* Status: Draft
## Summary
@@ -31,6 +30,6 @@ The operator is a fairly complex piece of code, and has been slower than some ot
## Beta status
-The Grafana Agent Operator is still considered beta software. It has received a better reception than anticipated, and is now an important part of the Agent project. We are committed to supporting the Operator into the future, but are going to leave the beta designation in place while making larger refactorings as described above. We make efforts to avoid breaking changes, and hope that custom resource definitions will remain compatible, but it is possible some changes will be necessary. We will make every effort to justify and communicate such scenarios as they arise.
+The Grafana Agent Operator is still considered beta software. It has received a better reception than anticipated, and is now an important part of the Agent project. We are committed to supporting the Operator into the future, but are going to leave the beta designation in place while making larger refactorings as described above. We make efforts to avoid breaking changes, and hope that custom resource definitions will remain compatible, but it is possible some changes will be necessary. We will make every effort to justify and communicate such scenarios as they arise.
Once we are confident we have an Operator we are happy with and that the resource definitions are stable, we will revisit the beta status as soon as we can.
diff --git a/docs/rfcs/0007-flow-modules.md b/docs/rfcs/0007-flow-modules.md
index f08fb74c0f3b..5058663dd3ba 100644
--- a/docs/rfcs/0007-flow-modules.md
+++ b/docs/rfcs/0007-flow-modules.md
@@ -3,7 +3,6 @@
* Date: 2023-01-27
* Author: Matt Durham @mattdurham
* PR: [grafana/agent#2898](https://github.com/grafana/agent/pull/2898)
-* Status: Draft
[Formatted Link for ease of user](https://github.com/grafana/agent/blob/rfc_modules/docs/rfcs/0007-flow-modules.md)
@@ -30,7 +29,7 @@ During this time the Agent team saw a lot of potential in the idea of "modules."
### Enable re-use of common patterns
-Common functionality can be wrapped in a set of common components that form a module. These shared modules can then be used instead of reinventing use cases.
+Common functionality can be wrapped in a set of common components that form a module. These shared modules can then be used instead of reinventing use cases.
### Allow loading a module from a string
@@ -42,11 +41,11 @@ Modules will be able to load other modules, with reasonable safe guards. There w
### Modules should be sandboxed except via arguments and exports
-Modules cannot directly access children or parent modules except through predefined arguments and exports.
+Modules cannot directly access children or parent modules except through predefined arguments and exports.
## Non Goals
-Non goals represent capabilities that are not going to be done in the initial release of modules but may come in later versions.
+Non goals represent capabilities that are not going to be done in the initial release of modules but may come in later versions.
* Add additional capabilities to load strings
* Any type of versioning
@@ -66,7 +65,7 @@ Modules will not contain any sort of versioning nor will check for compatibility
### Any user interface work beyond ensuring it works as the UI currently does
-Users will not be able to drill into modules, they will be represented as any other normal component.
+Users will not be able to drill into modules, they will be represented as any other normal component.
## Example
@@ -122,7 +121,7 @@ prometheus.scrape "scraper" {
* A module cannot directly or indirectly load itself; this will not be enforced by the system
* Singleton components are not supported at this time. Example [node_exporter](https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.integration.node_exporter/).
* Modules will not prevent competing resources, such as starting a server on the same port
-* [Configuration blocks](https://grafana.com/docs/agent/latest/flow/reference/config-blocks/#configuration-blocks) will not be supported.
+* [Configuration blocks](https://grafana.com/docs/agent/latest/flow/reference/config-blocks/#configuration-blocks) will not be supported.
* Names of arguments and exports within a module must be unique across that module.
## Proposal
diff --git a/docs/rfcs/0008-backwards-compatibility.md b/docs/rfcs/0008-backwards-compatibility.md
index 147490f41e40..56d4bac647a3 100644
--- a/docs/rfcs/0008-backwards-compatibility.md
+++ b/docs/rfcs/0008-backwards-compatibility.md
@@ -3,7 +3,6 @@
* Date: 2023-05-25
* Author: Robert Fratto (@rfratto)
* PR: [grafana/agent#3981](https://github.com/grafana/agent/pull/3981)
-* Status: Draft
Grafana Agent has been following [semantic versioning](https://semver.org/) since its inception.
After three years of development and 33 minor releases, the project is on trajectory to have a 1.0 release.
diff --git a/docs/sources/_index.md b/docs/sources/_index.md
index 780a3800da31..a902be317bab 100644
--- a/docs/sources/_index.md
+++ b/docs/sources/_index.md
@@ -9,7 +9,7 @@ title: Grafana Agent
description: Grafana Agent is a flexible, performant, vendor-neutral, telemetry collector
weight: 350
cascade:
- AGENT_RELEASE: v0.39.0
+ AGENT_RELEASE: v0.39.2
OTEL_VERSION: v0.87.0
---
@@ -24,11 +24,11 @@ Grafana Agent is based around **components**. Components are wired together to
form programmable observability **pipelines** for telemetry collection,
processing, and delivery.
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
This page focuses mainly on [Flow mode](https://grafana.com/docs/agent/
Sets `enabled` to `true` by default. | no
+[cloud.availability_zone][res-attr-cfg] | Toggles the `cloud.availability_zone` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute. Sets `enabled` to `true` by default. | no
+[host.id][res-attr-cfg] | Toggles the `host.id` resource attribute. Sets `enabled` to `true` by default. | no
+[host.image.id][res-attr-cfg] | Toggles the `host.image.id` resource attribute. Sets `enabled` to `true` by default. | no
+[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute. Sets `enabled` to `true` by default. | no
+[host.type][res-attr-cfg] | Toggles the `host.type` resource attribute. Sets `enabled` to `true` by default. | no
+
+### ecs
+
+The `ecs` block queries the [Task Metadata Endpoint][] (TMDE) to record information about the current ECS task. Only TMDE V4 and V3 are supported.
+
+[Task Metadata Endpoint]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint.html
+
+The `ecs` block supports the following blocks:
+
+Block | Description | Required
+-------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#ecs--resource_attributes) | Configures which resource attributes to add. | no
+
+#### ecs > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+--------------------------------------- | --------------------------------------------------------------------------------------------------- | --------
+[aws.ecs.cluster.arn][res-attr-cfg] | Toggles the `aws.ecs.cluster.arn` resource attribute. Sets `enabled` to `true` by default. | no
+[aws.ecs.launchtype][res-attr-cfg] | Toggles the `aws.ecs.launchtype` resource attribute. Sets `enabled` to `true` by default. | no
+[aws.ecs.task.arn][res-attr-cfg] | Toggles the `aws.ecs.task.arn` resource attribute. Sets `enabled` to `true` by default. | no
+[aws.ecs.task.family][res-attr-cfg] | Toggles the `aws.ecs.task.family` resource attribute. Sets `enabled` to `true` by default. | no
+[aws.ecs.task.revision][res-attr-cfg] | Toggles the `aws.ecs.task.revision` resource attribute. Sets `enabled` to `true` by default. | no
+[aws.log.group.arns][res-attr-cfg] | Toggles the `aws.log.group.arns` resource attribute. Sets `enabled` to `true` by default. | no
+[aws.log.group.names][res-attr-cfg] | Toggles the `aws.log.group.names` resource attribute. Sets `enabled` to `true` by default. | no
+[aws.log.stream.arns][res-attr-cfg] | Toggles the `aws.log.stream.arns` resource attribute. Sets `enabled` to `true` by default. | no
+[aws.log.stream.names][res-attr-cfg] | Toggles the `aws.log.stream.names` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.availability_zone][res-attr-cfg] | Toggles the `cloud.availability_zone` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute. Sets `enabled` to `true` by default. | no
+
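+As a minimal River sketch, the `ecs` detector can be enabled and a single attribute turned off. The block and attribute names follow the tables above; the downstream `otelcol.exporter.otlp.default` component is an assumption and must be defined elsewhere in the configuration:
+
+```river
+otelcol.processor.resourcedetection "default" {
+  detectors = ["ecs"]
+
+  ecs {
+    resource_attributes {
+      // Drop the task revision; every other attribute keeps its default (enabled).
+      aws.ecs.task.revision {
+        enabled = false
+      }
+    }
+  }
+
+  output {
+    traces = [otelcol.exporter.otlp.default.input]
+  }
+}
+```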
+### eks
+
+The `eks` block adds resource attributes for Amazon EKS.
+
+The `eks` block supports the following blocks:
+
+Block | Description | Required
+-------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#eks--resource_attributes) | Configures which resource attributes to add. | no
+
+#### eks > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+------------------------------- | ------------------------------------------------------------------------------------------- | --------
+[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute. Sets `enabled` to `true` by default. | no
+
+Example values:
+* `cloud.provider`: `"aws"`
+* `cloud.platform`: `"aws_eks"`
+
+### elasticbeanstalk
+
+The `elasticbeanstalk` block reads the AWS X-Ray configuration file available on all Beanstalk instances with [X-Ray Enabled][].
+
+[X-Ray Enabled]: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-configuration-debugging.html
+
+The `elasticbeanstalk` block supports the following blocks:
+
+Block | Description | Required
+--------------------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#elasticbeanstalk--resource_attributes) | Configures which resource attributes to add. | no
+
+#### elasticbeanstalk > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+--------------------------------- | --------------------------------------------------------------------------------------------- | --------
+[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute. Sets `enabled` to `true` by default. | no
+[deployment.environment][res-attr-cfg] | Toggles the `deployment.environment` resource attribute. Sets `enabled` to `true` by default. | no
+[service.instance.id][res-attr-cfg] | Toggles the `service.instance.id` resource attribute. Sets `enabled` to `true` by default. | no
+[service.version][res-attr-cfg] | Toggles the `service.version` resource attribute. Sets `enabled` to `true` by default. | no
+
+Example values:
+* `cloud.provider`: `"aws"`
+* `cloud.platform`: `"aws_elastic_beanstalk"`
+
+### lambda
+
+The `lambda` block uses the AWS Lambda [runtime environment variables][lambda-env-vars] to retrieve various resource attributes.
+
+[lambda-env-vars]: https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime
+
+The `lambda` block supports the following blocks:
+
+Block | Description | Required
+----------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#lambda--resource_attributes) | Configures which resource attributes to add. | no
+
+#### lambda > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+------------------------------------- | --------------------------------------------------------------------------------------------------- | --------
+[aws.log.group.names][res-attr-cfg] | Toggles the `aws.log.group.names` resource attribute. Sets `enabled` to `true` by default. | no
+[aws.log.stream.names][res-attr-cfg] | Toggles the `aws.log.stream.names` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute. Sets `enabled` to `true` by default. | no
+[faas.instance][res-attr-cfg] | Toggles the `faas.instance` resource attribute. Sets `enabled` to `true` by default. | no
+[faas.max_memory][res-attr-cfg] | Toggles the `faas.max_memory` resource attribute. Sets `enabled` to `true` by default. | no
+[faas.name][res-attr-cfg] | Toggles the `faas.name` resource attribute. Sets `enabled` to `true` by default. | no
+[faas.version][res-attr-cfg] | Toggles the `faas.version` resource attribute. Sets `enabled` to `true` by default. | no
+
+[Cloud semantic conventions][]:
+* `cloud.provider`: `"aws"`
+* `cloud.platform`: `"aws_lambda"`
+* `cloud.region`: `$AWS_REGION`
+
+[Function as a Service semantic conventions][] and [AWS Lambda semantic conventions][]:
+* `faas.name`: `$AWS_LAMBDA_FUNCTION_NAME`
+* `faas.version`: `$AWS_LAMBDA_FUNCTION_VERSION`
+* `faas.instance`: `$AWS_LAMBDA_LOG_STREAM_NAME`
+* `faas.max_memory`: `$AWS_LAMBDA_FUNCTION_MEMORY_SIZE`
+
+[AWS Logs semantic conventions][]:
+* `aws.log.group.names`: `$AWS_LAMBDA_LOG_GROUP_NAME`
+* `aws.log.stream.names`: `$AWS_LAMBDA_LOG_STREAM_NAME`
+
+[Cloud semantic conventions]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/cloud.md
+[Function as a Service semantic conventions]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/faas.md
+[AWS Lambda semantic conventions]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/instrumentation/aws-lambda.md#resource-detector
+[AWS Logs semantic conventions]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/cloud_provider/aws/logs.md
+
+### azure
+
+The `azure` block queries the [Azure Instance Metadata Service][] to retrieve various resource attributes.
+
+[Azure Instance Metadata Service]: https://aka.ms/azureimds
+
+The `azure` block supports the following blocks:
+
+Block | Description | Required
+---------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#azure--resource_attributes) | Configures which resource attributes to add. | no
+
+#### azure > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+-----------------------------------------|------------------------------------------------------------------------------------------------------|---------
+[azure.resourcegroup.name][res-attr-cfg] | Toggles the `azure.resourcegroup.name` resource attribute. Sets `enabled` to `true` by default. | no
+[azure.vm.name][res-attr-cfg] | Toggles the `azure.vm.name` resource attribute. Sets `enabled` to `true` by default. | no
+[azure.vm.scaleset.name][res-attr-cfg] | Toggles the `azure.vm.scaleset.name` resource attribute. Sets `enabled` to `true` by default. | no
+[azure.vm.size][res-attr-cfg] | Toggles the `azure.vm.size` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute. Sets `enabled` to `true` by default. | no
+[host.id][res-attr-cfg] | Toggles the `host.id` resource attribute. Sets `enabled` to `true` by default. | no
+[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute. Sets `enabled` to `true` by default. | no
+
+Example values:
+* `cloud.provider`: `"azure"`
+* `cloud.platform`: `"azure_vm"`
+
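+A sketch combining the `azure` detector with a fallback; the detectors are tried in the order given, and the `otelcol.exporter.otlp.default` reference is an assumed component defined elsewhere:
+
+```river
+otelcol.processor.resourcedetection "default" {
+  // The azure detector runs first; system acts as a fallback outside Azure.
+  detectors = ["azure", "system"]
+
+  azure {
+    resource_attributes {
+      host.id {
+        enabled = false
+      }
+    }
+  }
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+  }
+}
+```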
+### aks
+
+The `aks` block adds resource attributes related to Azure AKS.
+
+The `aks` block supports the following blocks:
+
+Block | Description | Required
+-------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#aks--resource_attributes) | Configures which resource attributes to add. | no
+
+#### aks > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+------------------------------- | ------------------------------------------------------------------------------------------- | --------
+[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute. Sets `enabled` to `true` by default. | no
+
+Example values:
+* `cloud.provider`: `"azure"`
+* `cloud.platform`: `"azure_aks"`
+
+### consul
+
+The `consul` block queries a Consul agent and reads its configuration endpoint to retrieve values for resource attributes.
+
+The `consul` block supports the following attributes:
+
+Attribute | Type | Description | Default | Required
+-------------|----------------|-----------------------------------------------------------------------------------|---------|---------
+`address` | `string` | The address of the Consul server | `""` | no
+`datacenter` | `string` | Datacenter to use. If not provided, the default agent datacenter is used. | `""` | no
+`token` | `secret` | A per-request ACL token which overrides the Consul agent's default (empty) token. | `""` | no
+`namespace` | `string` | The name of the namespace to send along for the request. | `""` | no
+`meta` | `list(string)` | Allowlist of [Consul Metadata][] keys to use as resource attributes. | `[]` | no
+
+`token` is only required if [Consul's ACL System][] is enabled.
+
+[Consul Metadata]: https://www.consul.io/docs/agent/options#node_meta
+[Consul's ACL System]: https://www.consul.io/docs/security/acl/acl-system
+
+The `consul` block supports the following blocks:
+
+Block | Description | Required
+----------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#consul--resource_attributes) | Configures which resource attributes to add. | no
+
+#### consul > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+-----------------------------|------------------------------------------------------------------------------------------|---------
+[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute. Sets `enabled` to `true` by default. | no
+[host.id][res-attr-cfg] | Toggles the `host.id` resource attribute. Sets `enabled` to `true` by default. | no
+[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute. Sets `enabled` to `true` by default. | no
+
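+A hedged sketch of a `consul` detector configuration. The `meta` keys (`rack`, `zone`) and the `CONSUL_HTTP_TOKEN` environment variable are illustrative placeholders, and the exporter component is assumed to exist elsewhere:
+
+```river
+otelcol.processor.resourcedetection "default" {
+  detectors = ["consul"]
+
+  consul {
+    address = "localhost:8500"
+    token   = env("CONSUL_HTTP_TOKEN")
+    // Allowlist of Consul node metadata keys to copy into resource attributes.
+    meta = ["rack", "zone"]
+  }
+
+  output {
+    logs = [otelcol.exporter.otlp.default.input]
+  }
+}
+```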
+### docker
+
+The `docker` block queries the Docker daemon to retrieve various resource attributes from the host machine.
+
+You need to mount the Docker socket (`/var/run/docker.sock` on Linux) to contact the Docker daemon.
+Docker detection does not work on macOS.
+
+The `docker` block supports the following blocks:
+
+Block | Description | Required
+----------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#docker--resource_attributes) | Configures which resource attributes to add. | no
+
+#### docker > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+--------------------------|---------------------------------------------------------------------------------------|---------
+[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute. Sets `enabled` to `true` by default. | no
+[os.type][res-attr-cfg] | Toggles the `os.type` resource attribute. Sets `enabled` to `true` by default. | no
+
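+A minimal sketch for the `docker` detector. It assumes the container was started with the Docker socket mounted (for example, `-v /var/run/docker.sock:/var/run/docker.sock`) and that the referenced exporter component exists:
+
+```river
+otelcol.processor.resourcedetection "default" {
+  detectors = ["docker"]
+
+  output {
+    traces = [otelcol.exporter.otlp.default.input]
+  }
+}
+```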
+### gcp
+
+The `gcp` block detects resource attributes using the [Google Cloud Client Libraries for Go][], which reads resource information from the [GCP metadata server][].
+The detector also uses environment variables to identify which GCP platform the application is running on, and assigns appropriate resource attributes for that platform.
+
+Use the `gcp` detector regardless of the GCP platform {{< param "PRODUCT_ROOT_NAME" >}} is running on.
+
+[Google Cloud Client Libraries for Go]: https://github.com/googleapis/google-cloud-go
+[GCP metadata server]: https://cloud.google.com/compute/docs/storing-retrieving-metadata
+
+The `gcp` block supports the following blocks:
+
+Block | Description | Required
+-------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#gcp--resource_attributes) | Configures which resource attributes to add. | no
+
+#### gcp > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+---------------------------------------------|----------------------------------------------------------------------------------------------------------|---------
+[cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.availability_zone][res-attr-cfg] | Toggles the `cloud.availability_zone` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute. Sets `enabled` to `true` by default. | no
+[faas.id][res-attr-cfg] | Toggles the `faas.id` resource attribute. Sets `enabled` to `true` by default. | no
+[faas.instance][res-attr-cfg] | Toggles the `faas.instance` resource attribute. Sets `enabled` to `true` by default. | no
+[faas.name][res-attr-cfg] | Toggles the `faas.name` resource attribute. Sets `enabled` to `true` by default. | no
+[faas.version][res-attr-cfg] | Toggles the `faas.version` resource attribute. Sets `enabled` to `true` by default. | no
+[gcp.cloud_run.job.execution][res-attr-cfg] | Toggles the `gcp.cloud_run.job.execution` resource attribute. Sets `enabled` to `true` by default. | no
+[gcp.cloud_run.job.task_index][res-attr-cfg] | Toggles the `gcp.cloud_run.job.task_index` resource attribute. Sets `enabled` to `true` by default. | no
+[gcp.gce.instance.hostname][res-attr-cfg] | Toggles the `gcp.gce.instance.hostname` resource attribute. Sets `enabled` to `false` by default. | no
+[gcp.gce.instance.name][res-attr-cfg] | Toggles the `gcp.gce.instance.name` resource attribute. Sets `enabled` to `false` by default. | no
+[host.id][res-attr-cfg] | Toggles the `host.id` resource attribute. Sets `enabled` to `true` by default. | no
+[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute. Sets `enabled` to `true` by default. | no
+[host.type][res-attr-cfg] | Toggles the `host.type` resource attribute. Sets `enabled` to `true` by default. | no
+[k8s.cluster.name][res-attr-cfg] | Toggles the `k8s.cluster.name` resource attribute. Sets `enabled` to `true` by default. | no
+
+#### Google Compute Engine (GCE) metadata
+
+* `cloud.provider`: `"gcp"`
+* `cloud.platform`: `"gcp_compute_engine"`
+* `cloud.account.id`: project id
+* `cloud.region`: e.g. `"us-central1"`
+* `cloud.availability_zone`: e.g. `"us-central1-c"`
+* `host.id`: instance id
+* `host.name`: instance name
+* `host.type`: machine type
+* (optional) `gcp.gce.instance.hostname`
+* (optional) `gcp.gce.instance.name`
+
+#### Google Kubernetes Engine (GKE) metadata
+
+* `cloud.provider`: `"gcp"`
+* `cloud.platform`: `"gcp_kubernetes_engine"`
+* `cloud.account.id`: project id
+* `cloud.region`: only for regional GKE clusters; e.g. `"us-central1"`
+* `cloud.availability_zone`: only for zonal GKE clusters; e.g. `"us-central1-c"`
+* `k8s.cluster.name`
+* `host.id`: instance id
+* `host.name`: instance name; only when workload identity is disabled
+
+A known issue occurs when GKE workload identity is enabled: the GCE metadata endpoints aren't available,
+so the GKE resource detector can't determine `host.name`.
+If this happens, you can set `host.name` from one of the following sources:
+- Get the `node.name` through the [downward API][] with the `env` detector.
+- Get the Kubernetes node name from the Kubernetes API (with `k8s.io/client-go`).
+
+[downward API]: https://kubernetes.io/docs/concepts/workloads/pods/downward-api/
+
+#### Google Cloud Run Services metadata
+
+* `cloud.provider`: `"gcp"`
+* `cloud.platform`: `"gcp_cloud_run"`
+* `cloud.account.id`: project id
+* `cloud.region`: e.g. `"us-central1"`
+* `faas.id`: instance id
+* `faas.name`: service name
+* `faas.version`: service revision
+
+#### Cloud Run Jobs metadata
+
+* `cloud.provider`: `"gcp"`
+* `cloud.platform`: `"gcp_cloud_run"`
+* `cloud.account.id`: project id
+* `cloud.region`: e.g. `"us-central1"`
+* `faas.id`: instance id
+* `faas.name`: service name
+* `gcp.cloud_run.job.execution`: e.g. `"my-service-ajg89"`
+* `gcp.cloud_run.job.task_index`: e.g. `"0"`
+
+#### Google Cloud Functions metadata
+
+* `cloud.provider`: `"gcp"`
+* `cloud.platform`: `"gcp_cloud_functions"`
+* `cloud.account.id`: project id
+* `cloud.region`: e.g. `"us-central1"`
+* `faas.id`: instance id
+* `faas.name`: function name
+* `faas.version`: function version
+
+#### Google App Engine metadata
+
+* `cloud.provider`: `"gcp"`
+* `cloud.platform`: `"gcp_app_engine"`
+* `cloud.account.id`: project id
+* `cloud.region`: e.g. `"us-central1"`
+* `cloud.availability_zone`: e.g. `"us-central1-c"`
+* `faas.id`: instance id
+* `faas.name`: service name
+* `faas.version`: service version
+
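+The `gcp` detector needs no sub-block for its defaults, but its `resource_attributes` block can switch on attributes that default to `false`, such as `gcp.gce.instance.name`. A sketch, with the exporter reference assumed to be defined elsewhere:
+
+```river
+otelcol.processor.resourcedetection "default" {
+  detectors = ["gcp"]
+
+  gcp {
+    resource_attributes {
+      // Disabled by default; enable to record the GCE instance name.
+      gcp.gce.instance.name {
+        enabled = true
+      }
+    }
+  }
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+  }
+}
+```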
+### heroku
+
+The `heroku` block adds resource attributes derived from [Heroku dyno metadata][].
+
+The `heroku` block supports the following blocks:
+
+Block | Description | Required
+----------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#heroku--resource_attributes) | Configures which resource attributes to add. | no
+
+#### heroku > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+--------------------------------------------------|---------------------------------------------------------------------------------------------------------------|---------
+[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute. Sets `enabled` to `true` by default. | no
+[heroku.app.id][res-attr-cfg] | Toggles the `heroku.app.id` resource attribute. Sets `enabled` to `true` by default. | no
+[heroku.dyno.id][res-attr-cfg] | Toggles the `heroku.dyno.id` resource attribute. Sets `enabled` to `true` by default. | no
+[heroku.release.commit][res-attr-cfg] | Toggles the `heroku.release.commit` resource attribute. Sets `enabled` to `true` by default. | no
+[heroku.release.creation_timestamp][res-attr-cfg] | Toggles the `heroku.release.creation_timestamp` resource attribute. Sets `enabled` to `true` by default. | no
+[service.instance.id][res-attr-cfg] | Toggles the `service.instance.id` resource attribute. Sets `enabled` to `true` by default. | no
+[service.name][res-attr-cfg] | Toggles the `service.name` resource attribute. Sets `enabled` to `true` by default. | no
+[service.version][res-attr-cfg] | Toggles the `service.version` resource attribute. Sets `enabled` to `true` by default. | no
+
+When [Heroku dyno metadata][] is active, Heroku applications publish information through environment variables.
+We map these environment variables to resource attributes as follows:
+
+| Dyno metadata environment variable | Resource attribute |
+|------------------------------------|-------------------------------------|
+| `HEROKU_APP_ID` | `heroku.app.id` |
+| `HEROKU_APP_NAME` | `service.name` |
+| `HEROKU_DYNO_ID` | `service.instance.id` |
+| `HEROKU_RELEASE_CREATED_AT` | `heroku.release.creation_timestamp` |
+| `HEROKU_RELEASE_VERSION` | `service.version` |
+| `HEROKU_SLUG_COMMIT` | `heroku.release.commit` |
+
+For more information, see the [Heroku cloud provider documentation][] under the [OpenTelemetry specification semantic conventions][].
+
+[Heroku dyno metadata]: https://devcenter.heroku.com/articles/dyno-metadata
+[Heroku cloud provider documentation]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/cloud_provider/heroku.md
+[OpenTelemetry specification semantic conventions]: https://github.com/open-telemetry/opentelemetry-specification
+
+### system
+
+The `system` block queries the host machine to retrieve various resource attributes.
+
+{{< admonition type="note" >}}
+
+Use the [Docker](#docker) detector if running {{< param "PRODUCT_ROOT_NAME" >}} as a Docker container.
+
+{{< /admonition >}}
+
+The `system` block supports the following attributes:
+
+Attribute | Type | Description | Default | Required
+------------------ | --------------- | --------------------------------------------------------------------------- |---------------- | --------
+`hostname_sources` | `list(string)` | A priority list of sources from which the hostname will be fetched. | `["dns", "os"]` | no
+
+The valid options for `hostname_sources` are:
+* `"dns"`: Uses multiple sources to get the fully qualified domain name.
+First, it looks up the hostname in the local machine's `hosts` file. If that fails, it looks up the CNAME.
+Finally, if that fails, it does a reverse DNS query. Note: this hostname source may produce unreliable results on Windows.
+To produce a FQDN, Windows hosts might have better results using the `"lookup"` hostname source, described below.
+* `"os"`: Provides the hostname reported by the local machine's kernel.
+* `"cname"`: Provides the canonical name, as returned by `net.LookupCNAME` in the Go standard library.
+Note: this hostname source may produce unreliable results on Windows.
+* `"lookup"`: Does a reverse DNS lookup of the current host's IP address.
+
+If fetching the hostname from a source fails, the next source in the `hostname_sources` list is used.
+
+The `system` block supports the following blocks:
+
+Block | Description | Required
+----------------------------------------------------|----------------------------------------------|---------
+[resource_attributes](#system--resource_attributes) | Configures which resource attributes to add. | no
+
+#### system > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+---------------------------------------|-----------------------------------------------------------------------------------------------------|---------
+[host.arch][res-attr-cfg] | Toggles the `host.arch` resource attribute. Sets `enabled` to `false` by default. | no
+[host.cpu.cache.l2.size][res-attr-cfg] | Toggles the `host.cpu.cache.l2.size` resource attribute. Sets `enabled` to `false` by default. | no
+[host.cpu.family][res-attr-cfg] | Toggles the `host.cpu.family` resource attribute. Sets `enabled` to `false` by default. | no
+[host.cpu.model.id][res-attr-cfg] | Toggles the `host.cpu.model.id` resource attribute. Sets `enabled` to `false` by default. | no
+[host.cpu.model.name][res-attr-cfg] | Toggles the `host.cpu.model.name` resource attribute. Sets `enabled` to `false` by default. | no
+[host.cpu.stepping][res-attr-cfg] | Toggles the `host.cpu.stepping` resource attribute. Sets `enabled` to `false` by default. | no
+[host.cpu.vendor.id][res-attr-cfg] | Toggles the `host.cpu.vendor.id` resource attribute. Sets `enabled` to `false` by default. | no
+[host.id][res-attr-cfg] | Toggles the `host.id` resource attribute. Sets `enabled` to `false` by default. | no
+[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute. Sets `enabled` to `true` by default. | no
+[os.description][res-attr-cfg] | Toggles the `os.description` resource attribute. Sets `enabled` to `false` by default. | no
+[os.type][res-attr-cfg] | Toggles the `os.type` resource attribute. Sets `enabled` to `true` by default. | no
+
+### openshift
+
+The `openshift` block queries the OpenShift and Kubernetes APIs to retrieve various resource attributes.
+
+The `openshift` block supports the following attributes:
+
+Attribute | Type | Description | Default | Required
+---------- |---------- | ------------------------------------------------------- |-------------| --------
+`address` | `string` | Address of the OpenShift API server. | _See below_ | no
+`token`   | `string` | Token used to authenticate against the OpenShift API server. | `""`        | no
+
+The "get", "watch", and "list" permissions are required:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: grafana-agent
+rules:
+- apiGroups: ["config.openshift.io"]
+ resources: ["infrastructures", "infrastructures/status"]
+ verbs: ["get", "watch", "list"]
+```
+
+By default, the API address is determined from the environment variables `KUBERNETES_SERVICE_HOST`,
+`KUBERNETES_SERVICE_PORT` and the service token is read from `/var/run/secrets/kubernetes.io/serviceaccount/token`.
+If TLS is not explicitly disabled and no `ca_file` is configured, `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` is used.
+The determination of the API address, `ca_file`, and the service token is skipped if they are set in the configuration.
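+
+As a sketch, an `openshift` detector that relies on the in-cluster defaults described above. The `insecure` attribute comes from the shared TLS configuration block, and the `otelcol.exporter.otlp.default` reference is illustrative:
+
+```river
+otelcol.processor.resourcedetection "default" {
+  detectors = ["openshift"]
+
+  openshift {
+    // Address, token, and CA file fall back to the in-cluster
+    // defaults described above when not set explicitly.
+    tls {
+      insecure = false
+    }
+  }
+
+  output {
+    traces = [otelcol.exporter.otlp.default.input]
+  }
+}
+```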
+
+The `openshift` block supports the following blocks:
+
+Block | Description | Required
+---------------------------------------------- | ---------------------------------------------------- | --------
+[resource_attributes](#openshift--resource_attributes) | Configures which resource attributes to add. | no
+[tls](#openshift--tls) | TLS settings for the connection with the OpenShift API. | yes
+
+#### openshift > tls
+
+The `tls` block configures TLS settings used for the connection to the gRPC
+server.
+
+{{< docs/shared lookup="flow/reference/components/otelcol-tls-config-block.md" source="agent" version="
Sets `enabled` to `true` by default. | no
+[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute. Sets `enabled` to `true` by default. | no
+[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute. Sets `enabled` to `true` by default. | no
+[k8s.cluster.name][res-attr-cfg] | Toggles the `k8s.cluster.name` resource attribute. Sets `enabled` to `true` by default. | no
+
+### kubernetes_node
+
+The `kubernetes_node` block queries the Kubernetes API server to retrieve various node resource attributes.
+
+The `kubernetes_node` block supports the following attributes:
+
+Attribute | Type | Description | Default | Required
+------------------- |--------- | ------------------------------------------------------------------------- |------------------ | --------
+`auth_type` | `string` | Configures how to authenticate to the K8s API server. | `"none"` | no
+`context` | `string` | Override the current context when `auth_type` is set to `"kubeConfig"`. | `""` | no
+`node_from_env_var` | `string` | The name of an environment variable from which to retrieve the node name. | `"K8S_NODE_NAME"` | no
+
+The "get" and "list" permissions are required:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: grafana-agent
+rules:
+ - apiGroups: [""]
+ resources: ["nodes"]
+ verbs: ["get", "list"]
+```
+
+`auth_type` can be set to one of the following:
+* `none`: no authentication.
+* `serviceAccount`: use the standard service account token provided to the {{< param "PRODUCT_ROOT_NAME" >}} pod.
+* `kubeConfig`: use credentials from `~/.kube/config`.
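+
+For example, a sketch that authenticates using the pod's service account token; the `otelcol.exporter.otlp.default` reference is illustrative:
+
+```river
+otelcol.processor.resourcedetection "default" {
+  detectors = ["kubernetes_node"]
+
+  kubernetes_node {
+    // Use the standard service account token mounted into the pod.
+    auth_type = "serviceAccount"
+  }
+
+  output {
+    traces = [otelcol.exporter.otlp.default.input]
+  }
+}
+```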
+
+The `kubernetes_node` block supports the following blocks:
+
+Block | Description | Required
+---------------------------------------------- | ------------------------------------------------- | --------
+[resource_attributes](#kubernetes_node--resource_attributes) | Configures which resource attributes to add. | no
+
+#### kubernetes_node > resource_attributes
+
+The `resource_attributes` block supports the following blocks:
+
+Block | Description | Required
+------------------------------ | ------------------------------------------------------------------------------------------ | --------
+[k8s.node.name][res-attr-cfg] | Toggles the `k8s.node.name` resource attribute. Sets `enabled` to `true` by default. | no
+[k8s.node.uid][res-attr-cfg] | Toggles the `k8s.node.uid` resource attribute. Sets `enabled` to `true` by default. | no
+
+## Common configuration
+
+### Resource attribute config
+
+This block describes how to configure resource attributes such as `k8s.node.name` and `azure.vm.name`.
+Every block is configured using the same set of attributes.
+Only the default values for those attributes might differ across resource attributes.
+For example, some resource attributes have `enabled` set to `true` by default, whereas others don't.
+
+The following attributes are supported:
+
+Attribute | Type | Description | Default | Required
+--------- | ------- | ----------------------------------------------------------------------------------- |------------- | --------
+`enabled` | `bool` | Toggles whether to add the resource attribute to the span, log, or metric resource. | _See below_ | no
+
+To see the default value for `enabled`, refer to the tables in the sections above that list the resource attribute blocks.
+The "Description" column states either...
+
+> Sets `enabled` to `true` by default.
+
+... or:
+
+> Sets `enabled` to `false` by default.
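+
+For example, a sketch that enables the `host.id` attribute for the `system` detector, which has `enabled` set to `false` by default; the `otelcol.exporter.otlp.default` reference is illustrative:
+
+```river
+otelcol.processor.resourcedetection "default" {
+  detectors = ["system"]
+
+  system {
+    resource_attributes {
+      // Override the default of enabled = false for this attribute.
+      host.id { enabled = true }
+    }
+  }
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+  }
+}
+```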
+
+## Exported fields
+
+The following fields are exported and can be referenced by other components:
+
+Name | Type | Description
+---- | ---- | -----------
+`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.
+
+`input` accepts OTLP-formatted data for any telemetry signal of these types:
+* logs
+* metrics
+* traces
+
+## Component health
+
+`otelcol.processor.resourcedetection` is only reported as unhealthy if given an invalid
+configuration.
+
+## Debug information
+
+`otelcol.processor.resourcedetection` doesn't expose any component-specific debug
+information.
+
+## Examples
+
+### env detector
+
+If you set up an `OTEL_RESOURCE_ATTRIBUTES` environment variable with a value of `TestKey=TestValue`,
+then all logs, metrics, and traces have a resource attribute with a key of `TestKey` and a value of `TestValue`.
+
+```river
+otelcol.processor.resourcedetection "default" {
+ detectors = ["env"]
+
+ output {
+ logs = [otelcol.exporter.otlp.default.input]
+ metrics = [otelcol.exporter.otlp.default.input]
+ traces = [otelcol.exporter.otlp.default.input]
+ }
+}
+```
+
+### env and ec2
+
+You don't need to include an `ec2 {}` River block.
+The `ec2` defaults are applied automatically, as specified in [ec2][].
+
+```river
+otelcol.processor.resourcedetection "default" {
+ detectors = ["env", "ec2"]
+
+ output {
+ logs = [otelcol.exporter.otlp.default.input]
+ metrics = [otelcol.exporter.otlp.default.input]
+ traces = [otelcol.exporter.otlp.default.input]
+ }
+}
+```
+
+### ec2 with default resource attributes
+
+You don't need to include an `ec2 {}` River block.
+The `ec2` defaults are applied automatically, as specified in [ec2][].
+
+```river
+otelcol.processor.resourcedetection "default" {
+ detectors = ["ec2"]
+
+ output {
+ logs = [otelcol.exporter.otlp.default.input]
+ metrics = [otelcol.exporter.otlp.default.input]
+ traces = [otelcol.exporter.otlp.default.input]
+ }
+}
+```
+
+### ec2 with explicit resource attributes
+
+```river
+otelcol.processor.resourcedetection "default" {
+ detectors = ["ec2"]
+ ec2 {
+ tags = ["^tag1$", "^tag2$", "^label.*$"]
+ resource_attributes {
+ cloud.account.id { enabled = true }
+ cloud.availability_zone { enabled = true }
+ cloud.platform { enabled = true }
+ cloud.provider { enabled = true }
+ cloud.region { enabled = true }
+ host.id { enabled = true }
+ host.image.id { enabled = false }
+ host.name { enabled = false }
+ host.type { enabled = false }
+ }
+ }
+
+ output {
+ logs = [otelcol.exporter.otlp.default.input]
+ metrics = [otelcol.exporter.otlp.default.input]
+ traces = [otelcol.exporter.otlp.default.input]
+ }
+}
+```
+
+### kubernetes_node
+
+This example uses the default `node_from_env_var` option of `K8S_NODE_NAME`.
+
+You don't need to include a `kubernetes_node {}` River block.
+The `kubernetes_node` defaults are applied automatically, as specified in [kubernetes_node][].
+
+```river
+otelcol.processor.resourcedetection "default" {
+ detectors = ["kubernetes_node"]
+
+ output {
+ logs = [otelcol.exporter.otlp.default.input]
+ metrics = [otelcol.exporter.otlp.default.input]
+ traces = [otelcol.exporter.otlp.default.input]
+ }
+}
+```
+
+You need to add this to your workload:
+
+```yaml
+ env:
+ - name: K8S_NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+```
+
+### kubernetes_node with a custom environment variable
+
+This example uses a custom `node_from_env_var` set to `my_custom_var`.
+
+```river
+otelcol.processor.resourcedetection "default" {
+ detectors = ["kubernetes_node"]
+ kubernetes_node {
+ node_from_env_var = "my_custom_var"
+ }
+
+ output {
+ logs = [otelcol.exporter.otlp.default.input]
+ metrics = [otelcol.exporter.otlp.default.input]
+ traces = [otelcol.exporter.otlp.default.input]
+ }
+}
+```
+
+You need to add this to your workload:
+
+```yaml
+ env:
+ - name: my_custom_var
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+```
+
+
+## Compatible components
+
+`otelcol.processor.resourcedetection` can accept arguments from the following components:
+
+- Components that export [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-exporters" >}})
+
+`otelcol.processor.resourcedetection` has exports that can be consumed by the following components:
+
+- Components that consume [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-consumers" >}})
+
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
+
+
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/otelcol.processor.span.md b/docs/sources/flow/reference/components/otelcol.processor.span.md
index fe6985881007..ac909575cb1a 100644
--- a/docs/sources/flow/reference/components/otelcol.processor.span.md
+++ b/docs/sources/flow/reference/components/otelcol.processor.span.md
@@ -400,11 +400,9 @@ otelcol.processor.span "default" {
- Components that consume [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md b/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md
index b6c6ccfdc0f7..cb651d67e4f0 100644
--- a/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md
+++ b/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md
@@ -565,11 +565,9 @@ otelcol.exporter.otlp "production" {
- Components that consume [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/otelcol.processor.transform.md b/docs/sources/flow/reference/components/otelcol.processor.transform.md
index 81967bb11c24..9a70c07e9509 100644
--- a/docs/sources/flow/reference/components/otelcol.processor.transform.md
+++ b/docs/sources/flow/reference/components/otelcol.processor.transform.md
@@ -42,7 +42,7 @@ there is also a set of metrics-only functions:
* `end_time_unix_nano - start_time_unix_nano`
* `sum([1, 2, 3, 4]) + (10 / 1) - 1`
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
There are two ways of inputting strings in River configuration files:
* Using quotation marks ([normal River strings][river-strings]). Characters such as `\` and
`"` must be escaped by preceding them with a `\` character.
@@ -57,17 +57,17 @@ Raw strings are generally more convenient for writing OTTL statements.
[river-strings]: {{< relref "../../concepts/config-language/expressions/types_and_values.md/#strings" >}}
[river-raw-strings]: {{< relref "../../concepts/config-language/expressions/types_and_values.md/#raw-strings" >}}
-{{% /admonition %}}
+{{< /admonition >}}
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
`otelcol.processor.transform` is a wrapper over the upstream
OpenTelemetry Collector `transform` processor. If necessary, bug reports or feature requests
will be redirected to the upstream repository.
-{{% /admonition %}}
+{{< /admonition >}}
You can specify multiple `otelcol.processor.transform` components by giving them different labels.
-{{% admonition type="warning" %}}
+{{< admonition type="warning" >}}
`otelcol.processor.transform` allows you to modify all aspects of your telemetry. Some specific risks are given below,
but this is not an exhaustive list. It is important to understand your data before using this processor.
@@ -88,7 +88,7 @@ to a new metric data type or can be used to create new metrics.
[Orphaned Telemetry]: https://github.com/open-telemetry/opentelemetry-collector/blob/{{< param "OTEL_VERSION" >}}/docs/standard-warnings.md#orphaned-telemetry
[no-op]: https://en.wikipedia.org/wiki/NOP_(code)
[metrics data model]: https://github.com/open-telemetry/opentelemetry-specification/blob/main//specification/metrics/data-model.md
-{{% /admonition %}}
+{{< /admonition >}}
## Usage
@@ -602,11 +602,9 @@ each `"` with a `\"`, and each `\` with a `\\` inside a [normal][river-strings]
- Components that consume [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/otelcol.receiver.jaeger.md b/docs/sources/flow/reference/components/otelcol.receiver.jaeger.md
index c19bb03dba77..4f584319fb6c 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.jaeger.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.jaeger.md
@@ -287,11 +287,9 @@ otelcol.exporter.otlp "default" {
- Components that export [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/otelcol.receiver.kafka.md b/docs/sources/flow/reference/components/otelcol.receiver.kafka.md
index 28588420609d..abb89ef82fb3 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.kafka.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.kafka.md
@@ -339,11 +339,9 @@ otelcol.exporter.otlp "default" {
- Components that export [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/otelcol.receiver.loki.md b/docs/sources/flow/reference/components/otelcol.receiver.loki.md
index 31d9877da882..c06b82cbe3dc 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.loki.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.loki.md
@@ -112,11 +112,9 @@ otelcol.exporter.otlp "default" {
- Components that consume [Loki `LogsReceiver`]({{< relref "../compatibility/#loki-logsreceiver-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/otelcol.receiver.opencensus.md b/docs/sources/flow/reference/components/otelcol.receiver.opencensus.md
index a6d7a5bb3ae3..ac694d890712 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.opencensus.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.opencensus.md
@@ -219,11 +219,9 @@ otelcol.exporter.otlp "default" {
- Components that export [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/otelcol.receiver.otlp.md b/docs/sources/flow/reference/components/otelcol.receiver.otlp.md
index 134098ed2de4..862562508afd 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.otlp.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.otlp.md
@@ -257,11 +257,9 @@ otelcol.exporter.otlp "default" {
- Components that export [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md b/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md
index d0723aad80c4..7611b0955a4b 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md
@@ -111,11 +111,9 @@ otelcol.exporter.otlp "default" {
- Components that consume [Prometheus `MetricsReceiver`]({{< relref "../compatibility/#prometheus-metricsreceiver-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/otelcol.receiver.vcenter.md b/docs/sources/flow/reference/components/otelcol.receiver.vcenter.md
index 11e6a0485e09..54891a882da4 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.vcenter.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.vcenter.md
@@ -230,11 +230,9 @@ otelcol.exporter.otlp "default" {
- Components that export [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/otelcol.receiver.zipkin.md b/docs/sources/flow/reference/components/otelcol.receiver.zipkin.md
index 2dd3d8a9ccfb..5d6c903036d1 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.zipkin.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.zipkin.md
@@ -152,11 +152,9 @@ otelcol.exporter.otlp "default" {
- Components that export [OpenTelemetry `otelcol.Consumer`]({{< relref "../compatibility/#opentelemetry-otelcolconsumer-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.agent.md b/docs/sources/flow/reference/components/prometheus.exporter.agent.md
index cb2dd5cda361..a4575bb08c1b 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.agent.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.agent.md
@@ -8,7 +8,8 @@ title: prometheus.exporter.agent
---
# prometheus.exporter.agent
-The `prometheus.exporter.agent` component collects and exposes metrics about the agent itself.
+
+The `prometheus.exporter.agent` component collects and exposes metrics about {{< param "PRODUCT_NAME" >}} itself.
## Usage
@@ -18,6 +19,7 @@ prometheus.exporter.agent "agent" {
```
## Arguments
+
`prometheus.exporter.agent` accepts no arguments.
## Exported fields
@@ -31,12 +33,12 @@ an invalid configuration.
## Debug information
-`prometheus.exporter.agent` does not expose any component-specific
+`prometheus.exporter.agent` doesn't expose any component-specific
debug information.
## Debug metrics
-`prometheus.exporter.agent` does not expose any component-specific
+`prometheus.exporter.agent` doesn't expose any component-specific
debug metrics.
## Example
@@ -80,11 +82,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.apache.md b/docs/sources/flow/reference/components/prometheus.exporter.apache.md
index 08f19fa2d1d9..d3f786083b37 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.apache.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.apache.md
@@ -96,11 +96,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.azure.md b/docs/sources/flow/reference/components/prometheus.exporter.azure.md
index ea8fa08cd912..1835e5e24745 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.azure.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.azure.md
@@ -180,11 +180,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.blackbox.md b/docs/sources/flow/reference/components/prometheus.exporter.blackbox.md
index 23f334b2f1a6..fb2a2653e983 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.blackbox.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.blackbox.md
@@ -204,11 +204,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md b/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md
index 02c923ebe898..b6cdf1f98e21 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md
@@ -135,11 +135,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md b/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md
index 2c1682a5fccc..0aad4bd0d8e7 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md
@@ -147,9 +147,9 @@ You can use the following blocks in`prometheus.exporter.cloudwatch` to configure
| static > metric | [metric][] | Configures the list of metrics the job should scrape. Multiple metrics can be defined inside one job. | yes |
| decoupled_scraping | [decoupled_scraping][] | Configures the decoupled scraping feature to retrieve metrics on a schedule and return the cached metrics. | no |
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
The `static` and `discovery` blocks are marked as not required, but you must configure at least one static or discovery job.
-{{% /admonition %}}
+{{< /admonition >}}
[discovery]: #discovery-block
[static]: #static-block
@@ -463,11 +463,9 @@ discovery job, the `type` field of each `discovery_job` must match either the de
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.consul.md b/docs/sources/flow/reference/components/prometheus.exporter.consul.md
index 81185047459e..6a38931ad0d0 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.consul.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.consul.md
@@ -106,11 +106,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.dnsmasq.md b/docs/sources/flow/reference/components/prometheus.exporter.dnsmasq.md
index 2f22e0048807..bf60a1fee166 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.dnsmasq.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.dnsmasq.md
@@ -96,11 +96,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.elasticsearch.md b/docs/sources/flow/reference/components/prometheus.exporter.elasticsearch.md
index 6feb9c683eeb..f7150a3d41b4 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.elasticsearch.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.elasticsearch.md
@@ -15,10 +15,10 @@ The `prometheus.exporter.elasticsearch` component embeds
[elasticsearch_exporter](https://github.com/prometheus-community/elasticsearch_exporter) for
the collection of metrics from ElasticSearch servers.
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
Currently, an Agent can only collect metrics from a single ElasticSearch server.
However, the exporter can collect the metrics from all nodes through that configured server.
-{{% /admonition %}}
+{{< /admonition >}}
We strongly recommend that you configure a separate user for the Agent, and give it only the security privileges
strictly necessary for monitoring your node, as per the [official documentation](https://github.com/prometheus-community/elasticsearch_exporter#elasticsearch-7x-security-privileges).
@@ -139,11 +139,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.gcp.md b/docs/sources/flow/reference/components/prometheus.exporter.gcp.md
index e9a3d7ab2786..b7ff3158c372 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.gcp.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.gcp.md
@@ -59,9 +59,9 @@ prometheus.exporter.gcp "pubsub" {
You can use the following arguments to configure the exporter's behavior.
Omitted fields take their default values.
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
If you supply a list of strings for the `extra_filters` argument, any string values within a particular filter string must be enclosed in escaped double quotes. For example, `loadbalancing.googleapis.com:resource.labels.backend_target_name="sample-value"` must be encoded as `"loadbalancing.googleapis.com:resource.labels.backend_target_name=\"sample-value\""` in the River config.
-{{% /admonition %}}
+{{< /admonition >}}
| Name | Type | Description | Default | Required |
| ------------------------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- |
@@ -182,11 +182,9 @@ prometheus.exporter.gcp "lb_subset_with_filter" {
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.github.md b/docs/sources/flow/reference/components/prometheus.exporter.github.md
index 753458562ab5..662617299da4 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.github.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.github.md
@@ -104,11 +104,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.kafka.md b/docs/sources/flow/reference/components/prometheus.exporter.kafka.md
index 59400eea67fe..1de06212f557 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.kafka.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.kafka.md
@@ -116,11 +116,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.memcached.md b/docs/sources/flow/reference/components/prometheus.exporter.memcached.md
index bd158d76a996..7e9cc9a53d87 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.memcached.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.memcached.md
@@ -108,11 +108,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md b/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md
index 1aa855542c06..4301eee4f4d2 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md
@@ -13,9 +13,9 @@ title: prometheus.exporter.mongodb
The `prometheus.exporter.mongodb` component embeds percona's [`mongodb_exporter`](https://github.com/percona/mongodb_exporter).
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
This exporter doesn't collect metrics from multiple nodes. For this integration to work properly, you must connect each node of your MongoDB cluster to a {{< param "PRODUCT_NAME" >}} instance.
-{{% /admonition %}}
+{{< /admonition >}}
We strongly recommend configuring a separate user for {{< param "PRODUCT_NAME" >}}, giving it only the strictly mandatory security privileges necessary for monitoring your node.
Refer to the [Percona documentation](https://github.com/percona/mongodb_exporter#permissions) for more information.
@@ -97,11 +97,9 @@ prometheus.remote_write "default" {
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.mssql.md b/docs/sources/flow/reference/components/prometheus.exporter.mssql.md
index e2bcad76830e..6db00954f332 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.mssql.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.mssql.md
@@ -339,11 +339,9 @@ queries:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.mysql.md b/docs/sources/flow/reference/components/prometheus.exporter.mysql.md
index 7c0cb90ae69f..edc1c1a5a49f 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.mysql.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.mysql.md
@@ -221,11 +221,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.oracledb.md b/docs/sources/flow/reference/components/prometheus.exporter.oracledb.md
index 10712ba290d5..4053acc074b0 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.oracledb.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.oracledb.md
@@ -109,11 +109,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.postgres.md b/docs/sources/flow/reference/components/prometheus.exporter.postgres.md
index 39cfd8770108..f50e9fd77709 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.postgres.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.postgres.md
@@ -222,11 +222,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.process.md b/docs/sources/flow/reference/components/prometheus.exporter.process.md
index ddd315f28797..da135994fd7b 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.process.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.process.md
@@ -142,11 +142,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.redis.md b/docs/sources/flow/reference/components/prometheus.exporter.redis.md
index cebbbdd02906..ccb114ea8db5 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.redis.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.redis.md
@@ -140,11 +140,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.snmp.md b/docs/sources/flow/reference/components/prometheus.exporter.snmp.md
index 1e69da7fb941..5bd05efed907 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.snmp.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.snmp.md
@@ -14,9 +14,9 @@ title: prometheus.exporter.snmp
The `prometheus.exporter.snmp` component embeds
[`snmp_exporter`](https://github.com/prometheus/snmp_exporter). `snmp_exporter` lets you collect SNMP data and expose it as Prometheus metrics.
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
`prometheus.exporter.snmp` uses the latest configuration introduced in version 0.23 of the Prometheus `snmp_exporter`.
-{{% /admonition %}}
+{{< /admonition >}}
## Usage
@@ -40,7 +40,8 @@ Omitted fields take their default values.
| `config_file` | `string` | SNMP configuration file defining custom modules. | | no |
| `config` | `string` or `secret` | SNMP configuration as inline string. | | no |
-The `config_file` argument points to a YAML file defining which snmp_exporter modules to use. See [snmp_exporter](https://github.com/prometheus/snmp_exporter#generating-configuration) for details on how to generate a config file.
+The `config_file` argument points to a YAML file defining which snmp_exporter modules to use.
+Refer to [snmp_exporter](https://github.com/prometheus/snmp_exporter#generating-configuration) for details on how to generate a configuration file.
The `config` argument must be a YAML document, passed as a string, that defines which SNMP modules and auths to use.
`config` is typically loaded by using the exports of another component. For example,
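A minimal sketch of this pattern, assuming the YAML configuration lives on disk (the `local.file` label `snmp_config`, the file path, and the exporter label are hypothetical, not taken from this page):

```river
// Hypothetical sketch: load a generated snmp_exporter YAML configuration
// from disk and pass its contents to the exporter via the `config` argument.
local.file "snmp_config" {
  filename = "/etc/agent/snmp.yml"
}

prometheus.exporter.snmp "example" {
  config = local.file.snmp_config.content
}
```

Because `config` references the `content` export of `local.file`, the exporter picks up changes whenever the file is reloaded.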
@@ -207,11 +208,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.snowflake.md b/docs/sources/flow/reference/components/prometheus.exporter.snowflake.md
index f384fd1a6805..9211f9424cbe 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.snowflake.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.snowflake.md
@@ -110,11 +110,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.squid.md b/docs/sources/flow/reference/components/prometheus.exporter.squid.md
index 49a8639c129d..957297d4af4e 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.squid.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.squid.md
@@ -102,11 +102,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.statsd.md b/docs/sources/flow/reference/components/prometheus.exporter.statsd.md
index 2e00b8db35b0..d7b2e7fc48df 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.statsd.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.statsd.md
@@ -135,11 +135,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.unix.md b/docs/sources/flow/reference/components/prometheus.exporter.unix.md
index ab2d88c8175e..7f3f4ca935cf 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.unix.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.unix.md
@@ -418,11 +418,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.vsphere.md b/docs/sources/flow/reference/components/prometheus.exporter.vsphere.md
index 61c951e9c71d..499805179f11 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.vsphere.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.vsphere.md
@@ -98,11 +98,9 @@ prometheus.remote_write "default" {
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.windows.md b/docs/sources/flow/reference/components/prometheus.exporter.windows.md
index 8042b5458d1c..14e22d13d2b7 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.windows.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.windows.md
@@ -14,12 +14,13 @@ The `prometheus.exporter.windows` component embeds
[windows_exporter](https://github.com/prometheus-community/windows_exporter) which exposes a
wide variety of hardware and OS metrics for Windows-based systems.
-The `windows_exporter` itself comprises various _collectors_, which can be
-enabled and disabled at will. For more information on collectors, refer to the
-[`collectors-list`](#collectors-list) section.
+The `windows_exporter` itself comprises various _collectors_, which you can enable and disable as needed.
+For more information on collectors, refer to the [`collectors-list`](#collectors-list) section.
-**Note** The black and white list config options are available for backwards compatibility but are deprecated. The include
-and exclude config options are preferred going forward.
+{{< admonition type="note" >}}
+The blacklist and whitelist configuration options are deprecated and are available only for backwards compatibility.
+Use the include and exclude configuration options instead.
+{{< /admonition >}}
## Usage
@@ -29,17 +30,18 @@ prometheus.exporter.windows "LABEL" {
```
## Arguments
+
The following arguments can be used to configure the exporter's behavior.
All arguments are optional. Omitted fields take their default values.
-| Name | Type | Description | Default | Required |
-|----------------------|------------------|-------------------------------------------|---------|----------|
-| `enabled_collectors` | `list(string)` | List of collectors to enable. | `["cpu","cs","logical_disk","net","os","service","system"]` | no |
-| `timeout` | `duration` | Configure timeout for collecting metrics. | `4m` | no |
+| Name | Type | Description | Default | Required |
+|----------------------|----------------|-------------------------------------------|-------------------------------------------------------------|----------|
+| `enabled_collectors` | `list(string)` | List of collectors to enable. | `["cpu","cs","logical_disk","net","os","service","system"]` | no |
+| `timeout` | `duration` | Configure timeout for collecting metrics. | `4m` | no |
-`enabled_collectors` defines a hand-picked list of enabled-by-default
-collectors. If set, anything not provided in that list is disabled by
-default. See the [Collectors list](#collectors-list) for the default set.
+`enabled_collectors` defines a hand-picked list of enabled-by-default collectors.
+If set, anything not provided in that list is disabled by default.
+Refer to the [Collectors list](#collectors-list) for the default set.
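A minimal sketch of the behavior described above (the component label `example` and the chosen collectors are illustrative):

```river
// Sketch: enable only the cpu and logical_disk collectors.
// Because enabled_collectors is set, every collector not listed here is disabled.
prometheus.exporter.windows "example" {
  enabled_collectors = ["cpu", "logical_disk"]
  timeout            = "4m"
}
```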
## Blocks
@@ -75,15 +77,17 @@ text_file | [text_file][] | Configures the text_file collector. |
[text_file]: #textfile-block
### dfsr block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
+
+Name | Type | Description | Default | Required
+-----------------|----------------|------------------------------------------------------|------------------------------------|---------
`source_enabled` | `list(string)` | Comma-separated list of DFSR Perflib sources to use. | `["connection","folder","volume"]` | no
### exchange block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
-`enabled_list` | `string` | Comma-separated list of collectors to use. | `""` | no
+
+Name | Type | Description | Default | Required
+---------------|----------|--------------------------------------------|---------|---------
+`enabled_list` | `string` | Comma-separated list of collectors to use. | `""` | no
The collectors specified by `enabled_list` can include the following:
@@ -101,86 +105,96 @@ For example, `enabled_list` may be set to `"AvailabilityService,OutlookWebAccess
### iis block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
-`app_exclude` | `string` | Regular expression of applications to ignore. | `""` | no
-`app_include` | `string` | Regular expression of applications to report on. | `".*"` | no
-`site_exclude` | `string` | Regular expression of sites to ignore. | `""` | no
-`site_include` | `string` | Regular expression of sites to report on. | `".*"` | no
+
+Name | Type | Description | Default | Required
+---------------|----------|--------------------------------------------------|---------|---------
+`app_exclude` | `string` | Regular expression of applications to ignore. | `""` | no
+`app_include` | `string` | Regular expression of applications to report on. | `".*"` | no
+`site_exclude` | `string` | Regular expression of sites to ignore. | `""` | no
+`site_include` | `string` | Regular expression of sites to report on. | `".*"` | no
### logical_disk block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
-`exclude` | `string` | Regular expression of volumes to exclude. | `""` | no
-`include` | `string` | Regular expression of volumes to include. | `".+"` | no
+
+Name | Type | Description | Default | Required
+----------|----------|-------------------------------------------|---------|---------
+`exclude` | `string` | Regular expression of volumes to exclude. | `""` | no
+`include` | `string` | Regular expression of volumes to include. | `".+"` | no
Volume names must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude` to be included.
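As an illustrative sketch of this include/exclude matching (the regular expressions and component label are hypothetical):

```river
// Hypothetical sketch: report only the C: and D: volumes, and skip any
// volume whose name matches the exclude pattern.
prometheus.exporter.windows "example" {
  enabled_collectors = ["logical_disk"]

  logical_disk {
    include = "(?i)^(C|D):$"
    exclude = "HarddiskVolume.*"
  }
}
```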
### msmq block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
-`where_clause` | `string` | WQL 'where' clause to use in WMI metrics query. | `""` | no
+
+Name | Type | Description | Default | Required
+---------------|----------|-------------------------------------------------|---------|---------
+`where_clause` | `string` | WQL 'where' clause to use in WMI metrics query. | `""` | no
Specifying `where_clause` is useful to limit the response to the MSMQs you specify, reducing the size of the response.
### mssql block
+
Name | Type | Description | Default | Required
---- |----------| ----------- | ------- | --------
`enabled_classes` | `list(string)` | Comma-separated list of MSSQL WMI classes to use. | `["accessmethods", "availreplica", "bufman", "databases", "dbreplica", "genstats", "locks", "memmgr", "sqlstats", "sqlerrorstransactions"]` | no
### network block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
-`exclude` | `string` | Regular expression of NIC:s to exclude. | `""` | no
-`include` | `string` | Regular expression of NIC:s to include. | `".*"` | no
+
+Name | Type | Description | Default | Required
+----------|----------|-----------------------------------------|---------|---------
+`exclude` | `string` | Regular expression of NIC:s to exclude. | `""` | no
+`include` | `string` | Regular expression of NIC:s to include. | `".*"` | no
NIC names must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude` to be included.
### process block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
-`exclude` | `string` | Regular expression of processes to exclude. | `""` | no
-`include` | `string` | Regular expression of processes to include. | `".*"` | no
+
+Name | Type | Description | Default | Required
+----------|----------|---------------------------------------------|---------|---------
+`exclude` | `string` | Regular expression of processes to exclude. | `""` | no
+`include` | `string` | Regular expression of processes to include. | `".*"` | no
Processes must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude` to be included.
### scheduled_task block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
-`exclude` | `string` | Regexp of tasks to exclude. | `""` | no
-`include` | `string` | Regexp of tasks to include. | `".+"` | no
+
+Name | Type | Description | Default | Required
+----------|----------|-----------------------------|---------|---------
+`exclude` | `string` | Regexp of tasks to exclude. | `""` | no
+`include` | `string` | Regexp of tasks to include. | `".+"` | no
For a server name to be included, it must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude`.
### service block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
-`use_api` | `string` | Use API calls to collect service data instead of WMI. | `false` | no
-`where_clause` | `string` | WQL 'where' clause to use in WMI metrics query. | `""` | no
+
+Name | Type | Description | Default | Required
+---------------|----------|-------------------------------------------------------|---------|---------
+`use_api` | `string` | Use API calls to collect service data instead of WMI. | `false` | no
+`where_clause` | `string` | WQL 'where' clause to use in WMI metrics query. | `""` | no
The `where_clause` argument can be used to limit the query to the services you specify, reducing the size of the response.
If `use_api` is enabled, `where_clause` has no effect.
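As an illustration (service names are hypothetical), a `service` block that narrows the WMI query to two services might look like:

```river
prometheus.exporter.windows "example" {
  enabled_collectors = ["service"]

  service {
    // WQL 'where' clause; only rows for these two services are returned.
    where_clause = "Name='Dnscache' OR Name='wmiApSrv'"
  }
}
```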
### smtp block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
-`exclude` | `string` | Regexp of virtual servers to ignore. | | no
-`include` | `string` | Regexp of virtual servers to include. | `".+"` | no
+
+Name | Type | Description | Default | Required
+----------|----------|---------------------------------------|---------|---------
+`exclude` | `string` | Regexp of virtual servers to ignore. | | no
+`include` | `string` | Regexp of virtual servers to include. | `".+"` | no
For a server name to be included, it must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude`.
### text_file block
-Name | Type | Description | Default | Required
----- |----------| ----------- | ------- | --------
+
+Name | Type | Description | Default | Required
+----------------------|----------|----------------------------------------------------|-------------------------------------------------------|---------
`text_file_directory` | `string` | The directory containing the files to be ingested. | `C:\Program Files\Grafana Agent Flow\textfile_inputs` | no
When `text_file_directory` is set, only files with the extension `.prom` inside the specified directory are read. Each `.prom` file found must end with an empty line feed to work properly.
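A sketch of overriding the default directory (path is hypothetical; the collector name is assumed to follow windows_exporter naming):

```river
prometheus.exporter.windows "example" {
  enabled_collectors = ["textfile"]

  text_file {
    // Only `*.prom` files in this directory are read, and each file
    // must end with a trailing newline.
    text_file_directory = "C:\\custom\\textfile_inputs"
  }
}
```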
@@ -270,12 +284,12 @@ Name | Description | Enabled by default
[vmware_blast](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.vmware_blast.md) | VMware Blast session metrics |
[vmware](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.vmware.md) | Performance counters installed by the Vmware Guest agent |
-See the linked documentation on each collector for more information on reported metrics, configuration settings and usage examples.
+Refer to the linked documentation on each collector for more information on reported metrics, configuration settings, and usage examples.
-{{% admonition type="caution" %}}
-Certain collectors will cause {{< param "PRODUCT_ROOT_NAME" >}} to crash if those collectors are used and the required infrastructure is not installed.
-These include but are not limited to mscluster_*, vmware, nps, dns, msmq, teradici_pcoip, ad, hyperv, and scheduled_task.
-{{% /admonition %}}
+{{< admonition type="caution" >}}
+Certain collectors will cause {{< param "PRODUCT_ROOT_NAME" >}} to crash if those collectors are used and the required infrastructure isn't installed.
+These include but aren't limited to mscluster_*, vmware, nps, dns, msmq, teradici_pcoip, ad, hyperv, and scheduled_task.
+{{< /admonition >}}
## Example
@@ -317,11 +331,9 @@ Replace the following:
- Components that consume [Targets]({{< relref "../compatibility/#targets-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md b/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md
index fa324640d0ee..b8ef773567ca 100644
--- a/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md
+++ b/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md
@@ -265,11 +265,9 @@ prometheus.operator.podmonitors "pods" {
- Components that export [Prometheus `MetricsReceiver`]({{< relref "../compatibility/#prometheus-metricsreceiver-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/prometheus.operator.probes.md b/docs/sources/flow/reference/components/prometheus.operator.probes.md
index 256634a88438..c8fddb96e1dd 100644
--- a/docs/sources/flow/reference/components/prometheus.operator.probes.md
+++ b/docs/sources/flow/reference/components/prometheus.operator.probes.md
@@ -267,11 +267,9 @@ prometheus.operator.probes "probes" {
- Components that export [Prometheus `MetricsReceiver`]({{< relref "../compatibility/#prometheus-metricsreceiver-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md b/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md
index 8b2e0ce29cdf..29a6414a6339 100644
--- a/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md
+++ b/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md
@@ -267,11 +267,9 @@ prometheus.operator.servicemonitors "services" {
- Components that export [Prometheus `MetricsReceiver`]({{< relref "../compatibility/#prometheus-metricsreceiver-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/prometheus.receive_http.md b/docs/sources/flow/reference/components/prometheus.receive_http.md
index d48985cc3f18..38d43cef5067 100644
--- a/docs/sources/flow/reference/components/prometheus.receive_http.md
+++ b/docs/sources/flow/reference/components/prometheus.receive_http.md
@@ -138,11 +138,9 @@ prometheus.remote_write "local" {
- Components that export [Prometheus `MetricsReceiver`]({{< relref "../compatibility/#prometheus-metricsreceiver-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/prometheus.relabel.md b/docs/sources/flow/reference/components/prometheus.relabel.md
index 65cb02394d4a..22d6c0a42d28 100644
--- a/docs/sources/flow/reference/components/prometheus.relabel.md
+++ b/docs/sources/flow/reference/components/prometheus.relabel.md
@@ -181,11 +181,9 @@ The two resulting metrics are then propagated to each receiver defined in the
- Components that consume [Prometheus `MetricsReceiver`]({{< relref "../compatibility/#prometheus-metricsreceiver-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/prometheus.remote_write.md b/docs/sources/flow/reference/components/prometheus.remote_write.md
index f869343e0919..5664cd10aa6e 100644
--- a/docs/sources/flow/reference/components/prometheus.remote_write.md
+++ b/docs/sources/flow/reference/components/prometheus.remote_write.md
@@ -418,11 +418,9 @@ Any labels that start with `__` will be removed before sending to the endpoint.
- Components that consume [Prometheus `MetricsReceiver`]({{< relref "../compatibility/#prometheus-metricsreceiver-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/prometheus.scrape.md b/docs/sources/flow/reference/components/prometheus.scrape.md
index 8adf775687f1..765eb084b32f 100644
--- a/docs/sources/flow/reference/components/prometheus.scrape.md
+++ b/docs/sources/flow/reference/components/prometheus.scrape.md
@@ -298,11 +298,9 @@ Special labels added after a scrape
- Components that export [Prometheus `MetricsReceiver`]({{< relref "../compatibility/#prometheus-metricsreceiver-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/pyroscope.ebpf.md b/docs/sources/flow/reference/components/pyroscope.ebpf.md
index a324e71293ab..590ad574baf9 100644
--- a/docs/sources/flow/reference/components/pyroscope.ebpf.md
+++ b/docs/sources/flow/reference/components/pyroscope.ebpf.md
@@ -18,9 +18,9 @@ title: pyroscope.ebpf
`pyroscope.ebpf` configures an ebpf profiling job for the current host. The collected performance profiles are forwarded
to the list of receivers passed in `forward_to`.
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
To use the `pyroscope.ebpf` component you must run {{< param "PRODUCT_NAME" >}} as root and inside host pid namespace.
-{{% /admonition %}}
+{{< /admonition >}}
You can specify multiple `pyroscope.ebpf` components by giving them different labels; however, this is not recommended because
it can lead to additional memory and CPU usage.
@@ -95,16 +95,20 @@ can help you pin down a profiling target.
| `__name__` | pyroscope metric name. Defaults to `process_cpu`. |
| `__container_id__` | The container ID derived from target. |
-### Container ID
+### Targets
-Each collected stack trace is then associated with a specified target from the targets list, determined by a
-container ID. This association process involves checking the `__container_id__`, `__meta_docker_container_id`,
-and `__meta_kubernetes_pod_container_id` labels of a target against the `/proc/{pid}/cgroup` of a process.
+Each target in `targets` _must_ include one of the following special labels, and the label must correspond to the container or process being profiled:
-If a corresponding container ID is found, the stack traces are aggregated per target based on the container ID.
-If a container ID is not found, the stack trace is associated with a `default_target`.
+* `__container_id__`: The container ID.
+* `__meta_docker_container_id`: The ID of the Docker container.
+* `__meta_kubernetes_pod_container_id`: The ID of the Kubernetes pod container.
+* `__process_pid__`: The process ID.
-Any stack traces not associated with a listed target are ignored.
+Each process is then associated with a target from the `targets` list, matched by container ID or process PID.
+
+If a process's container ID matches a target's container ID label, the stack traces are aggregated per target based on the container ID.
+If a process's PID matches a target's process PID label, the stack traces are aggregated per target based on the process PID.
+Otherwise, the process is not profiled.
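The matching rules above can be sketched as a target list mixing both label kinds (IDs, PIDs, and service names below are hypothetical, and a `pyroscope.write` component labeled `backend` is assumed to exist):

```river
pyroscope.ebpf "instance" {
  forward_to = [pyroscope.write.backend.receiver]

  targets = [
    // Matched by container ID (placeholder value).
    {"__container_id__" = "0123456789abcdef", "service_name" = "web"},
    // Matched by process PID (placeholder value).
    {"__process_pid__" = "4242", "service_name" = "batch-job"},
  ]
}
```

Processes matching neither entry are ignored.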
### Service name
@@ -298,11 +302,9 @@ pyroscope.ebpf "default" {
- Components that export [Pyroscope `ProfilesReceiver`]({{< relref "../compatibility/#pyroscope-profilesreceiver-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/pyroscope.java.md b/docs/sources/flow/reference/components/pyroscope.java.md
index a7cdb518ac13..92407132e99d 100644
--- a/docs/sources/flow/reference/components/pyroscope.java.md
+++ b/docs/sources/flow/reference/components/pyroscope.java.md
@@ -16,9 +16,9 @@ title: pyroscope.java
`pyroscope.java` continuously profiles Java processes running on the local Linux OS
using [async-profiler](https://github.com/async-profiler/async-profiler).
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
To use the `pyroscope.java` component you must run {{< param "PRODUCT_NAME" >}} as root and inside host PID namespace.
-{{% /admonition %}}
+{{< /admonition >}}
## Usage
@@ -57,11 +57,29 @@ async-profiler binaries for both glibc and musl into the directory with the foll
After profiling of a process starts, the component detects the libc type and copies the corresponding `libAsyncProfiler.so` into the
target process's file system at the exact same path.
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
The `asprof` binary runs with root permissions.
If you change the `tmp_dir` configuration to something other than `/tmp`, then you must ensure that the
directory is only writable by root.
-{{% /admonition %}}
+{{< /admonition >}}
+
+#### `targets` argument
+
+The special `__process_pid__` label _must always_ be present and corresponds to the
+process PID that is used for profiling.
+
+Labels starting with a double underscore (`__`) are treated as _internal_, and are removed prior to scraping.
+
+The special label `service_name` is required and must always be present.
+If it is not specified, `pyroscope.java` attempts to infer it from
+one of the following sources, in this order:
+1. `__meta_kubernetes_pod_annotation_pyroscope_io_service_name` which is a `pyroscope.io/service_name` pod annotation.
+2. `__meta_kubernetes_namespace` and `__meta_kubernetes_pod_container_name`
+3. `__meta_docker_container_name`
+4. `__meta_dockerswarm_container_label_service_name` or `__meta_dockerswarm_service_name`
+
+If `service_name` is not specified and could not be inferred, then it is set to `unspecified`.
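Putting the two required labels together, a minimal sketch (PID and names hypothetical; a `pyroscope.write` component labeled `backend` is assumed):

```river
pyroscope.java "example" {
  targets = [
    // `__process_pid__` selects the JVM to profile;
    // `service_name` labels the resulting profiles.
    {"__process_pid__" = "1234", "service_name" = "checkout-service"},
  ]
  forward_to = [pyroscope.write.backend.receiver]
}
```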
+
## Blocks
The following blocks are supported inside the definition of
@@ -163,11 +181,9 @@ pyroscope.java "java" {
- Components that export [Pyroscope `ProfilesReceiver`]({{< relref "../compatibility/#pyroscope-profilesreceiver-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/components/pyroscope.scrape.md b/docs/sources/flow/reference/components/pyroscope.scrape.md
index 74c1fa30e873..35e7022df482 100644
--- a/docs/sources/flow/reference/components/pyroscope.scrape.md
+++ b/docs/sources/flow/reference/components/pyroscope.scrape.md
@@ -114,6 +114,7 @@ either of the following sources, in this order:
1. `__meta_kubernetes_pod_annotation_pyroscope_io_service_name` which is a `pyroscope.io/service_name` pod annotation.
2. `__meta_kubernetes_namespace` and `__meta_kubernetes_pod_container_name`
3. `__meta_docker_container_name`
+4. `__meta_dockerswarm_container_label_service_name` or `__meta_dockerswarm_service_name`
If `service_name` is not specified and could not be inferred, then it is set to `unspecified`.
@@ -522,11 +523,10 @@ discovery.http "dynamic_targets" {
}
pyroscope.scrape "local" {
- targets = [
- {"__address__" = "localhost:4100", "service_name"="pyroscope"},
+ targets = concat([
+ {"__address__" = "localhost:4040", "service_name"="pyroscope"},
{"__address__" = "localhost:12345", "service_name"="agent"},
- discovery.http.dynamic_targets.targets,
- ]
+ ], discovery.http.dynamic_targets.targets)
forward_to = [pyroscope.write.local.receiver]
}
@@ -589,11 +589,9 @@ http://localhost:12345/debug/pprof/mutex
- Components that export [Pyroscope `ProfilesReceiver`]({{< relref "../compatibility/#pyroscope-profilesreceiver-exporters" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
diff --git a/docs/sources/flow/reference/components/pyroscope.write.md b/docs/sources/flow/reference/components/pyroscope.write.md
index 38b6b542abc0..3012be03319c 100644
--- a/docs/sources/flow/reference/components/pyroscope.write.md
+++ b/docs/sources/flow/reference/components/pyroscope.write.md
@@ -168,11 +168,9 @@ pyroscope.scrape "default" {
- Components that consume [Pyroscope `ProfilesReceiver`]({{< relref "../compatibility/#pyroscope-profilesreceiver-consumers" >}})
-{{% admonition type="note" %}}
-
-Connecting some components may not be sensible or components may require further configuration to make the
-connection work correctly. Refer to the linked documentation for more details.
-
-{{% /admonition %}}
+{{< admonition type="note" >}}
+Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
+Refer to the linked documentation for more details.
+{{< /admonition >}}
\ No newline at end of file
diff --git a/docs/sources/flow/reference/config-blocks/http.md b/docs/sources/flow/reference/config-blocks/http.md
index 39ffa5b2502c..f90944c3ff59 100644
--- a/docs/sources/flow/reference/config-blocks/http.md
+++ b/docs/sources/flow/reference/config-blocks/http.md
@@ -50,12 +50,12 @@ tls > windows_certificate_filter > server | [server][] | Con
The `tls` block configures TLS settings for the HTTP server.
-{{% admonition type="warning" %}}
+{{< admonition type="warning" >}}
If you add the `tls` block and reload the configuration when {{< param "PRODUCT_NAME" >}} is running, existing connections will continue communicating over plaintext.
Similarly, if you remove the `tls` block and reload the configuration when {{< param "PRODUCT_NAME" >}} is running, existing connections will continue communicating over TLS.
To ensure all connections use TLS, configure the `tls` block before you start {{< param "PRODUCT_NAME" >}}.
-{{% /admonition %}}
+{{< /admonition >}}
Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
@@ -159,12 +159,12 @@ the following TLS settings are overridden and will cause an error if defined.
* `client_ca`
* `client_ca_file`
-{{% admonition type="warning" %}}
+{{< admonition type="warning" >}}
This feature is only available on Windows.
TLS min and max may not be compatible with the certificate stored in the Windows certificate store. The `windows_certificate_filter`
will serve the found certificate even if it is not compatible with the specified TLS version.
-{{% /admonition %}}
+{{< /admonition >}}
### server block
diff --git a/docs/sources/flow/release-notes.md b/docs/sources/flow/release-notes.md
index f8053bf3c0b3..baa91ae3d068 100644
--- a/docs/sources/flow/release-notes.md
+++ b/docs/sources/flow/release-notes.md
@@ -18,7 +18,7 @@ The release notes provide information about deprecations and breaking changes in
For a complete list of changes to {{< param "PRODUCT_ROOT_NAME" >}}, with links to pull requests and related issues when available, refer to the [Changelog](https://github.com/grafana/agent/blob/main/CHANGELOG.md).
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
These release notes are specific to {{< param "PRODUCT_NAME" >}}.
Other release notes for the different {{< param "PRODUCT_ROOT_NAME" >}} variants are contained on separate pages:
@@ -27,7 +27,19 @@ Other release notes for the different {{< param "PRODUCT_ROOT_NAME" >}} variants
[release-notes-static]: {{< relref "../static/release-notes.md" >}}
[release-notes-operator]: {{< relref "../operator/release-notes.md" >}}
-{{% /admonition %}}
+{{< /admonition >}}
+
+## v0.40
+
+### Breaking change: Prohibit the configuration of services within modules.
+
+Previously, it was possible to configure the HTTP service via the [HTTP config block](https://grafana.com/docs/agent/v0.39/flow/reference/config-blocks/http/) inside a module.
+This functionality is now only available in the main configuration.
+
+### Breaking change: Change the default value of `disable_high_cardinality_metrics` to `true`.
+
+The `disable_high_cardinality_metrics` configuration argument is used by `otelcol.exporter` components such as `otelcol.exporter.otlp`.
+If you need to see high-cardinality metrics containing labels such as IP addresses and port numbers, you must now explicitly set `disable_high_cardinality_metrics` to `false`.
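A sketch of restoring the pre-v0.40 behavior, assuming the argument lives in the component's `debug_metrics` block (endpoint is hypothetical):

```river
otelcol.exporter.otlp "default" {
  client {
    endpoint = "otlp.example.com:4317"
  }

  debug_metrics {
    // Opt back in to high-cardinality metrics after the default change.
    disable_high_cardinality_metrics = false
  }
}
```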
## v0.39
diff --git a/docs/sources/flow/tasks/configure/configure-macos.md b/docs/sources/flow/tasks/configure/configure-macos.md
index fc1c6677f579..8b860a010dcd 100644
--- a/docs/sources/flow/tasks/configure/configure-macos.md
+++ b/docs/sources/flow/tasks/configure/configure-macos.md
@@ -31,11 +31,11 @@ To configure {{< param "PRODUCT_NAME" >}} on macOS, perform the following steps:
## Configure the {{% param "PRODUCT_NAME" %}} service
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
Due to limitations in Homebrew, customizing the service used by
{{< param "PRODUCT_NAME" >}} on macOS requires changing the Homebrew formula and
reinstalling {{< param "PRODUCT_NAME" >}}.
-{{% /admonition %}}
+{{< /admonition >}}
To customize the {{< param "PRODUCT_NAME" >}} service on macOS, perform the following
steps:
diff --git a/docs/sources/flow/tasks/estimate-resource-usage.md b/docs/sources/flow/tasks/estimate-resource-usage.md
index e7b066d9e8ee..f3ed1b7aed05 100644
--- a/docs/sources/flow/tasks/estimate-resource-usage.md
+++ b/docs/sources/flow/tasks/estimate-resource-usage.md
@@ -4,7 +4,7 @@ aliases:
- /docs/grafana-cloud/agent/flow/tasks/estimate-resource-usage/
- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/estimate-resource-usage/
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/estimate-resource-usage/
- - /docs/grafana-cloud/send-data/agent/flow/tasks/estimate-resource-usage/
+ - /docs/grafana-cloud/send-data/agent/flow/tasks/estimate-resource-usage/
# Previous page aliases for backwards compatibility:
- /docs/agent/flow/monitoring/resource-usage/
- /docs/grafana-cloud/agent/flow/monitoring/resource-usage/
@@ -13,7 +13,7 @@ aliases:
- /docs/grafana-cloud/send-data/agent/flow/monitoring/resource-usage/
- ../monitoring/resource-usage/ # /docs/agent/latest/flow/monitoring/resource-usage/
canonical: https://grafana.com/docs/agent/latest/flow/monitoring/resource-usage/
-description: Estimate expected Agent resource usage
+description: Estimate expected Grafana Agent resource usage
headless: true
title: Estimate resource usage
menuTitle: Estimate resource usage
@@ -22,24 +22,22 @@ weight: 190
# Estimate {{% param "PRODUCT_NAME" %}} resource usage
-This page provides guidance for expected resource usage of
-{{% param "PRODUCT_NAME" %}} for each telemetry type, based on operational
-experience of some of the {{% param "PRODUCT_NAME" %}} maintainers.
+This page provides guidance for expected resource usage of
+{{< param "PRODUCT_NAME" >}} for each telemetry type, based on operational
+experience of some of the {{< param "PRODUCT_NAME" >}} maintainers.
-{{% admonition type="note" %}}
-
-The resource usage depends on the workload, hardware and the configuration used.
+{{< admonition type="note" >}}
+The resource usage depends on the workload, hardware, and the configuration used.
The information on this page is a good starting point for most users, but your
actual usage may be different.
-
-{{% /admonition %}}
+{{< /admonition >}}
## Prometheus metrics
The Prometheus metrics resource usage depends mainly on the number of active
series that need to be scraped and the scrape interval.
-As a rule of thumb, **per each 1 million active series** and with the default
+As a rule of thumb, **per each 1 million active series** and with the default
scrape interval, you can expect to use approximately:
* 0.4 CPU cores
@@ -48,8 +46,7 @@ scrape interval, you can expect to use approximately:
These recommendations are based on deployments that use [clustering][], but they
will broadly apply to other deployment modes. For more information on how to
-deploy {{% param "PRODUCT_NAME" %}}, see
-[deploying grafana agent][].
+deploy {{< param "PRODUCT_NAME" >}}, see [deploying grafana agent][].
[deploying grafana agent]: {{< relref "../get-started/deploy-agent.md" >}}
[clustering]: {{< relref "../concepts/clustering.md" >}}
@@ -67,7 +64,7 @@ to use approximately:
These recommendations are based on Kubernetes DaemonSet deployments on clusters
with relatively small number of nodes and high logs volume on each. The resource
usage can be higher per each 1 MiB/second of logs if you have a large number of
-small nodes due to the constant overhead of running the {{% param "PRODUCT_NAME" %}} on each node.
+small nodes due to the constant overhead of running {{< param "PRODUCT_NAME" >}} on each node.
Additionally, factors such as number of labels, number of files and average log
line length may all play a role in the resource usage.
diff --git a/docs/sources/flow/tasks/migrate/from-prometheus.md b/docs/sources/flow/tasks/migrate/from-prometheus.md
index 62fef82d3c2d..84241791ec24 100644
--- a/docs/sources/flow/tasks/migrate/from-prometheus.md
+++ b/docs/sources/flow/tasks/migrate/from-prometheus.md
@@ -71,10 +71,10 @@ This conversion will enable you to take full advantage of the many additional fe
1. If the `convert` command can't convert a Prometheus configuration, diagnostic information is sent to `stderr`.\
You can bypass any non-critical issues and output the {{< param "PRODUCT_NAME" >}} configuration using a best-effort conversion by including the `--bypass-errors` flag.
- {{% admonition type="caution" %}}
+ {{< admonition type="caution" >}}
If you bypass the errors, the behavior of the converted configuration may not match the original Prometheus configuration.
Make sure you fully test the converted configuration before using it in a production environment.
- {{% /admonition %}}
+ {{< /admonition >}}
{{< code >}}
@@ -143,10 +143,10 @@ Your configuration file must be a valid Prometheus configuration file rather tha
1. If your Prometheus configuration can't be converted and loaded directly into {{< param "PRODUCT_NAME" >}}, diagnostic information is sent to `stderr`.
You can bypass any non-critical issues and start the Agent by including the `--config.bypass-conversion-errors` flag in addition to `--config.format=prometheus`.
- {{% admonition type="caution" %}}
+ {{< admonition type="caution" >}}
If you bypass the errors, the behavior of the converted configuration may not match the original Prometheus configuration.
Do not use this flag in a production environment.
- {{% /admonition %}}
+ {{< /admonition >}}
## Example
diff --git a/docs/sources/flow/tasks/migrate/from-promtail.md b/docs/sources/flow/tasks/migrate/from-promtail.md
index 182dec857c3b..7a0dda9b9248 100644
--- a/docs/sources/flow/tasks/migrate/from-promtail.md
+++ b/docs/sources/flow/tasks/migrate/from-promtail.md
@@ -71,10 +71,10 @@ This conversion will enable you to take full advantage of the many additional fe
1. If the convert command can't convert a Promtail configuration, diagnostic information is sent to `stderr`.
You can bypass any non-critical issues and output the {{< param "PRODUCT_NAME" >}} configuration using a best-effort conversion by including the `--bypass-errors` flag.
- {{% admonition type="caution" %}}
+ {{< admonition type="caution" >}}
If you bypass the errors, the behavior of the converted configuration may not match the original Promtail configuration.
Make sure you fully test the converted configuration before using it in a production environment.
- {{% /admonition %}}
+ {{< /admonition >}}
{{< code >}}
@@ -139,10 +139,10 @@ Your configuration file must be a valid Promtail configuration file rather than
1. If your Promtail configuration can't be converted and loaded directly into {{< param "PRODUCT_ROOT_NAME" >}}, diagnostic information is sent to `stderr`.
You can bypass any non-critical issues and start {{< param "PRODUCT_ROOT_NAME" >}} by including the `--config.bypass-conversion-errors` flag in addition to `--config.format=promtail`.
- {{% admonition type="caution" %}}
+ {{< admonition type="caution" >}}
If you bypass the errors, the behavior of the converted configuration may not match the original Promtail configuration.
Do not use this flag in a production environment.
- {{%/admonition %}}
+ {{< /admonition >}}
## Example
@@ -213,7 +213,7 @@ After the configuration is converted, review the {{< param "PRODUCT_NAME" >}} co
The following list is specific to the convert command and not {{< param "PRODUCT_NAME" >}}:
* Check if you are using any extra command line arguments with Promtail that aren't present in your configuration file. For example, `-max-line-size`.
-* Check if you are setting any environment variables, whether [expanded in the config file][] itself or consumed directly by Promtail, such as `JAEGER_AGENT_HOST`.
+* Check if you are setting any environment variables, whether [expanded in the configuration file][] itself or consumed directly by Promtail, such as `JAEGER_AGENT_HOST`.
* In {{< param "PRODUCT_NAME" >}}, the positions file is saved at a different location.
Refer to the [loki.source.file][] documentation for more details.
Check if you have any existing setup, for example, a Kubernetes Persistent Volume, that you must update to use the new positions file path.
@@ -224,7 +224,7 @@ The following list is specific to the convert command and not {{< param "PRODUCT
[Promtail]: https://www.grafana.com/docs/loki/
-The above configuration defines three components:
+The preceding configuration defines three components:
 
 - `prometheus.scrape` - A component that scrapes metrics from components that export targets.
 - `prometheus.exporter.unix` - A component that exports metrics from the host, built around [node_exporter](https://github.com/prometheus/node_exporter).
@@ -180,7 +180,7 @@ The above configuration defines three components:
 The `prometheus.scrape` component references the `prometheus.exporter.unix` component's targets export, which is a list of scrape targets.
 The `prometheus.scrape` component then forwards the scraped metrics to the `prometheus.remote_write` component.
 
-One rule is that components cannot form a cycle. This means that a component cannot reference itself directly or indirectly. This is to prevent infinite loops from forming in the pipeline.
+One rule is that components can't form a cycle. This means that a component can't reference itself directly or indirectly. This is to prevent infinite loops from forming in the pipeline.
 
 ## Exercise for the reader
 
@@ -190,13 +190,13 @@ One rule is that components can't form a cycle.
 
 - Optional: [prometheus.exporter.redis][]
 
-Let's start a container running Redis and configure the agent to scrape metrics from it.
+Let's start a container running Redis and configure {{< param "PRODUCT_NAME" >}} to scrape metrics from it.
 
 ```bash
 docker container run -d --name flow-redis -p 6379:6379 --rm redis
 ```
 
-Try modifying the above pipeline to scrape metrics from the Redis exporter. You can refer to the [prometheus.exporter.redis][] component documentation for more information on how to configure it.
+Try modifying the pipeline to scrape metrics from the Redis exporter. You can refer to the [prometheus.exporter.redis][] component documentation for more information on how to configure it.
 To give a visual hint, you want to create a pipeline that looks like this:
 
@@ -204,19 +204,19 @@ To give a visual hint, you want to create a pipeline that looks like this:
 
-{{% admonition type="note" %}}
+{{< admonition type="note" >}}
 
 [concat]: https://grafana.com/docs/agent/
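The pipeline the tutorial exercise describes could be sketched as a River configuration along these lines. This is a minimal, untested sketch: the component labels and the remote-write endpoint URL are placeholders, not values from the source.

```river
// Optional Redis exporter; assumes Redis is reachable on localhost:6379
// (for example, the container started with `docker container run` above).
prometheus.exporter.redis "local_redis" {
  redis_addr = "localhost:6379"
}

// Host metrics exporter, built around node_exporter.
prometheus.exporter.unix "localhost" { }

// Scrape both exporters' targets and forward the samples onward.
prometheus.scrape "default" {
  targets = concat(
    prometheus.exporter.unix.localhost.targets,
    prometheus.exporter.redis.local_redis.targets,
  )
  forward_to = [prometheus.remote_write.local_prom.receiver]
}

// Remote-write receiver; the URL is a placeholder for your own endpoint.
prometheus.remote_write "local_prom" {
  endpoint {
    url = "http://localhost:9090/api/v1/write"
  }
}
```

Note how the dependency graph flows in one direction only (exporters into `prometheus.scrape`, then into `prometheus.remote_write`), which satisfies the no-cycles rule described above.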