Merge grafana/agent:main into grafana/alloy:main (2024-03-19)
rfratto authored Mar 19, 2024
2 parents 2c7d56a + 07d846c commit 4d5df55
Showing 137 changed files with 2,311 additions and 1,414 deletions.
25 changes: 23 additions & 2 deletions CHANGELOG.md
@@ -10,6 +10,11 @@ internal API changes are not present.
Main (unreleased)
-----------------

### Breaking changes

- The default listen port for `otelcol.receiver.opencensus` has changed from
4317 to 55678 to align with upstream. (@rfratto)
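
  To keep the previous behavior, the old port can be pinned explicitly. A minimal
  sketch (the downstream batch processor target is an assumption):

  ```river
  otelcol.receiver.opencensus "default" {
    // Pin the previous default port instead of relying on the new 55678 default.
    endpoint = "0.0.0.0:4317"

    output {
      traces = [otelcol.processor.batch.default.input] // assumed downstream component
    }
  }
  ```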

### Enhancements

- Add support for importing folders as a single module to `import.file`. (@wildum)
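
  As an illustration, pointing `filename` at a directory now imports the folder's
  files as one module (a sketch; the path is hypothetical):

  ```river
  import.file "my_modules" {
    // A directory path imports the folder's files as a single module.
    filename = "/etc/alloy/modules"
  }
  ```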
@@ -19,6 +24,9 @@ Main (unreleased)
- Improve converter diagnostic output by including a Footer and removing lower
level diagnostics when a configuration fails to generate. (@erikbaranowski)

- Increased the alert interval and renamed the `ClusterSplitBrain` alert to `ClusterNodeCountMismatch` in the Grafana
Agent Mixin to better match the alert conditions. (@thampiotr)

### Features

- Added a new CLI flag `--stability.level` which defines the minimum stability
@@ -30,14 +38,27 @@ Main (unreleased)

- Fix an issue where JSON string array elements were not parsed correctly in `loki.source.cloudflare`. (@thampiotr)

- Update `gcp_exporter` to a newer version with a patch for incorrect delta histograms. (@kgeckhart)

### Other changes

- Clustering for Grafana Agent in Flow mode has graduated from beta to stable.

- Resync defaults for `otelcol.processor.k8sattributes` with upstream. (@hainenber)

- Resync defaults for `otelcol.exporter.otlp` and `otelcol.exporter.otlphttp` with upstream. (@hainenber)

v0.40.3 (2024-03-14)
--------------------

### Bugfixes

- Fix a bug where structured metadata and parsed fields were not passed further in `loki.source.api`. (@marchellodev)
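
  For reference, a minimal `loki.source.api` pipeline affected by this fix
  (the listen port and downstream writer are assumptions):

  ```river
  loki.source.api "default" {
    http {
      listen_address = "0.0.0.0"
      listen_port    = 9999 // assumed port
    }

    // Structured metadata and parsed fields on received entries are forwarded again.
    forward_to = [loki.write.default.receiver] // assumed downstream component
  }
  ```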

- Change `import.git` to use Git pulls rather than fetches to fix scenarios where the local code did not get updated. (@mattdurham)
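
  A minimal `import.git` block that the new pull behavior applies to
  (the repository URL and path are hypothetical):

  ```river
  import.git "remote_module" {
    repository = "https://github.com/example/modules.git" // hypothetical repository
    revision   = "main"
    path       = "module.river"
  }
  ```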

### Other changes

- Clustering for Grafana Agent in Flow mode has graduated from beta to stable.

- Upgrade to Go 1.22.1. (@thampiotr)

v0.40.2 (2024-03-05)
4 changes: 2 additions & 2 deletions docs/developer/release/3-update-version-in-code.md
@@ -40,9 +40,9 @@ The project must be updated to reference the upcoming release tag whenever a new
- Stable Release example PR [here](https://github.com/grafana/agent/pull/3119)
- Patch Release example PR [here](https://github.com/grafana/agent/pull/3191)

4. Create a branch from `release/VERSION_PREFIX` for [grafana/agent](https://github.com/grafana/agent).
4. If one doesn't exist yet, create a branch called `release/VERSION_PREFIX` for [grafana/alloy](https://github.com/grafana/alloy).

5. Cherry pick the commit on main from the merged PR in Step 3 from main into the new branch from Step 4:
5. Cherry pick the commit on main from the merged PR in Step 3 from main into the branch from Step 4:

```
git cherry-pick -x COMMIT_SHA
```
2 changes: 1 addition & 1 deletion docs/developer/release/8-update-helm-charts.md
@@ -16,7 +16,7 @@ Our Helm charts require some version updates as well.

1. Copy the content of the last CRDs into helm-charts.

Copy the contents from agent repo `production/operator/crds/` to replace the contents of helm-charts repo `charts/agent-operator/crds`
Copy the contents from agent repo `operations/agent-static-operator/crds` to replace the contents of helm-charts repo `charts/agent-operator/crds`

2. Update references of agent-operator app version in helm-charts pointing to release version.

31 changes: 28 additions & 3 deletions docs/sources/get-started/install/kubernetes.md
@@ -32,22 +32,47 @@ To deploy {{< param "PRODUCT_ROOT_NAME" >}} on Kubernetes using Helm, run the following:
```shell
helm repo update
```

1. Create a namespace for {{< param "PRODUCT_NAME" >}}:

```shell
kubectl create namespace <NAMESPACE>
```

Replace the following:

- _`<NAMESPACE>`_: The namespace to use for your {{< param "PRODUCT_NAME" >}}
installation, such as `alloy`.

1. Install {{< param "PRODUCT_ROOT_NAME" >}}:

```shell
helm install --namespace <NAMESPACE> <RELEASE_NAME> grafana/grafana-alloy
```

Replace the following:

- _`<NAMESPACE>`_: The namespace created in the previous step.
- _`<RELEASE_NAME>`_: The name to use for your {{< param "PRODUCT_ROOT_NAME" >}} installation, such as `grafana-alloy`.

For more information on the {{< param "PRODUCT_ROOT_NAME" >}} Helm chart, refer to the Helm chart documentation on [Artifact Hub][].
1. Verify that the {{< param "PRODUCT_NAME" >}} pods are running:

```shell
kubectl get pods --namespace <NAMESPACE>
```

Replace the following:

- _`<NAMESPACE>`_: The namespace used in the previous step.

You have successfully deployed {{< param "PRODUCT_NAME" >}} on Kubernetes, using default Helm settings.
To configure {{< param "PRODUCT_NAME" >}}, see the [Configure {{< param "PRODUCT_NAME" >}} on Kubernetes][Configure] guide.

## Next steps

- [Configure {{< param "PRODUCT_NAME" >}}][Configure]

- Refer to the [{{< param "PRODUCT_NAME" >}} Helm chart documentation on Artifact Hub][Artifact Hub] for more information about the Helm chart.

[Helm]: https://helm.sh
[Artifact Hub]: https://artifacthub.io/packages/helm/grafana/grafana-alloy
[Configure]: ../../../tasks/configure/configure-kubernetes/
@@ -219,6 +219,10 @@ Name | Type | Description

The `exclude` block configures which pods to exclude from the processor.

{{< admonition type="note" >}}
Pods with the name `jaeger-agent` or `jaeger-collector` are excluded by default.
{{< /admonition >}}

### pod block

The `pod` block configures a pod to be excluded from the processor.
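
As a sketch of how these blocks fit together, the following excludes a hypothetical
pod by name (the pod name and downstream target are assumptions):

```river
otelcol.processor.k8sattributes "default" {
  exclude {
    pod {
      name = "my-sidecar" // hypothetical pod name to exclude
    }
  }

  output {
    traces = [otelcol.exporter.otlp.default.input] // assumed downstream component
  }
}
```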
94 changes: 47 additions & 47 deletions docs/sources/reference/components/otelcol.receiver.opencensus.md
@@ -38,7 +38,7 @@ otelcol.receiver.opencensus "LABEL" {
Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
`cors_allowed_origins` | `list(string)` | A list of allowed Cross-Origin Resource Sharing (CORS) origins. | | no
`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:4317"` | no
`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:55678"` | no
`transport` | `string` | Transport to use for the gRPC server. | `"tcp"` | no
`max_recv_msg_size` | `string` | Maximum size of messages the server will accept. 0 disables a limit. | | no
`max_concurrent_streams` | `number` | Limit the number of concurrent streaming RPC calls. | | no
@@ -54,7 +54,7 @@ The "endpoint" parameter is the same for both gRPC and HTTP/JSON, as the protocol

To write traces with HTTP/JSON, `POST` to `[address]/v1/trace`. The JSON message format parallels the gRPC protobuf format. For details, refer to its [OpenApi specification](https://github.com/census-instrumentation/opencensus-proto/blob/master/gen-openapi/opencensus/proto/agent/trace/v1/trace_service.swagger.json).

Note that `max_recv_msg_size`, `read_buffer_size` and `write_buffer_size` are expressed as strings that include the units, such as `"512KiB"` or `"1024KB"`.

## Blocks
@@ -153,56 +153,56 @@ finally sending it to an OTLP-capable endpoint:

```river
otelcol.receiver.opencensus "default" {
cors_allowed_origins = ["https://*.test.com", "https://test.com"]
endpoint = "0.0.0.0:9090"
transport = "tcp"
max_recv_msg_size = "32KB"
max_concurrent_streams = "16"
read_buffer_size = "1024KB"
write_buffer_size = "1024KB"
include_metadata = true
tls {
cert_file = "test.crt"
key_file = "test.key"
}
keepalive {
server_parameters {
max_connection_idle = "11s"
max_connection_age = "12s"
max_connection_age_grace = "13s"
time = "30s"
timeout = "5s"
}
enforcement_policy {
min_time = "10s"
permit_without_stream = true
}
}
output {
metrics = [otelcol.processor.batch.default.input]
logs = [otelcol.processor.batch.default.input]
traces = [otelcol.processor.batch.default.input]
}
cors_allowed_origins = ["https://*.test.com", "https://test.com"]
endpoint = "0.0.0.0:9090"
transport = "tcp"
max_recv_msg_size = "32KB"
max_concurrent_streams = "16"
read_buffer_size = "1024KB"
write_buffer_size = "1024KB"
include_metadata = true
tls {
cert_file = "test.crt"
key_file = "test.key"
}
keepalive {
server_parameters {
max_connection_idle = "11s"
max_connection_age = "12s"
max_connection_age_grace = "13s"
time = "30s"
timeout = "5s"
}
enforcement_policy {
min_time = "10s"
permit_without_stream = true
}
}
output {
metrics = [otelcol.processor.batch.default.input]
logs = [otelcol.processor.batch.default.input]
traces = [otelcol.processor.batch.default.input]
}
}
otelcol.processor.batch "default" {
output {
metrics = [otelcol.exporter.otlp.default.input]
logs = [otelcol.exporter.otlp.default.input]
traces = [otelcol.exporter.otlp.default.input]
}
output {
metrics = [otelcol.exporter.otlp.default.input]
logs = [otelcol.exporter.otlp.default.input]
traces = [otelcol.exporter.otlp.default.input]
}
}
otelcol.exporter.otlp "default" {
client {
endpoint = env("OTLP_ENDPOINT")
}
client {
endpoint = env("OTLP_ENDPOINT")
}
}
```
<!-- START GENERATED COMPATIBLE COMPONENTS -->
@@ -219,4 +219,4 @@ Connecting some components may not be sensible or components may require further
Refer to the linked documentation for more details.
{{< /admonition >}}

<!-- END GENERATED COMPATIBLE COMPONENTS -->
6 changes: 4 additions & 2 deletions docs/sources/reference/components/prometheus.exporter.unix.md
@@ -130,6 +130,8 @@ The following blocks are supported inside the definition of

### filesystem block

The default values can vary by the operating system the agent runs on. Refer to the [integration source](https://github.com/grafana/agent/blob/main/internal/static/integrations/node_exporter/config.go) for up-to-date values on each OS.

| Name | Type | Description | Default | Required |
| ---------------------- | ---------- | ------------------------------------------------------------------- | ----------------------------------------------- | -------- |
| `fs_types_exclude` | `string` | Regexp of filesystem types to ignore for filesystem collector. | (_see below_ ) | no |
@@ -139,7 +141,7 @@ The following blocks are supported inside the definition of
`fs_types_exclude` defaults to the following regular expression string:

```
^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
```
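
As an illustration, a hedged override of the default (the regular expression shown
is an example, not a recommended value):

```river
prometheus.exporter.unix "default" {
  filesystem {
    // Example override: ignore only autofs, tmpfs, and NFS mounts.
    fs_types_exclude = "^(autofs|tmpfs|nfs4?)$"
  }
}
```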

### ipvs block
Expand Down Expand Up @@ -183,7 +185,7 @@ The following blocks are supported inside the definition of
`fields` defaults to the following regular expression string:

```
"^(.*_(InErrors\|InErrs)\|Ip_Forwarding\|Ip(6\|Ext)_(InOctets\|OutOctets)\|Icmp6?_(InMsgs\|OutMsgs)\|TcpExt_(Listen.*\|Syncookies.*\|TCPSynRetrans\|TCPTimeouts)\|Tcp_(ActiveOpens\|InSegs\|OutSegs\|OutRsts\|PassiveOpens\|RetransSegs\|CurrEstab)\|Udp6?_(InDatagrams\|OutDatagrams\|NoPorts\|RcvbufErrors\|SndbufErrors))$"
"^(.*_(InErrors|InErrs)|Ip_Forwarding|Ip(6|Ext)_(InOctets|OutOctets)|Icmp6?_(InMsgs|OutMsgs)|TcpExt_(Listen.*|Syncookies.*|TCPSynRetrans|TCPTimeouts)|Tcp_(ActiveOpens|InSegs|OutSegs|OutRsts|PassiveOpens|RetransSegs|CurrEstab)|Udp6?_(InDatagrams|OutDatagrams|NoPorts|RcvbufErrors|SndbufErrors))$"
```

### perf block
@@ -10,14 +10,14 @@ Name | Type | Description
----------------|-----------|----------------------------------------------------------------------------|---------|---------
`enabled` | `boolean` | Enables an in-memory buffer before sending data to the client. | `true` | no
`num_consumers` | `number` | Number of readers to send batches written to the queue in parallel. | `10` | no
`queue_size` | `number` | Maximum number of unwritten batches allowed in the queue at the same time. | `5000` | no
`queue_size` | `number` | Maximum number of unwritten batches allowed in the queue at the same time. | `1000` | no

When `enabled` is `true`, data is first written to an in-memory buffer before sending it to the configured server.
Batches sent to the component's `input` exported field are added to the buffer as long as the number of unsent batches doesn't exceed the configured `queue_size`.

`queue_size` determines how long an endpoint outage is tolerated.
Assuming 100 requests/second, the default queue size `5000` provides about 50 seconds of outage tolerance.
To calculate the correct value for `queue_size`, multiply the average number of outgoing requests per second by the time in seconds that outages are tolerated.
Assuming 100 requests/second, the default queue size `1000` provides about 10 seconds of outage tolerance.
To calculate the correct value for `queue_size`, multiply the average number of outgoing requests per second by the time in seconds that outages are tolerated. A very high value can cause Out Of Memory (OOM) kills.

The `num_consumers` argument controls how many readers read from the buffer and send data in parallel.
Larger values of `num_consumers` allow data to be sent more quickly at the expense of increased network traffic.
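
Applying that calculation, the following sketch sizes the queue to survive roughly a
one-minute outage at about 50 requests/second, shown here on `otelcol.exporter.otlp`,
one component that exposes this block (the endpoint is a placeholder):

```river
otelcol.exporter.otlp "default" {
  client {
    endpoint = "collector.example.com:4317" // placeholder endpoint
  }

  sending_queue {
    enabled       = true
    num_consumers = 10
    // 50 requests/s * 60 s of tolerated outage = 3000 batches.
    queue_size    = 3000
  }
}
```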