diff --git a/docs/sources/flow/reference/components/loki.relabel.md b/docs/sources/flow/reference/components/loki.relabel.md index 14425715d3b2..a85239666c32 100644 --- a/docs/sources/flow/reference/components/loki.relabel.md +++ b/docs/sources/flow/reference/components/loki.relabel.md @@ -11,26 +11,19 @@ title: loki.relabel # loki.relabel -The `loki.relabel` component rewrites the label set of each log entry passed to -its receiver by applying one or more relabeling `rule`s and forwards the -results to the list of receivers in the component's arguments. +The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling `rule`s and forwards the results to the list of receivers in the component's arguments. -If no labels remain after the relabeling rules are applied, then the log -entries are dropped. +If no labels remain after the relabeling rules are applied, then the log entries are dropped. -The most common use of `loki.relabel` is to filter log entries or standardize -the label set that is passed to one or more downstream receivers. The `rule` -blocks are applied to the label set of each log entry in order of their -appearance in the configuration file. The configured rules can be retrieved by -calling the function in the `rules` export field. +The most common use of `loki.relabel` is to filter log entries or standardize the label set that is passed to one or more downstream receivers. +The `rule` blocks are applied to the label set of each log entry in order of their appearance in the configuration file. +The configured rules can be retrieved by calling the function in the `rules` export field. -If you're looking for a way to process the log entry contents, take a look at -[the `loki.process` component][loki.process] instead. +If you're looking for a way to process the log entry contents, take a look at [the `loki.process` component][loki.process] instead. 
[loki.process]: {{< relref "./loki.process.md" >}} -Multiple `loki.relabel` components can be specified by giving them -different labels. +Multiple `loki.relabel` components can be specified by giving them different labels. ## Usage @@ -50,32 +43,32 @@ loki.relabel "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`forward_to` | `list(receiver)` | Where to forward log entries after relabeling. | | yes -`max_cache_size` | `int` | The maximum number of elements to hold in the relabeling cache | 10,000 | no +Name | Type | Description | Default | Required +-----------------|------------------|----------------------------------------------------------------|---------|--------- +`forward_to` | `list(receiver)` | Where to forward log entries after relabeling. | | yes +`max_cache_size` | `int` | The maximum number of elements to hold in the relabeling cache | 10,000 | no ## Blocks The following blocks are supported inside the definition of `loki.relabel`: -Hierarchy | Name | Description | Required ---------- | ---- | ----------- | -------- -rule | [rule][] | Relabeling rules to apply to received log entries. | no +Hierarchy | Name | Description | Required +----------|----------|----------------------------------------------------|--------- +rule | [rule][] | Relabeling rules to apply to received log entries. | no [rule]: #rule-block -### rule block +### rule -{{< docs/shared lookup="flow/reference/components/rule-block-logs.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/rule-block-logs.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`receiver` | `receiver` | The input receiver where log lines are sent to be relabeled. 
+Name | Type | Description +-----------|----------------|------------------------------------------------------------- +`receiver` | `receiver` | The input receiver where log lines are sent to be relabeled. `rules` | `RelabelRules` | The currently configured relabeling rules. ## Component health @@ -89,16 +82,15 @@ In those cases, exported fields are kept at their last healthy values. ## Debug metrics -* `loki_relabel_entries_processed` (counter): Total number of log entries processed. -* `loki_relabel_entries_written` (counter): Total number of log entries forwarded. -* `loki_relabel_cache_misses` (counter): Total number of cache misses. * `loki_relabel_cache_hits` (counter): Total number of cache hits. +* `loki_relabel_cache_misses` (counter): Total number of cache misses. * `loki_relabel_cache_size` (gauge): Total size of relabel cache. +* `loki_relabel_entries_processed` (counter): Total number of log entries processed. +* `loki_relabel_entries_written` (counter): Total number of log entries forwarded. ## Example -The following example creates a `loki.relabel` component that only forwards -entries whose 'level' value is set to 'error'. +The following example creates a `loki.relabel` component that only forwards entries whose 'level' value is set to 'error'. ```river loki.relabel "keep_error_only" { @@ -111,4 +103,3 @@ loki.relabel "keep_error_only" { } } ``` - diff --git a/docs/sources/flow/reference/components/loki.source.api.md b/docs/sources/flow/reference/components/loki.source.api.md index 966589bd64a1..2f29f507ec81 100644 --- a/docs/sources/flow/reference/components/loki.source.api.md +++ b/docs/sources/flow/reference/components/loki.source.api.md @@ -13,7 +13,8 @@ title: loki.source.api `loki.source.api` receives log entries over HTTP and forwards them to other `loki.*` components. -The HTTP API exposed is compatible with [Loki push API][loki-push-api] and the `logproto` format. 
This means that other [`loki.write`][loki.write] components can be used as a client and send requests to `loki.source.api` which enables using the Agent as a proxy for logs. +The HTTP API exposed is compatible with [Loki push API][loki-push-api] and the `logproto` format. +This means that other [`loki.write`][loki.write] components can be used as a client and send requests to `loki.source.api`, which enables using the Agent as a proxy for logs. [loki.write]: {{< relref "./loki.write.md" >}} [loki-push-api]: https://grafana.com/docs/loki/latest/api/#push-log-entries-to-loki @@ -24,7 +25,7 @@ The HTTP API exposed is compatible with [Loki push API][loki-push-api] and the ` loki.source.api "LABEL" { http { listen_address = "LISTEN_ADDRESS" - listen_port = PORT + listen_port = PORT } forward_to = RECEIVER_LIST } @@ -33,9 +34,9 @@ loki.source.api "LABEL" { The component will start HTTP server on the configured port and address with the following endpoints: - `/loki/api/v1/push` - accepting `POST` requests compatible with [Loki push API][loki-push-api], for example, from another Grafana Agent's [`loki.write`][loki.write] component. -- `/loki/api/v1/raw` - accepting `POST` requests with newline-delimited log lines in body. This can be used to send NDJSON or plaintext logs. This is compatible with promtail's push API endpoint - see [promtail's documentation][promtail-push-api] for more information. NOTE: when this endpoint is used, the incoming timestamps cannot be used and the `use_incoming_timestamp = true` setting will be ignored. +- `/loki/api/v1/raw` - accepting `POST` requests with newline-delimited log lines in the request body. This can be used to send NDJSON or plaintext logs. This is compatible with Promtail's push API endpoint - see [Promtail's documentation][promtail-push-api] for more information. NOTE: when this endpoint is used, the incoming timestamps cannot be used and the `use_incoming_timestamp = true` setting will be ignored. 
- `/loki/ready` - accepting `GET` requests - can be used to confirm the server is reachable and healthy. -- `/api/v1/push` - internally reroutes to `/loki/api/v1/push` +- `/api/v1/push` - internally reroutes to `/loki/api/v1/push` - `/api/v1/raw` - internally reroutes to `/loki/api/v1/raw` @@ -45,15 +46,14 @@ The component will start HTTP server on the configured port and address with the `loki.source.api` supports the following arguments: - Name | Type | Description | Default | Required ---------------------------|----------------------|------------------------------------------------------------|---------|---------- - `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes - `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from request. | `false` | no - `labels` | `map(string)` | The labels to associate with each received logs record. | `{}` | no - `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no +Name | Type | Description | Default | Required +-------------------------|----------------------|------------------------------------------------------------|---------|--------- +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +`labels` | `map(string)` | The labels to associate with each received logs record. | `{}` | no +`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no +`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from request. | `false` | no -The `relabel_rules` field can make use of the `rules` export value from a -[`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. 
+The `relabel_rules` field can make use of the `rules` export value from a [`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. [loki.relabel]: {{< relref "./loki.relabel.md" >}} @@ -61,19 +61,19 @@ The `relabel_rules` field can make use of the `rules` export value from a The following blocks are supported inside the definition of `loki.source.api`: - Hierarchy | Name | Description | Required ------------|----------|----------------------------------------------------|---------- - `http` | [http][] | Configures the HTTP server that receives requests. | no +Hierarchy | Name | Description | Required +----------|----------|----------------------------------------------------|--------- +`http` | [http][] | Configures the HTTP server that receives requests. | no [http]: #http ### http -{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} ## Exported fields -`loki.source.api` does not export any fields. +`loki.source.api` doesn't export any fields. ## Component health @@ -81,7 +81,8 @@ The following blocks are supported inside the definition of `loki.source.api`: ## Debug metrics -The following are some of the metrics that are exposed when this component is used. Note that the metrics include labels such as `status_code` where relevant, which can be used to measure request success rates. +The following are some of the metrics that are exposed when this component is used. +Note that the metrics include labels such as `status_code` where relevant, which can be used to measure request success rates. * `loki_source_api_request_duration_seconds` (histogram): Time (in seconds) spent serving HTTP requests. * `loki_source_api_request_message_bytes` (histogram): Size (in bytes) of messages received in the request. 
@@ -90,7 +91,9 @@ The following are some of the metrics that are exposed when this component is us ## Example -This example starts an HTTP server on `0.0.0.0` address and port `9999`. The server receives log entries and forwards them to a `loki.write` component while adding a `forwarded="true"` label. The `loki.write` component will send the logs to the specified loki instance using basic auth credentials provided. +This example starts an HTTP server on `0.0.0.0` address and port `9999`. +The server receives log entries and forwards them to a `loki.write` component while adding a `forwarded="true"` label. +The `loki.write` component will send the logs to the specified Loki instance using the basic auth credentials provided. ```river loki.write "local" { @@ -116,4 +119,3 @@ loki.source.api "loki_push_api" { } } ``` - diff --git a/docs/sources/flow/reference/components/loki.source.awsfirehose.md b/docs/sources/flow/reference/components/loki.source.awsfirehose.md index b080adcaced9..800eb4739787 100644 --- a/docs/sources/flow/reference/components/loki.source.awsfirehose.md +++ b/docs/sources/flow/reference/components/loki.source.awsfirehose.md @@ -11,46 +11,42 @@ title: loki.source.awsfirehose # loki.source.awsfirehose -`loki.source.awsfirehose` receives log entries over HTTP -from [AWS Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) -and forwards them to other `loki.*` components. +`loki.source.awsfirehose` receives log entries over HTTP from [AWS Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) and forwards them to other `loki.*` components. -The HTTP API exposed is compatible -with the [Firehose HTTP Delivery API](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html). 
-Since the API model that AWS Firehose uses to deliver data over HTTP is generic enough, the same component can be used -to receive data from multiple origins: +The HTTP API exposed is compatible with the [Firehose HTTP Delivery API](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html). +Since the API model that AWS Firehose uses to deliver data over HTTP is generic enough, the same component can be used to receive data from multiple origins: - [AWS CloudWatch logs](https://docs.aws.amazon.com/firehose/latest/dev/writing-with-cloudwatch-logs.html) - [AWS CloudWatch events](https://docs.aws.amazon.com/firehose/latest/dev/writing-with-cloudwatch-events.html) - Custom data through [DirectPUT requests](https://docs.aws.amazon.com/firehose/latest/dev/writing-with-sdk.html) -The component uses a heuristic to try to decode as much information as possible from each log record, and it falls back to writing -the raw records to Loki. The decoding process goes as follows: +The component uses a heuristic to try to decode as much information as possible from each log record, and it falls back to writing the raw records to Loki. +The decoding process goes as follows: -- AWS Firehose sends batched requests -- Each record is treated individually +- AWS Firehose sends batched requests. +- Each record is treated individually. - For each `record` received in each request: - - If the `record` comes from a [CloudWatch logs subscription filter](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#DestinationKinesisExample), it is decoded and each logging event is written to Loki - - All other records are written raw to Loki + - If the `record` comes from a [CloudWatch logs subscription filter](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#DestinationKinesisExample), it is decoded and each logging event is written to Loki. + - All other records are written raw to Loki. 
-The component exposes some internal labels, available for relabeling. The following tables describes internal labels available -in records coming from any source. +The component exposes some internal labels, available for relabeling. +The following table describes internal labels available in records coming from any source. -| Name | Description | Example | -|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------| -| `__aws_firehose_request_id` | Firehose request ID. | `a1af4300-6c09-4916-ba8f-12f336176246` | -| `__aws_firehose_source_arn` | Firehose delivery stream ARN. | `arn:aws:firehose:us-east-2:123:deliverystream/aws_firehose_test_stream` | +| Name | Description | Example | +|-----------------------------|-------------------------------|--------------------------------------------------------------------------| +| `__aws_firehose_request_id` | Firehose request ID. | `a1af4300-6c09-4916-ba8f-12f336176246` | +| `__aws_firehose_source_arn` | Firehose delivery stream ARN. | `arn:aws:firehose:us-east-2:123:deliverystream/aws_firehose_test_stream` | If the source of the Firehose record is CloudWatch logs, the request is further decoded and enriched with even more labels, exposed as follows: -| Name | Description | Example | -|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------| -| `__aws_owner` | The AWS Account ID of the originating log data. | `111111111111` | -| `__aws_cw_log_group` | The log group name of the originating log data. 
| `CloudTrail/logs` | -| `__aws_cw_log_stream` | The log stream name of the originating log data. | `111111111111_CloudTrail/logs_us-east-1` | -| `__aws_cw_matched_filters` | The list of subscription filter names that match the originating log data. The list is encoded as a comma-separated list. | `Destination,Destination2` | -| `__aws_cw_msg_type` | Data messages will use the `DATA_MESSAGE` type. Sometimes CloudWatch Logs may emit Kinesis Data Streams records with a `CONTROL_MESSAGE` type, mainly for checking if the destination is reachable. | `DATA_MESSAGE` | +| Name | Description | Example | +|----------------------------|--------------------------------------------------|------------------------------------------| +| `__aws_owner` | The AWS Account ID of the originating log data. | `111111111111` | +| `__aws_cw_log_group` | The log group name of the originating log data. | `CloudTrail/logs` | +| `__aws_cw_log_stream` | The log stream name of the originating log data. | `111111111111_CloudTrail/logs_us-east-1` | +| `__aws_cw_matched_filters` | The list of subscription filter names that match the originating log data. The list is encoded as a comma-separated list. | `Destination,Destination2` | +| `__aws_cw_msg_type` | Data messages will use the `DATA_MESSAGE` type. Sometimes CloudWatch Logs may emit Kinesis Data Streams records with a `CONTROL_MESSAGE` type, mainly for checking if the destination is reachable. | `DATA_MESSAGE` | See [Examples](#example) for a full example configuration showing how to enrich each log entry with these labels. 
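As a hedged sketch of how one of these internal labels can be promoted to a regular, queryable label (the component label and the target label name here are illustrative, not part of the original examples):

```river
loki.relabel "firehose_internal" {
  // No receivers needed; this component is only used for its exported rules.
  forward_to = []

  // Promote the Firehose delivery stream ARN to a regular label.
  rule {
    source_labels = ["__aws_firehose_source_arn"]
    target_label  = "firehose_arn"
  }
}
```

The exported `loki.relabel.firehose_internal.rules` value can then be passed to the `relabel_rules` argument described below.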
@@ -60,7 +56,7 @@ See [Examples](#example) for a full example configuration showing how to enrich loki.source.awsfirehose "LABEL" { http { listen_address = "LISTEN_ADDRESS" - listen_port = PORT + listen_port = PORT } forward_to = RECEIVER_LIST } @@ -68,22 +64,19 @@ loki.source.awsfirehose "LABEL" { The component will start an HTTP server on the configured port and address with the following endpoints: -- `/awsfirehose/api/v1/push` - accepting `POST` requests compatible - with [AWS Firehose HTTP Specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html). +- `/awsfirehose/api/v1/push` - accepting `POST` requests compatible with [AWS Firehose HTTP Specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html). ## Arguments `loki.source.awsfirehose` supports the following arguments: -| Name | Type | Description | Default | Required | - |--------------------------|----------------------|------------------------------------------------------------|---------|----------| -| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| Name | Type | Description | Default | Required | +|--------------------------|----------------------|----------------------------------------------------------------|---------|----------| +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no | | `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from the request. | `false` | no | -| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no | -The `relabel_rules` field can make use of the `rules` export value from a -[`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded -to the list of receivers in `forward_to`. 
+The `relabel_rules` field can make use of the `rules` export value from a [`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. [loki.relabel]: {{< relref "./loki.relabel.md" >}} @@ -92,25 +85,25 @@ to the list of receivers in `forward_to`. The following blocks are supported inside the definition of `loki.source.awsfirehose`: | Hierarchy | Name | Description | Required | - |-----------|----------|----------------------------------------------------|----------| -| `http` | [http][] | Configures the HTTP server that receives requests. | no | +|-----------|----------|----------------------------------------------------|----------| | `grpc` | [grpc][] | Configures the gRPC server that receives requests. | no | +| `http` | [http][] | Configures the HTTP server that receives requests. | no | [http]: #http [grpc]: #grpc -### http +### grpc -{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}} -### grpc +### http -{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} ## Exported fields -`loki.source.awsfirehose` does not export any fields. +`loki.source.awsfirehose` doesn't export any fields. ## Component health @@ -118,21 +111,22 @@ The following blocks are supported inside the definition of `loki.source.awsfire ## Debug metrics -The following are some of the metrics that are exposed when this component is used. +The following are some of the metrics that are exposed when this component is used. + {{% admonition type="note" %}} The metrics include labels such as `status_code` where relevant, which you can use to measure request success rates. 
{{%/admonition %}} -- `loki_source_awsfirehose_request_errors` (counter): Count of errors while receiving a request. +- `loki_source_awsfirehose_batch_size` (histogram): Size (in units) of the number of records received per request. - `loki_source_awsfirehose_record_errors` (counter): Count of errors while decoding an individual record. - `loki_source_awsfirehose_records_received` (counter): Count of records received. -- `loki_source_awsfirehose_batch_size` (histogram): Size (in units) of the number of records received per request. +- `loki_source_awsfirehose_request_errors` (counter): Count of errors while receiving a request. ## Example -This example starts an HTTP server on `0.0.0.0` address and port `9999`. The server receives log entries and forwards -them to a `loki.write` component. The `loki.write` component will send the logs to the specified loki instance using -basic auth credentials provided. +This example starts an HTTP server on `0.0.0.0` address and port `9999`. +The server receives log entries and forwards them to a `loki.write` component. +The `loki.write` component will send the logs to the specified Loki instance using the basic auth credentials provided. ```river loki.write "local" { @@ -156,9 +150,8 @@ loki.source.awsfirehose "loki_fh_receiver" { } ``` -As another example, if you are receiving records that originated from a CloudWatch logs subscription, you can enrich each -received entry by relabeling internal labels. The following configuration builds upon the one above but keeps the origin -log stream and group as `log_stream` and `log_group`, respectively. +As another example, if you are receiving records that originated from a CloudWatch logs subscription, you can enrich each received entry by relabeling internal labels. +The following configuration builds upon the one above but keeps the origin log stream and group as `log_stream` and `log_group`, respectively. 
```river loki.write "local" { diff --git a/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md b/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md index a90320e069ef..c2ff92ad1747 100644 --- a/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md +++ b/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md @@ -42,25 +42,23 @@ loki.source.azure_event_hubs "LABEL" { `loki.source.azure_event_hubs` supports the following arguments: - Name | Type | Description | Default | Required ------------------------------|----------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------|---------- - `fully_qualified_namespace` | `string` | Event hub namespace. | | yes - `event_hubs` | `list(string)` | Event Hubs to consume. | | yes - `group_id` | `string` | The Kafka consumer group id. | `"loki.source.azure_event_hubs"` | no - `assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no - `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Azure Event Hub. | `false` | no - `labels` | `map(string)` | The labels to associate with each received event. | `{}` | no - `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes - `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no - `disallow_custom_messages` | `bool` | Whether to ignore messages that don't match the [schema](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/resource-logs-schema) for Azure resource logs. | `false` | no - `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. 
| `{}` | no +Name | Type | Description | Default | Required +----------------------------|----------------------|--------------------------------------------------------------------|----------------------------------|--------- +`fully_qualified_namespace` | `string` | Event hub namespace. | | yes +`event_hubs` | `list(string)` | Event Hubs to consume. | | yes +`group_id` | `string` | The Kafka consumer group id. | `"loki.source.azure_event_hubs"` | no +`assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no +`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Azure Event Hub. | `false` | no +`labels` | `map(string)` | The labels to associate with each received event. | `{}` | no +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no +`disallow_custom_messages` | `bool` | Whether to ignore messages that don't match the [schema](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/resource-logs-schema) for Azure resource logs. | `false` | no The `fully_qualified_namespace` argument must refer to a full `HOST:PORT` that points to your event hub, such as `NAMESPACE.servicebus.windows.net:9093`. The `assignor` argument must be set to one of `"range"`, `"roundrobin"`, or `"sticky"`. -The `relabel_rules` field can make use of the `rules` export value from a -`loki.relabel` component to apply one or more relabeling rules to log entries -before they're forwarded to the list of receivers in `forward_to`. +The `relabel_rules` field can make use of the `rules` export value from a `loki.relabel` component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. 
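A minimal sketch of how `relabel_rules` composes with a `loki.relabel` component (the component labels, the drop rule, and the `loki.write.example` reference are illustrative placeholders):

```river
loki.relabel "filter" {
  // No receivers needed; only the exported rules are used.
  forward_to = []

  // Drop entries whose "level" label is "debug".
  rule {
    source_labels = ["level"]
    regex         = "debug"
    action        = "drop"
  }
}

loki.source.azure_event_hubs "example" {
  fully_qualified_namespace = "NAMESPACE.servicebus.windows.net:9093"
  event_hubs                = ["EVENT_HUB_NAME"]
  forward_to                = [loki.write.example.receiver]
  relabel_rules             = loki.relabel.filter.rules

  authentication {
    mechanism = "oauth"
  }
}
```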
### Labels @@ -68,42 +66,40 @@ The `labels` map is applied to every message that the component reads. The following internal labels prefixed with `__` are available but are discarded if not relabeled: +- `__azure_event_hubs_category` +- `__meta_kafka_group_id` +- `__meta_kafka_member_id` - `__meta_kafka_message_key` -- `__meta_kafka_topic` - `__meta_kafka_partition` -- `__meta_kafka_member_id` -- `__meta_kafka_group_id` -- `__azure_event_hubs_category` +- `__meta_kafka_topic` ## Blocks The following blocks are supported inside the definition of `loki.source.azure_event_hubs`: - Hierarchy | Name | Description | Required -----------------|------------------|----------------------------------------------------|---------- - authentication | [authentication] | Authentication configuration with Azure Event Hub. | yes +Hierarchy | Name | Description | Required +---------------|------------------|----------------------------------------------------|--------- +authentication | [authentication] | Authentication configuration with Azure Event Hub. | yes [authentication]: #authentication-block -### authentication block +### authentication The `authentication` block defines the authentication method when communicating with Azure Event Hub. - Name | Type | Description | Default | Required ----------------------|----------------|---------------------------------------------------------------------------|---------|---------- - `mechanism` | `string` | Authentication mechanism. | | yes - `connection_string` | `string` | Event Hubs ConnectionString for authentication on Azure Cloud. | | no - `scopes` | `list(string)` | Access token scopes. Default is `fully_qualified_namespace` without port. | | no +Name | Type | Description | Default | Required +--------------------|----------------|---------------------------------------------------------------------------|---------|--------- +`mechanism` | `string` | Authentication mechanism. 
| | yes +`connection_string` | `string` | Event Hubs ConnectionString for authentication on Azure Cloud. | | no +`scopes` | `list(string)` | Access token scopes. Default is `fully_qualified_namespace` without port. | | no -`mechanism` supports the values `"connection_string"` and `"oauth"`. If `"connection_string"` is used, -you must set the `connection_string` attribute. If `"oauth"` is used, you must configure one of the supported credential -types as documented -here: https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azidentity/README.md#credential-types via environment -variables or Azure CLI. +`mechanism` supports the values `"connection_string"` and `"oauth"`. +If `"connection_string"` is used, you must set the `connection_string` attribute. +If `"oauth"` is used, you must configure one of the [supported credential types](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azidentity/README.md#credential-types) via environment variables or the Azure CLI. ## Exported fields -`loki.source.azure_event_hubs` does not export any fields. +`loki.source.azure_event_hubs` doesn't export any fields. ## Component health @@ -112,7 +108,7 @@ configuration. ## Debug information -`loki.source.azure_event_hubs` does not expose additional debug info. +`loki.source.azure_event_hubs` doesn't expose additional debug info. ## Example @@ -134,4 +130,4 @@ loki.write "example" { url = "loki:3100/api/v1/push" } } -``` \ No newline at end of file +``` diff --git a/docs/sources/flow/reference/components/loki.source.cloudflare.md b/docs/sources/flow/reference/components/loki.source.cloudflare.md index 33d1bf0015a5..9b0f8d453c9f 100644 --- a/docs/sources/flow/reference/components/loki.source.cloudflare.md +++ b/docs/sources/flow/reference/components/loki.source.cloudflare.md @@ -11,12 +11,9 @@ title: loki.source.cloudflare # loki.source.cloudflare -`loki.source.cloudflare` pulls logs from the Cloudflare Logpull API and -forwards them to other `loki.*` components. 
+`loki.source.cloudflare` pulls logs from the Cloudflare Logpull API and forwards them to other `loki.*` components. -These logs contain data related to the connecting client, the request path -through the Cloudflare network, and the response from the origin web server and -can be useful for enriching existing logs on an origin server. +These logs contain data related to the connecting client, the request path through the Cloudflare network, and the response from the origin web server and can be useful for enriching existing logs on an origin server. Multiple `loki.source.cloudflare` components can be specified by giving them different labels. @@ -36,65 +33,60 @@ loki.source.cloudflare "LABEL" { `loki.source.cloudflare` supports the following arguments: -Name | Type | Description | Default | Required ---------------- | -------------------- | -------------------- | ------- | -------- -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`api_token` | `string` | The API token to authenticate with. | | yes -`zone_id` | `string` | The Cloudflare zone ID to use. | | yes -`labels` | `map(string)` | The labels to associate with incoming log entries. | `{}` | no -`workers` | `int` | The number of workers to use for parsing logs. | `3` | no -`pull_range` | `duration` | The timeframe to fetch for each pull request. | `"1m"` | no -`fields_type` | `string` | The set of fields to fetch for log entries. | `"default"` | no -`additional_fields` | `list(string)` | The additional list of fields to supplement those provided via `fields_type`. | | no +Name | Type | Description | Default | Required +--------------------|----------------------|-------------------------------------------------------------------------------|-------------|--------- +`api_token` | `string` | The API token to authenticate with. | | yes +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. 
| | yes +`zone_id` | `string` | The Cloudflare zone ID to use. | | yes +`additional_fields` | `list(string)` | The additional list of fields to supplement those provided via `fields_type`. | | no +`fields_type` | `string` | The set of fields to fetch for log entries. | `"default"` | no +`labels` | `map(string)` | The labels to associate with incoming log entries. | `{}` | no +`pull_range` | `duration` | The timeframe to fetch for each pull request. | `"1m"` | no +`workers` | `int` | The number of workers to use for parsing logs. | `3` | no -By default `loki.source.cloudflare` fetches logs with the `default` set of -fields. Here are the different sets of `fields_type` available for selection, -and the fields they include: +By default `loki.source.cloudflare` fetches logs with the `default` set of fields. +The following list shows the different sets of `fields_type` available for selection, and the fields they include: * `default` includes: -``` -"ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeRequestHost", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID" -``` -plus any extra fields provided via `additional_fields` argument. + ``` + "ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeRequestHost", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID" + ``` + plus any extra fields provided via `additional_fields` argument. * `minimal` includes all `default` fields and adds: -``` -"ZoneID", "ClientSSLProtocol", "ClientRequestProtocol", "ClientRequestPath", "ClientRequestUserAgent", "ClientRequestReferer", "EdgeColoCode", "ClientCountry", "CacheCacheStatus", "CacheResponseStatus", "EdgeResponseContentType" -``` -plus any extra fields provided via `additional_fields` argument. 
+ ``` + "ZoneID", "ClientSSLProtocol", "ClientRequestProtocol", "ClientRequestPath", "ClientRequestUserAgent", "ClientRequestReferer", "EdgeColoCode", "ClientCountry", "CacheCacheStatus", "CacheResponseStatus", "EdgeResponseContentType" + ``` + plus any extra fields provided via `additional_fields` argument. * `extended` includes all `minimal` fields and adds: -``` -"ClientSSLCipher", "ClientASN", "ClientIPClass", "CacheResponseBytes", "EdgePathingOp", "EdgePathingSrc", "EdgePathingStatus", "ParentRayID", "WorkerCPUTime", "WorkerStatus", "WorkerSubrequest", "WorkerSubrequestCount", "OriginIP", "OriginResponseStatus", "OriginSSLProtocol", "OriginResponseHTTPExpires", "OriginResponseHTTPLastModified" -``` -plus any extra fields provided via `additional_fields` argument. + ``` + "ClientSSLCipher", "ClientASN", "ClientIPClass", "CacheResponseBytes", "EdgePathingOp", "EdgePathingSrc", "EdgePathingStatus", "ParentRayID", "WorkerCPUTime", "WorkerStatus", "WorkerSubrequest", "WorkerSubrequestCount", "OriginIP", "OriginResponseStatus", "OriginSSLProtocol", "OriginResponseHTTPExpires", "OriginResponseHTTPLastModified" + ``` + plus any extra fields provided via `additional_fields` argument. * `all` includes all `extended` fields and adds: -``` - "BotScore", "BotScoreSrc", "BotTags", "ClientRequestBytes", "ClientSrcPort", "ClientXRequestedWith", "CacheTieredFill", "EdgeResponseCompressionRatio", "EdgeServerIP", "FirewallMatchesSources", "FirewallMatchesActions", "FirewallMatchesRuleIDs", "OriginResponseBytes", "OriginResponseTime", "ClientDeviceType", "WAFFlags", "WAFMatchedVar", "EdgeColoID", "RequestHeaders", "ResponseHeaders", "ClientRequestSource"` -``` -plus any extra fields provided via `additional_fields` argument (this is still relevant in this case if new fields are made available via Cloudflare API but are not yet included in `all`). 
+  ```
+  "BotScore", "BotScoreSrc", "BotTags", "ClientRequestBytes", "ClientSrcPort", "ClientXRequestedWith", "CacheTieredFill", "EdgeResponseCompressionRatio", "EdgeServerIP", "FirewallMatchesSources", "FirewallMatchesActions", "FirewallMatchesRuleIDs", "OriginResponseBytes", "OriginResponseTime", "ClientDeviceType", "WAFFlags", "WAFMatchedVar", "EdgeColoID", "RequestHeaders", "ResponseHeaders", "ClientRequestSource"
+  ```
+  plus any extra fields provided via `additional_fields` argument (this is still relevant if new fields are made available via the Cloudflare API but are not yet included in `all`).

 * `custom` includes only the fields defined in `additional_fields`.

-The component saves the last successfully-fetched timestamp in its positions
-file. If a position is found in the file for a given zone ID, the component
-restarts pulling logs from that timestamp. When no position is found, the
-component starts pulling logs from the current time.
+The component saves the last successfully-fetched timestamp in its positions file.
+If a position is found in the file for a given zone ID, the component restarts pulling logs from that timestamp.
+When no position is found, the component starts pulling logs from the current time.

-Logs are fetched using multiple `workers` which request the last available
-`pull_range` repeatedly. It is possible to fall behind due to having too many
-log lines to process for each pull; adding more workers, decreasing the pull
-range, or decreasing the quantity of fields fetched can mitigate this
-performance issue.
+Logs are fetched using multiple `workers` which request the last available `pull_range` repeatedly.
+It's possible to fall behind due to having too many log lines to process for each pull.
-The last timestamp fetched by the component is recorded in the -`loki_source_cloudflare_target_last_requested_end_timestamp` debug metric. +The last timestamp fetched by the component is recorded in the `loki_source_cloudflare_target_last_requested_end_timestamp` debug metric. + +All incoming Cloudflare log entries are in JSON format. You can make use of the `loki.process` component and a JSON processing stage to extract more labels or change the log line format. +A sample log looks like this: -All incoming Cloudflare log entries are in JSON format. You can make use of the -`loki.process` component and a JSON processing stage to extract more labels or -change the log line format. A sample log looks like this: ```json { "CacheCacheStatus": "miss", @@ -165,15 +157,13 @@ change the log line format. A sample log looks like this: } ``` - ## Exported fields -`loki.source.cloudflare` does not export any fields. +`loki.source.cloudflare` doesn't export any fields. ## Component health -`loki.source.cloudflare` is only reported as unhealthy if given an invalid -configuration. +`loki.source.cloudflare` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -192,8 +182,7 @@ configuration. ## Example -This example pulls logs from Cloudflare's API and forwards them to a -`loki.write` component. +This example pulls logs from Cloudflare's API and forwards them to a `loki.write` component. ```river loki.source.cloudflare "dev" { diff --git a/docs/sources/flow/reference/components/loki.source.docker.md b/docs/sources/flow/reference/components/loki.source.docker.md index 0bb11ddecb17..b50b0ef2121a 100644 --- a/docs/sources/flow/reference/components/loki.source.docker.md +++ b/docs/sources/flow/reference/components/loki.source.docker.md @@ -12,12 +12,10 @@ title: loki.source.docker # loki.source.docker -`loki.source.docker` reads log entries from Docker containers and forwards them -to other `loki.*` components. 
Each component can read from a single Docker -daemon. +`loki.source.docker` reads log entries from Docker containers and forwards them to other `loki.*` components. +Each component can read from a single Docker daemon. -Multiple `loki.source.docker` components can be specified by giving them -different labels. +Multiple `loki.source.docker` components can be specified by giving them different labels. ## Usage @@ -30,38 +28,36 @@ loki.source.docker "LABEL" { ``` ## Arguments -The component starts a new reader for each of the given `targets` and fans out -log entries to the list of receivers passed in `forward_to`. +The component starts a new reader for each of the given `targets` and fans out log entries to the list of receivers passed in `forward_to`. `loki.source.file` supports the following arguments: -Name | Type | Description | Default | Required ---------------- | -------------------- | -------------------- | ------- | -------- -`host` | `string` | Address of the Docker daemon. | | yes -`targets` | `list(map(string))` | List of containers to read logs from. | | yes -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`labels` | `map(string)` | The default set of labels to apply on entries. | `"{}"` | no -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `"{}"` | no -`refresh_interval` | `duration` | The refresh interval to use when connecting to the Docker daemon over HTTP(S). | `"60s"` | no +Name | Type | Description | Default | Required +-------------------|----------------------|--------------------------------------------------------------------------------|---------|--------- +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +`host` | `string` | Address of the Docker daemon. | | yes +`targets` | `list(map(string))` | List of containers to read logs from. | | yes +`labels` | `map(string)` | The default set of labels to apply on entries. 
| `"{}"` | no +`refresh_interval` | `duration` | The refresh interval to use when connecting to the Docker daemon over HTTP(S). | `"60s"` | no +`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `"{}"` | no ## Blocks The following blocks are supported inside the definition of `loki.source.docker`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | HTTP client settings when connecting to the endpoint. | no -client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +-----------------------------|-------------------|----------------------------------------------------------|--------- +client | [client][] | HTTP client settings when connecting to the endpoint. | no +client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no +client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. For example, `client > -basic_auth` refers to an `basic_auth` block defined inside a `client` block. +The `>` symbol indicates deeper levels of nesting. 
+For example, `client > basic_auth` refers to a `basic_auth` block defined inside a `client` block.

-These blocks are only applicable when connecting to a Docker daemon over HTTP
-or HTTPS and has no effect when connecting via a `unix:///` socket
+These blocks are only applicable when connecting to a Docker daemon over HTTP or HTTPS and have no effect when connecting via a `unix:///` socket.

 [client]: #client-block
 [basic_auth]: #basic_auth-block
@@ -69,49 +65,49 @@ or HTTPS and has no effect when connecting via a `unix:///` socket
 [oauth2]: #oauth2-block
 [tls_config]: #tls_config-block

-### client block
+### client

-The `client` block configures settings used to connect to HTTP(S) Docker
-daemons.
+The `client` block configures settings used to connect to HTTP(S) Docker daemons.

 {{< docs/shared lookup="flow/reference/components/http-client-config-block.md" source="agent" version="" >}}

-### basic_auth block
+### client > authorization

-The `basic_auth` block configures basic authentication for HTTP(S) Docker
-daemons.
+The `authorization` block configures custom authorization to use for the Docker daemon.

-{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}}

-### authorization block
+### client > basic_auth

-The `authorization` block configures custom authorization to use for the Docker
-daemon.
+The `basic_auth` block configures basic authentication for HTTP(S) Docker daemons.

-{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}}

-### oauth2 block
+### client > oauth2

-The `oauth2` block configures OAuth2 authorization to use for the Docker
-daemon.
+The `oauth2` block configures OAuth2 authorization to use for the Docker daemon.
{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### client > oauth2 > tls_config + +The `tls_config` block configures TLS settings for connecting to HTTPS Docker daemons. + +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} + +### client > tls_config -The `tls_config` block configures TLS settings for connecting to HTTPS Docker -daemons. +The `tls_config` block configures TLS settings for connecting to HTTPS Docker daemons. {{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields -`loki.source.docker` does not export any fields. +`loki.source.docker` doesn't export any fields. ## Component health -`loki.source.docker` is only reported as unhealthy if given an invalid -configuration. +`loki.source.docker` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -126,15 +122,12 @@ configuration. * `loki_source_docker_target_parsing_errors_total` (gauge): Total number of parsing errors while receiving Docker messages. ## Component behavior -The component uses its data path (a directory named after the domain's -fully qualified name) to store its _positions file_. The positions file -stores the read offsets so that if there is a component or Agent restart, -`loki.source.docker` can pick up tailing from the same spot. +The component uses its data path (a directory named after the domain's fully qualified name) to store its _positions file_. +The positions file stores the read offsets so that if there is a component or Agent restart, `loki.source.docker` can pick up tailing from the same spot. ## Example -This example collects log entries from the files specified in the `targets` -argument and forwards them to a `loki.write` component to be written to Loki. 
+This example collects log entries from the files specified in the `targets` argument and forwards them to a `loki.write` component to be written to Loki. ```river discovery.docker "linux" { @@ -143,7 +136,7 @@ discovery.docker "linux" { loki.source.docker "default" { host = "unix:///var/run/docker.sock" - targets = discovery.docker.linux.targets + targets = discovery.docker.linux.targets forward_to = [loki.write.local.receiver] } diff --git a/docs/sources/flow/reference/components/loki.source.file.md b/docs/sources/flow/reference/components/loki.source.file.md index 2e9c8d9f333b..18e87bf520a2 100644 --- a/docs/sources/flow/reference/components/loki.source.file.md +++ b/docs/sources/flow/reference/components/loki.source.file.md @@ -11,14 +11,12 @@ title: loki.source.file # loki.source.file -`loki.source.file` reads log entries from files and forwards them to other -`loki.*` components. +`loki.source.file` reads log entries from files and forwards them to other `loki.*` components. -Multiple `loki.source.file` components can be specified by giving them -different labels. +Multiple `loki.source.file` components can be specified by giving them different labels. {{% admonition type="note" %}} -`loki.source.file` does not handle file discovery. You can use `local.file_match` for file discovery. Refer to the [File Globbing](#file-globbing) example for more information. +`loki.source.file` doesn't handle file discovery. You can use `local.file_match` for file discovery. Refer to the [File Globbing](#file-globbing) example for more information. {{% /admonition %}} ## Usage @@ -32,20 +30,18 @@ loki.source.file "LABEL" { ## Arguments -The component starts a new reader for each of the given `targets` and fans out -log entries to the list of receivers passed in `forward_to`. +The component starts a new reader for each of the given `targets` and fans out log entries to the list of receivers passed in `forward_to`. 
`loki.source.file` supports the following arguments: -| Name | Type | Description | Default | Required | -| --------------- | -------------------- | ----------------------------------------------------------------------------------- | ------- | -------- | -| `targets` | `list(map(string))` | List of files to read from. | | yes | -| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | -| `encoding` | `string` | The encoding to convert from when reading files. | `""` | no | -| `tail_from_end` | `bool` | Whether a log file should be tailed from the end if a stored position is not found. | `false` | no | +| Name | Type | Description | Default | Required | +|-----------------|----------------------|------------------------------------------------------------------------------------|---------|----------| +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| `targets` | `list(map(string))` | List of files to read from. | | yes | +| `encoding` | `string` | The encoding to convert from when reading files. | `""` | no | +| `tail_from_end` | `bool` | Whether a log file should be tailed from the end if a stored position isn't found. | `false` | no | -The `encoding` argument must be a valid [IANA encoding][] name. If not set, it -defaults to UTF-8. +The `encoding` argument must be a valid [IANA encoding][] name. If not set, it defaults to UTF-8. You can use the `tail_from_end` argument when you want to tail a large file without reading its entire content. When set to true, only new logs will be read, ignoring the existing ones. @@ -62,20 +58,18 @@ The following blocks are supported inside the definition of `loki.source.file`: [decompresssion]: #decompresssion-block [file_watch]: #file_watch-block -### decompresssion block +### decompresssion -The `decompression` block contains configuration for reading logs from -compressed files. 
The following arguments are supported:
+The `decompression` block contains configuration for reading logs from compressed files. The following arguments are supported:

 | Name            | Type       | Description                                                     | Default | Required |
 | --------------- | ---------- | --------------------------------------------------------------- | ------- | -------- |
 | `enabled`       | `bool`     | Whether decompression is enabled.                               |         | yes      |
-| `initial_delay` | `duration` | Time to wait before starting to read from new compressed files. | 0       | no       |
 | `format`        | `string`   | Compression format.                                             |         | yes      |
+| `initial_delay` | `duration` | Time to wait before starting to read from new compressed files. | 0       | no       |

-If you compress a file under a folder being scraped, `loki.source.file` might
-try to ingest your file before you finish compressing it. To avoid it, pick
-an `initial_delay` that is enough to avoid it.
+If you compress a file under a folder being scraped, `loki.source.file` might try to ingest your file before you finish compressing it.
+To avoid this, pick an `initial_delay` that is long enough.

 Currently supported compression formats are:

@@ -83,8 +77,7 @@ Currently supported compression formats are:
 - `z` - for zlib
 - `bz2` - for bzip2

-The component can only support one compression format at a time, in order to
-handle multiple formats, you will need to create multiple components.
+The component can only support one compression format at a time. To handle multiple formats, you must create multiple components.

 ### file_watch block

@@ -93,8 +86,8 @@ The following arguments are supported:

 | Name                 | Type       | Description                          | Default | Required |
 | -------------------- | ---------- | ------------------------------------ | ------- | -------- |
-| `min_poll_frequency` | `duration` | Minimum frequency to poll for files. | 250ms   | no       |
 | `max_poll_frequency` | `duration` | Maximum frequency to poll for files. | 250ms   | no       |
+| `min_poll_frequency` | `duration` | Minimum frequency to poll for files.
| 250ms | no | If no file changes are detected, the poll frequency doubles until a file change is detected or the poll frequency reaches the `max_poll_frequency`. @@ -102,12 +95,11 @@ If file changes are detected, the poll frequency is reset to `min_poll_frequency ## Exported fields -`loki.source.file` does not export any fields. +`loki.source.file` doesn't export any fields. ## Component health -`loki.source.file` is only reported as unhealthy if given an invalid -configuration. +`loki.source.file` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -119,45 +111,38 @@ configuration. ## Debug metrics -- `loki_source_file_read_bytes_total` (gauge): Number of bytes read. -- `loki_source_file_file_bytes_total` (gauge): Number of bytes total. -- `loki_source_file_read_lines_total` (counter): Number of lines read. - `loki_source_file_encoding_failures_total` (counter): Number of encoding failures. +- `loki_source_file_file_bytes_total` (gauge): Number of bytes total. - `loki_source_file_files_active_total` (gauge): Number of active files. +- `loki_source_file_read_bytes_total` (gauge): Number of bytes read. +- `loki_source_file_read_lines_total` (counter): Number of lines read. ## Component behavior If the decompression feature is deactivated, the component will continuously monitor and 'tail' the files. -In this mode, upon reaching the end of a file, the component remains active, awaiting and reading new entries in real-time as they are appended. +In this mode, upon reaching the end of a file, the component remains active, awaiting, and reading new entries in real-time as they are appended. -Each element in the list of `targets` as a set of key-value pairs called -_labels_. -The set of targets can either be _static_, or dynamically provided periodically -by a service discovery component. The special label `__path__` _must always_ be -present and must point to the absolute path of the file to read from. 
+Each element in the list of `targets` is a set of key-value pairs called _labels_.
+The set of targets can either be _static_, or dynamically provided periodically by a service discovery component.
+The special label `__path__` _must always_ be present and must point to the absolute path of the file to read from.

-The `__path__` value is available as the `filename` label to each log entry
-the component reads. All other labels starting with a double underscore are
-considered _internal_ and are removed from the log entries before they're
-passed to other `loki.*` components.
+The `__path__` value is available as the `filename` label to each log entry the component reads.
+All other labels starting with a double underscore are considered _internal_ and are removed from the log entries before they're passed to other `loki.*` components.

-The component uses its data path (a directory named after the domain's
-fully qualified name) to store its _positions file_. The positions file is used
-to store read offsets, so that in case of a component or Agent restart,
+The component uses its data path (a directory named after the domain's fully qualified name) to store its _positions file_.
+The positions file is used to store read offsets, so that in case of a component or Agent restart,
 `loki.source.file` can pick up tailing from the same spot.

-If a file is removed from the `targets` list, its positions file entry is also
-removed. When it's added back on, `loki.source.file` starts reading it from the
-beginning.
+If a file is removed from the `targets` list, its positions file entry is also removed.
+When it's added back on, `loki.source.file` starts reading it from the beginning.

 ## Examples

 ### Static targets

-This example collects log entries from the files specified in the targets
-argument and forwards them to a `loki.write` component to be written to Loki.
+This example collects log entries from the files specified in the targets argument and forwards them to a `loki.write` component to be written to Loki. ```river loki.source.file "tmpfiles" { @@ -178,9 +163,8 @@ loki.write "local" { ### File globbing -This example collects log entries from the files matching `*.log` pattern -using `local.file_match` component. When files appear or disappear, the list of -targets will be updated accordingly. +This example collects log entries from the files matching `*.log` pattern using `local.file_match` component. +When files appear or disappear, the list of targets will be updated accordingly. ```river @@ -204,9 +188,7 @@ loki.write "local" { ### Decompression -This example collects log entries from the compressed files matching `*.gz` -pattern using `local.file_match` component and the decompression configuration -on the `loki.source.file` component. +This example collects log entries from the compressed files matching `*.gz` pattern using `local.file_match` component and the decompression configuration on the `loki.source.file` component. ```river diff --git a/docs/sources/flow/reference/components/loki.source.gcplog.md b/docs/sources/flow/reference/components/loki.source.gcplog.md index 3379a43c32a9..1e38d2f0a568 100644 --- a/docs/sources/flow/reference/components/loki.source.gcplog.md +++ b/docs/sources/flow/reference/components/loki.source.gcplog.md @@ -11,15 +11,11 @@ title: loki.source.gcplog # loki.source.gcplog -`loki.source.gcplog` retrieves logs from cloud resources such as GCS buckets, -load balancers, or Kubernetes clusters running on GCP by making use of Pub/Sub -[subscriptions](https://cloud.google.com/pubsub/docs/subscriber). +`loki.source.gcplog` retrieves logs from cloud resources such as GCS buckets, load balancers, or Kubernetes clusters running on GCP by making use of Pub/Sub [subscriptions](https://cloud.google.com/pubsub/docs/subscriber). 
-The component uses either the 'push' or 'pull' strategy to retrieve log
-entries and forward them to the list of receivers in `forward_to`.
+The component uses either the 'push' or 'pull' strategy to retrieve log entries and forward them to the list of receivers in `forward_to`.

-Multiple `loki.source.gcplog` components can be specified by giving them
-different labels.
+Multiple `loki.source.gcplog` components can be specified by giving them different labels.

 ## Usage

@@ -52,12 +48,11 @@ The following blocks are supported inside the definition of
 |-------------|----------|-------------------------------------------------------------------------------|----------|
 | pull        | [pull][] | Configures a target to pull logs from a GCP Pub/Sub subscription.             | no       |
 | push        | [push][] | Configures a server to receive logs as GCP Pub/Sub push requests.             | no       |
-| push > http | [http][] | Configures the HTTP server that receives requests when using the `push` mode. | no       |
 | push > grpc | [grpc][] | Configures the gRPC server that receives requests when using the `push` mode. | no       |
+| push > http | [http][] | Configures the HTTP server that receives requests when using the `push` mode. | no       |

-The `pull` and `push` inner blocks are mutually exclusive; a component must
-contain exactly one of the two in its definition. The `http` and `grpc` block
-are just used when the `push` block is configured.
+The `pull` and `push` inner blocks are mutually exclusive; a component must contain exactly one of the two in its definition.
+The `http` and `grpc` blocks are only used when the `push` block is configured.

 [pull]: #pull-block
 [push]: #push-block
@@ -66,72 +61,58 @@ are just used when the `push` block is configured.

 ### pull block

-The `pull` block defines which GCP project ID and subscription to read log
-entries from.
+The `pull` block defines which GCP project ID and subscription to read log entries from.

-The following arguments can be used to configure the `pull` block.
Any omitted -fields take their default values. +The following arguments can be used to configure the `pull` block. Any omitted fields take their default values. | Name | Type | Description | Default | Required | |--------------------------|---------------|---------------------------------------------------------------------------|---------|----------| | `project_id` | `string` | The GCP project id the subscription belongs to. | | yes | | `subscription` | `string` | The subscription to pull logs from. | | yes | | `labels` | `map(string)` | Additional labels to associate with incoming logs. | `"{}"` | no | -| `use_incoming_timestamp` | `bool` | Whether to use the incoming log timestamp. | `false` | no | | `use_full_line` | `bool` | Send the full line from Cloud Logging even if `textPayload` is available. | `false` | no | +| `use_incoming_timestamp` | `bool` | Whether to use the incoming log timestamp. | `false` | no | -To make use of the `pull` strategy, the GCP project must have been -[configured](/docs/loki/next/clients/promtail/gcplog-cloud/) -to forward its cloud resource logs onto a Pub/Sub topic for -`loki.source.gcplog` to consume. +To make use of the `pull` strategy, the GCP project must have been [configured](/docs/loki/next/clients/promtail/gcplog-cloud/) to forward its cloud resource logs onto a Pub/Sub topic for `loki.source.gcplog` to consume. -Typically, the host system also needs to have its GCP -[credentials](https://cloud.google.com/docs/authentication/application-default-credentials) -configured. One way to do it is to point the `GOOGLE_APPLICATION_CREDENTIALS` -environment variable to the location of a credential configuration JSON file or -a service account key. +Typically, the host system also needs to have its GCP [credentials](https://cloud.google.com/docs/authentication/application-default-credentials) configured. 
+One way to do this is to point the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the location of a credential configuration JSON file or a service account key.

### push block

-The `push` block defines the configuration of the server that receives
-push requests from GCP's Pub/Sub servers.
+The `push` block defines the configuration of the server that receives push requests from GCP's Pub/Sub servers.

-The following arguments can be used to configure the `push` block. Any omitted
-fields take their default values.
+The following arguments can be used to configure the `push` block. Any omitted fields take their default values.

-| Name | Type | Description | Default | Required |
-|-----------------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|---------|----------|
-| `graceful_shutdown_timeout` | `duration` | Timeout for servers graceful shutdown. If configured, should be greater than zero. | "30s" | no |
-| `push_timeout` | `duration` | Sets a maximum processing time for each incoming GCP log entry. | `"0s"` | no |
-| `labels` | `map(string)` | Additional labels to associate with incoming entries. | `"{}"` | no |
-| `use_incoming_timestamp` | `bool` | Whether to use the incoming entry timestamp. | `false` | no |
+| Name | Type | Description | Default | Required |
+|-----------------------------|---------------|--------------------------------------------------------------------------------------------|---------|----------|
+| `graceful_shutdown_timeout` | `duration` | Timeout for the server's graceful shutdown. If configured, it must be greater than zero. | `"30s"` | no |
+| `labels` | `map(string)` | Additional labels to associate with incoming entries. | `"{}"` | no |
+| `push_timeout` | `duration` | Sets a maximum processing time for each incoming GCP log entry.
| `"0s"` | no | +| `use_incoming_timestamp` | `bool` | Whether to use the incoming entry timestamp. | `false` | no | | `use_full_line` | `bool` | Send the full line from Cloud Logging even if `textPayload` is available. By default, if `textPayload` is present in the line, then it's used as log line | `false` | no | -The server listens for POST requests from GCP's Push subscriptions on -`HOST:PORT/gcp/api/v1/push`. +The server listens for POST requests from GCP's Push subscriptions on `HOST:PORT/gcp/api/v1/push`. -By default, for both strategies the component assigns the log entry timestamp -as the time it was processed, except if `use_incoming_timestamp` is set to -true. +By default, for both strategies the component assigns the log entry timestamp as the time it was processed, except if `use_incoming_timestamp` is set to true. The `labels` map is applied to every entry that passes through the component. ### http -{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} ### grpc -{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}} ## Exported fields -`loki.source.gcplog` does not export any fields. +`loki.source.gcplog` doesn't export any fields. ## Component health -`loki.source.gcplog` is only reported as unhealthy if given an invalid -configuration. +`loki.source.gcplog` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -142,22 +123,19 @@ configuration. 
## Debug metrics

-When using the `pull` strategy, the component exposes the following debug
-metrics:
+When using the `pull` strategy, the component exposes the following debug metrics:

* `loki_source_gcplog_pull_entries_total` (counter): Number of entries received by the gcplog target.
-* `loki_source_gcplog_pull_parsing_errors_total` (counter): Total number of parsing errors while receiving gcplog messages.
* `loki_source_gcplog_pull_last_success_scrape` (gauge): Timestamp of target's last successful poll.
+* `loki_source_gcplog_pull_parsing_errors_total` (counter): Total number of parsing errors while receiving gcplog messages.

-When using the `push` strategy, the component exposes the following debug
-metrics:
+When using the `push` strategy, the component exposes the following debug metrics:

* `loki_source_gcplog_push_entries_total` (counter): Number of entries received by the gcplog target.
* `loki_source_gcplog_push_parsing_errors_total` (counter): Number of parsing errors while receiving gcplog messages.

## Example

-This example listens for GCP Pub/Sub PushRequests on `0.0.0.0:8080` and
-forwards them to a `loki.write` component.
+This example listens for GCP Pub/Sub PushRequests on `0.0.0.0:8080` and forwards them to a `loki.write` component.

```river
loki.source.gcplog "local" {
@@ -173,8 +151,7 @@ loki.write "local" {
}
```

-On the other hand, if we need the server to listen on `0.0.0.0:4040`, and forwards them
-to a `loki.write` component.
+The following example configures the server to listen on `0.0.0.0:4040` and forwards entries to a `loki.write` component.
```river loki.source.gcplog "local" { diff --git a/docs/sources/flow/reference/components/loki.source.gelf.md b/docs/sources/flow/reference/components/loki.source.gelf.md index e8544fe0248f..8b2aeb8831b8 100644 --- a/docs/sources/flow/reference/components/loki.source.gelf.md +++ b/docs/sources/flow/reference/components/loki.source.gelf.md @@ -11,11 +11,9 @@ title: loki.source.gelf # loki.source.gelf -`loki.source.gelf` reads [Graylog Extended Long Format (GELF) logs](https://github.com/Graylog2/graylog2-server) from a UDP listener and forwards them to other -`loki.*` components. +`loki.source.gelf` reads [Graylog Extended Long Format (GELF) logs](https://github.com/Graylog2/graylog2-server) from a UDP listener and forwards them to other `loki.*` components. -Multiple `loki.source.gelf` components can be specified by giving them -different labels and ports. +Multiple `loki.source.gelf` components can be specified by giving them different labels and ports. ## Usage @@ -26,42 +24,37 @@ loki.source.gelf "LABEL" { ``` ## Arguments -The component starts a new UDP listener and fans out -log entries to the list of receivers passed in `forward_to`. +The component starts a new UDP listener and fans out log entries to the list of receivers passed in `forward_to`. `loki.source.gelf` supports the following arguments: -Name | Type | Description | Default | Required ------------- |----------------------|--------------------------------------------------------------------------------|----------------------------| -------- -`listen_address` | `string` | UDP address and port to listen for Graylog messages. | `0.0.0.0:12201` | no -`use_incoming_timestamp` | `bool` | When false, assigns the current timestamp to the log when it was processed | `false` | no -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. 
| "{}" | no
+Name                     | Type           | Description                                                                      | Default         | Required
+-------------------------|----------------|----------------------------------------------------------------------------------|-----------------|---------
+`listen_address`         | `string`       | UDP address and port to listen for Graylog messages.                             | `0.0.0.0:12201` | no
+`use_incoming_timestamp` | `bool`         | When false, assigns the current timestamp to the log entry when it's processed.  | `false`         | no
+`relabel_rules`          | `RelabelRules` | Relabeling rules to apply on log entries.                                        | "{}"            | no
+{{% admonition type="note" %}}
+GELF logs can be sent uncompressed or compressed with GZIP or ZLIB. A `job` label is added with the full name of the component `loki.source.gelf.LABEL`.
+{{% /admonition %}}

-> **NOTE**: GELF logs can be sent uncompressed or compressed with GZIP or ZLIB.
-> A `job` label is added with the full name of the component `loki.source.gelf.LABEL`.
-
-The `relabel_rules` argument can make use of the `rules` export from a
-[loki.relabel][] component to apply one or more relabling rules to log entries
-before they're forward to the list of receivers specified in `forward_to`.
+The `relabel_rules` argument can make use of the `rules` export from a [loki.relabel][] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers specified in `forward_to`.

Incoming messages have the following internal labels available:

-* `__gelf_message_level`: The GELF level as a string.
-* `__gelf_message_host`: The host sending the GELF message.
-* `__gelf_message_host`: The GELF level message version sent by the client.
-* `__gelf_message_facility`: The GELF facility.
+* `__gelf_message_facility`: The GELF facility.
+* `__gelf_message_host`: The host sending the GELF message.
+* `__gelf_message_level`: The GELF level as a string.
+* `__gelf_message_version`: The GELF message version sent by the client.

-All labels starting with `__` are removed prior to forwarding log entries.
To -keep these labels, relabel them using a [loki.relabel][] component and pass its -`rules` export to the `relabel_rules` argument. +All labels starting with `__` are removed prior to forwarding log entries. +To keep these labels, relabel them using a [loki.relabel][] component and pass its `rules` export to the `relabel_rules` argument. [loki.relabel]: {{< relref "./loki.relabel.md" >}} ## Component health -`loki.source.gelf` is only reported as unhealthy if given an invalid -configuration. +`loki.source.gelf` is only reported as unhealthy if given an invalid configuration. ## Debug Metrics diff --git a/docs/sources/flow/reference/components/loki.source.heroku.md b/docs/sources/flow/reference/components/loki.source.heroku.md index f98b00312062..37172a5738eb 100644 --- a/docs/sources/flow/reference/components/loki.source.heroku.md +++ b/docs/sources/flow/reference/components/loki.source.heroku.md @@ -11,20 +11,18 @@ title: loki.source.heroku # loki.source.heroku -`loki.source.heroku` listens for Heroku messages over TCP connections -and forwards them to other `loki.*` components. +`loki.source.heroku` listens for Heroku messages over TCP connections and forwards them to other `loki.*` components. -The component starts a new heroku listener for the given `listener` -block and fans out incoming entries to the list of receivers in `forward_to`. +The component starts a new heroku listener for the given `listener` block and fans out incoming entries to the list of receivers in `forward_to`. -Before using `loki.source.heroku`, Heroku should be configured with the URL where the Agent will be listening. Follow the steps in [Heroku HTTPS Drain docs](https://devcenter.heroku.com/articles/log-drains#https-drains) for using the Heroku CLI with a command like the following: +Before using `loki.source.heroku`, Heroku should be configured with the URL where the Agent will be listening. 
+Follow the steps in [Heroku HTTPS Drain docs](https://devcenter.heroku.com/articles/log-drains#https-drains) for using the Heroku CLI with a command like the following:

```shell
heroku drains:add [http|https]://HOSTNAME:PORT/heroku/api/v1/drain -a HEROKU_APP_NAME
```

-Multiple `loki.source.heroku` components can be specified by giving them
-different labels.
+Multiple `loki.source.heroku` components can be specified by giving them different labels.

## Usage

@@ -42,60 +40,57 @@ loki.source.heroku "LABEL" {

`loki.source.heroku` supports the following arguments:

-Name | Type | Description | Default | Required
------------------------- | ---------------------- |------------------------------------------------------------------------------------| ------- | --------
-`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Heroku. | `false` | no
-`labels` | `map(string)` | The labels to associate with each received Heroku record. | `{}` | no
-`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
-`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no
-`graceful_shutdown_timeout` | `duration` | Timeout for servers graceful shutdown. If configured, should be greater than zero. | "30s" | no
+Name                        | Type                 | Description                                                                               | Default | Required
+----------------------------|----------------------|-------------------------------------------------------------------------------------------|---------|---------
+`forward_to`                | `list(LogsReceiver)` | List of receivers to send log entries to.                                                |         | yes
+`graceful_shutdown_timeout` | `duration`           | Timeout for the server's graceful shutdown. If configured, it must be greater than zero. | `"30s"` | no
+`labels`                    | `map(string)`        | The labels to associate with each received Heroku record.                                | `{}`    | no
+`relabel_rules`             | `RelabelRules`       | Relabeling rules to apply on log entries.
| `{}` | no +`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Heroku. | `false` | no -The `relabel_rules` field can make use of the `rules` export value from a -`loki.relabel` component to apply one or more relabeling rules to log entries -before they're forwarded to the list of receivers in `forward_to`. +The `relabel_rules` field can make use of the `rules` export value from a `loki.relabel` component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. ## Blocks The following blocks are supported inside the definition of `loki.source.heroku`: - Hierarchy | Name | Description | Required ------------|----------|----------------------------------------------------|---------- - `http` | [http][] | Configures the HTTP server that receives requests. | no - `grpc` | [grpc][] | Configures the gRPC server that receives requests. | no +Hierarchy | Name | Description | Required +----------|----------|----------------------------------------------------|--------- +`grpc` | [grpc][] | Configures the gRPC server that receives requests. | no +`http` | [http][] | Configures the HTTP server that receives requests. | no [http]: #http [grpc]: #grpc -### http +### grpc -{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}} -### grpc +### http -{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} ## Labels The `labels` map is applied to every message that the component reads. 
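As an illustrative sketch of attaching static labels, the following configuration adds two labels to every received drain entry. The listen port, label names, and label values shown here are assumptions for the example, not defaults:

```river
// Hypothetical example: receive Heroku drain requests on port 8080 and
// attach two static labels to every entry. The "env" and "app" labels
// are illustrative; use whatever labels fit your setup.
loki.source.heroku "labelled" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 8080
  }

  labels = {
    "env" = "dev",
    "app" = "my-heroku-app",
  }

  // Replace with real receivers, for example a loki.write component's receiver.
  forward_to = []
}
```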
The following internal labels all prefixed with `__` are available but will be discarded if not relabeled:

-- `__heroku_drain_host`
- `__heroku_drain_app`
-- `__heroku_drain_proc`
+- `__heroku_drain_host`
- `__heroku_drain_log_id`
+- `__heroku_drain_proc`

-All url query params will be translated to `__heroku_drain_param_`
+All URL query parameters will be translated to `__heroku_drain_param_<name>`, where `<name>` is the name of the query parameter.

If the `X-Scope-OrgID` header is set it will be translated to `__tenant_id__`

## Exported fields

-`loki.source.heroku` does not export any fields.
+`loki.source.heroku` doesn't export any fields.

## Component health

-`loki.source.heroku` is only reported as unhealthy if given an invalid
-configuration.
+`loki.source.heroku` is only reported as unhealthy if given an invalid configuration.

## Debug information

@@ -109,7 +104,7 @@ configuration.

## Example

-This example listens for Heroku messages over TCP in the specified port and forwards them to a `loki.write` component using the Heroku timestamp.
+The following example listens for Heroku messages over TCP on the specified port and forwards them to a `loki.write` component using the Heroku timestamp.

```river
loki.source.heroku "local" {
diff --git a/docs/sources/flow/reference/components/loki.source.journal.md b/docs/sources/flow/reference/components/loki.source.journal.md
index 26a1922b7aeb..c984783aaa0c 100644
--- a/docs/sources/flow/reference/components/loki.source.journal.md
+++ b/docs/sources/flow/reference/components/loki.source.journal.md
@@ -11,11 +11,9 @@ title: loki.source.journal

# loki.source.journal

-`loki.source.journal` reads from the systemd journal and forwards them to other
-`loki.*` components.
+`loki.source.journal` reads log entries from the systemd journal and forwards them to other `loki.*` components.

-Multiple `loki.source.journal` components can be specified by giving them
-different labels.
+Multiple `loki.source.journal` components can be specified by giving them different labels.
## Usage @@ -26,56 +24,48 @@ loki.source.journal "LABEL" { ``` ## Arguments -The component starts a new journal reader and fans out -log entries to the list of receivers passed in `forward_to`. +The component starts a new journal reader and fans out log entries to the list of receivers passed in `forward_to`. `loki.source.journal` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`format_as_json` | `bool` | Whether to forward the original journal entry as JSON. | `false` | no -`max_age` | `duration` | The oldest relative time from process start that will be read. | `"7h"` | no -`path` | `string` | Path to a directory to read entries from. | `""` | no -`matches` | `string` | Journal matches to filter. The `+` character is not supported, only logical AND matches will be added. | `""` | no -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no -`labels` | `map(string)` | The labels to apply to every log coming out of the journal. | `{}` | no +Name | Type | Description | Default | Required +-----------------|----------------------|--------------------------------------------------------------------------------------------------------|---------|--------- +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +`format_as_json` | `bool` | Whether to forward the original journal entry as JSON. | `false` | no +`labels` | `map(string)` | The labels to apply to every log coming out of the journal. | `{}` | no +`matches` | `string` | Journal matches to filter. The `+` character is not supported, only logical AND matches will be added. | `""` | no +`max_age` | `duration` | The oldest relative time from process start that will be read. | `"7h"` | no +`path` | `string` | Path to a directory to read entries from. 
| `""` | no +`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no -> **NOTE**: A `job` label is added with the full name of the component `loki.source.journal.LABEL`. +{{% admonition type="note" %}} +A `job` label is added with the full name of the component `loki.source.journal.LABEL`. +{{% /admonition %}} -When the `format_as_json` argument is true, log messages are passed through as -JSON with all of the original fields from the journal entry. Otherwise, the log -message is taken from the content of the `MESSAGE` field from the journal -entry. +When the `format_as_json` argument is true, log messages are passed through as JSON with all of the original fields from the journal entry. +Otherwise, the log message is taken from the content of the `MESSAGE` field from the journal entry. -When the `path` argument is empty, `/var/log/journal` and `/run/log/journal` -will be used for discovering journal entries. +When the `path` argument is empty, `/var/log/journal` and `/run/log/journal` will be used for discovering journal entries. -The `relabel_rules` argument can make use of the `rules` export value from a -[loki.relabel][] component to apply one or more relabeling rules to log entries -before they're forwarded to the list of receivers in `forward_to`. +The `relabel_rules` argument can make use of the `rules` export value from a [loki.relabel][] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. -All messages read from the journal include internal labels following the -pattern of `__journal_FIELDNAME` and will be dropped before sending to the list -of receivers specified in `forward_to`. To keep these labels, use the -`relabel_rules` argument and relabel them to not be prefixed with `__`. 
+All messages read from the journal include internal labels following the pattern of `__journal_FIELDNAME` and will be dropped before sending to the list of receivers specified in `forward_to`. +To keep these labels, use the `relabel_rules` argument and relabel them to not be prefixed with `__`. -> **NOTE**: many field names from journald start with an `_`, such as -> `_systemd_unit`. The final internal label name would be -> `__journal__systemd_unit`, with _two_ underscores between `__journal` and -> `systemd_unit`. +{{% admonition type="note" %}} +Many field names from journald start with an `_`, such as `_systemd_unit`. The final internal label name would be `__journal__systemd_unit`, with _two_ underscores between `__journal` and `systemd_unit`. +{{% /admonition %}} [loki.relabel]: {{< relref "./loki.relabel.md" >}} ## Component health -`loki.source.journal` is only reported as unhealthy if given an invalid -configuration. +`loki.source.journal` is only reported as unhealthy if given an invalid configuration. ## Debug Metrics -* `agent_loki_source_journal_target_parsing_errors_total` (counter): Total number of parsing errors while reading journal messages. * `agent_loki_source_journal_target_lines_total` (counter): Total number of successful journal lines read. +* `agent_loki_source_journal_target_parsing_errors_total` (counter): Total number of parsing errors while reading journal messages. ## Example diff --git a/docs/sources/flow/reference/components/loki.source.kafka.md b/docs/sources/flow/reference/components/loki.source.kafka.md index 4110177d7d09..c0528e085c68 100644 --- a/docs/sources/flow/reference/components/loki.source.kafka.md +++ b/docs/sources/flow/reference/components/loki.source.kafka.md @@ -11,19 +11,14 @@ title: loki.source.kafka # loki.source.kafka -`loki.source.kafka` reads messages from Kafka using a consumer group -and forwards them to other `loki.*` components. 
+`loki.source.kafka` reads messages from Kafka using a consumer group and forwards them to other `loki.*` components. -The component starts a new Kafka consumer group for the given arguments -and fans out incoming entries to the list of receivers in `forward_to`. +The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in `forward_to`. -Before using `loki.source.kafka`, Kafka should have at least one producer -writing events to at least one topic. Follow the steps in the -[Kafka Quick Start](https://kafka.apache.org/documentation/#quickstart) -to get started with Kafka. +Before using `loki.source.kafka`, Kafka should have at least one producer writing events to at least one topic. +Follow the steps in the [Kafka Quick Start](https://kafka.apache.org/documentation/#quickstart) to get started with Kafka. -Multiple `loki.source.kafka` components can be specified by giving them -different labels. +Multiple `loki.source.kafka` components can be specified by giving them different labels. ## Usage @@ -39,38 +34,35 @@ loki.source.kafka "LABEL" { `loki.source.kafka` supports the following arguments: - Name | Type | Description | Default | Required ---------------------------|----------------------|----------------------------------------------------------|-----------------------|---------- - `brokers` | `list(string)` | The list of brokers to connect to Kafka. | | yes - `topics` | `list(string)` | The list of Kafka topics to consume. | | yes - `group_id` | `string` | The Kafka consumer group id. | `"loki.source.kafka"` | no - `assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no - `version` | `string` | Kafka version to connect to. | `"2.2.1"` | no - `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Kafka. | `false` | no - `labels` | `map(string)` | The labels to associate with each received Kafka event. 
| `{}` | no - `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes - `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no +Name | Type | Description | Default | Required +-------------------------|----------------------|----------------------------------------------------------|-----------------------|--------- +`brokers` | `list(string)` | The list of brokers to connect to Kafka. | | yes +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +`topics` | `list(string)` | The list of Kafka topics to consume. | | yes +`assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no +`group_id` | `string` | The Kafka consumer group id. | `"loki.source.kafka"` | no +`labels` | `map(string)` | The labels to associate with each received Kafka event. | `{}` | no +`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no +`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Kafka. | `false` | no +`version` | `string` | Kafka version to connect to. | `"2.2.1"` | no `assignor` values can be either `"range"`, `"roundrobin"`, or `"sticky"`. Labels from the `labels` argument are applied to every message that the component reads. -The `relabel_rules` field can make use of the `rules` export value from a -[loki.relabel][] component to apply one or more relabeling rules to log entries -before they're forwarded to the list of receivers in `forward_to`. +The `relabel_rules` field can make use of the `rules` export value from a [loki.relabel][] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. 
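As a hedged sketch of wiring the two components together, the configuration below keeps the Kafka topic as a regular label by relabeling an internal label and passing the `rules` export to `relabel_rules`. The broker address and topic name are assumptions for the example:

```river
// Hypothetical sketch: promote the internal __meta_kafka_topic label to a
// regular "topic" label. An empty forward_to is valid when the component is
// only used for its rules export.
loki.relabel "kafka" {
  forward_to = []

  rule {
    source_labels = ["__meta_kafka_topic"]
    target_label  = "topic"
  }
}

loki.source.kafka "example" {
  brokers       = ["localhost:9092"]       // assumed broker address
  topics        = ["quickstart-events"]    // assumed topic name
  relabel_rules = loki.relabel.kafka.rules

  // Replace with real receivers, for example a loki.write component's receiver.
  forward_to = []
}
```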
In addition to custom labels, the following internal labels prefixed with `__` are available: +- `__meta_kafka_group_id` +- `__meta_kafka_member_id` - `__meta_kafka_message_key` - `__meta_kafka_message_offset` -- `__meta_kafka_topic` - `__meta_kafka_partition` -- `__meta_kafka_member_id` -- `__meta_kafka_group_id` +- `__meta_kafka_topic` -All labels starting with `__` are removed prior to forwarding log entries. To -keep these labels, relabel them using a [loki.relabel][] component and pass its -`rules` export to the `relabel_rules` argument. +All labels starting with `__` are removed prior to forwarding log entries. +To keep these labels, relabel them using a [loki.relabel][] component and pass its `rules` export to the `relabel_rules` argument. [loki.relabel]: {{< relref "./loki.relabel.md" >}} @@ -80,11 +72,11 @@ The following blocks are supported inside the definition of `loki.source.kafka`: Hierarchy | Name | Description | Required ---------------------------------------------|------------------|-----------------------------------------------------------|---------- - authentication | [authentication] | Optional authentication configuration with Kafka brokers. | no - authentication > tls_config | [tls_config] | Optional authentication configuration with Kafka brokers. | no - authentication > sasl_config | [sasl_config] | Optional authentication configuration with Kafka brokers. | no - authentication > sasl_config > tls_config | [tls_config] | Optional authentication configuration with Kafka brokers. | no - authentication > sasl_config > oauth_config | [oauth_config] | Optional authentication configuration with Kafka brokers. | no +authentication | [authentication] | Optional authentication configuration with Kafka brokers. | no +authentication > sasl_config | [sasl_config] | Optional authentication configuration with Kafka brokers. | no +authentication > sasl_config > oauth_config | [oauth_config] | Optional authentication configuration with Kafka brokers. 
| no +authentication > sasl_config > tls_config | [tls_config] | Optional authentication configuration with Kafka brokers. | no +authentication > tls_config | [tls_config] | Optional authentication configuration with Kafka brokers. | no [authentication]: #authentication-block