docs: Describe multi-tail plugin Fluent Bit setup (#608)
chrkl authored Dec 7, 2023
1 parent 6295927 commit 72d4cea
Showing 3 changed files with 15 additions and 581 deletions.
18 changes: 7 additions & 11 deletions docs/user/02-logs.md
@@ -30,9 +30,9 @@ Kyma's Telemetry module brings a predefined setup of the Fluent Bit DaemonSet an
 
 ![Pipeline Concept](./assets/logs-pipelines.drawio.svg)
 
-1. A central `tail` input plugin reads the application logs.
+1. A dedicated `tail` input plugin reads the application logs, which are selected in the input section of the `LogPipeline`. Each `tail` input uses a dedicated `tag` with the name `<logpipeline>.*`.
 
-2. The application logs are enriched by the `kubernetes` filter. Then, for every LogPipeline definition, a `rewrite_tag` filter is generated, which uses a dedicated `tag` with the name `<logpipeline>.*`, followed by the custom configuration defined in the LogPipeline resource. You can add your own filters to the default filters.
+2. The application logs are enriched by the `kubernetes` filter. You can add your own filters to the default filters.
 
 3. Based on the default and custom filters, you get the desired output for each `LogPipeline`.

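As context for the hunk above: the per-pipeline `tail` input reads only the workloads selected in the pipeline's input section. A minimal sketch of such a selection follows; the `input.application.namespaces` selector fields are assumptions based on the `telemetry.kyma-project.io/v1alpha1` API and are not part of this diff, and all concrete values are hypothetical.

```yaml
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: LogPipeline
metadata:
  name: http-backend               # the generated tail input tags records as http-backend.*
spec:
  input:
    application:                   # assumed selector structure
      namespaces:
        include:
          - my-namespace           # hypothetical: tail only logs from this namespace
  output:
    http:
      host:
        value: backend.example.com # hypothetical backend endpoint
```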
@@ -56,9 +56,9 @@ To ship application logs to a new output, create a resource of the kind `LogPipe
 
 ```yaml
 kind: LogPipeline
-apiVersion: telemetry.kyma-project.io/v1alpha1
-metadata:
-  name: http-backend
+apiVersion: telemetry.kyma-project.io/v1alpha1
+metadata:
+  name: http-backend
 spec:
   output:
     http:
@@ -326,7 +326,7 @@ You activated a LogPipeline and logs start streaming to your backend. To verify
 
 ## Log record processing
 
-After a log record has been read, it is preprocessed by centrally configured plugins, like the `kubernetes` filter. Thus, when a record is ready to be processed by the sections defined in the LogPipeline definition, it has several attributes available for processing and shipment.
+After a log record has been read, it is preprocessed by configured plugins, like the `kubernetes` filter. Thus, when a record is ready to be processed by the sections defined in the LogPipeline definition, it has several attributes available for processing and shipment.
 
 ![Flow](./assets/logs-flow.drawio.svg)

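Stage 2 in the next hunk describes the per-pipeline `tail` input and the `kubernetes` filter enrichment. As an illustration only, the configuration generated for one pipeline could be sketched in Fluent Bit's YAML config format roughly as follows; the tag, path, and option values are hypothetical, and the module's actual generated configuration may differ.

```yaml
pipeline:
  inputs:
    - name: tail
      tag: http-backend.*              # dedicated tag, prefixed with the pipeline name
      path: /var/log/containers/*.log  # log files managed by the container runtime
      storage.type: filesystem         # per-pipeline filesystem buffer
  filters:
    - name: kubernetes
      match: http-backend.*            # apply only to this pipeline's records
      merge_log: on                    # lift JSON found in the log field into the record
```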
@@ -347,7 +347,7 @@ In the example, we assume there's a container `myContainer` of Pod `myPod`, runn
 
 ### Stage 2: Tail input
 
-The central pipeline tails the log message from a log file managed by the container runtime. The file name contains the Namespace, Pod, and container information that will be available later as part of the [tag](https://docs.fluentbit.io/manual/concepts/key-concepts#tag). The resulting log record available in an internal Fluent Bit representation looks similar to the following example:
+The `tail` input plugin reads the log message from a log file managed by the container runtime. The input plugin brings a dedicated filesystem buffer for the pipeline. The file name contains the Namespace, Pod, and container information that will be available later as part of the [tag](https://docs.fluentbit.io/manual/concepts/key-concepts#tag). The tag is prefixed with the pipeline name. The resulting log record available in an internal Fluent Bit representation looks similar to the following example:
 
 ```json
 {
@@ -425,10 +425,6 @@ The record **after** applying the JSON parser:
 }
 ```
 
-### Stage 5: Rewrite tag
-
-As per the LogPipeline definition, a dedicated [rewrite_tag](https://docs.fluentbit.io/manual/pipeline/filters/rewrite-tag) filter is introduced. The filter brings a dedicated filesystem buffer for the outputs defined in the related pipeline, and with that, ensures a shipment of the logs isolated from outputs of other pipelines. As a consequence, each pipeline runs on its own [tag](https://docs.fluentbit.io/manual/concepts/key-concepts#tag).
-
 ## Operations
 
 A LogPipeline creates a DaemonSet running one Fluent Bit instance per Node in your cluster. That instance collects and ships application logs to the configured backend. The Telemetry module assures that the Fluent Bit instances are operational and healthy at any time. The Telemetry module delivers the data to the backend using typical patterns like buffering and retries (see [Limitations](#limitations)). However, there are scenarios where the instances will drop logs because the backend is either not reachable for some duration, or cannot handle the log load and is causing back pressure.
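For intuition on the buffering and retry behavior mentioned in the Operations paragraph, a Fluent Bit output exposes knobs like the following; this is a sketch in Fluent Bit's YAML config format with hypothetical values, not the module's actual generated settings.

```yaml
pipeline:
  outputs:
    - name: http
      match: http-backend.*          # ship only this pipeline's records
      host: backend.example.com      # hypothetical backend
      port: 8080
      storage.total_limit_size: 1G   # cap on the filesystem buffer; oldest chunks are dropped when full
      retry_limit: 300               # retry attempts before a chunk is discarded
```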