The Logging operator solves your logging-related problems in Kubernetes environments by automating the deployment and configuration of a Kubernetes logging pipeline.
The operator deploys and configures a log collector (currently a Fluent Bit DaemonSet) on every node to collect container and application logs from the node file system.
Fluent Bit queries the Kubernetes API and enriches the logs with metadata about the pods, and transfers both the logs and the metadata to a log forwarder instance.
The log forwarder instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. The Logging operator supports Fluentd and syslog-ng (via the AxoSyslog syslog-ng distribution) as log forwarders.
Your logs are always transferred on authenticated and encrypted channels.
This operator helps you bundle logging information with your applications: you can describe the behavior of your application in its charts, and the Logging operator does the rest.
Feature highlights
Namespace isolation
Native Kubernetes label selectors
Secure communication (TLS)
Configuration validation
Multiple flow support (multiply logs for different transformations)
Multiple output support (store the same logs in multiple storage backends: S3, GCS, ES, Loki and more…)
Multiple logging system support (multiple Fluentd, Fluent Bit deployment on the same cluster)
Support for both syslog-ng and Fluentd as the central log routing component
Architecture
The Logging operator manages the log collectors and log forwarders of your logging infrastructure, and the routing rules that specify where you want to send your different log messages.
The log collectors are endpoint agents that collect the logs of your Kubernetes nodes and send them to the log forwarders. Logging operator currently uses Fluent Bit as log collector agents.
The log forwarder (also called log aggregator) instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. The Logging operator supports Fluentd and syslog-ng as log forwarders. Which log forwarder is best for you depends on your logging requirements. For tips, see Which log forwarder to use.
You can filter and process the incoming log messages using the flow custom resource of the log forwarder to route them to the appropriate output. The outputs are the destinations where you want to send your log messages, for example, Elasticsearch, or an Amazon S3 bucket. You can also define cluster-wide outputs and flows, for example, to use a centralized output that namespaced users can reference but cannot modify. Note that flows and outputs are specific to the type of log forwarder you use (Fluentd or syslog-ng).
You can configure the Logging operator using the following Custom Resource Definitions.
logging - The logging resource defines the logging infrastructure (the log collectors and forwarders) for your cluster that collects and transports your log messages. It can also contain configurations for Fluent Bit, Fluentd, and syslog-ng. (Starting with Logging operator version 4.5, you can also configure Fluent Bit, Fluentd, and syslog-ng as separate resources.)
CRDs for Fluentd:
output - Defines a Fluentd Output for a logging flow, where the log messages are sent using Fluentd. This is a namespaced resource. See also clusteroutput. To configure syslog-ng outputs, see SyslogNGOutput.
flow - Defines a Fluentd logging flow using filters and outputs. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also clusterflow. To configure syslog-ng flows, see SyslogNGFlow.
clusteroutput - Defines a Fluentd output that is available from all flows and clusterflows. The operator evaluates clusteroutputs in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true.
clusterflow - Defines a Fluentd logging flow that collects logs from all namespaces by default. The operator evaluates clusterflows in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true. To configure syslog-ng clusterflows, see SyslogNGClusterFlow.
CRDs for syslog-ng (these resources work like their Fluentd counterparts, but are tailored to features available via syslog-ng):
SyslogNGOutput - Defines a syslog-ng Output for a logging flow, where the log messages are sent using syslog-ng. This is a namespaced resource. See also SyslogNGClusterOutput. To configure Fluentd outputs, see output.
SyslogNGFlow - Defines a syslog-ng logging flow using filters and outputs. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also SyslogNGClusterFlow. To configure Fluentd flows, see flow.
SyslogNGClusterOutput - Defines a syslog-ng output that is available from all SyslogNGFlows and SyslogNGClusterFlows. The operator evaluates SyslogNGClusterOutputs in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true.
SyslogNGClusterFlow - Defines a syslog-ng logging flow that collects logs from all namespaces by default. The operator evaluates SyslogNGClusterFlows in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true. To configure Fluentd clusterflows, see clusterflow.
For the detailed CRD documentation, see List of CRDs.
If you encounter problems while using the Logging operator the documentation does not address, open an issue or talk to us on Discord or on the CNCF Slack.
As a Fluent Bit restart can take a long time when there are many files to index, Logging operator now supports hot reload for Fluent Bit to reload its configuration on the fly.
You can enable hot reloads under the Logging’s spec.fluentbit.configHotReload (legacy method) option, or the new FluentbitAgent’s spec.configHotReload option:
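For example, a minimal sketch of the FluentbitAgent-based method (the resource name is illustrative, and the empty configHotReload block assumes the default reloader image and resources are acceptable):

apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: reload-example
spec:
  configHotReload: {}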
Many thanks to @zrobisho for contributing this feature!
Kubernetes namespace labels and annotations
Logging operator 4.6 supports the new Fluent Bit Kubernetes filter options that will be released in Fluent Bit 3.0. That way you’ll be able to enrich your logs with Kubernetes namespace labels and annotations right at the source of the log messages.
Fluent Bit 3.0 hasn’t been released yet (at the time of this writing), but you can use a developer image to test the feature, using a FluentbitAgent resource like this:
You can now specify the event tailer image to use in the logging-operator chart.
Fluent Bit can now automatically delete irrecoverable chunks.
The Fluentd statefulset and its components created by the Logging operator now include the whole securityContext object.
The Elasticsearch output of the syslog-ng aggregator now supports the template option.
To avoid problems that might occur when a tenant has a faulty output and backpressure kicks in, Logging operator now creates a dedicated tail input for each tenant.
Removed feature
We have removed support for Pod Security Policies (PSPs), which were deprecated in Kubernetes v1.21, and removed from Kubernetes in v1.25.
Note that the API was left intact; it just doesn't do anything.
Version 4.5
The following are the highlights and main changes of Logging operator 4.5. For a complete list of changes and bugfixes, see the Logging operator 4.5 releases page.
Standalone FluentdConfig and SyslogNGConfig CRDs
Starting with Logging operator version 4.5, you can either configure Fluentd in the Logging CR, or you can use a standalone FluentdConfig CR. Similarly, you can use a standalone SyslogNGConfig CRD to configure syslog-ng.
These standalone CRDs are namespaced resources that allow you to configure the Fluentd/syslog-ng aggregator in the control namespace, separately from the Logging resource. That way you can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
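For illustration, a minimal sketch of a standalone FluentdConfig in the control namespace (the resource name, namespace, and replica count are examples only; SyslogNGConfig follows the same pattern):

apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentdConfig
metadata:
  name: fluentd-config
  namespace: logging
spec:
  scaling:
    replicas: 2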
The following are the highlights and main changes of Logging operator 4.4. For a complete list of changes and bugfixes, see the Logging operator 4.4 releases page.
New syslog-ng features
When using syslog-ng as the log aggregator, you can now use the following new outputs:
On a side note, nodegroup level isolation for hard multitenancy is also supported, see the Nodegroup-based multitenancy example.
Forwarder logs
Fluent Bit now doesn't process the logs of the Fluentd and syslog-ng forwarders by default, to avoid infinitely growing message loops. With this change, you can access the Fluentd and syslog-ng logs simply by running kubectl logs <name-of-forwarder-pod>.
In a future Logging operator version the logs of the aggregators will also be available for routing to external outputs.
Timeout-based configuration checks
Timeout-based configuration checks are different from the normal method: they start a Fluentd or syslog-ng instance without the dry-run or syntax-check flags, so output plugins or destination drivers actually try to establish connections and will fail if there are any issues, for example, with the credentials.
For jobs/individual pods that run to completion, Istio sidecar injection needs to be disabled, otherwise the affected pods would live forever with the running sidecar container. Configuration checkers and Fluentd drainer pods can be configured with the label sidecar.istio.io/inject set to false. You can configure Fluentd drainer labels in the Logging spec.
Improved buffer metrics
The buffer metrics are now available for both the Fluentd and the SyslogNG based aggregators.
The sidecar configuration has been rewritten to add a new metric and improve performance by avoiding unnecessary cardinality.
The name of the metric has been changed as well, but the original metric was kept in place to avoid breaking existing clients.
Metrics currently supported by the sidecar
Old:

# HELP node_buffer_size_bytes Disk space used [deprecated]
# TYPE node_buffer_size_bytes gauge
node_buffer_size_bytes{entity="/buffers"} 32253
New:

# HELP logging_buffer_files File count
# TYPE logging_buffer_files gauge
logging_buffer_files{entity="/buffers",host="all-to-file-fluentd-0"} 2
# HELP logging_buffer_size_bytes Disk space used
# TYPE logging_buffer_size_bytes gauge
logging_buffer_size_bytes{entity="/buffers",host="all-to-file-fluentd-0"} 32253
Other improvements
You can now configure the resources of the buffer metrics sidecar.
You can now rerun failed configuration checks if there is no configcheck pod.
With the 4.3.0 release, the chart is now distributed through an OCI registry.
For instructions on how to interact with OCI registries, please take a look at Use OCI-based registries.
For instructions on installing the previous 4.2.3 version, see Installation for 4.2.
Deploy Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
Install the Logging operator into the logging namespace:
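A command along the following lines installs the chart from the OCI registry (pinning the chart version with --version is optional and omitted here):

helm upgrade --install --wait --create-namespace --namespace logging logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator

Expected output: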
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public. Upgrade to 3.13.1 or higher to avoid having to log in. For details, see https://github.com/kube-logging/logging-operator/issues/1522.
Note: By default, the Logging operator Helm chart doesn’t install the logging resource. If you want to install it with Helm, set the logging.enabled value to true.
For details on customizing the installation, see the Helm chart values.
Validate the deployment
To verify that the installation was successful, complete the following steps.
Check the status of the pods. You should see a new logging-operator pod.
kubectl -n logging get pods

Expected output:

NAME READY STATUS RESTARTS AGE
logging-operator-5df66b87c9-wgsdf 1/1 Running 0 21s
Check the CRDs. You should see the following new CRDs.
kubectl get crd

Expected output:

NAME CREATED AT
clusterflows.logging.banzaicloud.io 2023-08-10T12:05:04Z
clusteroutputs.logging.banzaicloud.io 2023-08-10T12:05:04Z
eventtailers.logging-extensions.banzaicloud.io 2023-08-10T12:05:04Z
flows.logging.banzaicloud.io 2023-08-10T12:05:04Z
fluentbitagents.logging.banzaicloud.io 2023-08-10T12:05:04Z
hosttailers.logging-extensions.banzaicloud.io 2023-08-10T12:05:04Z
loggings.logging.banzaicloud.io 2023-08-10T12:05:05Z
nodeagents.logging.banzaicloud.io 2023-08-10T12:05:05Z
outputs.logging.banzaicloud.io 2023-08-10T12:05:05Z
syslogngclusterflows.logging.banzaicloud.io 2023-08-10T12:05:05Z
syslogngclusteroutputs.logging.banzaicloud.io 2023-08-10T12:05:05Z
syslogngflows.logging.banzaicloud.io 2023-08-10T12:05:05Z
syslogngoutputs.logging.banzaicloud.io 2023-08-10T12:05:06Z
3 - Quick start guides
Try out the Logging operator with these quick start guides, which show you the basics of the Logging operator.
For other detailed examples using different outputs, see Examples.
3.1 - Single app, one destination
This guide shows you how to collect application and container logs in Kubernetes using the Logging operator.
The Logging operator itself doesn’t store any logs. For demonstration purposes, it can deploy a special workload to the cluster to let you observe the logs flowing through the system.
The Logging operator collects all logs from the cluster, selects the specific logs based on pod labels, and sends the selected log messages to the output.
For more details about the Logging operator, see the Logging operator overview.
Note: This example aims to be simple enough to understand the basic capabilities of the operator. For a production ready setup, see Logging infrastructure setup and Operation.
In this tutorial, you will:
Install the Logging operator on a cluster.
Configure the Logging operator to collect logs from a namespace and send them to a sample output.
Install a sample application (log-generator) to collect its logs.
Check the collected logs.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
This command installs the latest stable Logging operator and an extra workload (service and deployment). This workload is called logging-operator-test-receiver. It listens on an HTTP port, receives JSON messages, and writes them to the standard output (stdout) so that it is trivial to observe.
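A command along these lines installs the operator together with the test receiver (the testReceiver.enabled value name is an assumption about the chart's values; adjust it to your chart version if needed):

helm upgrade --install --wait --create-namespace --namespace logging logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator --set testReceiver.enabled=true

Expected output: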
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Tue Aug 15 15:58:41 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
After the installation, check that the following pods and services are running:
kubectl get deploy -n logging

Expected output:

NAME READY UP-TO-DATE AVAILABLE AGE
logging-operator 1/1 1 1 15m
logging-operator-test-receiver 1/1 1 1 15m

kubectl get svc -n logging

Expected output:

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
logging-operator ClusterIP None <none> 8080/TCP 15m
logging-operator-test-receiver ClusterIP 10.99.77.113 <none> 8080/TCP 15m
Configure the Logging operator
Create a Logging resource to deploy syslog-ng or Fluentd as the central log aggregator and forwarder. You can complete this quick start guide with any of them, but they have different features, so they are not equivalent. For details, see Which log forwarder to use.
Note: The control namespace is where the Logging operator deploys the forwarder’s resources, like the StatefulSet and the configuration secrets. Usually it’s called logging.
By default, this namespace is used to define the cluster-wide resources: SyslogNGClusterOutput, SyslogNGClusterFlow, ClusterOutput, and ClusterFlow. For details, see Configure log routing.
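For example, a minimal sketch of such a Logging resource using Fluentd (replace the fluentd block with syslogNG: {} to deploy syslog-ng instead; the name quickstart matches the rest of this guide):

kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: quickstart
spec:
  controlNamespace: logging
  fluentd: {}
EOF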
Expected output:
logging.logging.banzaicloud.io/quickstart created
Create a FluentbitAgent resource to collect logs from all containers. No special configuration is required for now.
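For example, a minimal sketch (the empty spec accepts the defaults):

kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: quickstart
spec: {}
EOF

Expected output: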
fluentbitagent.logging.banzaicloud.io/quickstart created
Check that the resources were created successfully so far. Run the following command:
kubectl get pod --namespace logging --selector app.kubernetes.io/managed-by=quickstart
You should already see a completed configcheck pod that validates the forwarder’s configuration before the actual statefulset starts. There should also be a running fluentbit instance per node that has already started to send all logs to the forwarder.
If you are using syslog-ng:

NAME READY STATUS RESTARTS AGE
quickstart-fluentbit-jvdp5 1/1 Running 0 3m5s
quickstart-syslog-ng-0 2/2 Running 0 3m5s
quickstart-syslog-ng-configcheck-8197c552 0/1 Completed 0 3m42s

If you are using Fluentd:

NAME READY STATUS RESTARTS AGE
quickstart-fluentbit-nk9ms 1/1 Running 0 19s
quickstart-fluentd-0 2/2 Running 0 19s
quickstart-fluentd-configcheck-ac2d4553 0/1 Completed 0 60s
Create a namespace (for example, quickstart) from where you want to collect the logs.
kubectl create namespace quickstart

Expected output:

namespace/quickstart created
Create a flow and an output resource in the same namespace (quickstart). The flow resource routes logs from the namespace to a specific output. In this example, the output is called http. The flow resources are called SyslogNGFlow and Flow, the output resources are SyslogNGOutput and Output for syslog-ng and Fluentd, respectively.
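For example, with Fluentd, a hedged sketch of the two resources might look like the following (the endpoint assumes the logging-operator-test-receiver service created earlier, and the buffer settings are illustrative; with syslog-ng, create a SyslogNGFlow and SyslogNGOutput with equivalent settings instead):

kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: http
  namespace: quickstart
spec:
  http:
    endpoint: http://logging-operator-test-receiver.logging:8080
    content_type: application/json
    buffer:
      type: memory
      flush_mode: interval
      flush_interval: 5s
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: log-generator
  namespace: quickstart
spec:
  match:
    - select:
        labels:
          app.kubernetes.io/name: log-generator
  localOutputRefs:
    - http
EOF

After creating the resources, listing them should produce output similar to the following (the first group shows the syslog-ng variant, the second the Fluentd variant):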
NAME AGE
fluentbitagent.logging.banzaicloud.io/quickstart 10m

NAME AGE
logging.logging.banzaicloud.io/quickstart 10m

NAME ACTIVE PROBLEMS
syslogngoutput.logging.banzaicloud.io/http true

NAME ACTIVE PROBLEMS
syslogngflow.logging.banzaicloud.io/log-generator true

NAME ACTIVE PROBLEMS
flow.logging.banzaicloud.io/log-generator true

NAME ACTIVE PROBLEMS
output.logging.banzaicloud.io/http true

NAME AGE
logging.logging.banzaicloud.io/quickstart 3m12s

NAME AGE
fluentbitagent.logging.banzaicloud.io/quickstart 3m2s
Install log-generator to produce logs with the label app.kubernetes.io/name: log-generator
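A command along these lines installs it from the OCI registry into the quickstart namespace:

helm upgrade --install --wait --create-namespace --namespace quickstart log-generator oci://ghcr.io/kube-logging/helm-charts/log-generator

Expected output: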
Release "log-generator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/log-generator:0.7.0
+Digest: sha256:0eba2c5c3adfc33deeec1d1612839cd1a0aa86f30022672ee022beab22436e04
+NAME: log-generator
+LAST DEPLOYED: Tue Aug 15 16:21:40 2023
+NAMESPACE: quickstart
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
The log-generator application starts to create HTTP access logs. Logging operator collects these log messages and sends them to the test-receiver pod defined in the output custom resource.
Check that the logs are delivered to the test-receiver pod output. First, run the following command to get the name of the test-receiver pod:
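A hedged sketch (the grep pattern assumes the pod name contains test-receiver, as in the deployment created earlier):

kubectl get pods -n logging -o name | grep test-receiver

Then follow the logs of that pod, for example through its deployment:

kubectl logs -n logging deployments/logging-operator-test-receiver -f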
The log messages include the usual information of the access logs, and also Kubernetes-specific information like the pod name, labels, and so on.
(Optional) If you want to retry this guide with the other log forwarder on the same cluster, run the following command to delete the forwarder-specific resources:
If you have completed this guide, you have made the following changes to your cluster:
Installed the Fluent Bit agent on every node of the cluster to collect the logs and the labels from the node.

Installed syslog-ng or Fluentd on the cluster to receive the logs from the Fluent Bit agents, filter, parse, and transform them as needed, and to route the incoming logs to an output. To learn more about routing and filtering, see Routing your logs with syslog-ng or Routing your logs with Fluentd match directives.

Created the following resources that configure the Logging operator and the components it manages:

Logging to configure the logging infrastructure, like the details of the Fluent Bit and the syslog-ng or Fluentd deployment. To learn more about configuring the logging infrastructure, see Logging infrastructure setup.
SyslogNGFlow or Flow that processes the incoming messages and routes them to the appropriate output. To learn more, see syslog-ng flows or Flow and ClusterFlow.

Installed a simple receiver to act as the destination of the logs, and configured the log forwarder to send the logs from the quickstart namespace to this destination.

Installed a log-generator application to generate sample log messages, and verified that the logs of this application arrive to the output.
Getting Support
If you encounter any problems that the documentation does not address, file an issue or talk to us on Discord or on the CNCF Slack.
Before asking for help, prepare the following information to make troubleshooting faster:
Logging operator version
Kubernetes version
Helm chart version (if you installed the Logging operator with Helm)
Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing.
4 - Configure log routing
You can configure the various features and parameters of the Logging operator using Custom Resource Definitions (CRDs).
The Logging operator manages the log collectors and log forwarders of your logging infrastructure, and the routing rules that specify where you want to send your different log messages.
The log collectors are endpoint agents that collect the logs of your Kubernetes nodes and send them to the log forwarders. Logging operator currently uses Fluent Bit as log collector agents.
The log forwarder (also called log aggregator) instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. The Logging operator supports Fluentd and syslog-ng as log forwarders. Which log forwarder is best for you depends on your logging requirements. For tips, see Which log forwarder to use.
You can filter and process the incoming log messages using the flow custom resource of the log forwarder to route them to the appropriate output. The outputs are the destinations where you want to send your log messages, for example, Elasticsearch, or an Amazon S3 bucket. You can also define cluster-wide outputs and flows, for example, to use a centralized output that namespaced users can reference but cannot modify. Note that flows and outputs are specific to the type of log forwarder you use (Fluentd or syslog-ng).
You can configure the Logging operator using the following Custom Resource Definitions.
logging - The logging resource defines the logging infrastructure (the log collectors and forwarders) for your cluster that collects and transports your log messages. It can also contain configurations for Fluent Bit, Fluentd, and syslog-ng. (Starting with Logging operator version 4.5, you can also configure Fluent Bit, Fluentd, and syslog-ng as separate resources.)
CRDs for Fluentd:
output - Defines a Fluentd Output for a logging flow, where the log messages are sent using Fluentd. This is a namespaced resource. See also clusteroutput. To configure syslog-ng outputs, see SyslogNGOutput.
flow - Defines a Fluentd logging flow using filters and outputs. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also clusterflow. To configure syslog-ng flows, see SyslogNGFlow.
clusteroutput - Defines a Fluentd output that is available from all flows and clusterflows. The operator evaluates clusteroutputs in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true.
clusterflow - Defines a Fluentd logging flow that collects logs from all namespaces by default. The operator evaluates clusterflows in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true. To configure syslog-ng clusterflows, see SyslogNGClusterFlow.
CRDs for syslog-ng (these resources work like their Fluentd counterparts, but are tailored to features available via syslog-ng):
SyslogNGOutput - Defines a syslog-ng Output for a logging flow, where the log messages are sent using syslog-ng. This is a namespaced resource. See also SyslogNGClusterOutput. To configure Fluentd outputs, see output.
SyslogNGFlow - Defines a syslog-ng logging flow using filters and outputs. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also SyslogNGClusterFlow. To configure Fluentd flows, see flow.
SyslogNGClusterOutput - Defines a syslog-ng output that is available from all SyslogNGFlows and SyslogNGClusterFlows. The operator evaluates SyslogNGClusterOutputs in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true.
SyslogNGClusterFlow - Defines a syslog-ng logging flow that collects logs from all namespaces by default. The operator evaluates SyslogNGClusterFlows in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true. To configure Fluentd clusterflows, see clusterflow.
The following sections show examples on configuring the various components to configure outputs and to filter and route your log messages to these outputs. For a list of available CRDs, see Custom Resource Definitions.
4.1 - Which log forwarder to use
The Logging operator supports Fluentd and syslog-ng (via the AxoSyslog syslog-ng distribution) as log forwarders. The log forwarder instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. Which one to use depends on your logging requirements.
The following points help you decide which forwarder to use.
The forwarders support different outputs. If the output you want to use is supported only by one forwarder, use that.
If the volume of incoming log messages is high, use syslog-ng, as its multithreaded processing provides higher performance.
If you have lots of logging flows or need complex routing or log message processing, use syslog-ng.
Note: Depending on which log forwarder you use, some of the CRDs you have to create and configure are different.
syslog-ng is supported only in Logging operator 4.0 or newer.
4.2 - Output and ClusterOutput
Outputs are the destinations where your log forwarder sends the log messages, for example, to Sumo Logic, or to a file. Depending on which log forwarder you use, you have to configure different custom resources.
Fluentd outputs
The Output resource defines an output where your Fluentd Flows can send the log messages. The output is a namespaced resource which means only a Flow within the same namespace can access it. You can use secrets in these definitions, but they must also be in the same namespace.
Outputs are the final stage for a logging flow. You can define multiple outputs and attach them to multiple flows.
ClusterOutput defines an Output without namespace restrictions. It is only evaluated in the controlNamespace by default unless allowClusterResourcesFromAllNamespaces is set to true.
Note: Flow can be connected to Output and ClusterOutput, but ClusterFlow can be attached only to ClusterOutput.
For the details of the supported output plugins, see Fluentd outputs.
For the details of Output custom resource, see OutputSpec.
For the details of ClusterOutput custom resource, see ClusterOutput.
Fluentd S3 output example
The following snippet defines an Amazon S3 bucket as an output.
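A hedged sketch (the bucket, region, path, and secret references are placeholders):

apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output
  namespace: default
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsSecretAccessKey
    s3_bucket: example-logging-bucket
    s3_region: eu-central-1
    path: logs/${tag}/%Y/%m/%d/
    buffer:
      timekey: 10m
      timekey_wait: 1m
      timekey_use_utc: true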
The SyslogNGOutput resource defines an output for syslog-ng where your SyslogNGFlows can send the log messages. The output is a namespaced resource which means only a SyslogNGFlow within the same namespace can access it. You can use secrets in these definitions, but they must also be in the same namespace.
Outputs are the final stage for a logging flow. You can define multiple SyslogNGOutputs and attach them to multiple SyslogNGFlows.
SyslogNGClusterOutput defines a SyslogNGOutput without namespace restrictions. It is only evaluated in the controlNamespace by default unless allowClusterResourcesFromAllNamespaces is set to true.
Note: SyslogNGFlow can be connected to SyslogNGOutput and SyslogNGClusterOutput, but SyslogNGClusterFlow can be attached only to SyslogNGClusterOutput.
RFC5424 syslog-ng output example
The following example defines a simple SyslogNGOutput resource that sends the logs to the specified syslog server using the RFC5424 Syslog protocol in a TLS-encrypted connection.
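A hedged sketch (the host, port, and secret reference are placeholders; the field names follow the syslog driver of the SyslogNGOutput CRD):

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: syslog-output
  namespace: default
spec:
  syslog:
    host: 10.12.34.56
    port: 601
    transport: tls
    tls:
      ca_file:
        mountFrom:
          secretKeyRef:
            name: syslog-tls-cert
            key: ca.crt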
For the details of the supported output plugins, see syslog-ng outputs.
For the details of SyslogNGOutput custom resource, see SyslogNGOutputSpec.
For the details of SyslogNGClusterOutput custom resource, see SyslogNGClusterOutput.
4.3 - Flow and ClusterFlow
Flows route the selected log messages to the specified outputs. Depending on which log forwarder you use, you can use different filters and outputs, and have to configure different custom resources.
Fluentd flows
Flow defines a logging flow for Fluentd with filters and outputs.
The Flow is a namespaced resource, so only logs from the same namespaces are collected. You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. (Match statements are evaluated in the order they are defined and processed only until the first matching select or exclude rule applies.) For detailed examples on using the match statement, see log routing.
You can define one or more filters within a Flow. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records.
The filters in the flow are applied in the order of the definition. You can find the list of supported filters here.
At the end of the Flow, you can attach one or more outputs, which may also be Output or ClusterOutput resources.
Flow resources are namespaced, and the selector only selects Pod logs within the namespace.
ClusterFlow defines a Flow without namespace restrictions. It is also only effective in the controlNamespace.
ClusterFlow selects logs from ALL namespaces.
The following example transforms the log messages from the default namespace and sends them to an S3 output.
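A hedged sketch, assuming an nginx workload in the default namespace and the s3-output Output from the previous section (the filter set and label are illustrative):

apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: flow-sample
  namespace: default
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        remove_key_name_field: true
        parse:
          type: nginx
  match:
    - select:
        labels:
          app: nginx
  localOutputRefs:
    - s3-output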
Note: In a multi-cluster setup you cannot easily determine which cluster the logs come from. You can append your own labels to each log using the record modifier filter.
For the details of Flow custom resource, see FlowSpec.
For the details of ClusterFlow custom resource, see ClusterFlow.
SyslogNGFlow defines a logging flow for syslog-ng with filters and outputs.
syslog-ng is supported only in Logging operator 4.0 or newer.
The Flow is a namespaced resource, so only logs from the same namespaces are collected. You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. For detailed examples on using the match statement, see log routing with syslog-ng.
You can define one or more filters within a Flow. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records.
The filters in the flow are applied in the order of the definition. You can find the list of supported filters here.
At the end of the Flow, you can attach one or more outputs, which may also be Output or ClusterOutput resources.
SyslogNGFlow resources are namespaced, and the selector only selects Pod logs within the namespace.
SyslogNGClusterFlow defines a SyslogNGFlow without namespace restrictions. It is also only effective in the controlNamespace.
SyslogNGClusterFlow selects logs from ALL namespaces.
The following example selects only messages sent by the log-generator application and forwards them to a syslog output.
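A hedged sketch (the label key, output name, and resource names are illustrative):

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGFlow
metadata:
  name: log-generator-flow
  namespace: default
spec:
  match:
    regexp:
      value: json.kubernetes.labels.app.kubernetes.io/name
      pattern: log-generator
      type: string
  localOutputRefs:
    - syslog-output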
4.4 - Routing your logs with Fluentd match directives
Note: This page describes routing logs with Fluentd. If you are using syslog-ng to route your log messages, see Routing your logs with syslog-ng.
The first step to process your logs is to select which logs go where. The Logging operator uses Kubernetes labels, namespaces, and other metadata to separate different log flows.
Available routing metadata keys:
Name | Type | Description | Empty
namespaces | []string | List of matching namespaces | All namespaces
labels | map[string]string | Key - Value pairs of labels | All labels
hosts | []string | List of matching hosts | All hosts
container_names | []string | List of matching containers (not Pods) | All containers
Match statement
To select or exclude logs you can use the match statement. Match is a collection of select and exclude expressions. In both expressions you can use the labels attribute to filter for the pods’ labels. Moreover, in a ClusterFlow you can use namespaces as a selecting or excluding criterion.
If you specify more than one label in a select or exclude expression, the labels have a logical AND connection between them. For example, an exclude expression with two labels excludes messages that have both labels. If you want an OR connection between labels, list them in separate expressions. For example, to exclude messages that have one of two specified labels, create a separate exclude expression for each label.
The select and exclude statements are evaluated in order!
Without at least one select criteria, no messages will be selected!
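For illustration, a hedged sketch of a Flow match section (the label keys and values are examples only). The first variant excludes only messages that carry both labels; the second variant uses separate exclude expressions, so either label alone excludes the message:

  match:
    - exclude:
        labels:
          app: nginx
          tier: frontend
    - select: {}

  match:
    - exclude:
        labels:
          app: nginx
    - exclude:
        labels:
          tier: frontend
    - select: {}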
syslog-ng is supported only in Logging operator 4.0 or newer.
The first step to process your logs is to select which logs go where.
The match field of the SyslogNGFlow and SyslogNGClusterFlow resources define the routing rules of the logs.
Note: Fluentd can use only metadata to route the logs. When using syslog-ng filter expressions, you can filter on both metadata and log content.
The syntax of syslog-ng match statements is slightly different from the Fluentd match statements.
Available routing metadata keys:
Name | Type | Description | Empty
namespaces | []string | List of matching namespaces | All namespaces
labels | map[string]string | Key - Value pairs of labels | All labels
hosts | []string | List of matching hosts | All hosts
container_names | []string | List of matching containers (not Pods) | All containers
Match statement
Match expressions select messages by applying patterns on the content or metadata of the messages. You can use simple string matching, and also complex regular expressions. You can combine matches using the and, or, and not boolean operators to create complex expressions to select or exclude messages as needed for your use case.
Currently, only a pattern matching function is supported (called match in syslog-ng parlance, but renamed to regexp in the CRD to avoid confusion).
The match field can have one of the following options:
regexp: A pattern that matches the value of a field or a templated value. For example:
match:
  regexp: <parameters>

and: Combines the nested match expressions with the logical AND operator.
match:
  and: <list of nested match expressions>

or: Combines the nested match expressions with the logical OR operator.
match:
  or: <list of nested match expressions>

not: Matches the logical NOT of the nested match expressions with the logical AND operator.
match:
  not: <list of nested match expressions>
regexp patterns
The regexp field (called match in syslog-ng parlance, but renamed to regexp in the CRD to avoid confusion) defines the pattern that selects the matching messages. You can do two different kinds of matching:
Find a pattern in the value of a field of the messages, for example, to select the messages of a specific application. To do that, set the pattern and value fields (and optionally the type and flags fields).
Find a pattern in a template expression created from multiple fields of the message. To do that, set the pattern and template fields (and optionally the type and flags fields).
CAUTION:
You need to use the json. prefix in field names.
You can reference fields using the dot notation. For example, if the log contains {"kubernetes": {"namespace_name": "default"}}, then you can reference the namespace_name field using json.kubernetes.namespace_name.
The following example filters for specific Pod labels:
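A hedged sketch of such a match section (the label keys and the pattern are examples only):

  match:
    and:
      - regexp:
          value: json.kubernetes.labels.app.kubernetes.io/instance
          pattern: log-generator
          type: string
      - regexp:
          value: json.kubernetes.labels.app.kubernetes.io/name
          pattern: log-generator
          type: string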
The regexp field can have the following parameters:
pattern (string)
Defines the pattern to match against the messages. The type field determines how the pattern is interpreted (for example, string or regular expression).
value (string)
References a field of the message. The pattern is applied to the value of this field. If the value field is set, you cannot use the template field.
CAUTION:
You need to use the json. prefix in field names.
You can reference fields using the dot notation. For example, if the log contains {"kubernetes": {"namespace_name": "default"}}, then you can reference the namespace_name field using json.kubernetes.namespace_name.
template (string)
Specifies a template expression that combines fields. The pattern is matched against the value of these combined fields. If the template field is set, you cannot use the value field. For details on template expressions, see the syslog-ng documentation.
type (string)
Specifies how the pattern is interpreted. For details, see Types of regexp.
flags (list)
Specifies flags for the type field.
regexp types
By default, syslog-ng uses PCRE-style regular expressions. Since evaluating complex regular expressions can greatly increase CPU usage and is not always needed, you can use the following expression types:
pcre
Description: Use Perl Compatible Regular Expressions (PCRE). If the type() parameter is not specified, syslog-ng uses PCRE regular expressions by default.
pcre flags
PCRE regular expressions have the following flag options:
global: Usable only in rewrite rules: match for every occurrence of the expression, not only the first one.
ignore-case: Disable case-sensitivity.
newline: When configured, it changes the newline definition used in PCRE regular expressions to accept either of the following:
a single carriage-return
linefeed
the sequence carriage-return and linefeed (\r, \n and \r\n, respectively)
This newline definition is used when the circumflex and dollar patterns (^ and $) are matched against an input. By default, PCRE interprets the linefeed character as indicating the end of a line. It does not affect the \r, \n or \R characters used in patterns.
store-matches: Store the matches of the regular expression into the $0, … $255 variables. The $0 stores the entire match, $1 is the first group of the match (parentheses), and so on. Named matches (also called named subpatterns), for example (?<name>...), are stored as well. Matches from the last filter expression can be referenced in regular expressions.
unicode: Use Unicode support for UTF-8 matches. UTF-8 character sequences are handled as single characters.
string
Description: Match the strings literally, without regular expression support. By default, only identical strings are matched. For partial matches, use the flags: prefix or flags: substring flags. For example, consider the following patterns.
The second matches labels beginning with log-generator, for example, log-generator-1.
The third one matches labels that contain the log-generator string, for example, my-log-generator.
string flags
Literal string searches have the following flags() options:
global: Usable only in rewrite rules, match for every occurrence of the expression, not only the first one.
ignore-case: Disables case-sensitivity.
prefix: During the matching process, patterns (also called search expressions) are matched against the input string starting from the beginning of the input string, and the input string is matched only for the maximum character length of the pattern. The initial characters of the pattern and the input string must be identical in the exact same order, and the pattern’s length is definitive for the matching process (that is, if the pattern is longer than the input string, the match will fail).
For example, for the input string exam:
the following patterns will match:
ex (the pattern contains the initial characters of the input string in the exact same order)
exam (the pattern is an exact match for the input string)
the following patterns will not match:
example (the pattern is longer than the input string)
hexameter (the pattern’s initial characters do not match the input string’s characters in the exact same order, and the pattern is longer than the input string)
store-matches: Stores the matches of the regular expression into the $0, … $255 variables. The $0 stores the entire match, $1 is the first group of the match (parentheses), and so on. Named matches (also called named subpatterns), for example, (?<name>...), are stored as well. Matches from the last filter expression can be referenced in regular expressions.
NOTE: To convert match variables into a syslog-ng list, use the $* macro, which can be further manipulated using List manipulation, or turned into a list in type-aware destinations.
substring: The given literal string will match when the pattern is found within the input. Unlike flags: prefix, the pattern does not have to be identical with the given literal string.
glob
Description: Match the strings against a pattern containing ‘*’ and ‘?’ wildcards, without regular expression and character range support. The advantage of glob patterns over regular expressions is that globs can be processed much faster.
*: matches an arbitrary string, including an empty string
?: matches an arbitrary character
NOTE:
The wildcards can match the / character.
You cannot use the * and ? characters literally in the pattern.
Glob patterns cannot have any flags.
Examples
Select all logs
To select all logs, or if you only want to exclude some logs but retain others, you need an empty select statement.
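Assuming the Fluentd-style match field of a Flow or ClusterFlow, such an empty select statement looks like this:

  match:
    - select: {}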
The Logging extensions part of the Logging operator solves the following problems:
Collect Kubernetes events to provide insight into what is happening inside a cluster, such as decisions made by the scheduler, or why some pods were evicted from the node.
Collect logs from the nodes like kubelet logs.
Collect logs from files on the nodes, for example, audit logs, or the systemd journal.
Collect logs from legacy application log files.
Starting with Logging operator version 3.17.0, logging-extensions are open source and part of Logging operator.
Features
The Logging operator handles the new features in the well-known way: it uses custom resources to access the features. This way, a simple kubectl apply with a particular parameter set initiates a new feature. Extensions support three different custom resource types:
Event-tailer listens for Kubernetes events and transmits their changes to stdout, so the Logging operator can process them.
Host-tailer tails custom files and transmits their changes to stdout. This way the Logging operator can process them. Kubernetes host tailer allows you to tail logs like kubelet, audit logs, or the systemd journal from the nodes.
Tailer-webhook is a different approach for the same problem: parsing a legacy application’s log file. Instead of running a host-tailer instance on every node, tailer-webhook attaches a sidecar container to the pod, and reads the specified file(s).
Kubernetes events are objects that provide insight into what is happening inside a cluster, such as what decisions were made by the scheduler or why some pods were evicted from the node. Event tailer listens for Kubernetes events and transmits their changes to stdout, so the Logging operator can process them.
The operator handles this CR and generates the following required resources:
ServiceAccount: new account for event-tailer
ClusterRole: sets the event-tailer's roles
ClusterRoleBinding: links the account with the roles
ConfigMap: contains the configuration for the event-tailer pod
StatefulSet: manages the lifecycle of the event-tailer pod, which uses the banzaicloud/eventrouter:v0.1.0 image to tail events
Create event tailer
The simplest way to init an event-tailer is to create a new event-tailer resource with a name and controlNamespace field specified. The following command creates an event tailer called sample:
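A minimal sketch of such a resource (the controlNamespace value is an example):

kubectl apply -f - <<EOF
apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: EventTailer
metadata:
  name: sample
spec:
  controlNamespace: default
EOF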
Check that the new object has been created by running:
kubectl get eventtailer

Expected output:

NAME AGE
sample 22m
You can see the events in JSON format by checking the log of the event-tailer pod. This way Logging operator can collect the events, and handle them as any other log. Run:
kubectl logs -l app.kubernetes.io/instance=sample-event-tailer | head -1 | jq
Once you have an event-tailer, you can bind your events to a specific logging flow. The following example configures a flow to route the previously created sample-eventtailer to the sample-output.
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: eventtailer-flow
  namespace: default
spec:
  filters:
    - tag_normaliser: {}
  match:
    # keeps data matching to label, the rest of the data will be discarded by this flow implicitly
    - select:
        labels:
          app.kubernetes.io/name: sample-event-tailer
  outputRefs:
    - sample-output
EOF
Delete event tailer
To remove an unwanted tailer, delete the related event-tailer custom resource. This terminates the event-tailer pod. For example, run the following command to delete the event tailer called sample:
kubectl delete eventtailer sample && kubectl get pod

Expected output:

eventtailer.logging-extensions.banzaicloud.io "sample" deleted
NAME READY STATUS RESTARTS AGE
sample-event-tailer-0 1/1 Terminating 0 12s
Persist event logs
Event-tailer supports persist mode. In this case, the logs generated from events are stored on a persistent volume. Add the following configuration to your event-tailer spec. In this example, the event tailer is called sample:
The Logging operator manages the persistent volume of the event-tailer automatically; you don’t have any further tasks with it. To check that the persistent volume has been created, run:
kubectl get pvc && kubectl get pv

The output should be similar to:

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sample-event-tailer-sample-event-tailer-0 Bound pvc-6af02cb2-3a62-4d24-8201-dc749034651e 1Gi RWO standard 43s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-6af02cb2-3a62-4d24-8201-dc749034651e 1Gi RWO Delete Bound default/sample-event-tailer-sample-event-tailer-0 standard 42s
When an application (mostly legacy programs) is not logging in a Kubernetes-native way, Logging operator cannot process its logs. (For example, an old application does not send its logs to stdout, but uses some log files instead.) File-tailer helps to solve this problem: It configures Fluent Bit to tail the given file(s), and sends the logs to the stdout, to implement Kubernetes-native logging.
However, file-tailer cannot access the pod’s local dir, so the logfiles need to be written on a mounted volume.
Let’s assume the following code represents a legacy application that generates logs into the /legacy-logs/date.log file. While the legacy-logs directory is mounted, it’s accessible from other pods by mounting the same volume.
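A hedged sketch of such a workload (the image and the hostPath volume are assumptions; the point is that the log file ends up on a volume that the host tailer can also reach):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: sample-container
      image: debian:stable-slim
      command: ["/bin/sh", "-c"]
      args:
        - while true; do date >> /legacy-logs/date.log; sleep 1; done
      volumeMounts:
        - name: legacy-logs
          mountPath: /legacy-logs
  volumes:
    - name: legacy-logs
      hostPath:
        path: /legacy-logs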
The Logging operator configures the environment and starts a file-tailer pod, as in the sketch below. It can also handle multi-node clusters, since it starts the host-tailer pods through a DaemonSet.
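A minimal HostTailer sketch for the file above (the resource name matches the pod shown below; the fileTailers layout follows the logging-extensions CRD):

kubectl apply -f - <<EOF
apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: HostTailer
metadata:
  name: file-hosttailer-sample
spec:
  fileTailers:
    - name: sample-logfile
      path: /legacy-logs/date.log
EOF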
Check the created file tailer pod:
kubectl get pod

The output should be similar to:

NAME READY STATUS RESTARTS AGE
file-hosttailer-sample-host-tailer-5tqhv 1/1 Running 0 117s
test-pod 1/1 Running 0 5m40s
Check the logs of the file-tailer pod. You will see the logfile’s content on stdout. This way the Logging operator can process those logs as well.
Filter to select systemd unit. Example: kubelet.service

Variable Name | Type | Required | Default | Description
maxEntries | int | No | - | Maximum entries to read when starting to tail logs to avoid high pressure
containerOverrides | *types.ContainerBase | No | - | Override container fields for the given tailer
Example: Configure logging Flow to route logs from a host tailer
The following example uses the flow’s match term to listen to the logs of the previously created file-hosttailer-sample host tailer.
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: hosttailer-flow
  namespace: default
spec:
  filters:
    - tag_normaliser: {}
  # keeps data matching to label, the rest of the data will be discarded by this flow implicitly
  match:
    - select:
        labels:
          app.kubernetes.io/name: file-hosttailer-sample
        # there might be a need to match on container name too (in case of multiple containers)
        container_names:
          - nginx-access
  outputRefs:
    - sample-output
EOF
Example: Kubernetes host tailer with multiple tailers
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resource in the future) in case there is a change in an immutable field that otherwise couldn’t be managed with a simple update.
Variable Name | Type | Required | Default | Description
workloadMetaOverrides | *types.MetaBase | No | - | Override metadata of the created resources
workloadOverrides | *types.PodSpecBase | No | - | Override podSpec fields for the given daemonset
Advanced configuration overrides
MetaBase
Variable Name | Type | Required | Default | Description
annotations | map[string]string | No | - |
labels | map[string]string | No | - |
PodSpecBase
Variable Name | Type | Required | Default | Description
tolerations | []corev1.Toleration | No | - |
nodeSelector | map[string]string | No | - |
serviceAccountName | string | No | - |
affinity | *corev1.Affinity | No | - |
securityContext | *corev1.PodSecurityContext | No | - |
volumes | []corev1.Volume | No | - |
priorityClassName | string | No | - |
ContainerBase
Variable Name | Type | Required | Default | Description
resources | *corev1.ResourceRequirements | No | - |
image | string | No | - |
pullPolicy | corev1.PullPolicy | No | - |
command | []string | No | - |
volumeMounts | []corev1.VolumeMount | No | - |
securityContext | *corev1.SecurityContext | No | - |
4.6.3 - Tail logfiles with a webhook
The tailer-webhook is a different approach for the same problem: parsing a legacy application’s log file. As an alternative to using a host file tailer service, you can use a file tailer webhook service.
While the containers of the host file tailers run in a separate pod, the file tailer webhook uses a different approach: if a pod has a specific annotation, the webhook injects a sidecar container for every tailed file into the pod.
The tailer-webhook behaves differently compared to the host-tailer:
Pros:
A simple annotation on the pod initiates the file tailing.
There is no need to use mounted volumes, Logging operator will manage the volumes and mounts between your containers.
Cons:
You have to start the Logging operator with the webhook service enabled. This requires additional configuration, especially for certificates, since webhook services are allowed over TLS only.
Possibly uses more resources, since every tailed file attaches a new sidecar container to the pod.
Enable webhooks in Logging operator
We recommend using cert-manager to manage your certificates. Below is a really simple command that generates the required resources for the tailer-webhook.
Alternatively, instead of using the values.yaml file, you can run the installation from command line also by passing the values with the set and set-string parameters:
You also need a service that points to the webhook port (9443) of the Logging operator, and that the mutatingwebhookconfiguration will point to. Running the following command in a shell creates the required service:
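A hedged sketch of such a service (the selector labels are assumptions and must match the labels of your Logging operator deployment):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: logging-webhooks
  namespace: logging
spec:
  selector:
    app.kubernetes.io/name: logging-operator
  ports:
    - name: webhooks
      port: 443
      targetPort: 9443
EOF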
Furthermore, you need to tell Kubernetes to send admission requests to our webhook service. To do that, create a mutatingwebhookconfiguration Kubernetes resource, and:
Set the configuration to call /tailer-webhook path on your logging-webhooks service when v1.Pod is created.
Set failurePolicy to ignore, which means that the original pod will be created on webhook errors.
Set sideEffects to none, because we won’t cause any out-of-band changes in Kubernetes.
Unfortunately, mutatingwebhookconfiguration requires the caBundle field to be filled because we used a self-signed certificate, and the certificate cannot be validated through the system trust roots. If your certificate was generated with a system trust root CA, remove the caBundle line, because the certificate will be validated automatically.
There are more sophisticated ways to load the CA into this field, but this solution requires no further components.
For example: you can inject the CA with a simple cert-manager cert-manager.io/inject-ca-from: logging/webhook-tls annotation on the mutatingwebhookconfiguration resource.
Note: If the pod with the sidecar annotation is in the default namespace, Logging operator handles tailer-webhook annotations clusterwide. To restrict the webhook callbacks to the current namespace, change the scope of the mutatingwebhookconfiguration to namespaced.
File tailer example
The following example creates a pod that is running a shell in infinite loop that appends the date command’s output to a file every second. The annotation sidecar.logging-extensions.banzaicloud.io/tail notifies Logging operator to attach a sidecar container to the pod. The sidecar tails the /var/log/date file and sends its output to the stdout.
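A hedged sketch of such a pod (the image is a placeholder, and the tailed path /legacy-logs/date.log is assumed so that it matches the sidecar container name referenced below):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  annotations:
    sidecar.logging-extensions.banzaicloud.io/tail: /legacy-logs/date.log
spec:
  containers:
    - name: sample-container
      image: debian:stable-slim
      command: ["/bin/sh", "-c"]
      args:
        - while true; do date >> /legacy-logs/date.log; sleep 1; done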
After you have created the pod with the required annotation, make sure that the test-pod contains two containers by running kubectl get pod
Expected output:
NAME READY STATUS RESTARTS AGE
test-pod 2/2 Running 0 29m
Check the container names in the pod to see that the Logging operator has created the sidecar container called legacy-logs-date-log. The sidecar container’s name is always built from the path and name of the tailed file. Run the following command:
kubectl get pod test-pod -o json | jq '.spec.containers | map(.name)'
Check the logs of the test container. Since it writes the logs into a file, it does not produce any logs on stdout.
kubectl logs test-pod sample-container; echo $?
Expected output:
0
Check the logs of the legacy-logs-date-log container. This container exposes the logs of the test container on its stdout.
kubectl logs test-pod legacy-logs-date-log
Expected output:
Fluent Bit v1.9.5
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2022/09/15 11:26:11] [ info] [fluent bit] version=1.9.5, commit=9ec43447b6, pid=1
[2022/09/15 11:26:11] [ info] [storage] version=1.2.0, type=memory-only, sync=normal, checksum=disabled, max_chunks_up=128
[2022/09/15 11:26:11] [ info] [cmetrics] version=0.3.4
[2022/09/15 11:26:11] [ info] [sp] stream processor started
[2022/09/15 11:26:11] [ info] [input:tail:tail.0] inotify_fs_add(): inode=938627 watch_fd=1 name=/legacy-logs/date.log
[2022/09/15 11:26:11] [ info] [output:file:file.0] worker #0 started
Thu Sep 15 11:26:11 UTC 2022
Thu Sep 15 11:26:12 UTC 2022
...
Multi-container pods
In some cases you have multiple containers in your pod and you want to distinguish which file annotation belongs to which container. You can assign each file annotation to a particular container by prefixing the annotation with a ${ContainerName}: container key. For example:
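A hedged sketch of such an annotation (container names and paths are placeholders):

metadata:
  annotations:
    sidecar.logging-extensions.banzaicloud.io/tail: "sample-container:/legacy-logs/date.log,sample-container2:/legacy-logs/other.log"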
Global resources: ClusterFlow, ClusterOutput, SyslogNGClusterFlow, SyslogNGClusterOutput
The namespaced resources are only effective in their own namespace. Global resources are cluster wide.
You can create ClusterFlow, ClusterOutput, SyslogNGClusterFlow, and SyslogNGClusterOutput resources only in the controlNamespace, unless the allowClusterResourcesFromAllNamespaces option is enabled in the logging resource. This namespace MUST be a protected namespace so that only administrators can access it.
Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don’t set a value that is too small (say, 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. (default: 24576)
Default: 24576
customConfigSecret (string, optional)
customParsers (string, optional)
Available in Logging operator version 4.2 and later. Specify a custom parser file to load in addition to the default parsers file. It must be a valid key in the configmap specified by customConfig.
The following example defines a Fluent Bit parser that places the parsed containerd log messages into the log field instead of the message field.
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: containerd
spec:
  inputTail:
    Parser: cri-log-key
  # Parser that populates `log` instead of `message` to enable the Kubernetes filter's Merge_Log feature to work
  # Mind the indentation, otherwise Fluent Bit will parse the whole message into the `log` key
  customParsers: |
    [PARSER]
        Name cri-log-key
        Format regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<log>.*)$
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
  # Required key remap if one wants to rely on the existing auto-detected log key in the fluentd parser and concat filter, otherwise it should be omitted
  filterModify:
    - rules:
        - Rename:
            key: log
            value: message
+
flush (int32, optional)
Set the flush time in seconds.nanoseconds format. The engine loop uses a Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins. (default: 1)
Default: 1
forwardOptions (*ForwardOptions, optional)
grace (int32, optional)
Set the grace time in seconds as Integer value. The engine loop uses a Grace timeout to define wait time on exit.
Default: 5
healthCheck (*HealthCheck, optional)
Available in Logging operator version 4.4 and later.
HostNetwork (bool, optional)
image (ImageSpec, optional)
inputTail (InputTail, optional)
labels (map[string]string, optional)
livenessDefaultCheck (bool, optional)
livenessProbe (*corev1.Probe, optional)
logLevel (string, optional)
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if ‘debug’ is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.
FluentbitStatus defines the resource status for FluentbitAgent
FluentbitTLS
FluentbitTLS defines the TLS configs
enabled (*bool, required)
secretName (string, optional)
sharedKey (string, optional)
FluentbitTCPOutput
FluentbitTCPOutput defines the Fluent Bit TCP output configuration
json_date_format (string, optional)
Default: iso8601
json_date_key (string, optional)
Default: ts
Workers (*int, optional)
Available in Logging operator version 4.4 and later.
FluentbitNetwork
FluentbitNetwork defines network configuration for fluentbit
connectTimeout (*uint32, optional)
Sets the timeout for connecting to an upstream
Default: 10
connectTimeoutLogError (*bool, optional)
On connection timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message
Default: true
dnsMode (string, optional)
Sets the primary transport layer protocol used by the asynchronous DNS resolver for connections established
Default: UDP, UDP or TCP
dnsPreferIpv4 (*bool, optional)
Prioritize IPv4 DNS results when trying to establish a connection
Default: false
dnsResolver (string, optional)
Select the primary DNS resolver type
Default: ASYNC, LEGACY or ASYNC
keepalive (*bool, optional)
Whether or not TCP keepalive is used for the upstream connection
Default: true
keepaliveIdleTimeout (*uint32, optional)
How long in seconds a TCP keepalive connection can be idle before being recycled
Default: 30
keepaliveMaxRecycle (*uint32, optional)
How many times a TCP keepalive connection can be used before being recycled
Default: 0, disabled
sourceAddress (string, optional)
Specify network address (interface) to use for connection and data traffic.
Default: disabled
BufferStorage
BufferStorage is the Service Section Configuration of fluent-bit
storage.backlog.mem_limit (string, optional)
If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer, these are called backlog data. This option configures a hint for the maximum amount of memory to use when processing these records.
Default: 5M
storage.checksum (string, optional)
Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm.
When enabled, irrecoverable chunks will be deleted during runtime, and any other irrecoverable chunk located in the configured storage path directory will be deleted when Fluent Bit starts.
Default: Off
storage.metrics (string, optional)
Available in Logging operator version 4.4 and later. If the http_server option has been enabled in the main Service configuration section, this option registers a new endpoint where internal metrics of the storage layer can be consumed.
Default: Off
storage.path (string, optional)
Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.
storage.sync (string, optional)
Configure the synchronization mode used to store the data into the file system. It can take the values normal or full.
Default: normal
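To show how these options fit together, here is a hedged sketch of filesystem buffering on a FluentbitAgent (assuming the spec's bufferStorage and inputTail fields; the path and limits are illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: filesystem-buffering
spec:
  bufferStorage:
    storage.path: /buffers              # enable filesystem buffering
    storage.backlog.mem_limit: 5M       # memory hint for replaying undelivered chunks
    storage.metrics: "On"               # expose storage metrics if the HTTP server is enabled
  inputTail:
    storage.type: filesystem            # make the tail input use the filesystem buffer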
HealthCheck
HealthCheck configuration. Available in Logging operator version 4.4 and later.
hcErrorsCount (int, optional)
The error count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period.
Default: 5
hcPeriod (int, optional)
The time period (in seconds) to count the error and retry failure data point.
Default: 60
hcRetryFailureCount (int, optional)
The retry failure count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period
Default: 5
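A minimal sketch of enabling the health check on a FluentbitAgent (the thresholds repeat the documented defaults, the resource name is illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: with-healthcheck
spec:
  healthCheck:
    hcErrorsCount: 5          # errors tolerated per HC period
    hcRetryFailureCount: 5    # retry failures tolerated per HC period
    hcPeriod: 60              # evaluation window in seconds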
HotReload
HotReload configuration
image (ImageSpec, optional)
resources (corev1.ResourceRequirements, optional)
InputTail
InputTail defines the FluentbitAgent tail input configuration. The tail input plugin allows you to monitor one or several text files. Its behavior is similar to the tail -f shell command.
Buffer_Chunk_Size (string, optional)
Set the initial buffer size used to read file data. The value must be according to the Unit Size specification.
Default: 32k
Buffer_Max_Size (string, optional)
Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g: very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must be according to the Unit Size specification.
Default: Buffer_Chunk_Size
DB (*string, optional)
Specify the database file to keep track of monitored files and offsets.
DB.journal_mode (string, optional)
sets the journal mode for databases (WAL). Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems.
Default: WAL
DB.locking (*bool, optional)
Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database, but it restricts any external tool from querying the content.
Default: true
DB_Sync (string, optional)
Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk. For more details about each option, refer to the SQLite documentation.
Default: Full
Docker_Mode (string, optional)
If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.
Default: Off
Docker_Mode_Flush (string, optional)
Wait period time in seconds to flush queued unfinished split lines.
Default: 4
Docker_Mode_Parser (string, optional)
Specify an optional parser for the first line of the docker multiline mode.
Exclude_Path (string, optional)
Set one or multiple shell patterns separated by commas to exclude files matching certain criteria, for example: exclude_path=*.gz,*.zip
Ignore_Older (string, optional)
Ignores files that have been last modified before this time in seconds. Supports m,h,d (minutes, hours,days) syntax. Default behavior is to read all specified files.
Key (string, optional)
When a message is unstructured (no parser applied), it’s appended as a string under the key name log. This option allows to define an alternative name for that key.
Default: log
Mem_Buf_Limit (string, optional)
Set a limit of memory that the Tail plugin can use when appending data to the Engine. If the limit is reached, ingestion is paused; when the data is flushed, it resumes.
Multiline (string, optional)
If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.
Default: Off
Multiline_Flush (string, optional)
Wait period time in seconds to process queued multiline messages
Default: 4
multiline.parser ([]string, optional)
Specify one or multiple parser definitions to apply to the content. Part of the new Multiline Core support in 1.8
Default: ""
Parser (string, optional)
Specify the name of a parser to interpret the entry as a structured message.
Parser_Firstline (string, optional)
Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture)
Parser_N ([]string, optional)
Optional-extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN.
Path (string, optional)
Pattern specifying a specific log file or multiple ones through the use of common wildcards.
Path_Key (string, optional)
If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.
Read_From_Head (bool, optional)
For new discovered files on start (without a database offset/position), read the content from the head of the file, not tail.
Refresh_Interval (string, optional)
The interval of refreshing the list of watched files in seconds.
Default: 60
Rotate_Wait (string, optional)
Specify the extra time in seconds to monitor a file once it is rotated, in case some pending data needs to be flushed.
Default: 5
Skip_Long_Lines (string, optional)
When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.
Default: Off
storage.type (string, optional)
Specify the buffering mechanism to use. It can be memory or filesystem.
Default: memory
Tag (string, optional)
Set a tag (with regex-extract fields) that will be placed on lines read.
Tag_Regex (string, optional)
Set a regex to extract fields from the file.
FilterKubernetes
FilterKubernetes: the Fluent Bit Kubernetes Filter allows you to enrich your log files with Kubernetes metadata.
Annotations (string, optional)
Include Kubernetes resource annotations in the extra metadata.
Default: On
Buffer_Size (string, optional)
Set the buffer size for HTTP client when reading responses from Kubernetes API server. The value must be according to the Unit Size specification. A value of 0 results in no limit, and the buffer will expand as-needed. Note that if pod specifications exceed the buffer limit, the API response will be discarded when retrieving metadata, and some Kubernetes metadata will fail to be injected into the logs. If this value is empty, it is set to “0”.
Default: “0”
Cache_Use_Docker_Id (string, optional)
When enabled, metadata will be fetched from K8s when docker_id is changed.
Default: Off
DNS_Retries (string, optional)
DNS lookup retries N times until the network starts working
Default: 6
DNS_Wait_Time (string, optional)
DNS lookup interval between network status checks
Default: 30
Dummy_Meta (string, optional)
If set, use dummy-meta data (for test/dev purposes)
Default: Off
K8S-Logging.Exclude (string, optional)
Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in Kubernetes Annotations section).
Default: On
K8S-Logging.Parser (string, optional)
Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in Kubernetes Annotations section)
Default: Off
Keep_Log (string, optional)
When Keep_Log is disabled, the log field is removed from the incoming message once it has been successfully merged (Merge_Log must be enabled as well).
Default: On
Kube_CA_File (string, optional)
CA certificate file (default:/var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
Kube_Meta_Cache_TTL (string, optional)
Configurable TTL for K8s cached metadata. By default, it is set to 0, which means the TTL for cache entries is disabled and cache entries are evicted at random when capacity is reached. In order to enable this option, set the number to a time interval. For example, set this value to 60 or 60s and cache entries which have been created more than 60s ago will be evicted.
Default: 0
Kube_meta_preload_cache_dir (string, optional)
If set, Kubernetes meta-data can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta
Kube_Tag_Prefix (string, optional)
When the source records come from the Tail input plugin, this option allows you to specify the prefix used in the Tail configuration. (default: kube.var.log.containers.)
Kube_Token_TTL (string, optional)
Configurable 'time to live' for the K8s token. By default, it is set to 600 seconds. After this time, the token is reloaded from Kube_Token_File or the Kube_Token_Command. (default: 600)
Default: 600
Kube_URL (string, optional)
API Server end-point.
Default: https://kubernetes.default.svc:443
Kubelet_Port (string, optional)
Kubelet port used for HTTP requests. This only works when Use_Kubelet is set to On.
Default: 10250
Labels (string, optional)
Include Kubernetes resource labels in the extra metadata.
Default: On
Match (string, optional)
Match filtered records (default:kube.*)
Default: kubernetes.*
Merge_Log (string, optional)
When enabled, it checks if the log field content is a JSON string map; if so, it appends the map fields as part of the log structure. (default: Off)
Default: On
Merge_Log_Key (string, optional)
When Merge_Log is enabled, the filter tries to assume the log field from the incoming message is a JSON string message and make a structured representation of it at the same level of the log field in the map. Now if Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are inserted under the new key.
Merge_Log_Trim (string, optional)
When Merge_Log is enabled, trim (remove possible \n or \r) field values.
Default: On
Merge_Parser (string, optional)
Optional parser name to specify how to parse the data contained in the log key. Recommended use is for developers or testing only.
Regex_Parser (string, optional)
Set an alternative Parser to process record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to parser filter-kube-test as an example).
tls.debug (string, optional)
Debug level between 0 (nothing) and 4 (every detail).
Default: -1
tls.verify (string, optional)
When enabled, turns on certificate validation when connecting to the Kubernetes API server.
Default: On
Use_Journal (string, optional)
When enabled, the filter reads logs coming in Journald format.
Default: Off
Use_Kubelet (string, optional)
This is an optional feature flag to get metadata information from kubelet instead of calling Kube Server API to enhance the log.
Default: Off
FilterAws
FilterAws The AWS Filter Enriches logs with AWS Metadata.
az (*bool, optional)
The availability zone (default:true).
Default: true
account_id (*bool, optional)
The account ID for current EC2 instance. (default:false)
Default: false
ami_id (*bool, optional)
The EC2 instance image id. (default:false)
Default: false
ec2_instance_id (*bool, optional)
The EC2 instance ID. (default:true)
Default: true
ec2_instance_type (*bool, optional)
The EC2 instance type. (default:false)
Default: false
hostname (*bool, optional)
The hostname for current EC2 instance. (default:false)
Default: false
imds_version (string, optional)
Specify which version of the instance metadata service to use. Valid values are ‘v1’ or ‘v2’ (default).
Default: v2
Match (string, optional)
Match filtered records (default:*)
Default: *
private_ip (*bool, optional)
The EC2 instance private ip. (default:false)
Default: false
vpc_id (*bool, optional)
The VPC ID for current EC2 instance. (default:false)
Default: false
FilterModify
FilterModify The Modify Filter plugin allows you to change records using rules and conditions.
conditions ([]FilterModifyCondition, optional)
FluentbitAgent Filter Modification Condition
rules ([]FilterModifyRule, optional)
FluentbitAgent Filter Modification Rule
FilterModifyRule
FilterModifyRule The Modify Filter plugin allows you to change records using rules and conditions.
Add (*FilterKeyValue, optional)
Add a key/value pair with key KEY and value VALUE if KEY does not exist
Copy (*FilterKeyValue, optional)
Copy a key/value pair with key KEY to COPIED_KEY if KEY exists AND COPIED_KEY does not exist
Hard_copy (*FilterKeyValue, optional)
Copy a key/value pair with key KEY to COPIED_KEY if KEY exists. If COPIED_KEY already exists, this field is overwritten
Hard_rename (*FilterKeyValue, optional)
Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists. If RENAMED_KEY already exists, this field is overwritten
Remove (*FilterKey, optional)
Remove a key/value pair with key KEY if it exists
Remove_regex (*FilterKey, optional)
Remove all key/value pairs with key matching regexp KEY
Remove_wildcard (*FilterKey, optional)
Remove all key/value pairs with key matching wildcard KEY
Rename (*FilterKeyValue, optional)
Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists AND RENAMED_KEY does not exist
Set (*FilterKeyValue, optional)
Add a key/value pair with key KEY and value VALUE. If KEY already exists, this field is overwritten
FilterModifyCondition
FilterModifyCondition The Modify Filter plugin allows you to change records using rules and conditions.
storage.total_limit_size (string, optional)
Limit the maximum number of Chunks in the filesystem for the current output logical destination.
Tag (string, optional)
Time_as_Integer (bool, optional)
Workers (*int, optional)
Available in Logging operator version 4.4 and later. Enables dedicated thread(s) for this output. Default value (2) is set since version 1.8.13. For previous versions is 0.
port (int32, optional)
Fluentd port inside the container (24240 by default). The headless service port is controlled by this field as well. Note that the default ClusterIP service port is always 24240, regardless of this field.
Available in Logging operator version 4.4 and later. Configurable resource requirements for the drainer sidecar container. Default 20m cpu request, 20M memory limit
LoggingRouteSpec defines the desired state of LoggingRoute
source (string, required)
Source identifies the logging that this policy applies to
targets (metav1.LabelSelector, required)
Targets refers to the list of logging resources specified by a label selector to forward logs to. Filtering of namespaces will happen based on the watchNamespaces and watchNamespaceSelector fields of the target logging resource.
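A hedged sketch of a LoggingRoute (the names and the label key are illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: LoggingRoute
metadata:
  name: route-to-tenants
spec:
  source: ops                  # the logging whose collector forwards the logs
  targets:
    matchLabels:
      tenant: "true"           # forward to logging resources carrying this label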
LoggingRouteStatus
LoggingRouteStatus defines the actual state of the LoggingRoute
notices ([]string, optional)
Enumerate non-blocker issues the user should pay attention to
noticesCount (int, optional)
Summarize the number of notices for the CLI output
problems ([]string, optional)
Enumerate problems that prohibit this route from taking effect and from populating the tenants field
problemsCount (int, optional)
Summarize the number of problems for the CLI output
tenants ([]Tenant, optional)
Enumerate all loggings with all the destination namespaces expanded
Tenant
name (string, required)
namespaces ([]string, optional)
LoggingRoute
LoggingRoute (experimental)
+Connects a log collector with log aggregators from other logging domains and routes relevant logs based on watch namespaces
Allow configuration of cluster resources from any namespace. Mutually exclusive with ControlNamespace restriction of Cluster resources
clusterDomain (*string, optional)
Cluster domain name to be used when templating URLs to services.
Default: “cluster.local.”
configCheck (ConfigCheck, optional)
ConfigCheck settings that apply to both fluentd and syslog-ng
controlNamespace (string, required)
Namespace for cluster wide configuration resources like ClusterFlow and ClusterOutput. This should be a protected namespace from regular users. Resources like fluentbit and fluentd will run in this namespace as well.
defaultFlow (*DefaultFlowSpec, optional)
Default flow for unmatched logs. This Flow configuration collects all logs that didn't match any other Flow.
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resource in the future) in case there is a change in an immutable field that otherwise couldn’t be managed with a simple update.
errorOutputRef (string, optional)
GlobalOutput name to flush ERROR events to
flowConfigCheckDisabled (bool, optional)
Disable configuration check before applying new fluentd configuration.
flowConfigOverride (string, optional)
Override generated config. This is a raw configuration string for troubleshooting purposes.
fluentbit (*FluentbitSpec, optional)
FluentbitAgent daemonset configuration. Deprecated, will be removed with the next major version. Migrate to the standalone NodeAgent resource.
WatchNamespaceSelector is a LabelSelector to find matching namespaces to watch as in WatchNamespaces
watchNamespaces ([]string, optional)
Limit namespaces to watch Flow and Output custom resources.
ConfigCheck
labels (map[string]string, optional)
Labels to use for the configcheck pods on top of labels added by the operator by default. Default values can be overwritten.
strategy (ConfigCheckStrategy, optional)
Select the config check strategy to use. DryRun: Parse and validate configuration. StartWithTimeout: Start with given configuration and exit after specified timeout. Default: DryRun
timeoutSeconds (int, optional)
Configure timeout in seconds if strategy is StartWithTimeout
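For example, a hedged sketch of switching the config check strategy on a Logging resource (the timeout value and names are illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging
spec:
  controlNamespace: logging
  configCheck:
    strategy: StartWithTimeout   # start with the new configuration and exit after the timeout
    timeoutSeconds: 5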
LoggingStatus
LoggingStatus defines the observed state of Logging
configCheckResults (map[string]bool, optional)
Result of the config check. Under normal conditions there is a single item in the map with a bool value.
fluentdConfigName (string, optional)
Available in Logging operator version 4.5 and later. Name of the matched detached fluentd configuration object.
problems ([]string, optional)
Problems with the logging resource
problemsCount (int, optional)
Count of problems for printcolumn
syslogNGConfigName (string, optional)
Available in Logging operator version 4.5 and later. Name of the matched detached SyslogNG configuration object.
watchNamespaces ([]string, optional)
List of namespaces that watchNamespaces + watchNamespaceSelector is resolving to. Not set means all namespaces.
Logging
Logging is the Schema for the loggings API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (LoggingSpec, optional)
status (LoggingStatus, optional)
LoggingList
LoggingList contains a list of Logging
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]Logging, required)
DefaultFlowSpec
DefaultFlowSpec is a Flow for logs that did not match any other Flow
Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don't set too small a value (for example, 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. (default: 24576)
Default: 24576
customConfigSecret (string, optional)
daemonSet (*typeoverride.DaemonSet, optional)
disableKubernetesFilter (*bool, optional)
enableUpstream (*bool, optional)
enabled (*bool, optional)
extraVolumeMounts ([]*VolumeMount, optional)
filterAws (*FilterAws, optional)
filterKubernetes (FilterKubernetes, optional)
flush (int32, optional)
Set the flush time in seconds.nanoseconds format. The engine loop uses a Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins. (default: 1)
Default: 1
forwardOptions (*ForwardOptions, optional)
grace (int32, optional)
Set the grace time in seconds as Integer value. The engine loop uses a Grace timeout to define wait time on exit (default: 5)
Default: 5
inputTail (InputTail, optional)
livenessDefaultCheck (*bool, optional)
Default: true
logLevel (string, optional)
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if ‘debug’ is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled. (default: info)
SyslogNGClusterFlow is the Schema for the syslog-ng clusterflows API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGClusterFlowSpec, optional)
status (SyslogNGFlowStatus, optional)
SyslogNGClusterFlowSpec
SyslogNGClusterFlowSpec is the Kubernetes spec for Flows
filters ([]SyslogNGFilter, optional)
globalOutputRefs ([]string, optional)
loggingRef (string, optional)
match (*SyslogNGMatch, optional)
outputMetrics ([]filter.MetricsProbe, optional)
Output metrics are applied before the log reaches the destination and contain output metadata like: name,namespace and scope. Scope shows whether the output is a local or global one. Available in Logging operator version 4.5 and later.
SyslogNGClusterFlowList
SyslogNGClusterFlowList contains a list of SyslogNGClusterFlow
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGClusterFlow, required)
+
4.7.1.13 - SyslogNGClusterOutput
SyslogNGClusterOutput
SyslogNGClusterOutput is the Schema for the syslog-ng clusteroutputs API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGClusterOutputSpec, required)
status (SyslogNGOutputStatus, optional)
SyslogNGClusterOutputSpec
SyslogNGClusterOutputSpec contains Kubernetes spec for SyslogNGClusterOutput
(SyslogNGOutputSpec, required)
enabledNamespaces ([]string, optional)
SyslogNGClusterOutputList
SyslogNGClusterOutputList contains a list of SyslogNGClusterOutput
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGClusterOutput, required)
+
4.7.1.14 - SyslogNGConfig
SyslogNGConfig
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGSpec, optional)
status (SyslogNGConfigStatus, optional)
SyslogNGConfigStatus
active (*bool, optional)
logging (string, optional)
problems ([]string, optional)
problemsCount (int, optional)
SyslogNGConfigList
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGConfig, required)
+
4.7.1.15 - SyslogNGFlowSpec
SyslogNGFlowSpec
SyslogNGFlowSpec is the Kubernetes spec for SyslogNGFlows
filters ([]SyslogNGFilter, optional)
globalOutputRefs ([]string, optional)
localOutputRefs ([]string, optional)
loggingRef (string, optional)
match (*SyslogNGMatch, optional)
outputMetrics ([]filter.MetricsProbe, optional)
Output metrics are applied before the log reaches the destination and contain output metadata like: name,namespace and scope. Scope shows whether the output is a local or global one. Available in Logging operator version 4.5 and later.
SyslogNGFilter
Filter definition for SyslogNGFlowSpec
id (string, optional)
match (*filter.MatchConfig, optional)
parser (*filter.ParserConfig, optional)
rewrite ([]filter.RewriteConfig, optional)
SyslogNGFlow
Flow Kubernetes object
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGFlowSpec, optional)
status (SyslogNGFlowStatus, optional)
SyslogNGFlowList
FlowList contains a list of Flow
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGFlow, required)
+
4.7.1.16 - SyslogNGOutputSpec
SyslogNGOutputSpec
SyslogNGOutputSpec defines the desired state of SyslogNGOutput
Available in Logging operator version 4.5 and later. Parses date automatically from the timestamp registered by the container runtime. Note: jsonKeyPrefix and jsonKeyDelim are respected.
Available in Logging operator version 4.5 and later.
Parses date automatically from the timestamp registered by the container runtime.
+Note: jsonKeyPrefix and jsonKeyDelim are respected.
+It is disabled by default, but if enabled, then the default settings parse the timestamp written by the container runtime and parsed by Fluent Bit using the cri or the docker parser.
format (*string, optional)
Default: “%FT%T.%f%z”
template (*string, optional)
Default(depending on JSONKeyPrefix): “${json.time}”
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the daemonset (and possibly other resource in the future) in case there is a change in an immutable field that otherwise couldn’t be managed with a simple update.
Allow anonymous source. <security> sections are required if disabled.
self_hostname (string, required)
Hostname
shared_key (string, required)
Shared key for authentication.
user_auth (bool, optional)
If true, use user based authentication.
+
4.8.2 - Transport
Transport
ca_cert_path (string, optional)
Specify private CA contained path
ca_path (string, optional)
Specify path to CA certificate file
ca_private_key_passphrase (string, optional)
private CA private key passphrase contained path
ca_private_key_path (string, optional)
private CA private key contained path
cert_path (string, optional)
Specify path to Certificate file
ciphers (string, optional)
Ciphers Default: “ALL:!aNULL:!eNULL:!SSLv2”
client_cert_auth (bool, optional)
When this is set Fluentd will check all incoming HTTPS requests for a client certificate signed by the trusted CA, requests that don’t supply a valid client certificate will fail.
insecure (bool, optional)
Use secure connection when using TLS. Default: false
private_key_passphrase (string, optional)
public CA private key passphrase contained path
private_key_path (string, optional)
Specify path to private Key file
protocol (string, optional)
Protocol Default: :tcp
version (string, optional)
Version Default: ‘TLSv1_2’
+
4.8.3 - Fluentd filters
You can use the following Fluentd filters in your Flow and ClusterFlow CRDs.
Fluentd Filter plugin to fetch several metadata for a Pod
Configuration
EnhanceK8s
api_groups ([]string, optional)
Kubernetes resources api groups
Default: ["apps/v1", "extensions/v1beta1"]
bearer_token_file (string, optional)
Bearer token path
Default: nil
ca_file (secret.Secret, optional)
Kubernetes API CA file
Default: nil
cache_refresh (int, optional)
Cache refresh
Default: 60*60
cache_refresh_variation (int, optional)
Cache refresh variation
Default: 60*15
cache_size (int, optional)
Cache size
Default: 1000
cache_ttl (int, optional)
Cache TTL
Default: 60*60*2
client_cert (secret.Secret, optional)
Kubernetes API Client certificate
Default: nil
client_key (secret.Secret, optional)
Kubernetes API Client certificate key
Default: nil
core_api_versions ([]string, optional)
Kubernetes core API version (for different Kubernetes versions)
Default: [‘v1’]
data_type (string, optional)
Sumo Logic data type
Default: metrics
in_namespace_path ([]string, optional)
parameters for read/write record
Default: ['$.namespace']
in_pod_path ([]string, optional)
Default: ['$.pod','$.pod_name']
kubernetes_url (string, optional)
Kubernetes API URL
Default: nil
ssl_partial_chain (*bool, optional)
If ca_file is for an intermediate CA, or otherwise we do not have the root CA and want to trust the intermediate CA certs we do have, set this to true - this corresponds to the openssl s_client -partial_chain flag and X509_V_FLAG_PARTIAL_CHAIN
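A hedged sketch of using this filter in a Flow (assuming the filter is referenced as enhance_k8s; the flow and output names are illustrative, and unset fields keep their defaults):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: enhance-k8s-demo
spec:
  filters:
    - enhance_k8s: {}       # enrich records with extra Kubernetes metadata using the defaults
  localOutputRefs:
    - demo-output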
This filter plugin consumes a log stream of JSON objects which contain single-line log messages. If a consecutive sequence of log messages forms an exception stack trace, they are forwarded as a single, combined JSON object. Otherwise, the input log data is forwarded as is. More info at https://github.com/GoogleCloudPlatform/fluent-plugin-detect-exceptions
+
Note: As Tag management is not supported yet, this Plugin is mutually exclusive with Tag normaliser
Fluentd Filter plugin to add information about geographical location of IP addresses with Maxmind GeoIP databases.
+More information at https://github.com/y-ken/fluent-plugin-geoip
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: demo-flow
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        remove_key_name_field: true
        reserve_data: true
        parse:
          type: nginx
    - prometheus:
        metrics:
          - name: total_counter
            desc: The total number of foo in message.
            type: counter
            labels:
              foo: bar
        labels:
          host: ${hostname}
          tag: ${tag}
          namespace: $.kubernetes.namespace
  selectors: {}
  localOutputRefs:
    - demo-output
Fluentd config result:
<filter **>
  @type prometheus
  @id logging-demo-flow_2_prometheus
  <metric>
    desc The total number of foo in message.
    name total_counter
    type counter
    <labels>
      foo bar
    </labels>
  </metric>
  <labels>
    host ${hostname}
    namespace $.kubernetes.namespace
    tag ${tag}
  </labels>
</filter>
A sentry plugin to throttle logs. Logs are grouped by a configurable key. When a group exceeds a configured rate, logs are dropped for this group.
Configuration
Throttle
group_bucket_limit (int, optional)
Maximum number logs allowed per groups over the period of group_bucket_period_s
Default: 6000
group_bucket_period_s (int, optional)
This is the period of time over which group_bucket_limit applies
Default: 60
group_drop_logs (bool, optional)
When a group reaches its limit, logs will be dropped from further processing if this value is true
Default: true
group_key (string, optional)
Used to group logs. Groups are rate limited independently
Default: kubernetes.container_name
group_reset_rate_s (int, optional)
After a group has exceeded its bucket limit, logs are dropped until the rate per second falls below or equal to group_reset_rate_s.
Default: group_bucket_limit/group_bucket_period_s
group_warning_delay_s (int, optional)
When a group reaches its limit and as long as it is not reset, a warning message with the current log rate of the group is emitted repeatedly. This is the delay between every repetition.
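A hedged sketch of applying this filter in a Flow (the limits repeat the defaults above; the flow and output names are illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: throttle-demo
spec:
  filters:
    - throttle:
        group_key: kubernetes.container_name   # rate limit per container
        group_bucket_period_s: 60
        group_bucket_limit: 6000               # max logs per group per period
  localOutputRefs:
    - demo-output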
Fluent OSS output plugin buffers event logs in local files and uploads them to OSS periodically in background threads.
This plugin splits events by using the timestamp of event logs. For example, if a log '2019-04-09 message Hello' arrives, and then another log '2019-04-10 message World' arrives in this order, the former is stored in the "20190409.gz" file, and the latter in the "20190410.gz" file.
Fluent OSS input plugin reads data from OSS periodically.
This plugin uses MNS on the same region of the OSS bucket. We must setup MNS and OSS event notification before using this plugin.
This document shows how to setup MNS and OSS event notification.
This plugin will poll events from MNS queue and extract object keys from these events, and then will read those objects from OSS. For details, see https://github.com/aliyun/fluent-plugin-oss.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
store_as (string, optional)
Archive format on OSS: gzip, json, text, lzo, lzma2
Default: gzip
upload_crc_enable (bool, optional)
Upload crc enabled
Default: true
warn_for_delay (string, optional)
Given a threshold to treat events as delay, output warning logs if delayed events were put into OSS
If true, put_log_events_retry_limit will be ignored
put_log_events_retry_limit (int, optional)
Maximum count of retry (if exceeding this, the events will be discarded)
put_log_events_retry_wait (string, optional)
Time before retrying PutLogEvents (retry interval increases exponentially like put_log_events_retry_wait * (2 ^ retry_count))
region (string, required)
AWS Region
remove_log_group_aws_tags_key (string, optional)
Remove field specified by log_group_aws_tags_key
remove_log_group_name_key (string, optional)
Remove field specified by log_group_name_key
remove_log_stream_name_key (string, optional)
Remove field specified by log_stream_name_key
remove_retention_in_days (string, optional)
Remove field specified by retention_in_days
retention_in_days (string, optional)
Use to set the expiry time for log group when created with auto_create_stream. (default to no expiry)
retention_in_days_key (string, optional)
Use specified field of records as retention period
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
The number of attempts to make (with exponential backoff) when loading instance profile credentials from the EC2 metadata service using an IAM role. Defaults to 5 retries.
aws_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_sec_key (*secret.Secret, optional)
AWS secret key. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_ses_token (*secret.Secret, optional)
AWS session token. This parameter is optional, but can be provided if using MFA or temporary credentials when your agent is not running on EC2 instance with an IAM Role.
append_new_line (*bool, optional)
If it is enabled, the plugin adds new line character (\n) to each serialized record. Before appending \n, plugin calls chomp and removes separator from the end of each record as chomp_record is true. Therefore, you don’t need to enable chomp_record option when you use kinesis_firehose output with default configuration (append_new_line is true). If you want to set append_new_line false, you can choose chomp_record false (default) or true (compatible format with plugin v2). (Default:true)
This loads AWS access credentials from an external process.
region (string, optional)
AWS region of your stream. It should be in form like us-east-1, us-west-2. Default nil, which means try to find from environment variable AWS_REGION.
reset_backoff_if_success (bool, optional)
Boolean, default true. If enabled, when after retrying, the next retrying checks the number of succeeded records on the former batch request and reset exponential backoff if there is any success. Because batch request could be composed by requests across shards, simple exponential backoff for the batch request wouldn’t work some cases.
retries_on_batch_request (int, optional)
The plugin will put multiple records to Amazon Kinesis Data Streams in batches using PutRecords. A set of records in a batch may fail for reasons documented in the Kinesis Service API Reference for PutRecords. Failed records will be retried retries_on_batch_request times
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
Assume Role Credentials
assume_role_credentials
duration_seconds (string, optional)
The duration, in seconds, of the role session (900-3600)
external_id (string, optional)
A unique identifier that is used by third parties when assuming roles in their customers’ accounts.
policy (string, optional)
An IAM policy in JSON format
role_arn (string, required) {#assume role credentials-role_arn}
The Amazon Resource Name (ARN) of the role to assume
The number of attempts to make (with exponential backoff) when loading instance profile credentials from the EC2 metadata service using an IAM role. Defaults to 5 retries.
aws_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_sec_key (*secret.Secret, optional)
AWS secret key. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_ses_token (*secret.Secret, optional)
AWS session token. This parameter is optional, but can be provided if using MFA or temporary credentials when your agent is not running on EC2 instance with an IAM Role.
This loads AWS access credentials from an external process.
region (string, optional)
AWS region of your stream. It should be in form like us-east-1, us-west-2. Default nil, which means try to find from environment variable AWS_REGION.
reset_backoff_if_success (bool, optional)
Boolean, default true. If enabled, when after retrying, the next retrying checks the number of succeeded records on the former batch request and reset exponential backoff if there is any success. Because batch request could be composed by requests across shards, simple exponential backoff for the batch request wouldn’t work some cases.
retries_on_batch_request (int, optional)
The plugin will put multiple records to Amazon Kinesis Data Streams in batches using PutRecords. A set of records in a batch may fail for reasons documented in the Kinesis Service API Reference for PutRecords. Failed records will be retried retries_on_batch_request times
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
stream_name (string, required)
Name of the stream to put data.
Assume Role Credentials
assume_role_credentials
duration_seconds (string, optional)
The duration, in seconds, of the role session (900-3600)
external_id (string, optional)
A unique identifier that is used by third parties when assuming roles in their customers’ accounts.
policy (string, optional)
An IAM policy in JSON format
role_arn (string, required)
The Amazon Resource Name (ARN) of the role to assume
The s3 output plugin buffers event logs in a local file and uploads them to S3 periodically. This plugin splits files exactly by using the time of the event logs (not the time when the logs are received). For example, if a log '2011-01-02 message B' arrives, and then another log '2011-01-03 message B' arrives in this order, the former one is stored in the "20110102.gz" file, and the latter one in the "20110103.gz" file.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
sse_customer_algorithm (string, optional)
Specifies the algorithm to use to when encrypting the object
sse_customer_key (string, optional)
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data
sse_customer_key_md5 (string, optional)
Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321
If false, the certificate of endpoint will not be verified
storage_class (string, optional)
The type of storage to use for the object, for example STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR For a complete list of possible values, see the Amazon S3 API reference.
store_as (string, optional)
Archive format on S3
use_bundled_cert (string, optional)
Use aws-sdk-ruby bundled cert
use_server_side_encryption (string, optional)
The Server-side encryption algorithm used when storing this object in S3 (AES256, aws:kms)
warn_for_delay (string, optional)
Given a threshold to treat events as delay, output warning logs if delayed events were put into s3
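A hedged sketch of an Output using this plugin (the bucket, region, path, and secret names are illustrative; field names follow the fluent-plugin-s3 conventions):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: demo-output
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-secret          # illustrative secret
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsSecretAccessKey
    s3_bucket: example-logging-bucket
    s3_region: us-east-1
    path: logs/${tag}/%Y/%m/%d/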
Available in Logging operator version 4.5 and later. Azure Cloud to use, for example, AzurePublicCloud, AzureChinaCloud, AzureGermanCloud, AzureUSGovernmentCloud, AZURESTACKCLOUD (in uppercase). This field is supported only if the fluentd plugin honors it, for example, https://github.com/elsesiy/fluent-plugin-azure-storage-append-blob-lts
Compat format type: out_file, json, ltsv (default: out_file)
Default: json
path (string, optional)
Path prefix of the files on Azure
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
+
4.8.4.8 - Buffer
Buffer
chunk_full_threshold (string, optional)
The percentage of chunk size threshold for flushing. output plugin will flush the chunk when actual size reaches chunk_limit_size * chunk_full_threshold (== 8MB * 0.95 in default)
chunk_limit_records (int, optional)
The max number of events that each chunks can store in it
chunk_limit_size (string, optional)
The max size of each chunks: events will be written into chunks until the size of chunks become this size (default: 8MB)
Default: 8MB
compress (string, optional)
If you set this option to gzip, you can get Fluentd to compress data records before writing to buffer chunks.
delayed_commit_timeout (string, optional)
The timeout seconds until output plugin decides that async write operation fails
disable_chunk_backup (bool, optional)
Instead of storing unrecoverable chunks in the backup directory, just discard them. This option is new in Fluentd v1.2.6.
disabled (bool, optional)
Disable buffer section (default: false)
Default: false,hidden
flush_at_shutdown (bool, optional)
The value to specify to flush/write all buffer chunks at shutdown, or not
flush_interval (string, optional)
Default: 60s
flush_mode (string, optional)
Default: default (equals lazy if time is specified as a chunk key, interval otherwise). lazy: flush/write chunks once per timekey. interval: flush/write chunks per specified time via flush_interval. immediate: flush/write chunks immediately after events are appended into chunks.
flush_thread_burst_interval (string, optional)
The sleep interval seconds of threads between flushes when output plugin flushes waiting chunks next to next
flush_thread_count (int, optional)
The number of threads of output plugins, which is used to write chunks in parallel
flush_thread_interval (string, optional)
The sleep interval seconds of threads to wait next flush trial (when no chunks are waiting)
overflow_action (string, optional)
How the output plugin behaves when its buffer queue is full. throw_exception: raise an exception to show this error in the log. block: block processing of the input plugin to emit events into that buffer. drop_oldest_chunk: drop/purge the oldest chunk to accept newly incoming chunks.
path (string, optional)
The path where buffer chunks are stored. The ‘*’ is replaced with random characters. It’s highly recommended to leave this default.
Default: operator generated
queue_limit_length (int, optional)
The queue length limitation of this buffer plugin instance
queued_chunks_limit_size (int, optional)
Limit the number of queued chunks. If you set smaller flush_interval, e.g. 1s, there are lots of small queued chunks in buffer. This is not good with file buffer because it consumes lots of fd resources when output destination has a problem. This parameter mitigates such situations.
retry_exponential_backoff_base (string, optional)
The base number of exponential backoff for retries
retry_forever (*bool, optional)
If true, plugin will ignore retry_timeout and retry_max_times options and retry flushing forever
Default: true
retry_max_interval (string, optional)
The maximum interval seconds for exponential backoff between retries while failing
retry_max_times (int, optional)
The maximum number of times to retry to flush while failing
retry_randomize (bool, optional)
If true, output plugin will retry after randomized interval not to do burst retries
retry_secondary_threshold (string, optional)
The ratio of retry_timeout to switch to use secondary while failing (Maximum valid value is 1.0)
retry_timeout (string, optional)
The maximum seconds to retry to flush while failing, until plugin discards buffer chunks
retry_type (string, optional)
exponential_backoff: wait seconds will grow exponentially per failure. periodic: the output plugin will retry periodically with fixed intervals (configured via retry_wait).
retry_wait (string, optional)
Seconds to wait before next retry to flush, or constant factor of exponential backoff
tags (*string, optional)
When tag is specified as buffer chunk key, output plugin writes events into chunks separately per tags.
Default: tag,time
timekey (string, required)
Output plugin will flush chunks per specified time (enabled when time is specified in chunk keys)
Default: 10m
timekey_use_utc (bool, optional)
Output plugin decides to use UTC or not to format placeholders using timekey
timekey_wait (string, optional)
Output plugin writes chunks after timekey_wait seconds later after timekey expiration
Default: 1m
timekey_zone (string, optional)
The timezone (-0700 or Asia/Tokyo) string for formatting timekey placeholders
total_limit_size (string, optional)
The size limitation of this buffer plugin instance. Once the total size of stored buffer reached this threshold, all append operations will fail with error (and data will be lost)
type (string, optional)
Fluentd core bundles memory and file plugins. 3rd party plugins are also available when installed.
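To make these options concrete, a hedged sketch of a buffer section inside an output (the values are illustrative):
spec:
  s3:
    # ...output-specific settings...
    buffer:
      type: file                 # file-backed buffering
      flush_mode: interval
      flush_interval: 60s
      flush_thread_count: 4      # write chunks in parallel
      chunk_limit_size: 8MB
      total_limit_size: 1G
      retry_forever: true
      timekey: 10m
      timekey_wait: 1m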
+
4.8.4.9 - Datadog
Datadog output plugin for Fluentd
Overview
It mainly contains a proper JSON formatter and a socket handler that streams logs directly to Datadog - so no need to use a log shipper if you don’t want to.
+For details, see https://github.com/DataDog/fluent-plugin-datadog.
Example
spec:
  datadog:
    api_key:
      value: '<YOUR_API_KEY>' # For referencing a secret, see https://kube-logging.dev/docs/configuration/plugins/outputs/secret/
    dd_source: '<INTEGRATION_NAME>'
    dd_tags: '<KEY1:VALUE1>,<KEY2:VALUE2>'
    dd_sourcecategory: '<YOUR_SOURCE_CATEGORY>'
+
Configuration
Output Config
api_key (*secret.Secret, required)
This parameter is required in order to authenticate your fluent agent.
compression_level (string, optional)
Set the log compression level for HTTP (1 to 9, 9 being the best ratio)
Default: “6”
dd_hostname (string, optional)
Used by Datadog to identify the host submitting the logs.
Default: “hostname -f”
dd_source (string, optional)
This tells Datadog what integration it is
Default: nil
dd_sourcecategory (string, optional)
Multiple value attribute. Can be used to refine the source attribute
Default: nil
dd_tags (string, optional)
Custom tags with the following format “key1:value1, key2:value2”
Default: nil
host (string, optional)
Proxy endpoint when logs are not directly forwarded to Datadog
Default: “http-intake.logs.datadoghq.com”
include_tag_key (bool, optional)
Automatically include the Fluentd tag in the record.
Default: false
max_backoff (string, optional)
The maximum time waited between each retry in seconds
Default: “30”
max_retries (string, optional)
The number of retries before the output plugin stops. Set to -1 for unlimited retries
Default: “-1”
no_ssl_validation (bool, optional)
Disable SSL validation (useful for proxy forwarding)
Default: false
port (string, optional)
Proxy port when logs are not directly forwarded to Datadog and ssl is not used
Default: “80”
service (string, optional)
Used by Datadog to correlate between logs, traces and metrics.
Default: nil
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
ssl_port (string, optional)
Port used to send logs over a SSL encrypted connection to Datadog. If use_http is disabled, use 10516 for the US region and 443 for the EU region.
Default: “443”
tag_key (string, optional)
Where to store the Fluentd tag.
Default: “tag”
timestamp_key (string, optional)
Name of the attribute which will contain timestamp of the log event. If nil, timestamp attribute is not added.
Default: “@timestamp”
use_compression (bool, optional)
Enable log compression for HTTP
Default: true
use_http (bool, optional)
Enable HTTP forwarding. If you disable it, make sure to change the port to 10514 or ssl_port to 10516
Default: true
use_json (bool, optional)
Event format: if true, the event is sent in JSON format. Otherwise, in plain text.
Default: true
use_ssl (bool, optional)
If true, the agent initializes a secure connection to Datadog. In clear TCP otherwise.
bulk_message_request_threshold (string, optional)
Configure the bulk_message request splitting threshold size. The default value is 20MB (20 * 1024 * 1024). If you specify this size as a negative number, the bulk_message request splitting feature will be disabled.
Default: 20MB
content_type (string, optional)
With content_type application/x-ndjson, elasticsearch plugin adds application/x-ndjson as Content-Profile in payload.
Default: application/json
custom_headers (string, optional)
This parameter adds additional headers to request. Example: {“token”:“secret”}
Default: {}
customize_template (string, optional)
Specify the string and its value to be replaced in form of hash. Can contain multiple key value pair that would be replaced in the specified template_file. This setting only creates template and to add rollover index please check the rollover_index configuration.
fail_on_putting_template_retry_exceed (bool, optional)
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property to avoid failing on Fluentd startup. (default: true)
Default: true
flatten_hashes (bool, optional)
Elasticsearch will complain if you send object and concrete values to the same field. For example, you might have logs that look like this, from different places: {“people” => 100} {“people” => {“some” => “thing”}} The second log line will be rejected by the Elasticsearch parser because objects and concrete values can’t live in the same field. To combat this, you can enable hash flattening.
flatten_hashes_separator (string, optional)
Flatten separator
host (string, optional)
You can specify the Elasticsearch host using this parameter.
Default: localhost
hosts (string, optional)
You can specify multiple Elasticsearch hosts with separator “,”. If you specify the hosts option, the host and port options are ignored.
http_backend (string, optional)
With http_backend typhoeus, elasticsearch plugin uses typhoeus faraday http backend. Typhoeus can handle HTTP keepalive.
ignore_exceptions (string, optional)
A list of exceptions that will be ignored: when the exception occurs, the chunk will be discarded and the buffer retry mechanism won’t be called. It is also possible to specify classes at a higher level in the hierarchy. For example, ignore_exceptions ["Elasticsearch::Transport::Transport::ServerError"] will match all subclasses of ServerError: Elasticsearch::Transport::Transport::Errors::BadRequest, Elasticsearch::Transport::Transport::Errors::ServiceUnavailable, etc.
ilm_policy (string, optional)
Specify ILM policy contents as Hash.
ilm_policy_id (string, optional)
Specify ILM policy id.
ilm_policy_overwrite (bool, optional)
Specify whether overwriting ilm policy or not.
include_index_in_url (bool, optional)
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce a URL-based access control.
include_tag_key (bool, optional)
This will add the Fluentd tag in the JSON record.
Default: false
include_timestamp (bool, optional)
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in Elasticsearch and utilize the rollover API.
Default: false
index_date_pattern (*string, optional)
Specify this to override the index date pattern for creating a rollover index.
Default: now/d
index_name (string, optional)
The index name to write events to
Default: fluentd
index_prefix (string, optional)
Specify the index prefix for the rollover index to be created.
Default: logstash
log_es_400_reason (bool, optional)
By default, the error logger won’t record the reason for a 400 error from the Elasticsearch API unless you set log_level to debug. However, this results in a lot of log spam, which isn’t desirable if all you want is the 400 error reasons. You can set this true to capture the 400 error reasons without all the other debug logs.
Default: false
logstash_dateformat (string, optional)
Set the Logstash date format.
Default: %Y.%m.%d
logstash_format (bool, optional)
Enable Logstash log format.
Default: false
logstash_prefix (string, optional)
Set the Logstash prefix.
Default: logstash
logstash_prefix_separator (string, optional)
Set the Logstash prefix separator.
Default: -
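Assuming these defaults, a record ingested on 15 January 2024 with the settings below would be written to an index named myapp-2024.01.15 (prefix + separator + date pattern); the prefix is illustrative:

```yaml
spec:
  elasticsearch:
    logstash_format: true
    logstash_prefix: myapp            # index becomes myapp-YYYY.MM.DD
    logstash_prefix_separator: "-"
    logstash_dateformat: "%Y.%m.%d"
```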
max_retry_get_es_version (string, optional)
You can specify the number of times to retry fetching the Elasticsearch version.
pipeline (string, optional)
This parameter sets the pipeline ID of your Elasticsearch to be added to the request; you can use it to configure an ingest node.
port (int, optional)
You can specify the Elasticsearch port using this parameter.
Default: 9200
prefer_oj_serializer (bool, optional)
With the default behavior, the Elasticsearch client uses Yajl as the JSON encoder/decoder. Oj is an alternative high-performance JSON encoder/decoder. When this parameter is set to true, the Elasticsearch client uses Oj as the JSON encoder/decoder.
Default: false
reconnect_on_error (bool, optional)
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default, it reconnects only on "host unreachable" exceptions. We recommend setting this to true in the presence of Elasticsearch Shield.
Default: false
reload_after (string, optional)
When reload_connections is true, this is the integer number of operations after which the plugin will reload the connections. The default value is 10000.
reload_connections (*bool, optional)
You can tune how the elasticsearch-transport host reloading feature works.(default: true)
Default: true
reload_on_failure (bool, optional)
Indicates that the elasticsearch-transport will try to reload the nodes addresses if there is a failure while making the request. This can be useful to quickly remove a dead node from the list of addresses.
remove_keys_on_update (string, optional)
Remove keys on update will not update the configured keys in Elasticsearch when a record is being updated. This setting only has any effect if the write operation is update or upsert.
remove_keys_on_update_key (string, optional)
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
request_timeout (string, optional)
You can specify HTTP request timeout.
Default: 5s
resurrect_after (string, optional)
You can set in the elasticsearch-transport how often dead connections from the elasticsearch-transport’s pool will be resurrected.
Default: 60s
retry_tag (string, optional)
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
routing_key (string, optional)
Similar to the parent_key config, this adds _routing into the Elasticsearch command if routing_key is set and the field exists in the input event.
ca_file (*secret.Secret, optional)
CA certificate
client_cert (*secret.Secret, optional)
Client certificate
client_key (*secret.Secret, optional)
Client certificate key
client_key_pass (*secret.Secret, optional)
Client key password
scheme (string, optional)
Connection scheme
Default: http
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
sniffer_class_name (string, optional)
The default Sniffer used by the Elasticsearch::Transport class works well when Fluentd has a direct connection to all of the Elasticsearch servers and can make effective use of the _nodes API. This doesn’t work well when Fluentd must connect through a load balancer or proxy. The parameter sniffer_class_name gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::ElasticsearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. https://github.com/uken/fluent-plugin-elasticsearch#sniffer-class-name
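When Fluentd reaches Elasticsearch through a load balancer, the simple sniffer mentioned above can be selected like this (a sketch; the class name is taken from the plugin documentation linked above):

```yaml
spec:
  elasticsearch:
    # Reuse the configured host (typically the load balancer) instead of
    # sniffing individual nodes through the _nodes API.
    sniffer_class_name: "Fluent::Plugin::ElasticsearchSimpleSniffer"
```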
ssl_max_version (string, optional)
Specify the maximum SSL/TLS version.
ssl_min_version (string, optional)
Specify the minimum SSL/TLS version.
ssl_verify (*bool, optional)
Skip ssl verification (default: true)
Default: true
ssl_version (string, optional)
If you want to configure SSL/TLS version, you can specify ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
suppress_doc_wrap (bool, optional)
By default, record body is wrapped by ‘doc’. This behavior can not handle update script requests. You can set this to suppress doc wrapping and allow record body to be untouched.
Default: false
suppress_type_name (*bool, optional)
Suppress type name to avoid warnings in Elasticsearch 7.x
target_type_key (string, optional)
Similar to the target_index_key config, find the type name to write to in the record under this key (or nested record). If the key is not found in the record, it falls back to type_name.
Default: fluentd
template_file (*secret.Secret, optional)
The path to the file containing the template to install. Secret
template_name (string, optional)
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
template_overwrite (bool, optional)
Always update the template, even if it already exists.
Default: false
templates (string, optional)
Specify index templates in form of hash. Can contain multiple templates.
time_key (string, optional)
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
time_key_format (string, optional)
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
time_parse_error_tag (string, optional)
With logstash_format true, elasticsearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to @ERROR label with time_parse_error_tag configured tag.
time_precision (string, optional)
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
type_name (string, optional)
Set the index type for elasticsearch. This is the fallback if target_type_key is missing.
Default: fluentd
unrecoverable_error_types (string, optional)
The default unrecoverable_error_types parameter is set up strictly, because es_rejected_execution_exception is caused by exceeding Elasticsearch's thread pool capacity. Advanced users can increase its capacity, but normal users should follow the default behavior. If you want to increase it and forcibly retry bulk requests, consider changing the unrecoverable_error_types parameter from its default value. (Change the default value of thread_pool.bulk.queue_size in elasticsearch.yml.)
user (string, optional)
User for HTTP Basic authentication. This plugin will escape required URL-encoded characters within %{} placeholders, for example, %{demo+}.
utc_index (*bool, optional)
By default, the records are inserted into the index logstash-YYMMDD using UTC (Coordinated Universal Time). This option allows you to use local time by setting utc_index to false. (default: true)
Default: true
validate_client_version (bool, optional)
When you use mismatched Elasticsearch server and client libraries, fluent-plugin-elasticsearch cannot send data into Elasticsearch.
Default: false
verify_es_version_at_startup (*bool, optional)
Because the Elasticsearch plugin should change its behavior for each Elasticsearch major version. For example, Elasticsearch 6 starts to prohibit multiple type_names in one index, and Elasticsearch 7 handles only the _doc type_name in the index. If you want to disable verifying the Elasticsearch version at startup, set this to false. (default: true)
Default: true
with_transporter_log (bool, optional)
This is a debugging option that enables obtaining the transporter layer log.
Default: false
write_operation (string, optional)
The write_operation can be any of: (index,create,update,upsert)
The path of the file. The actual path is path + time + ".log" by default.
path_suffix (string, optional)
The suffix of output result.
Default: “.log”
recompress (bool, optional)
Performs compression again even if the buffer chunk is already compressed.
Default: false
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
symlink_path (bool, optional)
Create symlink to temporary buffered file when buffer_type is file. This is useful for tailing file content to check logs.
connect_timeout (int, optional)
The timeout for socket connect. When the connection times out during establishment, Errno::ETIMEDOUT is raised.
dns_round_robin (bool, optional)
Enable client-side DNS round robin. Uniformly and randomly picks an IP address to send data to when a hostname has several IP addresses. heartbeat_type udp is not available with dns_round_robin true. Use heartbeat_type tcp or heartbeat_type none.
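As noted above, DNS round robin cannot be combined with UDP heartbeats, so a hedged forward output fragment might pair it with TCP heartbeats (the aggregator hostname is a placeholder):

```yaml
spec:
  forward:
    dns_round_robin: true
    heartbeat_type: tcp        # udp is not allowed together with dns_round_robin
    servers:
      - host: fluentd-aggregator.logging.svc   # placeholder hostname
        port: 24224
```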
expire_dns_cache (int, optional)
Set TTL to expire DNS cache in seconds. Set 0 not to use DNS Cache.
Default: 0
hard_timeout (int, optional)
The hard timeout used to detect server failure. The default value is equal to the send_timeout parameter.
Default: 60
heartbeat_interval (int, optional)
The interval of the heartbeat packet.
Default: 1
heartbeat_type (string, optional)
The transport protocol to use for heartbeats. Set “none” to disable heartbeat. [transport, tcp, udp, none]
ignore_network_errors_at_startup (bool, optional)
Ignore DNS resolution and errors at startup time.
keepalive (bool, optional)
Enable keepalive connection.
Default: false
keepalive_timeout (int, optional)
The expiration time of keepalive. The default value is nil, which means keeping the connection open as long as possible.
Default: 0
phi_failure_detector (bool, optional)
Use the “Phi accrual failure detector” to detect server failure.
Default: true
phi_threshold (int, optional)
The threshold parameter used to detect server faults. phi_threshold is deeply related to heartbeat_interval. If you are using longer heartbeat_interval, please use the larger phi_threshold. Otherwise you will see frequent detachments of destination servers. The default value 16 is tuned for heartbeat_interval 1s.
Default: 16
recover_wait (int, optional)
The wait time before accepting a server fault recovery.
Default: 10
require_ack_response (bool, optional)
Change the protocol to at-least-once. The plugin waits for the ack from the destination's in_forward plugin.
Server definitions. At least one server is required. See Fluentd Server.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
tls_allow_self_signed_cert (bool, optional)
Allow self signed certificates or not.
Default: false
tls_cert_logical_store_name (string, optional)
The certificate logical store name on Windows system certstore. This parameter is for Windows only.
tls_cert_path (*secret.Secret, optional)
The additional CA certificate path for TLS.
tls_cert_thumbprint (string, optional)
The certificate thumbprint for searching from Windows system certstore This parameter is for Windows only.
tls_cert_use_enterprise_store (bool, optional)
Enable to use certificate enterprise store on Windows system certstore. This parameter is for Windows only.
tls_verify_hostname (bool, optional)
Verify the hostname of servers and certificates or not in TLS transport.
Default: true
tls_version (string, optional)
The default version of TLS transport. [TLSv1_1, TLSv1_2]
Default: TLSv1_2
transport (string, optional)
The transport protocol to use [ tcp, tls ]
verify_connection_at_startup (bool, optional)
Verify that a connection can be made with one of out_forward nodes at the time of startup.
Default: false
Fluentd Server
server
host (string, required)
The IP address or host name of the server.
name (string, optional)
The name of the server. Used for logging and certificate verification in TLS transport (when host is address).
password (*secret.Secret, optional)
The password for authentication.
port (int, optional)
The port number of the host. Note that both TCP packets (event stream) and UDP packets (heartbeat message) are sent to this port.
Default: 24224
shared_key (*secret.Secret, optional)
The shared key per server.
standby (bool, optional)
Marks a node as the standby node for an Active-Standby model between Fluentd nodes. When an active node goes down, the standby node is promoted to an active node. The standby node is not used by the out_forward plugin until then.
username (*secret.Secret, optional)
The username for authentication.
weight (int, optional)
The load balancing weight. If the weight of one server is 20 and the weight of the other server is 30, events are sent in a 2:3 ratio.
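For example, with the weights below roughly 40% of the events go to the first server and 60% to the second (20:30 = 2:3); the hostnames are placeholders:

```yaml
spec:
  forward:
    servers:
      - host: aggregator-a.logging.svc
        weight: 20
      - host: aggregator-b.logging.svc
        weight: 30
```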
object_metadata ([]ObjectMetadata, optional)
User-provided web-safe keys and arbitrary string values that will be returned with requests for the file as "x-goog-meta-" response headers. Object Metadata
overwrite (bool, optional)
Overwrite already existing path
Default: false
path (string, optional)
Path prefix of the files on GCS
project (string, required)
Project identifier for GCS
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
storage_class (string, optional)
Storage class of the file: dra, nearline, coldline, multi_regional, regional, standard.
ca_cert (*secret.Secret, optional)
TLS: CA certificate file for server certificate verification Secret
cert (*secret.Secret, optional)
TLS: parameters for presenting a client certificate Secret
configure_kubernetes_labels (*bool, optional)
Configure Kubernetes metadata in a Prometheus like format
Default: false
drop_single_key (*bool, optional)
If a record only has 1 key, then just set the log line to the value and discard the key.
Default: false
extra_labels (map[string]string, optional)
Set of extra labels to include with every Loki stream.
extract_kubernetes_labels (*bool, optional)
Extract kubernetes labels as loki labels
Default: false
include_thread_label (*bool, optional)
whether to include the fluentd_thread label when multiple threads are used for flushing.
Default: true
insecure_tls (*bool, optional)
TLS: disable server certificate verification
Default: false
key (*secret.Secret, optional)
TLS: parameters for presenting a client certificate Secret
labels (Label, optional)
Set of labels to include with every Loki stream.
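A sketch of a Loki output that adds a static label to every stream and also turns the pod's Kubernetes labels into Loki labels (the URL is a placeholder):

```yaml
spec:
  loki:
    url: http://loki.logging.svc:3100   # placeholder URL
    extract_kubernetes_labels: true     # pod labels become Loki labels
    labels:
      cluster: demo                     # static label added to every stream
```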
line_format (string, optional)
Format to use when flattening the record to a log line: json, key_value (default: key_value)
Default: json
password (*secret.Secret, optional)
Specify password if the Loki server requires authentication. Secret
remove_keys ([]string, optional)
Comma separated list of needless record keys to remove
Default: []
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
tenant (string, optional)
Loki is a multi-tenant log storage platform and all requests sent must include a tenant.
url (string, optional)
The url of the Loki server to send logs to.
Default: https://logs-us-west1.grafana.net
username (*secret.Secret, optional)
Specify a username if the Loki server requires authentication. Secret
error_response_as_unrecoverable (bool, optional)
Raise UnrecoverableError when the response code is not a success code (1xx/3xx/4xx/5xx). If false, the plugin logs an error message instead of raising UnrecoverableError.
json_array (bool, optional)
Use array format for JSON. This parameter is used and valid only for the json format. When json_array is true, the Content-Profile should be application/json, and JSON data can be used for the HTTP request body.
Default: false
open_timeout (int, optional)
Connection open timeout in seconds.
proxy (string, optional)
Proxy for HTTP request.
read_timeout (int, optional)
Read timeout in seconds.
retryable_response_codes ([]int, optional)
List of retryable response codes. If the response code is included in this list, the plugin retries the buffer flush. Since Fluentd v2, status code 503 will be removed from the default list.
Default: [503]
ssl_timeout (int, optional)
TLS timeout in seconds.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
kafka_agg_max_bytes (int, optional)
Maximum value of total message size to be included in one batch transmission.
Default: 4096
kafka_agg_max_messages (int, optional)
Maximum number of messages to include in one batch transmission.
Default: nil
keytab (*secret.Secret, optional)
max_send_retries (int, optional)
Number of times to retry sending of messages to a leader
Default: 1
message_key_key (string, optional)
Message Key
Default: “message_key”
partition_key (string, optional)
Partition
Default: “partition”
partition_key_key (string, optional)
Partition Key
Default: “partition_key”
password (*secret.Secret, optional)
Password when using PLAIN/SCRAM SASL authentication
principal (string, optional)
required_acks (int, optional)
The number of acks required per request.
Default: -1
ssl_ca_cert (*secret.Secret, optional)
CA certificate
ssl_ca_certs_from_system (*bool, optional)
System’s CA cert store
Default: false
ssl_client_cert (*secret.Secret, optional)
Client certificate
ssl_client_cert_chain (*secret.Secret, optional)
Client certificate chain
ssl_client_cert_key (*secret.Secret, optional)
Client certificate key
ssl_verify_hostname (*bool, optional)
Verify certificate hostname
sasl_over_ssl (bool, required)
SASL over SSL
Default: true
scram_mechanism (string, optional)
If set, use SCRAM authentication with specified mechanism. When unset, default to PLAIN authentication
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
topic_key (string, optional)
Topic Key
Default: “topic”
use_default_for_unknown_topic (bool, optional)
Use default for unknown topics
Default: false
username (*secret.Secret, optional)
Username when using PLAIN/SCRAM SASL authentication
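A hedged sketch of SCRAM authentication over TLS for the Kafka output; the broker list, topic, and Secret name are placeholders, and brokers/default_topic are the output's usual connection fields (not listed in this excerpt):

```yaml
spec:
  kafka:
    brokers: kafka-0.kafka.svc:9093     # placeholder broker list
    default_topic: app-logs             # placeholder topic
    sasl_over_ssl: true
    scram_mechanism: sha256
    username:
      valueFrom:
        secretKeyRef:
          name: kafka-credentials       # placeholder Secret
          key: username
    password:
      valueFrom:
        secretKeyRef:
          name: kafka-credentials
          key: password
```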
HTTPS POST Request Timeout, Optional. Supports s and ms suffixes.
Default: 30 s
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
bulk_limit (int, optional)
Limit to the size of the Logz.io upload bulk. Defaults to 1000000 bytes, leaving about 24kB for overhead.
bulk_limit_warning_limit (int, optional)
Limit to the size of the Logz.io warning message when a record exceeds bulk_limit to prevent a recursion when Fluent warnings are sent to the Logz.io output.
endpoint (*Endpoint, required)
Define LogZ endpoint URL
gzip (bool, optional)
Should the plugin ship the logs in gzip compression. Default is false.
http_idle_timeout (int, optional)
Timeout in seconds that the http persistent connection will stay open without traffic.
output_include_tags (bool, optional)
Should the appender add the fluentd tag to the document, called “fluentd_tag”
output_include_time (bool, optional)
Should the appender add a timestamp to your logs on their process time (recommended).
retry_count (int, optional)
How many times to resend failed bulks.
retry_sleep (int, optional)
How long to sleep initially between retries, exponential step-off.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
application_name (*string, optional)
Specify the application name for the rollover index to be created.
Default: default
buffer (*Buffer, optional)
bulk_message_request_threshold (string, optional)
Configure bulk_message request splitting threshold size. Default value is 20MB. (20 * 1024 * 1024) If you specify this size as negative number, bulk_message request splitting feature will be disabled.
custom_headers (string, optional)
This parameter adds additional headers to the request. Example: {"token":"secret"}
Default: {}
customize_template (string, optional)
Specify the string and its value to be replaced in form of hash. Can contain multiple key value pair that would be replaced in the specified template_file. This setting only creates template and to add rollover index please check the rollover_index configuration.
data_stream_enable (*bool, optional)
Use @type opensearch_data_stream
data_stream_name (string, optional)
You can specify Opensearch data stream name by this parameter. This parameter is mandatory for opensearch_data_stream.
data_stream_template_name (string, optional)
Specify an existing index template for the data stream. If not present, a new template is created and named after the data stream.
fail_on_putting_template_retry_exceed (bool, optional)
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property to avoid failing on Fluentd startup. (default: true)
host (string, optional)
You can specify the OpenSearch host using this parameter.
Default: localhost
hosts (string, optional)
You can specify multiple OpenSearch hosts with the separator ",". If you specify the hosts option, the host and port options are ignored.
http_backend (string, optional)
With http_backend typhoeus, the opensearch plugin uses typhoeus faraday http backend. Typhoeus can handle HTTP keepalive.
Default: excon
http_backend_excon_nonblock (*bool, optional)
http_backend_excon_nonblock
Default: true
id_key (string, optional)
Field on your data to identify the data uniquely
ignore_exceptions (string, optional)
A list of exceptions that will be ignored - when the exception occurs, the chunk will be discarded and the buffer retry mechanism won't be called. It is also possible to specify classes at a higher level in the hierarchy.
include_index_in_url (bool, optional)
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce URL-based access control.
include_tag_key (bool, optional)
This will add the Fluentd tag in the JSON record.
Default: false
include_timestamp (bool, optional)
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in OpenSearch and utilize the rollover API.
Default: false
index_date_pattern (*string, optional)
Specify this to override the index date pattern for creating a rollover index.
pipeline (string, optional)
This parameter sets the pipeline ID of your OpenSearch to be added to the request; you can use it to configure an ingest node.
port (int, optional)
You can specify OpenSearch port by this parameter.
Default: 9200
prefer_oj_serializer (bool, optional)
With the default behavior, the OpenSearch client uses Yajl as the JSON encoder/decoder. Oj is an alternative high-performance JSON encoder/decoder. When this parameter is set to true, the OpenSearch client uses Oj as the JSON encoder/decoder.
Default: false
reconnect_on_error (bool, optional)
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default, it reconnects only on "host unreachable" exceptions. We recommend setting this to true in the presence of OpenSearch shield.
Default: false
reload_after (string, optional)
When reload_connections is true, this is the integer number of operations after which the plugin will reload the connections. The default value is 10000.
reload_connections (*bool, optional)
You can tune how the OpenSearch-transport host reloading feature works.(default: true)
Default: true
reload_on_failure (bool, optional)
Indicates that the OpenSearch-transport will try to reload the nodes addresses if there is a failure while making the request. This can be useful to quickly remove a dead node from the list of addresses.
Default: false
remove_keys_on_update (string, optional)
Remove keys on update will not update the configured keys in OpenSearch when a record is being updated. This setting only has any effect if the write operation is update or upsert.
remove_keys_on_update_key (string, optional)
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
request_timeout (string, optional)
You can specify HTTP request timeout.
Default: 5s
resurrect_after (string, optional)
You can set in the OpenSearch-transport how often dead connections from the OpenSearch-transport’s pool will be resurrected.
Default: 60s
retry_tag (string, optional)
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
routing_key (string, optional)
routing_key
ca_file (*secret.Secret, optional)
CA certificate
client_cert (*secret.Secret, optional)
Client certificate
client_key (*secret.Secret, optional)
Client certificate key
client_key_pass (*secret.Secret, optional)
Client key password
scheme (string, optional)
Connection scheme
Default: http
selector_class_name (string, optional)
selector_class_name
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
sniffer_class_name (string, optional)
The default Sniffer used by the OpenSearch::Transport class works well when Fluentd has a direct connection to all of the OpenSearch servers and can make effective use of the _nodes API. This doesn’t work well when Fluentd must connect through a load balancer or proxy. The sniffer_class_name parameter gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::OpenSearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. For example, a configuration like this would cause connections to logging-os to reload every 100 operations: https://github.com/fluent/fluent-plugin-opensearch#sniffer-class-name.
ssl_verify (*bool, optional)
Skip ssl verification (default: true)
Default: true
ssl_version (string, optional)
If you want to configure SSL/TLS version, you can specify ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
suppress_doc_wrap (bool, optional)
By default, record body is wrapped by ‘doc’. This behavior can not handle update script requests. You can set this to suppress doc wrapping and allow record body to be untouched.
Default: false
suppress_type_name (*bool, optional)
Suppress type name to avoid warnings in OpenSearch
tag_key (string, optional)
This will add the Fluentd tag in the JSON record.
Default: tag
target_index_affinity (bool, optional)
target_index_affinity
Default: false
target_index_key (string, optional)
Tell this plugin to find the index name to write to in the record under this key in preference to other mechanisms. The key can be specified as a path to a nested record using dot ('.') as a separator.
template_file (*secret.Secret, optional)
The path to the file containing the template to install. Secret
template_name (string, optional)
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
template_overwrite (bool, optional)
Always update the template, even if it already exists.
Default: false
templates (string, optional)
Specify index templates in form of hash. Can contain multiple templates.
time_key (string, optional)
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
time_key_exclude_timestamp (bool, optional)
time_key_exclude_timestamp
Default: false
time_key_format (string, optional)
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
time_parse_error_tag (string, optional)
With logstash_format true, OpenSearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to @ERROR label with time_parse_error_tag configured tag.
time_precision (string, optional)
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
truncate_caches_interval (string, optional)
truncate_caches_interval
unrecoverable_error_types (string, optional)
Default unrecoverable_error_types parameter is set up strictly. Because rejected_execution_exception is caused by exceeding OpenSearch’s thread pool capacity. Advanced users can increase its capacity, but normal users should follow default behavior.
unrecoverable_record_types (string, optional)
unrecoverable_record_types
use_legacy_template (*bool, optional)
Specify whether to use the legacy template or not.
Default: true
user (string, optional)
User for HTTP Basic authentication. This plugin will escape required URL encoded characters within %{} placeholders. e.g. %{demo+}
utc_index (*bool, optional)
By default, the records are inserted into the index logstash-YYMMDD using UTC (Coordinated Universal Time). This option allows you to use local time by setting utc_index to false.
Default: true
validate_client_version (bool, optional)
When you use mismatched OpenSearch server and client libraries, fluent-plugin-opensearch cannot send data into OpenSearch.
Default: false
verify_os_version_at_startup (*bool, optional)
Verify the OpenSearch version at startup. (default: true)
Default: true
with_transporter_log (bool, optional)
This is a debugging option that enables obtaining the transporter layer log.
Default: false
write_operation (string, optional)
The write_operation can be any of: (index,create,update,upsert)
Default: index
OpenSearchEndpointCredentials
access_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
assume_role_arn (*secret.Secret, optional)
Typically, you can use AssumeRole for cross-account access or federation.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
strftime_format (string, optional)
Users can set strftime format.
Default: “%s”
ttl (int, optional)
If 0 or negative value is set, ttl is not set in each key.
+
4.8.4.26 - Relabel
Available in Logging Operator version 4.2 and later.
The relabel output uses the relabel output plugin of Fluentd to route events back to a specific Flow, where they can be processed again.
This is useful, for example, if you need to preprocess a subset of logs differently, but then do the same processing on all messages at the end. In this case, you can create multiple flows for preprocessing based on specific log matchers and then aggregate everything into a single final flow for postprocessing.
The value of the label parameter of the relabel output must be the same as the value of the flowLabel parameter of the Flow (or ClusterFlow) where you want to send the messages.
Using the relabel output also makes it possible to pass the messages emitted by the Concat plugin in case of a timeout. Set the timeout_label of the concat plugin to the flowLabel of the flow where you want to send the timeout messages.
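For example (all names are illustrative), an Output using relabel and the Flow that receives the re-routed messages could be wired together like this:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: route-to-final
spec:
  relabel:
    label: '@final'          # must match the flowLabel of the target Flow
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: final
spec:
  flowLabel: '@final'        # messages sent to this label are processed here
  localOutputRefs:
    - final-destination      # placeholder output
```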
coerce_to_utf8 (bool, optional)
Indicates whether to allow non-UTF-8 characters in user logs. If set to true, any non-UTF-8 character is replaced by the string specified in non_utf8_replacement_string. If set to false, the Ingest API errors out any non-UTF-8 characters.
Default: true
data_type (string, optional)
The type of data that will be sent to Sumo Logic, either event or metric
Default: event
fields (Fields, optional)
In this case, parameters inside <fields> are used as indexed fields and removed from the original input events
host (string, optional)
The host location for events. Cannot set both host and host_key parameters at the same time. (Default: hostname)
host_key (string, optional)
Key for the host location. Cannot set both host and host_key parameters at the same time.
idle_timeout (int, optional)
If a connection has not been used for this number of seconds it will automatically be reset upon the next use to avoid attempting to send to a closed connection. nil means no timeout.
index (string, optional)
Identifier for the Splunk index to be used for indexing events. If this parameter is not set, the indexer is chosen by HEC. Cannot set both index and index_key parameters at the same time.
index_key (string, optional)
The field name that contains the Splunk index name. Cannot set both index and index_key parameters at the same time.
insecure_ssl (*bool, optional)
Indicates if insecure SSL connection is allowed
Default: false
keep_keys (bool, optional)
By default, all the fields used by the *_key parameters are removed from the original input events. To change this behavior, set this parameter to true. This parameter is set to false by default. When set to true, all fields defined in index_key, host_key, source_key, sourcetype_key, metric_name_key, and metric_value_key are saved in the original event.
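A sketch of a splunk_hec output that reads the target index from each record and keeps that routing field in the event; hec_host, hec_port, and hec_token are the output's usual connection fields (placeholders here, not listed in this excerpt):

```yaml
spec:
  splunkHec:
    hec_host: splunk-hec.example.com    # placeholder host
    hec_port: 8088
    hec_token:
      valueFrom:
        secretKeyRef:
          name: splunk-hec-token        # placeholder Secret
          key: token
    index_key: splunk_index             # read the target index from this field
    keep_keys: true                     # do not strip splunk_index from the event
```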
metric_name_key (string, optional)
Field name that contains the metric name. This parameter only works in conjunction with the metrics_from_event parameter. When this parameter is set, the metrics_from_event parameter is automatically set to false.
Default: true
metric_value_key (string, optional)
Field name that contains the metric value. This parameter is required when metric_name_key is configured.
metrics_from_event (*bool, optional)
When data_type is set to “metric”, the ingest API will treat every key-value pair in the input event as a metric name-value pair. Set metrics_from_event to false to disable this behavior and use metric_name_key and metric_value_key to define metrics. (Default:true)
non_utf8_replacement_string (string, optional)
If coerce_to_utf8 is set to true, any non-UTF-8 character is replaced by the string you specify in this parameter.
Default: ' '
open_timeout (int, optional)
The amount of time to wait for a connection to be opened.
protocol (string, optional)
This is the protocol to use for calling the HEC API. Available values are: http, https.
Default: https
read_timeout (int, optional)
The amount of time allowed between reading two chunks from the socket.
ssl_ciphers (string, optional)
List of SSL ciphers allowed.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
source (string, optional)
The source field for events. If this parameter is not set, the source will be decided by HEC. Cannot set both source and source_key parameters at the same time.
source_key (string, optional)
Field name to contain source. Cannot set both source and source_key parameters at the same time.
sourcetype (string, optional)
The sourcetype field for events. When not set, the sourcetype is decided by HEC. Cannot set both source and source_key parameters at the same time.
sourcetype_key (string, optional)
Field name that contains the sourcetype. Cannot set both source and source_key parameters at the same time.
SQS queue URL, for example, https://sqs.us-west-2.amazonaws.com/123456789012/myqueue
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
log_key (string, optional)
Used to specify the key when merging JSON or sending logs in text format.
Default: message
metric_data_format (string, optional)
The format of metrics you will be sending, either graphite or carbon2 or prometheus
Default: graphite
open_timeout (int, optional)
Set timeout seconds to wait until connection is opened.
Default: 60
proxy_uri (string, optional)
Add the uri of the proxy environment if present.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
source_category (string, optional)
Set _sourceCategory metadata field within SumoLogic
Default: nil
source_host (string, optional)
Set _sourceHost metadata field within SumoLogic
Default: nil
source_name (string, required)
Set _sourceName metadata field within SumoLogic - overrides source_name_key (default is nil)
source_name_key (string, optional)
Set as source::path_key’s value so that the source_name can be extracted from Fluentd’s buffer
Default: source_name
sumo_client (string, optional)
Name of the Sumo client which is sent as the X-Sumo-Client header.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Authorization Bearer token for http request to VMware Log Intelligence Secret
content_type (string, required)
Content Type for http request to VMware Log Intelligence
Default: application/json
structure (string, required)
Structure for http request to VMware Log Intelligence
Default: simple
LogIntelligenceHeadersOut
LogIntelligenceHeadersOut is used to convert the input LogIntelligenceHeaders to a Fluentd output that uses the correct key names for the VMware Log Intelligence plugin. This allows the Output to accept the config in snake_case (as other output plugins do) but render the Fluentd config with the proper key names (for example, content_type -> Content-Type).
Authorization (*secret.Secret, required)
Authorization Bearer token for http request to VMware Log Intelligence
Content-Type (string, required)
Content Type for http request to VMware Log Intelligence
Default: application/json
structure (string, required)
Structure for http request to VMware Log Intelligence
flatten_hashes (bool, optional)
Flatten hashes to create one key/value pair without losing log data.
Default: true
flatten_hashes_separator (string, optional)
Separator to use for joining flattened keys
Default: _
http_conn_debug (bool, optional)
If set, enables debug logs for http connection
Default: false
http_method (string, optional)
HTTP method (post)
Default: post
host (string, optional)
VMware Aria Operations For Logs host, for example, localhost.
log_text_keys ([]string, optional)
Keys from log event whose values should be added as log message/text to VMware Aria Operations For Logs. These key/value pairs won’t be expanded/flattened and won’t be added as metadata/fields.
path (string, optional)
VMware Aria Operations For Logs ingestion API path, for example, 'api/v1/events/ingest'.
Default: api/v1/events/ingest
port (int, optional)
VMware Aria Operations For Logs port, for example, 9000.
Default: 80
raise_on_error (bool, optional)
Raise errors that were rescued during HTTP requests?
Default: false
rate_limit_msec (int, optional)
Simple rate limiting: ignore any records within rate_limit_msec since the last one
Default: 0
request_retries (int, optional)
Number of retries
Default: 3
request_timeout (int, optional)
http connection ttl for each request
Default: 5
ssl_verify (*bool, optional)
SSL verification flag
Default: true
scheme (string, optional)
HTTP scheme (http,https)
Default: http
serializer (string, optional)
Serialization (json)
Default: json
shorten_keys (map[string]string, optional)
Keys from the log event to rewrite, for instance from 'kubernetes_namespace' to 'k8s_namespace'. Tags will be rewritten with substring substitution and applied in the order present in the hash. Hashes enumerate their values in the order that the corresponding keys were inserted, see: https://ruby-doc.org/core-2.2.2/Hash.html
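For instance, the following hypothetical mapping would turn a key such as kubernetes_namespace into k8s_namespace by substring substitution (only the parameter fragment is shown):

```yaml
shorten_keys:
  kubernetes_: "k8s_"          # any key containing "kubernetes_" is rewritten
  annotations_: "ann_"         # applied after the previous substitution
```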
The annotation format is logging.banzaicloud.io/<loggingRef>: watched. Since the name part of an annotation can't be empty, the default applies to an empty loggingRef value as well.
The mount path is generated from the secret information
The name of the counter to create. Note that the value of this option is always prefixed with syslogng_, so for example key("my-custom-key") becomes syslogng_my-custom-key.
labels (ArrowMap, optional)
The labels used to create separate counters, based on the fields of the messages processed by metrics-probe(). The keys of the map are the name of the label, and the values are syslog-ng templates.
level (int, optional)
Sets the stats level of the generated metrics (default 0).
- (struct{}, required)
+
4.8.5.3 - Rewrite
Rewrite filters can be used to modify record contents. Logging operator currently supports the following rewrite functions:
SyslogNGOutput and SyslogNGClusterOutput resources have almost the same structure as Output and ClusterOutput resources, with the main difference being the number and kind of supported destinations.
You can use the following syslog-ng outputs in your SyslogNGOutput and SyslogNGClusterOutput resources.
+
4.8.6.1 - Authentication for syslog-ng outputs
Overview
GRPC-based outputs use this configuration instead of the simple tls field found at most HTTP based destinations. For details, see the documentation of a related syslog-ng destination, for example, Grafana Loki.
Configuration
Auth
Authentication settings. Only one authentication method can be set. Default: Insecure
adc (*ADC, optional)
Application Default Credentials (ADC).
alts (*ALTS, optional)
Application Layer Transport Security (ALTS) is a simple-to-use authentication method, only available within Google's infrastructure.
insecure (*Insecure, optional)
This is the default method, authentication is disabled (auth(insecure())).
compaction (*bool, optional)
Prunes the unused space in the LogMessage representation.
dir (string, optional)
Description: Defines the folder where the disk-buffer files are stored.
disk_buf_size (int64, required)
This is a required option. The maximum size of the disk-buffer in bytes. The minimum value is 1048576 bytes.
mem_buf_length (*int64, optional)
Use this option if the option reliable() is set to no. This option contains the number of messages stored in overflow queue.
mem_buf_size (*int64, optional)
Use this option if the option reliable() is set to yes. This option contains the size of the messages in bytes that is used in the memory part of the disk buffer.
q_out_size (*int64, optional)
The number of messages stored in the output buffer of the destination.
reliable (bool, required)
If set to yes, syslog-ng OSE cannot lose logs in case of reload/restart, unreachable destination or syslog-ng OSE crash. This solution provides a slower, but reliable disk-buffer option.
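A hedged example of a reliable disk buffer definition; the sizes are arbitrary, but disk_buf_size must be at least 1048576 bytes:

```yaml
disk_buffer:
  reliable: true
  disk_buf_size: 536870912     # 512 MiB on-disk buffer
  mem_buf_size: 10485760       # used because reliable is set to yes
  dir: /buffers                # folder where the disk-buffer files are stored
```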
dir_group (string, optional)
The group of the directories created by syslog-ng. To preserve the original properties of an existing directory, use the option without specifying an attribute: dir-group().
Default: Use the global settings
dir_owner (string, optional)
The owner of the directories created by syslog-ng. To preserve the original properties of an existing directory, use the option without specifying an attribute: dir-owner().
Default: Use the global settings
dir_perm (int, optional)
The permission mask of directories created by syslog-ng. Log directories are only created if a file after macro expansion refers to a non-existing directory, and directory creation is enabled (see also the create-dirs() option). For octal numbers prefix the number with 0, for example, use 0755 for rwxr-xr-x.
Default: Use the global settings
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
body (string, optional)
The body of the HTTP request, for example, body("${ISODATE} ${MESSAGE}"). You can use strings, macros, and template functions in the body. If not set, it will contain the message received from the source by default.
body-prefix (string, optional)
The string syslog-ng OSE puts at the beginning of the body of the HTTP request, before the log message.
body-suffix (string, optional)
The string syslog-ng OSE puts to the end of the body of the HTTP request, after the log message.
delimiter (string, optional)
By default, syslog-ng OSE separates the log messages of the batch with a newline character.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
headers ([]string, optional)
Custom HTTP headers to include in the request, for example, headers("HEADER1: header1", "HEADER2: header2").
Default: empty
log-fifo-size (int, optional)
The number of messages that the output queue can store.
method (string, optional)
Specifies the HTTP method to use when sending the message to the server. POST | PUT
password (secret.Secret, optional)
The password that syslog-ng OSE uses to authenticate on the server where it sends the messages.
persist_name (string, optional)
If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
timeout (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
url (string, optional)
Specifies the hostname or IP address and optionally the port number of the web service that can receive log data via HTTP. Use a colon (:) after the address to specify the port number of the server. For example: http://127.0.0.1:8000
user (string, optional)
The username that syslog-ng OSE uses to authenticate on the server where it sends the messages.
user-agent (string, optional)
The value of the USER-AGENT header in the messages sent to the server.
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
Batch
batch-bytes (int, optional)
Description: Sets the maximum size of payload in a batch. If the size of the messages reaches this value, syslog-ng OSE sends the batch to the destination even if the number of messages is less than the value of the batch-lines() option. Note that if the batch-timeout() option is enabled and the queue becomes empty, syslog-ng OSE flushes the messages only if batch-timeout() expires, or the batch reaches the limit set in batch-bytes().
batch-lines (int, optional)
Description: Specifies how many lines are flushed to a destination in one batch. The syslog-ng OSE application waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For example, if you set batch-lines() to 100, syslog-ng OSE waits for 100 messages.
batch-timeout (int, optional)
Description: Specifies the time syslog-ng OSE waits for lines to accumulate in the output buffer. The syslog-ng OSE application sends batches to the destinations evenly. The timer starts when the first message arrives to the buffer, so if only few messages arrive, syslog-ng OSE sends messages to the destination at most once every batch-timeout() milliseconds.
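A sketch of how these batching options could be combined on a syslog-ng HTTP destination (the URL is a placeholder); the batch is flushed when any of the three limits is reached:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: http-batched
spec:
  http:
    url: http://ingest.example.com:8080   # placeholder URL
    batch-lines: 1000        # flush after 1000 messages...
    batch-bytes: 4194304     # ...or when the payload reaches ~4 MiB...
    batch-timeout: 10000     # ...or after 10 seconds (value in milliseconds)
```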
+
4.8.6.6 - Loggly output
Overview
The loggly() destination sends log messages to the Loggly Logging-as-a-Service provider.
You can send log messages over TCP, or encrypted with TLS for syslog-ng outputs.
attributes (string, optional)
A JSON object representing key-value pairs for the Event. These key-value pairs add structure to Events, making it easier to search. Attributes can be nested JSON objects, however, we recommend limiting the amount of nesting.
Default: "--scope rfc5424 --exclude MESSAGE --exclude DATE --leave-initial-dot"
batch_bytes (int, optional)
batch_lines (int, optional)
batch_timeout (int, optional)
body (string, optional)
content_type (string, optional)
This field specifies the content type of the log records being sent to Falcon’s LogScale.
Default: "application/json"
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
extra_headers (string, optional)
This field represents additional headers that can be included in the HTTP request when sending log records to Falcon’s LogScale.
Default: empty
persist_name (string, optional)
rawstring (string, optional)
The raw string representing the Event. The default display for an Event in LogScale is the rawstring. If you do not provide the rawstring field, then the response defaults to a JSON representation of the attributes field.
Default: empty
timezone (string, optional)
The timezone is only required if you specify the timestamp in milliseconds. The timezone specifies the local timezone for the event. Note that you must still specify the timestamp in UTC time.
token (*secret.Secret, optional)
An Ingest Token is a unique string that identifies a repository and allows you to send data to that repository.
Default: empty
url (*secret.Secret, optional)
Ingester URL is the URL of the Humio cluster you want to send data to.
batch-lines (int, optional)
Description: Specifies how many lines are flushed to a destination in one batch. The syslog-ng OSE application waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For example, if you set batch-lines() to 100, syslog-ng OSE waits for 100 messages.
batch-timeout (int, optional)
Description: Specifies the time syslog-ng OSE waits for lines to accumulate in the output buffer. The syslog-ng OSE application sends batches to the destinations evenly. The timer starts when the first message arrives to the buffer, so if only few messages arrive, syslog-ng OSE sends messages to the destination at most once every batch-timeout() milliseconds.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
labels (filter.ArrowMap, optional)
Using the Labels map, Kubernetes label to Loki label mapping can be configured. Example: {"app" : "$PROGRAM"}
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
If you receive the following error message during AxoSyslog startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See syslog-ng docs for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
template (string, optional)
Template for customizing the log message format.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
timestamp (string, optional)
The timestamp that will be applied to the outgoing messages (possible values: current | received | msg; default: current). Loki does not accept events in which the timestamp is not monotonically increasing.
url (string, optional)
Specifies the hostname or IP address and optionally the port number of the service that can receive log data via gRPC. Use a colon (:) after the address to specify the port number of the server. For example: grpc://127.0.0.1:8000
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
collection (string, required)
The name of the MongoDB collection where the log messages are stored (collections are similar to SQL tables). Note that the name of the collection must not start with a dollar sign ($), and that it may contain dot (.) characters.
dir (string, optional)
Defines the folder where the disk-buffer files are stored.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
fallback-topic is used when syslog-ng cannot post a message to the originally defined topic (which can include invalid characters coming from templates).
qos (int, optional)
qos stands for quality of service and can take three values in the MQTT world. Its default value is 0, where there is no guarantee that the message is ever delivered.
template (string, optional)
Template where you can configure the message template sent to the MQTT broker. By default, the template is: $ISODATE $HOST $MSGHDR$MSG
topic (string, optional)
Topic defines in which topic syslog-ng stores the log message. You can also use templates here, and use, for example, the $HOST macro in the topic name hierarchy.
auth (*secret.Secret, optional)
The password used for authentication on a password-protected Redis server.
command (StringList, optional)
Internal rendered form of the CommandAndArguments field
command_and_arguments ([]string, optional)
The Redis command to execute, for example, LPUSH, INCR, or HINCRBY. Using the HINCRBY command with an increment value of 1 allows you to create various statistics. For example, the command("HINCRBY" "${HOST}/programs" "${PROGRAM}" "1") command counts the number of log messages on each host for each program.
Default: ""
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
host (string, optional)
The hostname or IP address of the Redis server.
Default: 127.0.0.1
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
Persistname
port (int, optional)
The port number of the Redis server.
Default: 6379
retries (int, optional)
If syslog-ng OSE cannot send a message, it will try again until the number of attempts reaches retries().
Default: 3
throttle (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
Default: 0
time-reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
The number of messages that the output queue can store.
max_object_size (int, optional)
Set the maximum object size.
Default: 5120GiB
max_pending_uploads (int, optional)
Set the maximum number of pending uploads.
Default: 32
object_key (string, optional)
The object_key for the S3 server.
object_key_timestamp (RawString, optional)
Set object_key_timestamp
persist_name (string, optional)
Persistname
region (string, optional)
Set the region option.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
secret_key (*secret.Secret, optional)
The secret_key for the S3 server.
storage_class (string, optional)
Set the storage_class option.
template (RawString, optional)
Template
throttle (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
headers ([]string, optional)
Custom HTTP headers to include in the request, for example, headers("HEADER1: header1", "HEADER2: header2").
Default: empty
persist_name (string, optional)
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
persist_name (string, optional)
port (int, optional)
This option sets the port number of the Sumo Logic server to connect to.
Default: 6514
tag (string, optional)
This option specifies the list of tags to add as the tags field of Sumo Logic messages. If not specified, syslog-ng OSE automatically adds the tags already assigned to the message. If you set the tag() option, only the tags you specify will be added to the messages.
By default, syslog-ng OSE closes destination sockets if it receives any input from the socket (for example, a reply). If this option is set to no, syslog-ng OSE just ignores the input, but does not close the socket. For details, see the documentation of the AxoSyslog syslog-ng distribution.
disk_buffer (*DiskBuffer, optional)
Enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Unique name for the syslog-ng driver. If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
The name of a directory that contains a set of trusted CA certificates in PEM format. For details, see the AxoSyslog Core documentation.
ca_file (*secret.Secret, optional)
The name of a file that contains a set of trusted CA certificates in PEM format. (Optional) For details, see the AxoSyslog Core documentation.
cert_file (*secret.Secret, optional)
Name of a file that contains an X.509 certificate (or a certificate chain) in PEM format, suitable as a TLS certificate, matching the private key set in the key-file() option. For details, see the AxoSyslog Core documentation.
cipher-suite (string, optional)
Specifies the cipher, hash, and key-exchange algorithms used for the encryption, for example, ECDHE-ECDSA-AES256-SHA384. The list of available algorithms depends on the version of OpenSSL used to compile syslog-ng.
key_file (*secret.Secret, optional)
The name of a file that contains an unencrypted private key in PEM format, suitable as a TLS key. For details, see the AxoSyslog Core documentation.
Use the certificate store of the system for verifying HTTPS certificates. For details, see the AxoSyslog Core documentation.
GrpcTLS
ca_file (*secret.Secret, optional)
The name of a file that contains a set of trusted CA certificates in PEM format. For details, see the AxoSyslog Core documentation.
cert_file (*secret.Secret, optional)
Name of a file that contains an X.509 certificate (or a certificate chain) in PEM format, suitable as a TLS certificate, matching the private key set in the key-file() option. For details, see the AxoSyslog Core documentation.
key_file (*secret.Secret, optional)
The name of a file that contains an unencrypted private key in PEM format, suitable as a TLS key. For details, see the AxoSyslog Core documentation.
+
5 - Examples
Flow examples
The following examples show some simple flows. For more examples that use filters, see Filter examples in Flows.
Flow with a single output
This Flow sends every message with the app: nginx label to the output called forward-output-sample.
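A minimal sketch of such a Flow (the resource name and namespace are only examples; the forward-output-sample Output is assumed to exist in the same namespace):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: flow-sample
  namespace: default
spec:
  match:
    - select:
        labels:
          app: nginx
  localOutputRefs:
    - forward-output-sample
```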
Flow with multiple outputs
This Flow sends every message with the app: nginx label to the gcs-output-sample and s3-output-example outputs.
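A similar sketch with two output references (again, the names are only examples and the referenced outputs must exist in the same namespace):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: flow-multiple-outputs-sample
  namespace: default
spec:
  match:
    - select:
        labels:
          app: nginx
  localOutputRefs:
    - gcs-output-sample
    - s3-output-example
```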
YAML files for simple logging flows with filter examples.
GeoIP filter
Parser and tag normalizer
Dedot filter
Multiple format
+
5.2 - Parsing custom date formats
By default, the syslog-ng aggregator uses the time when a message has been received on its input source as the timestamp. If you want to use the timestamp written in the message metadata, you can use a date-parser.
Available in Logging operator version 4.5 and later.
To use the timestamps written by the container runtime (cri or docker) and parsed by Fluent Bit, define the sourceDateParser in the syslog-ng spec.
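A minimal sketch of such a configuration, assuming the field is spec.syslogNG.sourceDateParser in the Logging resource (an empty value uses the parser defaults; check SyslogNGSpec for the exact field names):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  syslogNG:
    sourceDateParser: {}
```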
5.3 - Store Nginx Access Logs in Amazon CloudWatch with Logging Operator
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to CloudWatch.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy the Logging operator and a demo Application
Install the Logging operator and a demo application using Helm.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
+
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
+
+
Install the Logging operator into the logging namespace:
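A sketch of the usual Helm invocation, assuming you install chart version 4.3.0 from the public OCI registry (adjust the version as needed):

```bash
helm upgrade --install --wait \
  --create-namespace --namespace logging \
  logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator \
  --version 4.3.0
```

Expected output: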
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
+
Create AWS secret
+
If you have your $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY environment variables set, you can use the following snippet.
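A sketch of creating such a secret with kubectl; the secret name (logging-cloudwatch) and the key names (awsAccessKeyId, awsSecretAccessKey) are placeholders, use whatever names your CloudWatch output references:

```bash
kubectl -n logging create secret generic logging-cloudwatch \
  --from-literal "awsAccessKeyId=$AWS_ACCESS_KEY_ID" \
  --from-literal "awsSecretAccessKey=$AWS_SECRET_ACCESS_KEY"
```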
5.4 - Transport all logs into Amazon S3 with Logging operator
This guide describes how to collect all the container logs in Kubernetes using the Logging operator, and how to send them to Amazon S3.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy the Logging operator
Install the Logging operator.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
+
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
+
+
Install the Logging operator into the logging namespace:
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Up until Logging operator version 4.3, Fluentd logs were written to the container filesystem; starting with version 4.4, they are written to stdout.
See FluentOutLogrotate for why this was changed and how you can re-enable the old behavior if needed.
Check the output. The logs will be available in the bucket on a path like:
5.5 - Store NGINX access logs in Elasticsearch with Logging operator
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Elasticsearch.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy Elasticsearch
First, deploy Elasticsearch in your Kubernetes cluster. The following procedure is based on the Elastic Cloud on Kubernetes quickstart, but there are some minor configuration changes, and we install everything into the logging namespace.
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Up until Logging operator version 4.3, Fluentd logs were written to the container filesystem; starting with version 4.4, they are written to stdout.
See FluentOutLogrotate for why this was changed and how you can re-enable the old behavior if needed.
+
Use the following command to retrieve the password of the elastic user:
kubectl -n logging get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}'| base64 --decode;echo
+
+
Enable port forwarding to the Kibana Dashboard Service.
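For example, assuming the Kibana instance from the ECK quickstart is named quickstart (which creates a service called quickstart-kb-http), port forwarding could look like this:

```bash
kubectl -n logging port-forward svc/quickstart-kb-http 5601
```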
Open the Kibana dashboard in your browser at https://localhost:5601 and log in as the elastic user with the retrieved password.
+
By default, the Logging operator sends the incoming log messages into an index called fluentd. Create an Index Pattern that includes this index (for example, fluentd*), then select Menu > Kibana > Discover. You should see the dashboard and some sample log messages from the demo application.
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Splunk.
Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output (in this case, to Splunk). For more details about the Logging operator, see the Logging operator overview.
Deploy Splunk
First, deploy Splunk Standalone in your Kubernetes cluster. The following procedure is based on the Splunk on Kubernetes quickstart.
5.7 - Sumo Logic with Logging operator and Fluentd
This guide walks you through a simple Sumo Logic setup using the Logging Operator.
Sumo Logic has both Prometheus and logging capabilities; here we focus only on the logging part.
Configuration
There are 3 crucial plugins needed for a proper Sumo Logic setup.
+
Kubernetes metadata enhancer
Sumo Logic filter
Sumo Logic output
Let's set up the logging first.
GlobalFilters
The first thing we need to ensure is that the EnhanceK8s filter is present in the globalFilters section of the Logging spec.
+This adds additional data to the log lines (like deployment and service names).
Now we can create a ClusterFlow. Add the Sumo Logic filter to the filters section of the ClusterFlow spec.
It takes the Kubernetes metadata and moves it to a special field called _sumo_metadata.
All of the moved fields are sent as HTTP headers to the Sumo Logic endpoint.
+
Note: As we are using Fluent Bit to enrich Kubernetes metadata, we need to specify the field names where this data is stored.
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Configure the Logging operator
+
+
Create the logging resource with a persistent syslog-ng installation.
5.9 - Transport Nginx Access Logs into Kafka with Logging operator
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Kafka.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
5.10 - Store Nginx Access Logs in Grafana Loki with Logging operator
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Grafana Loki.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy Loki and Grafana
+
+
Add the chart repositories of Loki and Grafana using the following commands:
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Nodegroup-based multitenancy allows you to have multiple tenants (for example, different developer teams or customer environments) on the same cluster who can configure their own logging resources within their assigned namespaces residing on different node groups.
These resources are isolated from the resources of the other tenants, so the configuration issues and performance characteristics of one tenant don't affect the others.
Sample setup
The following procedure creates two tenants (A and B) and their respective namespaces on a two-node cluster.
+
+
If you don’t already have a cluster, create one with your provider. For a quick test, you can use a local cluster, for example, using minikube:
minikube start --nodes=2
+
+
Set labels on the nodes that correspond to your tenants, for example, tenant-a and tenant-b.
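For example, assuming a label key of tenant (the key and the values are only examples):

```bash
kubectl label node <name-of-first-node> tenant=tenant-a
kubectl label node <name-of-second-node> tenant=tenant-b
```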
Output metrics are added before the log reaches the destination, and are decorated with output metadata like name, namespace, and scope. scope stores whether the output is a local or global one. For example:
The following sections describe how to change the configuration of your logging infrastructure, that is, how to configure your log collectors and forwarders.
The logging resource defines the logging infrastructure for your cluster that collects and transports your log messages, and also contains configurations for the Fluent Bit log collector and the Fluentd and syslog-ng log forwarders. It also establishes the controlNamespace, the administrative namespace of the Logging operator. The Fluentd and syslog-ng statefulsets and the Fluent Bit daemonset are deployed in this namespace, and global resources like ClusterOutput and ClusterFlow are evaluated only in this namespace by default - they are ignored in any other namespace unless allowClusterResourcesFromAllNamespaces is set to true.
You can customize the configuration of Fluentd, syslog-ng, and Fluent Bit in the logging resource. The logging resource also declares watchNamespaces, which specifies the namespaces whose Flow/SyslogNGFlow and Output/SyslogNGOutput resources are applied to Fluentd's/syslog-ng's configuration.
+
Note: By default, the Logging operator Helm chart doesn’t install the logging resource. If you want to install it with Helm, set the logging.enabled value to true.
For details on customizing the installation, see the Helm chart values.
You can customize the following sections of the logging resource:
+
Generic parameters of the logging resource. For the list of available parameters, see LoggingSpec.
The fluentd statefulset that Logging operator deploys. For a list of parameters, see FluentdSpec. For examples on customizing the Fluentd configuration, see Configure Fluentd.
The syslogNG statefulset that Logging operator deploys. For a list of parameters, see SyslogNGSpec. For examples on customizing the syslog-ng configuration, see Configure syslog-ng.
The fluentbit field is deprecated. Fluent Bit should now be configured separately, see Fluent Bit log collector.
The following example snippets use the logging namespace. To create this namespace if it does not already exist, run:
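```bash
kubectl create namespace logging
```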
Starting with Logging operator version 4.3, you can use the watchNamespaceSelector selector to select the watched namespaces based on their label, or an expression, for example:
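A minimal sketch, assuming the namespaces you want to watch carry the label tenant: tenant-a (the label key and value are only examples):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  watchNamespaceSelector:
    matchLabels:
      tenant: tenant-a
```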
Using the standalone FluentdConfig CRD. This method is only available in Logging operator version 4.5 and newer, and the specification of the CRD is compatible with the spec.fluentd configuration method. That way you can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
The standalone FluentdConfig is a namespaced resource that allows the configuration of the Fluentd aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
For the detailed list of available parameters, see FluentdSpec.
Migrating from spec.fluentd to FluentdConfig
The standalone FluentdConfig CRD is only available in Logging operator version 4.5 and newer. Its specification and logic is identical with the spec.fluentd configuration method. Using the FluentdConfig CRD allows you to remove the spec.fluentd section from the Logging CRD, which has the following benefits.
+
RBAC control over the FluentdConfig CRD, so you can have separate roles that can manage the Logging resource and the FluentdConfig resource (that is, the Fluentd deployment).
It reduces the size of the Logging resource, which can grow big enough to reach the annotation size limit in certain scenarios (e.g. when using kubectl apply).
You can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
To migrate your spec.fluentd configuration from the Logging resource to a separate FluentdConfig CRD, complete the following steps.
+
+
Open your Logging resource and find the spec.fluentd section. For example:
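A minimal sketch of a Logging resource with a spec.fluentd section (the name and scaling settings are only examples; your own resource will contain your own settings):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example-logging-resource
spec:
  controlNamespace: logging
  fluentd:
    scaling:
      replicas: 2
```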
Create a new FluentdConfig CRD. For the value of metadata.name, use the name of the Logging resource, for example:
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentdConfig
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
  # Use the control namespace of the logging resource
  namespace: logging
```
+
+
Copy the spec.fluentd section from the Logging resource into the spec section of the FluentdConfig CRD, then fix the indentation. For example:
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentdConfig
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
  # Use the control namespace of the logging resource
  namespace: logging
spec:
  scaling:
    replicas: 2
```
+
+
Delete the spec.fluentd section from the Logging resource, then apply the Logging and the FluentdConfig CRDs.
Using the standalone FluentdConfig resource
The standalone FluentdConfig is a namespaced resource that allows the configuration of the Fluentd aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
A Logging resource can have only one FluentdConfig at a time. The controller registers the active FluentdConfig resource into the Logging resource’s status under fluentdConfigName, and also registers the Logging resource name under logging in the FluentdConfig resource’s status, for example:
```
kubectl get logging example -o jsonpath='{.status}' | jq .
{
  "configCheckResults": {
    "ac2d4553": true
  },
  "fluentdConfigName": "example"
}
```
+
```
kubectl get fluentdconfig example -o jsonpath='{.status}' | jq .
{
  "active": true,
  "logging": "example"
}
```
+
If there is a conflict, the controller adds a problem to both resources so that both the operations team and the tenant users can notice the problem. For example, if a FluentdConfig is already registered to a Logging resource and you create another FluentdConfig resource in the same namespace, then the first FluentdConfig is left intact, while the second one should have the following status:
```
kubectl get fluentdconfig example2 -o jsonpath='{.status}' | jq .
{
  "active": false,
  "problems": [
    "logging already has a detached fluentd configuration, remove excess configuration objects"
  ],
  "problemsCount": 1
}
```
+
The Logging resource will also show the issue:
```
kubectl get logging example -o jsonpath='{.status}' | jq .
{
  "configCheckResults": {
    "ac2d4553": true
  },
  "fluentdConfigName": "example",
  "problems": [
    "multiple fluentd configurations found, couldn't associate it with logging"
  ],
  "problemsCount": 1
}
```
+
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd:
    disablePvc: true
    bufferStorageVolume:
      hostPath:
        path: "" # leave it empty to automatically generate: /opt/logging-operator/default-logging-simple/default-logging-simple-fluentd-buffer
  fluentbit: {}
  controlNamespace: logging
```
+
FluentOutLogrotate
The following snippet redirects Fluentd’s stdout to a file and configures rotation settings.
This mechanism was used prior to version 4.4 to avoid Fluent-bit rereading Fluentd’s logs and causing an exponentially growing amount of redundant logs.
Example configuration used by the operator in version 4.3 and earlier (keep 10 files, 10M each):
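A sketch of what that configuration looked like, assuming the fluentOutLogrotate field with enabled, path, age, and size parameters (check FluentdSpec for the exact field names and types):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentd:
    fluentOutLogrotate:
      enabled: true
      path: /fluentd/log/out
      age: "10"        # keep 10 files
      size: "10485760" # 10M each
```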
Fluentd logs are now excluded using the fluentbit.io/exclude: "true" annotation.
Scaling
You can scale the Fluentd deployment manually by changing the number of replicas in the fluentd section of the Logging custom resource. For example:
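A minimal sketch that scales Fluentd to three replicas (the resource name is only an example):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentd:
    scaling:
      replicas: 3
```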
While you can scale down the Fluentd deployment by decreasing the number of replicas in the fluentd section of the Logging custom resource, it won't automatically be graceful, as the controller stops the extra replica pods without waiting for any remaining buffers to be flushed.
+You can enable graceful draining in the scaling subsection:
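A minimal sketch of the relevant fragment, assuming the drain option lives under spec.fluentd.scaling:

```yaml
spec:
  fluentd:
    scaling:
      replicas: 1
      drain:
        enabled: true
```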
When graceful draining is enabled, the operator starts drainer jobs for any undrained volumes.
+The drainer job flushes any remaining buffers before terminating, and the operator marks the associated volume (the PVC, actually) as drained until it gets used again.
+The drainer job has a template very similar to that of the Fluentd deployment with the addition of a sidecar container that oversees the buffers and signals Fluentd to terminate when all buffers are gone.
+Pods created by the job are labeled as not to receive any further logs, thus buffers will clear out eventually.
If you want, you can specify a custom drainer job sidecar image in the drain subsection:
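A sketch of the relevant fragment; the image repository and tag below are placeholders for your own drainer sidecar image:

```yaml
spec:
  fluentd:
    scaling:
      drain:
        enabled: true
        image:
          repository: <your-drain-watch-image-repository>
          tag: <your-drain-watch-image-tag>
```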
In addition to the drainer job, the operator also creates a placeholder pod with the same name as the terminated pod of the Fluentd deployment to keep the deployment from recreating that pod which would result in concurrent access of the volume.
The placeholder pod just runs a pause container and goes away as soon as the job has finished successfully, or when the deployment is scaled back up, because then the newly created replica takes over the remaining buffers and explicit flushing is no longer necessary.
You can mark volumes that should be ignored by the drain logic by adding the label logging.banzaicloud.io/drain: no to the PVC.
Autoscaling with HPA
To configure autoscaling of the Fluentd deployment using Horizontal Pod Autoscaler (HPA), complete the following steps.
Install Prometheus and the Prometheus Adapter if you don’t already have them installed on the cluster. Adjust the default Prometheus address values as needed for your environment (set prometheus.url, prometheus.port, and prometheus.path to the appropriate values).
+
(Optional) Install metrics-server to access basic metrics. If the readiness of the metrics-server pod fails with HTTP 500, try adding the --kubelet-insecure-tls flag to the container.
+
If you want to use a custom metric for autoscaling Fluentd and the necessary metric is not available in Prometheus, define a Prometheus recording rule:
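One possible sketch of such a recording rule, assuming the buffer volume metrics sidecar exposes node-exporter style node_filesystem_* metrics for the /buffers mount; adjust the expression to whatever metrics are actually available in your Prometheus:

```yaml
groups:
  - name: fluentd-buffer
    rules:
      - record: buffer_space_usage_ratio
        expr: |
          1 - (
            node_filesystem_avail_bytes{mountpoint="/buffers"}
            / node_filesystem_size_bytes{mountpoint="/buffers"}
          )
```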
Alternatively, you can define the derived metric as a configuration rule in the Prometheus Adapter’s config map.
+
If it’s not already installed, install the logging-operator and configure a logging resource with at least one flow. Make sure that the logging resource has buffer volume metrics monitoring enabled under spec.fluentd:
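A minimal sketch of the relevant fragment, assuming the bufferVolumeMetrics field with a serviceMonitor switch (check FluentdSpec for the exact field names):

```yaml
spec:
  fluentd:
    bufferVolumeMetrics:
      serviceMonitor: true
```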
Verify that the custom metric is available by running:
kubectl get --raw '/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/buffer_space_usage_ratio'
+
+
The logging-operator enforces the replica count of the stateful set based on the logging resource's replica count, even if it's not set explicitly. To allow HPA to control the replica count of the stateful set, this coupling has to be severed.
Currently, the only way to do that is by deleting the logging-operator deployment.
+
Create a HPA resource. The following example tries to keep the average buffer volume usage of Fluentd instances at 80%.
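A sketch of such an HPA, assuming a Logging resource named example whose Fluentd statefulset is therefore called example-fluentd, and the buffer_space_usage_ratio custom metric defined earlier:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fluentd-buffer-hpa
  namespace: logging
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: example-fluentd
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: buffer_space_usage_ratio
        target:
          type: AverageValue
          averageValue: 800m   # 0.8, that is, 80% average buffer volume usage
```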
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container. You can configure a probe for Fluentd in the livenessProbe section of the Logging custom resource. For example:
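A minimal sketch of a custom liveness probe (the exec command here is only an illustrative placeholder check on the buffer directory):

```yaml
spec:
  fluentd:
    livenessProbe:
      exec:
        command:
          - /bin/sh
          - -c
          - test -d /buffers   # hypothetical check, replace with your own
      initialDelaySeconds: 30
      periodSeconds: 15
```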
You can deploy custom images by overriding the default images using the following parameters in the fluentd or fluentbit sections of the logging resource.
| Name | Type | Default | Description |
|------|------|---------|-------------|
| repository | string | "" | Image repository |
| tag | string | "" | Image tag |
| pullPolicy | string | "" | Always, IfNotPresent, Never |
The following example deploys a custom fluentd image:
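A sketch of the relevant fragment; the repository and tag values are placeholders for your own image:

```yaml
spec:
  fluentd:
    image:
      repository: <your-fluentd-image-repository>
      tag: <your-fluentd-image-tag>
      pullPolicy: IfNotPresent
```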
Represents a host path mapped into a pod. If path is empty, it will automatically be set to /opt/logging-operator/<name of the logging CR>/<name of the volume>
Using the standalone syslogNGConfig CRD. This method is only available in Logging operator version 4.5 and newer, and the specification of the CRD is compatible with the spec.syslogNG configuration method. That way you can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
The standalone syslogNGConfig is a namespaced resource that allows the configuration of the syslog-ng aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
For the detailed list of available parameters, see SyslogNGSpec.
Migrating from spec.syslogNG to syslogNGConfig
The standalone syslogNGConfig CRD is only available in Logging operator version 4.5 and newer. Its specification and logic is identical with the spec.syslogNG configuration method. Using the syslogNGConfig CRD allows you to remove the spec.syslogNG section from the Logging CRD, which has the following benefits.
+
RBAC control over the syslogNGConfig CRD, so you can have separate roles that can manage the Logging resource and the syslogNGConfig resource (that is, the syslog-ng deployment).
It reduces the size of the Logging resource, which can grow big enough to reach the annotation size limit in certain scenarios (e.g. when using kubectl apply).
You can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
To migrate your spec.syslogNG configuration from the Logging resource to a separate syslogNGConfig CRD, complete the following steps.
+
+
Open your Logging resource and find the spec.syslogNG section. For example:
Create a new syslogNGConfig CRD. For the value of metadata.name, use the name of the Logging resource, for example:
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: syslogNGConfig
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
  # Use the control namespace of the logging resource
  namespace: logging
```
+
+
Copy the spec.syslogNG section from the Logging resource into the spec section of the syslogNGConfig CRD, then fix the indentation. For example:
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: syslogNGConfig
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
  # Use the control namespace of the logging resource
  namespace: logging
spec:
  scaling:
    replicas: 2
```
+
+
Delete the spec.syslogNG section from the Logging resource, then apply the Logging and the syslogNGConfig CRDs.
Using the standalone syslogNGConfig resource
The standalone syslogNGConfig is a namespaced resource that allows the configuration of the syslog-ng aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
A Logging resource can have only one syslogNGConfig at a time. The controller registers the active syslogNGConfig resource into the Logging resource’s status under syslogNGConfigName, and also registers the Logging resource name under logging in the syslogNGConfig resource’s status, for example:
```
kubectl get logging example -o jsonpath='{.status}' | jq .
{
  "configCheckResults": {
    "ac2d4553": true
  },
  "syslogNGConfigName": "example"
}
```
+
```
kubectl get syslogngconfig example -o jsonpath='{.status}' | jq .
{
  "active": true,
  "logging": "example"
}
```
+
If there is a conflict, the controller adds a problem to both resources so that both the operations team and the tenant users can notice the problem. For example, if a syslogNGConfig is already registered to a Logging resource and you create another syslogNGConfig resource in the same namespace, then the first syslogNGConfig is left intact, while the second one should have the following status:
```
kubectl get syslogngconfig example2 -o jsonpath='{.status}' | jq .
{
  "active": false,
  "problems": [
    "logging already has a detached syslog-ng configuration, remove excess configuration objects"
  ],
  "problemsCount": 1
}
```
+
The Logging resource will also show the issue:
```
kubectl get logging example -o jsonpath='{.status}' | jq .
{
  "configCheckResults": {
    "ac2d4553": true
  },
  "syslogNGConfigName": "example",
  "problems": [
    "multiple syslog-ng configurations found, couldn't associate it with logging"
  ],
  "problemsCount": 1
}
```
+
Volume mount for buffering
The following example sets a volume mount that syslog-ng can use for buffering messages on the disk (if Disk buffer is configured in the output).
To adjust the CPU and memory limits and requests of the pods managed by Logging operator, see CPU and memory requirements.
Probe
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container. You can configure a probe for syslog-ng in the livenessProbe section of the Logging custom resource. For example:
Fluent Bit is an open source and multi-platform Log Processor and Forwarder which allows you to collect data/logs from different sources, unify and send them to multiple destinations.
Logging operator uses Fluent Bit as a log collector agent: Logging operator deploys Fluent Bit to your Kubernetes nodes where it collects and enriches the local logs and transfers them to a log forwarder instance.
Ways to configure Fluent Bit
There are three ways to configure the Fluent Bit daemonset:
+
Using the spec.fluentbit section of The Logging custom resource. This method is deprecated and will be removed in the next major release.
Using the standalone FluentbitAgent CRD. This method is only available in Logging operator version 4.2 and newer, and the specification of the CRD is compatible with the spec.fluentbit configuration method.
Using the spec.nodeagents section of The Logging custom resource. This method is deprecated and will be removed from the Logging operator. (Note that this configuration isn’t compatible with the FluentbitAgent CRD.)
For the detailed list of available parameters, see FluentbitSpec.
Migrating from spec.fluentbit to FluentbitAgent
The standalone FluentbitAgent CRD is only available in Logging operator version 4.2 and newer. Its specification and logic is identical with the spec.fluentbit configuration method. Using the FluentbitAgent CRD allows you to remove the spec.fluentbit section from the Logging CRD, which has the following benefits.
+
RBAC control over the FluentbitAgent CRD, so you can have separate roles that can manage the Logging resource and the FluentbitAgent resource (that is, the Fluent Bit deployment).
It reduces the size of the Logging resource, which can grow big enough to reach the annotation size limit in certain scenarios (e.g. when using kubectl apply).
Create a new FluentbitAgent CRD. For the value of metadata.name, use the name of the Logging resource, for example:
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
```
+
+
Copy the spec.fluentbit section from the Logging resource into the spec section of the FluentbitAgent CRD, then fix the indentation.
+
Specify the paths for the positiondb and the bufferStorageVolume. If you used the default settings in the spec.fluentbit configuration, set empty strings as paths, like in the following example. This is needed to retain the existing buffers of the deployment, otherwise data loss may occur.
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
spec:
  inputTail:
    storage.type: filesystem
  positiondb:
    hostPath:
      path: ""
  bufferStorageVolume:
    hostPath:
      path: ""
```
+
+
Delete the spec.fluentbit section from the Logging resource, then apply the Logging and the FluentbitAgent CRDs.
Examples
The following sections show you some examples on configuring Fluent Bit. For the detailed list of available parameters, see FluentbitSpec.
+
Note: These examples use the traditional method that configures the Fluent Bit deployment using the spec.fluentbit section of The Logging custom resource.
Filters
Kubernetes (filterKubernetes)
Fluent Bit Kubernetes Filter allows you to enrich your log files with Kubernetes metadata. For example:
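A minimal sketch that enables the filter with its default settings (for the available parameters, see FluentbitSpec):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentbit:
    filterKubernetes: {}
  fluentd: {}
```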
For the detailed list of available parameters for this plugin, see InputTail.
+More Info.
Buffering
Buffering in Fluent Bit places the processed data into a temporary location until it is sent to Fluentd. By default, the Logging operator sets storage.path to /buffers and leaves the Fluent Bit defaults for the other options.
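A minimal sketch of the relevant fragment, assuming the bufferStorage and inputTail fields shown elsewhere in this document; storage.type filesystem switches the tail input from in-memory to disk-based buffering:

```yaml
spec:
  fluentbit:
    bufferStorage:
      storage.path: /buffers
    inputTail:
      storage.type: filesystem
```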
Represents a host path mapped into a pod. If path is empty, it will automatically be set to /opt/logging-operator/<name of the logging CR>/<name of the volume>
To adjust the CPU and memory limits and requests of the pods managed by Logging operator, see CPU and memory requirements.
Probe
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container. You can configure a probe for Fluent Bit in the livenessProbe section of the Logging custom resource. For example:
There are at least two different use cases where you might need multiple sets of node agents running with different configurations while still forwarding logs to the same aggregator.
One specific example is when there is a need for a configuration change in a rolling upgrade manner. As new nodes come up, they need to run with a new configuration, while old nodes use the previous configuration.
The other use case is when a cluster has different node groups, for example, for multitenancy reasons. In that case, you might need different Fluent Bit configurations on the separate node groups.
Starting with Logging operator version 4.2, you can do that by using the FluentbitAgent CRD. This allows you to implement hard multitenancy on the node group level.
To configure multiple FluentbitAgent CRDs for a cluster, complete the following steps.
+
Note: The examples refer to a scenario where you have two node groups that have the Kubernetes labels nodeGroup=A and nodeGroup=B. These labels are fictional and are used only as examples. Node labels are not available in the log metadata; to have similar labels, you have to apply the node labels directly to the pods. How to do that is beyond the scope of this guide (for example, you can use a policy engine, like Kyverno).
Edit your existing FluentbitAgent CRD, and set the spec.nodeSelector field so it applies only to the node group you want to apply this Fluent Bit configuration on, for example, nodes that have the label nodeGroup=A. For details, see nodeSelector in the Kubernetes documentation.
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  # Use the same name as the logging resource does
  name: multi
spec:
  nodeSelector:
    nodeGroup: "A"
```
+
+
Note: If your Logging resource has its spec.loggingRef parameter set, set the same value in the spec.loggingRef parameter of the FluentbitAgent resource.
Create a new FluentbitAgent CRD, and set the spec.nodeSelector field so it applies only to the node group you want to apply this Fluent Bit configuration on, for example, nodes that have the label nodeGroup=B. For details, see nodeSelector in the Kubernetes documentation.
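A sketch of the second agent; the resource name multi-b is only an example:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: multi-b
spec:
  nodeSelector:
    nodeGroup: "B"
```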
Note: If your Logging resource has its spec.loggingRef parameter set, set the same value in the spec.loggingRef parameter of the FluentbitAgent resource.
Verify that the Logging operator pod is running. Issue the following command: kubectl get pods | grep logging-operator
The output should include a running pod, for example:
NAME READY STATUS RESTARTS AGE
+logging-demo-log-generator-6448d45cd9-z7zk8 1/1 Running 0 24m
+
+
Check the status of your resources. Beginning with Logging Operator 3.8, all custom resources have a Status and a Problems field. In a healthy system, the Problems field of the resources is empty, for example:
kubectl get clusteroutput -A
+
Sample output:
NAMESPACE   NAME      ACTIVE   PROBLEMS
default     nullout   true
+
The ACTIVE column indicates that the ClusterOutput has successfully passed the configcheck and is present in the current fluentd configuration. When no errors are reported, the PROBLEMS column is empty.
Take a look at another example, in which we have an incorrect ClusterFlow.
kubectl get clusterflow -o wide
+
Sample output:
NAME      ACTIVE   PROBLEMS
all-log   true
nullout   false    1
+
You can see that the nullout ClusterFlow is inactive and there is 1 problem with the configuration. To display the problem, check the status field of the object, for example:
kubectl get clusterflow nullout -o=jsonpath='{.status}' | jq
+
Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing.
+
7.1.1 - Troubleshooting Fluent Bit
The following sections help you troubleshoot the Fluent Bit component of the Logging operator.
Check the Fluent Bit daemonset
Verify that the Fluent Bit daemonset is available. Issue the following command: kubectl get daemonsets
+The output should include a Fluent Bit daemonset, for example:
NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
logging-demo-fluentbit   1         1         1       1            1           <none>          110s
+
Check the Fluent Bit configuration
You can display the current configuration of the Fluent Bit daemonset using the following command:
+kubectl get secret logging-demo-fluentbit -o jsonpath="{.data['fluent-bit\.conf']}" | base64 --decode
All Fluent Bit image tags have a debug version marked with the -debug suffix. You can install this debug version using the following command:
+kubectl edit loggings.logging.banzaicloud.io logging-demo
After deploying the debug version, you can kubectl exec into the pod using sh and look around. For example: kubectl exec -it logging-demo-fluentbit-778zg sh
Check the queued log messages
You can check the buffer directory if Fluent Bit is configured to buffer queued log messages to disk instead of in memory. (You can configure it through the InputTail fluentbit config, by setting the storage.type field to filesystem.)
kubectl exec -it logging-demo-fluentbit-9dpzg ls /buffers
Getting Support
If you encounter any problems that the documentation does not address, file an issue or talk to us on Discord or on the CNCF Slack.
Before asking for help, prepare the following information to make troubleshooting faster:
+
Logging operator version
kubernetes version
helm/chart version (if you installed the Logging operator with helm)
Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing.
+
7.1.2 - Troubleshooting Fluentd
The following sections help you troubleshoot the Fluentd statefulset component of the Logging operator.
Check Fluentd pod status (statefulset)
Verify that the Fluentd statefulset is available using the following command: kubectl get statefulsets
Expected output:
NAME READY AGE
+logging-demo-fluentd 1/1 1m
+
ConfigCheck
The Logging operator has a builtin mechanism that validates the generated fluentd configuration before applying it to fluentd. You should be able to see the configcheck pod and its log output. The result of the check is written into the status field of the corresponding Logging resource.
In case the operator is stuck in an error state caused by a failed configcheck, restore the previous configuration by modifying or removing the invalid resources to the point where the configcheck pod is finally able to complete successfully.
Check Fluentd configuration
Use the following command to display the configuration of Fluentd:
+kubectl get secret logging-demo-fluentd-app -o jsonpath="{.data['fluentd\.conf']}" | base64 --decode
Up until Logging operator version 4.3, Fluentd logs were written to the container filesystem; starting with version 4.4, they are written to stdout.
See FluentOutLogrotate for why this was changed and how you can re-enable the old behavior if needed.
+
Tip: If the logs include the error="can't create buffer file ... error message, Fluentd can’t create the buffer file at the specified location. This can mean for example that the disk is full, the filesystem is read-only, or some other permission error. Check the buffer-related settings of your Fluentd configuration.
Set stdout as an output
You can use an stdout filter at any point in the flow to dump the log messages to the stdout of the Fluentd container. For example:
+kubectl edit loggings.logging.banzaicloud.io logging-demo
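A minimal sketch of a Flow that uses the stdout filter (the flow name and the referenced output are only examples):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: debug-flow
  namespace: default
spec:
  filters:
    - stdout: {}
  match:
    - select: {}
  localOutputRefs:
    - my-output-sample
```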
kubectl exec -it logging-demo-fluentd-0 ls /buffers
Defaulting container name to fluentd.
+Use 'kubectl describe pod/logging-demo-fluentd-0 -n logging' to see all of the containers in this pod.
+logging_logging-demo-flow_logging-demo-output-minio_s3.b598f7eb0b2b34076b6da13a996ff2671.buffer
+logging_logging-demo-flow_logging-demo-output-minio_s3.b598f7eb0b2b34076b6da13a996ff2671.buffer.meta
+
Getting Support
If you encounter any problems that the documentation does not address, file an issue or talk to us on Discord or on the CNCF Slack.
Before asking for help, prepare the following information to make troubleshooting faster:
+
Logging operator version
kubernetes version
helm/chart version (if you installed the Logging operator with helm)
Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing.
+
7.1.3 - Troubleshooting syslog-ng
The following sections help you troubleshoot the syslog-ng statefulset component of the Logging operator.
Check syslog-ng pod status (statefulset)
Verify that the syslog-ng statefulset is available using the following command: kubectl get statefulsets
Expected output:
NAME READY AGE
+logging-demo-syslogng 1/1 1m
+
ConfigCheck
The Logging operator has a builtin mechanism that validates the generated syslog-ng configuration before applying it to syslog-ng. You should be able to see the configcheck pod and its log output. The result of the check is written into the status field of the corresponding Logging resource.
In case the operator is stuck in an error state caused by a failed configcheck, restore the previous configuration by modifying or removing the invalid resources to the point where the configcheck pod is finally able to complete successfully.
Check syslog-ng configuration
Use the following command to display the configuration of syslog-ng:
+kubectl get secret logging-demo-syslogng-app -o jsonpath="{.data['syslogng\.conf']}" | base64 --decode
Getting Support
If you encounter any problems that the documentation does not address, file an issue or talk to us on Discord or on the CNCF Slack.
Before asking for help, prepare the following information to make troubleshooting faster:
+
Logging operator version
kubernetes version
helm/chart version (if you installed the Logging operator with helm)
Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing.
+
7.2 - Monitor your logging pipeline with Prometheus Operator
You can configure the Logging operator to expose metrics endpoints for Fluentd, Fluent Bit, and syslog-ng using ServiceMonitor resources. That way, a Prometheus operator running in the same cluster can automatically fetch your logging metrics.
Metrics Variables
You can configure the following metrics-related options in the spec.fluentd.metrics, spec.syslogNG.metrics, and spec.fluentbit.metrics sections of your Logging resource.
| Variable Name | Type | Required | Default | Description |
|---------------|------|----------|---------|-------------|
| interval | string | No | "15s" | Scrape Interval |
| timeout | string | No | "5s" | Scrape Timeout |
| port | int | No | - | Metrics Port |
| path | int | No | - | Metrics Path |
| serviceMonitor | bool | No | false | Enable to create ServiceMonitor for Prometheus operator |
Prometheus Operator Documentation
The prometheus-operator installation may take a few more minutes, so please be patient.
The logging-operator metrics function depends on the prometheus-operator's resources.
If those resources do not exist in the cluster, the logging-operator may malfunction.
Install Logging Operator with Helm
+
+
Install the Logging operator into the logging namespace:
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
This section describes how to set alerts for your logging infrastructure. Alternatively, you can enable the default alerting rules that are provided by the Logging operator.
+
Note: Alerting based on the contents of the collected log messages is not covered here.
Prerequisites
Using alerting rules requires the following:
+
Logging operator 3.14.0 or newer installed on the cluster.
syslog-ng is supported only in Logging operator 4.0 or newer.
Enable the default alerting rules
Logging operator comes with a number of default alerting rules that help you monitor your logging environment and ensure that it’s working properly. To enable the default rules, complete the following steps.
The number of Fluent Bit errors or retries is high
For the Fluentd and syslog-ng log forwarders:
+
Prometheus cannot access the log forwarder node
The buffers of the log forwarder are filling up quickly
Traffic to the log forwarder is increasing at a high rate
The number of errors or retries is high on the log forwarder
The buffers are over 90% full
Currently, you cannot modify the default alerting rules, because they are generated from the source files. For the detailed list of alerts, see the source code:
For example, the Logging operator creates the following alerting rule to detect if a Fluentd node is down:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: logging-demo-fluentd-metrics
  namespace: logging
spec:
  groups:
    - name: fluentd
      rules:
        - alert: FluentdNodeDown
          annotations:
            description: Prometheus could not scrape {{ "{{ $labels.job }}" }} for more than 30 minutes
            summary: fluentd cannot be scraped
          expr: up{job="logging-demo-fluentd-metrics", namespace="logging"} == 0
          for: 10m
          labels:
            service: fluentd
            severity: critical
```
+
On the Prometheus web interface, this rule looks like:
+
7.4 - Readiness probe
This section describes how to configure readiness probes for your Fluentd and syslog-ng pods. If you don’t configure custom readiness probes, Logging operator uses the default probes.
Prerequisites
+
Configuring readiness probes requires Logging operator 3.14.0 or newer installed on the cluster.
+
syslog-ng is supported only in Logging operator 4.0 or newer.
Overview of default readiness probes
By default, Logging operator performs the following readiness checks:
+
Number of buffer files is too high (higher than 5000)
Fluentd buffers are over 90% full
syslog-ng buffers are over 90% full
The parameters of the readiness probes and of pod failure handling are set using the usual Kubernetes probe configuration parameters. Instead of the Kubernetes defaults, the Logging operator uses the following values for these parameters:
Currently, you cannot modify the default readiness probes, because they are generated from the source files. For the detailed list of readiness probes, see the Default readiness probes. However, you can customize their values in the Logging custom resource, separately for the Fluentd and syslog-ng log forwarder. For example:
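A minimal sketch of such a customization, assuming the readinessDefaultCheck field (check FluentdSpec and SyslogNGSpec for the exact field names and defaults):

```yaml
spec:
  fluentd:
    readinessDefaultCheck:
      bufferFileNumber: true
      bufferFileNumberMax: 5000
      bufferFreeSpace: true
      bufferFreeSpaceThreshold: 90
```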
The Logging operator applies the following readiness probe by default:
```yaml
readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - FREESPACE_THRESHOLD=90
      - FREESPACE_CURRENT=$(df -h $BUFFER_PATH | grep / | awk '{ print $5}' | sed 's/%//g')
      - if [ "$FREESPACE_CURRENT" -gt "$FREESPACE_THRESHOLD" ] ; then exit 1; fi
      - MAX_FILE_NUMBER=5000
      - FILE_NUMBER_CURRENT=$(find $BUFFER_PATH -type f -name *.buffer | wc -l)
      - if [ "$FILE_NUMBER_CURRENT" -gt "$MAX_FILE_NUMBER" ] ; then exit 1; fi
  failureThreshold: 1
  initialDelaySeconds: 5
  periodSeconds: 30
  successThreshold: 3
  timeoutSeconds: 3
```
+
Add custom readiness probes
You can add your own custom readiness probes to the spec.ReadinessProbe section of the logging custom resource. For details on the format of readiness probes, see the official Kubernetes documentation.
+
CAUTION:
If you set any custom readiness probes, they completely override the default probes.
+
+
7.5 - Collect Fluentd errors
This section describes how to collect Fluentd error messages (messages that are sent to the @ERROR label from another plugin in Fluentd).
+
Note: It depends on the specific plugin implementation what messages are sent to the @ERROR label. For example, a parsing plugin that fails to parse a line could send that line to the @ERROR label.
Prerequisites
Collecting Fluentd error messages requires Logging operator 3.14.0 or newer installed on the cluster.
Configure error output
To collect the error messages of Fluentd, complete the following steps.
+
+
Create a ClusterOutput that receives logs from every logging flow where error happens. For example, create a file output. For details on creating outputs, see Output and ClusterOutput.
Set the errorOutputRef in the Logging resource to your preferred ClusterOutput.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: default
  enableRecreateWorkloadOnImmutableFieldChange: true
  errorOutputRef: error-file
  fluentbit:
    bufferStorage: {}
    bufferStorageVolume:
      hostPath:
        path: ""
    filterKubernetes: {}
  # rest of the resource is omitted
+
You cannot apply filters for this specific error flow.
+
Apply the ClusterOutput and Logging to your cluster.
+
7.6 - Optimization
Watch specific resources
The Logging operator watches resources in all namespaces, which is required because it manages cluster-scoped objects, and also objects in multiple namespaces.
However, in a large-scale infrastructure where the number of resources is large, it makes sense to limit the scope of resources monitored by the Logging operator: this saves a considerable amount of memory and avoids unnecessary container restarts.
Starting with Logging operator version 3.12.0, this is now available by passing the following command-line arguments to the operator.
+
watch-namespace: Watch only objects in this namespace. Note that even if the watch-namespace option is set, the operator must watch certain objects (like Flows and Outputs) in every namespace.
watch-logging-name: Logging resource name to optionally filter the list of watched objects based on which logging they belong to by checking the app.kubernetes.io/managed-by label.
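For example, the following is an illustrative sketch of passing these arguments to the operator container. The Deployment layout, container name, and exact flag syntax are assumptions; set the values through your Helm chart or manifest as appropriate for your installation.

# Excerpt from the Logging operator Deployment (layout is illustrative)
containers:
  - name: logging-operator
    args:
      - --watch-namespace=logging
      - --watch-logging-name=example-logging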
+
7.7 - Scaling
+
Note: When multiple instances send logs to the same output, the output can receive chunks of messages out of order. Some outputs tolerate this (for example, Elasticsearch), some do not, some require fine tuning (for example, Loki).
Scaling Fluentd
In a large-scale infrastructure the logging components can get high load as well. The typical sign of this is when fluentd cannot handle its buffer directory size growth for more than the configured or calculated (timekey + timekey_wait) flush interval. In this case, you can scale the fluentd statefulset.
The Logging Operator supports scaling a Fluentd aggregator statefulset up and down. Scaling statefulset pods down is challenging, because we need to take care of the underlying volumes with buffered data that hasn’t been sent, but the Logging Operator supports that use case as well.
The details for that, and how to configure an HPA, are described in the following documents:
syslog-ng can be scaled up as well, but persistent disk buffers are not processed automatically when scaling the statefulset down; currently, that is a manual process.
+
7.8 - CPU and memory requirements
The resource requirements and limits of your Logging operator deployment must match the size of your cluster and the logging workloads. By default, the Logging operator uses the following configuration.
This documentation helps you set up a developer environment and write plugins for the Logging operator.
Setting up Kind
+
+
Install Kind on your computer
go get sigs.k8s.io/kind@v0.5.1
+
+
Create cluster
kind create cluster --name logging
+
+
Install prerequisites (this is a Kubebuilder makefile that will generate and install crds)
make install
+
+
Run the Operator
go run main.go
+
Writing a plugin
To add a plugin to the logging operator you need to define the plugin struct.
+
Note: Place your plugin in the corresponding directory pkg/sdk/logging/model/filter or pkg/sdk/logging/model/output
type MyExampleOutput struct {
	// Path that is required for the plugin
	Path string `json:"path,omitempty"`
}
+
The plugin uses the JSON tags to parse and validate the configuration. Without tags, the configuration is not valid. The Fluentd parameter name must match the JSON tag. Don't forget to use omitempty for non-required parameters.
Implement ToDirective
To render the configuration, you have to implement the ToDirective function.
The operator parses the docstrings to generate the documentation.
...
// AWS access key id
AwsAccessKey *secret.Secret `json:"aws_key_id,omitempty"`
...
+
This generates the following Markdown:

| Variable Name | Default | Applied function |
| ------------- | ------- | ----------------- |
| AwsAccessKey | - | AWS access key id |
You can hint default values in the docstring via (default: value). This is useful if you don't want to set the default explicitly with a tag. However, during rendering, defaults set in tags have priority over the docstring.
...
// The format of S3 object keys (default: %{path}%{time_slice}_%{index}.%{file_extension})
S3ObjectKeyFormat string `json:"s3_object_key_format,omitempty"`
...
+
Special docstrings

// +docName:"Title for the plugin section"
// +docLink:"Buffer,./buffer.md"

You can declare the document title and description above the type _doc* interface{} variable declaration.
Example document headings:

// +docName:"Amazon S3 plugin for Fluentd"
// **s3** output plugin buffers event logs in local file and upload it to S3 periodically. This plugin splits files exactly by using the time of event logs (not the time when the logs are received). For example, a log '2011-01-02 message B' is reached, and then another log '2011-01-03 message B' is reached in this order, the former one is stored in "20110102.gz" file, and latter one in "20110103.gz" file.
type _docS3 interface{}
+
Run the following command to generate updated docs and CRDs for your new plugin.
make generate
+
+
10 - Commercial support for the Logging operator
If you encounter problems while using the Logging operator that the documentation does not address, open an issue or talk to us on Discord or on the CNCF Slack.
The following companies provide commercial support for the Logging operator:
+
13 - Community
If you have questions about Logging operator or its components, get in touch with us on Slack!
You can configure the various features and parameters of the Logging operator using Custom Resource Definitions (CRDs).
You can configure the Logging operator using the following Custom Resource Definitions.
+
logging - The logging resource defines the logging infrastructure (the log collectors and forwarders) for your cluster that collects and transports your log messages. It can also contain configurations for Fluent Bit, Fluentd, and syslog-ng. (Starting with Logging operator version 4.5, you can also configure Fluent Bit, Fluentd, and syslog-ng as separate resources.)
CRDs for Fluentd:
+
+
output - Defines a Fluentd Output for a logging flow, where the log messages are sent using Fluentd. This is a namespaced resource. See also clusteroutput. To configure syslog-ng outputs, see SyslogNGOutput.
flow - Defines a Fluentd logging flow using filters and outputs. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also clusterflow. To configure syslog-ng flows, see SyslogNGFlow.
clusteroutput - Defines a Fluentd output that is available from all flows and clusterflows. The operator evaluates clusteroutputs in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true.
clusterflow - Defines a Fluentd logging flow that collects logs from all namespaces by default. The operator evaluates clusterflows in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true. To configure syslog-ng clusterflows, see SyslogNGClusterFlow.
CRDs for syslog-ng (these resources work like their Fluentd counterparts, but are tailored to the features available via syslog-ng):
+
+
SyslogNGOutput - Defines a syslog-ng Output for a logging flow, where the log messages are sent using syslog-ng. This is a namespaced resource. See also SyslogNGClusterOutput. To configure Fluentd outputs, see output.
SyslogNGFlow - Defines a syslog-ng logging flow using filters and outputs. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also SyslogNGClusterFlow. To configure Fluentd flows, see flow.
SyslogNGClusterOutput - Defines a syslog-ng output that is available from all flows and clusterflows. The operator evaluates clusteroutputs in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true.
SyslogNGClusterFlow - Defines a syslog-ng logging flow that collects logs from all namespaces by default. The operator evaluates clusterflows in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true. To configure Fluentd clusterflows, see clusterflow.
The following sections show examples on configuring the various components to configure outputs and to filter and route your log messages to these outputs. For a list of available CRDs, see Custom Resource Definitions.
+
1 - Which log forwarder to use
The Logging operator supports Fluentd and syslog-ng (via the AxoSyslog syslog-ng distribution) as log forwarders. The log forwarder instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. Which one to use depends on your logging requirements.
The following points help you decide which forwarder to use.
+
The forwarders support different outputs. If the output you want to use is supported only by one forwarder, use that.
If the volume of incoming log messages is high, use syslog-ng, as its multithreaded processing provides higher performance.
If you have lots of logging flows or need complex routing or log message processing, use syslog-ng.
+
Note: Depending on which log forwarder you use, some of the CRDs you have to create and configure are different.
syslog-ng is supported only in Logging operator 4.0 or newer.
+
2 - Output and ClusterOutput
Outputs are the destinations where your log forwarder sends the log messages, for example, to Sumo Logic, or to a file. Depending on which log forwarder you use, you have to configure different custom resources.
Fluentd outputs
+
The Output resource defines an output where your Fluentd Flows can send the log messages. The output is a namespaced resource which means only a Flow within the same namespace can access it. You can use secrets in these definitions, but they must also be in the same namespace.
+Outputs are the final stage for a logging flow. You can define multiple outputs and attach them to multiple flows.
ClusterOutput defines an Output without namespace restrictions. It is only evaluated in the controlNamespace by default unless allowClusterResourcesFromAllNamespaces is set to true.
+
Note: Flow can be connected to Output and ClusterOutput, but ClusterFlow can be attached only to ClusterOutput.
+
For the details of the supported output plugins, see Fluentd outputs.
For the details of Output custom resource, see OutputSpec.
For the details of ClusterOutput custom resource, see ClusterOutput.
Fluentd S3 output example
The following snippet defines an Amazon S3 bucket as an output.
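The original snippet is not reproduced in this excerpt; the following is a minimal sketch of such an Output. The bucket name, region, path, and secret names are placeholders, and the buffer settings are illustrative.

apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output
  namespace: default
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-secret          # placeholder secret in the same namespace
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsSecretAccessKey
    s3_bucket: example-logging-bucket
    s3_region: eu-west-1
    path: logs/${tag}/%Y/%m/%d/
    buffer:
      timekey: 1m
      timekey_wait: 10s
      timekey_use_utc: true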
The SyslogNGOutput resource defines an output for syslog-ng where your SyslogNGFlows can send the log messages. The output is a namespaced resource which means only a SyslogNGFlow within the same namespace can access it. You can use secrets in these definitions, but they must also be in the same namespace.
Outputs are the final stage for a logging flow. You can define multiple SyslogNGOutputs and attach them to multiple SyslogNGFlows.
SyslogNGClusterOutput defines a SyslogNGOutput without namespace restrictions. It is only evaluated in the controlNamespace by default unless allowClusterResourcesFromAllNamespaces is set to true.
+
Note: SyslogNGFlow can be connected to SyslogNGOutput and SyslogNGClusterOutput, but SyslogNGClusterFlow can be attached only to SyslogNGClusterOutput.
RFC5424 syslog-ng output example
The following example defines a simple SyslogNGOutput resource that sends the logs to the specified syslog server using the RFC5424 Syslog protocol in a TLS-encrypted connection.
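The original example is not reproduced in this excerpt; the following is a minimal sketch of such a SyslogNGOutput. The host, port, and secret names are placeholders, and the TLS field layout should be verified against the SyslogNGOutputSpec reference.

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: syslog-output
  namespace: default
spec:
  syslog:
    host: 10.20.30.40              # placeholder syslog server address
    port: 601
    transport: tls
    tls:
      ca_file:
        mountFrom:
          secretKeyRef:
            name: tls-secret       # placeholder secret holding the CA certificate
            key: ca.crt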
For the details of the supported output plugins, see syslog-ng outputs.
For the details of SyslogNGOutput custom resource, see SyslogNGOutputSpec.
For the details of SyslogNGClusterOutput custom resource, see SyslogNGClusterOutput.
+
3 - Flow and ClusterFlow
Flows route the selected log messages to the specified outputs. Depending on which log forwarder you use, you can use different filters and outputs, and have to configure different custom resources.
Fluentd flows
Flow defines a logging flow for Fluentd with filters and outputs.
The Flow is a namespaced resource, so only logs from the same namespaces are collected. You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. (Match statements are evaluated in the order they are defined and processed only until the first matching select or exclude rule applies.) For detailed examples on using the match statement, see log routing.
You can define one or more filters within a Flow. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records.
+The filters in the flow are applied in the order in the definition. You can find the list of supported filters here.
At the end of the Flow, you can attach one or more outputs, which may also be Output or ClusterOutput resources.
+
Flow resources are namespaced, so the selector selects only Pod logs within the namespace.
ClusterFlow defines a Flow without namespace restrictions. It is also only effective in the controlNamespace.
ClusterFlow selects logs from ALL namespaces.
The following example transforms the log messages from the default namespace and sends them to an S3 output.
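The original example is not reproduced in this excerpt; the following is a minimal sketch of such a Flow. The filter choice and the output name are assumptions.

apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: default-flow
  namespace: default
spec:
  filters:
    - tag_normaliser: {}           # example transformation; any supported filter can be used
  match:
    - select: {}
  localOutputRefs:
    - s3-output                    # placeholder name of an Output or ClusterOutput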
Note: In a multi-cluster setup you cannot easily determine which cluster the logs come from. You can append your own labels to each log
+using the record modifier filter.
+
For the details of Flow custom resource, see FlowSpec.
For the details of ClusterFlow custom resource, see ClusterFlow.
SyslogNGFlow defines a logging flow for syslog-ng with filters and outputs.
syslog-ng is supported only in Logging operator 4.0 or newer.
The Flow is a namespaced resource, so only logs from the same namespaces are collected. You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. For detailed examples on using the match statement, see log routing with syslog-ng.
You can define one or more filters within a Flow. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records.
+The filters in the flow are applied in the order in the definition. You can find the list of supported filters here.
At the end of the Flow, you can attach one or more outputs, which may also be Output or ClusterOutput resources.
+
SyslogNGFlow resources are namespaced, the selector only selects Pod logs within the namespace.
+SyslogNGClusterFlow defines a SyslogNGFlow without namespace restrictions. It is also only effective in the controlNamespace.
+SyslogNGClusterFlow selects logs from ALL namespaces.
The following example selects only messages sent by the log-generator application and forwards them to a syslog output.
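The original example is not reproduced in this excerpt; the following is a minimal sketch. The label key and the output name are placeholders.

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGFlow
metadata:
  name: log-generator-flow
  namespace: default
spec:
  match:
    regexp:
      value: json.kubernetes.labels.app.kubernetes.io/name   # placeholder label key
      pattern: log-generator
      type: string
  localOutputRefs:
    - syslog-output                # placeholder SyslogNGOutput name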
4 - Routing your logs with Fluentd match directives
+
Note: This page describes routing logs with Fluentd. If you are using syslog-ng to route your log messages, see Routing your logs with syslog-ng.
The first step to process your logs is to select which logs go where.
+The Logging operator uses Kubernetes labels, namespaces and other metadata
+to separate different log flows.
Available routing metadata keys:

| Name | Type | Description | Empty |
| ---- | ---- | ----------- | ----- |
| namespaces | []string | List of matching namespaces | All namespaces |
| labels | map[string]string | Key-value pairs of labels | All labels |
| hosts | []string | List of matching hosts | All hosts |
| container_names | []string | List of matching containers (not Pods) | All containers |
Match statement
To select or exclude logs, you can use the match statement. Match is a collection of select and exclude expressions. In both expressions you can use the labels attribute to filter on the pods' labels. Moreover, in ClusterFlow you can use namespaces as a selecting or excluding criterion.
If you specify more than one label in a select or exclude expression, the labels have a logical AND connection between them. For example, an exclude expression with two labels excludes messages that have both labels. If you want an OR connection between labels, list them in separate expressions. For example, to exclude messages that have one of two specified labels, create a separate exclude expression for each label.
The select and exclude statements are evaluated in order!
Without at least one select criterion, no messages will be selected!
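For example, the following is a minimal sketch of a Flow that excludes logs carrying one label and selects logs from an application. The label keys and the output name are placeholders.

apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: routing-example
  namespace: default
spec:
  match:
    - exclude:
        labels:
          exclude-logs: "true"     # placeholder label
    - select:
        labels:
          app: nginx               # placeholder label
  localOutputRefs:
    - sample-output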
syslog-ng is supported only in Logging operator 4.0 or newer.
The first step to process your logs is to select which logs go where.
The match field of the SyslogNGFlow and SyslogNGClusterFlow resources define the routing rules of the logs.
+
Note: Fluentd can use only metadata to route the logs. With syslog-ng filter expressions, you can filter on both metadata and log content.
The syntax of syslog-ng match statements is slightly different from the Fluentd match statements.
Available routing metadata keys:

| Name | Type | Description | Empty |
| ---- | ---- | ----------- | ----- |
| namespaces | []string | List of matching namespaces | All namespaces |
| labels | map[string]string | Key-value pairs of labels | All labels |
| hosts | []string | List of matching hosts | All hosts |
| container_names | []string | List of matching containers (not Pods) | All containers |
Match statement
Match expressions select messages by applying patterns on the content or metadata of the messages. You can use simple string matching, and also complex regular expressions. You can combine matches using the and, or, and not boolean operators to create complex expressions to select or exclude messages as needed for your use case.
Currently, only a pattern matching function is supported (called match in syslog-ng parlance, but renamed to regexp in the CRD to avoid confusion).
The match field can have one of the following options:
+
+
regexp: A pattern that matches the value of a field or a templated value. For example:

match:
  regexp: <parameters>

and: Combines the nested match expressions with the logical AND operator.

match:
  and: <list of nested match expressions>

or: Combines the nested match expressions with the logical OR operator.

match:
  or: <list of nested match expressions>

not: Matches the logical NOT of the nested match expression.

match:
  not: <list of nested match expressions>
+
regexp patterns
The regexp field (called match in syslog-ng parlance, but renamed to regexp in the CRD to avoid confusion) defines the pattern that selects the matching messages. You can do two different kinds of matching:
+
Find a pattern in the value of a field of the messages, for example, to select the messages of a specific application. To do that, set the pattern and value fields (and optionally the type and flags fields).
Find a pattern in a template expression created from multiple fields of the message. To do that, set the pattern and template fields (and optionally the type and flags fields).
+
CAUTION:
You need to use the json. prefix in field names.
+
You can reference fields using the dot notation. For example, if the log contains {"kubernetes": {"namespace_name": "default"}}, then you can reference the namespace_name field using json.kubernetes.namespace_name.
The following example filters for specific Pod labels:
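The original example is not reproduced in this excerpt; the following is a minimal sketch that combines two label checks with the and operator. The label keys, pattern values, flow name, and output name are placeholders.

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGFlow
metadata:
  name: label-filter-example
  namespace: default
spec:
  match:
    and:
      - regexp:
          value: json.kubernetes.labels.app.kubernetes.io/instance   # placeholder label key
          pattern: one-eye-log-generator
          type: string
      - regexp:
          value: json.kubernetes.labels.app.kubernetes.io/name       # placeholder label key
          pattern: log-generator
          type: string
  localOutputRefs:
    - syslog-output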
The regexp field can have the following parameters:
pattern (string)
Defines the pattern to match against the messages. The type field determines how the pattern is interpreted (for example, string or regular expression).
value (string)
References a field of the message. The pattern is applied to the value of this field. If the value field is set, you cannot use the template field.
+
CAUTION:
You need to use the json. prefix in field names.
+
You can reference fields using the dot notation. For example, if the log contains {"kubernetes": {"namespace_name": "default"}}, then you can reference the namespace_name field using json.kubernetes.namespace_name.
template (string)
Specifies a template expression that combines fields. The pattern is matched against the value of these combined fields. If the template field is set, you cannot use the value field. For details on template expressions, see the syslog-ng documentation.
type (string)
Specifies how the pattern is interpreted. For details, see Types of regexp.
flags (list)
Specifies flags for the type field.
regexp types
By default, syslog-ng uses PCRE-style regular expressions. Since evaluating complex regular expressions can greatly increase CPU usage and is not always necessary, you can use the following expression types:
pcre
Description: Use Perl Compatible Regular Expressions (PCRE). If the type() parameter is not specified, syslog-ng uses PCRE regular expressions by default.
pcre flags
PCRE regular expressions have the following flag options:
global: Usable only in rewrite rules: match for every occurrence of the expression, not only the first one.
+
ignore-case: Disable case-sensitivity.
+
newline: When configured, it changes the newline definition used in PCRE regular expressions to accept either of the following:
+
a single carriage-return
linefeed
the sequence carriage-return and linefeed (\r, \n and \r\n, respectively)
This newline definition is used when the circumflex and dollar patterns (^ and $) are matched against an input. By default, PCRE interprets the linefeed character as indicating the end of a line. It does not affect the \r, \n or \R characters used in patterns.
+
store-matches: Store the matches of the regular expression into the $0, … $255 variables. The $0 stores the entire match, $1 is the first group of the match (parentheses), and so on. Named matches (also called named subpatterns), for example (?<name>...), are stored as well. Matches from the last filter expression can be referenced in regular expressions.
+
unicode: Use Unicode support for UTF-8 matches. UTF-8 character sequences are handled as single characters.
string
Description: Match the strings literally, without regular expression support. By default, only identical strings are matched. For partial matches, use the flags: prefix or flags: substring flags. For example, consider the following patterns.
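The original patterns are not shown in this excerpt; the following is a minimal sketch of three string-type regexp patterns that corresponds to the descriptions below. The value field is a placeholder.

regexp:
  value: json.kubernetes.labels.app   # placeholder field reference
  pattern: log-generator
  type: string                        # exact match: only the literal "log-generator"
---
regexp:
  value: json.kubernetes.labels.app
  pattern: log-generator
  type: string
  flags:
    - prefix                          # matches labels beginning with "log-generator"
---
regexp:
  value: json.kubernetes.labels.app
  pattern: log-generator
  type: string
  flags:
    - substring                       # matches labels containing "log-generator"

The first pattern matches only the exact log-generator string.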
The second matches labels beginning with log-generator, for example, log-generator-1.
The third one matches labels that contain the log-generator string, for example, my-log-generator.
string flags
Literal string searches have the following flags() options:
+
+
global: Usable only in rewrite rules, match for every occurrence of the expression, not only the first one.
+
ignore-case: Disables case-sensitivity.
+
prefix: During the matching process, patterns (also called search expressions) are matched against the input string starting from the beginning of the input string, and the input string is matched only for the maximum character length of the pattern. The initial characters of the pattern and the input string must be identical in the exact same order, and the pattern’s length is definitive for the matching process (that is, if the pattern is longer than the input string, the match will fail).
For example, for the input string exam:
+
the following patterns will match:
+
+
ex (the pattern contains the initial characters of the input string in the exact same order)
exam (the pattern is an exact match for the input string)
the following patterns will not match:
+
+
example (the pattern is longer than the input string)
hexameter (the pattern’s initial characters do not match the input string’s characters in the exact same order, and the pattern is longer than the input string)
+
store-matches: Stores the matches of the regular expression into the $0, … $255 variables. The $0 stores the entire match, $1 is the first group of the match (parentheses), and so on. Named matches (also called named subpatterns), for example, (?<name>...), are stored as well. Matches from the last filter expression can be referenced in regular expressions.
+
NOTE: To convert match variables into a syslog-ng list, use the $* macro, which can be further manipulated using List manipulation, or turned into a list in type-aware destinations.
+
substring: The given literal string will match when the pattern is found within the input. Unlike flags: prefix, the pattern does not have to be identical with the given literal string.
glob
Description: Match the strings against a pattern containing '*' and '?' wildcards, without regular expression and character range support. The advantage of glob patterns over regular expressions is that globs can be processed much faster.
+
*: matches an arbitrary string, including an empty string
?: matches an arbitrary character
+
NOTE:
+
The wildcards can match the / character.
You cannot use the * and ? characters literally in the pattern.
Glob patterns cannot have any flags.
Examples
Select all logs
To select all logs, or if you only want to exclude some logs but retain others, you need an empty select statement.
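A minimal sketch of an empty select statement in a flow's match field (shown as a fragment; embed it in your Flow or ClusterFlow spec):

match:
  - select: {}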
The Logging extensions part of the Logging operator solves the following problems:
+
Collect Kubernetes events to provide insight into what is happening inside a cluster, such as decisions made by the scheduler, or why some pods were evicted from the node.
Collect logs from the nodes like kubelet logs.
Collect logs from files on the nodes, for example, audit logs, or the systemd journal.
Collect logs from legacy application log files.
Starting with Logging operator version 3.17.0, logging-extensions are open source and part of Logging operator.
Features
The Logging operator handles the new features in the well-known way: it uses custom resources to access the features. This way, a simple kubectl apply with a particular parameter set initiates a new feature. Extensions support three different custom resource types:
+
Event-tailer listens for Kubernetes events and transmits their changes to stdout, so the Logging operator can process them.
+
Host-tailer tails custom files and transmits their changes to stdout. This way the Logging operator can process them.
+Kubernetes host tailer allows you to tail logs like kubelet, audit logs, or the systemd journal from the nodes.
+
Tailer-webhook is a different approach for the same problem: parsing legacy application’s log file. Instead of running a host-tailer instance on every node, tailer-webhook attaches a sidecar container to the pod, and reads the specified file(s).
Kubernetes events are objects that provide insight into what is happening inside a cluster, such as what decisions were made by the scheduler or why some pods were evicted from the node. Event tailer listens for Kubernetes events and transmits their changes to stdout, so the Logging operator can process them.
The operator handles this CR and generates the following required resources:
+
ServiceAccount: new account for event-tailer
ClusterRole: sets the event-tailer's roles
ClusterRoleBinding: links the account with the roles
ConfigMap: contains the configuration for the event-tailer pod
StatefulSet: manages the lifecycle of the event-tailer pod, which uses the banzaicloud/eventrouter:v0.1.0 image to tail events
Create event tailer
+
+
The simplest way to init an event-tailer is to create a new event-tailer resource with a name and controlNamespace field specified. The following command creates an event tailer called sample:
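The original command is not reproduced in this excerpt; the following is a minimal sketch. The apiVersion and spec fields are assumptions based on the logging-extensions CRDs, and default is a placeholder namespace.

kubectl apply -f - <<EOF
apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: EventTailer
metadata:
  name: sample
spec:
  controlNamespace: default
EOF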
Check that the new object has been created by running:
kubectl get eventtailer
+
Expected output:
NAME AGE
+sample 22m
+
+
You can see the events in JSON format by checking the log of the event-tailer pod. This way Logging operator can collect the events, and handle them as any other log. Run:
kubectl logs -l app.kubernetes.io/instance=sample-event-tailer | head -1 | jq
+
Once you have an event-tailer, you can bind your events to a specific logging flow. The following example configures a flow to route the previously created sample-eventtailer to the sample-output.
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: eventtailer-flow
  namespace: default
spec:
  filters:
    - tag_normaliser: {}
  match:
    # keeps data matching to label, the rest of the data will be discarded by this flow implicitly
    - select:
        labels:
          app.kubernetes.io/name: sample-event-tailer
  outputRefs:
    - sample-output
EOF
+
Delete event tailer
To remove an unwanted tailer, delete the related event-tailer custom resource. This terminates the event-tailer pod. For example, run the following command to delete the event tailer called sample:
kubectl delete eventtailer sample && kubectl get pod
+
Expected output:
eventtailer.logging-extensions.banzaicloud.io "sample" deleted
+NAME READY STATUS RESTARTS AGE
+sample-event-tailer-0 1/1 Terminating 0 12s
+
Persist event logs
Event-tailer supports persist mode. In this case, the logs generated from events are stored on a persistent volume. Add the following configuration to your event-tailer spec. In this example, the event tailer is called sample:
The Logging operator manages the persistent volume of the event-tailer automatically; you don't have any further tasks with it. To check that the persistent volume has been created, run:
kubectl get pvc && kubectl get pv
+
The output should be similar to:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+sample-event-tailer-sample-event-tailer-0 Bound pvc-6af02cb2-3a62-4d24-8201-dc749034651e 1Gi RWO standard 43s
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+pvc-6af02cb2-3a62-4d24-8201-dc749034651e 1Gi RWO Delete Bound default/sample-event-tailer-sample-event-tailer-0 standard 42s
+
When an application (mostly legacy programs) is not logging in a Kubernetes-native way, Logging operator cannot process its logs. (For example, an old application does not send its logs to stdout, but uses some log files instead.) File-tailer helps to solve this problem: It configures Fluent Bit to tail the given file(s), and sends the logs to the stdout, to implement Kubernetes-native logging.
However, file-tailer cannot access the pod’s local dir, so the logfiles need to be written on a mounted volume.
Let’s assume the following code represents a legacy application that generates logs into the /legacy-logs/date.log file. While the legacy-logs directory is mounted, it’s accessible from other pods by mounting the same volume.
The Logging operator configures the environment and starts a file-tailer pod. It can also deal with multi-node clusters, since it starts the host-tailer pods through a DaemonSet.
Check the created file tailer pod:
kubectl get pod
+
The output should be similar to:
NAME READY STATUS RESTARTS AGE
+file-hosttailer-sample-host-tailer-5tqhv 1/1 Running 0 117s
+test-pod 1/1 Running 0 5m40s
+
Checking the logs of the file-tailer's pod. You will see the logfile’s content on stdout. This way Logging operator can process those logs as well.
Filter to select systemd unit example: kubelet.service
+
maxEntries (int, optional)
Maximum entries to read when starting to tail logs to avoid high pressure

containerOverrides (*types.ContainerBase, optional)
Override container fields for the given tailer
Example: Configure logging Flow to route logs from a host tailer
The following example uses the flow's match term to listen to the logs of the previously created file-hosttailer-sample host tailer.
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: hosttailer-flow
  namespace: default
spec:
  filters:
    - tag_normaliser: {}
  # keeps data matching to label, the rest of the data will be discarded by this flow implicitly
  match:
    - select:
        labels:
          app.kubernetes.io/name: file-hosttailer-sample
        # there might be a need to match on container name too (in case of multiple containers)
        container_names:
          - nginx-access
  outputRefs:
    - sample-output
EOF
+
Example: Kubernetes host tailer with multiple tailers
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resources in the future) in case there is a change in an immutable field that otherwise couldn't be managed with a simple update.
+
workloadMetaOverrides (*types.MetaBase, optional)
Override metadata of the created resources

workloadOverrides (*types.PodSpecBase, optional)
Override podSpec fields for the given daemonset
Advanced configuration overrides
MetaBase

annotations (map[string]string, optional)

labels (map[string]string, optional)

PodSpecBase

tolerations ([]corev1.Toleration, optional)

nodeSelector (map[string]string, optional)

serviceAccountName (string, optional)

affinity (*corev1.Affinity, optional)

securityContext (*corev1.PodSecurityContext, optional)

volumes ([]corev1.Volume, optional)

priorityClassName (string, optional)

ContainerBase

resources (*corev1.ResourceRequirements, optional)

image (string, optional)

pullPolicy (corev1.PullPolicy, optional)

command ([]string, optional)

volumeMounts ([]corev1.VolumeMount, optional)

securityContext (*corev1.SecurityContext, optional)
+
6.3 - Tail logfiles with a webhook
The tailer-webhook is a different approach for the same problem: parsing a legacy application's log file. As an alternative to using a host file tailer service, you can use a file tailer webhook service.
While the containers of the host file tailers run in a separate pod, the file tailer webhook uses a different approach: if a pod has a specific annotation, the webhook injects a sidecar container for every tailed file into the pod.
The tailer-webhook behaves differently compared to the host-tailer:
Pros:
+
A simple annotation on the pod initiates the file tailing.
There is no need to use mounted volumes, Logging operator will manage the volumes and mounts between your containers.
Cons:
+
You must start the Logging operator with the webhook service enabled. This requires additional configuration, especially for certificates, since webhook services are allowed over TLS only.
Possibly uses more resources, since every tailed file attaches a new sidecar container to the pod.
Enable webhooks in Logging operator
+
We recommend using cert-manager to manage your certificates. Below is a simple command that generates the required resources for the tailer-webhook.
Alternatively, instead of using the values.yaml file, you can run the installation from the command line by passing the values with the set and set-string parameters:
You also need a service that points to the webhook port (9443) of the Logging operator, and that the mutatingwebhookconfiguration will point to. Running the following command in a shell creates the required service:
Furthermore, you need to tell Kubernetes to send admission requests to our webhook service. To do that, create a mutatingwebhookconfiguration Kubernetes resource, and:
+
Set the configuration to call /tailer-webhook path on your logging-webhooks service when v1.Pod is created.
Set failurePolicy to ignore, which means that the original pod will be created on webhook errors.
Set sideEffects to none, because we won’t cause any out-of-band changes in Kubernetes.
Unfortunately, mutatingwebhookconfiguration requires the caBundle field to be filled because we used a self-signed certificate, and the certificate cannot be validated through the system trust roots. If your certificate was generated with a system trust root CA, remove the caBundle line, because the certificate will be validated automatically.
+There are more sophisticated ways to load the CA into this field, but this solution requires no further components.
+
For example: you can inject the CA with a simple cert-manager cert-manager.io/inject-ca-from: logging/webhook-tls annotation on the mutatingwebhookconfiguration resource.
Note: If the pod with the sidecar annotation is in the default namespace, Logging operator handles tailer-webhook annotations clusterwide. To restrict the webhook callbacks to the current namespace, change the scope of the mutatingwebhookconfiguration to namespaced.
File tailer example
The following example creates a pod that is running a shell in infinite loop that appends the date command’s output to a file every second. The annotation sidecar.logging-extensions.banzaicloud.io/tail notifies Logging operator to attach a sidecar container to the pod. The sidecar tails the /var/log/date file and sends its output to the stdout.
After you have created the pod with the required annotation, make sure that the test-pod contains two containers by running kubectl get pod
Expected output:
NAME READY STATUS RESTARTS AGE
+test-pod 2/2 Running 0 29m
+
Check the container names in the pod to see that the Logging operator has created the sidecar container called legacy-logs-date-log. The sidecar containers’ name is always built from the path and name of the tailed file. Run the following command:
kubectl get pod test-pod -o json | jq '.spec.containers | map(.name)'
+
Check the logs of the test container. Since it writes the logs into a file, it does not produce any logs on stdout.
kubectl logs test-pod sample-container; echo $?
+
Expected output:
0
+
Check the logs of the legacy-logs-date-log container. This container exposes the logs of the test container on its stdout.
kubectl logs test-pod legacy-logs-date-log
+
Expected output:
Fluent Bit v1.9.5
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2022/09/15 11:26:11] [ info] [fluent bit] version=1.9.5, commit=9ec43447b6, pid=1
[2022/09/15 11:26:11] [ info] [storage] version=1.2.0, type=memory-only, sync=normal, checksum=disabled, max_chunks_up=128
[2022/09/15 11:26:11] [ info] [cmetrics] version=0.3.4
[2022/09/15 11:26:11] [ info] [sp] stream processor started
[2022/09/15 11:26:11] [ info] [input:tail:tail.0] inotify_fs_add(): inode=938627 watch_fd=1 name=/legacy-logs/date.log
[2022/09/15 11:26:11] [ info] [output:file:file.0] worker #0 started
Thu Sep 15 11:26:11 UTC 2022
Thu Sep 15 11:26:12 UTC 2022
...
+
Multi-container pods
In some cases you have multiple containers in your pod and you want to distinguish which file annotation belongs to which container. You can assign each file annotation to a particular container by prefixing the annotation with a ${ContainerName}: container key. For example:
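The following is a minimal sketch of such an annotation. The container names and file paths are placeholders, and the exact value format (separator between entries) should be checked against your operator version.

metadata:
  annotations:
    sidecar.logging-extensions.banzaicloud.io/tail: |-
      sample-container:/legacy-logs/date.log
      another-container:/var/log/mycustomfile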
Global resources: ClusterFlow, ClusterOutput, SyslogNGClusterFlow, SyslogNGClusterOutput
The namespaced resources are only effective in their own namespace. Global resources are cluster wide.
+
You can create ClusterFlow, ClusterOutput, SyslogNGClusterFlow, and SyslogNGClusterOutput resources only in the controlNamespace, unless the allowClusterResourcesFromAllNamespaces option is enabled in the logging resource. This namespace MUST be a protected namespace so that only administrators can access it.
Set the coroutine stack size in bytes. The value must be greater than the page size of the running system. Don't set it to too small a value (say, 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. (default: 24576)
Default: 24576
customConfigSecret (string, optional)
customParsers (string, optional)
Available in Logging operator version 4.2 and later. Specify a custom parser file to load in addition to the default parsers file. It must be a valid key in the configmap specified by customConfig.
The following example defines a Fluent Bit parser that places the parsed containerd log messages into the log field instead of the message field.
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: containerd
spec:
  inputTail:
    Parser: cri-log-key
  # Parser that populates `log` instead of `message` to enable the Kubernetes filter's Merge_Log feature to work
  # Mind the indentation, otherwise Fluent Bit will parse the whole message into the `log` key
  customParsers: |
    [PARSER]
        Name cri-log-key
        Format regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<log>.*)$
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
  # Required key remap if one wants to rely on the existing auto-detected log key in the fluentd parser and concat filter, otherwise should be omitted
  filterModify:
    - rules:
        - Rename:
            key: log
            value: message
Set the flush time in seconds.nanoseconds. The engine loop uses a Flush timeout to define when is required to flush the records ingested by input plugins through the defined output plugins. (default: 1)
Default: 1
forwardOptions (*ForwardOptions, optional)
grace (int32, optional)
Set the grace time in seconds as Integer value. The engine loop uses a Grace timeout to define wait time on exit.
Default: 5
healthCheck (*HealthCheck, optional)
Available in Logging operator version 4.4 and later.
HostNetwork (bool, optional)
image (ImageSpec, optional)
inputTail (InputTail, optional)
labels (map[string]string, optional)
livenessDefaultCheck (bool, optional)
livenessProbe (*corev1.Probe, optional)
logLevel (string, optional)
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if ‘debug’ is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.
FluentbitStatus defines the resource status for FluentbitAgent
FluentbitTLS
FluentbitTLS defines the TLS configs
enabled (*bool, required)
secretName (string, optional)
sharedKey (string, optional)
FluentbitTCPOutput
FluentbitTCPOutput defines the TLS configs
json_date_format (string, optional)
Default: iso8601
json_date_key (string, optional)
Default: ts
Workers (*int, optional)
Available in Logging operator version 4.4 and later.
FluentbitNetwork
FluentbitNetwork defines network configuration for fluentbit
connectTimeout (*uint32, optional)
Sets the timeout for connecting to an upstream
Default: 10
connectTimeoutLogError (*bool, optional)
On connection timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message
Default: true
dnsMode (string, optional)
Sets the primary transport layer protocol used by the asynchronous DNS resolver for connections established
Default: UDP, UDP or TCP
dnsPreferIpv4 (*bool, optional)
Prioritize IPv4 DNS results when trying to establish a connection
Default: false
dnsResolver (string, optional)
Select the primary DNS resolver type
Default: ASYNC, LEGACY or ASYNC
keepalive (*bool, optional)
Whether or not TCP keepalive is used for the upstream connection
Default: true
keepaliveIdleTimeout (*uint32, optional)
How long in seconds a TCP keepalive connection can be idle before being recycled
Default: 30
keepaliveMaxRecycle (*uint32, optional)
How many times a TCP keepalive connection can be used before being recycled
Default: 0, disabled
sourceAddress (string, optional)
Specify network address (interface) to use for connection and data traffic.
Default: disabled
BufferStorage
BufferStorage is the Service Section Configuration of fluent-bit
storage.backlog.mem_limit (string, optional)
If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer; these are called backlog data. This option configures a hint of the maximum amount of memory to use when processing these records.
Default: 5M
storage.checksum (string, optional)
Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm.
When enabled, irrecoverable chunks will be deleted during runtime, and any other irrecoverable chunk located in the configured storage path directory will be deleted when Fluent Bit starts.
Default: Off
storage.metrics (string, optional)
Available in Logging operator version 4.4 and later. If the http_server option has been enabled in the main Service configuration section, this option registers a new endpoint where internal metrics of the storage layer can be consumed.
Default: Off
storage.path (string, optional)
Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.
storage.sync (string, optional)
Configure the synchronization mode used to store the data into the file system. It can take the values normal or full.
Default: normal
HealthCheck
HealthCheck configuration. Available in Logging operator version 4.4 and later.
hcErrorsCount (int, optional)
The error count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period.
Default: 5
hcPeriod (int, optional)
The time period (in seconds) to count the error and retry failure data point.
Default: 60
hcRetryFailureCount (int, optional)
The retry failure count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period
Default: 5
HotReload
HotReload configuration
image (ImageSpec, optional)
resources (corev1.ResourceRequirements, optional)
InputTail
InputTail defines the FluentbitAgent tail input configuration. The tail input plugin allows you to monitor one or several text files. Its behavior is similar to the tail -f shell command.
Buffer_Chunk_Size (string, optional)
Set the initial buffer size used to read file data. This value is also used to increase the buffer size as needed. The value must be according to the Unit Size specification.
Default: 32k
Buffer_Max_Size (string, optional)
Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g: very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceed this limit, the file is removed from the monitored file list. The value must be according to the Unit Size specification.
Default: Buffer_Chunk_Size
DB (*string, optional)
Specify the database file to keep track of monitored files and offsets.
DB.journal_mode (string, optional)
sets the journal mode for databases (WAL). Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems.
Default: WAL
DB.locking (*bool, optional)
Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database but it restrict any external tool to query the content.
Default: true
DB_Sync (string, optional)
Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine do synchronization to disk, for more details about each option please refer to this section.
Default: Full
Docker_Mode (string, optional)
If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.
Default: Off
Docker_Mode_Flush (string, optional)
Wait period time in seconds to flush queued unfinished split lines.
Default: 4
Docker_Mode_Parser (string, optional)
Specify an optional parser for the first line of the docker multiline mode.
Exclude_Path (string, optional)
Set one or multiple shell patterns separated by commas to exclude files matching a certain criteria, e.g: exclude_path=.gz,.zip
Ignore_Older (string, optional)
Ignores files that have been last modified before this time in seconds. Supports m,h,d (minutes, hours,days) syntax. Default behavior is to read all specified files.
Key (string, optional)
When a message is unstructured (no parser applied), it’s appended as a string under the key name log. This option allows to define an alternative name for that key.
Default: log
Mem_Buf_Limit (string, optional)
Set a limit of memory that the Tail plugin can use when appending data to the Engine. If the limit is reached, ingestion is paused; when the data is flushed, it resumes.
Multiline (string, optional)
If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.
Default: Off
Multiline_Flush (string, optional)
Wait period time in seconds to process queued multiline messages
Default: 4
multiline.parser ([]string, optional)
Specify one or multiple parser definitions to apply to the content. Part of the new Multiline Core support in 1.8
Default: ""
Parser (string, optional)
Specify the name of a parser to interpret the entry as a structured message.
Parser_Firstline (string, optional)
Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture).
Parser_N ([]string, optional)
Optional-extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN.
Path (string, optional)
Pattern specifying a specific log files or multiple ones through the use of common wildcards.
Path_Key (string, optional)
If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.
Read_From_Head (bool, optional)
For new discovered files on start (without a database offset/position), read the content from the head of the file, not tail.
Refresh_Interval (string, optional)
The interval of refreshing the list of watched files in seconds.
Default: 60
Rotate_Wait (string, optional)
Specify the number of extra time in seconds to monitor a file once is rotated in case some pending data is flushed.
Default: 5
Skip_Long_Lines (string, optional)
When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.
Default: Off
storage.type (string, optional)
Specify the buffering mechanism to use. It can be memory or filesystem.
Default: memory
Tag (string, optional)
Set a tag (with regex-extract fields) that will be placed on lines read.
Tag_Regex (string, optional)
Set a regex to extract fields from the file.
FilterKubernetes
FilterKubernetes Fluent Bit Kubernetes Filter allows to enrich your log files with Kubernetes metadata.
Annotations (string, optional)
Include Kubernetes resource annotations in the extra metadata.
Default: On
Buffer_Size (string, optional)
Set the buffer size for HTTP client when reading responses from Kubernetes API server. The value must be according to the Unit Size specification. A value of 0 results in no limit, and the buffer will expand as-needed. Note that if pod specifications exceed the buffer limit, the API response will be discarded when retrieving metadata, and some kubernetes metadata will fail to be injected to the logs. If this value is empty we will set it “0”.
Default: “0”
Cache_Use_Docker_Id (string, optional)
When enabled, metadata will be fetched from K8s when docker_id is changed.
Default: Off
DNS_Retries (string, optional)
DNS lookup retries N times until the network start working
Default: 6
DNS_Wait_Time (string, optional)
DNS lookup interval between network status checks
Default: 30
Dummy_Meta (string, optional)
If set, use dummy-meta data (for test/dev purposes)
Default: Off
K8S-Logging.Exclude (string, optional)
Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in Kubernetes Annotations section).
Default: On
K8S-Logging.Parser (string, optional)
Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in Kubernetes Annotations section)
Default: Off
Keep_Log (string, optional)
When Keep_Log is disabled, the log field is removed from the incoming message once it has been successfully merged (Merge_Log must be enabled as well).
Default: On
Kube_CA_File (string, optional)
CA certificate file (default:/var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
Configurable TTL for K8s cached metadata. By default, it is set to 0 which means TTL for cache entries is disabled and cache entries are evicted at random when capacity is reached. In order to enable this option, you should set the number to a time interval. For example, set this value to 60 or 60s and cache entries which have been created more than 60s will be evicted.
Default: 0
Kube_meta_preload_cache_dir (string, optional)
If set, Kubernetes meta-data can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta
Kube_Tag_Prefix (string, optional)
When the source records comes from Tail input plugin, this option allows to specify what’s the prefix used in Tail configuration. (default:kube.var.log.containers.)
Token TTL configurable ’time to live’ for the K8s token. By default, it is set to 600 seconds. After this time, the token is reloaded from Kube_Token_File or the Kube_Token_Command. (default:“600”)
Default: 600
Kube_URL (string, optional)
API Server end-point.
Default: https://kubernetes.default.svc:443
Kubelet_Port (string, optional)
kubelet port using for HTTP request, this only works when Use_Kubelet set to On
Default: 10250
Labels (string, optional)
Include Kubernetes resource labels in the extra metadata.
Default: On
Match (string, optional)
Match filtered records (default:kube.*)
Default: kubernetes.*
Merge_Log (string, optional)
When enabled, it checks if the log field content is a JSON string map, if so, it append the map fields as part of the log structure. (default:Off)
Default: On
Merge_Log_Key (string, optional)
When Merge_Log is enabled, the filter tries to assume the log field from the incoming message is a JSON string message and make a structured representation of it at the same level of the log field in the map. Now if Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are inserted under the new key.
Merge_Log_Trim (string, optional)
When Merge_Log is enabled, trim (remove possible \n or \r) field values.
Default: On
Merge_Parser (string, optional)
Optional parser name to specify how to parse the data contained in the log key. Recommended use is for developers or testing only.
Regex_Parser (string, optional)
Set an alternative Parser to process record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to parser filter-kube-test as an example).
tls.debug (string, optional)
Debug level between 0 (nothing) and 4 (every detail).
Default: -1
tls.verify (string, optional)
When enabled, turns on certificate validation when connecting to the Kubernetes API server.
Default: On
Use_Journal (string, optional)
When enabled, the filter reads logs coming in Journald format.
Default: Off
Use_Kubelet (string, optional)
This is an optional feature flag to get metadata information from the kubelet instead of calling the Kubernetes API server to enrich the logs.
Default: Off
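For orientation, here is a minimal sketch of how these settings might be supplied through the agent’s filterKubernetes field; the FluentbitAgent resource name and the chosen values are illustrative assumptions, not recommendations:
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: example              # assumed name
spec:
  filterKubernetes:
    Merge_Log: "On"          # see Merge_Log above
    Use_Kubelet: "On"        # fetch metadata from the kubelet instead of the API server
    Kubelet_Port: "10250"    # see Kubelet_Port above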
FilterAws
FilterAws The AWS Filter enriches logs with AWS metadata.
az (*bool, optional)
The availability zone (default:true).
Default: true
account_id (*bool, optional)
The account ID for current EC2 instance. (default:false)
Default: false
ami_id (*bool, optional)
The EC2 instance image id. (default:false)
Default: false
ec2_instance_id (*bool, optional)
The EC2 instance ID. (default:true)
Default: true
ec2_instance_type (*bool, optional)
The EC2 instance type. (default:false)
Default: false
hostname (*bool, optional)
The hostname for current EC2 instance. (default:false)
Default: false
imds_version (string, optional)
Specify which version of the instance metadata service to use. Valid values are ‘v1’ or ‘v2’ (default).
Default: v2
Match (string, optional)
Match filtered records (default:*)
Default: *
private_ip (*bool, optional)
The EC2 instance private ip. (default:false)
Default: false
vpc_id (*bool, optional)
The VPC ID for current EC2 instance. (default:false)
Default: false
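As a hedged sketch, enabling a few of these fields through the agent’s filterAws setting might look like the following (the resource name and chosen values are assumptions):
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: example              # assumed name
spec:
  filterAws:
    imds_version: v2         # use the v2 instance metadata service
    az: true                 # include the availability zone
    ec2_instance_id: true
    ec2_instance_type: true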
FilterModify
FilterModify The Modify Filter plugin allows you to change records using rules and conditions.
conditions ([]FilterModifyCondition, optional)
FluentbitAgent Filter Modification Condition
rules ([]FilterModifyRule, optional)
FluentbitAgent Filter Modification Rule
FilterModifyRule
FilterModifyRule The Modify Filter plugin allows you to change records using rules and conditions.
Add (*FilterKeyValue, optional)
Add a key/value pair with key KEY and value VALUE if KEY does not exist
Copy (*FilterKeyValue, optional)
Copy a key/value pair with key KEY to COPIED_KEY if KEY exists AND COPIED_KEY does not exist
Hard_copy (*FilterKeyValue, optional)
Copy a key/value pair with key KEY to COPIED_KEY if KEY exists. If COPIED_KEY already exists, this field is overwritten
Hard_rename (*FilterKeyValue, optional)
Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists. If RENAMED_KEY already exists, this field is overwritten
Remove (*FilterKey, optional)
Remove a key/value pair with key KEY if it exists
Remove_regex (*FilterKey, optional)
Remove all key/value pairs with key matching regexp KEY
Remove_wildcard (*FilterKey, optional)
Remove all key/value pairs with key matching wildcard KEY
Rename (*FilterKeyValue, optional)
Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists AND RENAMED_KEY does not exist
Set (*FilterKeyValue, optional)
Add a key/value pair with key KEY and value VALUE. If KEY already exists, this field is overwritten
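A hedged sketch of a modify rule set follows; the filterModify field on the agent spec and the key/value field names of FilterKeyValue are assumptions based on the rule descriptions above:
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: example              # assumed name
spec:
  filterModify:              # assumed field name on the agent spec
    - rules:
        - Add:
            key: cluster     # assumed FilterKeyValue field names
            value: production
        - Rename:
            key: log
            value: message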
FilterModifyCondition
FilterModifyCondition The Modify Filter plugin allows you to change records using rules and conditions.
storage.total_limit_size Limit the maximum number of Chunks in the filesystem for the current output logical destination.
Tag (string, optional)
Time_as_Integer (bool, optional)
Workers (*int, optional)
Available in Logging operator version 4.4 and later. Enables dedicated thread(s) for this output. The default value (2) is set since version 1.8.13. For previous versions, the default is 0.
Fluentd port inside the container (24240 by default). The headless service port is controlled by this field as well. Note that the default ClusterIP service port is always 24240, regardless of this field.
Available in Logging operator version 4.4 and later. Configurable resource requirements for the drainer sidecar container. Default 20m cpu request, 20M memory limit
LoggingRouteSpec defines the desired state of LoggingRoute
source (string, required)
Source identifies the logging that this policy applies to
targets (metav1.LabelSelector, required)
Targets refers to the list of logging resources specified by a label selector to forward logs to. Filtering of namespaces will happen based on the watchNamespaces and watchNamespaceSelector fields of the target logging resource.
LoggingRouteStatus
LoggingRouteStatus defines the actual state of the LoggingRoute
notices ([]string, optional)
Enumerate non-blocker issues the user should pay attention to
noticesCount (int, optional)
Summarize the number of notices for the CLI output
problems ([]string, optional)
Enumerate problems that prohibit this route from taking effect and from populating the tenants field
problemsCount (int, optional)
Summarize the number of problems for the CLI output
tenants ([]Tenant, optional)
Enumerate all loggings with all the destination namespaces expanded
Tenant
name (string, required)
namespaces ([]string, optional)
LoggingRoute
LoggingRoute (experimental)
Connects a log collector with log aggregators from other logging domains and routes relevant logs based on watch namespaces
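A hedged sketch of a LoggingRoute using the source and targets fields described above (the names and labels are illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: LoggingRoute
metadata:
  name: example-route        # assumed name
spec:
  source: infra-logging      # the logging this policy applies to
  targets:
    matchLabels:
      tenant: team-a         # forward to loggings labeled like this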
Allow configuration of cluster resources from any namespace. Mutually exclusive with ControlNamespace restriction of Cluster resources
clusterDomain (*string, optional)
Cluster domain name to be used when templating URLs to services.
Default: “cluster.local.”
configCheck (ConfigCheck, optional)
ConfigCheck settings that apply to both fluentd and syslog-ng
controlNamespace (string, required)
Namespace for cluster wide configuration resources like ClusterFlow and ClusterOutput. This should be a protected namespace from regular users. Resources like fluentbit and fluentd will run in this namespace as well.
defaultFlow (*DefaultFlowSpec, optional)
Default flow for unmatched logs. This Flow configuration collects all logs that didn’t match any other Flow.
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resources in the future) in case there is a change in an immutable field that otherwise couldn’t be managed with a simple update.
errorOutputRef (string, optional)
GlobalOutput name to flush ERROR events to
flowConfigCheckDisabled (bool, optional)
Disable configuration check before applying new fluentd configuration.
flowConfigOverride (string, optional)
Override generated config. This is a raw configuration string for troubleshooting purposes.
fluentbit (*FluentbitSpec, optional)
FluentbitAgent daemonset configuration. Deprecated, will be removed with the next major version. Migrate to the standalone NodeAgent resource.
WatchNamespaceSelector is a LabelSelector to find matching namespaces to watch as in WatchNamespaces
watchNamespaces ([]string, optional)
Limit namespaces to watch Flow and Output custom resources.
ConfigCheck
labels (map[string]string, optional)
Labels to use for the configcheck pods on top of labels added by the operator by default. Default values can be overwritten.
strategy (ConfigCheckStrategy, optional)
Select the config check strategy to use. DryRun: Parse and validate configuration. StartWithTimeout: Start with given configuration and exit after specified timeout. Default: DryRun
timeoutSeconds (int, optional)
Configure timeout in seconds if strategy is StartWithTimeout
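A hedged sketch of the configCheck settings inside a Logging resource (the resource name, namespace, and values are illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example              # assumed name
spec:
  controlNamespace: logging  # assumed namespace
  configCheck:
    strategy: StartWithTimeout   # or DryRun (the default)
    timeoutSeconds: 10
    labels:
      logging.example.com/configcheck: "true"   # illustrative extra label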
LoggingStatus
LoggingStatus defines the observed state of Logging
configCheckResults (map[string]bool, optional)
Result of the config check. Under normal conditions there is a single item in the map with a bool value.
fluentdConfigName (string, optional)
Available in Logging operator version 4.5 and later. Name of the matched detached fluentd configuration object.
problems ([]string, optional)
Problems with the logging resource
problemsCount (int, optional)
Count of problems for printcolumn
syslogNGConfigName (string, optional)
Available in Logging operator version 4.5 and later. Name of the matched detached SyslogNG configuration object.
watchNamespaces ([]string, optional)
List of namespaces that watchNamespaces + watchNamespaceSelector is resolving to. Not set means all namespaces.
Logging
Logging is the Schema for the loggings API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (LoggingSpec, optional)
status (LoggingStatus, optional)
LoggingList
LoggingList contains a list of Logging
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]Logging, required)
DefaultFlowSpec
DefaultFlowSpec is a Flow for logs that did not match any other Flow
Set the coroutine stack size in bytes. The value must be greater than the page size of the running system. Don’t set too small a value (say 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. (default: 24576)
Default: 24576
customConfigSecret (string, optional)
daemonSet (*typeoverride.DaemonSet, optional)
disableKubernetesFilter (*bool, optional)
enableUpstream (*bool, optional)
enabled (*bool, optional)
extraVolumeMounts ([]*VolumeMount, optional)
filterAws (*FilterAws, optional)
filterKubernetes (FilterKubernetes, optional)
flush (int32, optional)
Set the flush time in seconds.nanoseconds format. The engine loop uses a Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins. (default: 1)
Default: 1
forwardOptions (*ForwardOptions, optional)
grace (int32, optional)
Set the grace time in seconds as Integer value. The engine loop uses a Grace timeout to define wait time on exit (default: 5)
Default: 5
inputTail (InputTail, optional)
livenessDefaultCheck (*bool, optional)
Default: true
logLevel (string, optional)
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if ‘debug’ is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled. (default: info)
SyslogNGClusterFlow is the Schema for the syslog-ng clusterflows API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGClusterFlowSpec, optional)
status (SyslogNGFlowStatus, optional)
SyslogNGClusterFlowSpec
SyslogNGClusterFlowSpec is the Kubernetes spec for Flows
filters ([]SyslogNGFilter, optional)
globalOutputRefs ([]string, optional)
loggingRef (string, optional)
match (*SyslogNGMatch, optional)
outputMetrics ([]filter.MetricsProbe, optional)
Output metrics are applied before the log reaches the destination and contain output metadata like: name,namespace and scope. Scope shows whether the output is a local or global one. Available in Logging operator version 4.5 and later.
SyslogNGClusterFlowList
SyslogNGClusterFlowList contains a list of SyslogNGClusterFlow
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGClusterFlow, required)
+
7.1.13 - SyslogNGClusterOutput
SyslogNGClusterOutput
SyslogNGClusterOutput is the Schema for the syslog-ng clusteroutputs API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGClusterOutputSpec, required)
status (SyslogNGOutputStatus, optional)
SyslogNGClusterOutputSpec
SyslogNGClusterOutputSpec contains Kubernetes spec for SyslogNGClusterOutput
(SyslogNGOutputSpec, required)
enabledNamespaces ([]string, optional)
SyslogNGClusterOutputList
SyslogNGClusterOutputList contains a list of SyslogNGClusterOutput
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGClusterOutput, required)
+
7.1.14 - SyslogNGConfig
SyslogNGConfig
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGSpec, optional)
status (SyslogNGConfigStatus, optional)
SyslogNGConfigStatus
active (*bool, optional)
logging (string, optional)
problems ([]string, optional)
problemsCount (int, optional)
SyslogNGConfigList
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGConfig, required)
+
7.1.15 - SyslogNGFlowSpec
SyslogNGFlowSpec
SyslogNGFlowSpec is the Kubernetes spec for SyslogNGFlows
filters ([]SyslogNGFilter, optional)
globalOutputRefs ([]string, optional)
localOutputRefs ([]string, optional)
loggingRef (string, optional)
match (*SyslogNGMatch, optional)
outputMetrics ([]filter.MetricsProbe, optional)
Output metrics are applied before the log reaches the destination and contain output metadata like: name,namespace and scope. Scope shows whether the output is a local or global one. Available in Logging operator version 4.5 and later.
SyslogNGFilter
Filter definition for SyslogNGFlowSpec
id (string, optional)
match (*filter.MatchConfig, optional)
parser (*filter.ParserConfig, optional)
rewrite ([]filter.RewriteConfig, optional)
SyslogNGFlow
Flow Kubernetes object
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGFlowSpec, optional)
status (SyslogNGFlowStatus, optional)
SyslogNGFlowList
FlowList contains a list of Flow
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGFlow, required)
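A hedged sketch of a SyslogNGFlow follows; the regexp matcher fields (value, pattern, type) and the names are assumptions, since SyslogNGMatch is not expanded in this fragment:
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGFlow
metadata:
  name: example-flow         # assumed name
  namespace: default
spec:
  match:
    regexp:                  # assumed SyslogNGMatch structure
      value: json.kubernetes.labels.app
      pattern: nginx
      type: string
  localOutputRefs:
    - example-output         # assumed output name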
+
7.1.16 - SyslogNGOutputSpec
SyslogNGOutputSpec
SyslogNGOutputSpec defines the desired state of SyslogNGOutput
Available in Logging operator version 4.5 and later. Parses the date automatically from the timestamp registered by the container runtime. Note: jsonKeyPrefix and jsonKeyDelim are respected. It is disabled by default, but if enabled, the default settings parse the timestamp written by the container runtime and parsed by Fluent Bit using the cri or the docker parser.
format (*string, optional)
Default: “%FT%T.%f%z”
template (*string, optional)
Default(depending on JSONKeyPrefix): “${json.time}”
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the daemonset (and possibly other resources in the future) in case there is a change in an immutable field that otherwise couldn’t be managed with a simple update.
allow_anonymous_source (bool, optional)
Allow anonymous sources. <client> sections are required if disabled.
self_hostname (string, required)
Hostname
shared_key (string, required)
Shared key for authentication.
user_auth (bool, optional)
If true, use user based authentication.
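A hedged sketch of how these security fields might appear under a forward output; the forward/servers nesting is an assumption here (see the forward output section later in this reference), and the host and key values are illustrative:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: forward-out          # assumed name
spec:
  forward:                   # assumed wrapper, see the forward output reference
    servers:
      - host: fluentd-aggregator.example.com   # illustrative host
        port: 24224
    security:
      self_hostname: fluentd-client            # illustrative hostname
      shared_key: example-shared-key           # illustrative key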
+
8.2 - Transport
Transport
ca_cert_path (string, optional)
Specify the path containing the private CA certificate
ca_path (string, optional)
Specify path to CA certificate file
ca_private_key_passphrase (string, optional)
Path containing the passphrase of the private CA private key
ca_private_key_path (string, optional)
Path containing the private CA private key
cert_path (string, optional)
Specify path to Certificate file
ciphers (string, optional)
Ciphers Default: “ALL:!aNULL:!eNULL:!SSLv2”
client_cert_auth (bool, optional)
When this is set Fluentd will check all incoming HTTPS requests for a client certificate signed by the trusted CA, requests that don’t supply a valid client certificate will fail.
insecure (bool, optional)
Use secure connection when using TLS. Default: false
private_key_passphrase (string, optional)
Path containing the passphrase of the public CA private key
private_key_path (string, optional)
Specify path to private Key file
protocol (string, optional)
Protocol Default: :tcp
version (string, optional)
Version Default: ‘TLSv1_2’
+
8.3 - Fluentd filters
You can use the following Fluentd filters in your Flow and ClusterFlow CRDs.
Fluentd Filter plugin to fetch several metadata for a Pod
Configuration
EnhanceK8s
api_groups ([]string, optional)
Kubernetes resources api groups
Default: ["apps/v1", "extensions/v1beta1"]
bearer_token_file (string, optional)
Bearer token path
Default: nil
ca_file (secret.Secret, optional)
Kubernetes API CA file
Default: nil
cache_refresh (int, optional)
Cache refresh
Default: 60*60
cache_refresh_variation (int, optional)
Cache refresh variation
Default: 60*15
cache_size (int, optional)
Cache size
Default: 1000
cache_ttl (int, optional)
Cache TTL
Default: 60*60*2
client_cert (secret.Secret, optional)
Kubernetes API Client certificate
Default: nil
client_key (secret.Secret, optional)
Kubernetes API Client certificate key
Default: nil
core_api_versions ([]string, optional)
Kubernetes core API version (for different Kubernetes versions)
Default: [‘v1’]
data_type (string, optional)
Sumo Logic data type
Default: metrics
in_namespace_path ([]string, optional)
parameters for read/write record
Default: ['$.namespace']
in_pod_path ([]string, optional)
Default: ['$.pod','$.pod_name']
kubernetes_url (string, optional)
Kubernetes API URL
Default: nil
ssl_partial_chain (*bool, optional)
If ca_file is for an intermediate CA, or otherwise we do not have the root CA and want to trust the intermediate CA certs we do have, set this to true - this corresponds to the openssl s_client -partial_chain flag and X509_V_FLAG_PARTIAL_CHAIN
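A hedged sketch of using this filter in a Flow; the enhanceK8s filter key and the output name are assumptions:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: enrich-flow          # assumed name
spec:
  filters:
    - enhanceK8s:            # assumed filter key
        in_namespace_path: ['$.namespace']
        in_pod_path: ['$.pod', '$.pod_name']
  localOutputRefs:
    - demo-output            # assumed output name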
This filter plugin consumes a log stream of JSON objects which contain single-line log messages. If a consecutive sequence of log messages forms an exception stack trace, they are forwarded as a single, combined JSON object. Otherwise, the input log data is forwarded as is. For more information, see https://github.com/GoogleCloudPlatform/fluent-plugin-detect-exceptions
+
Note: As Tag management is not supported yet, this plugin is mutually exclusive with Tag normaliser
Fluentd Filter plugin to add information about geographical location of IP addresses with Maxmind GeoIP databases.
More information at https://github.com/y-ken/fluent-plugin-geoip
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: demo-flow
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        remove_key_name_field: true
        reserve_data: true
        parse:
          type: nginx
    - prometheus:
        metrics:
          - name: total_counter
            desc: The total number of foo in message.
            type: counter
            labels:
              foo: bar
        labels:
          host: ${hostname}
          tag: ${tag}
          namespace: $.kubernetes.namespace
  selectors: {}
  localOutputRefs:
    - demo-output
Fluentd config result:
<filter **>
  @type prometheus
  @id logging-demo-flow_2_prometheus
  <metric>
    desc The total number of foo in message.
    name total_counter
    type counter
    <labels>
      foo bar
    </labels>
  </metric>
  <labels>
    host ${hostname}
    namespace $.kubernetes.namespace
    tag ${tag}
  </labels>
</filter>
A sentry plugin to throttle logs. Logs are grouped by a configurable key. When a group exceeds a configured rate, logs are dropped for this group.
Configuration
Throttle
group_bucket_limit (int, optional)
Maximum number of logs allowed per group over the period of group_bucket_period_s
Default: 6000
group_bucket_period_s (int, optional)
This is the period of time over which group_bucket_limit applies
Default: 60
group_drop_logs (bool, optional)
When a group reaches its limit, logs will be dropped from further processing if this value is true
Default: true
group_key (string, optional)
Used to group logs. Groups are rate limited independently
Default: kubernetes.container_name
group_reset_rate_s (int, optional)
After a group has exceeded its bucket limit, logs are dropped until the rate per second falls below or equal to group_reset_rate_s.
Default: group_bucket_limit/group_bucket_period_s
group_warning_delay_s (int, optional)
When a group reaches its limit and as long as it is not reset, a warning message with the current log rate of the group is emitted repeatedly. This is the delay between every repetition.
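A hedged sketch of a throttled Flow using the parameters above; the throttle filter key and the output name are assumptions:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: throttled-flow       # assumed name
spec:
  filters:
    - throttle:              # assumed filter key
        group_key: kubernetes.container_name
        group_bucket_period_s: 60
        group_bucket_limit: 3000
  localOutputRefs:
    - demo-output            # assumed output name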
Fluent OSS output plugin buffers event logs in local files and uploads them to OSS periodically in background threads.
This plugin splits events by using the timestamp of event logs. For example, if a log ‘2019-04-09 message Hello’ arrives, and then another log ‘2019-04-10 message World’ arrives in this order, the former is stored in the “20190409.gz” file, and the latter in the “20190410.gz” file.
Fluent OSS input plugin reads data from OSS periodically.
This plugin uses MNS in the same region as the OSS bucket. You must set up MNS and OSS event notifications before using this plugin.
This document shows how to setup MNS and OSS event notification.
This plugin will poll events from MNS queue and extract object keys from these events, and then will read those objects from OSS. For details, see https://github.com/aliyun/fluent-plugin-oss.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
store_as (string, optional)
Archive format on OSS: gzip, json, text, lzo, lzma2
Default: gzip
upload_crc_enable (bool, optional)
Upload crc enabled
Default: true
warn_for_delay (string, optional)
Given a threshold to treat events as delay, output warning logs if delayed events were put into OSS
If true, put_log_events_retry_limit will be ignored
put_log_events_retry_limit (int, optional)
Maximum count of retry (if exceeding this, the events will be discarded)
put_log_events_retry_wait (string, optional)
Time before retrying PutLogEvents (retry interval increases exponentially like put_log_events_retry_wait * (2 ^ retry_count))
region (string, required)
AWS Region
remove_log_group_aws_tags_key (string, optional)
Remove field specified by log_group_aws_tags_key
remove_log_group_name_key (string, optional)
Remove field specified by log_group_name_key
remove_log_stream_name_key (string, optional)
Remove field specified by log_stream_name_key
remove_retention_in_days (string, optional)
Remove field specified by retention_in_days
retention_in_days (string, optional)
Use this to set the expiry time for the log group when it is created with auto_create_stream. (defaults to no expiry)
retention_in_days_key (string, optional)
Use specified field of records as retention period
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
The number of attempts to make (with exponential backoff) when loading instance profile credentials from the EC2 metadata service using an IAM role. Defaults to 5 retries.
aws_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_sec_key (*secret.Secret, optional)
AWS secret key. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_ses_token (*secret.Secret, optional)
AWS session token. This parameter is optional, but can be provided if using MFA or temporary credentials when your agent is not running on EC2 instance with an IAM Role.
append_new_line (*bool, optional)
If enabled, the plugin adds a new line character (\n) to each serialized record. Before appending \n, the plugin calls chomp and removes the separator from the end of each record, as if chomp_record were true. Therefore, you don’t need to enable the chomp_record option when you use the kinesis_firehose output with the default configuration (append_new_line is true). If you want to set append_new_line to false, you can choose chomp_record false (default) or true (compatible format with plugin v2). (Default: true)
This loads AWS access credentials from an external process.
region (string, optional)
AWS region of your stream. It should be in form like us-east-1, us-west-2. Default nil, which means try to find from environment variable AWS_REGION.
reset_backoff_if_success (bool, optional)
Boolean, default true. If enabled, after a retry the plugin checks the number of succeeded records in the former batch request and resets the exponential backoff if there was any success. Because a batch request could be composed of requests across shards, simple exponential backoff for the batch request wouldn’t work in some cases.
retries_on_batch_request (int, optional)
The plugin will put multiple records to Amazon Kinesis Data Streams in batches using PutRecords. A set of records in a batch may fail for reasons documented in the Kinesis Service API Reference for PutRecords. Failed records will be retried retries_on_batch_request times
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
Assume Role Credentials
assume_role_credentials
duration_seconds (string, optional)
The duration, in seconds, of the role session (900-3600)
external_id (string, optional)
A unique identifier that is used by third parties when assuming roles in their customers’ accounts.
policy (string, optional)
An IAM policy in JSON format
role_arn (string, required) {#assume role credentials-role_arn}
The Amazon Resource Name (ARN) of the role to assume
The number of attempts to make (with exponential backoff) when loading instance profile credentials from the EC2 metadata service using an IAM role. Defaults to 5 retries.
aws_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_sec_key (*secret.Secret, optional)
AWS secret key. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_ses_token (*secret.Secret, optional)
AWS session token. This parameter is optional, but can be provided if using MFA or temporary credentials when your agent is not running on EC2 instance with an IAM Role.
This loads AWS access credentials from an external process.
region (string, optional)
AWS region of your stream. It should be in form like us-east-1, us-west-2. Default nil, which means try to find from environment variable AWS_REGION.
reset_backoff_if_success (bool, optional)
Boolean, default true. If enabled, after a retry the plugin checks the number of succeeded records in the former batch request and resets the exponential backoff if there was any success. Because a batch request could be composed of requests across shards, simple exponential backoff for the batch request wouldn’t work in some cases.
retries_on_batch_request (int, optional)
The plugin will put multiple records to Amazon Kinesis Data Streams in batches using PutRecords. A set of records in a batch may fail for reasons documented in the Kinesis Service API Reference for PutRecords. Failed records will be retried retries_on_batch_request times
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
stream_name (string, required)
Name of the stream to put data.
Assume Role Credentials
assume_role_credentials
duration_seconds (string, optional)
The duration, in seconds, of the role session (900-3600)
external_id (string, optional)
A unique identifier that is used by third parties when assuming roles in their customers’ accounts.
policy (string, optional)
An IAM policy in JSON format
role_arn (string, required)
The Amazon Resource Name (ARN) of the role to assume
The s3 output plugin buffers event logs in local files and uploads them to S3 periodically. This plugin splits files exactly by using the time of event logs (not the time when the logs are received). For example, if a log ‘2011-01-02 message B’ arrives, and then another log ‘2011-01-03 message B’ arrives in this order, the former is stored in the “20110102.gz” file, and the latter in the “20110103.gz” file.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
sse_customer_algorithm (string, optional)
Specifies the algorithm to use when encrypting the object
sse_customer_key (string, optional)
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data
sse_customer_key_md5 (string, optional)
Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321
If false, the certificate of endpoint will not be verified
storage_class (string, optional)
The type of storage to use for the object, for example STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR. For a complete list of possible values, see the Amazon S3 API reference.
store_as (string, optional)
Archive format on S3
use_bundled_cert (string, optional)
Use aws-sdk-ruby bundled cert
use_server_side_encryption (string, optional)
The Server-side encryption algorithm used when storing this object in S3 (AES256, aws:kms)
warn_for_delay (string, optional)
Given a threshold to treat events as delay, output warning logs if delayed events were put into s3
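A hedged sketch of an S3 output follows; the s3_bucket, s3_region, and path fields, the secret references, and the buffer section are assumptions, as they are not part of the fragment above:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-out               # assumed name
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:        # referencing a Kubernetes secret, see the secret plugin docs
          name: s3-secret
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsSecretAccessKey
    s3_bucket: example-logging-bucket   # assumed field and value
    s3_region: us-east-1                # assumed field and value
    path: logs/${tag}/%Y/%m/%d/         # assumed field and value
    buffer:                             # assumed buffer support, see the Buffer section
      timekey: 10m
      timekey_wait: 1m
      timekey_use_utc: true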
Available in Logging operator version 4.5 and later. Azure Cloud to use, for example, AzurePublicCloud, AzureChinaCloud, AzureGermanCloud, AzureUSGovernmentCloud, AZURESTACKCLOUD (in uppercase). This field is supported only if the fluentd plugin honors it, for example, https://github.com/elsesiy/fluent-plugin-azure-storage-append-blob-lts
Compat format type: out_file, json, ltsv (default: out_file)
Default: json
path (string, optional)
Path prefix of the files on Azure
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
+
8.4.8 - Buffer
Buffer
chunk_full_threshold (string, optional)
The percentage of chunk size threshold for flushing. output plugin will flush the chunk when actual size reaches chunk_limit_size * chunk_full_threshold (== 8MB * 0.95 in default)
chunk_limit_records (int, optional)
The max number of events that each chunks can store in it
chunk_limit_size (string, optional)
The max size of each chunks: events will be written into chunks until the size of chunks become this size (default: 8MB)
Default: 8MB
compress (string, optional)
If you set this option to gzip, you can get Fluentd to compress data records before writing to buffer chunks.
delayed_commit_timeout (string, optional)
The timeout seconds until output plugin decides that async write operation fails
disable_chunk_backup (bool, optional)
Instead of storing unrecoverable chunks in the backup directory, just discard them. This option is new in Fluentd v1.2.6.
disabled (bool, optional)
Disable buffer section (default: false)
Default: false,hidden
flush_at_shutdown (bool, optional)
The value to specify to flush/write all buffer chunks at shutdown, or not
flush_interval (string, optional)
Default: 60s
flush_mode (string, optional)
Default: default (equals to lazy if time is specified as chunk key, interval otherwise) lazy: flush/write chunks once per timekey interval: flush/write chunks per specified time via flush_interval immediate: flush/write chunks immediately after events are appended into chunks
flush_thread_burst_interval (string, optional)
The sleep interval seconds of threads between flushes when output plugin flushes waiting chunks next to next
flush_thread_count (int, optional)
The number of threads of output plugins, which is used to write chunks in parallel
flush_thread_interval (string, optional)
The sleep interval seconds of threads to wait next flush trial (when no chunks are waiting)
overflow_action (string, optional)
How the output plugin behaves when its buffer queue is full. throw_exception: raise an exception to show this error in the log. block: block processing of the input plugin to emit events into that buffer. drop_oldest_chunk: drop/purge the oldest chunk to accept the newly incoming chunk.
path (string, optional)
The path where buffer chunks are stored. The ‘*’ is replaced with random characters. It’s highly recommended to leave this default.
Default: operator generated
queue_limit_length (int, optional)
The queue length limitation of this buffer plugin instance
queued_chunks_limit_size (int, optional)
Limit the number of queued chunks. If you set smaller flush_interval, e.g. 1s, there are lots of small queued chunks in buffer. This is not good with file buffer because it consumes lots of fd resources when output destination has a problem. This parameter mitigates such situations.
retry_exponential_backoff_base (string, optional)
The base number of exponential backoff for retries
retry_forever (*bool, optional)
If true, plugin will ignore retry_timeout and retry_max_times options and retry flushing forever
Default: true
retry_max_interval (string, optional)
The maximum interval seconds for exponential backoff between retries while failing
retry_max_times (int, optional)
The maximum number of times to retry to flush while failing
retry_randomize (bool, optional)
If true, output plugin will retry after randomized interval not to do burst retries
retry_secondary_threshold (string, optional)
The ratio of retry_timeout to switch to use secondary while failing (Maximum valid value is 1.0)
retry_timeout (string, optional)
The maximum seconds to retry to flush while failing, until plugin discards buffer chunks
retry_type (string, optional)
exponential_backoff: wait seconds will become large exponentially per failure. periodic: the output plugin retries periodically with fixed intervals (configured via retry_wait).
retry_wait (string, optional)
Seconds to wait before next retry to flush, or constant factor of exponential backoff
tags (*string, optional)
When tag is specified as buffer chunk key, output plugin writes events into chunks separately per tags.
Default: tag,time
timekey (string, required)
Output plugin will flush chunks per specified time (enabled when time is specified in chunk keys)
Default: 10m
timekey_use_utc (bool, optional)
Output plugin decides to use UTC or not to format placeholders using timekey
timekey_wait (string, optional)
Output plugin writes chunks after timekey_wait seconds later after timekey expiration
Default: 1m
timekey_zone (string, optional)
The timezone (-0700 or Asia/Tokyo) string for formatting timekey placeholders
total_limit_size (string, optional)
The size limitation of this buffer plugin instance. Once the total size of stored buffer reached this threshold, all append operations will fail with error (and data will be lost)
type (string, optional)
Fluentd core bundles memory and file plugins. 3rd party plugins are also available when installed.
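A hedged sketch of a buffer section under an output plugin; the file output wrapper and its path are illustrative assumptions, while the buffer keys come from the list above:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: buffered-file-out    # assumed name
spec:
  file:                      # assumed output wrapper
    path: /tmp/logs/${tag}   # assumed path value
    buffer:
      flush_mode: interval
      flush_interval: 30s
      flush_thread_count: 4
      chunk_limit_size: 8MB
      total_limit_size: 512MB
      retry_max_interval: 30s
      overflow_action: drop_oldest_chunk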
+
8.4.9 - Datadog
Datadog output plugin for Fluentd
Overview
It mainly contains a proper JSON formatter and a socket handler that streams logs directly to Datadog, so you don’t need to use a log shipper if you don’t want to.
For details, see https://github.com/DataDog/fluent-plugin-datadog.
Example
spec:
  datadog:
    api_key:
      value: '<YOUR_API_KEY>' # For referencing a secret, see https://kube-logging.dev/docs/configuration/plugins/outputs/secret/
    dd_source: '<INTEGRATION_NAME>'
    dd_tags: '<KEY1:VALUE1>,<KEY2:VALUE2>'
    dd_sourcecategory: '<YOUR_SOURCE_CATEGORY>'
+
Configuration
Output Config
api_key (*secret.Secret, required)
This parameter is required in order to authenticate your fluent agent.
Set the log compression level for HTTP (1 to 9, 9 being the best ratio)
Default: “6”
dd_hostname (string, optional)
Used by Datadog to identify the host submitting the logs.
Default: “hostname -f”
dd_source (string, optional)
This tells Datadog what integration it is
Default: nil
dd_sourcecategory (string, optional)
Multiple value attribute. Can be used to refine the source attribute
Default: nil
dd_tags (string, optional)
Custom tags with the following format “key1:value1, key2:value2”
Default: nil
host (string, optional)
Proxy endpoint when logs are not directly forwarded to Datadog
Default: “http-intake.logs.datadoghq.com”
include_tag_key (bool, optional)
Automatically include the Fluentd tag in the record.
Default: false
max_backoff (string, optional)
The maximum time waited between each retry in seconds
Default: “30”
max_retries (string, optional)
The number of retries before the output plugin stops. Set to -1 for unlimited retries
Default: “-1”
no_ssl_validation (bool, optional)
Disable SSL validation (useful for proxy forwarding)
Default: false
port (string, optional)
Proxy port when logs are not directly forwarded to Datadog and ssl is not used
Default: “80”
service (string, optional)
Used by Datadog to correlate between logs, traces and metrics.
Default: nil
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
ssl_port (string, optional)
Port used to send logs over a SSL encrypted connection to Datadog. If use_http is disabled, use 10516 for the US region and 443 for the EU region.
Default: “443”
tag_key (string, optional)
Where to store the Fluentd tag.
Default: “tag”
timestamp_key (string, optional)
Name of the attribute which will contain timestamp of the log event. If nil, timestamp attribute is not added.
Default: “@timestamp”
use_compression (bool, optional)
Enable log compression for HTTP
Default: true
use_http (bool, optional)
Enable HTTP forwarding. If you disable it, make sure to change the port to 10514 or ssl_port to 10516
Default: true
use_json (bool, optional)
Event format: if true, the event is sent in JSON format. Otherwise, in plain text.
Default: true
use_ssl (bool, optional)
If true, the agent initializes a secure connection to Datadog. In clear TCP otherwise.
Configure the bulk_message request splitting threshold size. The default value is 20MB (20 * 1024 * 1024). If you specify this size as a negative number, the bulk_message request splitting feature is disabled.
Default: 20MB
content_type (string, optional)
With content_type application/x-ndjson, elasticsearch plugin adds application/x-ndjson as Content-Profile in payload.
Default: application/json
custom_headers (string, optional)
This parameter adds additional headers to request. Example: {“token”:“secret”}
Default: {}
customize_template (string, optional)
Specify the string and its value to be replaced in form of hash. Can contain multiple key value pair that would be replaced in the specified template_file. This setting only creates template and to add rollover index please check the rollover_index configuration.
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property to avoid failing on Fluentd startup. (default: true)
Default: true
flatten_hashes (bool, optional)
Elasticsearch will complain if you send object and concrete values to the same field. For example, you might have logs that look like this, from different places: {“people” => 100} {“people” => {“some” => “thing”}} The second log line will be rejected by the Elasticsearch parser because objects and concrete values can’t live in the same field. To combat this, you can enable hash flattening.
flatten_hashes_separator (string, optional)
Flatten separator
host (string, optional)
You can specify the Elasticsearch host using this parameter.
Default: localhost
hosts (string, optional)
You can specify multiple Elasticsearch hosts with separator “,”. If you specify the hosts option, the host and port options are ignored.
http_backend (string, optional)
With http_backend typhoeus, elasticsearch plugin uses typhoeus faraday http backend. Typhoeus can handle HTTP keepalive.
A list of exception that will be ignored - when the exception occurs the chunk will be discarded and the buffer retry mechanism won’t be called. It is possible also to specify classes at higher level in the hierarchy. For example ignore_exceptions ["Elasticsearch::Transport::Transport::ServerError"] will match all subclasses of ServerError - Elasticsearch::Transport::Transport::Errors::BadRequest, Elasticsearch::Transport::Transport::Errors::ServiceUnavailable, etc.
ilm_policy (string, optional)
Specify ILM policy contents as Hash.
ilm_policy_id (string, optional)
Specify ILM policy id.
ilm_policy_overwrite (bool, optional)
Specify whether overwriting ilm policy or not.
include_index_in_url (bool, optional)
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce URL-based access control.
include_tag_key (bool, optional)
This will add the Fluentd tag in the JSON record.
Default: false
include_timestamp (bool, optional)
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in Elasticsearch and utilize the rollover API.
Default: false
index_date_pattern (*string, optional)
Specify this to override the index date pattern for creating a rollover index.
Default: now/d
index_name (string, optional)
The index name to write events to
Default: fluentd
index_prefix (string, optional)
Specify the index prefix for the rollover index to be created.
Default: logstash
log_es_400_reason (bool, optional)
By default, the error logger won’t record the reason for a 400 error from the Elasticsearch API unless you set log_level to debug. However, this results in a lot of log spam, which isn’t desirable if all you want is the 400 error reasons. You can set this true to capture the 400 error reasons without all the other debug logs.
Default: false
logstash_dateformat (string, optional)
Set the Logstash date format.
Default: %Y.%m.%d
logstash_format (bool, optional)
Enable Logstash log format.
Default: false
logstash_prefix (string, optional)
Set the Logstash prefix.
Default: logstash
logstash_prefix_separator (string, optional)
Set the Logstash prefix separator.
Default: -
max_retry_get_es_version (string, optional)
You can specify the number of times to retry fetching the Elasticsearch version.
This param is to set a pipeline id of your elasticsearch to be added into the request, you can configure ingest node.
port (int, optional)
You can specify the Elasticsearch port using this parameter.
Default: 9200
prefer_oj_serializer (bool, optional)
By default, the Elasticsearch client uses Yajl as the JSON encoder/decoder. Oj is the alternative high-performance JSON encoder/decoder. When this parameter is set to true, the Elasticsearch client uses Oj as the JSON encoder/decoder.
Default: false
reconnect_on_error (bool, optional)
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default it reconnects only on “host unreachable exceptions”. We recommend setting this to true in the presence of Elasticsearch Shield.
Default: false
reload_after (string, optional)
When reload_connections is true, this is the integer number of operations after which the plugin will reload the connections. The default value is 10000.
reload_connections (*bool, optional)
You can tune how the elasticsearch-transport host reloading feature works.(default: true)
Default: true
reload_on_failure (bool, optional)
Indicates that the elasticsearch-transport will try to reload the nodes addresses if there is a failure while making the request. This can be useful to quickly remove a dead node from the list of addresses.
Remove keys on update will not update the configured keys in elasticsearch when a record is being updated. This setting only has any effect if the write operation is update or upsert.
remove_keys_on_update_key (string, optional)
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
request_timeout (string, optional)
You can specify HTTP request timeout.
Default: 5s
resurrect_after (string, optional)
You can set in the elasticsearch-transport how often dead connections from the elasticsearch-transport’s pool will be resurrected.
Default: 60s
retry_tag (string, optional)
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
Similar to parent_key config, will add _routing into elasticsearch command if routing_key is set and the field does exist in input event.
ca_file (*secret.Secret, optional)
CA certificate
client_cert (*secret.Secret, optional)
Client certificate
client_key (*secret.Secret, optional)
Client certificate key
client_key_pass (*secret.Secret, optional)
Client key password
scheme (string, optional)
Connection scheme
Default: http
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
sniffer_class_name (string, optional)
The default Sniffer used by the Elasticsearch::Transport class works well when Fluentd has a direct connection to all of the Elasticsearch servers and can make effective use of the _nodes API. This doesn’t work well when Fluentd must connect through a load balancer or proxy. The parameter sniffer_class_name gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::ElasticsearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. https://github.com/uken/fluent-plugin-elasticsearch#sniffer-class-name
ssl_max_version (string, optional)
Specify min/max SSL/TLS version
ssl_min_version (string, optional)
ssl_verify (*bool, optional)
Skip ssl verification (default: true)
Default: true
ssl_version (string, optional)
If you want to configure SSL/TLS version, you can specify ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
suppress_doc_wrap (bool, optional)
By default, record body is wrapped by ‘doc’. This behavior can not handle update script requests. You can set this to suppress doc wrapping and allow record body to be untouched.
Default: false
suppress_type_name (*bool, optional)
Suppress type name to avoid warnings in Elasticsearch 7.x
Similar to target_index_key config, find the type name to write to in the record under this key (or nested record). If key not found in record - fallback to type_name.
Default: fluentd
template_file (*secret.Secret, optional)
The path to the file containing the template to install. Secret
template_name (string, optional)
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
template_overwrite (bool, optional)
Always update the template, even if it already exists.
Default: false
templates (string, optional)
Specify index templates in form of hash. Can contain multiple templates.
time_key (string, optional)
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
time_key_format (string, optional)
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
time_parse_error_tag (string, optional)
With logstash_format true, elasticsearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to @ERROR label with time_parse_error_tag configured tag.
time_precision (string, optional)
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
type_name (string, optional)
Set the index type for elasticsearch. This is the fallback if target_type_key is missing.
Default: fluentd
unrecoverable_error_types (string, optional)
The default unrecoverable_error_types parameter is set up strictly, because es_rejected_execution_exception is caused by exceeding Elasticsearch’s thread pool capacity. Advanced users can increase its capacity, but normal users should follow the default behavior. If you want to increase it and forcibly retry bulk requests, consider changing the unrecoverable_error_types parameter from its default value (and change the default value of thread_pool.bulk.queue_size in elasticsearch.yml).
User for HTTP Basic authentication. This plugin will escape required URL encoded characters within %{} placeholders, for example, %{demo+}
utc_index (*bool, optional)
By default, the records are inserted into the index logstash-YYMMDD with UTC (Coordinated Universal Time). This option allows using local time if you set utc_index to false. (default: true)
Default: true
validate_client_version (bool, optional)
When you use mismatched Elasticsearch server and client libraries, fluent-plugin-elasticsearch cannot send data into Elasticsearch.
Default: false
verify_es_version_at_startup (*bool, optional)
The Elasticsearch plugin should change its behavior for each Elasticsearch major version. For example, Elasticsearch 6 starts to prohibit multiple type_names in one index, and Elasticsearch 7 handles only the _doc type_name in an index. If you want to disable verifying the Elasticsearch version at startup, set this to false. When using the following configuration, the ES plugin intends to communicate with Elasticsearch 6. (default: true)
Default: true
with_transporter_log (bool, optional)
This is debugging purpose option to enable to obtain transporter layer log.
Default: false
write_operation (string, optional)
The write_operation can be any of: (index,create,update,upsert)
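A hedged sketch of an Elasticsearch output combining some of the parameters above; the resource name, host value, and buffer section are illustrative assumptions:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: es-out               # assumed name
spec:
  elasticsearch:
    host: elasticsearch.logging.svc   # illustrative host
    port: 9200
    scheme: https
    ssl_verify: false
    logstash_format: true
    logstash_prefix: kubernetes
    buffer:                  # assumed buffer support, see the Buffer section
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true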
The Path of the file. The actual path is path + time + “.log” by default.
path_suffix (string, optional)
The suffix of output result.
Default: “.log”
recompress (bool, optional)
Performs compression again even if the buffer chunk is already compressed.
Default: false
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
symlink_path (bool, optional)
Create symlink to temporary buffered file when buffer_type is file. This is useful for tailing file content to check logs.
The timeout time for socket connect. When the connection times out during establishment, Errno::ETIMEDOUT is raised.
dns_round_robin (bool, optional)
Enable client-side DNS round robin. Uniformly randomly pick an IP address to send data to when a hostname has several IP addresses. heartbeat_type udp is not available with dns_round_robin true. Use heartbeat_type tcp or heartbeat_type none.
expire_dns_cache (int, optional)
Set TTL to expire DNS cache in seconds. Set 0 not to use DNS Cache.
Default: 0
hard_timeout (int, optional)
The hard timeout used to detect server failure. The default value is equal to the send_timeout parameter.
Default: 60
heartbeat_interval (int, optional)
The interval of the heartbeat packet.
Default: 1
heartbeat_type (string, optional)
The transport protocol to use for heartbeats. Set “none” to disable heartbeat. [transport, tcp, udp, none]
ignore_network_errors_at_startup (bool, optional)
Ignore DNS resolution and errors at startup time.
keepalive (bool, optional)
Enable keepalive connection.
Default: false
keepalive_timeout (int, optional)
Expiration time of keepalive. The default value is nil, which means to keep the connection alive as long as possible.
Default: 0
phi_failure_detector (bool, optional)
Use the “Phi accrual failure detector” to detect server failure.
Default: true
phi_threshold (int, optional)
The threshold parameter used to detect server faults. phi_threshold is deeply related to heartbeat_interval. If you are using longer heartbeat_interval, please use the larger phi_threshold. Otherwise you will see frequent detachments of destination servers. The default value 16 is tuned for heartbeat_interval 1s.
Default: 16
recover_wait (int, optional)
The wait time before accepting a server fault recovery.
Default: 10
require_ack_response (bool, optional)
Change the protocol to at-least-once. The plugin waits for the ack from the destination’s in_forward plugin.
Server definitions (at least one is required). Server
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
tls_allow_self_signed_cert (bool, optional)
Allow self signed certificates or not.
Default: false
tls_cert_logical_store_name (string, optional)
The certificate logical store name on Windows system certstore. This parameter is for Windows only.
tls_cert_path (*secret.Secret, optional)
The additional CA certificate path for TLS.
tls_cert_thumbprint (string, optional)
The certificate thumbprint for searching from Windows system certstore This parameter is for Windows only.
tls_cert_use_enterprise_store (bool, optional)
Enable to use certificate enterprise store on Windows system certstore. This parameter is for Windows only.
Verify hostname of servers and certificates or not in TLS transport.
Default: true
tls_version (string, optional)
The default version of TLS transport. [TLSv1_1, TLSv1_2]
Default: TLSv1_2
transport (string, optional)
The transport protocol to use [ tcp, tls ]
verify_connection_at_startup (bool, optional)
Verify that a connection can be made with one of out_forward nodes at the time of startup.
Default: false
Fluentd Server
server
host (string, required)
The IP address or host name of the server.
name (string, optional)
The name of the server. Used for logging and certificate verification in TLS transport (when host is address).
password (*secret.Secret, optional)
The password for authentication.
port (int, optional)
The port number of the host. Note that both TCP packets (event stream) and UDP packets (heartbeat message) are sent to this port.
Default: 24224
shared_key (*secret.Secret, optional)
The shared key per server.
standby (bool, optional)
Marks a node as the standby node for an Active-Standby model between Fluentd nodes. When an active node goes down, the standby node is promoted to an active node. The standby node is not used by the out_forward plugin until then.
username (*secret.Secret, optional)
The username for authentication.
weight (int, optional)
The load balancing weight. If the weight of one server is 20 and the weight of the other server is 30, events are sent in a 2:3 ratio.
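A hedged sketch of a forward output with an active and a standby server, using the server fields above; the forward/servers nesting and the host names are assumptions:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: forward-ha-out       # assumed name
spec:
  forward:                   # assumed wrapper key
    servers:
      - host: fluentd-active.example.com    # illustrative host
        port: 24224
        weight: 60
      - host: fluentd-standby.example.com   # illustrative host
        port: 24224
        standby: true
    require_ack_response: true
    keepalive: true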
User-provided web-safe keys and arbitrary string values that will be returned with requests for the file as “x-goog-meta-” response headers. Object Metadata
overwrite (bool, optional)
Overwrite already existing path
Default: false
path (string, optional)
Path prefix of the files on GCS
project (string, required)
Project identifier for GCS
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
storage_class (string, optional)
Storage class of the file: dra, nearline, coldline, multi_regional, regional, standard
TLS: CA certificate file for server certificate verification Secret
cert (*secret.Secret, optional)
TLS: parameters for presenting a client certificate Secret
configure_kubernetes_labels (*bool, optional)
Configure Kubernetes metadata in a Prometheus like format
Default: false
drop_single_key (*bool, optional)
If a record only has 1 key, then just set the log line to the value and discard the key.
Default: false
extra_labels (map[string]string, optional)
Set of extra labels to include with every Loki stream.
extract_kubernetes_labels (*bool, optional)
Extract kubernetes labels as loki labels
Default: false
include_thread_label (*bool, optional)
Whether to include the fluentd_thread label when multiple threads are used for flushing.
Default: true
insecure_tls (*bool, optional)
TLS: disable server certificate verification
Default: false
key (*secret.Secret, optional)
TLS: parameters for presenting a client certificate Secret
labels (Label, optional)
Set of labels to include with every Loki stream.
line_format (string, optional)
Format to use when flattening the record to a log line: json, key_value (default: key_value)
Default: json
password (*secret.Secret, optional)
Specify password if the Loki server requires authentication. Secret
remove_keys ([]string, optional)
Comma-separated list of unneeded record keys to remove.
Default: []
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
tenant (string, optional)
Loki is a multi-tenant log storage platform and all requests sent must include a tenant.
url (string, optional)
The url of the Loki server to send logs to.
Default: https://logs-us-west1.grafana.net
username (*secret.Secret, optional)
Specify a username if the Loki server requires authentication. Secret
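To put these parameters in context, a minimal Loki output sketch (the URL is a placeholder for your Loki service) could be:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: loki-output
spec:
  loki:
    url: http://loki:3100            # placeholder Loki address
    configure_kubernetes_labels: true
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true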
Raise UnrecoverableError when the response code is not a success code (1xx/3xx/4xx/5xx). If false, the plugin logs an error message instead of raising UnrecoverableError.
Use the array format of JSON. This parameter is used and valid only for the json format. When json_array is true, the Content-Type should be application/json, and JSON data can be used for the HTTP request body.
Default: false
open_timeout (int, optional)
Connection open timeout in seconds.
proxy (string, optional)
Proxy for HTTP request.
read_timeout (int, optional)
Read timeout in seconds.
retryable_response_codes ([]int, optional)
List of retryable response codes. If the response code is included in this list, the plugin retries the buffer flush. Starting with Fluentd v2, status code 503 will be removed from the default list.
Default: [503]
ssl_timeout (int, optional)
TLS timeout in seconds.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
Maximum value of total message size to be included in one batch transmission.
Default: 4096
kafka_agg_max_messages (int, optional)
Maximum number of messages to include in one batch transmission.
Default: nil
keytab (*secret.Secret, optional)
max_send_retries (int, optional)
Number of times to retry sending messages to a leader.
Default: 1
message_key_key (string, optional)
Message Key
Default: “message_key”
partition_key (string, optional)
Partition
Default: “partition”
partition_key_key (string, optional)
Partition Key
Default: “partition_key”
password (*secret.Secret, optional)
Password when using PLAIN/SCRAM SASL authentication
principal (string, optional)
required_acks (int, optional)
The number of acks required per request.
Default: -1
ssl_ca_cert (*secret.Secret, optional)
CA certificate
ssl_ca_certs_from_system (*bool, optional)
System’s CA cert store
Default: false
ssl_client_cert (*secret.Secret, optional)
Client certificate
ssl_client_cert_chain (*secret.Secret, optional)
Client certificate chain
ssl_client_cert_key (*secret.Secret, optional)
Client certificate key
ssl_verify_hostname (*bool, optional)
Verify certificate hostname
sasl_over_ssl (bool, required)
SASL over SSL
Default: true
scram_mechanism (string, optional)
If set, use SCRAM authentication with the specified mechanism. When unset, defaults to PLAIN authentication.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
topic_key (string, optional)
Topic Key
Default: “topic”
use_default_for_unknown_topic (bool, optional)
Use default for unknown topics
Default: false
username (*secret.Secret, optional)
Username when using PLAIN/SCRAM SASL authentication
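For context, a Kafka output sketch using a subset of these parameters might look like this; the broker address and topic name are placeholders.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: kafka-output
spec:
  kafka:
    brokers: kafka-headless.kafka.svc.cluster.local:29092   # placeholder broker list
    default_topic: topic                                     # placeholder topic name
    format:
      type: json
    buffer:
      tags: topic
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true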
HTTPS POST request timeout. Optional. Supports s and ms suffixes.
Default: 30 s
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Limit to the size of the Logz.io upload bulk. Defaults to 1000000 bytes leaving about 24kB for overhead.
bulk_limit_warning_limit (int, optional)
Limit to the size of the Logz.io warning message when a record exceeds bulk_limit to prevent a recursion when Fluent warnings are sent to the Logz.io output.
endpoint (*Endpoint, required)
Define LogZ endpoint URL
gzip (bool, optional)
Should the plugin ship the logs in gzip compression. Default is false.
http_idle_timeout (int, optional)
Timeout in seconds that the http persistent connection will stay open without traffic.
output_include_tags (bool, optional)
Should the appender add the fluentd tag to the document, called “fluentd_tag”
output_include_time (bool, optional)
Should the appender add a timestamp to your logs on their process time (recommended).
retry_count (int, optional)
How many times to resend failed bulks.
retry_sleep (int, optional)
How long to sleep initially between retries, exponential step-off.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
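A hedged sketch of a Logz.io output, assuming the endpoint parameters shown above and a hypothetical secret for the token:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: logzio-output
spec:
  logz:
    endpoint:
      url: https://listener.logz.io
      port: 8071
      token:
        valueFrom:
          secretKeyRef:
            name: logzio-token     # hypothetical secret
            key: token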
Specify the application name for the rollover index to be created.
Default: default
buffer (*Buffer, optional)
bulk_message_request_threshold (string, optional)
Configure the bulk_message request splitting threshold size. The default value is 20MB (20 * 1024 * 1024). If you specify this size as a negative number, the bulk_message request splitting feature is disabled.
This parameter adds additional headers to the request. Example: {"token":"secret"}
Default: {}
customize_template (string, optional)
Specify the string and its value to be replaced in the form of a hash. It can contain multiple key-value pairs that are replaced in the specified template_file. This setting only creates the template; to add a rollover index, check the rollover_index configuration.
data_stream_enable (*bool, optional)
Use @type opensearch_data_stream
data_stream_name (string, optional)
You can specify the OpenSearch data stream name with this parameter. This parameter is mandatory for opensearch_data_stream.
data_stream_template_name (string, optional)
Specify an existing index template for the data stream. If not present, a new template is created and named after the data stream.
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property to avoid failing on Fluentd startup. (default: true)
You can specify the OpenSearch host with this parameter.
Default: localhost
hosts (string, optional)
You can specify multiple OpenSearch hosts with separator “,”. If you specify hosts option, host and port options are ignored.
http_backend (string, optional)
With http_backend typhoeus, the opensearch plugin uses typhoeus faraday http backend. Typhoeus can handle HTTP keepalive.
Default: excon
http_backend_excon_nonblock (*bool, optional)
http_backend_excon_nonblock
Default: true
id_key (string, optional)
Field on your data to identify the data uniquely
ignore_exceptions (string, optional)
A list of exceptions that will be ignored: when such an exception occurs, the chunk is discarded and the buffer retry mechanism is not called. It is also possible to specify classes at a higher level in the hierarchy.
include_index_in_url (bool, optional)
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce URL-based access control.
include_tag_key (bool, optional)
This will add the Fluentd tag in the JSON record.
Default: false
include_timestamp (bool, optional)
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in OpenSearch and utilize the rollover API.
Default: false
index_date_pattern (*string, optional)
Specify this to override the index date pattern for creating a rollover index.
This param is to set a pipeline ID of your OpenSearch to be added into the request, you can configure ingest node.
port (int, optional)
You can specify OpenSearch port by this parameter.
Default: 9200
prefer_oj_serializer (bool, optional)
With the default behavior, the OpenSearch client uses Yajl as the JSON encoder/decoder. Oj is an alternative high-performance JSON encoder/decoder. When this parameter is set to true, the OpenSearch client uses Oj as the JSON encoder/decoder.
Default: false
reconnect_on_error (bool, optional)
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default, it reconnects only on “host unreachable exceptions”. We recommend setting this to true in the presence of OpenSearch shield.
Default: false
reload_after (string, optional)
When reload_connections is true, this is the integer number of operations after which the plugin reloads the connections. The default value is 10000.
reload_connections (*bool, optional)
You can tune how the OpenSearch-transport host reloading feature works. (default: true)
Default: true
reload_on_failure (bool, optional)
Indicates that the OpenSearch-transport will try to reload the nodes addresses if there is a failure while making the request, this can be useful to quickly remove a dead node from the list of addresses.
Default: false
remove_keys_on_update (string, optional)
Remove keys on update will not update the configured keys in OpenSearch when a record is being updated. This setting only has effect if the write operation is update or upsert.
remove_keys_on_update_key (string, optional)
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
request_timeout (string, optional)
You can specify HTTP request timeout.
Default: 5s
resurrect_after (string, optional)
You can set in the OpenSearch-transport how often dead connections from the OpenSearch-transport’s pool will be resurrected.
Default: 60s
retry_tag (string, optional)
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
routing_key (string, optional)
routing_key
ca_file (*secret.Secret, optional)
CA certificate
client_cert (*secret.Secret, optional)
Client certificate
client_key (*secret.Secret, optional)
Client certificate key
client_key_pass (*secret.Secret, optional)
Client key password
scheme (string, optional)
Connection scheme
Default: http
selector_class_name (string, optional)
selector_class_name
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
sniffer_class_name (string, optional)
The default Sniffer used by the OpenSearch::Transport class works well when Fluentd has a direct connection to all of the OpenSearch servers and can make effective use of the _nodes API. This doesn’t work well when Fluentd must connect through a load balancer or proxy. The sniffer_class_name parameter gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::OpenSearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. For example, a configuration like this would cause connections to logging-os to reload every 100 operations: https://github.com/fluent/fluent-plugin-opensearch#sniffer-class-name.
ssl_verify (*bool, optional)
Skip ssl verification (default: true)
Default: true
ssl_version (string, optional)
If you want to configure SSL/TLS version, you can specify ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
suppress_doc_wrap (bool, optional)
By default, the record body is wrapped by ‘doc’. This behavior cannot handle update script requests. You can set this to suppress doc wrapping and allow the record body to be untouched.
Default: false
suppress_type_name (*bool, optional)
Suppress type name to avoid warnings in OpenSearch
tag_key (string, optional)
This will add the Fluentd tag in the JSON record.
Default: tag
target_index_affinity (bool, optional)
target_index_affinity
Default: false
target_index_key (string, optional)
Tell this plugin to find the index name to write to in the record under this key in preference to other mechanisms. Key can be specified as path to nested record using dot (’.’) as a separator.
template_file (*secret.Secret, optional)
The path to the file containing the template to install. Secret
template_name (string, optional)
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
template_overwrite (bool, optional)
Always update the template, even if it already exists.
Default: false
templates (string, optional)
Specify index templates in form of hash. Can contain multiple templates.
time_key (string, optional)
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
time_key_exclude_timestamp (bool, optional)
time_key_exclude_timestamp
Default: false
time_key_format (string, optional)
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
time_parse_error_tag (string, optional)
With logstash_format true, OpenSearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to @ERROR label with time_parse_error_tag configured tag.
time_precision (string, optional)
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
truncate_caches_interval (string, optional)
truncate_caches_interval
unrecoverable_error_types (string, optional)
Default unrecoverable_error_types parameter is set up strictly. Because rejected_execution_exception is caused by exceeding OpenSearch’s thread pool capacity. Advanced users can increase its capacity, but normal users should follow default behavior.
unrecoverable_record_types (string, optional)
unrecoverable_record_types
use_legacy_template (*bool, optional)
Specify whether to use the legacy template or not.
Default: true
user (string, optional)
User for HTTP Basic authentication. This plugin will escape required URL encoded characters within %{} placeholders. e.g. %{demo+}
utc_index (*bool, optional)
By default, records are inserted into the index logstash-YYMMDD using UTC (Coordinated Universal Time). This option allows you to use local time by setting utc_index to false.
Default: true
validate_client_version (bool, optional)
When you use mismatched OpenSearch server and client libraries, fluent-plugin-opensearch cannot send data into OpenSearch.
Default: false
verify_os_version_at_startup (*bool, optional)
verify_os_version_at_startup (default: true)
Default: true
with_transporter_log (bool, optional)
This is a debugging option to enable obtaining the transporter-layer log.
Default: false
write_operation (string, optional)
The write_operation can be any of: index, create, update, upsert.
Default: index
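To illustrate how a few of these parameters fit together, a minimal OpenSearch output sketch (host and TLS settings are placeholders) could be:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: opensearch-output
spec:
  opensearch:
    host: opensearch-cluster.default.svc.cluster.local   # placeholder host
    port: 9200
    scheme: https
    ssl_verify: false          # demo setting only; verify certificates in production
    logstash_format: true
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true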
OpenSearchEndpointCredentials
access_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on an EC2 instance with an IAM Role.
assume_role_arn (*secret.Secret, optional)
Typically, you can use AssumeRole for cross-account access or federation.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
strftime_format (string, optional)
Users can set strftime format.
Default: “%s”
ttl (int, optional)
If 0 or a negative value is set, ttl is not set in each key.
+
8.4.26 - Relabel
Available in Logging Operator version 4.2 and later.
The relabel output uses the relabel output plugin of Fluentd to route events back to a specific Flow, where they can be processed again.
This is useful, for example, if you need to preprocess a subset of logs differently, but then do the same processing on all messages at the end. In this case, you can create multiple flows for preprocessing based on specific log matchers and then aggregate everything into a single final flow for postprocessing.
The value of the label parameter of the relabel output must be the same as the value of the flowLabel parameter of the Flow (or ClusterFlow) where you want to send the messages.
Using the relabel output also makes it possible to pass the messages emitted by the Concat plugin in case of a timeout. Set the timeout_label of the concat plugin to the flowLabel of the flow where you want to send the timeout messages.
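For example, a relabel output paired with the Flow it routes back to might look like the following sketch (names are illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: final-relabel
spec:
  relabel:
    label: '@final-flow'
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: final-flow
spec:
  flowLabel: '@final-flow'     # must match the label of the relabel output
  includeLabelInRouter: false
  localOutputRefs:
    - final-destination        # hypothetical output for the postprocessed logs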
Indicates whether to allow non-UTF-8 characters in user logs. If set to true, any non-UTF-8 character is replaced by the string specified in non_utf8_replacement_string. If set to false, the Ingest API errors out any non-UTF-8 characters.
Default: true
data_type (string, optional)
The type of data that will be sent to Splunk, either event or metric.
Default: event
fields (Fields, optional)
In this case, parameters inside <fields> are used as indexed fields and removed from the original input events
The host location for events. Cannot set both host and host_key parameters at the same time. (Default:hostname)
host_key (string, optional)
Key for the host location. Cannot set both host and host_key parameters at the same time.
idle_timeout (int, optional)
If a connection has not been used for this number of seconds it will automatically be reset upon the next use to avoid attempting to send to a closed connection. nil means no timeout.
index (string, optional)
Identifier for the Splunk index to be used for indexing events. If this parameter is not set, the indexer is chosen by HEC. Cannot set both index and index_key parameters at the same time.
index_key (string, optional)
The field name that contains the Splunk index name. Cannot set both index and index_key parameters at the same time.
insecure_ssl (*bool, optional)
Indicates if insecure SSL connection is allowed
Default: false
keep_keys (bool, optional)
By default, all the fields used by the *_key parameters are removed from the original input events. To change this behavior, set this parameter to true. This parameter is set to false by default. When set to true, all fields defined in index_key, host_key, source_key, sourcetype_key, metric_name_key, and metric_value_key are saved in the original event.
metric_name_key (string, optional)
Field name that contains the metric name. This parameter only works in conjunction with the metrics_from_event parameter. When this parameter is set, the metrics_from_event parameter is automatically set to false.
Default: true
metric_value_key (string, optional)
Field name that contains the metric value, this parameter is required when metric_name_key is configured.
metrics_from_event (*bool, optional)
When data_type is set to “metric”, the ingest API will treat every key-value pair in the input event as a metric name-value pair. Set metrics_from_event to false to disable this behavior and use metric_name_key and metric_value_key to define metrics. (Default:true)
non_utf8_replacement_string (string, optional)
If coerce_to_utf8 is set to true, any non-UTF-8 character is replaced by the string you specify in this parameter.
Default: ' '
open_timeout (int, optional)
The amount of time to wait for a connection to be opened.
protocol (string, optional)
This is the protocol to use for calling the Hec API. Available values are: http, https.
Default: https
read_timeout (int, optional)
The amount of time allowed between reading two chunks from the socket.
ssl_ciphers (string, optional)
List of SSL ciphers allowed.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
source (string, optional)
The source field for events. If this parameter is not set, the source will be decided by HEC. Cannot set both source and source_key parameters at the same time.
source_key (string, optional)
Field name to contain source. Cannot set both source and source_key parameters at the same time.
sourcetype (string, optional)
The sourcetype field for events. When not set, the sourcetype is decided by HEC. Cannot set both sourcetype and sourcetype_key parameters at the same time.
sourcetype_key (string, optional)
Field name that contains the sourcetype. Cannot set both sourcetype and sourcetype_key parameters at the same time.
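A sketch of a Splunk HEC output follows; the hec_host, hec_port, and hec_token parameter names are not listed in the excerpt above, so treat them as assumptions to verify against the full plugin reference, and the secret is hypothetical.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: splunk-output
spec:
  splunkHec:
    hec_host: splunk.default.svc.cluster.local   # placeholder host
    hec_port: 8088
    hec_token:
      valueFrom:
        secretKeyRef:
          name: splunk-hec-token                 # hypothetical secret
          key: SplunkHecToken
    index: main
    protocol: https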
SQS queue url e.g. https://sqs.us-west-2.amazonaws.com/123456789012/myqueue
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Used to specify the key when merging json or sending logs in text format
Default: message
metric_data_format (string, optional)
The format of metrics you will be sending, either graphite or carbon2 or prometheus
Default: graphite
open_timeout (int, optional)
Set timeout seconds to wait until connection is opened.
Default: 60
proxy_uri (string, optional)
Add the uri of the proxy environment if present.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
source_category (string, optional)
Set _sourceCategory metadata field within SumoLogic
Default: nil
source_host (string, optional)
Set _sourceHost metadata field within SumoLogic
Default: nil
source_name (string, required)
Set _sourceName metadata field within SumoLogic - overrides source_name_key (default is nil)
source_name_key (string, optional)
Set as source::path_key’s value so that the source_name can be extracted from Fluentd’s buffer
Default: source_name
sumo_client (string, optional)
Name of the Sumo client that is sent as the X-Sumo-Client header.
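As a sketch, a Sumo Logic output could combine the endpoint secret with the metadata fields above; the secret name and source values are placeholders.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: sumologic-output
spec:
  sumologic:
    endpoint:
      valueFrom:
        secretKeyRef:
          name: sumologic-collector   # hypothetical secret holding the collector URL
          key: endpoint
    source_name: my-cluster           # placeholder
    source_category: prod/logging     # placeholder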
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Authorization Bearer token for http request to VMware Log Intelligence Secret
content_type (string, required)
Content Type for http request to VMware Log Intelligence
Default: application/json
structure (string, required)
Structure for http request to VMware Log Intelligence
Default: simple
LogIntelligenceHeadersOut
LogIntelligenceHeadersOut is used to convert the input LogIntelligenceHeaders to a Fluentd output that uses the correct key names for the VMware Log Intelligence plugin. This allows the output to accept the config in snake_case (as other output plugins do), but render the Fluentd config with the proper key names (for example, content_type -> Content-Type).
Authorization (*secret.Secret, required)
Authorization Bearer token for http request to VMware Log Intelligence
Content-Type (string, required)
Content Type for http request to VMware Log Intelligence
Default: application/json
structure (string, required)
Structure for http request to VMware Log Intelligence
Flatten hashes to create one key/value pair without losing log data.
Default: true
flatten_hashes_separator (string, optional)
Separator to use for joining flattened keys
Default: _
http_conn_debug (bool, optional)
If set, enables debug logs for http connection
Default: false
http_method (string, optional)
HTTP method (post)
Default: post
host (string, optional)
VMware Aria Operations For Logs host, e.g., localhost
log_text_keys ([]string, optional)
Keys from log event whose values should be added as log message/text to VMware Aria Operations For Logs. These key/value pairs won’t be expanded/flattened and won’t be added as metadata/fields.
VMware Aria Operations For Logs ingestion API path, e.g., ‘api/v1/events/ingest’
Default: api/v1/events/ingest
port (int, optional)
VMware Aria Operations For Logs port ex. 9000
Default: 80
raise_on_error (bool, optional)
Raise errors that were rescued during HTTP requests?
Default: false
rate_limit_msec (int, optional)
Simple rate limiting: ignore any records within rate_limit_msec since the last one
Default: 0
request_retries (int, optional)
Number of retries
Default: 3
request_timeout (int, optional)
HTTP connection TTL for each request.
Default: 5
ssl_verify (*bool, optional)
SSL verification flag
Default: true
scheme (string, optional)
HTTP scheme (http,https)
Default: http
serializer (string, optional)
Serialization (json)
Default: json
shorten_keys (map[string]string, optional)
Keys from the log event to rewrite, for instance from ‘kubernetes_namespace’ to ‘k8s_namespace’. Tags will be rewritten with substring substitution and applied in the order present in the hash. Hashes enumerate their values in the order that the corresponding keys were inserted, see: https://ruby-doc.org/core-2.2.2/Hash.html
The annotation format is logging.banzaicloud.io/<loggingRef>: watched. Since the name part of an annotation can’t be empty, the default applies to an empty loggingRef value as well.
The mount path is generated from the secret information
The name of the counter to create. Note that the value of this option is always prefixed with syslogng_, so for example key("my-custom-key") becomes syslogng_my-custom-key.
labels (ArrowMap, optional)
The labels used to create separate counters, based on the fields of the messages processed by metrics-probe(). The keys of the map are the name of the label, and the values are syslog-ng templates.
level (int, optional)
Sets the stats level of the generated metrics (default 0).
- (struct{}, required)
+
8.5.3 - Rewrite
Rewrite filters can be used to modify record contents. Logging operator currently supports the following rewrite functions:
SyslogNGOutput and SyslogNGClusterOutput resources have almost the same structure as Output and ClusterOutput resources, with the main difference being the number and kind of supported destinations.
You can use the following syslog-ng outputs in your SyslogNGOutput and SyslogNGClusterOutput resources.
+
8.6.1 - Authentication for syslog-ng outputs
Overview
GRPC-based outputs use this configuration instead of the simple tls field found in most HTTP-based destinations. For details, see the documentation of a related syslog-ng destination, for example, Grafana Loki.
Configuration
Auth
Authentication settings. Only one authentication method can be set. Default: Insecure
adc (*ADC, optional)
Application Default Credentials (ADC).
alts (*ALTS, optional)
Application Layer Transport Security (ALTS) is a simple-to-use authentication method, available only within Google’s infrastructure.
insecure (*Insecure, optional)
This is the default method, authentication is disabled (auth(insecure())).
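As an illustrative fragment (assuming the Grafana Loki syslog-ng output exposes this auth block as described above), disabling authentication explicitly could look like:
spec:
  loki:
    url: grpc://loki.default.svc.cluster.local:9095   # placeholder address
    auth:
      insecure: {}        # alternatives: adc: {} or alts: {}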
Prunes the unused space in the LogMessage representation
dir (string, optional)
Description: Defines the folder where the disk-buffer files are stored.
disk_buf_size (int64, required)
This is a required option. The maximum size of the disk-buffer in bytes. The minimum value is 1048576 bytes.
mem_buf_length (*int64, optional)
Use this option if the reliable() option is set to no. This option contains the number of messages stored in the overflow queue.
mem_buf_size (*int64, optional)
Use this option if the option reliable() is set to yes. This option contains the size of the messages in bytes that is used in the memory part of the disk buffer.
q_out_size (*int64, optional)
The number of messages stored in the output buffer of the destination.
reliable (bool, required)
If set to yes, syslog-ng OSE cannot lose logs in case of reload/restart, unreachable destination or syslog-ng OSE crash. This solution provides a slower, but reliable disk-buffer option.
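For orientation, a disk_buffer block nested under a syslog-ng output (the surrounding http output and its URL are only placeholders) might look like:
spec:
  http:
    url: http://log-receiver.example:8000   # placeholder destination
    disk_buffer:
      reliable: true
      disk_buf_size: 1073741824             # 1 GiB
      dir: /buffers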
The group of the directories created by syslog-ng. To preserve the original properties of an existing directory, use the option without specifying an attribute: dir-group().
Default: Use the global settings
dir_owner (string, optional)
The owner of the directories created by syslog-ng. To preserve the original properties of an existing directory, use the option without specifying an attribute: dir-owner().
Default: Use the global settings
dir_perm (int, optional)
The permission mask of directories created by syslog-ng. Log directories are only created if a file after macro expansion refers to a non-existing directory, and directory creation is enabled (see also the create-dirs() option). For octal numbers prefix the number with 0, for example, use 0755 for rwxr-xr-x.
Default: Use the global settings
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
The body of the HTTP request, for example, body("${ISODATE} ${MESSAGE}"). You can use strings, macros, and template functions in the body. If not set, it will contain the message received from the source by default.
body-prefix (string, optional)
The string syslog-ng OSE puts at the beginning of the body of the HTTP request, before the log message.
body-suffix (string, optional)
The string syslog-ng OSE puts to the end of the body of the HTTP request, after the log message.
delimiter (string, optional)
By default, syslog-ng OSE separates the log messages of the batch with a newline character.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
headers ([]string, optional)
Custom HTTP headers to include in the request, for example, headers("HEADER1: header1", "HEADER2: header2").
Default: empty
log-fifo-size (int, optional)
The number of messages that the output queue can store.
method (string, optional)
Specifies the HTTP method to use when sending the message to the server. POST | PUT
password (secret.Secret, optional)
The password that syslog-ng OSE uses to authenticate on the server where it sends the messages.
persist_name (string, optional)
If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
timeout (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
Specifies the hostname or IP address and optionally the port number of the web service that can receive log data via HTTP. Use a colon (:) after the address to specify the port number of the server. For example: http://127.0.0.1:8000
user (string, optional)
The username that syslog-ng OSE uses to authenticate on the server where it sends the messages.
user-agent (string, optional)
The value of the USER-AGENT header in the messages sent to the server.
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
Batch
batch-bytes (int, optional)
Description: Sets the maximum size of payload in a batch. If the size of the messages reaches this value, syslog-ng OSE sends the batch to the destination even if the number of messages is less than the value of the batch-lines() option. Note that if the batch-timeout() option is enabled and the queue becomes empty, syslog-ng OSE flushes the messages only if batch-timeout() expires, or the batch reaches the limit set in batch-bytes().
batch-lines (int, optional)
Description: Specifies how many lines are flushed to a destination in one batch. The syslog-ng OSE application waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For example, if you set batch-lines() to 100, syslog-ng OSE waits for 100 messages.
batch-timeout (int, optional)
Description: Specifies the time syslog-ng OSE waits for lines to accumulate in the output buffer. The syslog-ng OSE application sends batches to the destinations evenly. The timer starts when the first message arrives to the buffer, so if only few messages arrive, syslog-ng OSE sends messages to the destination at most once every batch-timeout() milliseconds.
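Putting a few of these options together, a hedged SyslogNGOutput sketch for an HTTP destination (URL and header values are placeholders) could be:
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: http-output
spec:
  http:
    url: http://127.0.0.1:8000         # placeholder receiver address
    method: POST
    headers:
      - "HEADER1: header1"             # placeholder custom header
    batch-lines: 100
    batch-timeout: 10000
    workers: 4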
+
8.6.6 - Loggly output
Overview
The loggly() destination sends log messages to the Loggly Logging-as-a-Service provider.
You can send log messages over TCP, or encrypted with TLS for syslog-ng outputs.
A JSON object representing key-value pairs for the Event. These key-value pairs add structure to Events, making it easier to search. Attributes can be nested JSON objects, however, we recommend limiting the amount of nesting.
Default: "--scope rfc5424 --exclude MESSAGE --exclude DATE --leave-initial-dot"
batch_bytes (int, optional)
batch_lines (int, optional)
batch_timeout (int, optional)
body (string, optional)
content_type (string, optional)
This field specifies the content type of the log records being sent to Falcon’s LogScale.
Default: "application/json"
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
extra_headers (string, optional)
This field represents additional headers that can be included in the HTTP request when sending log records to Falcon’s LogScale.
Default: empty
persist_name (string, optional)
rawstring (string, optional)
The raw string representing the Event. The default display for an Event in LogScale is the rawstring. If you do not provide the rawstring field, then the response defaults to a JSON representation of the attributes field.
Default: empty
timezone (string, optional)
The timezone is only required if you specify the timestamp in milliseconds. The timezone specifies the local timezone for the event. Note that you must still specify the timestamp in UTC time.
token (*secret.Secret, optional)
An Ingest Token is a unique string that identifies a repository and allows you to send data to that repository.
Default: empty
url (*secret.Secret, optional)
Ingester URL is the URL of the Humio cluster you want to send data to.
Description: Specifies how many lines are flushed to a destination in one batch. The syslog-ng OSE application waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For example, if you set batch-lines() to 100, syslog-ng OSE waits for 100 messages.
batch-timeout (int, optional)
Description: Specifies the time syslog-ng OSE waits for lines to accumulate in the output buffer. The syslog-ng OSE application sends batches to the destinations evenly. The timer starts when the first message arrives to the buffer, so if only few messages arrive, syslog-ng OSE sends messages to the destination at most once every batch-timeout() milliseconds.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
labels (filter.ArrowMap, optional)
Using the Labels map, Kubernetes label to Loki label mapping can be configured. Example: {"app" : "$PROGRAM"}
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
If you receive the following error message during AxoSyslog startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See syslog-ng docs for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
template (string, optional)
Template for customizing the log message format.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
timestamp (string, optional)
The timestamp that will be applied to the outgoing messages (possible values: current|received|msg, default: current). Loki does not accept events in which the timestamp is not monotonically increasing.
url (string, optional)
Specifies the hostname or IP address and optionally the port number of the service that can receive log data via gRPC. Use a colon (:) after the address to specify the port number of the server. For example: grpc://127.0.0.1:8000
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
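A sketch of a syslog-ng Loki output using the labels and timestamp options above; the URL is a placeholder and the label values are illustrative syslog-ng templates.
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: loki-output
spec:
  loki:
    url: grpc://loki.default.svc.cluster.local:9095   # placeholder address
    labels:
      app: "$PROGRAM"
      host: "$HOST"          # illustrative macro
    timestamp: current
    workers: 2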
The name of the MongoDB collection where the log messages are stored (collections are similar to SQL tables). Note that the name of the collection must not start with a dollar sign ($), and that it may contain dot (.) characters.
dir (string, optional)
Defines the folder where the disk-buffer files are stored.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
fallback-topic is used when syslog-ng cannot post a message to the originally defined topic (which can include invalid characters coming from templates).
qos (int, optional)
qos stands for quality of service and can take three values in the MQTT world. Its default value is 0, where there is no guarantee that the message is ever delivered.
template (string, optional)
Template where you can configure the message template sent to the MQTT broker. By default, the template is: $ISODATE $HOST $MSGHDR$MSG
topic (string, optional)
Topic defines in which topic syslog-ng stores the log message. You can also use templates here, and use, for example, the $HOST macro in the topic name hierarchy.
The password used for authentication on a password-protected Redis server.
command (StringList, optional)
Internal rendered form of the CommandAndArguments field
command_and_arguments ([]string, optional)
The Redis command to execute, for example, LPUSH, INCR, or HINCRBY. Using the HINCRBY command with an increment value of 1 allows you to create various statistics. For example, the command("HINCRBY" "${HOST}/programs" "${PROGRAM}" "1") command counts the number of log messages on each host for each program.
Default: ""
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
host (string, optional)
The hostname or IP address of the Redis server.
Default: 127.0.0.1
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
Persistname
port (int, optional)
The port number of the Redis server.
Default: 6379
retries (int, optional)
If syslog-ng OSE cannot send a message, it will try again until the number of attempts reaches retries().
Default: 3
throttle (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
Default: 0
time-reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
The number of messages that the output queue can store.
max_object_size (int, optional)
Set the maximum object size.
Default: 5120GiB
max_pending_uploads (int, optional)
Set the maximum number of pending uploads.
Default: 32
object_key (string, optional)
The object_key for the S3 server.
object_key_timestamp (RawString, optional)
Set object_key_timestamp
persist_name (string, optional)
Persistname
region (string, optional)
Set the region option.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
secret_key (*secret.Secret, optional)
The secret_key for the S3 server.
storage_class (string, optional)
Set the storage_class option.
template (RawString, optional)
Template
throttle (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
headers ([]string, optional)
Custom HTTP headers to include in the request, for example, headers("HEADER1: header1", "HEADER2: header2").
Default: empty
persist_name (string, optional)
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
persist_name (string, optional)
port (int, optional)
This option sets the port number of the Sumo Logic server to connect to.
Default: 6514
tag (string, optional)
This option specifies the list of tags to add as the tags fields of Sumo Logic messages. If not specified, syslog-ng OSE automatically adds the tags already assigned to the message. If you set the tag() option, only the tags you specify will be added to the messages.
By default, syslog-ng OSE closes destination sockets if it receives any input from the socket (for example, a reply). If this option is set to no, syslog-ng OSE just ignores the input, but does not close the socket. For details, see the documentation of the AxoSyslog syslog-ng distribution.
disk_buffer (*DiskBuffer, optional)
Enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Unique name for the syslog-ng driver. If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
The name of a directory that contains a set of trusted CA certificates in PEM format. For details, see the AxoSyslog Core documentation.
ca_file (*secret.Secret, optional)
The name of a file that contains a set of trusted CA certificates in PEM format. (Optional) For details, see the AxoSyslog Core documentation.
cert_file (*secret.Secret, optional)
Name of a file that contains an X.509 certificate (or a certificate chain) in PEM format, suitable as a TLS certificate, matching the private key set in the key-file() option. For details, see the AxoSyslog Core documentation.
cipher-suite (string, optional)
Description: Specifies the cipher, hash, and key-exchange algorithms used for the encryption, for example, ECDHE-ECDSA-AES256-SHA384. The list of available algorithms depends on the version of OpenSSL used to compile syslog-ng.
key_file (*secret.Secret, optional)
The name of a file that contains an unencrypted private key in PEM format, suitable as a TLS key. For details, see the AxoSyslog Core documentation.
Use the certificate store of the system for verifying HTTPS certificates. For details, see the AxoSyslog Core documentation.
GrpcTLS
ca_file (*secret.Secret, optional)
The name of a file that contains a set of trusted CA certificates in PEM format. For details, see the AxoSyslog Core documentation.
cert_file (*secret.Secret, optional)
Name of a file that contains an X.509 certificate (or a certificate chain) in PEM format, suitable as a TLS certificate, matching the private key set in the key-file() option. For details, see the AxoSyslog Core documentation.
key_file (*secret.Secret, optional)
The name of a file that contains an unencrypted private key in PEM format, suitable as a TLS key. For details, see the AxoSyslog Core documentation.
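To show how these secret-typed fields are typically populated, a hedged tls fragment mounting certificates from a Kubernetes secret (the secret name and keys are hypothetical) might look like:
tls:
  ca_file:
    mountFrom:
      secretKeyRef:
        name: syslog-tls     # hypothetical secret
        key: ca.crt
  cert_file:
    mountFrom:
      secretKeyRef:
        name: syslog-tls
        key: tls.crt
  key_file:
    mountFrom:
      secretKeyRef:
        name: syslog-tls
        key: tls.key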
Global resources: ClusterFlow, ClusterOutput, SyslogNGClusterFlow, SyslogNGClusterOutput
The namespaced resources are only effective in their own namespace. Global resources are cluster wide.
+
You can create ClusterFlow, ClusterOutput, SyslogNGClusterFlow, and SyslogNGClusterOutput resources only in the controlNamespace, unless the allowClusterResourcesFromAllNamespaces option is enabled in the logging resource. This namespace MUST be a protected namespace so that only administrators can access it.
Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don’t set too small a value (say, 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. (default: 24576)
Default: 24576
customConfigSecret (string, optional)
customParsers (string, optional)
Available in Logging operator version 4.2 and later. Specify a custom parser file to load in addition to the default parsers file. It must be a valid key in the configmap specified by customConfig.
The following example defines a Fluent Bit parser that places the parsed containerd log messages into the log field instead of the message field.
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: containerd
spec:
  inputTail:
    Parser: cri-log-key
  # Parser that populates `log` instead of `message` to enable the Kubernetes filter's Merge_Log feature to work
  # Mind the indentation, otherwise Fluent Bit will parse the whole message into the `log` key
  customParsers: |
    [PARSER]
        Name cri-log-key
        Format regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<log>.*)$
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
  # Required key remap if one wants to rely on the existing auto-detected log key in the fluentd parser and concat filter, otherwise it should be omitted
  filterModify:
    - rules:
        - Rename:
            key: log
            value: message
+
Set the flush time in seconds.nanoseconds format. The engine loop uses a Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins. (default: 1)
Default: 1
forwardOptions (*ForwardOptions, optional)
grace (int32, optional)
Set the grace time in seconds as an integer value. The engine loop uses a Grace timeout to define the wait time on exit.
Default: 5
healthCheck (*HealthCheck, optional)
Available in Logging operator version 4.4 and later.
HostNetwork (bool, optional)
image (ImageSpec, optional)
inputTail (InputTail, optional)
labels (map[string]string, optional)
livenessDefaultCheck (bool, optional)
livenessProbe (*corev1.Probe, optional)
logLevel (string, optional)
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if ‘debug’ is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.
FluentbitStatus defines the resource status for FluentbitAgent
FluentbitTLS
FluentbitTLS defines the TLS configs
enabled (*bool, required)
secretName (string, optional)
sharedKey (string, optional)
FluentbitTCPOutput
FluentbitTCPOutput defines the Fluent Bit TCP output configuration
json_date_format (string, optional)
Default: iso8601
json_date_key (string, optional)
Default: ts
Workers (*int, optional)
Available in Logging operator version 4.4 and later.
FluentbitNetwork
FluentbitNetwork defines network configuration for fluentbit
connectTimeout (*uint32, optional)
Sets the timeout for connecting to an upstream
Default: 10
connectTimeoutLogError (*bool, optional)
On connection timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message
Default: true
dnsMode (string, optional)
Sets the primary transport layer protocol used by the asynchronous DNS resolver for connections established
Default: UDP (possible values: UDP, TCP)
dnsPreferIpv4 (*bool, optional)
Prioritize IPv4 DNS results when trying to establish a connection
Default: false
dnsResolver (string, optional)
Select the primary DNS resolver type
Default: ASYNC (possible values: LEGACY, ASYNC)
keepalive (*bool, optional)
Whether or not TCP keepalive is used for the upstream connection
Default: true
keepaliveIdleTimeout (*uint32, optional)
How long in seconds a TCP keepalive connection can be idle before being recycled
Default: 30
keepaliveMaxRecycle (*uint32, optional)
How many times a TCP keepalive connection can be used before being recycled
Default: 0, disabled
sourceAddress (string, optional)
Specify network address (interface) to use for connection and data traffic.
Default: disabled
BufferStorage
BufferStorage is the Service Section Configuration of fluent-bit
storage.backlog.mem_limit (string, optional)
If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer; these are called backlog data. This option configures a hint for the maximum amount of memory to use when processing these records.
Default: 5M
storage.checksum (string, optional)
Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm.
When enabled, irrecoverable chunks will be deleted during runtime, and any other irrecoverable chunk located in the configured storage path directory will be deleted when Fluent Bit starts.
Default: Off
storage.metrics (string, optional)
Available in Logging operator version 4.4 and later. If the http_server option has been enabled in the main Service configuration section, this option registers a new endpoint where internal metrics of the storage layer can be consumed.
Default: Off
storage.path (string, optional)
Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.
storage.sync (string, optional)
Configure the synchronization mode used to store the data into the file system. It can take the values normal or full.
Default: normal
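For context, a hedged FluentbitAgent fragment enabling filesystem buffering with these options (the path is a placeholder, and a matching bufferStorageVolume is assumed to be configured separately) might look like:
spec:
  bufferStorage:
    storage.path: /buffers          # placeholder path
    storage.sync: normal
    storage.backlog.mem_limit: 5M
  inputTail:
    storage.type: filesystem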
HealthCheck
HealthCheck configuration. Available in Logging operator version 4.4 and later.
hcErrorsCount (int, optional)
The error count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period.
Default: 5
hcPeriod (int, optional)
The time period (in seconds) to count the error and retry failure data point.
Default: 60
hcRetryFailureCount (int, optional)
The retry failure count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period
Default: 5
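For example, a minimal FluentbitAgent sketch enabling the health check with the defaults shown above:
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: health-checked
spec:
  healthCheck:
    hcErrorsCount: 5
    hcPeriod: 60
    hcRetryFailureCount: 5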
HotReload
HotReload configuration
image (ImageSpec, optional)
resources (corev1.ResourceRequirements, optional)
InputTail
InputTail defines the FluentbitAgent tail input configuration. The tail input plugin allows monitoring one or several text files. It has a behavior similar to the tail -f shell command.
Buffer_Chunk_Size (string, optional)
Set the buffer size for HTTP client when reading responses from Kubernetes API server. The value must be according to the Unit Size specification.
Default: 32k
Buffer_Max_Size (string, optional)
Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g., very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must be according to the Unit Size specification.
Default: Buffer_Chunk_Size
DB (*string, optional)
Specify the database file to keep track of monitored files and offsets.
DB.journal_mode (string, optional)
Sets the journal mode for databases (WAL). Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems.
Default: WAL
DB.locking (*bool, optional)
Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database, but it restricts any external tool from querying the content.
Default: true
DB_Sync (string, optional)
Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk. For more details about each option, please refer to this section.
Default: Full
Docker_Mode (string, optional)
If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.
Default: Off
Docker_Mode_Flush (string, optional)
Wait period (in seconds) to flush queued unfinished split lines.
Default: 4
Docker_Mode_Parser (string, optional)
Specify an optional parser for the first line of the docker multiline mode.
Exclude_Path (string, optional)
Set one or multiple shell patterns, separated by commas, to exclude files matching certain criteria, for example: exclude_path=*.gz,*.zip
Ignore_Older (string, optional)
Ignores files that were last modified before this time (in seconds). Supports m, h, d (minutes, hours, days) syntax. The default behavior is to read all specified files.
Key (string, optional)
When a message is unstructured (no parser applied), it’s appended as a string under the key named log. This option lets you define an alternative name for that key.
Default: log
Mem_Buf_Limit (string, optional)
Set a limit on the amount of memory the Tail plugin can use when appending data to the Engine. If the limit is reached, the plugin is paused; it resumes once the data is flushed.
Multiline (string, optional)
If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.
Default: Off
Multiline_Flush (string, optional)
Wait period (in seconds) to process queued multiline messages.
Default: 4
multiline.parser ([]string, optional)
Specify one or multiple parser definitions to apply to the content. Part of the new Multiline Core support in 1.8
Default: ""
Parser (string, optional)
Specify the name of a parser to interpret the entry as a structured message.
Parser_Firstline (string, optional)
Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture).
Parser_N ([]string, optional)
Optional extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, for example: Parser_1 ab1, Parser_2 ab2, Parser_N abN.
Path (string, optional)
Pattern specifying a specific log file or multiple log files through the use of common wildcards.
Path_Key (string, optional)
If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.
Read_From_Head (bool, optional)
For new discovered files on start (without a database offset/position), read the content from the head of the file, not tail.
Refresh_Interval (string, optional)
The interval of refreshing the list of watched files in seconds.
Default: 60
Rotate_Wait (string, optional)
Specify the additional time (in seconds) to keep monitoring a file after it is rotated, in case some pending data still needs to be flushed.
Default: 5
Skip_Long_Lines (string, optional)
When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.
Default: Off
storage.type (string, optional)
Specify the buffering mechanism to use. It can be memory or filesystem.
Default: memory
Tag (string, optional)
Set a tag (with regex-extract fields) that will be placed on lines read.
Tag_Regex (string, optional)
Set a regex to extract fields from the file.
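For illustration, a fragment of a FluentbitAgent spec using the tail input options above; cri and docker are Fluent Bit's built-in multiline parsers, and the remaining values are examples rather than recommendations:

spec:
  inputTail:
    # buffer chunks on disk and skip over-long lines instead of dropping the file
    storage.type: filesystem
    Skip_Long_Lines: "On"
    Refresh_Interval: "60"
    multiline.parser:
      - cri
      - docker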
FilterKubernetes
FilterKubernetes The Fluent Bit Kubernetes Filter enriches your log files with Kubernetes metadata.
Annotations (string, optional)
Include Kubernetes resource annotations in the extra metadata.
Default: On
Buffer_Size (string, optional)
Set the buffer size for the HTTP client when reading responses from the Kubernetes API server. The value must conform to the Unit Size specification. A value of 0 results in no limit, and the buffer expands as needed. Note that if pod specifications exceed the buffer limit, the API response is discarded when retrieving metadata, and some Kubernetes metadata will fail to be injected into the logs. If this value is empty, it is set to “0”.
Default: “0”
Cache_Use_Docker_Id (string, optional)
When enabled, metadata will be fetched from K8s when docker_id is changed.
Default: Off
DNS_Retries (string, optional)
DNS lookup is retried N times until the network starts working.
Default: 6
DNS_Wait_Time (string, optional)
DNS lookup interval between network status checks
Default: 30
Dummy_Meta (string, optional)
If set, use dummy-meta data (for test/dev purposes)
Default: Off
K8S-Logging.Exclude (string, optional)
Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in Kubernetes Annotations section).
Default: On
K8S-Logging.Parser (string, optional)
Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in Kubernetes Annotations section)
Default: Off
Keep_Log (string, optional)
When Keep_Log is disabled, the log field is removed from the incoming message once it has been successfully merged (Merge_Log must be enabled as well).
Default: On
Kube_CA_File (string, optional)
CA certificate file (default: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
Kube_Meta_Cache_TTL (string, optional)
Configurable TTL for K8s cached metadata. By default, it is set to 0, which means the TTL for cache entries is disabled and cache entries are evicted at random when capacity is reached. To enable this option, set the value to a time interval. For example, set it to 60 or 60s, and cache entries created more than 60s ago will be evicted.
Default: 0
Kube_meta_preload_cache_dir (string, optional)
If set, Kubernetes meta-data can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta
Kube_Tag_Prefix (string, optional)
When the source records come from the Tail input plugin, this option specifies the prefix used in the Tail configuration. (default: kube.var.log.containers.)
Kube_Token_TTL (string, optional)
Configurable ’time to live’ for the K8s token. By default, it is set to 600 seconds. After this time, the token is reloaded from Kube_Token_File or the Kube_Token_Command. (default: “600”)
Default: 600
Kube_URL (string, optional)
API Server end-point.
Default: https://kubernetes.default.svc:443
Kubelet_Port (string, optional)
The kubelet port to use for HTTP requests; this only works when Use_Kubelet is set to On.
Default: 10250
Labels (string, optional)
Include Kubernetes resource labels in the extra metadata.
Default: On
Match (string, optional)
Match filtered records (default:kube.*)
Default: kubernetes.*
Merge_Log (string, optional)
When enabled, the filter checks whether the log field content is a JSON string map; if so, it appends the map fields as part of the log structure. (default: Off)
Default: On
Merge_Log_Key (string, optional)
When Merge_Log is enabled, the filter tries to assume the log field from the incoming message is a JSON string message and make a structured representation of it at the same level of the log field in the map. Now if Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are inserted under the new key.
Merge_Log_Trim (string, optional)
When Merge_Log is enabled, trim (remove possible \n or \r) field values.
Default: On
Merge_Parser (string, optional)
Optional parser name to specify how to parse the data contained in the log key. Recommended use is for developers or testing only.
Regex_Parser (string, optional)
Set an alternative Parser to process record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to parser filter-kube-test as an example).
tls.debug (string, optional)
Debug level between 0 (nothing) and 4 (every detail).
Default: -1
tls.verify (string, optional)
When enabled, turns on certificate validation when connecting to the Kubernetes API server.
Default: On
Use_Journal (string, optional)
When enabled, the filter reads logs coming in Journald format.
Default: Off
Use_Kubelet (string, optional)
Optional feature flag to get metadata from the kubelet instead of calling the Kubernetes API server to enrich the logs.
Default: Off
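For illustration, a filterKubernetes fragment of a FluentbitAgent spec that merges JSON log payloads and drops annotations from the metadata (a sketch; the values are examples, not defaults beyond what the fields above document):

spec:
  filterKubernetes:
    Merge_Log: "On"
    Merge_Log_Key: parsed
    Keep_Log: "Off"
    Annotations: "Off"
    tls.verify: "On"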
FilterAws
FilterAws The AWS Filter enriches logs with AWS metadata.
az (*bool, optional)
The availability zone (default:true).
Default: true
account_id (*bool, optional)
The account ID for current EC2 instance. (default:false)
Default: false
ami_id (*bool, optional)
The EC2 instance image id. (default:false)
Default: false
ec2_instance_id (*bool, optional)
The EC2 instance ID. (default:true)
Default: true
ec2_instance_type (*bool, optional)
The EC2 instance type. (default:false)
Default: false
hostname (*bool, optional)
The hostname for current EC2 instance. (default:false)
Default: false
imds_version (string, optional)
Specify which version of the instance metadata service to use. Valid values are ‘v1’ or ‘v2’ (default).
Default: v2
Match (string, optional)
Match filtered records (default:*)
Default: *
private_ip (*bool, optional)
The EC2 instance private IP. (default:false)
Default: false
vpc_id (*bool, optional)
The VPC ID for current EC2 instance. (default:false)
Default: false
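A brief sketch of enabling a few of these AWS metadata fields on the filter, as a filterAws fragment of a FluentbitAgent spec (illustrative values):

spec:
  filterAws:
    imds_version: v2
    az: true
    ec2_instance_id: true
    ec2_instance_type: true
    account_id: false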
FilterModify
FilterModify The Modify Filter plugin allows you to change records using rules and conditions.
conditions ([]FilterModifyCondition, optional)
FluentbitAgent Filter Modification Condition
rules ([]FilterModifyRule, optional)
FluentbitAgent Filter Modification Rule
FilterModifyRule
FilterModifyRule The Modify Filter plugin allows you to change records using rules and conditions.
Add (*FilterKeyValue, optional)
Add a key/value pair with key KEY and value VALUE if KEY does not exist
Copy (*FilterKeyValue, optional)
Copy a key/value pair with key KEY to COPIED_KEY if KEY exists AND COPIED_KEY does not exist
Hard_copy (*FilterKeyValue, optional)
Copy a key/value pair with key KEY to COPIED_KEY if KEY exists. If COPIED_KEY already exists, this field is overwritten
Hard_rename (*FilterKeyValue, optional)
Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists. If RENAMED_KEY already exists, this field is overwritten
Remove (*FilterKey, optional)
Remove a key/value pair with key KEY if it exists
Remove_regex (*FilterKey, optional)
Remove all key/value pairs with key matching regexp KEY
Remove_wildcard (*FilterKey, optional)
Remove all key/value pairs with key matching wildcard KEY
Rename (*FilterKeyValue, optional)
Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists AND RENAMED_KEY does not exist
Set (*FilterKeyValue, optional)
Add a key/value pair with key KEY and value VALUE. If KEY already exists, this field is overwritten
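Building on the rule types above, a sketch of a filterModify entry that adds a static key and renames another (the key/value field names follow the customParsers example later in this reference; the values are hypothetical):

spec:
  filterModify:
    - rules:
        - Add:
            key: cluster
            value: production
        - Rename:
            key: log
            value: message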
FilterModifyCondition
FilterModifyCondition The Modify Filter plugin allows you to change records using rules and conditions.
storage.total_limit_size
Limit the maximum number of Chunks in the filesystem for the current output logical destination.
Tag (string, optional)
Time_as_Integer (bool, optional)
Workers (*int, optional)
Available in Logging operator version 4.4 and later. Enables dedicated thread(s) for this output. Default value (2) is set since version 1.8.13. For previous versions is 0.
Fluentd port inside the container (24240 by default). The headless service port is controlled by this field as well. Note that the default ClusterIP service port is always 24240, regardless of this field.
Available in Logging operator version 4.4 and later. Configurable resource requirements for the drainer sidecar container. Default 20m cpu request, 20M memory limit
LoggingRouteSpec
LoggingRouteSpec defines the desired state of LoggingRoute
source (string, required)
Source identifies the logging that this policy applies to
targets (metav1.LabelSelector, required)
Targets refers to the list of logging resources specified by a label selector to forward logs to. Filtering of namespaces will happen based on the watchNamespaces and watchNamespaceSelector fields of the target logging resource.
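As a sketch, a LoggingRoute that forwards logs from one logging domain to every logging resource carrying a given label might look like this (the resource names and labels are hypothetical):

apiVersion: logging.banzaicloud.io/v1beta1
kind: LoggingRoute
metadata:
  name: tenant-route
spec:
  source: ops-logging
  targets:
    matchLabels:
      tenant: "true"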
LoggingRouteStatus
LoggingRouteStatus defines the actual state of the LoggingRoute
notices ([]string, optional)
Enumerate non-blocker issues the user should pay attention to
noticesCount (int, optional)
Summarize the number of notices for the CLI output
problems ([]string, optional)
Enumerate problems that prevent this route from taking effect and from populating the tenants field.
problemsCount (int, optional)
Summarize the number of problems for the CLI output
tenants ([]Tenant, optional)
Enumerate all loggings with all the destination namespaces expanded
Tenant
name (string, required)
namespaces ([]string, optional)
LoggingRoute
LoggingRoute (experimental)
Connects a log collector with log aggregators from other logging domains and routes relevant logs based on watch namespaces.
allowClusterResourcesFromAllNamespaces (bool, optional)
Allow configuration of cluster resources from any namespace. Mutually exclusive with the ControlNamespace restriction of cluster resources.
clusterDomain (*string, optional)
Cluster domain name to be used when templating URLs to services.
Default: “cluster.local.”
configCheck (ConfigCheck, optional)
ConfigCheck settings that apply to both fluentd and syslog-ng
controlNamespace (string, required)
Namespace for cluster wide configuration resources like ClusterFlow and ClusterOutput. This should be a protected namespace from regular users. Resources like fluentbit and fluentd will run in this namespace as well.
defaultFlow (*DefaultFlowSpec, optional)
Default flow for unmatched logs. This Flow configuration collects all logs that didn’t match any other Flow.
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resource in the future) in case there is a change in an immutable field that otherwise couldn’t be managed with a simple update.
errorOutputRef (string, optional)
GlobalOutput name to flush ERROR events to
flowConfigCheckDisabled (bool, optional)
Disable configuration check before applying new fluentd configuration.
flowConfigOverride (string, optional)
Override generated config. This is a raw configuration string for troubleshooting purposes.
fluentbit (*FluentbitSpec, optional)
FluentbitAgent daemonset configuration. Deprecated; it will be removed with the next major version. Migrate to the standalone NodeAgent resource.
WatchNamespaceSelector is a LabelSelector to find matching namespaces to watch as in WatchNamespaces
watchNamespaces ([]string, optional)
Limit namespaces to watch Flow and Output custom resources.
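As a sketch, namespace scoping can combine an explicit list with a selector (the names and labels below are hypothetical):

apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: tenant-a
spec:
  controlNamespace: logging
  watchNamespaces:
    - app-one
    - app-two
  watchNamespaceSelector:
    matchLabels:
      tenant: a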
ConfigCheck
labels (map[string]string, optional)
Labels to use for the configcheck pods on top of labels added by the operator by default. Default values can be overwritten.
strategy (ConfigCheckStrategy, optional)
Select the config check strategy to use. DryRun: Parse and validate configuration. StartWithTimeout: Start with given configuration and exit after specified timeout. Default: DryRun
timeoutSeconds (int, optional)
Configure timeout in seconds if strategy is StartWithTimeout
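For example, a Logging resource could opt into the StartWithTimeout strategy like this (a sketch with hypothetical names):

apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example-logging
spec:
  controlNamespace: logging
  configCheck:
    strategy: StartWithTimeout
    timeoutSeconds: 5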
LoggingStatus
LoggingStatus defines the observed state of Logging
configCheckResults (map[string]bool, optional)
Result of the config check. Under normal conditions there is a single item in the map with a bool value.
fluentdConfigName (string, optional)
Available in Logging operator version 4.5 and later. Name of the matched detached fluentd configuration object.
problems ([]string, optional)
Problems with the logging resource
problemsCount (int, optional)
Count of problems for printcolumn
syslogNGConfigName (string, optional)
Available in Logging operator version 4.5 and later. Name of the matched detached SyslogNG configuration object.
watchNamespaces ([]string, optional)
List of namespaces that watchNamespaces + watchNamespaceSelector is resolving to. Not set means all namespaces.
Logging
Logging is the Schema for the loggings API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (LoggingSpec, optional)
status (LoggingStatus, optional)
LoggingList
LoggingList contains a list of Logging
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]Logging, required)
DefaultFlowSpec
DefaultFlowSpec is a Flow for logs that did not match any other Flow
Set the coroutine stack size in bytes. The value must be greater than the page size of the running system. Don’t set a value that is too small (for example, 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. (default: 24576)
Default: 24576
customConfigSecret (string, optional)
daemonSet (*typeoverride.DaemonSet, optional)
disableKubernetesFilter (*bool, optional)
enableUpstream (*bool, optional)
enabled (*bool, optional)
extraVolumeMounts ([]*VolumeMount, optional)
filterAws (*FilterAws, optional)
filterKubernetes (FilterKubernetes, optional)
flush (int32, optional)
Set the flush time in seconds.nanoseconds format. The engine loop uses a Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins. (default: 1)
Default: 1
forwardOptions (*ForwardOptions, optional)
grace (int32, optional)
Set the grace time in seconds as an integer value. The engine loop uses a Grace timeout to define the wait time on exit. (default: 5)
Default: 5
inputTail (InputTail, optional)
livenessDefaultCheck (*bool, optional)
Default: true
logLevel (string, optional)
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if ‘debug’ is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled. (default: info)
SyslogNGClusterFlow
SyslogNGClusterFlow is the Schema for the syslog-ng clusterflows API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGClusterFlowSpec, optional)
status (SyslogNGFlowStatus, optional)
SyslogNGClusterFlowSpec
SyslogNGClusterFlowSpec is the Kubernetes spec for Flows
filters ([]SyslogNGFilter, optional)
globalOutputRefs ([]string, optional)
loggingRef (string, optional)
match (*SyslogNGMatch, optional)
outputMetrics ([]filter.MetricsProbe, optional)
Output metrics are applied before the log reaches the destination and contain output metadata like name, namespace, and scope. Scope shows whether the output is a local or global one. Available in Logging operator version 4.5 and later.
SyslogNGClusterFlowList
SyslogNGClusterFlowList contains a list of SyslogNGClusterFlow
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGClusterFlow, required)
1.13 - SyslogNGClusterOutput
SyslogNGClusterOutput
SyslogNGClusterOutput is the Schema for the syslog-ng clusteroutputs API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGClusterOutputSpec, required)
status (SyslogNGOutputStatus, optional)
SyslogNGClusterOutputSpec
SyslogNGClusterOutputSpec contains Kubernetes spec for SyslogNGClusterOutput
(SyslogNGOutputSpec, required)
enabledNamespaces ([]string, optional)
SyslogNGClusterOutputList
SyslogNGClusterOutputList contains a list of SyslogNGClusterOutput
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGClusterOutput, required)
1.14 - SyslogNGConfig
SyslogNGConfig
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGSpec, optional)
status (SyslogNGConfigStatus, optional)
SyslogNGConfigStatus
active (*bool, optional)
logging (string, optional)
problems ([]string, optional)
problemsCount (int, optional)
SyslogNGConfigList
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGConfig, required)
1.15 - SyslogNGFlowSpec
SyslogNGFlowSpec
SyslogNGFlowSpec is the Kubernetes spec for SyslogNGFlows
filters ([]SyslogNGFilter, optional)
globalOutputRefs ([]string, optional)
localOutputRefs ([]string, optional)
loggingRef (string, optional)
match (*SyslogNGMatch, optional)
outputMetrics ([]filter.MetricsProbe, optional)
Output metrics are applied before the log reaches the destination and contain output metadata like name, namespace, and scope. Scope shows whether the output is a local or global one. Available in Logging operator version 4.5 and later.
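A sketch of a SyslogNGFlow that matches on a pod label and sends the result to a namespaced output; the match syntax shown and all names are illustrative assumptions, not taken from this reference:

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGFlow
metadata:
  name: nginx-flow
  namespace: app
spec:
  match:
    regexp:
      value: json.kubernetes.labels.app
      pattern: nginx
      type: string
  localOutputRefs:
    - nginx-output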
SyslogNGFilter
Filter definition for SyslogNGFlowSpec
id (string, optional)
match (*filter.MatchConfig, optional)
parser (*filter.ParserConfig, optional)
rewrite ([]filter.RewriteConfig, optional)
SyslogNGFlow
Flow Kubernetes object
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGFlowSpec, optional)
status (SyslogNGFlowStatus, optional)
SyslogNGFlowList
FlowList contains a list of Flow
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGFlow, required)
1.16 - SyslogNGOutputSpec
SyslogNGOutputSpec
SyslogNGOutputSpec defines the desired state of SyslogNGOutput
Available in Logging operator version 4.5 and later. Parses the date automatically from the timestamp registered by the container runtime. Note: jsonKeyPrefix and jsonKeyDelim are respected. It is disabled by default; if enabled, the default settings parse the timestamp written by the container runtime and parsed by Fluent Bit using the cri or the docker parser.
format (*string, optional)
Default: “%FT%T.%f%z”
template (*string, optional)
Default (depending on JSONKeyPrefix): “${json.time}”
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the daemonset (and possibly other resource in the future) in case there is a change in an immutable field that otherwise couldn’t be managed with a simple update.
Global resources: ClusterFlow, ClusterOutput, SyslogNGClusterFlow, SyslogNGClusterOutput
The namespaced resources are only effective in their own namespace. Global resources are cluster wide.
You can create ClusterFlow, ClusterOutput, SyslogNGClusterFlow, and SyslogNGClusterOutput resources only in the controlNamespace, unless the allowClusterResourcesFromAllNamespaces option is enabled in the logging resource. This namespace MUST be a protected namespace so that only administrators can access it.
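To illustrate the placement rule, a cluster-wide output is created in the controlNamespace (here assumed to be logging); the nullout destination is used only as a placeholder and is an assumption about the available output plugins in your version:

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: discard-all
  # must be the controlNamespace of the Logging resource
  namespace: logging
spec:
  nullout: {}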
Set the coroutine stack size in bytes. The value must be greater than the page size of the running system. Don’t set a value that is too small (for example, 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. (default: 24576)
Default: 24576
customConfigSecret (string, optional)
customParsers (string, optional)
Available in Logging operator version 4.2 and later. Specify a custom parser file to load in addition to the default parsers file. It must be a valid key in the configmap specified by customConfig.
The following example defines a Fluent Bit parser that places the parsed containerd log messages into the log field instead of the message field.
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: containerd
spec:
  inputTail:
    Parser: cri-log-key
  # Parser that populates `log` instead of `message` to enable the Kubernetes filter's Merge_Log feature to work
  # Mind the indentation, otherwise Fluent Bit will parse the whole message into the `log` key
  customParsers: |
    [PARSER]
        Name cri-log-key
        Format regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<log>.*)$
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
  # Required key remap if one wants to rely on the existing auto-detected log key in the fluentd parser and concat filter, otherwise it should be omitted
  filterModify:
    - rules:
        - Rename:
            key: log
            value: message
Set the flush time in seconds.nanoseconds format. The engine loop uses a Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins. (default: 1)
Default: 1
forwardOptions (*ForwardOptions, optional)
grace (int32, optional)
Set the grace time in seconds as an integer value. The engine loop uses a Grace timeout to define the wait time on exit.
Default: 5
healthCheck (*HealthCheck, optional)
Available in Logging operator version 4.4 and later.
HostNetwork (bool, optional)
image (ImageSpec, optional)
inputTail (InputTail, optional)
labels (map[string]string, optional)
livenessDefaultCheck (bool, optional)
livenessProbe (*corev1.Probe, optional)
logLevel (string, optional)
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if ‘debug’ is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.
FluentbitStatus
FluentbitStatus defines the resource status for FluentbitAgent
FluentbitTLS
FluentbitTLS defines the TLS configs
enabled (*bool, required)
secretName (string, optional)
sharedKey (string, optional)
FluentbitTCPOutput
FluentbitTCPOutput defines the TCP output configuration.
json_date_format (string, optional)
Default: iso8601
json_date_key (string, optional)
Default: ts
Workers (*int, optional)
Available in Logging operator version 4.4 and later.
FluentbitNetwork
FluentbitNetwork defines network configuration for fluentbit
connectTimeout (*uint32, optional)
Sets the timeout for connecting to an upstream
Default: 10
connectTimeoutLogError (*bool, optional)
On connection timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message
Default: true
dnsMode (string, optional)
Sets the primary transport layer protocol used by the asynchronous DNS resolver for connections established.
Default: UDP (possible values: UDP or TCP)
dnsPreferIpv4 (*bool, optional)
Prioritize IPv4 DNS results when trying to establish a connection
Default: false
dnsResolver (string, optional)
Select the primary DNS resolver type.
Default: ASYNC (possible values: LEGACY or ASYNC)
keepalive (*bool, optional)
Whether or not TCP keepalive is used for the upstream connection
Default: true
keepaliveIdleTimeout (*uint32, optional)
How long in seconds a TCP keepalive connection can be idle before being recycled
Default: 30
keepaliveMaxRecycle (*uint32, optional)
How many times a TCP keepalive connection can be used before being recycled
Default: 0, disabled
sourceAddress (string, optional)
Specify network address (interface) to use for connection and data traffic.
Default: disabled
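A short sketch of tuning these options on a FluentbitAgent, assuming the spec exposes them under a network field (the field name and values are illustrative):

spec:
  network:
    connectTimeout: 15
    keepalive: true
    keepaliveIdleTimeout: 60
    keepaliveMaxRecycle: 100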
BufferStorage
BufferStorage is the Service Section Configuration of fluent-bit
storage.backlog.mem_limit (string, optional)
If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer, these are called backlog data. This option configure a hint of maximum value of memory to use when processing these records.
Default: 5M
storage.checksum (string, optional)
Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm.
When enabled, irrecoverable chunks will be deleted during runtime, and any other irrecoverable chunk located in the configured storage path directory will be deleted when Fluent Bit starts.
Default: Off
storage.metrics (string, optional)
Available in Logging operator version 4.4 and later. If the http_server option has been enabled in the main Service configuration section, this option registers a new endpoint where internal metrics of the storage layer can be consumed.
Default: Off
storage.path (string, optional)
Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.
storage.sync (string, optional)
Configure the synchronization mode used to store the data into the file system. It can take the values normal or full.
Default: normal
HealthCheck
HealthCheck configuration. Available in Logging operator version 4.4 and later.
hcErrorsCount (int, optional)
The error count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period.
Default: 5
hcPeriod (int, optional)
The time period (in seconds) to count the error and retry failure data point.
Default: 60
hcRetryFailureCount (int, optional)
The retry failure count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period
Default: 5
HotReload
HotReload configuration
image (ImageSpec, optional)
resources (corev1.ResourceRequirements, optional)
InputTail
InputTail defines FluentbitAgent tail input configuration The tail input plugin allows to monitor one or several text files. It has a similar behavior like tail -f shell command.
Buffer_Chunk_Size (string, optional)
Set the buffer size for HTTP client when reading responses from Kubernetes API server. The value must be according to the Unit Size specification.
Default: 32k
Buffer_Max_Size (string, optional)
Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g: very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceed this limit, the file is removed from the monitored file list. The value must be according to the Unit Size specification.
Default: Buffer_Chunk_Size
DB (*string, optional)
Specify the database file to keep track of monitored files and offsets.
DB.journal_mode (string, optional)
sets the journal mode for databases (WAL). Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems.
Default: WAL
DB.locking (*bool, optional)
Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database but it restrict any external tool to query the content.
Default: true
DB_Sync (string, optional)
Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine do synchronization to disk, for more details about each option please refer to this section.
Default: Full
Docker_Mode (string, optional)
If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.
Default: Off
Docker_Mode_Flush (string, optional)
Wait period time in seconds to flush queued unfinished split lines.
Default: 4
Docker_Mode_Parser (string, optional)
Specify an optional parser for the first line of the docker multiline mode.
Exclude_Path (string, optional)
Set one or multiple shell patterns separated by commas to exclude files matching a certain criteria, e.g: exclude_path=.gz,.zip
Ignore_Older (string, optional)
Ignores files that have been last modified before this time in seconds. Supports m,h,d (minutes, hours,days) syntax. Default behavior is to read all specified files.
Key (string, optional)
When a message is unstructured (no parser applied), it’s appended as a string under the key name log. This option allows to define an alternative name for that key.
Default: log
Mem_Buf_Limit (string, optional)
Set a limit of memory that Tail plugin can use when appending data to the Engine. If the limit is reach, it will be paused; when the data is flushed it resumes.
Multiline (string, optional)
If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.
Default: Off
Multiline_Flush (string, optional)
Wait period time in seconds to process queued multiline messages
Default: 4
multiline.parser ([]string, optional)
Specify one or multiple parser definitions to apply to the content. Part of the new Multiline Core support in 1.8
Default: ""
Parser (string, optional)
Specify the name of a parser to interpret the entry as a structured message.
Parser_Firstline (string, optional)
Name of the parser that machs the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture)
Parser_N ([]string, optional)
Optional-extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN.
Path (string, optional)
Pattern specifying a specific log files or multiple ones through the use of common wildcards.
Path_Key (string, optional)
If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.
Read_From_Head (bool, optional)
For new discovered files on start (without a database offset/position), read the content from the head of the file, not tail.
Refresh_Interval (string, optional)
The interval of refreshing the list of watched files in seconds.
Default: 60
Rotate_Wait (string, optional)
Specify the number of extra time in seconds to monitor a file once is rotated in case some pending data is flushed.
Default: 5
Skip_Long_Lines (string, optional)
When a monitored file reach it buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alter that behavior and instruct Fluent Bit to skip long lines and continue processing other lines that fits into the buffer size.
Default: Off
storage.type (string, optional)
Specify the buffering mechanism to use. It can be memory or filesystem.
Default: memory
Tag (string, optional)
Set a tag (with regex-extract fields) that will be placed on lines read.
Tag_Regex (string, optional)
Set a regex to extract fields from the file.
FilterKubernetes
FilterKubernetes Fluent Bit Kubernetes Filter allows to enrich your log files with Kubernetes metadata.
Annotations (string, optional)
Include Kubernetes resource annotations in the extra metadata.
Default: On
Buffer_Size (string, optional)
Set the buffer size for HTTP client when reading responses from Kubernetes API server. The value must be according to the Unit Size specification. A value of 0 results in no limit, and the buffer will expand as-needed. Note that if pod specifications exceed the buffer limit, the API response will be discarded when retrieving metadata, and some kubernetes metadata will fail to be injected to the logs. If this value is empty we will set it “0”.
Default: “0”
Cache_Use_Docker_Id (string, optional)
When enabled, metadata will be fetched from K8s when docker_id is changed.
Default: Off
DNS_Retries (string, optional)
DNS lookup retries N times until the network start working
Default: 6
DNS_Wait_Time (string, optional)
DNS lookup interval between network status checks
Default: 30
Dummy_Meta (string, optional)
If set, use dummy-meta data (for test/dev purposes)
Default: Off
K8S-Logging.Exclude (string, optional)
Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in Kubernetes Annotations section).
Default: On
K8S-Logging.Parser (string, optional)
Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in Kubernetes Annotations section)
Default: Off
Keep_Log (string, optional)
When Keep_Log is disabled, the log field is removed from the incoming message once it has been successfully merged (Merge_Log must be enabled as well).
Default: On
Kube_CA_File (string, optional)
CA certificate file (default:/var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
Configurable TTL for K8s cached metadata. By default, it is set to 0 which means TTL for cache entries is disabled and cache entries are evicted at random when capacity is reached. In order to enable this option, you should set the number to a time interval. For example, set this value to 60 or 60s and cache entries which have been created more than 60s will be evicted.
Default: 0
Kube_meta_preload_cache_dir (string, optional)
If set, Kubernetes meta-data can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta
Kube_Tag_Prefix (string, optional)
When the source records comes from Tail input plugin, this option allows to specify what’s the prefix used in Tail configuration. (default:kube.var.log.containers.)
Token TTL configurable ’time to live’ for the K8s token. By default, it is set to 600 seconds. After this time, the token is reloaded from Kube_Token_File or the Kube_Token_Command. (default:“600”)
Default: 600
Kube_URL (string, optional)
API Server end-point.
Default: https://kubernetes.default.svc:443
Kubelet_Port (string, optional)
kubelet port using for HTTP request, this only works when Use_Kubelet set to On
Default: 10250
Labels (string, optional)
Include Kubernetes resource labels in the extra metadata.
Default: On
Match (string, optional)
Match filtered records (default:kube.*)
Default: kubernetes.*
Merge_Log (string, optional)
When enabled, it checks if the log field content is a JSON string map, if so, it append the map fields as part of the log structure. (default:Off)
Default: On
Merge_Log_Key (string, optional)
When Merge_Log is enabled, the filter tries to assume the log field from the incoming message is a JSON string message and make a structured representation of it at the same level of the log field in the map. Now if Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are inserted under the new key.
Merge_Log_Trim (string, optional)
When Merge_Log is enabled, trim (remove possible \n or \r) field values.
Default: On
Merge_Parser (string, optional)
Optional parser name to specify how to parse the data contained in the log key. Recommended use is for developers or testing only.
Regex_Parser (string, optional)
Set an alternative Parser to process record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to parser filter-kube-test as an example).
tls.debug (string, optional)
Debug level between 0 (nothing) and 4 (every detail).
Default: -1
tls.verify (string, optional)
When enabled, turns on certificate validation when connecting to the Kubernetes API server.
Default: On
Use_Journal (string, optional)
When enabled, the filter reads logs coming in Journald format.
Default: Off
Use_Kubelet (string, optional)
This is an optional feature flag to get metadata information from kubelet instead of calling Kube Server API to enhance the log.
Default: Off
FilterAws
FilterAws The AWS Filter Enriches logs with AWS Metadata.
az (*bool, optional)
The availability zone (default:true).
Default: true
account_id (*bool, optional)
The account ID for current EC2 instance. (default:false)
Default: false
ami_id (*bool, optional)
The EC2 instance image id. (default:false)
Default: false
ec2_instance_id (*bool, optional)
The EC2 instance ID. (default:true)
Default: true
ec2_instance_type (*bool, optional)
The EC2 instance type. (default:false)
Default: false
hostname (*bool, optional)
The hostname for current EC2 instance. (default:false)
Default: false
imds_version (string, optional)
Specify which version of the instance metadata service to use. Valid values are ‘v1’ or ‘v2’ (default).
Default: v2
Match (string, optional)
Match filtered records (default:*)
Default: *
private_ip (*bool, optional)
The EC2 instance private ip. (default:false)
Default: false
vpc_id (*bool, optional)
The VPC ID for current EC2 instance. (default:false)
Default: false
FilterModify
FilterModify The Modify Filter plugin allows you to change records using rules and conditions.
conditions ([]FilterModifyCondition, optional)
FluentbitAgent Filter Modification Condition
rules ([]FilterModifyRule, optional)
FluentbitAgent Filter Modification Rule
FilterModifyRule
FilterModifyRule The Modify Filter plugin allows you to change records using rules and conditions.
Add (*FilterKeyValue, optional)
Add a key/value pair with key KEY and value VALUE if KEY does not exist
Copy (*FilterKeyValue, optional)
Copy a key/value pair with key KEY to COPIED_KEY if KEY exists AND COPIED_KEY does not exist
Hard_copy (*FilterKeyValue, optional)
Copy a key/value pair with key KEY to COPIED_KEY if KEY exists. If COPIED_KEY already exists, this field is overwritten
Hard_rename (*FilterKeyValue, optional)
Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists. If RENAMED_KEY already exists, this field is overwritten
Remove (*FilterKey, optional)
Remove a key/value pair with key KEY if it exists
Remove_regex (*FilterKey, optional)
Remove all key/value pairs with key matching regexp KEY
Remove_wildcard (*FilterKey, optional)
Remove all key/value pairs with key matching wildcard KEY
Rename (*FilterKeyValue, optional)
Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists AND RENAMED_KEY does not exist
Set (*FilterKeyValue, optional)
Add a key/value pair with key KEY and value VALUE. If KEY already exists, this field is overwritten
FilterModifyCondition
FilterModifyCondition The Modify Filter plugin allows you to change records using rules and conditions.
storage.total_limit_size Limit the maximum number of Chunks in the filesystem for the current output logical destination.
Tag (string, optional)
Time_as_Integer (bool, optional)
Workers (*int, optional)
Available in Logging operator version 4.4 and later. Enables dedicated thread(s) for this output. Default value (2) is set since version 1.8.13. For previous versions is 0.
Fluentd port inside the container (24240 by default). The headless service port is controlled by this field as well. Note that the default ClusterIP service port is always 24240, regardless of this field.
Available in Logging operator version 4.4 and later. Configurable resource requirements for the drainer sidecar container. Default 20m cpu request, 20M memory limit
LoggingRouteSpec defines the desired state of LoggingRoute
source (string, required)
Source identifies the logging that this policy applies to
targets (metav1.LabelSelector, required)
Targets refers to the list of logging resources specified by a label selector to forward logs to. Filtering of namespaces will happen based on the watchNamespaces and watchNamespaceSelector fields of the target logging resource.
LoggingRouteStatus
LoggingRouteStatus defines the actual state of the LoggingRoute
notices ([]string, optional)
Enumerate non-blocker issues the user should pay attention to
noticesCount (int, optional)
Summarize the number of notices for the CLI output
problems ([]string, optional)
Enumerate problems that prohibits this route to take effect and populate the tenants field
problemsCount (int, optional)
Summarize the number of problems for the CLI output
tenants ([]Tenant, optional)
Enumerate all loggings with all the destination namespaces expanded
Tenant
name (string, required)
namespaces ([]string, optional)
LoggingRoute
LoggingRoute (experimental)
+Connects a log collector with log aggregators from other logging domains and routes relevant logs based on watch namespaces
Allow configuration of cluster resources from any namespace. Mutually exclusive with ControlNamespace restriction of Cluster resources
clusterDomain (*string, optional)
Cluster domain name to be used when templating URLs to services .
Default: “cluster.local.”
configCheck (ConfigCheck, optional)
ConfigCheck settings that apply to both fluentd and syslog-ng
controlNamespace (string, required)
Namespace for cluster wide configuration resources like ClusterFlow and ClusterOutput. This should be a protected namespace from regular users. Resources like fluentbit and fluentd will run in this namespace as well.
defaultFlow (*DefaultFlowSpec, optional)
Default flow for unmatched logs. This Flow configuration collects all logs that didn’t matched any other Flow.
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resource in the future) in case there is a change in an immutable field that otherwise couldn’t be managed with a simple update.
errorOutputRef (string, optional)
GlobalOutput name to flush ERROR events to
flowConfigCheckDisabled (bool, optional)
Disable configuration check before applying new fluentd configuration.
flowConfigOverride (string, optional)
Override generated config. This is a raw configuration string for troubleshooting purposes.
fluentbit (*FluentbitSpec, optional)
FluentbitAgent daemonset configuration. Deprecated, will be removed with next major version Migrate to the standalone NodeAgent resource
WatchNamespaceSelector is a LabelSelector to find matching namespaces to watch as in WatchNamespaces
watchNamespaces ([]string, optional)
Limit namespaces to watch Flow and Output custom resources.
ConfigCheck
labels (map[string]string, optional)
Labels to use for the configcheck pods on top of labels added by the operator by default. Default values can be overwritten.
strategy (ConfigCheckStrategy, optional)
Select the config check strategy to use. DryRun: Parse and validate configuration. StartWithTimeout: Start with given configuration and exit after specified timeout. Default: DryRun
timeoutSeconds (int, optional)
Configure timeout in seconds if strategy is StartWithTimeout
LoggingStatus
LoggingStatus defines the observed state of Logging
configCheckResults (map[string]bool, optional)
Result of the config check. Under normal conditions there is a single item in the map with a bool value.
fluentdConfigName (string, optional)
Available in Logging operator version 4.5 and later. Name of the matched detached fluentd configuration object.
problems ([]string, optional)
Problems with the logging resource
problemsCount (int, optional)
Count of problems for printcolumn
syslogNGConfigName (string, optional)
Available in Logging operator version 4.5 and later. Name of the matched detached SyslogNG configuration object.
watchNamespaces ([]string, optional)
List of namespaces that watchNamespaces + watchNamespaceSelector is resolving to. Not set means all namespaces.
Logging
Logging is the Schema for the loggings API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (LoggingSpec, optional)
status (LoggingStatus, optional)
LoggingList
LoggingList contains a list of Logging
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]Logging, required)
DefaultFlowSpec
DefaultFlowSpec is a Flow for logs that did not match any other Flow
Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don’t set too small value (say 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. (default: 24576)
Default: 24576
customConfigSecret (string, optional)
daemonSet (*typeoverride.DaemonSet, optional)
disableKubernetesFilter (*bool, optional)
enableUpstream (*bool, optional)
enabled (*bool, optional)
extraVolumeMounts ([]*VolumeMount, optional)
filterAws (*FilterAws, optional)
filterKubernetes (FilterKubernetes, optional)
flush (int32, optional)
Set the flush time in seconds.nanoseconds. The engine loop uses a Flush timeout to define when is required to flush the records ingested by input plugins through the defined output plugins. (default: 1)
Default: 1
forwardOptions (*ForwardOptions, optional)
grace (int32, optional)
Set the grace time in seconds as Integer value. The engine loop uses a Grace timeout to define wait time on exit (default: 5)
Default: 5
inputTail (InputTail, optional)
livenessDefaultCheck (*bool, optional)
Default: true
logLevel (string, optional)
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if ‘debug’ is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled. (default: info)
SyslogNGClusterFlow is the Schema for the syslog-ng clusterflows API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGClusterFlowSpec, optional)
status (SyslogNGFlowStatus, optional)
SyslogNGClusterFlowSpec
SyslogNGClusterFlowSpec is the Kubernetes spec for Flows
filters ([]SyslogNGFilter, optional)
globalOutputRefs ([]string, optional)
loggingRef (string, optional)
match (*SyslogNGMatch, optional)
outputMetrics ([]filter.MetricsProbe, optional)
Output metrics are applied before the log reaches the destination and contain output metadata like: name,namespace and scope. Scope shows whether the output is a local or global one. Available in Logging operator version 4.5 and later.
SyslogNGClusterFlowList
SyslogNGClusterFlowList contains a list of SyslogNGClusterFlow
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGClusterFlow, required)
+
13 - SyslogNGClusterOutput
SyslogNGClusterOutput
SyslogNGClusterOutput is the Schema for the syslog-ng clusteroutputs API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGClusterOutputSpec, required)
status (SyslogNGOutputStatus, optional)
SyslogNGClusterOutputSpec
SyslogNGClusterOutputSpec contains Kubernetes spec for SyslogNGClusterOutput
(SyslogNGOutputSpec, required)
enabledNamespaces ([]string, optional)
SyslogNGClusterOutputList
SyslogNGClusterOutputList contains a list of SyslogNGClusterOutput
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGClusterOutput, required)
+
14 - SyslogNGConfig
SyslogNGConfig
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGSpec, optional)
status (SyslogNGConfigStatus, optional)
SyslogNGConfigStatus
active (*bool, optional)
logging (string, optional)
problems ([]string, optional)
problemsCount (int, optional)
SyslogNGConfigList
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGConfig, required)
+
15 - SyslogNGFlowSpec
SyslogNGFlowSpec
SyslogNGFlowSpec is the Kubernetes spec for SyslogNGFlows
filters ([]SyslogNGFilter, optional)
globalOutputRefs ([]string, optional)
localOutputRefs ([]string, optional)
loggingRef (string, optional)
match (*SyslogNGMatch, optional)
outputMetrics ([]filter.MetricsProbe, optional)
Output metrics are applied before the log reaches the destination and contain output metadata like: name,namespace and scope. Scope shows whether the output is a local or global one. Available in Logging operator version 4.5 and later.
SyslogNGFilter
Filter definition for SyslogNGFlowSpec
id (string, optional)
match (*filter.MatchConfig, optional)
parser (*filter.ParserConfig, optional)
rewrite ([]filter.RewriteConfig, optional)
SyslogNGFlow
Flow Kubernetes object
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGFlowSpec, optional)
status (SyslogNGFlowStatus, optional)
SyslogNGFlowList
FlowList contains a list of Flow
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]SyslogNGFlow, required)
+
16 - SyslogNGOutputSpec
SyslogNGOutputSpec
SyslogNGOutputSpec defines the desired state of SyslogNGOutput
Available in Logging operator version 4.5 and later. Parses date automatically from the timestamp registered by the container runtime. Note: jsonKeyPrefix and jsonKeyDelim are respected.
Available in Logging operator version 4.5 and later.
Parses date automatically from the timestamp registered by the container runtime.
+Note: jsonKeyPrefix and jsonKeyDelim are respected.
+It is disabled by default, but if enabled, then the default settings parse the timestamp written by the container runtime and parsed by Fluent Bit using the cri or the docker parser.
format (*string, optional)
Default: “%FT%T.%f%z”
template (*string, optional)
Default(depending on JSONKeyPrefix): “${json.time}”
Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don’t set too small value (say 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. (default: 24576)
Default: 24576
customConfigSecret (string, optional)
customParsers (string, optional)
Available in Logging operator version 4.2 and later. Specify a custom parser file to load in addition to the default parsers file. It must be a valid key in the configmap specified by customConfig.
The following example defines a Fluentd parser that places the parsed containerd log messages into the log field instead of the message field.
apiVersion:logging.banzaicloud.io/v1beta1
+kind:FluentbitAgent
+metadata:
+name:containerd
+spec:
+inputTail:
+Parser:cri-log-key
+# Parser that populates `log` instead of `message` to enable the Kubernetes filter's Merge_Log feature to work
+# Mind the indentation, otherwise Fluent Bit will parse the whole message into the `log` key
+customParsers:|
+ [PARSER]
+ Name cri-log-key
+ Format regex
+ Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<log>.*)$
+ Time_Key time
+ Time_Format %Y-%m-%dT%H:%M:%S.%L%z
+# Required key remap if one wants to rely on the existing auto-detected log key in the fluentd parser and concat filter otherwise should be omitted
+filterModify:
+- rules:
+- Rename:
+key:log
+value:message
+
Set the flush time in seconds.nanoseconds. The engine loop uses a Flush timeout to define when is required to flush the records ingested by input plugins through the defined output plugins. (default: 1)
Default: 1
forwardOptions (*ForwardOptions, optional)
grace (int32, optional)
Set the grace time in seconds as Integer value. The engine loop uses a Grace timeout to define wait time on exit.
Default: 5
healthCheck (*HealthCheck, optional)
Available in Logging operator version 4.4 and later.
HostNetwork (bool, optional)
image (ImageSpec, optional)
inputTail (InputTail, optional)
labels (map[string]string, optional)
livenessDefaultCheck (bool, optional)
livenessProbe (*corev1.Probe, optional)
logLevel (string, optional)
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if ‘debug’ is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled.
FluentbitStatus defines the resource status for FluentbitAgent
FluentbitTLS
FluentbitTLS defines the TLS configs
enabled (*bool, required)
secretName (string, optional)
sharedKey (string, optional)
FluentbitTCPOutput
FluentbitTCPOutput defines the TLS configs
json_date_format (string, optional)
Default: iso8601
json_date_key (string, optional)
Default: ts
Workers (*int, optional)
Available in Logging operator version 4.4 and later.
FluentbitNetwork
FluentbitNetwork defines network configuration for fluentbit
connectTimeout (*uint32, optional)
Sets the timeout for connecting to an upstream
Default: 10
connectTimeoutLogError (*bool, optional)
On connection timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message
Default: true
dnsMode (string, optional)
Sets the primary transport layer protocol used by the asynchronous DNS resolver for connections established
Default: UDP, UDP or TCP
dnsPreferIpv4 (*bool, optional)
Prioritize IPv4 DNS results when trying to establish a connection
Default: false
dnsResolver (string, optional)
Select the primary DNS resolver type
Default: ASYNC, LEGACY or ASYNC
keepalive (*bool, optional)
Whether or not TCP keepalive is used for the upstream connection
Default: true
keepaliveIdleTimeout (*uint32, optional)
How long in seconds a TCP keepalive connection can be idle before being recycled
Default: 30
keepaliveMaxRecycle (*uint32, optional)
How many times a TCP keepalive connection can be used before being recycled
Default: 0, disabled
sourceAddress (string, optional)
Specify network address (interface) to use for connection and data traffic.
Default: disabled
BufferStorage
BufferStorage is the Service Section Configuration of fluent-bit
storage.backlog.mem_limit (string, optional)
If storage.path is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer, these are called backlog data. This option configure a hint of maximum value of memory to use when processing these records.
Default: 5M
storage.checksum (string, optional)
Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm.
When enabled, irrecoverable chunks will be deleted during runtime, and any other irrecoverable chunk located in the configured storage path directory will be deleted when Fluent Bit starts.
Default: Off
storage.metrics (string, optional)
Available in Logging operator version 4.4 and later. If the http_server option has been enabled in the main Service configuration section, this option registers a new endpoint where internal metrics of the storage layer can be consumed.
Default: Off
storage.path (string, optional)
Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering.
storage.sync (string, optional)
Configure the synchronization mode used to store the data into the file system. It can take the values normal or full.
Default: normal
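For example, filesystem buffering could be enabled with a fragment like the following (a sketch only; the bufferStorage key and the storage.* option names are taken from the fields above, the path is an assumption):
spec:
  bufferStorage:
    storage.path: /buffers
    storage.sync: normal
    storage.backlog.mem_limit: 5M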
HealthCheck
HealthCheck configuration. Available in Logging operator version 4.4 and later.
hcErrorsCount (int, optional)
The error count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period.
Default: 5
hcPeriod (int, optional)
The time period (in seconds) to count the error and retry failure data point.
Default: 60
hcRetryFailureCount (int, optional)
The retry failure count to meet the unhealthy requirement, this is a sum for all output plugins in a defined HC_Period
Default: 5
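A minimal sketch of the corresponding FluentbitAgent fragment (the values shown are the documented defaults):
spec:
  healthCheck:
    hcErrorsCount: 5
    hcPeriod: 60
    hcRetryFailureCount: 5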
HotReload
HotReload configuration
image (ImageSpec, optional)
resources (corev1.ResourceRequirements, optional)
InputTail
InputTail defines the FluentbitAgent tail input configuration. The tail input plugin allows you to monitor one or several text files. It has a behavior similar to the tail -f shell command.
Buffer_Chunk_Size (string, optional)
Set the initial buffer size to read file data. This value is also used to increase the buffer size. The value must be according to the Unit Size specification.
Default: 32k
Buffer_Max_Size (string, optional)
Set the limit of the buffer size per monitored file. When a buffer needs to be increased (e.g: very long lines), this value is used to restrict how much the memory buffer can grow. If reading a file exceeds this limit, the file is removed from the monitored file list. The value must be according to the Unit Size specification.
Default: Buffer_Chunk_Size
DB (*string, optional)
Specify the database file to keep track of monitored files and offsets.
DB.journal_mode (string, optional)
Sets the journal mode for databases (WAL). Enabling WAL provides higher performance. Note that WAL is not compatible with shared network file systems.
Default: WAL
DB.locking (*bool, optional)
Specify that the database will be accessed only by Fluent Bit. Enabling this feature helps to increase performance when accessing the database, but it restricts any external tool from querying the content.
Default: true
DB_Sync (string, optional)
Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off. This flag affects how the internal SQLite engine does synchronization to disk; for more details about each option, please refer to this section.
Default: Full
Docker_Mode (string, optional)
If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above. This mode cannot be used at the same time as Multiline.
Default: Off
Docker_Mode_Flush (string, optional)
Wait period time in seconds to flush queued unfinished split lines.
Default: 4
Docker_Mode_Parser (string, optional)
Specify an optional parser for the first line of the docker multiline mode.
Exclude_Path (string, optional)
Set one or multiple shell patterns separated by commas to exclude files matching a certain criteria, e.g: exclude_path=.gz,.zip
Ignore_Older (string, optional)
Ignores files that have been last modified before this time in seconds. Supports m, h, d (minutes, hours, days) syntax. Default behavior is to read all specified files.
Key (string, optional)
When a message is unstructured (no parser applied), it is appended as a string under the key name log. This option allows you to define an alternative name for that key.
Default: log
Mem_Buf_Limit (string, optional)
Set a limit of memory that the Tail plugin can use when appending data to the Engine. If the limit is reached, the plugin is paused; when the data is flushed, it resumes.
Multiline (string, optional)
If enabled, the plugin will try to discover multiline messages and use the proper parsers to compose the outgoing messages. Note that when this option is enabled the Parser option is not used.
Default: Off
Multiline_Flush (string, optional)
Wait period time in seconds to process queued multiline messages
Default: 4
multiline.parser ([]string, optional)
Specify one or multiple parser definitions to apply to the content. Part of the new Multiline Core support in 1.8
Default: ""
Parser (string, optional)
Specify the name of a parser to interpret the entry as a structured message.
Parser_Firstline (string, optional)
Name of the parser that matches the beginning of a multiline message. Note that the regular expression defined in the parser must include a group name (named capture).
Parser_N ([]string, optional)
Optional extra parser to interpret and structure multiline entries. This option can be used to define multiple parsers, e.g: Parser_1 ab1, Parser_2 ab2, Parser_N abN.
Path (string, optional)
Pattern specifying a specific log file or multiple ones through the use of common wildcards.
Path_Key (string, optional)
If enabled, it appends the name of the monitored file as part of the record. The value assigned becomes the key in the map.
Read_From_Head (bool, optional)
For newly discovered files on start (without a database offset/position), read the content from the head of the file, not the tail.
Refresh_Interval (string, optional)
The interval of refreshing the list of watched files in seconds.
Default: 60
Rotate_Wait (string, optional)
Specify the additional time in seconds to monitor a file once it is rotated, in case some pending data needs to be flushed.
Default: 5
Skip_Long_Lines (string, optional)
When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size), the default behavior is to stop monitoring that file. Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines and continue processing other lines that fit into the buffer size.
Default: Off
storage.type (string, optional)
Specify the buffering mechanism to use. It can be memory or filesystem.
Default: memory
Tag (string, optional)
Set a tag (with regex-extract fields) that will be placed on lines read.
Tag_Regex (string, optional)
Set a regex to extract fields from the file.
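The following fragment is a hedged sketch of a tail input configuration using some of the options above (the path and parser names are illustrative assumptions):
spec:
  inputTail:
    storage.type: filesystem
    Path: /var/log/containers/*.log
    multiline.parser:
      - cri
      - docker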
FilterKubernetes
FilterKubernetes Fluent Bit Kubernetes Filter allows you to enrich your log files with Kubernetes metadata.
Annotations (string, optional)
Include Kubernetes resource annotations in the extra metadata.
Default: On
Buffer_Size (string, optional)
Set the buffer size for the HTTP client when reading responses from the Kubernetes API server. The value must be according to the Unit Size specification. A value of 0 results in no limit, and the buffer will expand as needed. Note that if pod specifications exceed the buffer limit, the API response will be discarded when retrieving metadata, and some Kubernetes metadata will fail to be injected into the logs. If this value is empty, it is set to "0".
Default: “0”
Cache_Use_Docker_Id (string, optional)
When enabled, metadata will be fetched from K8s when docker_id is changed.
Default: Off
DNS_Retries (string, optional)
DNS lookup retries N times until the network starts working
Default: 6
DNS_Wait_Time (string, optional)
DNS lookup interval between network status checks
Default: 30
Dummy_Meta (string, optional)
If set, use dummy-meta data (for test/dev purposes)
Default: Off
K8S-Logging.Exclude (string, optional)
Allow Kubernetes Pods to exclude their logs from the log processor (read more about it in Kubernetes Annotations section).
Default: On
K8S-Logging.Parser (string, optional)
Allow Kubernetes Pods to suggest a pre-defined Parser (read more about it in Kubernetes Annotations section)
Default: Off
Keep_Log (string, optional)
When Keep_Log is disabled, the log field is removed from the incoming message once it has been successfully merged (Merge_Log must be enabled as well).
Default: On
Kube_CA_File (string, optional)
CA certificate file (default:/var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
Kube_Meta_Cache_TTL (string, optional)
Configurable TTL for K8s cached metadata. By default, it is set to 0, which means TTL for cache entries is disabled and cache entries are evicted at random when capacity is reached. To enable this option, set the value to a time interval. For example, set this value to 60 or 60s and cache entries which have been created more than 60s ago will be evicted.
Default: 0
Kube_meta_preload_cache_dir (string, optional)
If set, Kubernetes meta-data can be cached/pre-loaded from files in JSON format in this directory, named as namespace-pod.meta
Kube_Tag_Prefix (string, optional)
When the source records come from the Tail input plugin, this option allows you to specify the prefix used in the Tail configuration. (default:kube.var.log.containers.)
Kube_Token_TTL (string, optional)
Configurable 'time to live' for the K8s token. By default, it is set to 600 seconds. After this time, the token is reloaded from Kube_Token_File or the Kube_Token_Command. (default: "600")
Default: 600
Kube_URL (string, optional)
API Server end-point.
Default: https://kubernetes.default.svc:443
Kubelet_Port (string, optional)
Kubelet port used for HTTP requests; this only works when Use_Kubelet is set to On.
Default: 10250
Labels (string, optional)
Include Kubernetes resource labels in the extra metadata.
Default: On
Match (string, optional)
Match filtered records (default:kube.*)
Default: kubernetes.*
Merge_Log (string, optional)
When enabled, it checks if the log field content is a JSON string map; if so, it appends the map fields as part of the log structure. (default: Off)
Default: On
Merge_Log_Key (string, optional)
When Merge_Log is enabled, the filter tries to assume the log field from the incoming message is a JSON string message and makes a structured representation of it at the same level as the log field in the map. If Merge_Log_Key is set (a string name), all the new structured fields taken from the original log content are inserted under the new key.
Merge_Log_Trim (string, optional)
When Merge_Log is enabled, trim (remove possible \n or \r) field values.
Default: On
Merge_Parser (string, optional)
Optional parser name to specify how to parse the data contained in the log key. Recommended use is for developers or testing only.
Regex_Parser (string, optional)
Set an alternative Parser to process record Tag and extract pod_name, namespace_name, container_name and docker_id. The parser must be registered in a parsers file (refer to parser filter-kube-test as an example).
tls.debug (string, optional)
Debug level between 0 (nothing) and 4 (every detail).
Default: -1
tls.verify (string, optional)
When enabled, turns on certificate validation when connecting to the Kubernetes API server.
Default: On
Use_Journal (string, optional)
When enabled, the filter reads logs coming in Journald format.
Default: Off
Use_Kubelet (string, optional)
This is an optional feature flag to get metadata information from kubelet instead of calling Kube Server API to enhance the log.
Default: Off
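A hedged sketch of a filterKubernetes fragment using a few of the options above (On/Off values are quoted because these fields are strings):
spec:
  filterKubernetes:
    Merge_Log: "On"
    Keep_Log: "Off"
    Use_Kubelet: "Off"
    tls.verify: "On"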
FilterAws
FilterAws The AWS Filter enriches logs with AWS metadata.
az (*bool, optional)
The availability zone (default:true).
Default: true
account_id (*bool, optional)
The account ID for current EC2 instance. (default:false)
Default: false
ami_id (*bool, optional)
The EC2 instance image id. (default:false)
Default: false
ec2_instance_id (*bool, optional)
The EC2 instance ID. (default:true)
Default: true
ec2_instance_type (*bool, optional)
The EC2 instance type. (default:false)
Default: false
hostname (*bool, optional)
The hostname for current EC2 instance. (default:false)
Default: false
imds_version (string, optional)
Specify which version of the instance metadata service to use. Valid values are ‘v1’ or ‘v2’ (default).
Default: v2
Match (string, optional)
Match filtered records (default:*)
Default: *
private_ip (*bool, optional)
The EC2 instance private ip. (default:false)
Default: false
vpc_id (*bool, optional)
The VPC ID for current EC2 instance. (default:false)
Default: false
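For example, on EC2 the AWS filter could be enabled with a fragment like this (a sketch only; which metadata fields you enable is up to you):
spec:
  filterAws:
    imds_version: v2
    az: true
    ec2_instance_id: true
    ec2_instance_type: true
    hostname: true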
FilterModify
FilterModify The Modify Filter plugin allows you to change records using rules and conditions.
conditions ([]FilterModifyCondition, optional)
FluentbitAgent Filter Modification Condition
rules ([]FilterModifyRule, optional)
FluentbitAgent Filter Modification Rule
FilterModifyRule
FilterModifyRule The Modify Filter plugin allows you to change records using rules and conditions.
Add (*FilterKeyValue, optional)
Add a key/value pair with key KEY and value VALUE if KEY does not exist
Copy (*FilterKeyValue, optional)
Copy a key/value pair with key KEY to COPIED_KEY if KEY exists AND COPIED_KEY does not exist
Hard_copy (*FilterKeyValue, optional)
Copy a key/value pair with key KEY to COPIED_KEY if KEY exists. If COPIED_KEY already exists, this field is overwritten
Hard_rename (*FilterKeyValue, optional)
Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists. If RENAMED_KEY already exists, this field is overwritten
Remove (*FilterKey, optional)
Remove a key/value pair with key KEY if it exists
Remove_regex (*FilterKey, optional)
Remove all key/value pairs with key matching regexp KEY
Remove_wildcard (*FilterKey, optional)
Remove all key/value pairs with key matching wildcard KEY
Rename (*FilterKeyValue, optional)
Rename a key/value pair with key KEY to RENAMED_KEY if KEY exists AND RENAMED_KEY does not exist
Set (*FilterKeyValue, optional)
Add a key/value pair with key KEY and value VALUE. If KEY already exists, this field is overwritten
FilterModifyCondition
FilterModifyCondition The Modify Filter plugin allows you to change records using rules and conditions.
storage.total_limit_size (string, optional)
Limit the maximum number of Chunks in the filesystem for the current output logical destination.
Tag (string, optional)
Time_as_Integer (bool, optional)
Workers (*int, optional)
Available in Logging operator version 4.4 and later. Enables dedicated thread(s) for this output. Default value (2) is set since version 1.8.13. For previous versions is 0.
Fluentd port inside the container (24240 by default). The headless service port is controlled by this field as well. Note that the default ClusterIP service port is always 24240, regardless of this field.
Available in Logging operator version 4.4 and later. Configurable resource requirements for the drainer sidecar container. Default 20m cpu request, 20M memory limit
allowClusterResourcesFromAllNamespaces (bool, optional)
Allow configuration of cluster resources from any namespace. Mutually exclusive with the ControlNamespace restriction of Cluster resources.
clusterDomain (*string, optional)
Cluster domain name to be used when templating URLs to services.
Default: “cluster.local.”
configCheck (ConfigCheck, optional)
ConfigCheck settings that apply to both fluentd and syslog-ng
controlNamespace (string, required)
Namespace for cluster wide configuration resources like ClusterFlow and ClusterOutput. This should be a protected namespace from regular users. Resources like fluentbit and fluentd will run in this namespace as well.
defaultFlow (*DefaultFlowSpec, optional)
Default flow for unmatched logs. This Flow configuration collects all logs that didn't match any other Flow.
enableRecreateWorkloadOnImmutableFieldChange (bool, optional)
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resources in the future) in case there is a change in an immutable field that otherwise couldn't be managed with a simple update.
errorOutputRef (string, optional)
GlobalOutput name to flush ERROR events to
flowConfigCheckDisabled (bool, optional)
Disable configuration check before applying new fluentd configuration.
flowConfigOverride (string, optional)
Override generated config. This is a raw configuration string for troubleshooting purposes.
fluentbit (*FluentbitSpec, optional)
FluentbitAgent daemonset configuration. Deprecated, will be removed with the next major version. Migrate to the standalone NodeAgent resource.
watchNamespaceSelector (*metav1.LabelSelector, optional)
WatchNamespaceSelector is a LabelSelector to find matching namespaces to watch as in WatchNamespaces.
watchNamespaces ([]string, optional)
Limit namespaces to watch Flow and Output custom resources.
ConfigCheck
labels (map[string]string, optional)
Labels to use for the configcheck pods on top of labels added by the operator by default. Default values can be overwritten.
strategy (ConfigCheckStrategy, optional)
Select the config check strategy to use. DryRun: Parse and validate configuration. StartWithTimeout: Start with given configuration and exit after specified timeout. Default: DryRun
timeoutSeconds (int, optional)
Configure timeout in seconds if strategy is StartWithTimeout
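A minimal sketch of a Logging resource that sets the config check strategy (resource name and namespace are illustrative assumptions):
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging
spec:
  controlNamespace: logging
  configCheck:
    strategy: StartWithTimeout
    timeoutSeconds: 10
EOF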
LoggingStatus
LoggingStatus defines the observed state of Logging
configCheckResults (map[string]bool, optional)
Result of the config check. Under normal conditions there is a single item in the map with a bool value.
fluentdConfigName (string, optional)
Available in Logging operator version 4.5 and later. Name of the matched detached fluentd configuration object.
problems ([]string, optional)
Problems with the logging resource
problemsCount (int, optional)
Count of problems for printcolumn
syslogNGConfigName (string, optional)
Available in Logging operator version 4.5 and later. Name of the matched detached SyslogNG configuration object.
watchNamespaces ([]string, optional)
List of namespaces that watchNamespaces + watchNamespaceSelector is resolving to. Not set means all namespaces.
Logging
Logging is the Schema for the loggings API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (LoggingSpec, optional)
status (LoggingStatus, optional)
LoggingList
LoggingList contains a list of Logging
(metav1.TypeMeta, required)
metadata (metav1.ListMeta, optional)
items ([]Logging, required)
DefaultFlowSpec
DefaultFlowSpec is a Flow for logs that did not match any other Flow
LoggingRouteSpec
LoggingRouteSpec defines the desired state of LoggingRoute
source (string, required)
Source identifies the logging that this policy applies to
targets (metav1.LabelSelector, required)
Targets refers to the list of logging resources specified by a label selector to forward logs to. Filtering of namespaces will happen based on the watchNamespaces and watchNamespaceSelector fields of the target logging resource.
LoggingRouteStatus
LoggingRouteStatus defines the actual state of the LoggingRoute
notices ([]string, optional)
Enumerate non-blocker issues the user should pay attention to
noticesCount (int, optional)
Summarize the number of notices for the CLI output
problems ([]string, optional)
Enumerate problems that prohibit this route from taking effect and populating the tenants field
problemsCount (int, optional)
Summarize the number of problems for the CLI output
tenants ([]Tenant, optional)
Enumerate all loggings with all the destination namespaces expanded
Tenant
name (string, required)
namespaces ([]string, optional)
LoggingRoute
LoggingRoute (experimental)
+Connects a log collector with log aggregators from other logging domains and routes relevant logs based on watch namespaces
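A hedged sketch of a LoggingRoute that forwards logs from a collector-only logging named ops to every logging that carries a tenant label (all names and the label key are illustrative assumptions):
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: LoggingRoute
metadata:
  name: tenants
spec:
  source: ops
  targets:
    matchExpressions:
      - key: tenant
        operator: Exists
EOF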
coroStackSize (int32, optional)
Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don't set too small a value (say 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. (default: 24576)
Default: 24576
customConfigSecret (string, optional)
daemonSet (*typeoverride.DaemonSet, optional)
disableKubernetesFilter (*bool, optional)
enableUpstream (*bool, optional)
enabled (*bool, optional)
extraVolumeMounts ([]*VolumeMount, optional)
filterAws (*FilterAws, optional)
filterKubernetes (FilterKubernetes, optional)
flush (int32, optional)
Set the flush time in seconds.nanoseconds. The engine loop uses a Flush timeout to define when it is required to flush the records ingested by input plugins through the defined output plugins. (default: 1)
Default: 1
forwardOptions (*ForwardOptions, optional)
grace (int32, optional)
Set the grace time in seconds as Integer value. The engine loop uses a Grace timeout to define wait time on exit (default: 5)
Default: 5
inputTail (InputTail, optional)
livenessDefaultCheck (*bool, optional)
Default: true
logLevel (string, optional)
Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g: if ‘debug’ is set, it will include error, warning, info and debug. Note that trace mode is only available if Fluent Bit was built with the WITH_TRACE option enabled. (default: info)
SyslogNGClusterFlow
SyslogNGClusterFlow is the Schema for the syslog-ng clusterflows API
(metav1.TypeMeta, required)
metadata (metav1.ObjectMeta, optional)
spec (SyslogNGClusterFlowSpec, optional)
status (SyslogNGFlowStatus, optional)
SyslogNGClusterFlowSpec
SyslogNGClusterFlowSpec is the Kubernetes spec for Flows
filters ([]SyslogNGFilter, optional)
globalOutputRefs ([]string, optional)
loggingRef (string, optional)
match (*SyslogNGMatch, optional)
outputMetrics ([]filter.MetricsProbe, optional)
Output metrics are applied before the log reaches the destination and contain output metadata like: name,namespace and scope. Scope shows whether the output is a local or global one. Available in Logging operator version 4.5 and later.
SyslogNGClusterFlowList
SyslogNGClusterFlowList contains a list of SyslogNGClusterFlow
SyslogNGFlowSpec
SyslogNGFlowSpec is the Kubernetes spec for SyslogNGFlows
filters ([]SyslogNGFilter, optional)
globalOutputRefs ([]string, optional)
localOutputRefs ([]string, optional)
loggingRef (string, optional)
match (*SyslogNGMatch, optional)
outputMetrics ([]filter.MetricsProbe, optional)
Output metrics are applied before the log reaches the destination and contain output metadata like: name,namespace and scope. Scope shows whether the output is a local or global one. Available in Logging operator version 4.5 and later.
Available in Logging operator version 4.5 and later.
Parses the date automatically from the timestamp registered by the container runtime.
Note: jsonKeyPrefix and jsonKeyDelim are respected.
It is disabled by default, but if enabled, the default settings parse the timestamp written by the container runtime and parsed by Fluent Bit using the cri or the docker parser.
format (*string, optional)
Default: “%FT%T.%f%z”
template (*string, optional)
Default(depending on JSONKeyPrefix): “${json.time}”
The Logging extensions part of the Logging operator solves the following problems:
+
Collect Kubernetes events to provide insight into what is happening inside a cluster, such as decisions made by the scheduler, or why some pods were evicted from the node.
Collect logs from the nodes like kubelet logs.
Collect logs from files on the nodes, for example, audit logs, or the systemd journal.
Collect logs from legacy application log files.
Starting with Logging operator version 3.17.0, logging-extensions are open source and part of Logging operator.
Features
The Logging operator handles the new features in the well-known way: it uses custom resources to access the features. This way, a simple kubectl apply with a particular parameter set initiates a new feature. Extensions support three different custom resource types:
+
Event-tailer listens for Kubernetes events and transmits their changes to stdout, so the Logging operator can process them.
+
Host-tailer tails custom files and transmits their changes to stdout. This way the Logging operator can process them.
+Kubernetes host tailer allows you to tail logs like kubelet, audit logs, or the systemd journal from the nodes.
+
Tailer-webhook is a different approach for the same problem: parsing legacy application’s log file. Instead of running a host-tailer instance on every node, tailer-webhook attaches a sidecar container to the pod, and reads the specified file(s).
Kubernetes events are objects that provide insight into what is happening inside a cluster, such as what decisions were made by the scheduler or why some pods were evicted from the node. Event tailer listens for Kubernetes events and transmits their changes to stdout, so the Logging operator can process them.
The operator handles this CR and generates the following required resources:
+
ServiceAccount: new account for event-tailer
ClusterRole: sets the event-tailer's roles
ClusterRoleBinding: links the account with the roles
ConfigMap: contains the configuration for the event-tailer pod
StatefulSet: manages the lifecycle of the event-tailer pod, which uses the banzaicloud/eventrouter:v0.1.0 image to tail events
Create event tailer
+
+
The simplest way to init an event-tailer is to create a new event-tailer resource with a name and controlNamespace field specified. The following command creates an event tailer called sample:
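A minimal sketch of such a resource, assuming the EventTailer custom resource in the logging-extensions.banzaicloud.io/v1alpha1 API group:
kubectl apply -f - <<EOF
apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: EventTailer
metadata:
  name: sample
spec:
  controlNamespace: default
EOF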
Check that the new object has been created by running:
kubectl get eventtailer
+
Expected output:
NAME AGE
+sample 22m
+
+
You can see the events in JSON format by checking the log of the event-tailer pod. This way Logging operator can collect the events, and handle them as any other log. Run:
kubectl logs -l app.kubernetes.io/instance=sample-event-tailer | head -1 | jq
+
Once you have an event-tailer, you can bind your events to a specific logging flow. The following example configures a flow to route the previously created sample-eventtailer to the sample-output.
kubectl apply -f - <<EOF
+apiVersion: logging.banzaicloud.io/v1beta1
+kind: Flow
+metadata:
+ name: eventtailer-flow
+ namespace: default
+spec:
+ filters:
+ - tag_normaliser: {}
+ match:
+ # keeps data matching to label, the rest of the data will be discarded by this flow implicitly
+ - select:
+ labels:
+ app.kubernetes.io/name: sample-event-tailer
+ outputRefs:
+ - sample-output
+EOF
+
Delete event tailer
To remove an unwanted tailer, delete the related event-tailer custom resource. This terminates the event-tailer pod. For example, run the following command to delete the event tailer called sample:
kubectl delete eventtailer sample && kubectl get pod
+
Expected output:
eventtailer.logging-extensions.banzaicloud.io "sample" deleted
+NAME READY STATUS RESTARTS AGE
+sample-event-tailer-0 1/1 Terminating 0 12s
+
Persist event logs
Event-tailer supports persist mode. In this case, the logs generated from events are stored on a persistent volume. Add the following configuration to your event-tailer spec. In this example, the event tailer is called sample:
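A hedged sketch of such a spec, assuming the EventTailer supports a positionVolume backed by a persistent volume claim (the PVC parameters are illustrative and match the 1Gi/ReadWriteOnce volume shown in the output below):
kubectl apply -f - <<EOF
apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: EventTailer
metadata:
  name: sample
spec:
  controlNamespace: default
  positionVolume:
    pvc:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        volumeMode: Filesystem
EOF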
Logging operator manages the persistent volume of event-tailer automatically, you don’t have any further task with it. To check that the persistent volume has been created, run:
kubectl get pvc && kubectl get pv
+
The output should be similar to:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+sample-event-tailer-sample-event-tailer-0 Bound pvc-6af02cb2-3a62-4d24-8201-dc749034651e 1Gi RWO standard 43s
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+pvc-6af02cb2-3a62-4d24-8201-dc749034651e 1Gi RWO Delete Bound default/sample-event-tailer-sample-event-tailer-0 standard 42s
+
When an application (mostly a legacy program) is not logging in a Kubernetes-native way, the Logging operator cannot process its logs. (For example, an old application does not send its logs to stdout, but uses log files instead.) File-tailer helps to solve this problem: it configures Fluent Bit to tail the given file(s) and send the logs to stdout, to implement Kubernetes-native logging.
However, file-tailer cannot access the pod's local directory, so the log files need to be written to a mounted volume.
Let’s assume the following code represents a legacy application that generates logs into the /legacy-logs/date.log file. While the legacy-logs directory is mounted, it’s accessible from other pods by mounting the same volume.
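A hedged sketch of such a pod: it writes the current date to /legacy-logs/date.log every second and uses a hostPath volume so that the file is also visible on the node (the image and the volume choice are illustrative assumptions):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: sample-container
      image: debian
      command: ["/bin/sh", "-c"]
      # append the current date to the legacy log file every second
      args:
        - while true; do date >> /legacy-logs/date.log; sleep 1; done
      volumeMounts:
        - name: legacy-logs
          mountPath: /legacy-logs
  volumes:
    - name: legacy-logs
      hostPath:
        path: /legacy-logs
EOF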
The Logging operator configures the environment and starts a file-tailer pod. It can also deal with multi-node clusters, since it starts the host-tailer pods through a daemonset.
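A hedged sketch of the corresponding HostTailer resource, assuming the fileTailers field of the logging-extensions.banzaicloud.io/v1alpha1 API (the resource name matches the pod name shown below):
kubectl apply -f - <<EOF
apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: HostTailer
metadata:
  name: file-hosttailer-sample
spec:
  fileTailers:
    - name: sample-file
      path: /legacy-logs/date.log
      disabled: false
EOF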
Check the created file tailer pod:
kubectl get pod
+
The output should be similar to:
NAME READY STATUS RESTARTS AGE
+file-hosttailer-sample-host-tailer-5tqhv 1/1 Running 0 117s
+test-pod 1/1 Running 0 5m40s
+
Check the logs of the file-tailer pod. You will see the logfile's content on stdout. This way the Logging operator can process those logs as well.
Filter to select the systemd unit. Example: kubelet.service
+
maxEntries (int, optional)
Maximum entries to read when starting to tail logs to avoid high pressure.
containerOverrides (*types.ContainerBase, optional)
Override container fields for the given tailer.
Example: Configure logging Flow to route logs from a host tailer
The following example uses the flow's match term to listen to the logs of the previously created file-hosttailer-sample HostTailer.
kubectl apply -f - <<EOF
+apiVersion: logging.banzaicloud.io/v1beta1
+kind: Flow
+metadata:
+ name: hosttailer-flow
+ namespace: default
+spec:
+ filters:
+ - tag_normaliser: {}
+ # keeps data matching to label, the rest of the data will be discarded by this flow implicitly
+ match:
+ - select:
+ labels:
+ app.kubernetes.io/name: file-hosttailer-sample
+ # there might be a need to match on container name too (in case of multiple containers)
+ container_names:
+ - nginx-access
+ outputRefs:
+ - sample-output
+EOF
+
Example: Kubernetes host tailer with multiple tailers
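The following is only a hedged sketch of a HostTailer that combines a file tailer with a systemd tailer; the systemdTailers field and its sub-fields are assumptions based on the systemd filter and maxEntries options listed above:
kubectl apply -f - <<EOF
apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: HostTailer
metadata:
  name: multi-sample
spec:
  fileTailers:
    - name: sample-file
      path: /var/log/date.log
  systemdTailers:
    - name: kubelet-tailer
      systemdFilter: kubelet.service
      maxEntries: 100
EOF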
EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resource in the future) in case there is a change in an immutable field that otherwise couldn’t be managed with a simple update.
+
workloadMetaOverrides (*types.MetaBase, optional)
Override metadata of the created resources.
workloadOverrides (*types.PodSpecBase, optional)
Override podSpec fields for the given daemonset.
Advanced configuration overrides
MetaBase
annotations (map[string]string, optional)
labels (map[string]string, optional)
PodSpecBase
tolerations ([]corev1.Toleration, optional)
nodeSelector (map[string]string, optional)
serviceAccountName (string, optional)
affinity (*corev1.Affinity, optional)
securityContext (*corev1.PodSecurityContext, optional)
volumes ([]corev1.Volume, optional)
priorityClassName (string, optional)
ContainerBase
resources (*corev1.ResourceRequirements, optional)
image (string, optional)
pullPolicy (corev1.PullPolicy, optional)
command ([]string, optional)
volumeMounts ([]corev1.VolumeMount, optional)
securityContext (*corev1.SecurityContext, optional)
3 - Tail logfiles with a webhook
The tailer-webhook is a different approach for the same problem: parsing a legacy application's log files. As an alternative to using a host file tailer service, you can use a file tailer webhook service.
While the containers of the host file tailers run in a separate pod, the file tailer webhook uses a different approach: if a pod has a specific annotation, the webhook injects a sidecar container for every tailed file into the pod.
The tailer-webhook behaves differently compared to the host-tailer:
Pros:
+
A simple annotation on the pod initiates the file tailing.
There is no need to use mounted volumes, Logging operator will manage the volumes and mounts between your containers.
Cons:
+
You need to start the Logging operator with the webhook service enabled. This requires additional configuration, especially for certificates, since webhook services are allowed over TLS only.
Possibly uses more resources, since every tailed file attaches a new sidecar container to the pod.
Enable webhooks in Logging operator
+
We recommend using cert-manager to manage your certificates. Below is a simple way to bootstrap the required certificate resources for the tailer-webhook.
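For example, a self-signed Issuer and a Certificate can create the webhook-tls secret in the logging namespace (a sketch only; the resource names and DNS name are assumptions that match the logging-webhooks service and the inject-ca-from annotation mentioned below):
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: logging
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: webhook-tls
  namespace: logging
spec:
  secretName: webhook-tls
  dnsNames:
    - logging-webhooks.logging.svc
  issuerRef:
    name: selfsigned-issuer
EOF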
Alternatively, instead of using the values.yaml file, you can run the installation from command line also by passing the values with the set and set-string parameters:
You also need a service that points to the webhook port (9443) of the Logging operator, and that the mutatingwebhookconfiguration will point to. Running the following in a shell creates the required service:
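A sketch of such a service, assuming the operator pods carry the app.kubernetes.io/name: logging-operator label (adjust the selector and namespace to your deployment):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: logging-webhooks
  namespace: logging
spec:
  selector:
    app.kubernetes.io/name: logging-operator
  ports:
    - name: webhook
      port: 443
      targetPort: 9443
EOF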
Furthermore, you need to tell Kubernetes to send admission requests to our webhook service. To do that, create a mutatingwebhookconfiguration Kubernetes resource, and:
+
Set the configuration to call /tailer-webhook path on your logging-webhooks service when v1.Pod is created.
Set failurePolicy to ignore, which means that the original pod will be created on webhook errors.
Set sideEffects to none, because we won’t cause any out-of-band changes in Kubernetes.
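Putting the points above together, a hedged sketch of such a configuration could look like the following; instead of pasting a base64-encoded caBundle it relies on the cert-manager annotation discussed below (the webhook and resource names are illustrative assumptions):
kubectl apply -f - <<EOF
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: tailer-webhook
  annotations:
    cert-manager.io/inject-ca-from: logging/webhook-tls
webhooks:
  - name: tailer-webhook.logging.banzaicloud.io
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore
    clientConfig:
      service:
        name: logging-webhooks
        namespace: logging
        path: /tailer-webhook
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
EOF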
Unfortunately, mutatingwebhookconfiguration requires the caBundle field to be filled because we used a self-signed certificate, and the certificate cannot be validated through the system trust roots. If your certificate was generated with a system trust root CA, remove the caBundle line, because the certificate will be validated automatically.
+There are more sophisticated ways to load the CA into this field, but this solution requires no further components.
+
For example: you can inject the CA with a simple cert-manager cert-manager.io/inject-ca-from: logging/webhook-tls annotation on the mutatingwebhookconfiguration resource.
Note: If the pod with the sidecar annotation is in the default namespace, Logging operator handles tailer-webhook annotations clusterwide. To restrict the webhook callbacks to the current namespace, change the scope of the mutatingwebhookconfiguration to namespaced.
File tailer example
The following example creates a pod that runs a shell in an infinite loop, appending the date command's output to a file every second. The annotation sidecar.logging-extensions.banzaicloud.io/tail notifies the Logging operator to attach a sidecar container to the pod. The sidecar tails the /legacy-logs/date.log file and sends its output to stdout.
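A hedged sketch of such a pod (the image is an illustrative assumption; the annotation value names the file to tail, and the webhook manages the required volumes):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  annotations:
    sidecar.logging-extensions.banzaicloud.io/tail: /legacy-logs/date.log
spec:
  containers:
    - name: sample-container
      image: debian
      command: ["/bin/sh", "-c"]
      # write the current date into the tailed file every second
      args:
        - while true; do date >> /legacy-logs/date.log; sleep 1; done
EOF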
After you have created the pod with the required annotation, make sure that the test-pod contains two containers by running kubectl get pod
Expected output:
NAME READY STATUS RESTARTS AGE
+test-pod 2/2 Running 0 29m
+
Check the container names in the pod to see that the Logging operator has created the sidecar container called legacy-logs-date-log. The sidecar containers’ name is always built from the path and name of the tailed file. Run the following command:
kubectl get pod test-pod -o json | jq '.spec.containers | map(.name)'
+
Check the logs of the test container. Since it writes the logs into a file, it does not produce any logs on stdout.
kubectl logs test-pod sample-container; echo $?
+
Expected output:
0
+
Check the logs of the legacy-logs-date-log container. This container exposes the logs of the test container on its stdout.
kubectl logs test-pod legacy-logs-date-log
+
Expected output:
Fluent Bit v1.9.5
+* Copyright (C) 2015-2022 The Fluent Bit Authors
+* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
+* https://fluentbit.io
+
+[2022/09/15 11:26:11][ info][fluent bit]version=1.9.5, commit=9ec43447b6, pid=1
+[2022/09/15 11:26:11][ info][storage]version=1.2.0, type=memory-only, sync=normal, checksum=disabled, max_chunks_up=128
+[2022/09/15 11:26:11][ info][cmetrics]version=0.3.4
+[2022/09/15 11:26:11][ info][sp] stream processor started
+[2022/09/15 11:26:11][ info][input:tail:tail.0] inotify_fs_add(): inode=938627watch_fd=1name=/legacy-logs/date.log
+[2022/09/15 11:26:11][ info][output:file:file.0] worker #0 started
+Thu Sep 15 11:26:11 UTC 2022
+Thu Sep 15 11:26:12 UTC 2022
+...
+
Multi-container pods
In some cases you have multiple containers in your pod and you want to distinguish which file annotation belongs to which container. You can assign every file annotation to a particular container by prefixing the annotation with a ${ContainerName}: container key. For example:
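A hedged sketch of such an annotation, tailing different files from two containers (container and file names are illustrative):
metadata:
  annotations:
    sidecar.logging-extensions.banzaicloud.io/tail: "sample-container:/legacy-logs/date.log,another-container:/var/log/mycustomfile"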
Flow and ClusterFlow
+
Flows route the selected log messages to the specified outputs. Depending on which log forwarder you use, you can use different filters and outputs, and have to configure different custom resources.
Fluentd flows
Flow defines a logging flow for Fluentd with filters and outputs.
The Flow is a namespaced resource, so only logs from the same namespace are collected. You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. (Match statements are evaluated in the order they are defined and processed only until the first matching select or exclude rule applies.) For detailed examples on using the match statement, see log routing.
You can define one or more filters within a Flow. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records.
The filters in the flow are applied in the order of their definition. You can find the list of supported filters here.
At the end of the Flow, you can attach one or more outputs, which may also be Output or ClusterOutput resources.
+
Flow resources are namespaced, the selector only selects Pod logs within the namespace.
ClusterFlow defines a Flow without namespace restrictions. It is also only effective in the controlNamespace.
ClusterFlow selects logs from ALL namespaces.
The following example transforms the log messages from the default namespace and sends them to an S3 output.
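A hedged sketch of such a Flow (the parser settings, the label selector, and the s3-output name are illustrative assumptions; the referenced output must exist as an Output or ClusterOutput):
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: flow-sample
  namespace: default
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        remove_key_name_field: true
        reserve_data: true
        parse:
          type: nginx
  match:
    - select:
        labels:
          app: nginx
  localOutputRefs:
    - s3-output
EOF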
Note: In a multi-cluster setup you cannot easily determine which cluster the logs come from. You can append your own labels to each log
+using the record modifier filter.
+
For the details of Flow custom resource, see FlowSpec.
For the details of ClusterFlow custom resource, see ClusterFlow.
syslog-ng flows
SyslogNGFlow defines a logging flow for syslog-ng with filters and outputs.
syslog-ng is supported only in Logging operator 4.0 or newer.
The Flow is a namespaced resource, so only logs from the same namespaces are collected. You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. For detailed examples on using the match statement, see log routing with syslog-ng.
You can define one or more filters within a Flow. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records.
The filters in the flow are applied in the order of their definition. You can find the list of supported filters here.
At the end of the Flow, you can attach one or more outputs, which may also be Output or ClusterOutput resources.
SyslogNGFlow resources are namespaced: the selector only selects Pod logs within the same namespace.
SyslogNGClusterFlow defines a SyslogNGFlow without namespace restrictions. It is also only effective in the controlNamespace.
SyslogNGClusterFlow selects logs from ALL namespaces.
The following example selects only messages sent by the log-generator application and forwards them to a syslog output.
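A minimal sketch of such a SyslogNGFlow (the label key and the syslog-output SyslogNGOutput name are illustrative):

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGFlow
metadata:
  name: log-generator-flow
  namespace: default
spec:
  match:
    regexp:
      # Select only messages coming from pods labelled as the log-generator app.
      value: json.kubernetes.labels.app.kubernetes.io/name
      pattern: log-generator
      type: string
  localOutputRefs:
    - syslog-output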
Which log forwarder to use
The Logging operator supports Fluentd and syslog-ng (via the AxoSyslog syslog-ng distribution) as log forwarders. The log forwarder instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. Which one to use depends on your logging requirements.
The following points help you decide which forwarder to use.
+
The forwarders support different outputs. If the output you want to use is supported only by one forwarder, use that.
If the volume of incoming log messages is high, use syslog-ng, as its multithreaded processing provides higher performance.
If you have lots of logging flows or need complex routing or log message processing, use syslog-ng.
+
Note: Depending on which log forwarder you use, some of the CRDs you have to create and configure are different.
syslog-ng is supported only in Logging operator 4.0 or newer.
Configure log routing
+
You can configure the various features and parameters of the Logging operator using Custom Resource Definitions (CRDs).
The Logging operator manages the log collectors and log forwarders of your logging infrastructure, and the routing rules that specify where you want to send your different log messages.
The log collectors are endpoint agents that collect the logs of your Kubernetes nodes and send them to the log forwarders. Logging operator currently uses Fluent Bit as log collector agents.
The log forwarder (also called log aggregator) instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. The Logging operator supports Fluentd and syslog-ng as log forwarders. Which log forwarder is best for you depends on your logging requirements. For tips, see Which log forwarder to use.
You can filter and process the incoming log messages using the flow custom resource of the log forwarder to route them to the appropriate output. The outputs are the destinations where you want to send your log messages, for example, Elasticsearch, or an Amazon S3 bucket. You can also define cluster-wide outputs and flows, for example, to use a centralized output that namespaced users can reference but cannot modify. Note that flows and outputs are specific to the type of log forwarder you use (Fluentd or syslog-ng).
You can configure the Logging operator using the following Custom Resource Definitions.
+
logging - The logging resource defines the logging infrastructure (the log collectors and forwarders) for your cluster that collects and transports your log messages. It can also contain configurations for Fluent Bit, Fluentd, and syslog-ng. (Starting with Logging operator version 4.5, you can also configure Fluent Bit, Fluentd, and syslog-ng as separate resources.)
CRDs for Fluentd:
+
+
output - Defines a Fluentd Output for a logging flow, where the log messages are sent using Fluentd. This is a namespaced resource. See also clusteroutput. To configure syslog-ng outputs, see SyslogNGOutput.
flow - Defines a Fluentd logging flow using filters and outputs. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also clusterflow. To configure syslog-ng flows, see SyslogNGFlow.
clusteroutput - Defines a Fluentd output that is available from all flows and clusterflows. The operator evaluates clusteroutputs in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true.
clusterflow - Defines a Fluentd logging flow that collects logs from all namespaces by default. The operator evaluates clusterflows in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true. To configure syslog-ng clusterflows, see SyslogNGClusterFlow.
CRDs for syslog-ng (these resources work like their Fluentd counterparts, but are tailored to the features available via syslog-ng):
+
+
SyslogNGOutput - Defines a syslog-ng Output for a logging flow, where the log messages are sent using syslog-ng. This is a namespaced resource. See also SyslogNGClusterOutput. To configure Fluentd outputs, see output.
SyslogNGFlow - Defines a syslog-ng logging flow using filters and outputs. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also SyslogNGClusterFlow. To configure Fluentd flows, see flow.
SyslogNGClusterOutput - Defines a syslog-ng output that is available from all flows and clusterflows. The operator evaluates clusteroutputs in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true.
SyslogNGClusterFlow - Defines a syslog-ng logging flow that collects logs from all namespaces by default. The operator evaluates clusterflows in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true. To configure Fluentd clusterflows, see clusterflow.
The following sections show examples on configuring the various components to configure outputs and to filter and route your log messages to these outputs. For a list of available CRDs, see Custom Resource Definitions.
syslog-ng is supported only in Logging operator 4.0 or newer.
The first step to process your logs is to select which logs go where.
The match field of the SyslogNGFlow and SyslogNGClusterFlow resources defines the routing rules of the logs.
+
Note: Fluentd can use only metadata to route the logs. When using syslog-ng filter expressions, you can filter on both metadata and log content.
The syntax of syslog-ng match statements is slightly different from that of Fluentd match statements.
Available routing metadata keys:
Name              Type                Description                               Empty
namespaces        []string            List of matching namespaces               All namespaces
labels            map[string]string   Key-value pairs of labels                 All labels
hosts             []string            List of matching hosts                    All hosts
container_names   []string            List of matching containers (not Pods)   All containers
Match statement
Match expressions select messages by applying patterns on the content or metadata of the messages. You can use simple string matching, and also complex regular expressions. You can combine matches using the and, or, and not boolean operators to create complex expressions to select or exclude messages as needed for your use case.
Currently, only a pattern matching function is supported (called match in syslog-ng parlance, but renamed to regexp in the CRD to avoid confusion).
The match field can have one of the following options:
regexp: A pattern that matches the value of a field or a templated value. For example:

  match:
    regexp: <parameters>

and: Combines the nested match expressions with the logical AND operator.

  match:
    and: <list of nested match expressions>

or: Combines the nested match expressions with the logical OR operator.

  match:
    or: <list of nested match expressions>

not: Matches the logical NOT of the nested match expressions with the logical AND operator.

  match:
    not: <list of nested match expressions>
regexp patterns
The regexp field (called match in syslog-ng parlance, but renamed to regexp in the CRD to avoid confusion) defines the pattern that selects the matching messages. You can do two different kinds of matching:
+
Find a pattern in the value of a field of the messages, for example, to select the messages of a specific application. To do that, set the pattern and value fields (and optionally the type and flags fields).
Find a pattern in a template expression created from multiple fields of the message. To do that, set the pattern and template fields (and optionally the type and flags fields).
+
CAUTION:
You need to use the json. prefix in field names.
+
You can reference fields using the dot notation. For example, if the log contains {"kubernetes": {"namespace_name": "default"}}, then you can reference the namespace_name field using json.kubernetes.namespace_name.
The following example filters for specific Pod labels:
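A sketch of such a match statement (inside a SyslogNGFlow spec; the label keys and values are illustrative), combining two regexp expressions with the and operator:

match:
  and:
    - regexp:
        # Match the Helm release instance label of the pod.
        value: json.kubernetes.labels.app.kubernetes.io/instance
        pattern: my-log-generator
        type: string
    - regexp:
        # Match the application name label of the pod.
        value: json.kubernetes.labels.app.kubernetes.io/name
        pattern: log-generator
        type: string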
The regexp field can have the following parameters:
pattern (string)
Defines the pattern to match against the messages. The type field determines how the pattern is interpreted (for example, string or regular expression).
value (string)
References a field of the message. The pattern is applied to the value of this field. If the value field is set, you cannot use the template field.
+
CAUTION:
You need to use the json. prefix in field names.
+
You can reference fields using the dot notation. For example, if the log contains {"kubernetes": {"namespace_name": "default"}}, then you can reference the namespace_name field using json.kubernetes.namespace_name.
template (string)
Specifies a template expression that combines fields. The pattern is matched against the value of these combined fields. If the template field is set, you cannot use the value field. For details on template expressions, see the syslog-ng documentation.
type (string)
Specifies how the pattern is interpreted. For details, see Types of regexp.
flags (list)
Specifies flags for the type field.
regexp types
By default, syslog-ng uses PCRE-style regular expressions. Since evaluating complex regular expressions can greatly increase CPU usage and is not always needed, you can use the following expression types:
pcre
Description: Use Perl Compatible Regular Expressions (PCRE). If the type() parameter is not specified, syslog-ng uses PCRE regular expressions by default.
pcre flags
PCRE regular expressions have the following flag options:
global: Usable only in rewrite rules: match for every occurrence of the expression, not only the first one.
+
ignore-case: Disable case-sensitivity.
+
newline: When configured, it changes the newline definition used in PCRE regular expressions to accept either of the following:
+
a single carriage-return
linefeed
the sequence carriage-return and linefeed (\r, \n and \r\n, respectively)
This newline definition is used when the circumflex and dollar patterns (^ and $) are matched against an input. By default, PCRE interprets the linefeed character as indicating the end of a line. It does not affect the \r, \n or \R characters used in patterns.
+
store-matches: Store the matches of the regular expression into the $0, … $255 variables. The $0 stores the entire match, $1 is the first group of the match (parentheses), and so on. Named matches (also called named subpatterns), for example (?<name>...), are stored as well. Matches from the last filter expression can be referenced in regular expressions.
+
unicode: Use Unicode support for UTF-8 matches. UTF-8 character sequences are handled as single characters.
string
Description: Match the strings literally, without regular expression support. By default, only identical strings are matched. For partial matches, use the flags: prefix or flags: substring flags. For example, consider the three patterns sketched below.
The first matches only the exact log-generator string.
The second matches labels beginning with log-generator, for example, log-generator-1.
The third one matches labels that contain the log-generator string, for example, my-log-generator.
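A sketch of the three patterns, using an illustrative json.kubernetes.labels.app field:

# 1. Exact match: matches only the log-generator label value.
regexp:
  value: json.kubernetes.labels.app
  pattern: log-generator
  type: string

# 2. Prefix match (flags: prefix): matches label values beginning with log-generator.
regexp:
  value: json.kubernetes.labels.app
  pattern: log-generator
  type: string
  flags:
    - prefix

# 3. Substring match (flags: substring): matches label values containing log-generator.
regexp:
  value: json.kubernetes.labels.app
  pattern: log-generator
  type: string
  flags:
    - substring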
string flags
Literal string searches have the following flags() options:
+
+
global: Usable only in rewrite rules, match for every occurrence of the expression, not only the first one.
+
ignore-case: Disables case-sensitivity.
+
prefix: During the matching process, patterns (also called search expressions) are matched against the input string starting from the beginning of the input string, and the input string is matched only for the maximum character length of the pattern. The initial characters of the pattern and the input string must be identical in the exact same order, and the pattern’s length is definitive for the matching process (that is, if the pattern is longer than the input string, the match will fail).
For example, for the input string exam:
+
the following patterns will match:
+
+
ex (the pattern contains the initial characters of the input string in the exact same order)
exam (the pattern is an exact match for the input string)
the following patterns will not match:
+
+
example (the pattern is longer than the input string)
hexameter (the pattern’s initial characters do not match the input string’s characters in the exact same order, and the pattern is longer than the input string)
+
store-matches: Stores the matches of the regular expression into the $0, … $255 variables. The $0 stores the entire match, $1 is the first group of the match (parentheses), and so on. Named matches (also called named subpatterns), for example, (?<name>...), are stored as well. Matches from the last filter expression can be referenced in regular expressions.
+
NOTE: To convert match variables into a syslog-ng list, use the $* macro, which can be further manipulated using List manipulation, or turned into a list in type-aware destinations.
+
substring: The given literal string will match when the pattern is found within the input. Unlike flags: prefix, the pattern does not have to be identical with the given literal string.
glob
Description: Match the strings against a pattern containing ‘*’ and ‘?’ wildcards, without regular expression and character range support. The advantage of glob patterns over regular expressions is that globs can be processed much faster.
+
*: matches an arbitrary string, including an empty string
?: matches an arbitrary character
+
NOTE:
+
The wildcards can match the / character.
You cannot use the * and ? characters literally in the pattern.
Glob patterns cannot have any flags.
Examples
Select all logs
To select all logs, or if you only want to exclude some logs but retain others, you need an empty select statement, as shown in the following sketch.
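A minimal sketch, assuming a Fluentd Flow (select and exclude statements belong to the Fluentd match syntax; the my-output Output name is illustrative):

apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: all-logs
  namespace: default
spec:
  match:
    # An empty select matches every log record in the namespace.
    - select: {}
  localOutputRefs:
    - my-output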
Routing your logs with Fluentd match directives
+
+
Note: This page describes routing logs with Fluentd. If you are using syslog-ng to route your log messages, see Routing your logs with syslog-ng.
The first step to process your logs is to select which logs go where. The Logging operator uses Kubernetes labels, namespaces and other metadata to separate different log flows.
Available routing metadata keys:
Name              Type                Description                               Empty
namespaces        []string            List of matching namespaces               All namespaces
labels            map[string]string   Key-value pairs of labels                 All labels
hosts             []string            List of matching hosts                    All hosts
container_names   []string            List of matching containers (not Pods)   All containers
Match statement
To select or exclude logs you can use the match statement. Match is a collection of select and exclude expressions. In both expressions you can use the labels attribute to filter for the Pod's labels. Moreover, in a ClusterFlow you can use namespaces as selecting or excluding criteria.
If you specify more than one label in a select or exclude expression, the labels have a logical AND connection between them. For example, an exclude expression with two labels excludes messages that have both labels. If you want an OR connection between labels, list them in separate expressions. For example, to exclude messages that have one of two specified labels, create a separate exclude expression for each label.
The select and exclude statements are evaluated in order!
Without at least one select criterion, no messages will be selected!
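For example, the following sketch (the label values and the my-output Output name are illustrative) first excludes the logs of a sidecar container, then selects the remaining logs of the application:

apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: flow-sample
  namespace: default
spec:
  match:
    # Evaluated first: drop logs coming from the "sidecar" container of the app.
    - exclude:
        labels:
          app.kubernetes.io/name: log-generator
        container_names:
          - sidecar
    # Evaluated next: keep every other log of the app.
    - select:
        labels:
          app.kubernetes.io/name: log-generator
  localOutputRefs:
    - my-output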
Output and ClusterOutput
+
Outputs are the destinations where your log forwarder sends the log messages, for example, to Sumo Logic, or to a file. Depending on which log forwarder you use, you have to configure different custom resources.
Fluentd outputs
+
The Output resource defines an output where your Fluentd Flows can send the log messages. The output is a namespaced resource which means only a Flow within the same namespace can access it. You can use secrets in these definitions, but they must also be in the same namespace.
+Outputs are the final stage for a logging flow. You can define multiple outputs and attach them to multiple flows.
ClusterOutput defines an Output without namespace restrictions. It is only evaluated in the controlNamespace by default unless allowClusterResourcesFromAllNamespaces is set to true.
+
Note: Flow can be connected to Output and ClusterOutput, but ClusterFlow can be attached only to ClusterOutput.
+
For the details of the supported output plugins, see Fluentd outputs.
For the details of Output custom resource, see OutputSpec.
For the details of ClusterOutput custom resource, see ClusterOutput.
Fluentd S3 output example
The following snippet defines an Amazon S3 bucket as an output.
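A minimal sketch of such an Output (the bucket, region, path, and secret names are illustrative):

apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output
  namespace: default
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsSecretAccessKey
    s3_bucket: example-logging-bucket
    s3_region: eu-west-1
    path: logs/${tag}/%Y/%m/%d/
    buffer:
      timekey: 10m
      timekey_wait: 1m
      timekey_use_utc: true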
syslog-ng outputs
The SyslogNGOutput resource defines an output for syslog-ng where your SyslogNGFlows can send the log messages. The output is a namespaced resource which means only a SyslogNGFlow within the same namespace can access it. You can use secrets in these definitions, but they must also be in the same namespace.
Outputs are the final stage for a logging flow. You can define multiple SyslogNGOutputs and attach them to multiple SyslogNGFlows.
SyslogNGClusterOutput defines a SyslogNGOutput without namespace restrictions. It is only evaluated in the controlNamespace by default unless allowClusterResourcesFromAllNamespaces is set to true.
+
Note: SyslogNGFlow can be connected to SyslogNGOutput and SyslogNGClusterOutput, but SyslogNGClusterFlow can be attached only to SyslogNGClusterOutput.
RFC5424 syslog-ng output example
The following example defines a simple SyslogNGOutput resource that sends the logs to the specified syslog server using the RFC5424 Syslog protocol in a TLS-encrypted connection.
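A minimal sketch of such a SyslogNGOutput (the server address, port, and the TLS secret name are illustrative assumptions):

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: syslog-output
  namespace: default
spec:
  syslog:
    host: 10.12.34.56        # address of the receiving syslog server
    port: 601
    transport: tls
    tls:
      ca_file:
        mountFrom:
          secretKeyRef:
            name: syslog-tls-cert
            key: ca.crt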
Allow anonymous sources. Client sections are required if disabled.
self_hostname (string, required)
Hostname
shared_key (string, required)
Shared key for authentication.
user_auth (bool, optional)
If true, use user based authentication.
+
2 - Transport
Transport
ca_cert_path (string, optional)
Specify private CA contained path
ca_path (string, optional)
Specify path to CA certificate file
ca_private_key_passphrase (string, optional)
private CA private key passphrase contained path
ca_private_key_path (string, optional)
private CA private key contained path
cert_path (string, optional)
Specify path to Certificate file
ciphers (string, optional)
Ciphers Default: “ALL:!aNULL:!eNULL:!SSLv2”
client_cert_auth (bool, optional)
When this is set Fluentd will check all incoming HTTPS requests for a client certificate signed by the trusted CA, requests that don’t supply a valid client certificate will fail.
insecure (bool, optional)
Allow insecure connections when using TLS. Default: false
private_key_passphrase (string, optional)
public CA private key passphrase contained path
private_key_path (string, optional)
Specify path to private Key file
protocol (string, optional)
Protocol Default: :tcp
version (string, optional)
Version Default: ‘TLSv1_2’
+
3 - Fluentd filters
You can use the following Fluentd filters in your Flow and ClusterFlow CRDs.
Fluentd Filter plugin to fetch several metadata for a Pod
Configuration
EnhanceK8s
api_groups ([]string, optional)
Kubernetes resources api groups
Default: ["apps/v1", "extensions/v1beta1"]
bearer_token_file (string, optional)
Bearer token path
Default: nil
ca_file (secret.Secret, optional)
Kubernetes API CA file
Default: nil
cache_refresh (int, optional)
Cache refresh
Default: 60*60
cache_refresh_variation (int, optional)
Cache refresh variation
Default: 60*15
cache_size (int, optional)
Cache size
Default: 1000
cache_ttl (int, optional)
Cache TTL
Default: 60*60*2
client_cert (secret.Secret, optional)
Kubernetes API Client certificate
Default: nil
client_key (secret.Secret, optional)
Kubernetes API Client certificate key
Default: nil
core_api_versions ([]string, optional)
Kubernetes core API version (for different Kubernetes versions)
Default: [‘v1’]
data_type (string, optional)
Sumo Logic data type
Default: metrics
in_namespace_path ([]string, optional)
parameters for read/write record
Default: ['$.namespace']
in_pod_path ([]string, optional)
Default: ['$.pod','$.pod_name']
kubernetes_url (string, optional)
Kubernetes API URL
Default: nil
ssl_partial_chain (*bool, optional)
If ca_file is for an intermediate CA, or otherwise we do not have the root CA and want to trust the intermediate CA certs we do have, set this to true - this corresponds to the openssl s_client -partial_chain flag and X509_V_FLAG_PARTIAL_CHAIN
This filter plugin consumes a log stream of JSON objects which contain single-line log messages. If a consecutive sequence of log messages form an exception stack trace, they are forwarded as a single, combined JSON object. Otherwise, the input log data is forwarded as is. More info at https://github.com/GoogleCloudPlatform/fluent-plugin-detect-exceptions
+
Note: As Tag management is not supported yet, this Plugin is mutually exclusive with Tag normaliser
Fluentd Filter plugin to add information about geographical location of IP addresses with Maxmind GeoIP databases.
+More information at https://github.com/y-ken/fluent-plugin-geoip
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: demo-flow
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        remove_key_name_field: true
        reserve_data: true
        parse:
          type: nginx
    - prometheus:
        metrics:
          - name: total_counter
            desc: The total number of foo in message.
            type: counter
            labels:
              foo: bar
        labels:
          host: ${hostname}
          tag: ${tag}
          namespace: $.kubernetes.namespace
  selectors: {}
  localOutputRefs:
    - demo-output
Fluentd config result:
<filter **>
  @type prometheus
  @id logging-demo-flow_2_prometheus
  <metric>
    desc The total number of foo in message.
    name total_counter
    type counter
    <labels>
      foo bar
    </labels>
  </metric>
  <labels>
    host ${hostname}
    namespace $.kubernetes.namespace
    tag ${tag}
  </labels>
</filter>
A filter plugin to throttle logs. Logs are grouped by a configurable key. When a group exceeds a configured rate, logs are dropped for this group.
Configuration
Throttle
group_bucket_limit (int, optional)
Maximum number logs allowed per groups over the period of group_bucket_period_s
Default: 6000
group_bucket_period_s (int, optional)
This is the period of time over which group_bucket_limit applies
Default: 60
group_drop_logs (bool, optional)
When a group reaches its limit, logs will be dropped from further processing if this value is true
Default: true
group_key (string, optional)
Used to group logs. Groups are rate limited independently
Default: kubernetes.container_name
group_reset_rate_s (int, optional)
After a group has exceeded its bucket limit, logs are dropped until the rate per second falls below or equal to group_reset_rate_s.
Default: group_bucket_limit/group_bucket_period_s
group_warning_delay_s (int, optional)
When a group reaches its limit and as long as it is not reset, a warning message with the current log rate of the group is emitted repeatedly. This is the delay between every repetition.
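As a usage sketch, the throttle filter can be added to a Flow like this (the limits and the my-output Output name are illustrative):

apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: throttled-flow
  namespace: default
spec:
  filters:
    - throttle:
        group_key: kubernetes.container_name   # rate limit independently per container
        group_bucket_period_s: 60
        group_bucket_limit: 3000                # at most 3000 logs per container per minute
  localOutputRefs:
    - my-output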
Fluent OSS output plugin buffers event logs in local files and uploads them to OSS periodically in background threads.
This plugin splits events by using the timestamp of event logs. For example, if a log ‘2019-04-09 message Hello’ arrives, and then another log ‘2019-04-10 message World’ arrives in this order, the former is stored in the “20190409.gz” file, and the latter in the “20190410.gz” file.
Fluent OSS input plugin reads data from OSS periodically.
This plugin uses MNS on the same region of the OSS bucket. We must setup MNS and OSS event notification before using this plugin.
This document shows how to setup MNS and OSS event notification.
This plugin will poll events from MNS queue and extract object keys from these events, and then will read those objects from OSS. For details, see https://github.com/aliyun/fluent-plugin-oss.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
store_as (string, optional)
Archive format on OSS: gzip, json, text, lzo, lzma2
Default: gzip
upload_crc_enable (bool, optional)
Upload crc enabled
Default: true
warn_for_delay (string, optional)
Given a threshold to treat events as delay, output warning logs if delayed events were put into OSS
If true, put_log_events_retry_limit will be ignored
put_log_events_retry_limit (int, optional)
Maximum count of retry (if exceeding this, the events will be discarded)
put_log_events_retry_wait (string, optional)
Time before retrying PutLogEvents (retry interval increases exponentially like put_log_events_retry_wait * (2 ^ retry_count))
region (string, required)
AWS Region
remove_log_group_aws_tags_key (string, optional)
Remove field specified by log_group_aws_tags_key
remove_log_group_name_key (string, optional)
Remove field specified by log_group_name_key
remove_log_stream_name_key (string, optional)
Remove field specified by log_stream_name_key
remove_retention_in_days (string, optional)
Remove field specified by retention_in_days
retention_in_days (string, optional)
Use to set the expiry time for log group when created with auto_create_stream. (default to no expiry)
retention_in_days_key (string, optional)
Use specified field of records as retention period
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
The number of attempts to make (with exponential backoff) when loading instance profile credentials from the EC2 metadata service using an IAM role. Defaults to 5 retries.
aws_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_sec_key (*secret.Secret, optional)
AWS secret key. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_ses_token (*secret.Secret, optional)
AWS session token. This parameter is optional, but can be provided if using MFA or temporary credentials when your agent is not running on EC2 instance with an IAM Role.
append_new_line (*bool, optional)
If it is enabled, the plugin adds new line character (\n) to each serialized record. Before appending \n, plugin calls chomp and removes separator from the end of each record as chomp_record is true. Therefore, you don’t need to enable chomp_record option when you use kinesis_firehose output with default configuration (append_new_line is true). If you want to set append_new_line false, you can choose chomp_record false (default) or true (compatible format with plugin v2). (Default:true)
This loads AWS access credentials from an external process.
region (string, optional)
AWS region of your stream. It should be in form like us-east-1, us-west-2. Default nil, which means try to find from environment variable AWS_REGION.
reset_backoff_if_success (bool, optional)
Boolean, default true. If enabled, the next retry checks the number of succeeded records in the former batch request and resets the exponential backoff if there was any success. Because a batch request could be composed of requests across shards, simple exponential backoff for the batch request would not work in some cases.
retries_on_batch_request (int, optional)
The plugin will put multiple records to Amazon Kinesis Data Streams in batches using PutRecords. A set of records in a batch may fail for reasons documented in the Kinesis Service API Reference for PutRecords. Failed records will be retried retries_on_batch_request times
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
Assume Role Credentials
assume_role_credentials
duration_seconds (string, optional)
The duration, in seconds, of the role session (900-3600)
external_id (string, optional)
A unique identifier that is used by third parties when assuming roles in their customers’ accounts.
policy (string, optional)
An IAM policy in JSON format
role_arn (string, required) {#assume role credentials-role_arn}
The Amazon Resource Name (ARN) of the role to assume
The number of attempts to make (with exponential backoff) when loading instance profile credentials from the EC2 metadata service using an IAM role. Defaults to 5 retries.
aws_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_sec_key (*secret.Secret, optional)
AWS secret key. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_ses_token (*secret.Secret, optional)
AWS session token. This parameter is optional, but can be provided if using MFA or temporary credentials when your agent is not running on EC2 instance with an IAM Role.
This loads AWS access credentials from an external process.
region (string, optional)
AWS region of your stream. It should be in form like us-east-1, us-west-2. Default nil, which means try to find from environment variable AWS_REGION.
reset_backoff_if_success (bool, optional)
Boolean, default true. If enabled, the next retry checks the number of succeeded records in the former batch request and resets the exponential backoff if there was any success. Because a batch request could be composed of requests across shards, simple exponential backoff for the batch request would not work in some cases.
retries_on_batch_request (int, optional)
The plugin will put multiple records to Amazon Kinesis Data Streams in batches using PutRecords. A set of records in a batch may fail for reasons documented in the Kinesis Service API Reference for PutRecords. Failed records will be retried retries_on_batch_request times
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
stream_name (string, required)
Name of the stream to put data.
Assume Role Credentials
assume_role_credentials
duration_seconds (string, optional)
The duration, in seconds, of the role session (900-3600)
external_id (string, optional)
A unique identifier that is used by third parties when assuming roles in their customers’ accounts.
policy (string, optional)
An IAM policy in JSON format
role_arn (string, required)
The Amazon Resource Name (ARN) of the role to assume
The s3 output plugin buffers event logs in a local file and uploads it to S3 periodically. This plugin splits files exactly by using the time of event logs (not the time when the logs are received). For example, if a log ‘2011-01-02 message B’ arrives, and then another log ‘2011-01-03 message B’ arrives in this order, the former one is stored in the “20110102.gz” file, and the latter one in the “20110103.gz” file.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
sse_customer_algorithm (string, optional)
Specifies the algorithm to use to when encrypting the object
sse_customer_key (string, optional)
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data
sse_customer_key_md5 (string, optional)
Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321
If false, the certificate of endpoint will not be verified
storage_class (string, optional)
The type of storage to use for the object, for example STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR For a complete list of possible values, see the Amazon S3 API reference.
store_as (string, optional)
Archive format on S3
use_bundled_cert (string, optional)
Use aws-sdk-ruby bundled cert
use_server_side_encryption (string, optional)
The Server-side encryption algorithm used when storing this object in S3 (AES256, aws:kms)
warn_for_delay (string, optional)
Given a threshold to treat events as delay, output warning logs if delayed events were put into s3
Available in Logging operator version 4.5 and later. Azure Cloud to use, for example, AzurePublicCloud, AzureChinaCloud, AzureGermanCloud, AzureUSGovernmentCloud, AZURESTACKCLOUD (in uppercase). This field is supported only if the fluentd plugin honors it, for example, https://github.com/elsesiy/fluent-plugin-azure-storage-append-blob-lts
Compat format type: out_file, json, ltsv (default: out_file)
Default: json
path (string, optional)
Path prefix of the files on Azure
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
+
4.8 - Buffer
Buffer
chunk_full_threshold (string, optional)
The percentage of chunk size threshold for flushing. output plugin will flush the chunk when actual size reaches chunk_limit_size * chunk_full_threshold (== 8MB * 0.95 in default)
chunk_limit_records (int, optional)
The max number of events that each chunks can store in it
chunk_limit_size (string, optional)
The max size of each chunks: events will be written into chunks until the size of chunks become this size (default: 8MB)
Default: 8MB
compress (string, optional)
If you set this option to gzip, you can get Fluentd to compress data records before writing to buffer chunks.
delayed_commit_timeout (string, optional)
The timeout seconds until output plugin decides that async write operation fails
disable_chunk_backup (bool, optional)
Instead of storing unrecoverable chunks in the backup directory, just discard them. This option is new in Fluentd v1.2.6.
disabled (bool, optional)
Disable buffer section (default: false)
Default: false,hidden
flush_at_shutdown (bool, optional)
The value to specify to flush/write all buffer chunks at shutdown, or not
flush_interval (string, optional)
Default: 60s
flush_mode (string, optional)
Default: default (equals to lazy if time is specified as chunk key, interval otherwise) lazy: flush/write chunks once per timekey interval: flush/write chunks per specified time via flush_interval immediate: flush/write chunks immediately after events are appended into chunks
flush_thread_burst_interval (string, optional)
The sleep interval seconds of threads between flushes when output plugin flushes waiting chunks next to next
flush_thread_count (int, optional)
The number of threads of output plugins, which is used to write chunks in parallel
flush_thread_interval (string, optional)
The sleep interval seconds of threads to wait next flush trial (when no chunks are waiting)
overflow_action (string, optional)
How output plugin behaves when its buffer queue is full. throw_exception: raise exception to show this error in log. block: block processing of input plugin to emit events into that buffer. drop_oldest_chunk: drop/purge oldest chunk to accept newly incoming chunk.
path (string, optional)
The path where buffer chunks are stored. The ‘*’ is replaced with random characters. It’s highly recommended to leave this default.
Default: operator generated
queue_limit_length (int, optional)
The queue length limitation of this buffer plugin instance
queued_chunks_limit_size (int, optional)
Limit the number of queued chunks. If you set smaller flush_interval, e.g. 1s, there are lots of small queued chunks in buffer. This is not good with file buffer because it consumes lots of fd resources when output destination has a problem. This parameter mitigates such situations.
retry_exponential_backoff_base (string, optional)
The base number of exponential backoff for retries
retry_forever (*bool, optional)
If true, plugin will ignore retry_timeout and retry_max_times options and retry flushing forever
Default: true
retry_max_interval (string, optional)
The maximum interval seconds for exponential backoff between retries while failing
retry_max_times (int, optional)
The maximum number of times to retry to flush while failing
retry_randomize (bool, optional)
If true, output plugin will retry after randomized interval not to do burst retries
retry_secondary_threshold (string, optional)
The ratio of retry_timeout to switch to use secondary while failing (Maximum valid value is 1.0)
retry_timeout (string, optional)
The maximum seconds to retry to flush while failing, until plugin discards buffer chunks
retry_type (string, optional)
exponential_backoff: wait seconds will become large exponentially per failure. periodic: output plugin will retry periodically with fixed intervals (configured via retry_wait).
retry_wait (string, optional)
Seconds to wait before next retry to flush, or constant factor of exponential backoff
tags (*string, optional)
When tag is specified as buffer chunk key, output plugin writes events into chunks separately per tags.
Default: tag,time
timekey (string, required)
Output plugin will flush chunks per specified time (enabled when time is specified in chunk keys)
Default: 10m
timekey_use_utc (bool, optional)
Output plugin decides to use UTC or not to format placeholders using timekey
timekey_wait (string, optional)
Output plugin writes chunks after timekey_wait seconds later after timekey expiration
Default: 1m
timekey_zone (string, optional)
The timezone (-0700 or Asia/Tokyo) string for formatting timekey placeholders
total_limit_size (string, optional)
The size limitation of this buffer plugin instance. Once the total size of stored buffer reached this threshold, all append operations will fail with error (and data will be lost)
type (string, optional)
Fluentd core bundles memory and file plugins. 3rd party plugins are also available when installed.
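These buffer parameters are set in the buffer section of an output. A sketch, reusing the illustrative S3 output from the earlier example:

spec:
  s3:
    s3_bucket: example-logging-bucket
    s3_region: eu-west-1
    buffer:
      type: file              # Fluentd file buffer plugin
      timekey: 10m            # flush a chunk every 10 minutes
      timekey_wait: 1m
      timekey_use_utc: true
      flush_thread_count: 4
      retry_forever: true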
+
4.9 - Datadog
Datadog output plugin for Fluentd
Overview
It mainly contains a proper JSON formatter and a socket handler that streams logs directly to Datadog - so no need to use a log shipper if you don’t want to.
+For details, see https://github.com/DataDog/fluent-plugin-datadog.
Example
spec:
  datadog:
    api_key:
      value: '<YOUR_API_KEY>' # For referencing a secret, see https://kube-logging.dev/docs/configuration/plugins/outputs/secret/
    dd_source: '<INTEGRATION_NAME>'
    dd_tags: '<KEY1:VALUE1>,<KEY2:VALUE2>'
    dd_sourcecategory: '<YOUR_SOURCE_CATEGORY>'
Configuration
Output Config
api_key (*secret.Secret, required)
This parameter is required in order to authenticate your fluent agent.
compression_level (string, optional)
Set the log compression level for HTTP (1 to 9, 9 being the best ratio)
Default: “6”
dd_hostname (string, optional)
Used by Datadog to identify the host submitting the logs.
Default: “hostname -f”
dd_source (string, optional)
This tells Datadog what integration it is
Default: nil
dd_sourcecategory (string, optional)
Multiple value attribute. Can be used to refine the source attribute
Default: nil
dd_tags (string, optional)
Custom tags with the following format “key1:value1, key2:value2”
Default: nil
host (string, optional)
Proxy endpoint when logs are not directly forwarded to Datadog
Default: “http-intake.logs.datadoghq.com”
include_tag_key (bool, optional)
Automatically include the Fluentd tag in the record.
Default: false
max_backoff (string, optional)
The maximum time waited between each retry in seconds
Default: “30”
max_retries (string, optional)
The number of retries before the output plugin stops. Set to -1 for unlimited retries
Default: “-1”
no_ssl_validation (bool, optional)
Disable SSL validation (useful for proxy forwarding)
Default: false
port (string, optional)
Proxy port when logs are not directly forwarded to Datadog and ssl is not used
Default: “80”
service (string, optional)
Used by Datadog to correlate between logs, traces and metrics.
Default: nil
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
ssl_port (string, optional)
Port used to send logs over a SSL encrypted connection to Datadog. If use_http is disabled, use 10516 for the US region and 443 for the EU region.
Default: “443”
tag_key (string, optional)
Where to store the Fluentd tag.
Default: “tag”
timestamp_key (string, optional)
Name of the attribute which will contain timestamp of the log event. If nil, timestamp attribute is not added.
Default: “@timestamp”
use_compression (bool, optional)
Enable log compression for HTTP
Default: true
use_http (bool, optional)
Enable HTTP forwarding. If you disable it, make sure to change the port to 10514 or ssl_port to 10516
Default: true
use_json (bool, optional)
Event format, if true, the event is sent in json format. Othwerwise, in plain text.
Default: true
use_ssl (bool, optional)
If true, the agent initializes a secure connection to Datadog. In clear TCP otherwise.
Configure bulk_message request splitting threshold size. Default value is 20MB. (20 * 1024 * 1024) If you specify this size as negative number, bulk_message request splitting feature will be disabled.
Default: 20MB
content_type (string, optional)
With content_type application/x-ndjson, elasticsearch plugin adds application/x-ndjson as Content-Profile in payload.
Default: application/json
custom_headers (string, optional)
This parameter adds additional headers to request. Example: {“token”:“secret”}
Default: {}
customize_template (string, optional)
Specify the string and its value to be replaced in form of hash. Can contain multiple key value pair that would be replaced in the specified template_file. This setting only creates template and to add rollover index please check the rollover_index configuration.
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property to avoid failing on Fluentd startup. (default: true)
Default: true
flatten_hashes (bool, optional)
Elasticsearch will complain if you send object and concrete values to the same field. For example, you might have logs that look like this, from different places: {“people” => 100} {“people” => {“some” => “thing”}} The second log line will be rejected by the Elasticsearch parser because objects and concrete values can’t live in the same field. To combat this, you can enable hash flattening.
flatten_hashes_separator (string, optional)
Flatten separator
host (string, optional)
You can specify the Elasticsearch host using this parameter.
Default: localhost
hosts (string, optional)
You can specify multiple Elasticsearch hosts with separator “,”. If you specify the hosts option, the host and port options are ignored.
http_backend (string, optional)
With http_backend typhoeus, elasticsearch plugin uses typhoeus faraday http backend. Typhoeus can handle HTTP keepalive.
A list of exception that will be ignored - when the exception occurs the chunk will be discarded and the buffer retry mechanism won’t be called. It is possible also to specify classes at higher level in the hierarchy. For example ignore_exceptions ["Elasticsearch::Transport::Transport::ServerError"] will match all subclasses of ServerError - Elasticsearch::Transport::Transport::Errors::BadRequest, Elasticsearch::Transport::Transport::Errors::ServiceUnavailable, etc.
ilm_policy (string, optional)
Specify ILM policy contents as Hash.
ilm_policy_id (string, optional)
Specify ILM policy id.
ilm_policy_overwrite (bool, optional)
Specify whether overwriting ilm policy or not.
include_index_in_url (bool, optional)
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce an URL-based access control.
include_tag_key (bool, optional)
This will add the Fluentd tag in the JSON record.
Default: false
include_timestamp (bool, optional)
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in Elasticsearch and utilize the rollover API.
Default: false
index_date_pattern (*string, optional)
Specify this to override the index date pattern for creating a rollover index.
Default: now/d
index_name (string, optional)
The index name to write events to
Default: fluentd
index_prefix (string, optional)
Specify the index prefix for the rollover index to be created.
Default: logstash
log_es_400_reason (bool, optional)
By default, the error logger won’t record the reason for a 400 error from the Elasticsearch API unless you set log_level to debug. However, this results in a lot of log spam, which isn’t desirable if all you want is the 400 error reasons. You can set this true to capture the 400 error reasons without all the other debug logs.
Default: false
logstash_dateformat (string, optional)
Set the Logstash date format.
Default: %Y.%m.%d
logstash_format (bool, optional)
Enable Logstash log format.
Default: false
logstash_prefix (string, optional)
Set the Logstash prefix.
Default: logstash
logstash_prefix_separator (string, optional)
Set the Logstash prefix separator.
Default: -
max_retry_get_es_version (string, optional)
You can specify the number of times to retry fetching the Elasticsearch version.
This param is to set a pipeline id of your elasticsearch to be added into the request, you can configure ingest node.
port (int, optional)
You can specify the Elasticsearch port using this parameter.
Default: 9200
prefer_oj_serializer (bool, optional)
With default behavior, Elasticsearch client uses Yajl as JSON encoder/decoder. Oj is the alternative high performance JSON encoder/decoder. When this parameter sets as true, Elasticsearch client uses Oj as JSON encoder/decoder.
Default: false
reconnect_on_error (bool, optional)
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default it will reconnect only on “host unreachable exceptions”. We recommend setting this to true in the presence of Elasticsearch Shield.
Default: false
reload_after (string, optional)
When reload_connections is true, this is the integer number of operations after which the plugin will reload the connections. The default value is 10000.
reload_connections (*bool, optional)
You can tune how the elasticsearch-transport host reloading feature works.(default: true)
Default: true
reload_on_failure (bool, optional)
Indicates that the elasticsearch-transport will try to reload the nodes addresses if there is a failure while making the request. This can be useful to quickly remove a dead node from the list of addresses.
Remove keys on update will not update the configured keys in elasticsearch when a record is being updated. This setting only has any effect if the write operation is update or upsert.
remove_keys_on_update_key (string, optional)
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
request_timeout (string, optional)
You can specify HTTP request timeout.
Default: 5s
resurrect_after (string, optional)
You can set in the elasticsearch-transport how often dead connections from the elasticsearch-transport’s pool will be resurrected.
Default: 60s
retry_tag (string, optional)
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
Similar to parent_key config, will add _routing into elasticsearch command if routing_key is set and the field does exist in input event.
ca_file (*secret.Secret, optional)
CA certificate
client_cert (*secret.Secret, optional)
Client certificate
client_key (*secret.Secret, optional)
Client certificate key
client_key_pass (*secret.Secret, optional)
Client key password
scheme (string, optional)
Connection scheme
Default: http
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
sniffer_class_name (string, optional)
The default Sniffer used by the Elasticsearch::Transport class works well when Fluentd has a direct connection to all of the Elasticsearch servers and can make effective use of the _nodes API. This doesn’t work well when Fluentd must connect through a load balancer or proxy. The parameter sniffer_class_name gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::ElasticsearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. https://github.com/uken/fluent-plugin-elasticsearch#sniffer-class-name
ssl_max_version (string, optional)
Specify min/max SSL/TLS version
ssl_min_version (string, optional)
ssl_verify (*bool, optional)
Skip ssl verification (default: true)
Default: true
ssl_version (string, optional)
If you want to configure SSL/TLS version, you can specify ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
suppress_doc_wrap (bool, optional)
By default, record body is wrapped by ‘doc’. This behavior can not handle update script requests. You can set this to suppress doc wrapping and allow record body to be untouched.
Default: false
suppress_type_name (*bool, optional)
Suppress type name to avoid warnings in Elasticsearch 7.x
Similar to target_index_key config, find the type name to write to in the record under this key (or nested record). If key not found in record - fallback to type_name.
Default: fluentd
template_file (*secret.Secret, optional)
The path to the file containing the template to install. Secret
template_name (string, optional)
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
template_overwrite (bool, optional)
Always update the template, even if it already exists.
Default: false
templates (string, optional)
Specify index templates in form of hash. Can contain multiple templates.
time_key (string, optional)
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
time_key_format (string, optional)
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
time_parse_error_tag (string, optional)
With logstash_format true, elasticsearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to @ERROR label with time_parse_error_tag configured tag.
time_precision (string, optional)
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
type_name (string, optional)
Set the index type for elasticsearch. This is the fallback if target_type_key is missing.
Default: fluentd
unrecoverable_error_types (string, optional)
Default unrecoverable_error_types parameter is set up strictly. Because es_rejected_execution_exception is caused by exceeding Elasticsearch’s thread pool capacity. Advanced users can increase its capacity, but normal users should follow default behavior. If you want to increase it and forcibly retrying bulk request, please consider to change unrecoverable_error_types parameter from default value. Change default value of thread_pool.bulk.queue_size in elasticsearch.yml)
User for HTTP Basic authentication. This plugin will escape required URL encoded characters within %{} placeholders, for example, %{demo+}
utc_index (*bool, optional)
By default, the records inserted into index logstash-YYMMDD with UTC (Coordinated Universal Time). This option allows to use local time if you describe utc_index to false.(default: true)
Default: true
validate_client_version (bool, optional)
When you use mismatched Elasticsearch server and client libraries, fluent-plugin-elasticsearch cannot send data into Elasticsearch.
Default: false
verify_es_version_at_startup (*bool, optional)
The Elasticsearch plugin has to change its behavior for each Elasticsearch major version. For example, Elasticsearch 6 starts to prohibit multiple type_names in one index, and Elasticsearch 7 handles only the _doc type_name in an index. If you want to disable verifying the Elasticsearch version at startup, set this option to false.
Default: true
with_transporter_log (bool, optional)
A debugging option that enables obtaining the transporter-layer log.
Default: false
write_operation (string, optional)
The write_operation can be any of: index, create, update, upsert.
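As a rough illustration of how these parameters fit together, the following Output sketch sends logs to an Elasticsearch host. It is a minimal example only: the host name, namespace, and buffer values are placeholders, not recommended settings.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: es-output
spec:
  elasticsearch:
    host: elasticsearch.logging.svc.cluster.local   # placeholder host
    port: 9200
    scheme: https
    ssl_verify: false
    ssl_version: TLSv1_2
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true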
The path of the file. The actual path is path + time + ".log" by default.
path_suffix (string, optional)
The suffix of output result.
Default: “.log”
recompress (bool, optional)
Performs compression again even if the buffer chunk is already compressed.
Default: false
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
symlink_path (bool, optional)
Create symlink to temporary buffered file when buffer_type is file. This is useful for tailing file content to check logs.
The timeout for the socket connect. When the connection times out during establishment, Errno::ETIMEDOUT is raised.
dns_round_robin (bool, optional)
Enable client-side DNS round robin. Uniformly and randomly picks an IP address to send data to when a hostname has several IP addresses. heartbeat_type udp is not available with dns_round_robin true. Use heartbeat_type tcp or heartbeat_type none.
expire_dns_cache (int, optional)
Set TTL to expire DNS cache in seconds. Set 0 not to use DNS Cache.
Default: 0
hard_timeout (int, optional)
The hard timeout used to detect server failure. The default value is equal to the send_timeout parameter.
Default: 60
heartbeat_interval (int, optional)
The interval of the heartbeat packet.
Default: 1
heartbeat_type (string, optional)
The transport protocol to use for heartbeats. Set “none” to disable heartbeat. [transport, tcp, udp, none]
ignore_network_errors_at_startup (bool, optional)
Ignore DNS resolution and errors at startup time.
keepalive (bool, optional)
Enable keepalive connection.
Default: false
keepalive_timeout (int, optional)
Expiration time of keepalive connections. The default value is nil, which means the connection is kept open as long as possible.
Default: 0
phi_failure_detector (bool, optional)
Use the “Phi accrual failure detector” to detect server failure.
Default: true
phi_threshold (int, optional)
The threshold parameter used to detect server faults. phi_threshold is deeply related to heartbeat_interval. If you are using longer heartbeat_interval, please use the larger phi_threshold. Otherwise you will see frequent detachments of destination servers. The default value 16 is tuned for heartbeat_interval 1s.
Default: 16
recover_wait (int, optional)
The wait time before accepting a server fault recovery.
Default: 10
require_ack_response (bool, optional)
Change the protocol to at-least-once. The plugin waits for the ack from the destination's in_forward plugin.
Server definitions. At least one server is required. Server
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
tls_allow_self_signed_cert (bool, optional)
Allow self-signed certificates or not.
Default: false
tls_cert_logical_store_name (string, optional)
The certificate logical store name on Windows system certstore. This parameter is for Windows only.
tls_cert_path (*secret.Secret, optional)
The additional CA certificate path for TLS.
tls_cert_thumbprint (string, optional)
The certificate thumbprint for searching from the Windows system certstore. This parameter is for Windows only.
tls_cert_use_enterprise_store (bool, optional)
Enable to use certificate enterprise store on Windows system certstore. This parameter is for Windows only.
Verify hostname of servers and certificates or not in TLS transport.
Default: true
tls_version (string, optional)
The default version of TLS transport. [TLSv1_1, TLSv1_2]
Default: TLSv1_2
transport (string, optional)
The transport protocol to use [ tcp, tls ]
verify_connection_at_startup (bool, optional)
Verify that a connection can be made with one of out_forward nodes at the time of startup.
Default: false
Fluentd Server
server
host (string, required)
The IP address or host name of the server.
name (string, optional)
The name of the server. Used for logging and certificate verification in TLS transport (when host is address).
password (*secret.Secret, optional)
The password for authentication.
port (int, optional)
The port number of the host. Note that both TCP packets (event stream) and UDP packets (heartbeat message) are sent to this port.
Default: 24224
shared_key (*secret.Secret, optional)
The shared key per server.
standby (bool, optional)
Marks a node as the standby node for an Active-Standby model between Fluentd nodes. When an active node goes down, the standby node is promoted to an active node. The standby node is not used by the out_forward plugin until then.
username (*secret.Secret, optional)
The username for authentication.
weight (int, optional)
The load balancing weight. If the weight of one server is 20 and the weight of the other server is 30, events are sent in a 2:3 ratio.
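A hedged sketch of an Output using the forward server definitions above: the aggregator host names are placeholders, and only a few of the parameters listed in this reference are set.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: forward-output
spec:
  forward:
    servers:
      - host: fluentd-aggregator-0.logging.svc.cluster.local   # placeholder host
        port: 24224
      - host: fluentd-aggregator-1.logging.svc.cluster.local   # placeholder host
        port: 24224
        standby: true
    require_ack_response: true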
User-provided web-safe keys and arbitrary string values that will be returned with requests for the file as "x-goog-meta-" response headers. Object Metadata
overwrite (bool, optional)
Overwrite already existing path
Default: false
path (string, optional)
Path prefix of the files on GCS
project (string, required)
Project identifier for GCS
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
storage_class (string, optional)
Storage class of the file: dra, nearline, coldline, multi_regional, regional, standard
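A minimal GCS Output sketch using the parameters above. The project ID and bucket name are placeholders, and the bucket and credentials fields are assumptions based on typical usage rather than parameters documented in this excerpt.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: gcs-output
spec:
  gcs:
    project: my-gcp-project        # placeholder project ID
    bucket: my-log-bucket          # assumed parameter, not listed in this excerpt
    path: logs/${tag}/%Y/%m/%d/
    storage_class: nearline
    overwrite: false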
TLS: CA certificate file for server certificate verification Secret
cert (*secret.Secret, optional)
TLS: parameters for presenting a client certificate Secret
configure_kubernetes_labels (*bool, optional)
Configure Kubernetes metadata in a Prometheus-like format.
Default: false
drop_single_key (*bool, optional)
If a record only has 1 key, then just set the log line to the value and discard the key.
Default: false
extra_labels (map[string]string, optional)
Set of extra labels to include with every Loki stream.
extract_kubernetes_labels (*bool, optional)
Extract Kubernetes labels as Loki labels.
Default: false
include_thread_label (*bool, optional)
Whether to include the fluentd_thread label when multiple threads are used for flushing.
Default: true
insecure_tls (*bool, optional)
TLS: disable server certificate verification
Default: false
key (*secret.Secret, optional)
TLS: parameters for presenting a client certificate Secret
labels (Label, optional)
Set of labels to include with every Loki stream.
line_format (string, optional)
Format to use when flattening the record to a log line: json, key_value (default: key_value)
Default: json
password (*secret.Secret, optional)
Specify password if the Loki server requires authentication. Secret
remove_keys ([]string, optional)
Comma separated list of needless record keys to remove
Default: []
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
tenant (string, optional)
Loki is a multi-tenant log storage platform and all requests sent must include a tenant.
url (string, optional)
The url of the Loki server to send logs to.
Default: https://logs-us-west1.grafana.net
username (*secret.Secret, optional)
Specify a username if the Loki server requires authentication. Secret
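A hedged sketch of a Loki Output built from the parameters above. The Loki URL and extra label value are placeholders, and the buffer section is an assumption (it is a common Fluentd output field but is not listed in this excerpt).
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: loki-output
spec:
  loki:
    url: http://loki.monitoring.svc.cluster.local:3100   # placeholder URL
    configure_kubernetes_labels: true
    extract_kubernetes_labels: true
    extra_labels:
      cluster: demo                                        # placeholder label
    buffer:                                                # assumed field
      timekey: 1m
      timekey_wait: 30s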
Raise UnrecoverableError when the response code is not a success code (1xx/3xx/4xx/5xx). If false, the plugin logs an error message instead of raising UnrecoverableError.
Use the array format of JSON. This parameter is used and valid only for the json format. When json_array is true, the Content-Type should be application/json, and JSON data can be used for the HTTP request body.
Default: false
open_timeout (int, optional)
Connection open timeout in seconds.
proxy (string, optional)
Proxy for HTTP request.
read_timeout (int, optional)
Read timeout in seconds.
retryable_response_codes ([]int, optional)
List of retryable response codes. If the response code is included in this list, the plugin retries the buffer flush. Note that status code 503 is planned to be removed from the default list in Fluentd v2.
Default: [503]
ssl_timeout (int, optional)
TLS timeout in seconds.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
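A rough sketch of an HTTP Output using the parameters above. The endpoint parameter and its URL are assumptions (the endpoint field is not listed in this excerpt), shown only to make the example self-contained.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: http-output
spec:
  http:
    endpoint: http://log-receiver.example.com:9000/ingest   # assumed parameter, placeholder URL
    open_timeout: 10
    read_timeout: 30
    retryable_response_codes: [503]
    json_array: true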
Maximum value of the total message size to be included in one batch transmission.
Default: 4096
kafka_agg_max_messages (int, optional)
Maximum number of messages to include in one batch transmission.
Default: nil
keytab (*secret.Secret, optional)
max_send_retries (int, optional)
Number of times to retry sending of messages to a leader
Default: 1
message_key_key (string, optional)
Message Key
Default: “message_key”
partition_key (string, optional)
Partition
Default: “partition”
partition_key_key (string, optional)
Partition Key
Default: “partition_key”
password (*secret.Secret, optional)
Password when using PLAIN/SCRAM SASL authentication
principal (string, optional)
required_acks (int, optional)
The number of acks required per request.
Default: -1
ssl_ca_cert (*secret.Secret, optional)
CA certificate
ssl_ca_certs_from_system (*bool, optional)
System’s CA cert store
Default: false
ssl_client_cert (*secret.Secret, optional)
Client certificate
ssl_client_cert_chain (*secret.Secret, optional)
Client certificate chain
ssl_client_cert_key (*secret.Secret, optional)
Client certificate key
ssl_verify_hostname (*bool, optional)
Verify certificate hostname
sasl_over_ssl (bool, required)
SASL over SSL
Default: true
scram_mechanism (string, optional)
If set, use SCRAM authentication with the specified mechanism. When unset, defaults to PLAIN authentication.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
topic_key (string, optional)
Topic Key
Default: “topic”
use_default_for_unknown_topic (bool, optional)
Use default for unknown topics
Default: false
username (*secret.Secret, optional)
Username when using PLAIN/SCRAM SASL authentication
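A hedged Kafka Output sketch built from the parameters above. The brokers address, default_topic, and format fields are assumptions (they are not listed in this excerpt) and the values are placeholders.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: kafka-output
spec:
  kafka:
    brokers: kafka-headless.kafka.svc.cluster.local:29092   # assumed parameter, placeholder address
    default_topic: app-logs                                  # assumed parameter
    required_acks: -1
    sasl_over_ssl: false
    format:                                                  # assumed field
      type: json
    buffer:                                                  # assumed field
      timekey: 1m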
HTTPS POST request timeout. Optional. Supports s and ms suffixes.
Default: 30 s
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Limit to the size of the Logz.io upload bulk. Defaults to 1000000 bytes leaving about 24kB for overhead.
bulk_limit_warning_limit (int, optional)
Limit to the size of the Logz.io warning message when a record exceeds bulk_limit to prevent a recursion when Fluent warnings are sent to the Logz.io output.
endpoint (*Endpoint, required)
Define LogZ endpoint URL
gzip (bool, optional)
Should the plugin ship the logs in gzip compression. Default is false.
http_idle_timeout (int, optional)
Timeout in seconds that the http persistent connection will stay open without traffic.
output_include_tags (bool, optional)
Should the appender add the fluentd tag to the document, called “fluentd_tag”
output_include_time (bool, optional)
Should the appender add a timestamp to your logs on their process time (recommended).
retry_count (int, optional)
How many times to resend failed bulks.
retry_sleep (int, optional)
How long to sleep initially between retries, exponential step-off.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
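A hedged sketch of a Logz.io Output, assuming the output is referenced as logz in the Output spec. The listener URL, port, and secret name are placeholders; the endpoint sub-fields (url, port, token) are assumptions not listed in this excerpt.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: logzio-output
spec:
  logz:
    endpoint:
      url: https://listener.logz.io      # placeholder listener URL
      port: 8071
      token:
        valueFrom:
          secretKeyRef:
            name: logzio-token           # placeholder secret
            key: token
    output_include_tags: true
    output_include_time: true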
Specify the application name for the rollover index to be created.
Default: default
buffer (*Buffer, optional)
bulk_message_request_threshold (string, optional)
Configure the bulk_message request splitting threshold size. The default value is 20MB (20 * 1024 * 1024). If you specify this size as a negative number, the bulk_message request splitting feature is disabled.
This parameter adds additional headers to request. Example: {"token":"secret"}
Default: {}
customize_template (string, optional)
Specify the string and its value to be replaced in the form of a hash. Can contain multiple key-value pairs that are replaced in the specified template_file. This setting only creates the template; to add a rollover index, see the rollover_index configuration.
data_stream_enable (*bool, optional)
Use @type opensearch_data_stream
data_stream_name (string, optional)
You can specify Opensearch data stream name by this parameter. This parameter is mandatory for opensearch_data_stream.
data_stream_template_name (string, optional)
Specify an existing index template for the data stream. If not present, a new template is created and named after the data stream.
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property to avoid failing on Fluentd startup. (default: true)
You can specify OpenSearch host by this parameter.
Default: localhost
hosts (string, optional)
You can specify multiple OpenSearch hosts with separator “,”. If you specify hosts option, host and port options are ignored.
http_backend (string, optional)
With http_backend typhoeus, the opensearch plugin uses typhoeus faraday http backend. Typhoeus can handle HTTP keepalive.
Default: excon
http_backend_excon_nonblock (*bool, optional)
http_backend_excon_nonblock
Default: true
id_key (string, optional)
Field on your data to identify the data uniquely
ignore_exceptions (string, optional)
A list of exceptions that will be ignored - when such an exception occurs, the chunk is discarded and the buffer retry mechanism is not called. It is also possible to specify classes at a higher level in the hierarchy.
include_index_in_url (bool, optional)
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce a URL-based access control.
include_tag_key (bool, optional)
This will add the Fluentd tag in the JSON record.
Default: false
include_timestamp (bool, optional)
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in OpenSearch and utilize the rollover API.
Default: false
index_date_pattern (*string, optional)
Specify this to override the index date pattern for creating a rollover index.
This param is to set a pipeline ID of your OpenSearch to be added into the request, you can configure ingest node.
port (int, optional)
You can specify OpenSearch port by this parameter.
Default: 9200
prefer_oj_serializer (bool, optional)
By default, the OpenSearch client uses Yajl as the JSON encoder/decoder. Oj is an alternative high-performance JSON encoder/decoder. When this parameter is set to true, the OpenSearch client uses Oj as the JSON encoder/decoder.
Default: false
reconnect_on_error (bool, optional)
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default, it reconnects only on "host unreachable exceptions". We recommend setting this to true in the presence of OpenSearch Shield.
Default: false
reload_after (string, optional)
When reload_connections true, this is the integer number of operations after which the plugin will reload the connections. The default value is 10000.
reload_connections (*bool, optional)
You can tune how the OpenSearch-transport host reloading feature works.
Default: true
reload_on_failure (bool, optional)
Indicates that OpenSearch-transport will try to reload the node addresses if there is a failure while making the request. This can be useful to quickly remove a dead node from the list of addresses.
Default: false
remove_keys_on_update (string, optional)
Remove keys on update will not update the configured keys in OpenSearch when a record is being updated. This setting only has any effect if the write operation is update or upsert.
remove_keys_on_update_key (string, optional)
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
request_timeout (string, optional)
You can specify HTTP request timeout.
Default: 5s
resurrect_after (string, optional)
You can set in the OpenSearch-transport how often dead connections from the OpenSearch-transport’s pool will be resurrected.
Default: 60s
retry_tag (string, optional)
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
routing_key (string, optional)
routing_key
ca_file (*secret.Secret, optional)
CA certificate
client_cert (*secret.Secret, optional)
Client certificate
client_key (*secret.Secret, optional)
Client certificate key
client_key_pass (*secret.Secret, optional)
Client key password
scheme (string, optional)
Connection scheme
Default: http
selector_class_name (string, optional)
selector_class_name
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
sniffer_class_name (string, optional)
The default Sniffer used by the OpenSearch::Transport class works well when Fluentd has a direct connection to all of the OpenSearch servers and can make effective use of the _nodes API. This doesn’t work well when Fluentd must connect through a load balancer or proxy. The sniffer_class_name parameter gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::OpenSearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. For example, a configuration like this would cause connections to logging-os to reload every 100 operations: https://github.com/fluent/fluent-plugin-opensearch#sniffer-class-name.
ssl_verify (*bool, optional)
Skip ssl verification.
Default: true
ssl_version (string, optional)
If you want to configure SSL/TLS version, you can specify ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
suppress_doc_wrap (bool, optional)
By default, the record body is wrapped in a 'doc' element. This behavior cannot handle update script requests. Set this option to suppress doc wrapping and leave the record body untouched.
Default: false
suppress_type_name (*bool, optional)
Suppress type name to avoid warnings in OpenSearch
tag_key (string, optional)
This will add the Fluentd tag in the JSON record.
Default: tag
target_index_affinity (bool, optional)
target_index_affinity
Default: false
target_index_key (string, optional)
Tell this plugin to find the index name to write to in the record under this key in preference to other mechanisms. Key can be specified as path to nested record using dot (’.’) as a separator.
template_file (*secret.Secret, optional)
The path to the file containing the template to install. Secret
template_name (string, optional)
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
template_overwrite (bool, optional)
Always update the template, even if it already exists.
Default: false
templates (string, optional)
Specify index templates in form of hash. Can contain multiple templates.
time_key (string, optional)
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
time_key_exclude_timestamp (bool, optional)
time_key_exclude_timestamp
Default: false
time_key_format (string, optional)
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
time_parse_error_tag (string, optional)
With logstash_format true, OpenSearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to @ERROR label with time_parse_error_tag configured tag.
time_precision (string, optional)
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
truncate_caches_interval (string, optional)
truncate_caches_interval
unrecoverable_error_types (string, optional)
Default unrecoverable_error_types parameter is set up strictly. Because rejected_execution_exception is caused by exceeding OpenSearch’s thread pool capacity. Advanced users can increase its capacity, but normal users should follow default behavior.
unrecoverable_record_types (string, optional)
unrecoverable_record_types
use_legacy_template (*bool, optional)
Specify whether to use the legacy template or not.
Default: true
user (string, optional)
User for HTTP Basic authentication. This plugin escapes required URL-encoded characters within %{} placeholders, for example, %{demo+}.
utc_index (*bool, optional)
By default, records are inserted into the logstash-YYMMDD index using UTC (Coordinated Universal Time). Set utc_index to false to use local time instead.
Default: true
validate_client_version (bool, optional)
When the OpenSearch server and client library versions are mismatched, fluent-plugin-opensearch cannot send data to OpenSearch. When enabled, this option validates that the versions are compatible.
Default: false
verify_os_version_at_startup (*bool, optional)
verify_os_version_at_startup (default: true)
Default: true
with_transporter_log (bool, optional)
A debugging option that enables obtaining the transporter-layer log.
Default: false
write_operation (string, optional)
The write_operation can be any of: index, create, update, upsert.
Default: index
OpenSearchEndpointCredentials
access_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
assume_role_arn (*secret.Secret, optional)
Typically, you can use AssumeRole for cross-account access or federation.
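A hedged sketch of an OpenSearch Output using the parameters above. The host name is a placeholder, and logstash_format and buffer are assumptions (common fields of this output that are not listed in this excerpt).
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: opensearch-output
spec:
  opensearch:
    host: opensearch-cluster-master.opensearch.svc.cluster.local   # placeholder host
    port: 9200
    scheme: https
    ssl_verify: false
    logstash_format: true   # assumed field
    buffer:                 # assumed field
      timekey: 1m
      timekey_wait: 30s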
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
strftime_format (string, optional)
Users can set strftime format.
Default: “%s”
ttl (int, optional)
If 0 or a negative value is set, the TTL is not set for each key.
4.26 - Relabel
Available in Logging Operator version 4.2 and later.
The relabel output uses the relabel output plugin of Fluentd to route events back to a specific Flow, where they can be processed again.
This is useful, for example, if you need to preprocess a subset of logs differently, but then do the same processing on all messages at the end. In this case, you can create multiple flows for preprocessing based on specific log matchers and then aggregate everything into a single final flow for postprocessing.
The value of the label parameter of the relabel output must be the same as the value of the flowLabel parameter of the Flow (or ClusterFlow) where you want to send the messages.
Using the relabel output also makes it possible to pass the messages emitted by the Concat plugin in case of a timeout. Set the timeout_label of the concat plugin to the flowLabel of the flow where you want to send the timeout messages.
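As described above, the label of the relabel output must match the flowLabel of the target flow. A minimal sketch (the resource names and the referenced demo-output are placeholders):
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: final-relabel
spec:
  relabel:
    label: "@postprocess"
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: postprocess
spec:
  flowLabel: "@postprocess"
  localOutputRefs:
    - demo-output   # placeholder output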
Indicates whether to allow non-UTF-8 characters in user logs. If set to true, any non-UTF-8 character is replaced by the string specified in non_utf8_replacement_string. If set to false, the Ingest API errors out any non-UTF-8 characters.
Default: true
data_type (string, optional)
The type of data that will be sent to Sumo Logic, either event or metric
Default: event
fields (Fields, optional)
In this case, parameters inside <fields> are used as indexed fields and removed from the original input events
The host location for events. Cannot set both host and host_key parameters at the same time. (Default: hostname)
host_key (string, optional)
Key for the host location. Cannot set both host and host_key parameters at the same time.
idle_timeout (int, optional)
If a connection has not been used for this number of seconds it will automatically be reset upon the next use to avoid attempting to send to a closed connection. nil means no timeout.
index (string, optional)
Identifier for the Splunk index to be used for indexing events. If this parameter is not set, the indexer is chosen by HEC. Cannot set both index and index_key parameters at the same time.
index_key (string, optional)
The field name that contains the Splunk index name. Cannot set both index and index_key parameters at the same time.
insecure_ssl (*bool, optional)
Indicates if insecure SSL connection is allowed
Default: false
keep_keys (bool, optional)
By default, all the fields used by the *_key parameters are removed from the original input events. To change this behavior, set this parameter to true. This parameter is set to false by default. When set to true, all fields defined in index_key, host_key, source_key, sourcetype_key, metric_name_key, and metric_value_key are saved in the original event.
metric_name_key (string, optional)
Field name that contains the metric name. This parameter only works in conjunction with the metrics_from_event parameter. When this parameter is set, the metrics_from_event parameter is automatically set to false.
Default: true
metric_value_key (string, optional)
Field name that contains the metric value, this parameter is required when metric_name_key is configured.
metrics_from_event (*bool, optional)
When data_type is set to “metric”, the ingest API will treat every key-value pair in the input event as a metric name-value pair. Set metrics_from_event to false to disable this behavior and use metric_name_key and metric_value_key to define metrics. (Default:true)
non_utf8_replacement_string (string, optional)
If coerce_to_utf8 is set to true, any non-UTF-8 character is replaced by the string you specify in this parameter. .
Default: ’ '
open_timeout (int, optional)
The amount of time to wait for a connection to be opened.
protocol (string, optional)
This is the protocol to use for calling the Hec API. Available values are: http, https.
Default: https
read_timeout (int, optional)
The amount of time allowed between reading two chunks from the socket.
ssl_ciphers (string, optional)
List of SSL ciphers allowed.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
source (string, optional)
The source field for events. If this parameter is not set, the source will be decided by HEC. Cannot set both source and source_key parameters at the same time.
source_key (string, optional)
Field name to contain source. Cannot set both source and source_key parameters at the same time.
sourcetype (string, optional)
The sourcetype field for events. When not set, the sourcetype is decided by HEC. Cannot set both source and source_key parameters at the same time.
sourcetype_key (string, optional)
Field name that contains the sourcetype. Cannot set both source and source_key parameters at the same time.
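A hedged Splunk HEC Output sketch. The hec_host, hec_port, and hec_token fields are assumptions (they are not listed in this excerpt), and the host, secret, and index values are placeholders.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: splunk-output
spec:
  splunkHec:
    hec_host: splunk.example.com     # assumed parameter, placeholder host
    hec_port: 8088                   # assumed parameter
    hec_token:                       # assumed parameter, placeholder secret
      valueFrom:
        secretKeyRef:
          name: splunk-hec-token
          key: token
    protocol: https
    insecure_ssl: false
    index: kubernetes                # placeholder index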
SQS queue url e.g. https://sqs.us-west-2.amazonaws.com/123456789012/myqueue
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Used to specify the key when merging json or sending logs in text format
Default: message
metric_data_format (string, optional)
The format of metrics you will be sending, either graphite or carbon2 or prometheus
Default: graphite
open_timeout (int, optional)
Set timeout seconds to wait until connection is opened.
Default: 60
proxy_uri (string, optional)
Add the uri of the proxy environment if present.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
source_category (string, optional)
Set _sourceCategory metadata field within SumoLogic
Default: nil
source_host (string, optional)
Set _sourceHost metadata field within SumoLogic
Default: nil
source_name (string, required)
Set _sourceName metadata field within SumoLogic - overrides source_name_key (default is nil)
source_name_key (string, optional)
Set as source::path_key’s value so that the source_name can be extracted from Fluentd’s buffer
Default: source_name
sumo_client (string, optional)
Name of the Sumo client that is sent as the X-Sumo-Client header.
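A hedged SumoLogic Output sketch using the parameters above. The endpoint field (the HTTP source URL stored in a secret) is an assumption not listed in this excerpt, and all names and values are placeholders.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: sumologic-output
spec:
  sumologic:
    endpoint:                          # assumed parameter, placeholder secret
      valueFrom:
        secretKeyRef:
          name: sumologic-collector
          key: endpoint
    source_name: my-cluster            # placeholder
    source_category: prod/kubernetes   # placeholder
    sumo_client: logging-operator      # placeholder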
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Authorization Bearer token for http request to VMware Log Intelligence Secret
content_type (string, required)
Content Type for http request to VMware Log Intelligence
Default: application/json
structure (string, required)
Structure for http request to VMware Log Intelligence
Default: simple
LogIntelligenceHeadersOut
LogIntelligenceHeadersOut is used to convert the input LogIntelligenceHeaders to a Fluentd output that uses the correct key names for the VMware Log Intelligence plugin. This allows the Output to accept the config in snake_case (as other output plugins do) but emit the Fluentd config with the proper key names (for example, content_type -> Content-Type).
Authorization (*secret.Secret, required)
Authorization Bearer token for http request to VMware Log Intelligence
Content-Type (string, required)
Content Type for http request to VMware Log Intelligence
Default: application/json
structure (string, required)
Structure for http request to VMware Log Intelligence
Flatten hashes to create one key/value pair without losing log data.
Default: true
flatten_hashes_separator (string, optional)
Separator to use for joining flattened keys
Default: _
http_conn_debug (bool, optional)
If set, enables debug logs for http connection
Default: false
http_method (string, optional)
HTTP method (post)
Default: post
host (string, optional)
VMware Aria Operations For Logs Host ex. localhost
log_text_keys ([]string, optional)
Keys from log event whose values should be added as log message/text to VMware Aria Operations For Logs. These key/value pairs won’t be expanded/flattened and won’t be added as metadata/fields.
VMware Aria Operations For Logs ingestion api path ex. ‘api/v1/events/ingest’
Default: api/v1/events/ingest
port (int, optional)
VMware Aria Operations For Logs port ex. 9000
Default: 80
raise_on_error (bool, optional)
Whether to raise errors that were rescued during HTTP requests.
Default: false
rate_limit_msec (int, optional)
Simple rate limiting: ignore any records within rate_limit_msec since the last one
Default: 0
request_retries (int, optional)
Number of retries
Default: 3
request_timeout (int, optional)
HTTP connection TTL for each request.
Default: 5
ssl_verify (*bool, optional)
SSL verification flag
Default: true
scheme (string, optional)
HTTP scheme (http,https)
Default: http
serializer (string, optional)
Serialization (json)
Default: json
shorten_keys (map[string]string, optional)
Keys from the log event to rewrite, for instance from 'kubernetes_namespace' to 'k8s_namespace'. Tags are rewritten with substring substitution and applied in the order present in the hash. Hashes enumerate their values in the order that the corresponding keys were inserted, see: https://ruby-doc.org/core-2.2.2/Hash.html
The annotation format is logging.banzaicloud.io/<loggingRef>: watched. Since the name part of an annotation can't be empty, the default applies to an empty loggingRef value as well.
The mount path is generated from the secret information
The name of the counter to create. Note that the value of this option is always prefixed with syslogng_, so for example key("my-custom-key") becomes syslogng_my-custom-key.
labels (ArrowMap, optional)
The labels used to create separate counters, based on the fields of the messages processed by metrics-probe(). The keys of the map are the name of the label, and the values are syslog-ng templates.
level (int, optional)
Sets the stats level of the generated metrics (default 0).
- (struct{}, required)
5.3 - Rewrite
Rewrite filters can be used to modify record contents. Logging operator currently supports the following rewrite functions:
SyslogNGOutput and SyslogNGClusterOutput resources have almost the same structure as Output and ClusterOutput resources, with the main difference being the number and kind of supported destinations.
You can use the following syslog-ng outputs in your SyslogNGOutput and SyslogNGClusterOutput resources.
6.1 - Authentication for syslog-ng outputs
Overview
GRPC-based outputs use this configuration instead of the simple tls field found at most HTTP based destinations. For details, see the documentation of a related syslog-ng destination, for example, Grafana Loki.
Configuration
Auth
Authentication settings. Only one authentication method can be set. Default: Insecure
adc (*ADC, optional)
Application Default Credentials (ADC).
alts (*ALTS, optional)
Application Layer Transport Security (ALTS) is a simple to use authentication, only available within Google’s infrastructure.
insecure (*Insecure, optional)
This is the default method, authentication is disabled (auth(insecure())).
Prunes the unused space in the LogMessage representation
dir (string, optional)
Description: Defines the folder where the disk-buffer files are stored.
disk_buf_size (int64, required)
This is a required option. The maximum size of the disk-buffer in bytes. The minimum value is 1048576 bytes.
mem_buf_length (*int64, optional)
Use this option if the option reliable() is set to no. This option contains the number of messages stored in overflow queue.
mem_buf_size (*int64, optional)
Use this option if the option reliable() is set to yes. This option contains the size of the messages in bytes that is used in the memory part of the disk buffer.
q_out_size (*int64, optional)
The number of messages stored in the output buffer of the destination.
reliable (bool, required)
If set to yes, syslog-ng OSE cannot lose logs in case of reload/restart, unreachable destination or syslog-ng OSE crash. This solution provides a slower, but reliable disk-buffer option.
The group of the directories created by syslog-ng. To preserve the original properties of an existing directory, use the option without specifying an attribute: dir-group().
Default: Use the global settings
dir_owner (string, optional)
The owner of the directories created by syslog-ng. To preserve the original properties of an existing directory, use the option without specifying an attribute: dir-owner().
Default: Use the global settings
dir_perm (int, optional)
The permission mask of directories created by syslog-ng. Log directories are only created if a file after macro expansion refers to a non-existing directory, and directory creation is enabled (see also the create-dirs() option). For octal numbers prefix the number with 0, for example, use 0755 for rwxr-xr-x.
Default: Use the global settings
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
The body of the HTTP request, for example, body("${ISODATE} ${MESSAGE}"). You can use strings, macros, and template functions in the body. If not set, it will contain the message received from the source by default.
body-prefix (string, optional)
The string syslog-ng OSE puts at the beginning of the body of the HTTP request, before the log message.
body-suffix (string, optional)
The string syslog-ng OSE puts to the end of the body of the HTTP request, after the log message.
delimiter (string, optional)
By default, syslog-ng OSE separates the log messages of the batch with a newline character.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
headers ([]string, optional)
Custom HTTP headers to include in the request, for example, headers("HEADER1: header1", "HEADER2: header2").
Default: empty
log-fifo-size (int, optional)
The number of messages that the output queue can store.
method (string, optional)
Specifies the HTTP method to use when sending the message to the server. POST | PUT
password (secret.Secret, optional)
The password that syslog-ng OSE uses to authenticate on the server where it sends the messages.
persist_name (string, optional)
If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
timeout (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
Specifies the hostname or IP address and optionally the port number of the web service that can receive log data via HTTP. Use a colon (:) after the address to specify the port number of the server. For example: http://127.0.0.1:8000
user (string, optional)
The username that syslog-ng OSE uses to authenticate on the server where it sends the messages.
user-agent (string, optional)
The value of the USER-AGENT header in the messages sent to the server.
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
Batch
batch-bytes (int, optional)
Description: Sets the maximum size of payload in a batch. If the size of the messages reaches this value, syslog-ng OSE sends the batch to the destination even if the number of messages is less than the value of the batch-lines() option. Note that if the batch-timeout() option is enabled and the queue becomes empty, syslog-ng OSE flushes the messages only if batch-timeout() expires, or the batch reaches the limit set in batch-bytes().
batch-lines (int, optional)
Description: Specifies how many lines are flushed to a destination in one batch. The syslog-ng OSE application waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For example, if you set batch-lines() to 100, syslog-ng OSE waits for 100 messages.
batch-timeout (int, optional)
Description: Specifies the time syslog-ng OSE waits for lines to accumulate in the output buffer. The syslog-ng OSE application sends batches to the destinations evenly. The timer starts when the first message arrives to the buffer, so if only few messages arrive, syslog-ng OSE sends messages to the destination at most once every batch-timeout() milliseconds.
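A hedged SyslogNGOutput sketch for the HTTP destination using the parameters above, including the batching and disk-buffer options. The receiver URL, header values, and buffer directory are placeholders, and the field names are assumed to follow the parameter names listed in this reference.
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: http-output
spec:
  http:
    url: http://log-receiver.example.com:8000/ingest   # placeholder URL
    method: POST
    headers:
      - "HEADER1: header1"                              # placeholder header
    batch-lines: 1000
    batch-timeout: 10000
    workers: 4
    disk_buffer:
      reliable: true
      disk_buf_size: 512000000
      dir: /buffers                                     # placeholder directory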
6.6 - Loggly output
Overview
The loggly() destination sends log messages to the Loggly Logging-as-a-Service provider.
You can send log messages over TCP, or encrypted with TLS for syslog-ng outputs.
A JSON object representing key-value pairs for the Event. These key-value pairs add structure to Events, making it easier to search. Attributes can be nested JSON objects, however, we recommend limiting the amount of nesting.
Default: "--scope rfc5424 --exclude MESSAGE --exclude DATE --leave-initial-dot"
batch_bytes (int, optional)
batch_lines (int, optional)
batch_timeout (int, optional)
body (string, optional)
content_type (string, optional)
This field specifies the content type of the log records being sent to Falcon’s LogScale.
Default: "application/json"
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
extra_headers (string, optional)
This field represents additional headers that can be included in the HTTP request when sending log records to Falcon’s LogScale.
Default: empty
persist_name (string, optional)
rawstring (string, optional)
The raw string representing the Event. The default display for an Event in LogScale is the rawstring. If you do not provide the rawstring field, then the response defaults to a JSON representation of the attributes field.
Default: empty
timezone (string, optional)
The timezone is only required if you specify the timestamp in milliseconds. The timezone specifies the local timezone for the event. Note that you must still specify the timestamp in UTC time.
token (*secret.Secret, optional)
An Ingest Token is a unique string that identifies a repository and allows you to send data to that repository.
Default: empty
url (*secret.Secret, optional)
Ingester URL is the URL of the Humio cluster you want to send data to.
Description: Specifies how many lines are flushed to a destination in one batch. The syslog-ng OSE application waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For example, if you set batch-lines() to 100, syslog-ng OSE waits for 100 messages.
batch-timeout (int, optional)
Description: Specifies the time syslog-ng OSE waits for lines to accumulate in the output buffer. The syslog-ng OSE application sends batches to the destinations evenly. The timer starts when the first message arrives to the buffer, so if only few messages arrive, syslog-ng OSE sends messages to the destination at most once every batch-timeout() milliseconds.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
labels (filter.ArrowMap, optional)
Using the Labels map, Kubernetes label to Loki label mapping can be configured. Example: {"app" : "$PROGRAM"}
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
If you receive the following error message during AxoSyslog startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See syslog-ng docs for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
template (string, optional)
Template for customizing the log message format.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
timestamp (string, optional)
The timestamp that will be applied to the outgoing messages (possible values: current|received|msg, default: current). Loki does not accept events in which the timestamp is not monotonically increasing.
url (string, optional)
Specifies the hostname or IP address and optionally the port number of the service that can receive log data via gRPC. Use a colon (:) after the address to specify the port number of the server. For example: grpc://127.0.0.1:8000
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
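A hedged SyslogNGOutput sketch for the Loki destination based on the parameters above. The gRPC endpoint and label values are placeholders, and the field names are assumed to follow the parameter names listed in this reference.
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: loki-output
spec:
  loki:
    url: "grpc://loki.monitoring.svc.cluster.local:9096"   # placeholder gRPC endpoint
    labels:
      app: "$PROGRAM"                                       # placeholder label mapping
      host: "$HOST"
    timestamp: current
    workers: 2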
The name of the MongoDB collection where the log messages are stored (collections are similar to SQL tables). Note that the name of the collection must not start with a dollar sign ($), and that it may contain dot (.) characters.
dir (string, optional)
Defines the folder where the disk-buffer files are stored.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
fallback-topic is used when syslog-ng cannot post a message to the originally defined topic (which can include invalid characters coming from templates).
qos (int, optional)
qos stands for quality of service and can take three values in the MQTT world. Its default value is 0, where there is no guarantee that the message is ever delivered.
template (string, optional)
Template where you can configure the message template sent to the MQTT broker. By default, the template is: $ISODATE $HOST $MSGHDR$MSG
topic (string, optional)
Topic defines in which topic syslog-ng stores the log message. You can also use templates here, and use, for example, the $HOST macro in the topic name hierarchy.
The password used for authentication on a password-protected Redis server.
command (StringList, optional)
Internal rendered form of the CommandAndArguments field
command_and_arguments ([]string, optional)
The Redis command to execute, for example, LPUSH, INCR, or HINCRBY. Using the HINCRBY command with an increment value of 1 allows you to create various statistics. For example, the command("HINCRBY" "${HOST}/programs" "${PROGRAM}" "1") command counts the number of log messages on each host for each program.
Default: ""
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
host (string, optional)
The hostname or IP address of the Redis server.
Default: 127.0.0.1
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
Persistname
port (int, optional)
The port number of the Redis server.
Default: 6379
retries (int, optional)
If syslog-ng OSE cannot send a message, it will try again until the number of attempts reaches retries().
Default: 3
throttle (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
Default: 0
time-reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
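A hedged SyslogNGOutput sketch for the Redis destination, using the HINCRBY counting example described above. The Redis host is a placeholder.
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: redis-output
spec:
  redis:
    host: redis.redis.svc.cluster.local   # placeholder host
    port: 6379
    retries: 3
    throttle: 0
    command_and_arguments:
      - HINCRBY
      - "${HOST}/programs"
      - "${PROGRAM}"
      - "1"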
The number of messages that the output queue can store.
max_object_size (int, optional)
Set the maximum object size.
Default: 5120GiB
max_pending_uploads (int, optional)
Set the maximum number of pending uploads.
Default: 32
object_key (string, optional)
The object_key for the S3 server.
object_key_timestamp (RawString, optional)
Set object_key_timestamp
persist_name (string, optional)
Persistname
region (string, optional)
Set the region option.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
secret_key (*secret.Secret, optional)
The secret_key for the S3 server.
storage_class (string, optional)
Set the storage_class option.
template (RawString, optional)
Template
throttle (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
headers ([]string, optional)
Custom HTTP headers to include in the request, for example, headers("HEADER1: header1", "HEADER2: header2").
Default: empty
persist_name (string, optional)
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
persist_name (string, optional)
port (int, optional)
This option sets the port number of the Sumo Logic server to connect to.
Default: 6514
tag (string, optional)
This option specifies the list of tags to add as the tags fields of Sumo Logic messages. If not specified, syslog-ng OSE automatically adds the tags already assigned to the message. If you set the tag() option, only the tags you specify will be added to the messages.
By default, syslog-ng OSE closes destination sockets if it receives any input from the socket (for example, a reply). If this option is set to no, syslog-ng OSE just ignores the input, but does not close the socket. For details, see the documentation of the AxoSyslog syslog-ng distribution.
disk_buffer (*DiskBuffer, optional)
Enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Unique name for the syslog-ng driver. If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
The name of a directory that contains a set of trusted CA certificates in PEM format. For details, see the AxoSyslog Core documentation.
ca_file (*secret.Secret, optional)
The name of a file that contains a set of trusted CA certificates in PEM format. (Optional) For details, see the AxoSyslog Core documentation.
cert_file (*secret.Secret, optional)
Name of a file, that contains an X.509 certificate (or a certificate chain) in PEM format, suitable as a TLS certificate, matching the private key set in the key-file() option. For details, see the AxoSyslog Core documentation.
cipher-suite (string, optional)
Description: Specifies the cipher, hash, and key-exchange algorithms used for the encryption, for example, ECDHE-ECDSA-AES256-SHA384. The list of available algorithms depends on the version of OpenSSL used to compile syslog-ng.
key_file (*secret.Secret, optional)
The name of a file that contains an unencrypted private key in PEM format, suitable as a TLS key. For details, see the AxoSyslog Core documentation.
Use the certificate store of the system for verifying HTTPS certificates. For details, see the AxoSyslog Core documentation.
GrpcTLS
ca_file (*secret.Secret, optional)
The name of a file that contains a set of trusted CA certificates in PEM format. For details, see the AxoSyslog Core documentation.
cert_file (*secret.Secret, optional)
Name of a file that contains an X.509 certificate (or a certificate chain) in PEM format, suitable as a TLS certificate, matching the private key set in the key-file() option. For details, see the AxoSyslog Core documentation.
key_file (*secret.Secret, optional)
The name of a file that contains an unencrypted private key in PEM format, suitable as a TLS key. For details, see the AxoSyslog Core documentation.
Transport
Transport
ca_cert_path (string, optional)
Specify the path that contains the private CA certificate.
ca_path (string, optional)
Specify path to CA certificate file
ca_private_key_passphrase (string, optional)
Specify the path that contains the private CA private key passphrase.
ca_private_key_path (string, optional)
Specify the path that contains the private CA private key.
cert_path (string, optional)
Specify path to Certificate file
ciphers (string, optional)
Ciphers. Default: "ALL:!aNULL:!eNULL:!SSLv2"
client_cert_auth (bool, optional)
When this is set, Fluentd checks all incoming HTTPS requests for a client certificate signed by the trusted CA; requests that don’t supply a valid client certificate fail.
insecure (bool, optional)
Allow insecure connections when using TLS. Default: false
Exception Detector
Overview
This filter plugin consumes a log stream of JSON objects which contain single-line log messages. If a consecutive sequence of log messages forms an exception stack trace, they are forwarded as a single, combined JSON object. Otherwise, the input log data is forwarded as is. More info at https://github.com/GoogleCloudPlatform/fluent-plugin-detect-exceptions
Note: As tag management is not supported yet, this plugin is mutually exclusive with the Tag normaliser plugin.
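A minimal sketch of a Flow that enables this filter follows; the flow name, language list, and output reference are illustrative assumptions.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: exception-flow          # hypothetical name
spec:
  filters:
    - detectExceptions:
        languages:
          - java
          - python
        multiline_flush_interval: "0.1"
  localOutputRefs:
    - demo-output               # assumed output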
Fluentd filter plugin to fetch various metadata for a pod.
Configuration
EnhanceK8s
api_groups ([]string, optional)
Kubernetes resources api groups
Default: ["apps/v1", "extensions/v1beta1"]
bearer_token_file (string, optional)
Bearer token path
Default: nil
ca_file (secret.Secret, optional)
Kubernetes API CA file
Default: nil
cache_refresh (int, optional)
Cache refresh
Default: 60*60
cache_refresh_variation (int, optional)
Cache refresh variation
Default: 60*15
cache_size (int, optional)
Cache size
Default: 1000
cache_ttl (int, optional)
Cache TTL
Default: 60*60*2
client_cert (secret.Secret, optional)
Kubernetes API Client certificate
Default: nil
client_key (secret.Secret, optional)
Kubernetes API Client certificate key
Default: nil
core_api_versions ([]string, optional)
Kubernetes core API version (for different Kubernetes versions)
Default: [‘v1’]
data_type (string, optional)
Sumo Logic data type
Default: metrics
in_namespace_path ([]string, optional)
parameters for read/write record
Default: ['$.namespace']
in_pod_path ([]string, optional)
Default: ['$.pod','$.pod_name']
kubernetes_url (string, optional)
Kubernetes API URL
Default: nil
ssl_partial_chain (*bool, optional)
If ca_file is for an intermediate CA, or otherwise we do not have the root CA and want to trust the intermediate CA certs we do have, set this to true - this corresponds to the openssl s_client -partial_chain flag and X509_V_FLAG_PARTIAL_CHAIN
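A minimal sketch of a Flow applying this filter with its default settings; the names and the output reference are assumptions for illustration.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: enhance-k8s-flow        # hypothetical name
spec:
  filters:
    - enhanceK8s: {}            # use default settings
  localOutputRefs:
    - demo-output               # assumed output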
Fluentd GeoIP filter
Overview
Fluentd Filter plugin to add information about the geographical location of IP addresses with Maxmind GeoIP databases. More information at https://github.com/y-ken/fluent-plugin-geoip
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: demo-flow
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        remove_key_name_field: true
        reserve_data: true
        parse:
          type: nginx
    - prometheus:
        metrics:
          - name: total_counter
            desc: The total number of foo in message.
            type: counter
            labels:
              foo: bar
        labels:
          host: ${hostname}
          tag: ${tag}
          namespace: $.kubernetes.namespace
  selectors: {}
  localOutputRefs:
    - demo-output
Fluentd config result:
<filter **>
  @type prometheus
  @id logging-demo-flow_2_prometheus
  <metric>
    desc The total number of foo in message.
    name total_counter
    type counter
    <labels>
      foo bar
    </labels>
  </metric>
  <labels>
    host ${hostname}
    namespace $.kubernetes.namespace
    tag ${tag}
  </labels>
</filter>
A filter plugin to throttle logs. Logs are grouped by a configurable key. When a group exceeds a configured rate, logs are dropped for this group.
Configuration
Throttle
group_bucket_limit (int, optional)
Maximum number of logs allowed per group over the period of group_bucket_period_s
Default: 6000
group_bucket_period_s (int, optional)
The period of time over which group_bucket_limit applies
Default: 60
group_drop_logs (bool, optional)
When a group reaches its limit, logs will be dropped from further processing if this value is true
Default: true
group_key (string, optional)
Used to group logs. Groups are rate limited independently
Default: kubernetes.container_name
group_reset_rate_s (int, optional)
After a group has exceeded its bucket limit, logs are dropped until the rate per second falls below or equal to group_reset_rate_s.
Default: group_bucket_limit/group_bucket_period_s
group_warning_delay_s (int, optional)
When a group reaches its limit and as long as it is not reset, a warning message with the current log rate of the group is emitted repeatedly. This is the delay between every repetition.
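A minimal sketch of a Flow using the throttle filter with the parameters above; the flow name, limits, and output reference are illustrative assumptions.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: throttle-flow           # hypothetical name
spec:
  filters:
    - throttle:
        group_key: kubernetes.container_name
        group_bucket_period_s: 60
        group_bucket_limit: 3000   # assumed limit
  localOutputRefs:
    - demo-output               # assumed output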
Fluent OSS output plugin buffers event logs in local files and uploads them to OSS periodically in background threads.
This plugin splits events by using the timestamp of the event logs. For example, if a log ‘2019-04-09 message Hello’ arrives, and then another log ‘2019-04-10 message World’ arrives in this order, the former is stored in the “20190409.gz” file, and the latter in the “20190410.gz” file.
Fluent OSS input plugin reads data from OSS periodically.
This plugin uses MNS in the same region as the OSS bucket. You must set up MNS and OSS event notification before using this plugin.
This document shows how to set up MNS and OSS event notification.
This plugin polls events from the MNS queue, extracts object keys from these events, and then reads those objects from OSS. For details, see https://github.com/aliyun/fluent-plugin-oss.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
store_as (string, optional)
Archive format on OSS: gzip, json, text, lzo, lzma2
Default: gzip
upload_crc_enable (bool, optional)
Upload crc enabled
Default: true
warn_for_delay (string, optional)
Given a threshold to treat events as delay, output warning logs if delayed events were put into OSS
If true, put_log_events_retry_limit will be ignored
put_log_events_retry_limit (int, optional)
Maximum count of retry (if exceeding this, the events will be discarded)
put_log_events_retry_wait (string, optional)
Time before retrying PutLogEvents (retry interval increases exponentially like put_log_events_retry_wait * (2 ^ retry_count))
region (string, required)
AWS Region
remove_log_group_aws_tags_key (string, optional)
Remove field specified by log_group_aws_tags_key
remove_log_group_name_key (string, optional)
Remove field specified by log_group_name_key
remove_log_stream_name_key (string, optional)
Remove field specified by log_stream_name_key
remove_retention_in_days (string, optional)
Remove field specified by retention_in_days
retention_in_days (string, optional)
Use to set the expiry time for log group when created with auto_create_stream. (default to no expiry)
retention_in_days_key (string, optional)
Use specified field of records as retention period
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
The number of attempts to make (with exponential backoff) when loading instance profile credentials from the EC2 metadata service using an IAM role. Defaults to 5 retries.
aws_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_sec_key (*secret.Secret, optional)
AWS secret key. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_ses_token (*secret.Secret, optional)
AWS session token. This parameter is optional, but can be provided if using MFA or temporary credentials when your agent is not running on EC2 instance with an IAM Role.
append_new_line (*bool, optional)
If it is enabled, the plugin adds new line character (\n) to each serialized record. Before appending \n, plugin calls chomp and removes separator from the end of each record as chomp_record is true. Therefore, you don’t need to enable chomp_record option when you use kinesis_firehose output with default configuration (append_new_line is true). If you want to set append_new_line false, you can choose chomp_record false (default) or true (compatible format with plugin v2). (Default:true)
This loads AWS access credentials from an external process.
region (string, optional)
AWS region of your stream. It should be in form like us-east-1, us-west-2. Default nil, which means try to find from environment variable AWS_REGION.
reset_backoff_if_success (bool, optional)
Boolean, default true. If enabled, after retrying, the next retry checks the number of succeeded records in the former batch request and resets the exponential backoff if there was any success. Because a batch request can be composed of requests across shards, simple exponential backoff for the batch request would not work in some cases.
retries_on_batch_request (int, optional)
The plugin will put multiple records to Amazon Kinesis Data Streams in batches using PutRecords. A set of records in a batch may fail for reasons documented in the Kinesis Service API Reference for PutRecords. Failed records will be retried retries_on_batch_request times
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
Assume Role Credentials
assume_role_credentials
duration_seconds (string, optional)
The duration, in seconds, of the role session (900-3600)
external_id (string, optional)
A unique identifier that is used by third parties when assuming roles in their customers’ accounts.
policy (string, optional)
An IAM policy in JSON format
role_arn (string, required)
The Amazon Resource Name (ARN) of the role to assume
The number of attempts to make (with exponential backoff) when loading instance profile credentials from the EC2 metadata service using an IAM role. Defaults to 5 retries.
aws_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_sec_key (*secret.Secret, optional)
AWS secret key. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_ses_token (*secret.Secret, optional)
AWS session token. This parameter is optional, but can be provided if using MFA or temporary credentials when your agent is not running on EC2 instance with an IAM Role.
This loads AWS access credentials from an external process.
region (string, optional)
AWS region of your stream. It should be in form like us-east-1, us-west-2. Default nil, which means try to find from environment variable AWS_REGION.
reset_backoff_if_success (bool, optional)
Boolean, default true. If enabled, after retrying, the next retry checks the number of succeeded records in the former batch request and resets the exponential backoff if there was any success. Because a batch request can be composed of requests across shards, simple exponential backoff for the batch request would not work in some cases.
retries_on_batch_request (int, optional)
The plugin will put multiple records to Amazon Kinesis Data Streams in batches using PutRecords. A set of records in a batch may fail for reasons documented in the Kinesis Service API Reference for PutRecords. Failed records will be retried retries_on_batch_request times
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
stream_name (string, required)
Name of the stream to put data.
Assume Role Credentials
assume_role_credentials
duration_seconds (string, optional)
The duration, in seconds, of the role session (900-3600)
external_id (string, optional)
A unique identifier that is used by third parties when assuming roles in their customers’ accounts.
policy (string, optional)
An IAM policy in JSON format
role_arn (string, required)
The Amazon Resource Name (ARN) of the role to assume
The s3 output plugin buffers event logs in local files and uploads them to S3 periodically. This plugin splits files exactly by using the time of event logs (not the time when the logs are received). For example, if a log ‘2011-01-02 message B’ arrives, and then another log ‘2011-01-03 message B’ arrives in this order, the former is stored in the “20110102.gz” file, and the latter in the “20110103.gz” file.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
sse_customer_algorithm (string, optional)
Specifies the algorithm to use to when encrypting the object
sse_customer_key (string, optional)
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data
sse_customer_key_md5 (string, optional)
Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321
If false, the certificate of endpoint will not be verified
storage_class (string, optional)
The type of storage to use for the object, for example STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR. For a complete list of possible values, see the Amazon S3 API reference.
store_as (string, optional)
Archive format on S3
use_bundled_cert (string, optional)
Use aws-sdk-ruby bundled cert
use_server_side_encryption (string, optional)
The Server-side encryption algorithm used when storing this object in S3 (AES256, aws:kms)
warn_for_delay (string, optional)
Given a threshold to treat events as delay, output warning logs if delayed events were put into s3
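A minimal sketch of an S3 Output wired to a credentials Secret; the bucket, region, Secret name, and key names are assumptions for this example.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output               # hypothetical name
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-credentials  # assumed Secret
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-credentials
          key: awsSecretAccessKey
    s3_bucket: example-logging-bucket   # assumed bucket
    s3_region: us-east-1
    path: logs/${tag}/%Y/%m/%d/
    buffer:
      timekey: 10m
      timekey_wait: 1m
      timekey_use_utc: true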
Available in Logging operator version 4.5 and later. Azure Cloud to use, for example, AzurePublicCloud, AzureChinaCloud, AzureGermanCloud, AzureUSGovernmentCloud, AZURESTACKCLOUD (in uppercase). This field is supported only if the fluentd plugin honors it, for example, https://github.com/elsesiy/fluent-plugin-azure-storage-append-blob-lts
Compat format type: out_file, json, ltsv (default: out_file)
Default: json
path (string, optional)
Path prefix of the files on Azure
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
8 - Buffer
Buffer
chunk_full_threshold (string, optional)
The percentage of chunk size threshold for flushing. output plugin will flush the chunk when actual size reaches chunk_limit_size * chunk_full_threshold (== 8MB * 0.95 in default)
chunk_limit_records (int, optional)
The max number of events that each chunks can store in it
chunk_limit_size (string, optional)
The max size of each chunks: events will be written into chunks until the size of chunks become this size (default: 8MB)
Default: 8MB
compress (string, optional)
If you set this option to gzip, you can get Fluentd to compress data records before writing to buffer chunks.
delayed_commit_timeout (string, optional)
The timeout seconds until output plugin decides that async write operation fails
disable_chunk_backup (bool, optional)
Instead of storing unrecoverable chunks in the backup directory, just discard them. This option is new in Fluentd v1.2.6.
disabled (bool, optional)
Disable buffer section (default: false)
Default: false,hidden
flush_at_shutdown (bool, optional)
The value to specify to flush/write all buffer chunks at shutdown, or not
flush_interval (string, optional)
Default: 60s
flush_mode (string, optional)
Default: default (equals lazy if time is specified as a chunk key, interval otherwise). lazy: flush/write chunks once per timekey. interval: flush/write chunks at the specified time via flush_interval. immediate: flush/write chunks immediately after events are appended into chunks.
flush_thread_burst_interval (string, optional)
The sleep interval seconds of threads between flushes when output plugin flushes waiting chunks next to next
flush_thread_count (int, optional)
The number of threads of output plugins, which is used to write chunks in parallel
flush_thread_interval (string, optional)
The sleep interval seconds of threads to wait next flush trial (when no chunks are waiting)
overflow_action (string, optional)
How the output plugin behaves when its buffer queue is full. throw_exception: raise an exception to show this error in the log. block: block processing of the input plugin to emit events into that buffer. drop_oldest_chunk: drop/purge the oldest chunk to accept the newly incoming chunk.
path (string, optional)
The path where buffer chunks are stored. The ‘*’ is replaced with random characters. It’s highly recommended to leave this default.
Default: operator generated
queue_limit_length (int, optional)
The queue length limitation of this buffer plugin instance
queued_chunks_limit_size (int, optional)
Limit the number of queued chunks. If you set smaller flush_interval, e.g. 1s, there are lots of small queued chunks in buffer. This is not good with file buffer because it consumes lots of fd resources when output destination has a problem. This parameter mitigates such situations.
retry_exponential_backoff_base (string, optional)
The base number of exponential backoff for retries
retry_forever (*bool, optional)
If true, plugin will ignore retry_timeout and retry_max_times options and retry flushing forever
Default: true
retry_max_interval (string, optional)
The maximum interval seconds for exponential backoff between retries while failing
retry_max_times (int, optional)
The maximum number of times to retry to flush while failing
retry_randomize (bool, optional)
If true, output plugin will retry after randomized interval not to do burst retries
retry_secondary_threshold (string, optional)
The ratio of retry_timeout to switch to use secondary while failing (Maximum valid value is 1.0)
retry_timeout (string, optional)
The maximum seconds to retry to flush while failing, until plugin discards buffer chunks
retry_type (string, optional)
exponential_backoff: the wait time grows exponentially with each failure. periodic: the output plugin retries periodically with a fixed interval (configured via retry_wait).
retry_wait (string, optional)
Seconds to wait before next retry to flush, or constant factor of exponential backoff
tags (*string, optional)
When tag is specified as buffer chunk key, output plugin writes events into chunks separately per tags.
Default: tag,time
timekey (string, required)
Output plugin will flush chunks per specified time (enabled when time is specified in chunk keys)
Default: 10m
timekey_use_utc (bool, optional)
Output plugin decides to use UTC or not to format placeholders using timekey
timekey_wait (string, optional)
Output plugin writes chunks after timekey_wait seconds later after timekey expiration
Default: 1m
timekey_zone (string, optional)
The timezone (-0700 or Asia/Tokyo) string for formatting timekey placeholders
total_limit_size (string, optional)
The size limitation of this buffer plugin instance. Once the total size of stored buffer reached this threshold, all append operations will fail with error (and data will be lost)
type (string, optional)
Fluentd core bundles memory and file plugins. 3rd party plugins are also available when installed.
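For context, a sketch of how a buffer section is typically tuned inside an output definition; the parent output type (s3) and the values shown are assumptions, combining a few of the parameters listed above.
spec:
  s3:
    # ... output-specific settings ...
    buffer:
      flush_mode: interval
      flush_interval: 60s
      flush_thread_count: 4
      timekey: 10m
      timekey_wait: 1m
      retry_forever: true
      retry_max_interval: 30s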
9 - Datadog
Datadog output plugin for Fluentd
Overview
It mainly contains a proper JSON formatter and a socket handler that streams logs directly to Datadog - so no need to use a log shipper if you don’t want to. For details, see https://github.com/DataDog/fluent-plugin-datadog.
Example
spec:
  datadog:
    api_key:
      value: '<YOUR_API_KEY>' # For referencing a secret, see https://kube-logging.dev/docs/configuration/plugins/outputs/secret/
    dd_source: '<INTEGRATION_NAME>'
    dd_tags: '<KEY1:VALUE1>,<KEY2:VALUE2>'
    dd_sourcecategory: '<YOUR_SOURCE_CATEGORY>'
Configuration
Output Config
api_key (*secret.Secret, required)
This parameter is required in order to authenticate your fluent agent.
Set the log compression level for HTTP (1 to 9, 9 being the best ratio)
Default: “6”
dd_hostname (string, optional)
Used by Datadog to identify the host submitting the logs.
Default: “hostname -f”
dd_source (string, optional)
This tells Datadog what integration it is
Default: nil
dd_sourcecategory (string, optional)
Multiple value attribute. Can be used to refine the source attribute
Default: nil
dd_tags (string, optional)
Custom tags with the following format “key1:value1, key2:value2”
Default: nil
host (string, optional)
Proxy endpoint when logs are not directly forwarded to Datadog
Default: “http-intake.logs.datadoghq.com”
include_tag_key (bool, optional)
Automatically include the Fluentd tag in the record.
Default: false
max_backoff (string, optional)
The maximum time waited between each retry in seconds
Default: “30”
max_retries (string, optional)
The number of retries before the output plugin stops. Set to -1 for unlimited retries
Default: “-1”
no_ssl_validation (bool, optional)
Disable SSL validation (useful for proxy forwarding)
Default: false
port (string, optional)
Proxy port when logs are not directly forwarded to Datadog and ssl is not used
Default: “80”
service (string, optional)
Used by Datadog to correlate between logs, traces and metrics.
Default: nil
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
ssl_port (string, optional)
Port used to send logs over a SSL encrypted connection to Datadog. If use_http is disabled, use 10516 for the US region and 443 for the EU region.
Default: “443”
tag_key (string, optional)
Where to store the Fluentd tag.
Default: “tag”
timestamp_key (string, optional)
Name of the attribute which will contain timestamp of the log event. If nil, timestamp attribute is not added.
Default: “@timestamp”
use_compression (bool, optional)
Enable log compression for HTTP
Default: true
use_http (bool, optional)
Enable HTTP forwarding. If you disable it, make sure to change the port to 10514 or ssl_port to 10516
Default: true
use_json (bool, optional)
Event format: if true, the event is sent in JSON format; otherwise, in plain text.
Default: true
use_ssl (bool, optional)
If true, the agent initializes a secure connection to Datadog. In clear TCP otherwise.
Configure bulk_message request splitting threshold size. Default value is 20MB. (20 * 1024 * 1024) If you specify this size as negative number, bulk_message request splitting feature will be disabled.
Default: 20MB
content_type (string, optional)
With content_type application/x-ndjson, elasticsearch plugin adds application/x-ndjson as Content-Profile in payload.
Default: application/json
custom_headers (string, optional)
This parameter adds additional headers to request. Example: {“token”:“secret”}
Default: {}
customize_template (string, optional)
Specify the string and its value to be replaced in form of hash. Can contain multiple key value pair that would be replaced in the specified template_file. This setting only creates template and to add rollover index please check the rollover_index configuration.
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property to avoid failing on Fluentd startup. (default: true)
Default: true
flatten_hashes (bool, optional)
Elasticsearch will complain if you send object and concrete values to the same field. For example, you might have logs that look like this, from different places: {“people” => 100} {“people” => {“some” => “thing”}} The second log line will be rejected by the Elasticsearch parser because objects and concrete values can’t live in the same field. To combat this, you can enable hash flattening.
flatten_hashes_separator (string, optional)
Flatten separator
host (string, optional)
You can specify the Elasticsearch host using this parameter.
Default: localhost
hosts (string, optional)
You can specify multiple Elasticsearch hosts with separator “,”. If you specify the hosts option, the host and port options are ignored.
http_backend (string, optional)
With http_backend typhoeus, elasticsearch plugin uses typhoeus faraday http backend. Typhoeus can handle HTTP keepalive.
A list of exception that will be ignored - when the exception occurs the chunk will be discarded and the buffer retry mechanism won’t be called. It is possible also to specify classes at higher level in the hierarchy. For example ignore_exceptions ["Elasticsearch::Transport::Transport::ServerError"] will match all subclasses of ServerError - Elasticsearch::Transport::Transport::Errors::BadRequest, Elasticsearch::Transport::Transport::Errors::ServiceUnavailable, etc.
ilm_policy (string, optional)
Specify ILM policy contents as Hash.
ilm_policy_id (string, optional)
Specify ILM policy id.
ilm_policy_overwrite (bool, optional)
Specify whether overwriting ilm policy or not.
include_index_in_url (bool, optional)
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce an URL-based access control.
include_tag_key (bool, optional)
This will add the Fluentd tag in the JSON record.
Default: false
include_timestamp (bool, optional)
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in Elasticsearch and utilize the rollover API.
Default: false
index_date_pattern (*string, optional)
Specify this to override the index date pattern for creating a rollover index.
Default: now/d
index_name (string, optional)
The index name to write events to
Default: fluentd
index_prefix (string, optional)
Specify the index prefix for the rollover index to be created.
Default: logstash
log_es_400_reason (bool, optional)
By default, the error logger won’t record the reason for a 400 error from the Elasticsearch API unless you set log_level to debug. However, this results in a lot of log spam, which isn’t desirable if all you want is the 400 error reasons. You can set this true to capture the 400 error reasons without all the other debug logs.
Default: false
logstash_dateformat (string, optional)
Set the Logstash date format.
Default: %Y.%m.%d
logstash_format (bool, optional)
Enable Logstash log format.
Default: false
logstash_prefix (string, optional)
Set the Logstash prefix.
Default: logstash
logstash_prefix_separator (string, optional)
Set the Logstash prefix separator.
Default: -
max_retry_get_es_version (string, optional)
You can specify the number of times to retry fetching the Elasticsearch version.
This parameter sets a pipeline id of your Elasticsearch to be added to the request, so that you can configure an ingest node.
port (int, optional)
You can specify the Elasticsearch port using this parameter.
Default: 9200
prefer_oj_serializer (bool, optional)
By default, the Elasticsearch client uses Yajl as its JSON encoder/decoder. Oj is an alternative high-performance JSON encoder/decoder. When this parameter is set to true, the Elasticsearch client uses Oj as its JSON encoder/decoder.
Default: false
reconnect_on_error (bool, optional)
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default it reconnects only on “host unreachable exceptions”. We recommend setting this to true in the presence of Elasticsearch Shield.
Default: false
reload_after (string, optional)
When reload_connections is true, this is the integer number of operations after which the plugin will reload the connections. The default value is 10000.
reload_connections (*bool, optional)
You can tune how the elasticsearch-transport host reloading feature works.(default: true)
Default: true
reload_on_failure (bool, optional)
Indicates that the elasticsearch-transport will try to reload the nodes addresses if there is a failure while making the request. This can be useful to quickly remove a dead node from the list of addresses.
Remove keys on update will not update the configured keys in elasticsearch when a record is being updated. This setting only has any effect if the write operation is update or upsert.
remove_keys_on_update_key (string, optional)
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
request_timeout (string, optional)
You can specify HTTP request timeout.
Default: 5s
resurrect_after (string, optional)
You can set in the elasticsearch-transport how often dead connections from the elasticsearch-transport’s pool will be resurrected.
Default: 60s
retry_tag (string, optional)
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
Similar to parent_key config, will add _routing into elasticsearch command if routing_key is set and the field does exist in input event.
ca_file (*secret.Secret, optional)
CA certificate
client_cert (*secret.Secret, optional)
Client certificate
client_key (*secret.Secret, optional)
Client certificate key
client_key_pass (*secret.Secret, optional)
Client key password
scheme (string, optional)
Connection scheme
Default: http
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
sniffer_class_name (string, optional)
The default Sniffer used by the Elasticsearch::Transport class works well when Fluentd has a direct connection to all of the Elasticsearch servers and can make effective use of the _nodes API. This doesn’t work well when Fluentd must connect through a load balancer or proxy. The parameter sniffer_class_name gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::ElasticsearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. https://github.com/uken/fluent-plugin-elasticsearch#sniffer-class-name
ssl_max_version (string, optional)
Specify min/max SSL/TLS version
ssl_min_version (string, optional)
ssl_verify (*bool, optional)
Skip ssl verification (default: true)
Default: true
ssl_version (string, optional)
If you want to configure SSL/TLS version, you can specify ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
suppress_doc_wrap (bool, optional)
By default, record body is wrapped by ‘doc’. This behavior can not handle update script requests. You can set this to suppress doc wrapping and allow record body to be untouched.
Default: false
suppress_type_name (*bool, optional)
Suppress type name to avoid warnings in Elasticsearch 7.x
Similar to target_index_key config, find the type name to write to in the record under this key (or nested record). If key not found in record - fallback to type_name.
Default: fluentd
template_file (*secret.Secret, optional)
The path to the file containing the template to install. Secret
template_name (string, optional)
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
template_overwrite (bool, optional)
Always update the template, even if it already exists.
Default: false
templates (string, optional)
Specify index templates in form of hash. Can contain multiple templates.
time_key (string, optional)
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
time_key_format (string, optional)
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
time_parse_error_tag (string, optional)
With logstash_format true, elasticsearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to @ERROR label with time_parse_error_tag configured tag.
time_precision (string, optional)
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
type_name (string, optional)
Set the index type for elasticsearch. This is the fallback if target_type_key is missing.
Default: fluentd
unrecoverable_error_types (string, optional)
The default unrecoverable_error_types parameter is set up strictly, because es_rejected_execution_exception is caused by exceeding Elasticsearch’s thread pool capacity. Advanced users can increase its capacity, but normal users should follow the default behavior. If you want to increase it and forcibly retry the bulk request, consider changing the unrecoverable_error_types parameter from its default value (and change the default value of thread_pool.bulk.queue_size in elasticsearch.yml).
User for HTTP Basic authentication. This plugin will escape required URL encoded characters within %{} placeholders, for example, %{demo+}
utc_index (*bool, optional)
By default, records are inserted into the index logstash-YYMMDD with UTC (Coordinated Universal Time). This option allows using local time if you set utc_index to false. (default: true)
Default: true
validate_client_version (bool, optional)
When you use mismatched Elasticsearch server and client libraries, fluent-plugin-elasticsearch cannot send data into Elasticsearch.
Default: false
verify_es_version_at_startup (*bool, optional)
The Elasticsearch plugin needs to adjust its behavior for each Elasticsearch major version. For example, Elasticsearch 6 starts to prohibit multiple type_names in one index, and Elasticsearch 7 handles only the _doc type_name in an index. If you want to disable verifying the Elasticsearch version at startup, set this to false. (default: true)
Default: true
with_transporter_log (bool, optional)
This is debugging purpose option to enable to obtain transporter layer log.
Default: false
write_operation (string, optional)
The write_operation can be any of: (index,create,update,upsert)
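A minimal sketch of an Elasticsearch Output using a few of the options above; the host, TLS choices, and buffer values are assumptions for illustration.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: es-output               # hypothetical name
spec:
  elasticsearch:
    host: elasticsearch.logging.svc.cluster.local   # assumed host
    port: 9200
    scheme: https
    ssl_verify: false
    logstash_format: true
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true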
The Path of the file. The actual path is path + time + “.log” by default.
path_suffix (string, optional)
The suffix of output result.
Default: “.log”
recompress (bool, optional)
Performs compression again even if the buffer chunk is already compressed.
Default: false
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
symlink_path (bool, optional)
Create symlink to temporary buffered file when buffer_type is file. This is useful for tailing file content to check logs.
The timeout time for socket connect. When the connection timed out during establishment, Errno::ETIMEDOUT is raised.
dns_round_robin (bool, optional)
Enable client-side DNS round robin. Uniformly and randomly picks an IP address to send data to when a hostname has several IP addresses. heartbeat_type udp is not available with dns_round_robin true. Use heartbeat_type tcp or heartbeat_type none.
expire_dns_cache (int, optional)
Set TTL to expire DNS cache in seconds. Set 0 not to use DNS Cache.
Default: 0
hard_timeout (int, optional)
The hard timeout used to detect server failure. The default value is equal to the send_timeout parameter.
Default: 60
heartbeat_interval (int, optional)
The interval of the heartbeat packet.
Default: 1
heartbeat_type (string, optional)
The transport protocol to use for heartbeats. Set “none” to disable heartbeat. [transport, tcp, udp, none]
ignore_network_errors_at_startup (bool, optional)
Ignore DNS resolution and errors at startup time.
keepalive (bool, optional)
Enable keepalive connection.
Default: false
keepalive_timeout (int, optional)
Expired time of keepalive. Default value is nil, which means to keep connection as long as possible.
Default: 0
phi_failure_detector (bool, optional)
Use the “Phi accrual failure detector” to detect server failure.
Default: true
phi_threshold (int, optional)
The threshold parameter used to detect server faults. phi_threshold is deeply related to heartbeat_interval. If you are using longer heartbeat_interval, please use the larger phi_threshold. Otherwise you will see frequent detachments of destination servers. The default value 16 is tuned for heartbeat_interval 1s.
Default: 16
recover_wait (int, optional)
The wait time before accepting a server fault recovery.
Default: 10
require_ack_response (bool, optional)
Change the protocol to at-least-once. The plugin waits for an ack from the destination’s in_forward plugin.
Server definitions. At least one server is required. See Server.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
tls_allow_self_signed_cert (bool, optional)
Allow self signed certificates or not.
Default: false
tls_cert_logical_store_name (string, optional)
The certificate logical store name on Windows system certstore. This parameter is for Windows only.
tls_cert_path (*secret.Secret, optional)
The additional CA certificate path for TLS.
tls_cert_thumbprint (string, optional)
The certificate thumbprint for searching from Windows system certstore This parameter is for Windows only.
tls_cert_use_enterprise_store (bool, optional)
Enable to use certificate enterprise store on Windows system certstore. This parameter is for Windows only.
Verify hostname of servers and certificates or not in TLS transport.
Default: true
tls_version (string, optional)
The default version of TLS transport. [TLSv1_1, TLSv1_2]
Default: TLSv1_2
transport (string, optional)
The transport protocol to use [ tcp, tls ]
verify_connection_at_startup (bool, optional)
Verify that a connection can be made with one of out_forward nodes at the time of startup.
Default: false
Fluentd Server
server
host (string, required)
The IP address or host name of the server.
name (string, optional)
The name of the server. Used for logging and certificate verification in TLS transport (when host is address).
password (*secret.Secret, optional)
The password for authentication.
port (int, optional)
The port number of the host. Note that both TCP packets (event stream) and UDP packets (heartbeat message) are sent to this port.
Default: 24224
shared_key (*secret.Secret, optional)
The shared key per server.
standby (bool, optional)
Marks a node as the standby node for an Active-Standby model between Fluentd nodes. When an active node goes down, the standby node is promoted to an active node. The standby node is not used by the out_forward plugin until then.
username (*secret.Secret, optional)
The username for authentication.
weight (int, optional)
The load balancing weight. If the weight of one server is 20 and the weight of the other server is 30, events are sent in a 2:3 ratio.
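A minimal sketch of a forward Output sending to a single upstream Fluentd server; the name, host, and port are assumptions for this example.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: forward-output          # hypothetical name
spec:
  forward:
    servers:
      - host: fluentd-aggregator.example.com   # assumed upstream
        port: 24224
    require_ack_response: true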
User-provided web-safe keys and arbitrary string values that will be returned with requests for the file as “x-goog-meta-” response headers. For details, see Object Metadata.
overwrite (bool, optional)
Overwrite already existing path
Default: false
path (string, optional)
Path prefix of the files on GCS
project (string, required)
Project identifier for GCS
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
storage_class (string, optional)
Storage class of the file: dra, nearline, coldline, multi_regional, regional, standard.
TLS: CA certificate file for server certificate verification Secret
cert (*secret.Secret, optional)
TLS: parameters for presenting a client certificate Secret
configure_kubernetes_labels (*bool, optional)
Configure Kubernetes metadata in a Prometheus like format
Default: false
drop_single_key (*bool, optional)
If a record only has 1 key, then just set the log line to the value and discard the key.
Default: false
extra_labels (map[string]string, optional)
Set of extra labels to include with every Loki stream.
extract_kubernetes_labels (*bool, optional)
Extract kubernetes labels as loki labels
Default: false
include_thread_label (*bool, optional)
Whether to include the fluentd_thread label when multiple threads are used for flushing.
Default: true
insecure_tls (*bool, optional)
TLS: disable server certificate verification
Default: false
key (*secret.Secret, optional)
TLS: parameters for presenting a client certificate Secret
labels (Label, optional)
Set of labels to include with every Loki stream.
line_format (string, optional)
Format to use when flattening the record to a log line: json, key_value (default: key_value)
Default: json
password (*secret.Secret, optional)
Specify password if the Loki server requires authentication. Secret
remove_keys ([]string, optional)
Comma separated list of needless record keys to remove
Default: []
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
tenant (string, optional)
Loki is a multi-tenant log storage platform and all requests sent must include a tenant.
url (string, optional)
The url of the Loki server to send logs to.
Default: https://logs-us-west1.grafana.net
username (*secret.Secret, optional)
Specify a username if the Loki server requires authentication. Secret
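A minimal sketch of a Loki Output with Kubernetes labels enabled; the name, URL, and buffer values are assumptions for illustration.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: loki-output             # hypothetical name
spec:
  loki:
    url: http://loki:3100       # assumed Loki address
    configure_kubernetes_labels: true
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true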
Raise UnrecoverableError when the response code is non success, 1xx/3xx/4xx/5xx. If false, the plugin logs error message instead of raising UnrecoverableError.
Using array format of JSON. This parameter is used and valid only for json format. When json_array as true, Content-Profile should be application/json and be able to use JSON data for the HTTP request body.
Default: false
open_timeout (int, optional)
Connection open timeout in seconds.
proxy (string, optional)
Proxy for HTTP request.
read_timeout (int, optional)
Read timeout in seconds.
retryable_response_codes ([]int, optional)
List of retryable response codes. If the response code is included in this list, the plugin retries the buffer flush. Since Fluentd v2, the status code 503 is going to be removed from the default.
Default: [503]
ssl_timeout (int, optional)
TLS timeout in seconds.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, fluentd logs warning message and increases metric fluentd_output_status_slow_flush_count.
Maximum value of total message size to be included in one batch transmission.
Default: 4096
kafka_agg_max_messages (int, optional)
Maximum number of messages to include in one batch transmission.
Default: nil
keytab (*secret.Secret, optional)
max_send_retries (int, optional)
Number of times to retry sending of messages to a leader
Default: 1
message_key_key (string, optional)
Message Key
Default: “message_key”
partition_key (string, optional)
Partition
Default: “partition”
partition_key_key (string, optional)
Partition Key
Default: “partition_key”
password (*secret.Secret, optional)
Password when using PLAIN/SCRAM SASL authentication
principal (string, optional)
required_acks (int, optional)
The number of acks required per request.
Default: -1
ssl_ca_cert (*secret.Secret, optional)
CA certificate
ssl_ca_certs_from_system (*bool, optional)
System’s CA cert store
Default: false
ssl_client_cert (*secret.Secret, optional)
Client certificate
ssl_client_cert_chain (*secret.Secret, optional)
Client certificate chain
ssl_client_cert_key (*secret.Secret, optional)
Client certificate key
ssl_verify_hostname (*bool, optional)
Verify certificate hostname
sasl_over_ssl (bool, required)
SASL over SSL
Default: true
scram_mechanism (string, optional)
If set, use SCRAM authentication with specified mechanism. When unset, default to PLAIN authentication
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
topic_key (string, optional)
Topic Key
Default: “topic”
use_default_for_unknown_topic (bool, optional)
Use default for unknown topics
Default: false
username (*secret.Secret, optional)
Username when using PLAIN/SCRAM SASL authentication
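A minimal sketch of a Kafka Output; the broker address, topic, and format are assumptions for this example.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: kafka-output            # hypothetical name
spec:
  kafka:
    brokers: kafka-headless.kafka.svc.cluster.local:29092   # assumed broker
    default_topic: app-logs                                  # assumed topic
    sasl_over_ssl: false
    format:
      type: json
    buffer:
      timekey: 1m
      timekey_wait: 30s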
HTTPS POST request timeout. Optional. Supports s and ms suffixes.
Default: 30 s
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Limit to the size of the Logz.io upload bulk. Defaults to 1000000 bytes leaving about 24kB for overhead.
bulk_limit_warning_limit (int, optional)
Limit to the size of the Logz.io warning message when a record exceeds bulk_limit to prevent a recursion when Fluent warnings are sent to the Logz.io output.
endpoint (*Endpoint, required)
Define LogZ endpoint URL
gzip (bool, optional)
Should the plugin ship the logs in gzip compression. Default is false.
http_idle_timeout (int, optional)
Timeout in seconds that the http persistent connection will stay open without traffic.
output_include_tags (bool, optional)
Should the appender add the fluentd tag to the document, called “fluentd_tag”
output_include_time (bool, optional)
Should the appender add a timestamp to your logs on their process time (recommended).
retry_count (int, optional)
How many times to resend failed bulks.
retry_sleep (int, optional)
How long to sleep initially between retries, exponential step-off.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
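A minimal sketch of a Logz.io Output referencing the shipping token from a Secret; the listener URL, port, Secret name, and key are assumptions for illustration.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: logzio-output           # hypothetical name
spec:
  logzio:
    endpoint:
      url: https://listener.logz.io   # assumed listener
      port: 8071
      token:
        valueFrom:
          secretKeyRef:
            name: logzio-token        # assumed Secret
            key: token
    output_include_tags: true
    output_include_time: true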
Specify the application name for the rollover index to be created.
Default: default
buffer (*Buffer, optional)
bulk_message_request_threshold (string, optional)
Configure bulk_message request splitting threshold size. Default value is 20MB. (20 * 1024 * 1024) If you specify this size as negative number, bulk_message request splitting feature will be disabled.
This parameter adds additional headers to request. Example: {"token":"secret"}
Default: {}
customize_template (string, optional)
Specify the string and its value to be replaced in form of hash. Can contain multiple key value pair that would be replaced in the specified template_file. This setting only creates template and to add rollover index please check the rollover_index configuration.
data_stream_enable (*bool, optional)
Use @type opensearch_data_stream
data_stream_name (string, optional)
You can specify Opensearch data stream name by this parameter. This parameter is mandatory for opensearch_data_stream.
data_stream_template_name (string, optional)
Specify an existing index template for the data stream. If not present, a new template is created and named after the data stream.
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property to avoid failing on Fluentd startup. (default: true)
You can specify OpenSearch host by this parameter.
Default: localhost
hosts (string, optional)
You can specify multiple OpenSearch hosts with separator “,”. If you specify hosts option, host and port options are ignored.
http_backend (string, optional)
With http_backend typhoeus, the opensearch plugin uses typhoeus faraday http backend. Typhoeus can handle HTTP keepalive.
Default: excon
http_backend_excon_nonblock (*bool, optional)
http_backend_excon_nonblock
Default: true
id_key (string, optional)
Field on your data to identify the data uniquely
ignore_exceptions (string, optional)
A list of exception that will be ignored - when the exception occurs the chunk will be discarded and the buffer retry mechanism won’t be called. It is possible also to specify classes at higher level in the hierarchy.
include_index_in_url (bool, optional)
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce an URL-based access control.
include_tag_key (bool, optional)
This will add the Fluentd tag in the JSON record.
Default: false
include_timestamp (bool, optional)
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in OpenSearch and utilize the rollover API.
Default: false
index_date_pattern (*string, optional)
Specify this to override the index date pattern for creating a rollover index.
This parameter sets a pipeline ID of your OpenSearch to be added to the request, so that you can configure an ingest node.
port (int, optional)
You can specify OpenSearch port by this parameter.
Default: 9200
prefer_oj_serializer (bool, optional)
By default, the OpenSearch client uses Yajl as its JSON encoder/decoder. Oj is an alternative high-performance JSON encoder/decoder. When this parameter is set to true, the OpenSearch client uses Oj as its JSON encoder/decoder.
Default: false
reconnect_on_error (bool, optional)
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default it reconnects only on “host unreachable exceptions”. We recommend setting this to true in the presence of OpenSearch shield.
Default: false
reload_after (string, optional)
When reload_connections is true, this is the integer number of operations after which the plugin will reload the connections. The default value is 10000.
reload_connections (*bool, optional)
You can tune how the OpenSearch-transport host reloading feature works.(default: true)
Default: true
reload_on_failure (bool, optional)
Indicates that the OpenSearch-transport will try to reload the nodes’ addresses if there is a failure while making the request. This can be useful to quickly remove a dead node from the list of addresses.
Default: false
remove_keys_on_update (string, optional)
Remove keys on update will not update the configured keys in OpenSearch when a record is being updated. This setting only has any effect if the write operation is update or upsert.
remove_keys_on_update_key (string, optional)
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
request_timeout (string, optional)
You can specify HTTP request timeout.
Default: 5s
resurrect_after (string, optional)
You can set in the OpenSearch-transport how often dead connections from the OpenSearch-transport’s pool will be resurrected.
Default: 60s
retry_tag (string, optional)
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
routing_key (string, optional)
routing_key
ca_file (*secret.Secret, optional)
CA certificate
client_cert (*secret.Secret, optional)
Client certificate
client_key (*secret.Secret, optional)
Client certificate key
client_key_pass (*secret.Secret, optional)
Client key password
scheme (string, optional)
Connection scheme
Default: http
selector_class_name (string, optional)
selector_class_name
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
sniffer_class_name (string, optional)
The default Sniffer used by the OpenSearch::Transport class works well when Fluentd has a direct connection to all of the OpenSearch servers and can make effective use of the _nodes API. This doesn’t work well when Fluentd must connect through a load balancer or proxy. The sniffer_class_name parameter gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::OpenSearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. For example, a configuration like this would cause connections to logging-os to reload every 100 operations: https://github.com/fluent/fluent-plugin-opensearch#sniffer-class-name.
ssl_verify (*bool, optional)
Skip ssl verification (default: true)
Default: true
ssl_version (string, optional)
If you want to configure SSL/TLS version, you can specify ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
suppress_doc_wrap (bool, optional)
By default, record body is wrapped by ‘doc’. This behavior can not handle update script requests. You can set this to suppress doc wrapping and allow record body to be untouched.
Default: false
suppress_type_name (*bool, optional)
Suppress type name to avoid warnings in OpenSearch
tag_key (string, optional)
This will add the Fluentd tag in the JSON record.
Default: tag
target_index_affinity (bool, optional)
target_index_affinity
Default: false
target_index_key (string, optional)
Tell this plugin to find the index name to write to in the record under this key in preference to other mechanisms. Key can be specified as path to nested record using dot (’.’) as a separator.
template_file (*secret.Secret, optional)
The path to the file containing the template to install. Secret
template_name (string, optional)
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
template_overwrite (bool, optional)
Always update the template, even if it already exists.
Default: false
templates (string, optional)
Specify index templates in form of hash. Can contain multiple templates.
time_key (string, optional)
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
time_key_exclude_timestamp (bool, optional)
time_key_exclude_timestamp
Default: false
time_key_format (string, optional)
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
time_parse_error_tag (string, optional)
With logstash_format true, OpenSearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to @ERROR label with time_parse_error_tag configured tag.
time_precision (string, optional)
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
truncate_caches_interval (string, optional)
truncate_caches_interval
unrecoverable_error_types (string, optional)
Default unrecoverable_error_types parameter is set up strictly. Because rejected_execution_exception is caused by exceeding OpenSearch’s thread pool capacity. Advanced users can increase its capacity, but normal users should follow default behavior.
unrecoverable_record_types (string, optional)
unrecoverable_record_types
use_legacy_template (*bool, optional)
Specify whether to use the legacy template or not.
Default: true
user (string, optional)
User for HTTP Basic authentication. This plugin will escape required URL encoded characters within %{} placeholders. e.g. %{demo+}
utc_index (*bool, optional)
By default, records are inserted into the index logstash-YYMMDD with UTC (Coordinated Universal Time). This option allows using local time if you set utc_index to false.
Default: true
validate_client_version (bool, optional)
When you use mismatched OpenSearch server and client libraries, fluent-plugin-opensearch cannot send data into OpenSearch.
Default: false
verify_os_version_at_startup (*bool, optional)
verify_os_version_at_startup (default: true)
Default: true
with_transporter_log (bool, optional)
This is a debugging option that enables obtaining the transporter-layer log.
Default: false
write_operation (string, optional)
The write_operation can be any of: (index,create,update,upsert)
Default: index
OpenSearchEndpointCredentials
access_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
assume_role_arn (*secret.Secret, optional)
Typically, you can use AssumeRole for cross-account access or federation.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
strftime_format (string, optional)
Users can set strftime format.
Default: “%s”
ttl (int, optional)
If 0 or a negative value is set, ttl is not set in each key.
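To tie these parameters together, here is a minimal sketch of an OpenSearch Output custom resource. The resource name, host, and buffer values below are illustrative placeholders, not defaults from this reference.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: opensearch-output                                # placeholder name
spec:
  opensearch:
    host: opensearch-cluster.default.svc.cluster.local   # placeholder host
    port: 9200
    scheme: https
    ssl_verify: false
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true
```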
26 - Relabel
Available in Logging Operator version 4.2 and later.
The relabel output uses the relabel output plugin of Fluentd to route events back to a specific Flow, where they can be processed again.
This is useful, for example, if you need to preprocess a subset of logs differently, but then do the same processing on all messages at the end. In this case, you can create multiple flows for preprocessing based on specific log matchers and then aggregate everything into a single final flow for postprocessing.
The value of the label parameter of the relabel output must be the same as the value of the flowLabel parameter of the Flow (or ClusterFlow) where you want to send the messages.
Using the relabel output also makes it possible to pass the messages emitted by the Concat plugin in case of a timeout. Set the timeout_label of the concat plugin to the flowLabel of the flow where you want to send the timeout messages.
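As a sketch of this wiring (all resource names and the label value below are placeholders), the label of the relabel output must equal the flowLabel of the flow that does the postprocessing; the localOutputRefs entry pointing at the eventual destination is shown for completeness and is also a placeholder.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: final-relabel        # placeholder name; reference it from the preprocessing flows' outputs
spec:
  relabel:
    label: '@final'          # must match the flowLabel below
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: final-flow           # placeholder name
spec:
  flowLabel: '@final'        # messages routed to the relabel output re-enter processing here
  localOutputRefs:
    - my-destination-output  # placeholder: the real destination after postprocessing
```

Any preprocessing flow that lists final-relabel among its outputs hands its messages over to final-flow for the shared postprocessing.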
Indicates whether to allow non-UTF-8 characters in user logs. If set to true, any non-UTF-8 character is replaced by the string specified in non_utf8_replacement_string. If set to false, the Ingest API errors out any non-UTF-8 characters. .
Default: true
data_type (string, optional)
The type of data that will be sent to Splunk, either event or metric
Default: event
fields (Fields, optional)
In this case, parameters inside <fields> are used as indexed fields and removed from the original input events
The host location for events. Cannot set both host and host_key parameters at the same time. (Default:hostname)
host_key (string, optional)
Key for the host location. Cannot set both host and host_key parameters at the same time.
idle_timeout (int, optional)
If a connection has not been used for this number of seconds it will automatically be reset upon the next use to avoid attempting to send to a closed connection. nil means no timeout.
index (string, optional)
Identifier for the Splunk index to be used for indexing events. If this parameter is not set, the indexer is chosen by HEC. Cannot set both index and index_key parameters at the same time.
index_key (string, optional)
The field name that contains the Splunk index name. Cannot set both index and index_key parameters at the same time.
insecure_ssl (*bool, optional)
Indicates if insecure SSL connection is allowed
Default: false
keep_keys (bool, optional)
By default, all the fields used by the *_key parameters are removed from the original input events. To change this behavior, set this parameter to true. This parameter is set to false by default. When set to true, all fields defined in index_key, host_key, source_key, sourcetype_key, metric_name_key, and metric_value_key are saved in the original event.
metric_name_key (string, optional)
Field name that contains the metric name. This parameter only works in conjunction with the metrics_from_event parameter. When this parameter is set, the metrics_from_event parameter is automatically set to false.
Default: true
metric_value_key (string, optional)
Field name that contains the metric value, this parameter is required when metric_name_key is configured.
metrics_from_event (*bool, optional)
When data_type is set to “metric”, the ingest API will treat every key-value pair in the input event as a metric name-value pair. Set metrics_from_event to false to disable this behavior and use metric_name_key and metric_value_key to define metrics. (Default:true)
non_utf8_replacement_string (string, optional)
If coerce_to_utf8 is set to true, any non-UTF-8 character is replaced by the string you specify in this parameter. .
Default: ' '
open_timeout (int, optional)
The amount of time to wait for a connection to be opened.
protocol (string, optional)
This is the protocol to use for calling the Hec API. Available values are: http, https.
Default: https
read_timeout (int, optional)
The amount of time allowed between reading two chunks from the socket.
ssl_ciphers (string, optional)
List of SSL ciphers allowed.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
source (string, optional)
The source field for events. If this parameter is not set, the source will be decided by HEC. Cannot set both source and source_key parameters at the same time.
source_key (string, optional)
Field name to contain source. Cannot set both source and source_key parameters at the same time.
sourcetype (string, optional)
The sourcetype field for events. When not set, the sourcetype is decided by HEC. Cannot set both sourcetype and sourcetype_key parameters at the same time.
sourcetype_key (string, optional)
Field name that contains the sourcetype. Cannot set both sourcetype and sourcetype_key parameters at the same time.
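Because this excerpt starts mid-way through the Splunk HEC parameter list, the following sketch assumes the splunkHec output key and the hec_host, hec_port, and hec_token connection fields of the upstream fluent-plugin-splunk-hec plugin; only index, source, sourcetype, protocol, and insecure_ssl come from the list above, and all concrete values are placeholders.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: splunk-output              # placeholder name
spec:
  splunkHec:                       # assumed output key
    hec_host: splunk.example.com   # assumed field, not part of the excerpt above
    hec_port: 8088                 # assumed field
    hec_token:                     # assumed field
      valueFrom:
        secretKeyRef:
          name: splunk-hec
          key: token
    protocol: https
    insecure_ssl: false
    index: main                    # placeholder index
    source: fluentd
    sourcetype: _json
```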
SQS queue url e.g. https://sqs.us-west-2.amazonaws.com/123456789012/myqueue
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Used to specify the key when merging json or sending logs in text format
Default: message
metric_data_format (string, optional)
The format of metrics you will be sending, either graphite or carbon2 or prometheus
Default: graphite
open_timeout (int, optional)
Set timeout seconds to wait until connection is opened.
Default: 60
proxy_uri (string, optional)
Add the uri of the proxy environment if present.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
source_category (string, optional)
Set _sourceCategory metadata field within SumoLogic
Default: nil
source_host (string, optional)
Set _sourceHost metadata field within SumoLogic
Default: nil
source_name (string, required)
Set _sourceName metadata field within SumoLogic - overrides source_name_key (default is nil)
source_name_key (string, optional)
Set as source::path_key’s value so that the source_name can be extracted from Fluentd’s buffer
Default: source_name
sumo_client (string, optional)
Name of the Sumo client which is sent as the X-Sumo-Client header
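A minimal sketch of a Sumo Logic output using the parameters above; the sumologic output key and the endpoint field (the HTTP source URL, typically kept in a Secret) are assumptions not shown in this excerpt, and all values are placeholders.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: sumologic-output          # placeholder name
spec:
  sumologic:                      # assumed output key
    endpoint:                     # assumed field: the HTTP source URL, kept in a Secret
      valueFrom:
        secretKeyRef:
          name: sumologic
          key: endpoint
    source_name: my-cluster       # placeholder
    source_category: prod/apps    # placeholder
    sumo_client: logging-operator # placeholder
```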
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Authorization Bearer token for http request to VMware Log Intelligence Secret
content_type (string, required)
Content Type for http request to VMware Log Intelligence
Default: application/json
structure (string, required)
Structure for http request to VMware Log Intelligence
Default: simple
LogIntelligenceHeadersOut
LogIntelligenceHeadersOut is used to convert the input LogIntelligenceHeaders to a Fluentd output that uses the correct key names for the VMware Log Intelligence plugin. This allows the Output to accept the config in snake_case (as other output plugins do) but render the Fluentd config with the proper key names (for example, content_type -> Content-Type).
Authorization (*secret.Secret, required)
Authorization Bearer token for http request to VMware Log Intelligence
Content-Type (string, required)
Content Type for http request to VMware Log Intelligence
Default: application/json
structure (string, required)
Structure for http request to VMware Log Intelligence
Flatten hashes to create one key/val pair w/o losing log data
Default: true
flatten_hashes_separator (string, optional)
Separator to use for joining flattened keys
Default: _
http_conn_debug (bool, optional)
If set, enables debug logs for http connection
Default: false
http_method (string, optional)
HTTP method (post)
Default: post
host (string, optional)
VMware Aria Operations For Logs Host ex. localhost
log_text_keys ([]string, optional)
Keys from log event whose values should be added as log message/text to VMware Aria Operations For Logs. These key/value pairs won’t be expanded/flattened and won’t be added as metadata/fields.
VMware Aria Operations For Logs ingestion api path ex. ‘api/v1/events/ingest’
Default: api/v1/events/ingest
port (int, optional)
VMware Aria Operations For Logs port ex. 9000
Default: 80
raise_on_error (bool, optional)
Raise errors that were rescued during HTTP requests?
Default: false
rate_limit_msec (int, optional)
Simple rate limiting: ignore any records within rate_limit_msec since the last one
Default: 0
request_retries (int, optional)
Number of retries
Default: 3
request_timeout (int, optional)
http connection ttl for each request
Default: 5
ssl_verify (*bool, optional)
SSL verification flag
Default: true
scheme (string, optional)
HTTP scheme (http,https)
Default: http
serializer (string, optional)
Serialization (json)
Default: json
shorten_keys (map[string]string, optional)
Keys from the log event to rewrite, for instance from ‘kubernetes_namespace’ to ‘k8s_namespace’. Tags will be rewritten with substring substitution and applied in the order present in the hash. Hashes enumerate their values in the order that the corresponding keys were inserted, see: https://ruby-doc.org/core-2.2.2/Hash.html
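A minimal sketch using the parameters above; the vmwareLogInsight output key is an assumption (it is not named in this excerpt), and the host and list values are placeholders.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: aria-ops-logs-output          # placeholder name
spec:
  vmwareLogInsight:                   # assumed output key for this plugin
    host: aria-logs.example.com       # placeholder host
    port: 9000
    scheme: http
    path: api/v1/events/ingest
    log_text_keys:                    # placeholder keys whose values become the log text
      - log
      - msg
```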
The annotation format is logging.banzaicloud.io/<loggingRef>: watched. Since the name part of an annotation can’t be empty, the default applies to an empty loggingRef value as well.
The mount path is generated from the secret information
Available in Logging operator version 4.5 and later. Azure Cloud to use, for example, AzurePublicCloud, AzureChinaCloud, AzureGermanCloud, AzureUSGovernmentCloud, AZURESTACKCLOUD (in uppercase). This field is supported only if the fluentd plugin honors it, for example, https://github.com/elsesiy/fluent-plugin-azure-storage-append-blob-lts
Compat format type: out_file, json, ltsv (default: out_file)
Default: json
path (string, optional)
Path prefix of the files on Azure
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Buffer
chunk_full_threshold (string, optional)
The percentage of chunk size threshold for flushing. The output plugin flushes the chunk when the actual size reaches chunk_limit_size * chunk_full_threshold (== 8MB * 0.95 by default)
chunk_limit_records (int, optional)
The maximum number of events that each chunk can store
chunk_limit_size (string, optional)
The maximum size of each chunk: events are written into chunks until the size of the chunk becomes this size (default: 8MB)
Default: 8MB
compress (string, optional)
If you set this option to gzip, you can get Fluentd to compress data records before writing to buffer chunks.
delayed_commit_timeout (string, optional)
The timeout in seconds until the output plugin decides that the async write operation has failed
disable_chunk_backup (bool, optional)
Instead of storing unrecoverable chunks in the backup directory, just discard them. This option is new in Fluentd v1.2.6.
disabled (bool, optional)
Disable buffer section (default: false)
Default: false,hidden
flush_at_shutdown (bool, optional)
The value to specify to flush/write all buffer chunks at shutdown, or not
flush_interval (string, optional)
Default: 60s
flush_mode (string, optional)
Default: default (equals lazy if time is specified as a chunk key, interval otherwise). lazy: flush/write chunks once per timekey; interval: flush/write chunks at the specified time via flush_interval; immediate: flush/write chunks immediately after events are appended into chunks
flush_thread_burst_interval (string, optional)
The sleep interval (in seconds) between flushes when the output plugin flushes waiting chunks one after another
flush_thread_count (int, optional)
The number of threads of output plugins, which is used to write chunks in parallel
flush_thread_interval (string, optional)
The sleep interval seconds of threads to wait next flush trial (when no chunks are waiting)
overflow_action (string, optional)
How the output plugin behaves when its buffer queue is full. throw_exception: raise an exception to show this error in the log; block: block processing of the input plugin to emit events into that buffer; drop_oldest_chunk: drop/purge the oldest chunk to accept the newly incoming chunk
path (string, optional)
The path where buffer chunks are stored. The ‘*’ is replaced with random characters. It’s highly recommended to leave this default.
Default: operator generated
queue_limit_length (int, optional)
The queue length limitation of this buffer plugin instance
queued_chunks_limit_size (int, optional)
Limit the number of queued chunks. If you set smaller flush_interval, e.g. 1s, there are lots of small queued chunks in buffer. This is not good with file buffer because it consumes lots of fd resources when output destination has a problem. This parameter mitigates such situations.
retry_exponential_backoff_base (string, optional)
The base number of exponential backoff for retries
retry_forever (*bool, optional)
If true, plugin will ignore retry_timeout and retry_max_times options and retry flushing forever
Default: true
retry_max_interval (string, optional)
The maximum interval seconds for exponential backoff between retries while failing
retry_max_times (int, optional)
The maximum number of times to retry to flush while failing
retry_randomize (bool, optional)
If true, output plugin will retry after randomized interval not to do burst retries
retry_secondary_threshold (string, optional)
The ratio of retry_timeout to switch to use secondary while failing (Maximum valid value is 1.0)
retry_timeout (string, optional)
The maximum seconds to retry to flush while failing, until plugin discards buffer chunks
retry_type (string, optional)
exponential_backoff: wait seconds will become large exponentially per failure; periodic: the output plugin retries periodically with fixed intervals (configured via retry_wait)
retry_wait (string, optional)
Seconds to wait before next retry to flush, or constant factor of exponential backoff
tags (*string, optional)
When tag is specified as buffer chunk key, output plugin writes events into chunks separately per tags.
Default: tag,time
timekey (string, required)
Output plugin will flush chunks per specified time (enabled when time is specified in chunk keys)
Default: 10m
timekey_use_utc (bool, optional)
Output plugin decides to use UTC or not to format placeholders using timekey
timekey_wait (string, optional)
Output plugin writes chunks after timekey_wait seconds later after timekey expiration
Default: 1m
timekey_zone (string, optional)
The timezone (-0700 or Asia/Tokyo) string for formatting timekey placeholders
total_limit_size (string, optional)
The size limitation of this buffer plugin instance. Once the total size of the stored buffer reaches this threshold, all append operations fail with an error (and data is lost)
type (string, optional)
Fluentd core bundles memory and file plugins. 3rd party plugins are also available when installed.
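For illustration, the following buffer section uses only the keys documented above and would be nested under the plugin-specific section of any output; the values are placeholders for a sketch, not recommended defaults.

```yaml
buffer:
  type: file
  chunk_limit_size: 8MB
  flush_mode: interval
  flush_interval: 60s
  flush_thread_count: 4
  timekey: 10m
  timekey_wait: 1m
  timekey_use_utc: true
  retry_forever: true
  retry_max_interval: 30s
```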
If true, put_log_events_retry_limit will be ignored
put_log_events_retry_limit (int, optional)
Maximum count of retry (if exceeding this, the events will be discarded)
put_log_events_retry_wait (string, optional)
Time before retrying PutLogEvents (retry interval increases exponentially like put_log_events_retry_wait * (2 ^ retry_count))
region (string, required)
AWS Region
remove_log_group_aws_tags_key (string, optional)
Remove field specified by log_group_aws_tags_key
remove_log_group_name_key (string, optional)
Remove field specified by log_group_name_key
remove_log_stream_name_key (string, optional)
Remove field specified by log_stream_name_key
remove_retention_in_days (string, optional)
Remove field specified by retention_in_days
retention_in_days (string, optional)
Used to set the expiry time for the log group when created with auto_create_stream. (defaults to no expiry)
retention_in_days_key (string, optional)
Use specified field of records as retention period
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
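Since this excerpt starts mid-way through the CloudWatch parameter list, the sketch below assumes the cloudwatch output key and the log_group_name, log_stream_name, and auto_create_stream fields of the upstream fluent-plugin-cloudwatch-logs plugin; only region and retention_in_days appear in the list above, and all values are placeholders.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: cloudwatch-output            # placeholder name
spec:
  cloudwatch:                        # assumed output key
    region: us-east-1
    log_group_name: my-cluster-logs  # assumed field, not part of the excerpt above
    log_stream_name: ${tag}          # assumed field
    auto_create_stream: true         # assumed field, referenced by retention_in_days above
    retention_in_days: "30"
```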
Datadog
Datadog output plugin for Fluentd
Overview
It mainly contains a proper JSON formatter and a socket handler that streams logs directly to Datadog - so no need to use a log shipper if you don’t want to.
For details, see https://github.com/DataDog/fluent-plugin-datadog.
Example
spec:
  datadog:
    api_key:
      value: '<YOUR_API_KEY>' # For referencing a secret, see https://kube-logging.dev/docs/configuration/plugins/outputs/secret/
    dd_source: '<INTEGRATION_NAME>'
    dd_tags: '<KEY1:VALUE1>,<KEY2:VALUE2>'
    dd_sourcecategory: '<YOUR_SOURCE_CATEGORY>'
Configuration
Output Config
api_key (*secret.Secret, required)
This parameter is required in order to authenticate your fluent agent.
Set the log compression level for HTTP (1 to 9, 9 being the best ratio)
Default: “6”
dd_hostname (string, optional)
Used by Datadog to identify the host submitting the logs.
Default: “hostname -f”
dd_source (string, optional)
This tells Datadog what integration it is
Default: nil
dd_sourcecategory (string, optional)
Multiple value attribute. Can be used to refine the source attribute
Default: nil
dd_tags (string, optional)
Custom tags with the following format “key1:value1, key2:value2”
Default: nil
host (string, optional)
Proxy endpoint when logs are not directly forwarded to Datadog
Default: “http-intake.logs.datadoghq.com”
include_tag_key (bool, optional)
Automatically include the Fluentd tag in the record.
Default: false
max_backoff (string, optional)
The maximum time waited between each retry in seconds
Default: “30”
max_retries (string, optional)
The number of retries before the output plugin stops. Set to -1 for unlimited retries
Default: “-1”
no_ssl_validation (bool, optional)
Disable SSL validation (useful for proxy forwarding)
Default: false
port (string, optional)
Proxy port when logs are not directly forwarded to Datadog and ssl is not used
Default: “80”
service (string, optional)
Used by Datadog to correlate between logs, traces and metrics.
Default: nil
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
ssl_port (string, optional)
Port used to send logs over a SSL encrypted connection to Datadog. If use_http is disabled, use 10516 for the US region and 443 for the EU region.
Default: “443”
tag_key (string, optional)
Where to store the Fluentd tag.
Default: “tag”
timestamp_key (string, optional)
Name of the attribute which will contain timestamp of the log event. If nil, timestamp attribute is not added.
Default: “@timestamp”
use_compression (bool, optional)
Enable log compression for HTTP
Default: true
use_http (bool, optional)
Enable HTTP forwarding. If you disable it, make sure to change the port to 10514 or ssl_port to 10516
Default: true
use_json (bool, optional)
Event format: if true, the event is sent in JSON format. Otherwise, in plain text.
Default: true
use_ssl (bool, optional)
If true, the agent initializes a secure connection to Datadog. In clear TCP otherwise.
Configure bulk_message request splitting threshold size. Default value is 20MB. (20 * 1024 * 1024) If you specify this size as negative number, bulk_message request splitting feature will be disabled.
Default: 20MB
content_type (string, optional)
With content_type application/x-ndjson, elasticsearch plugin adds application/x-ndjson as Content-Profile in payload.
Default: application/json
custom_headers (string, optional)
This parameter adds additional headers to request. Example: {“token”:“secret”}
Default: {}
customize_template (string, optional)
Specify the string and its value to be replaced in form of hash. Can contain multiple key value pair that would be replaced in the specified template_file. This setting only creates template and to add rollover index please check the rollover_index configuration.
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property to avoid failing on Fluentd startup. (default: true)
Default: true
flatten_hashes (bool, optional)
Elasticsearch will complain if you send object and concrete values to the same field. For example, you might have logs that look like this, from different places: {“people” => 100} {“people” => {“some” => “thing”}} The second log line will be rejected by the Elasticsearch parser because objects and concrete values can’t live in the same field. To combat this, you can enable hash flattening.
flatten_hashes_separator (string, optional)
Flatten separator
host (string, optional)
You can specify the Elasticsearch host using this parameter.
Default: localhost
hosts (string, optional)
You can specify multiple Elasticsearch hosts with separator “,”. If you specify the hosts option, the host and port options are ignored.
http_backend (string, optional)
With http_backend typhoeus, elasticsearch plugin uses typhoeus faraday http backend. Typhoeus can handle HTTP keepalive.
A list of exceptions that will be ignored - when the exception occurs, the chunk is discarded and the buffer retry mechanism won’t be called. It is also possible to specify classes at a higher level in the hierarchy. For example, ignore_exceptions ["Elasticsearch::Transport::Transport::ServerError"] will match all subclasses of ServerError - Elasticsearch::Transport::Transport::Errors::BadRequest, Elasticsearch::Transport::Transport::Errors::ServiceUnavailable, and so on.
ilm_policy (string, optional)
Specify ILM policy contents as Hash.
ilm_policy_id (string, optional)
Specify ILM policy id.
ilm_policy_overwrite (bool, optional)
Specify whether to overwrite the ILM policy or not.
include_index_in_url (bool, optional)
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce URL-based access control.
include_tag_key (bool, optional)
This will add the Fluentd tag in the JSON record.
Default: false
include_timestamp (bool, optional)
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in Elasticsearch and utilize the rollover API.
Default: false
index_date_pattern (*string, optional)
Specify this to override the index date pattern for creating a rollover index.
Default: now/d
index_name (string, optional)
The index name to write events to
Default: fluentd
index_prefix (string, optional)
Specify the index prefix for the rollover index to be created.
Default: logstash
log_es_400_reason (bool, optional)
By default, the error logger won’t record the reason for a 400 error from the Elasticsearch API unless you set log_level to debug. However, this results in a lot of log spam, which isn’t desirable if all you want is the 400 error reasons. You can set this true to capture the 400 error reasons without all the other debug logs.
Default: false
logstash_dateformat (string, optional)
Set the Logstash date format.
Default: %Y.%m.%d
logstash_format (bool, optional)
Enable Logstash log format.
Default: false
logstash_prefix (string, optional)
Set the Logstash prefix.
Default: logstash
logstash_prefix_separator (string, optional)
Set the Logstash prefix separator.
Default: -
max_retry_get_es_version (string, optional)
You can specify the number of times to retry fetching the Elasticsearch version.
Use this parameter to set a pipeline ID of your Elasticsearch to be added to the request, so you can configure an ingest node.
port (int, optional)
You can specify the Elasticsearch port using this parameter.
Default: 9200
prefer_oj_serializer (bool, optional)
With the default behavior, the Elasticsearch client uses Yajl as the JSON encoder/decoder. Oj is an alternative high-performance JSON encoder/decoder. When this parameter is set to true, the Elasticsearch client uses Oj as the JSON encoder/decoder.
Default: false
reconnect_on_error (bool, optional)
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default it reconnects only on “host unreachable exceptions”. We recommend setting this to true in the presence of Elasticsearch shield.
Default: false
reload_after (string, optional)
When reload_connections is true, this is the integer number of operations after which the plugin will reload the connections. The default value is 10000.
reload_connections (*bool, optional)
You can tune how the elasticsearch-transport host reloading feature works.(default: true)
Default: true
reload_on_failure (bool, optional)
Indicates that the elasticsearch-transport will try to reload the nodes addresses if there is a failure while making the request. This can be useful to quickly remove a dead node from the list of addresses.
Remove keys on update will not update the configured keys in elasticsearch when a record is being updated. This setting only has any effect if the write operation is update or upsert.
remove_keys_on_update_key (string, optional)
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
request_timeout (string, optional)
You can specify HTTP request timeout.
Default: 5s
resurrect_after (string, optional)
You can set in the elasticsearch-transport how often dead connections from the elasticsearch-transport’s pool will be resurrected.
Default: 60s
retry_tag (string, optional)
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
Similar to parent_key config, will add _routing into elasticsearch command if routing_key is set and the field does exist in input event.
ca_file (*secret.Secret, optional)
CA certificate
client_cert (*secret.Secret, optional)
Client certificate
client_key (*secret.Secret, optional)
Client certificate key
client_key_pass (*secret.Secret, optional)
Client key password
scheme (string, optional)
Connection scheme
Default: http
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
sniffer_class_name (string, optional)
The default Sniffer used by the Elasticsearch::Transport class works well when Fluentd has a direct connection to all of the Elasticsearch servers and can make effective use of the _nodes API. This doesn’t work well when Fluentd must connect through a load balancer or proxy. The parameter sniffer_class_name gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::ElasticsearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. https://github.com/uken/fluent-plugin-elasticsearch#sniffer-class-name
ssl_max_version (string, optional)
Specify min/max SSL/TLS version
ssl_min_version (string, optional)
ssl_verify (*bool, optional)
Skip ssl verification (default: true)
Default: true
ssl_version (string, optional)
If you want to configure SSL/TLS version, you can specify ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
suppress_doc_wrap (bool, optional)
By default, record body is wrapped by ‘doc’. This behavior can not handle update script requests. You can set this to suppress doc wrapping and allow record body to be untouched.
Default: false
suppress_type_name (*bool, optional)
Suppress type name to avoid warnings in Elasticsearch 7.x
Similar to target_index_key config, find the type name to write to in the record under this key (or nested record). If key not found in record - fallback to type_name.
Default: fluentd
template_file (*secret.Secret, optional)
The path to the file containing the template to install. Secret
template_name (string, optional)
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
template_overwrite (bool, optional)
Always update the template, even if it already exists.
Default: false
templates (string, optional)
Specify index templates in form of hash. Can contain multiple templates.
time_key (string, optional)
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
time_key_format (string, optional)
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
time_parse_error_tag (string, optional)
With logstash_format true, elasticsearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to @ERROR label with time_parse_error_tag configured tag.
time_precision (string, optional)
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
type_name (string, optional)
Set the index type for elasticsearch. This is the fallback if target_type_key is missing.
Default: fluentd
unrecoverable_error_types (string, optional)
The default unrecoverable_error_types parameter is set up strictly, because es_rejected_execution_exception is caused by exceeding Elasticsearch’s thread pool capacity. Advanced users can increase its capacity, but normal users should follow the default behavior. If you want to increase it and forcibly retry bulk requests, consider changing the unrecoverable_error_types parameter from its default value (and change the default value of thread_pool.bulk.queue_size in elasticsearch.yml).
User for HTTP Basic authentication. This plugin will escape required URL encoded characters within %{} placeholders, for example, %{demo+}
utc_index (*bool, optional)
By default, records are inserted into the index logstash-YYMMDD with UTC (Coordinated Universal Time). This option allows using local time if you set utc_index to false. (default: true)
Default: true
validate_client_version (bool, optional)
When you use mismatched Elasticsearch server and client libraries, fluent-plugin-elasticsearch cannot send data into Elasticsearch.
Default: false
verify_es_version_at_startup (*bool, optional)
The Elasticsearch plugin changes its behavior depending on the Elasticsearch major version. For example, Elasticsearch 6 starts to prohibit multiple type_names in one index, and Elasticsearch 7 handles only the _doc type_name in an index. If you want to disable verifying the Elasticsearch version at startup, set this to false. (default: true)
Default: true
with_transporter_log (bool, optional)
This is a debugging option that enables obtaining the transporter-layer log.
Default: false
write_operation (string, optional)
The write_operation can be any of: (index,create,update,upsert)
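To tie the Elasticsearch parameters above together, a minimal sketch of an Elasticsearch Output custom resource; the resource name, host, and prefix are placeholders, and the buffer section is the standard output buffer rather than something specific to this plugin.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: es-output                                    # placeholder name
spec:
  elasticsearch:
    host: elasticsearch.logging.svc.cluster.local    # placeholder host
    port: 9200
    scheme: https
    ssl_verify: false
    logstash_format: true
    logstash_prefix: my-cluster                      # placeholder prefix
    suppress_type_name: true
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true
```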
The path of the file. The actual path is path + time + “.log” by default.
path_suffix (string, optional)
The suffix of output result.
Default: “.log”
recompress (bool, optional)
Performs compression again even if the buffer chunk is already compressed.
Default: false
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
symlink_path (bool, optional)
Create symlink to temporary buffered file when buffer_type is file. This is useful for tailing file content to check logs.
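A minimal sketch of a file output using the path-related parameters above, assuming the file output key; the path itself and the buffer values are placeholders.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: file-output                  # placeholder name
spec:
  file:
    path: /tmp/logs/${tag}/%Y/%m/%d  # placeholder path; the time placeholders assume a matching buffer timekey
    path_suffix: .log
    buffer:
      timekey: 1h
      timekey_wait: 10m
      timekey_use_utc: true
```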
The timeout time for socket connect. When the connection timed out during establishment, Errno::ETIMEDOUT is raised.
dns_round_robin (bool, optional)
Enable client-side DNS round robin. Uniform randomly pick an IP address to send data when a hostname has several IP addresses. heartbeat_type udp is not available with dns_round_robin true. Use heartbeat_type tcp or heartbeat_type none.
expire_dns_cache (int, optional)
Set TTL to expire DNS cache in seconds. Set 0 not to use DNS Cache.
Default: 0
hard_timeout (int, optional)
The hard timeout used to detect server failure. The default value is equal to the send_timeout parameter.
Default: 60
heartbeat_interval (int, optional)
The interval of the heartbeat packets.
Default: 1
heartbeat_type (string, optional)
The transport protocol to use for heartbeats. Set “none” to disable heartbeat. [transport, tcp, udp, none]
ignore_network_errors_at_startup (bool, optional)
Ignore DNS resolution and errors at startup time.
keepalive (bool, optional)
Enable keepalive connection.
Default: false
keepalive_timeout (int, optional)
Expired time of keepalive. Default value is nil, which means to keep connection as long as possible.
Default: 0
phi_failure_detector (bool, optional)
Use the “Phi accrual failure detector” to detect server failure.
Default: true
phi_threshold (int, optional)
The threshold parameter used to detect server faults. phi_threshold is deeply related to heartbeat_interval. If you are using longer heartbeat_interval, please use the larger phi_threshold. Otherwise you will see frequent detachments of destination servers. The default value 16 is tuned for heartbeat_interval 1s.
Default: 16
recover_wait (int, optional)
The wait time before accepting a server fault recovery.
Default: 10
require_ack_response (bool, optional)
Change the protocol to at-least-once. The plugin waits for the ack from the destination’s in_forward plugin.
Server definitions. At least one is required. Server
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
tls_allow_self_signed_cert (bool, optional)
Allow self signed certificates or not.
Default: false
tls_cert_logical_store_name (string, optional)
The certificate logical store name on Windows system certstore. This parameter is for Windows only.
tls_cert_path (*secret.Secret, optional)
The additional CA certificate path for TLS.
tls_cert_thumbprint (string, optional)
The certificate thumbprint for searching from Windows system certstore This parameter is for Windows only.
tls_cert_use_enterprise_store (bool, optional)
Enable to use certificate enterprise store on Windows system certstore. This parameter is for Windows only.
Verify hostname of servers and certificates or not in TLS transport.
Default: true
tls_version (string, optional)
The default version of TLS transport. [TLSv1_1, TLSv1_2]
Default: TLSv1_2
transport (string, optional)
The transport protocol to use [ tcp, tls ]
verify_connection_at_startup (bool, optional)
Verify that a connection can be made with one of out_forward nodes at the time of startup.
Default: false
Fluentd Server
server
host (string, required)
The IP address or host name of the server.
name (string, optional)
The name of the server. Used for logging and certificate verification in TLS transport (when host is address).
password (*secret.Secret, optional)
The password for authentication.
port (int, optional)
The port number of the host. Note that both TCP packets (event stream) and UDP packets (heartbeat message) are sent to this port.
Default: 24224
shared_key (*secret.Secret, optional)
The shared key per server.
standby (bool, optional)
Marks a node as the standby node for an Active-Standby model between Fluentd nodes. When an active node goes down, the standby node is promoted to an active node. The standby node is not used by the out_forward plugin until then.
username (*secret.Secret, optional)
The username for authentication.
weight (int, optional)
The load balancing weight. If the weight of one server is 20 and the weight of the other server is 30, events are sent in a 2:3 ratio. .
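A minimal sketch of a forward output; the servers key is assumed as the name of the server definition list mentioned above, and the aggregator hostnames are placeholders.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: forward-output                  # placeholder name
spec:
  forward:
    servers:                            # assumed name of the server definition list
      - host: fluentd-aggregator.logging.svc.cluster.local         # placeholder host
        port: 24224
      - host: fluentd-aggregator-backup.logging.svc.cluster.local  # placeholder host
        port: 24224
        standby: true
        weight: 30
    require_ack_response: true
    keepalive: true
```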
User-provided web-safe keys and arbitrary string values that will be returned with requests for the file as “x-goog-meta-” response headers. Object Metadata
overwrite (bool, optional)
Overwrite already existing path
Default: false
path (string, optional)
Path prefix of the files on GCS
project (string, required)
Project identifier for GCS
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
storage_class (string, optional)
Storage class of the file: dra, nearline, coldline, multi_regional, regional, standard
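A minimal sketch of a GCS output; the gcs output key, the bucket field, and the credentials_json field are assumptions based on the upstream plugin and are not part of the excerpt above, while project, path, and storage_class are. All values are placeholders.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: gcs-output                 # placeholder name
spec:
  gcs:                             # assumed output key
    project: my-gcp-project        # placeholder project
    bucket: my-log-bucket          # assumed field, not part of the excerpt above
    credentials_json:              # assumed field holding the service account key
      valueFrom:
        secretKeyRef:
          name: gcs-credentials
          key: credentials.json
    path: logs/${tag}/%Y/%m/%d/    # placeholder path prefix
    storage_class: nearline
```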
Raise UnrecoverableError when the response code is not a success code (1xx/3xx/4xx/5xx). If false, the plugin logs an error message instead of raising UnrecoverableError.
Using the array format of JSON. This parameter is used and valid only for the json format. When json_array is true, the Content-Profile should be application/json and JSON data can be used for the HTTP request body.
Default: false
open_timeout (int, optional)
Connection open timeout in seconds.
proxy (string, optional)
Proxy for HTTP request.
read_timeout (int, optional)
Read timeout in seconds.
retryable_response_codes ([]int, optional)
List of retryable response codes. If the response code is included in this list, the plugin retries the buffer flush. Since Fluentd v2, the status code 503 is going to be removed from the default.
Default: [503]
ssl_timeout (int, optional)
TLS timeout in seconds.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
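A minimal sketch of an HTTP output; the endpoint field is an assumption (the target URL is not part of this excerpt), the remaining keys come from the list above, and all values are placeholders.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: http-output                             # placeholder name
spec:
  http:
    endpoint: https://logs.example.com/ingest   # assumed field, not part of the excerpt above
    open_timeout: 2
    read_timeout: 10
    retryable_response_codes:
      - 503
      - 429
    json_array: true
```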
Maximum value of total message size to be included in one batch transmission. .
Default: 4096
kafka_agg_max_messages (int, optional)
Maximum number of messages to include in one batch transmission. .
Default: nil
keytab (*secret.Secret, optional)
max_send_retries (int, optional)
Number of times to retry sending of messages to a leader
Default: 1
message_key_key (string, optional)
Message Key
Default: “message_key”
partition_key (string, optional)
Partition
Default: “partition”
partition_key_key (string, optional)
Partition Key
Default: “partition_key”
password (*secret.Secret, optional)
Password when using PLAIN/SCRAM SASL authentication
principal (string, optional)
required_acks (int, optional)
The number of acks required per request .
Default: -1
ssl_ca_cert (*secret.Secret, optional)
CA certificate
ssl_ca_certs_from_system (*bool, optional)
System’s CA cert store
Default: false
ssl_client_cert (*secret.Secret, optional)
Client certificate
ssl_client_cert_chain (*secret.Secret, optional)
Client certificate chain
ssl_client_cert_key (*secret.Secret, optional)
Client certificate key
ssl_verify_hostname (*bool, optional)
Verify certificate hostname
sasl_over_ssl (bool, required)
SASL over SSL
Default: true
scram_mechanism (string, optional)
If set, use SCRAM authentication with specified mechanism. When unset, default to PLAIN authentication
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
topic_key (string, optional)
Topic Key
Default: “topic”
use_default_for_unknown_topic (bool, optional)
Use default for unknown topics
Default: false
username (*secret.Secret, optional)
Username when using PLAIN/SCRAM SASL authentication
The number of attempts to make (with exponential backoff) when loading instance profile credentials from the EC2 metadata service using an IAM role. Defaults to 5 retries.
aws_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_sec_key (*secret.Secret, optional)
AWS secret key. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_ses_token (*secret.Secret, optional)
AWS session token. This parameter is optional, but can be provided if using MFA or temporary credentials when your agent is not running on EC2 instance with an IAM Role.
append_new_line (*bool, optional)
If it is enabled, the plugin adds new line character (\n) to each serialized record. Before appending \n, plugin calls chomp and removes separator from the end of each record as chomp_record is true. Therefore, you don’t need to enable chomp_record option when you use kinesis_firehose output with default configuration (append_new_line is true). If you want to set append_new_line false, you can choose chomp_record false (default) or true (compatible format with plugin v2). (Default:true)
This loads AWS access credentials from an external process.
region (string, optional)
AWS region of your stream. It should be in form like us-east-1, us-west-2. Default nil, which means try to find from environment variable AWS_REGION.
reset_backoff_if_success (bool, optional)
Boolean, default true. If enabled, the next retry after a failed batch request checks the number of succeeded records of the former request and resets the exponential backoff if there was any success. Because a batch request could be composed of requests across shards, simple exponential backoff for the batch request wouldn’t work in some cases.
retries_on_batch_request (int, optional)
The plugin will put multiple records to Amazon Kinesis Data Streams in batches using PutRecords. A set of records in a batch may fail for reasons documented in the Kinesis Service API Reference for PutRecords. Failed records will be retried retries_on_batch_request times
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Assume Role Credentials
assume_role_credentials
duration_seconds (string, optional)
The duration, in seconds, of the role session (900-3600)
external_id (string, optional)
A unique identifier that is used by third parties when assuming roles in their customers’ accounts.
policy (string, optional)
An IAM policy in JSON format
role_arn (string, required) {#assume role credentials-role_arn}
The Amazon Resource Name (ARN) of the role to assume
The number of attempts to make (with exponential backoff) when loading instance profile credentials from the EC2 metadata service using an IAM role. Defaults to 5 retries.
aws_key_id (*secret.Secret, optional)
AWS access key id. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_sec_key (*secret.Secret, optional)
AWS secret key. This parameter is required when your agent is not running on EC2 instance with an IAM Role.
aws_ses_token (*secret.Secret, optional)
AWS session token. This parameter is optional, but can be provided if using MFA or temporary credentials when your agent is not running on EC2 instance with an IAM Role.
This loads AWS access credentials from an external process.
region (string, optional)
AWS region of your stream. It should be in form like us-east-1, us-west-2. Default nil, which means try to find from environment variable AWS_REGION.
reset_backoff_if_success (bool, optional)
Boolean, default true. If enabled, the next retry after a failed batch request checks the number of succeeded records of the former request and resets the exponential backoff if there was any success. Because a batch request could be composed of requests across shards, simple exponential backoff for the batch request wouldn’t work in some cases.
retries_on_batch_request (int, optional)
The plugin will put multiple records to Amazon Kinesis Data Streams in batches using PutRecords. A set of records in a batch may fail for reasons documented in the Kinesis Service API Reference for PutRecords. Failed records will be retried retries_on_batch_request times
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds). If chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
stream_name (string, required)
Name of the stream to put data.
Assume Role Credentials
assume_role_credentials
duration_seconds (string, optional)
The duration, in seconds, of the role session (900-3600)
external_id (string, optional)
A unique identifier that is used by third parties when assuming roles in their customers’ accounts.
policy (string, optional)
An IAM policy in JSON format
role_arn (string, required)
The Amazon Resource Name (ARN) of the role to assume
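A minimal sketch of a Kinesis Data Streams output using the parameters above; the kinesisStream output key is an assumption, and the stream name, region, and secret names are placeholders.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: kinesis-output            # placeholder name
spec:
  kinesisStream:                  # assumed output key for the Kinesis Data Streams plugin
    stream_name: my-log-stream    # placeholder stream
    region: us-west-2
    aws_key_id:                   # omit the static keys when running on EC2 with an IAM role
      valueFrom:
        secretKeyRef:
          name: kinesis-credentials
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: kinesis-credentials
          key: awsSecretAccessKey
```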
HTTPS POST request timeout. Optional. Supports s and ms suffixes.
Default: 30 s
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Limit to the size of the Logz.io upload bulk. Defaults to 1000000 bytes leaving about 24kB for overhead.
bulk_limit_warning_limit (int, optional)
Limit to the size of the Logz.io warning message when a record exceeds bulk_limit to prevent a recursion when Fluent warnings are sent to the Logz.io output.
endpoint (*Endpoint, required)
Define LogZ endpoint URL
gzip (bool, optional)
Should the plugin ship the logs in gzip compression. Default is false.
http_idle_timeout (int, optional)
Timeout in seconds that the http persistent connection will stay open without traffic.
output_include_tags (bool, optional)
Should the appender add the fluentd tag to the document, called “fluentd_tag”
output_include_time (bool, optional)
Should the appender add a timestamp to your logs on their process time (recommended).
retry_count (int, optional)
How many times to resend failed bulks.
retry_sleep (int, optional)
How long to sleep initially between retries, exponential step-off.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
TLS: CA certificate file for server certificate verification Secret
cert (*secret.Secret, optional)
TLS: parameters for presenting a client certificate Secret
configure_kubernetes_labels (*bool, optional)
Configure Kubernetes metadata in a Prometheus like format
Default: false
drop_single_key (*bool, optional)
If a record only has 1 key, then just set the log line to the value and discard the key.
Default: false
extra_labels (map[string]string, optional)
Set of extra labels to include with every Loki stream.
extract_kubernetes_labels (*bool, optional)
Extract Kubernetes labels as Loki labels
Default: false
include_thread_label (*bool, optional)
Whether to include the fluentd_thread label when multiple threads are used for flushing.
Default: true
insecure_tls (*bool, optional)
TLS: disable server certificate verification
Default: false
key (*secret.Secret, optional)
TLS: parameters for presenting a client certificate Secret
labels (Label, optional)
Set of labels to include with every Loki stream.
line_format (string, optional)
Format to use when flattening the record to a log line: json, key_value (default: key_value)
Default: json
password (*secret.Secret, optional)
Specify password if the Loki server requires authentication. Secret
remove_keys ([]string, optional)
Comma separated list of needless record keys to remove
Default: []
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
tenant (string, optional)
Loki is a multi-tenant log storage platform and all requests sent must include a tenant.
url (string, optional)
The URL of the Loki server to send logs to.
Default: https://logs-us-west1.grafana.net
username (*secret.Secret, optional)
Specify a username if the Loki server requires authentication. Secret
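For reference, a minimal sketch of an Output that sends logs to Grafana Loki with Kubernetes labels extracted (the URL, extra label, and buffer settings are examples; adjust them to your Loki deployment):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Output
  metadata:
    name: loki-output
  spec:
    loki:
      url: http://loki.monitoring:3100    # placeholder Loki push endpoint
      configure_kubernetes_labels: true   # add Kubernetes metadata in a Prometheus-like format
      extract_kubernetes_labels: true     # turn pod labels into Loki labels
      extra_labels:
        cluster: my-cluster               # static label added to every stream
      buffer:
        timekey: 1m
        timekey_wait: 30s
        timekey_use_utc: true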
application_name (*string, optional)
Specify the application name for the rollover index to be created.
Default: default
buffer (*Buffer, optional)
bulk_message_request_threshold (string, optional)
Configure the threshold size for splitting bulk_message requests. The default value is 20MB (20 * 1024 * 1024). If you specify this size as a negative number, the bulk_message request splitting feature is disabled.
custom_headers (string, optional)
This parameter adds additional headers to the request. Example: {"token":"secret"}
Default: {}
customize_template (string, optional)
Specify the string and its replacement value in the form of a hash. Can contain multiple key-value pairs that are replaced in the specified template_file. This setting only creates the template; to add a rollover index, see the rollover_index configuration.
data_stream_enable (*bool, optional)
Use @type opensearch_data_stream
data_stream_name (string, optional)
You can specify Opensearch data stream name by this parameter. This parameter is mandatory for opensearch_data_stream.
data_stream_template_name (string, optional)
Specify an existing index template for the data stream. If not present, a new template is created and named after the data stream.
fail_on_putting_template_retry_exceed (*bool, optional)
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property to avoid failing on Fluentd startup. (default: true)
host (string, optional)
You can specify the OpenSearch host with this parameter.
Default: localhost
hosts (string, optional)
You can specify multiple OpenSearch hosts with the separator “,”. If you specify the hosts option, the host and port options are ignored.
http_backend (string, optional)
With http_backend typhoeus, the opensearch plugin uses typhoeus faraday http backend. Typhoeus can handle HTTP keepalive.
Default: excon
http_backend_excon_nonblock (*bool, optional)
http_backend_excon_nonblock
Default: true
id_key (string, optional)
Field on your data to identify the data uniquely
ignore_exceptions (string, optional)
A list of exceptions that will be ignored - when such an exception occurs, the chunk is discarded and the buffer retry mechanism is not called. It is also possible to specify classes at a higher level in the hierarchy.
include_index_in_url (bool, optional)
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce URL-based access control.
include_tag_key (bool, optional)
This will add the Fluentd tag in the JSON record.
Default: false
include_timestamp (bool, optional)
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in OpenSearch and utilize the rollover API.
Default: false
index_date_pattern (*string, optional)
Specify this to override the index date pattern for creating a rollover index.
pipeline (string, optional)
This parameter sets the pipeline ID of your OpenSearch to be added to the request, so you can configure an ingest node.
port (int, optional)
You can specify OpenSearch port by this parameter.
Default: 9200
prefer_oj_serializer (bool, optional)
With the default behavior, the OpenSearch client uses Yajl as its JSON encoder/decoder. Oj is an alternative high-performance JSON encoder/decoder. When this parameter is set to true, the OpenSearch client uses Oj as its JSON encoder/decoder.
Default: false
reconnect_on_error (bool, optional)
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default, it reconnects only on “host unreachable exceptions”. We recommend setting this to true in the presence of OpenSearch shield.
Default: false
reload_after (string, optional)
When reload_connections is true, this is the integer number of operations after which the plugin reloads the connections. The default value is 10000.
reload_connections (*bool, optional)
You can tune how the OpenSearch-transport host reloading feature works.(default: true)
Default: true
reload_on_failure (bool, optional)
Indicates that the OpenSearch-transport will try to reload the node addresses if there is a failure while making the request. This can be useful to quickly remove a dead node from the list of addresses.
Default: false
remove_keys_on_update (string, optional)
Remove keys on update will not update the configured keys in OpenSearch when a record is being updated. This setting only has any effect if the write operation is update or upsert.
remove_keys_on_update_key (string, optional)
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
request_timeout (string, optional)
You can specify HTTP request timeout.
Default: 5s
resurrect_after (string, optional)
You can set in the OpenSearch-transport how often dead connections from the OpenSearch-transport’s pool will be resurrected.
Default: 60s
retry_tag (string, optional)
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
routing_key (string, optional)
routing_key
ca_file (*secret.Secret, optional)
CA certificate
client_cert (*secret.Secret, optional)
Client certificate
client_key (*secret.Secret, optional)
Client certificate key
client_key_pass (*secret.Secret, optional)
Client key password
scheme (string, optional)
Connection scheme
Default: http
selector_class_name (string, optional)
selector_class_name
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
sniffer_class_name (string, optional)
The default Sniffer used by the OpenSearch::Transport class works well when Fluentd has a direct connection to all of the OpenSearch servers and can make effective use of the _nodes API. This doesn’t work well when Fluentd must connect through a load balancer or proxy. The sniffer_class_name parameter gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::OpenSearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. For example, a configuration like this would cause connections to logging-os to reload every 100 operations: https://github.com/fluent/fluent-plugin-opensearch#sniffer-class-name.
ssl_verify (*bool, optional)
Skip ssl verification (default: true)
Default: true
ssl_version (string, optional)
If you want to configure SSL/TLS version, you can specify ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
suppress_doc_wrap (bool, optional)
By default, record body is wrapped by ‘doc’. This behavior can not handle update script requests. You can set this to suppress doc wrapping and allow record body to be untouched.
Default: false
suppress_type_name (*bool, optional)
Suppress type name to avoid warnings in OpenSearch
tag_key (string, optional)
This will add the Fluentd tag in the JSON record.
Default: tag
target_index_affinity (bool, optional)
target_index_affinity
Default: false
target_index_key (string, optional)
Tell this plugin to find the index name to write to in the record under this key in preference to other mechanisms. Key can be specified as path to nested record using dot (’.’) as a separator.
template_file (*secret.Secret, optional)
The path to the file containing the template to install. Secret
template_name (string, optional)
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
template_overwrite (bool, optional)
Always update the template, even if it already exists.
Default: false
templates (string, optional)
Specify index templates in form of hash. Can contain multiple templates.
time_key (string, optional)
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
time_key_exclude_timestamp (bool, optional)
time_key_exclude_timestamp
Default: false
time_key_format (string, optional)
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
time_parse_error_tag (string, optional)
With logstash_format true, OpenSearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to @ERROR label with time_parse_error_tag configured tag.
time_precision (string, optional)
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
truncate_caches_interval (string, optional)
truncate_caches_interval
unrecoverable_error_types (string, optional)
The default unrecoverable_error_types parameter is set up strictly, because rejected_execution_exception is caused by exceeding OpenSearch’s thread pool capacity. Advanced users can increase the capacity, but normal users should follow the default behavior.
unrecoverable_record_types (string, optional)
unrecoverable_record_types
use_legacy_template (*bool, optional)
Specify whether to use the legacy template or not.
Default: true
user (string, optional)
User for HTTP Basic authentication. This plugin will escape required URL encoded characters within %{} placeholders. e.g. %{demo+}
utc_index (*bool, optional)
By default, records are inserted into the index logstash-YYMMDD with UTC (Coordinated Universal Time). This option allows you to use local time if you set utc_index to false.
Default: true
validate_client_version (bool, optional)
When you use mismatched OpenSearch server and client libraries, fluent-plugin-opensearch cannot send data into OpenSearch.
Default: false
verify_os_version_at_startup (*bool, optional)
verify_os_version_at_startup (default: true)
Default: true
with_transporter_log (bool, optional)
This is a debugging-purpose option for obtaining transporter-layer logs.
Default: false
write_operation (string, optional)
The write_operation can be any of: (index,create,update,upsert)
Default: index
OpenSearchEndpointCredentials
access_key_id (*secret.Secret, optional)
AWS access key ID. This parameter is required when your agent is not running on an EC2 instance with an IAM Role.
assume_role_arn (*secret.Secret, optional)
Typically, you can use AssumeRole for cross-account access or federation.
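To tie the OpenSearch parameters together, a hedged sketch of an Output using the OpenSearch plugin over HTTPS with basic authentication (the host, credentials, and Secret are placeholders; the password field is taken from the full plugin reference and should be verified against your operator version):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Output
  metadata:
    name: opensearch-output
  spec:
    opensearch:
      host: opensearch-cluster.logging.svc    # placeholder service name
      port: 9200
      scheme: https
      ssl_verify: false                       # only for self-signed certificates; not recommended in production
      user: fluentd
      password:
        valueFrom:
          secretKeyRef:
            name: opensearch-credentials      # placeholder Secret
            key: password
      buffer:
        timekey: 1m
        timekey_wait: 30s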
Alibaba Cloud
Aliyun OSS plugin for Fluentd
Overview
Fluent OSS output plugin buffers event logs in local files and uploads them to OSS periodically in background threads.
This plugin splits events by using the timestamp of event logs. For example, if a log ‘2019-04-09 message Hello’ arrives, and then another log ‘2019-04-10 message World’ arrives in this order, the former is stored in the “20190409.gz” file, and the latter in the “20190410.gz” file.
Fluent OSS input plugin reads data from OSS periodically.
This plugin uses MNS in the same region as the OSS bucket. You must set up MNS and OSS event notification before using this plugin.
This document shows how to set up MNS and OSS event notification.
This plugin will poll events from MNS queue and extract object keys from these events, and then will read those objects from OSS. For details, see https://github.com/aliyun/fluent-plugin-oss.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
store_as (string, optional)
Archive format on OSS: gzip, json, text, lzo, lzma2
Default: gzip
upload_crc_enable (bool, optional)
Upload crc enabled
Default: true
warn_for_delay (string, optional)
Given a threshold to treat events as delay, output warning logs if delayed events were put into OSS
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
strftime_format (string, optional)
Users can set strftime format.
Default: “%s”
ttl (int, optional)
If 0 or negative value is set, ttl is not set in each key.
Relabel
Available in Logging Operator version 4.2 and later.
The relabel output uses the relabel output plugin of Fluentd to route events back to a specific Flow, where they can be processed again.
This is useful, for example, if you need to preprocess a subset of logs differently, but then do the same processing on all messages at the end. In this case, you can create multiple flows for preprocessing based on specific log matchers and then aggregate everything into a single final flow for postprocessing.
The value of the label parameter of the relabel output must be the same as the value of the flowLabel parameter of the Flow (or ClusterFlow) where you want to send the messages.
Using the relabel output also makes it possible to pass the messages emitted by the Concat plugin in case of a timeout. Set the timeout_label of the concat plugin to the flowLabel of the flow where you want to send the timeout messages.
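A minimal sketch of how the relabel output and the flowLabel parameter fit together (resource names and the final output reference are placeholders):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Output
  metadata:
    name: final-relabel
  spec:
    relabel:
      label: '@final-flow'          # must match the flowLabel of the target flow
  ---
  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Flow
  metadata:
    name: final-flow
  spec:
    flowLabel: '@final-flow'        # receives everything routed through the relabel output
    includeLabelInRouter: false
    localOutputRefs:
      - final-destination           # placeholder output where the postprocessed logs end up

Any flow that references the final-relabel output in its localOutputRefs sends its messages back into final-flow for the shared postprocessing step.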
Amazon S3
Amazon S3 plugin for Fluentd
Overview
The s3 output plugin buffers event logs in local files and uploads them to S3 periodically. This plugin splits files exactly by using the time of the event logs (not the time when the logs are received). For example, if a log ‘2011-01-02 message B’ arrives, and then another log ‘2011-01-03 message B’ arrives in this order, the former is stored in the “20110102.gz” file, and the latter in the “20110103.gz” file.
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
sse_customer_algorithm (string, optional)
Specifies the algorithm to use when encrypting the object
sse_customer_key (string, optional)
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data
sse_customer_key_md5 (string, optional)
Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321
ssl_verify_peer (string, optional)
If false, the certificate of the endpoint will not be verified
storage_class (string, optional)
The type of storage to use for the object, for example STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR For a complete list of possible values, see the Amazon S3 API reference.
store_as (string, optional)
Archive format on S3
use_bundled_cert (string, optional)
Use aws-sdk-ruby bundled cert
use_server_side_encryption (string, optional)
The Server-side encryption algorithm used when storing this object in S3 (AES256, aws:kms)
warn_for_delay (string, optional)
Given a threshold to treat events as delay, output warning logs if delayed events were put into s3
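Putting the S3 parameters together, a sketch of an Output with static credentials stored in a Secret (the bucket, region, path, and Secret names are placeholders; the connection fields aws_key_id, aws_sec_key, s3_bucket, s3_region, and path come from the full plugin reference):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Output
  metadata:
    name: s3-output
  spec:
    s3:
      aws_key_id:
        valueFrom:
          secretKeyRef:
            name: s3-secret               # placeholder Secret with AWS credentials
            key: awsAccessKeyId
      aws_sec_key:
        valueFrom:
          secretKeyRef:
            name: s3-secret
            key: awsSecretAccessKey
      s3_bucket: my-log-bucket
      s3_region: eu-central-1
      path: logs/${tag}/%Y/%m/%d/          # time- and tag-based object prefix
      store_as: gzip                       # archive format on S3
      buffer:
        timekey: 10m
        timekey_wait: 1m
        timekey_use_utc: true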
The annotation format is logging.banzaicloud.io/<loggingRef>: watched. Since the name part of an annotation can’t be empty, the default applies to an empty loggingRef value as well.
The mount path is generated from the secret information
coerce_to_utf8 (bool, optional)
Indicates whether to allow non-UTF-8 characters in user logs. If set to true, any non-UTF-8 character is replaced by the string specified in non_utf8_replacement_string. If set to false, the Ingest API errors out any non-UTF-8 characters.
Default: true
data_type (string, optional)
The type of data that will be sent to Sumo Logic, either event or metric
Default: event
fields (Fields, optional)
In this case, parameters inside <fields> are used as indexed fields and removed from the original input events
host (string, optional)
The host location for events. Cannot set both host and host_key parameters at the same time. (Default: hostname)
host_key (string, optional)
Key for the host location. Cannot set both host and host_key parameters at the same time.
idle_timeout (int, optional)
If a connection has not been used for this number of seconds it will automatically be reset upon the next use to avoid attempting to send to a closed connection. nil means no timeout.
index (string, optional)
Identifier for the Splunk index to be used for indexing events. If this parameter is not set, the indexer is chosen by HEC. Cannot set both index and index_key parameters at the same time.
index_key (string, optional)
The field name that contains the Splunk index name. Cannot set both index and index_key parameters at the same time.
insecure_ssl (*bool, optional)
Indicates if insecure SSL connection is allowed
Default: false
keep_keys (bool, optional)
By default, all the fields used by the *_key parameters are removed from the original input events. To change this behavior, set this parameter to true. This parameter is set to false by default. When set to true, all fields defined in index_key, host_key, source_key, sourcetype_key, metric_name_key, and metric_value_key are saved in the original event.
metric_name_key (string, optional)
Field name that contains the metric name. This parameter only works in conjunction with the metrics_from_event parameter. When this parameter is set, the metrics_from_event parameter is automatically set to false.
Default: true
metric_value_key (string, optional)
Field name that contains the metric value, this parameter is required when metric_name_key is configured.
metrics_from_event (*bool, optional)
When data_type is set to “metric”, the ingest API will treat every key-value pair in the input event as a metric name-value pair. Set metrics_from_event to false to disable this behavior and use metric_name_key and metric_value_key to define metrics. (Default:true)
non_utf8_replacement_string (string, optional)
If coerce_to_utf8 is set to true, any non-UTF-8 character is replaced by the string you specify in this parameter.
Default: ’ '
open_timeout (int, optional)
The amount of time to wait for a connection to be opened.
protocol (string, optional)
This is the protocol to use for calling the HEC API. Available values are: http, https.
Default: https
read_timeout (int, optional)
The amount of time allowed between reading two chunks from the socket.
ssl_ciphers (string, optional)
List of SSL ciphers allowed.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
source (string, optional)
The source field for events. If this parameter is not set, the source will be decided by HEC. Cannot set both source and source_key parameters at the same time.
source_key (string, optional)
Field name to contain source. Cannot set both source and source_key parameters at the same time.
sourcetype (string, optional)
The sourcetype field for events. When not set, the sourcetype is decided by HEC. Cannot set both sourcetype and sourcetype_key parameters at the same time.
sourcetype_key (string, optional)
Field name that contains the sourcetype. Cannot set both sourcetype and sourcetype_key parameters at the same time.
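A hedged sketch of a Splunk HEC Output built from the parameters above (the connection fields hec_host, hec_port, and hec_token come from the full plugin reference; host names, the index, and the Secret are placeholders):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Output
  metadata:
    name: splunk-output
  spec:
    splunkHec:
      hec_host: splunk.example.com        # placeholder HEC endpoint host
      hec_port: 8088
      hec_token:
        valueFrom:
          secretKeyRef:
            name: splunk-hec-token        # placeholder Secret holding the HEC token
            key: token
      protocol: https
      index: kubernetes                   # target Splunk index; omit to let HEC choose
      insecure_ssl: false
      source: fluentd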
SQS queue url e.g. https://sqs.us-west-2.amazonaws.com/123456789012/myqueue
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Used to specify the key when merging json or sending logs in text format
Default: message
metric_data_format (string, optional)
The format of metrics you will be sending, either graphite or carbon2 or prometheus
Default: graphite
open_timeout (int, optional)
Set timeout seconds to wait until connection is opened.
Default: 60
proxy_uri (string, optional)
Add the uri of the proxy environment if present.
slow_flush_log_threshold (string, optional)
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
source_category (string, optional)
Set _sourceCategory metadata field within SumoLogic
Default: nil
source_host (string, optional)
Set _sourceHost metadata field within SumoLogic
Default: nil
source_name (string, required)
Set _sourceName metadata field within SumoLogic - overrides source_name_key (default is nil)
source_name_key (string, optional)
Set as source::path_key’s value so that the source_name can be extracted from Fluentd’s buffer
Default: source_name
sumo_client (string, optional)
Name of the Sumo client which is sent as the X-Sumo-Client header
The threshold for chunk flush performance check. Parameter type is float, not time, default: 20.0 (seconds) If chunk flush takes longer time than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
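As a rough example, a Sumo Logic Output assembled from the parameters above (the endpoint field comes from the full plugin reference and is assumed here to be a Secret holding the HTTP source URL; all names are placeholders):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Output
  metadata:
    name: sumologic-output
  spec:
    sumologic:
      endpoint:
        valueFrom:
          secretKeyRef:
            name: sumologic-collector     # placeholder Secret holding the HTTP source URL
            key: endpoint
      source_name: kubernetes             # required: _sourceName metadata field
      source_category: prod/logging       # optional: _sourceCategory metadata field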
Authorization Bearer token for http request to VMware Log Intelligence Secret
content_type (string, required)
Content Type for http request to VMware Log Intelligence
Default: application/json
structure (string, required)
Structure for http request to VMware Log Intelligence
Default: simple
LogIntelligenceHeadersOut
LogIntelligenceHeadersOut is used to convert the input LogIntelligenceHeaders to a Fluentd output that uses the correct key names for the VMware Log Intelligence plugin. This allows the Output to accept the config in snake_case (as other output plugins do) but emit the Fluentd config with the proper key names (for example, content_type -> Content-Type).
Authorization (*secret.Secret, required)
Authorization Bearer token for http request to VMware Log Intelligence
Content-Type (string, required)
Content Type for http request to VMware Log Intelligence
Default: application/json
structure (string, required)
Structure for http request to VMware Log Intelligence
Flatten hashes to create one key/val pair w/o losing log data
Default: true
flatten_hashes_separator (string, optional)
Separator to use for joining flattened keys
Default: _
http_conn_debug (bool, optional)
If set, enables debug logs for http connection
Default: false
http_method (string, optional)
HTTP method (post)
Default: post
host (string, optional)
VMware Aria Operations For Logs Host ex. localhost
log_text_keys ([]string, optional)
Keys from log event whose values should be added as log message/text to VMware Aria Operations For Logs. These key/value pairs won’t be expanded/flattened and won’t be added as metadata/fields.
VMware Aria Operations For Logs ingestion api path ex. ‘api/v1/events/ingest’
Default: api/v1/events/ingest
port (int, optional)
VMware Aria Operations For Logs port ex. 9000
Default: 80
raise_on_error (bool, optional)
Raise errors that were rescued during HTTP requests?
Default: false
rate_limit_msec (int, optional)
Simple rate limiting: ignore any records within rate_limit_msec since the last one
Default: 0
request_retries (int, optional)
Number of retries
Default: 3
request_timeout (int, optional)
http connection ttl for each request
Default: 5
ssl_verify (*bool, optional)
SSL verification flag
Default: true
scheme (string, optional)
HTTP scheme (http,https)
Default: http
serializer (string, optional)
Serialization (json)
Default: json
shorten_keys (map[string]string, optional)
Keys from the log event to rewrite, for instance from ‘kubernetes_namespace’ to ‘k8s_namespace’. Tags are rewritten with substring substitution and applied in the order present in the hash. Hashes enumerate their values in the order that the corresponding keys were inserted, see: https://ruby-doc.org/core-2.2.2/Hash.html
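A hedged sketch of an Output for VMware Aria Operations for Logs using the fields above (the output type key vmwareLogInsight, the host, and the port are assumptions to verify against the CRD reference for your operator version):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Output
  metadata:
    name: aria-logs-output
  spec:
    vmwareLogInsight:
      host: aria-logs.example.com         # placeholder ingestion host
      port: 9543                          # placeholder ingestion port
      scheme: https
      ssl_verify: true
      path: api/v1/events/ingest          # ingestion API path
      log_text_keys:                      # keys whose values become the log message text
        - log
        - msg
        - message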
The name of the counter to create. Note that the value of this option is always prefixed with syslogng_, so for example key("my-custom-key") becomes syslogng_my-custom-key.
labels (ArrowMap, optional)
The labels used to create separate counters, based on the fields of the messages processed by metrics-probe(). The keys of the map are the name of the label, and the values are syslog-ng templates.
level (int, optional)
Sets the stats level of the generated metrics (default 0).
- (struct{}, required)
3 - Rewrite
Rewrite filters can be used to modify record contents. Logging operator currently supports the following rewrite functions:
The name of the counter to create. Note that the value of this option is always prefixed with syslogng_, so for example key("my-custom-key") becomes syslogng_my-custom-key.
labels (ArrowMap, optional)
The labels used to create separate counters, based on the fields of the messages processed by metrics-probe(). The keys of the map are the name of the label, and the values are syslog-ng templates.
level (int, optional)
Sets the stats level of the generated metrics (default 0).
SyslogNGOutput and SyslogNGClusterOutput resources have almost the same structure as Output and ClusterOutput resources, with the main difference being the number and kind of supported destinations.
You can use the following syslog-ng outputs in your SyslogNGOutput and SyslogNGClusterOutput resources.
1 - Authentication for syslog-ng outputs
Overview
GRPC-based outputs use this configuration instead of the simple tls field found at most HTTP based destinations. For details, see the documentation of a related syslog-ng destination, for example, Grafana Loki.
Configuration
Auth
Authentication settings. Only one authentication method can be set. Default: Insecure
adc (*ADC, optional)
Application Default Credentials (ADC).
alts (*ALTS, optional)
Application Layer Transport Security (ALTS) is a simple-to-use authentication method, only available within Google’s infrastructure.
insecure (*Insecure, optional)
This is the default method, authentication is disabled (auth(insecure())).
Prunes the unused space in the LogMessage representation
dir (string, optional)
Description: Defines the folder where the disk-buffer files are stored.
disk_buf_size (int64, required)
This is a required option. The maximum size of the disk-buffer in bytes. The minimum value is 1048576 bytes.
mem_buf_length (*int64, optional)
Use this option if the option reliable() is set to no. This option contains the number of messages stored in overflow queue.
mem_buf_size (*int64, optional)
Use this option if the option reliable() is set to yes. This option contains the size of the messages in bytes that is used in the memory part of the disk buffer.
q_out_size (*int64, optional)
The number of messages stored in the output buffer of the destination.
reliable (bool, required)
If set to yes, syslog-ng OSE cannot lose logs in case of reload/restart, unreachable destination or syslog-ng OSE crash. This solution provides a slower, but reliable disk-buffer option.
The group of the directories created by syslog-ng. To preserve the original properties of an existing directory, use the option without specifying an attribute: dir-group().
Default: Use the global settings
dir_owner (string, optional)
The owner of the directories created by syslog-ng. To preserve the original properties of an existing directory, use the option without specifying an attribute: dir-owner().
Default: Use the global settings
dir_perm (int, optional)
The permission mask of directories created by syslog-ng. Log directories are only created if a file after macro expansion refers to a non-existing directory, and directory creation is enabled (see also the create-dirs() option). For octal numbers prefix the number with 0, for example, use 0755 for rwxr-xr-x.
Default: Use the global settings
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
The body of the HTTP request, for example, body("${ISODATE} ${MESSAGE}"). You can use strings, macros, and template functions in the body. If not set, it will contain the message received from the source by default.
body-prefix (string, optional)
The string syslog-ng OSE puts at the beginning of the body of the HTTP request, before the log message.
body-suffix (string, optional)
The string syslog-ng OSE puts to the end of the body of the HTTP request, after the log message.
delimiter (string, optional)
By default, syslog-ng OSE separates the log messages of the batch with a newline character.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
headers ([]string, optional)
Custom HTTP headers to include in the request, for example, headers("HEADER1: header1", "HEADER2: header2").
Default: empty
log-fifo-size (int, optional)
The number of messages that the output queue can store.
method (string, optional)
Specifies the HTTP method to use when sending the message to the server. POST | PUT
password (secret.Secret, optional)
The password that syslog-ng OSE uses to authenticate on the server where it sends the messages.
persist_name (string, optional)
If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
timeout (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
Specifies the hostname or IP address and optionally the port number of the web service that can receive log data via HTTP. Use a colon (:) after the address to specify the port number of the server. For example: http://127.0.0.1:8000
user (string, optional)
The username that syslog-ng OSE uses to authenticate on the server where it sends the messages.
user-agent (string, optional)
The value of the USER-AGENT header in the messages sent to the server.
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
Batch
batch-bytes (int, optional)
Description: Sets the maximum size of payload in a batch. If the size of the messages reaches this value, syslog-ng OSE sends the batch to the destination even if the number of messages is less than the value of the batch-lines() option. Note that if the batch-timeout() option is enabled and the queue becomes empty, syslog-ng OSE flushes the messages only if batch-timeout() expires, or the batch reaches the limit set in batch-bytes().
batch-lines (int, optional)
Description: Specifies how many lines are flushed to a destination in one batch. The syslog-ng OSE application waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For example, if you set batch-lines() to 100, syslog-ng OSE waits for 100 messages.
batch-timeout (int, optional)
Description: Specifies the time syslog-ng OSE waits for lines to accumulate in the output buffer. The syslog-ng OSE application sends batches to the destinations evenly. The timer starts when the first message arrives to the buffer, so if only few messages arrive, syslog-ng OSE sends messages to the destination at most once every batch-timeout() milliseconds.
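For orientation, a minimal SyslogNGOutput using the http() destination together with the batch options above (the URL and headers are placeholders; verify the dashed batch field names against the CRD reference):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: SyslogNGOutput
  metadata:
    name: http-output
  spec:
    http:
      url: https://logs.example.com/ingest    # placeholder receiver endpoint
      method: POST
      headers:
        - "Content-Type: application/json"
      batch-lines: 100        # flush after 100 messages
      batch-timeout: 10000    # or after 10 seconds, whichever comes first
      workers: 4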
6 - Loggly output
Overview
The loggly() destination sends log messages to the Loggly Logging-as-a-Service provider. You can send log messages over TCP, or encrypted with TLS for syslog-ng outputs.
A JSON object representing key-value pairs for the Event. These key-value pairs add structure to Events, making it easier to search. Attributes can be nested JSON objects, however, we recommend limiting the amount of nesting.
Default: "--scope rfc5424 --exclude MESSAGE --exclude DATE --leave-initial-dot"
batch_bytes (int, optional)
batch_lines (int, optional)
batch_timeout (int, optional)
body (string, optional)
content_type (string, optional)
This field specifies the content type of the log records being sent to Falcon’s LogScale.
Default: "application/json"
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
extra_headers (string, optional)
This field represents additional headers that can be included in the HTTP request when sending log records to Falcon’s LogScale.
Default: empty
persist_name (string, optional)
rawstring (string, optional)
The raw string representing the Event. The default display for an Event in LogScale is the rawstring. If you do not provide the rawstring field, then the response defaults to a JSON representation of the attributes field.
Default: empty
timezone (string, optional)
The timezone is only required if you specify the timestamp in milliseconds. The timezone specifies the local timezone for the event. Note that you must still specify the timestamp in UTC time.
token (*secret.Secret, optional)
An Ingest Token is a unique string that identifies a repository and allows you to send data to that repository.
Default: empty
url (*secret.Secret, optional)
Ingester URL is the URL of the Humio cluster you want to send data to.
Description: Specifies how many lines are flushed to a destination in one batch. The syslog-ng OSE application waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For example, if you set batch-lines() to 100, syslog-ng OSE waits for 100 messages.
batch-timeout (int, optional)
Description: Specifies the time syslog-ng OSE waits for lines to accumulate in the output buffer. The syslog-ng OSE application sends batches to the destinations evenly. The timer starts when the first message arrives to the buffer, so if only few messages arrive, syslog-ng OSE sends messages to the destination at most once every batch-timeout() milliseconds.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
labels (filter.ArrowMap, optional)
Using the Labels map, Kubernetes label to Loki label mapping can be configured. Example: {"app" : "$PROGRAM"}
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
If you receive the following error message during AxoSyslog startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See syslog-ng docs for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
template (string, optional)
Template for customizing the log message format.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
timestamp (string, optional)
The timestamp that will be applied to the outgoing messages (possible values: current|received|msg default: current). Loki does not accept events, in which the timestamp is not monotonically increasing.
url (string, optional)
Specifies the hostname or IP address and optionally the port number of the service that can receive log data via gRPC. Use a colon (:) after the address to specify the port number of the server. For example: grpc://127.0.0.1:8000
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
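A sketch of a SyslogNGOutput for Grafana Loki based on the options above (the URL and label mapping are placeholders; adjust the macros to your needs):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: SyslogNGOutput
  metadata:
    name: loki-syslog-output
  spec:
    loki:
      url: "loki.monitoring:8000"      # placeholder gRPC endpoint of the Loki server
      labels:
        app: "$PROGRAM"                # map syslog-ng macros to Loki labels
        host: "$HOST"
      timestamp: msg                   # use the message timestamp (must be monotonically increasing)
      template: "$ISODATE $HOST $MSGHDR$MSG"
      workers: 2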
The name of the MongoDB collection where the log messages are stored (collections are similar to SQL tables). Note that the name of the collection must not start with a dollar sign ($), and that it may contain dot (.) characters.
dir (string, optional)
Defines the folder where the disk-buffer files are stored.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
fallback-topic is used when syslog-ng cannot post a message to the originally defined topic (which can include invalid characters coming from templates).
qos (int, optional)
qos stands for quality of service and can take three values in the MQTT world. Its default value is 0, where there is no guarantee that the message is ever delivered.
template (string, optional)
Template where you can configure the message template sent to the MQTT broker. By default, the template is: $ISODATE $HOST $MSGHDR$MSG
topic (string, optional)
Topic defines in which topic syslog-ng stores the log message. You can also use templates here, and use, for example, the $HOST macro in the topic name hierarchy.
The password used for authentication on a password-protected Redis server.
command (StringList, optional)
Internal rendered form of the CommandAndArguments field
command_and_arguments ([]string, optional)
The Redis command to execute, for example, LPUSH, INCR, or HINCRBY. Using the HINCRBY command with an increment value of 1 allows you to create various statistics. For example, the command("HINCRBY" "${HOST}/programs" "${PROGRAM}" "1") command counts the number of log messages on each host for each program.
Default: ""
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
host (string, optional)
The hostname or IP address of the Redis server.
Default: 127.0.0.1
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
Persistname
port (int, optional)
The port number of the Redis server.
Default: 6379
retries (int, optional)
If syslog-ng OSE cannot send a message, it will try again until the number of attempts reaches retries().
Default: 3
throttle (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
Default: 0
time-reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
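A rough SyslogNGOutput example for the redis() destination, using the HINCRBY statistics pattern mentioned above (the host and command are placeholders):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: SyslogNGOutput
  metadata:
    name: redis-output
  spec:
    redis:
      host: redis.logging.svc          # placeholder Redis server
      port: 6379
      retries: 3
      throttle: 0
      command_and_arguments:           # count log messages per host and program
        - HINCRBY
        - "${HOST}/programs"
        - "${PROGRAM}"
        - "1"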
The number of messages that the output queue can store.
max_object_size (int, optional)
Set the maximum object size.
Default: 5120GiB
max_pending_uploads (int, optional)
Set the maximum number of pending uploads.
Default: 32
object_key (string, optional)
The object_key for the S3 server.
object_key_timestamp (RawString, optional)
Set object_key_timestamp
persist_name (string, optional)
Persistname
region (string, optional)
Set the region option.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
secret_key (*secret.Secret, optional)
The secret_key for the S3 server.
storage_class (string, optional)
Set the storage_class option.
template (RawString, optional)
Template
throttle (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
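To illustrate, a SyslogNGOutput for the s3() destination combining the options above with the connection fields from the full reference (the url, bucket, and access_key fields are taken from the full parameter list; bucket name, endpoint, and Secret names are placeholders):

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: SyslogNGOutput
  metadata:
    name: s3-syslog-output
  spec:
    s3:
      url: https://s3.eu-central-1.amazonaws.com   # placeholder S3 endpoint
      bucket: my-log-bucket
      access_key:
        valueFrom:
          secretKeyRef:
            name: s3-credentials                   # placeholder Secret with the access keys
            key: accessKey
      secret_key:
        valueFrom:
          secretKeyRef:
            name: s3-credentials
            key: secretKey
      object_key: "${HOST}/my-logs"                # per-host object prefix
      max_pending_uploads: 32
      storage_class: STANDARD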
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
headers ([]string, optional)
Custom HTTP headers to include in the request, for example, headers("HEADER1: header1", "HEADER2: header2").
Default: empty
persist_name (string, optional)
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
persist_name (string, optional)
port (int, optional)
This option sets the port number of the Sumo Logic server to connect to.
Default: 6514
tag (string, optional)
This option specifies the list of tags to add as the tags fields of Sumo Logic messages. If not specified, syslog-ng OSE automatically adds the tags already assigned to the message. If you set the tag() option, only the tags you specify will be added to the messages.
By default, syslog-ng OSE closes destination sockets if it receives any input from the socket (for example, a reply). If this option is set to no, syslog-ng OSE just ignores the input, but does not close the socket. For details, see the documentation of the AxoSyslog syslog-ng distribution.
disk_buffer (*DiskBuffer, optional)
Enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Unique name for the syslog-ng driver. If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
The name of a directory that contains a set of trusted CA certificates in PEM format. For details, see the AxoSyslog Core documentation.
ca_file (*secret.Secret, optional)
The name of a file that contains a set of trusted CA certificates in PEM format. (Optional) For details, see the AxoSyslog Core documentation.
cert_file (*secret.Secret, optional)
Name of a file, that contains an X.509 certificate (or a certificate chain) in PEM format, suitable as a TLS certificate, matching the private key set in the key-file() option. For details, see the AxoSyslog Core documentation.
cipher-suite (string, optional)
Description: Specifies the cipher, hash, and key-exchange algorithms used for the encryption, for example, ECDHE-ECDSA-AES256-SHA384. The list of available algorithms depends on the version of OpenSSL used to compile syslog-ng.
key_file (*secret.Secret, optional)
The name of a file that contains an unencrypted private key in PEM format, suitable as a TLS key. For details, see the AxoSyslog Core documentation.
Use the certificate store of the system for verifying HTTPS certificates. For details, see the AxoSyslog Core documentation.
GrpcTLS
ca_file (*secret.Secret, optional)
The name of a file that contains a set of trusted CA certificates in PEM format. For details, see the AxoSyslog Core documentation.
cert_file (*secret.Secret, optional)
Name of a file that contains an X.509 certificate (or a certificate chain) in PEM format, suitable as a TLS certificate, matching the private key set in the key-file() option. For details, see the AxoSyslog Core documentation.
key_file (*secret.Secret, optional)
The name of a file that contains an unencrypted private key in PEM format, suitable as a TLS key. For details, see the AxoSyslog Core documentation.
GRPC-based outputs use this configuration instead of the simple tls field found at most HTTP based destinations. For details, see the documentation of a related syslog-ng destination, for example, Grafana Loki.
Configuration
Auth
Authentication settings. Only one authentication method can be set. Default: Insecure
adc (*ADC, optional)
Application Default Credentials (ADC).
alts (*ALTS, optional)
Application Layer Transport Security (ALTS) is a simple-to-use authentication method, only available within Google’s infrastructure.
insecure (*Insecure, optional)
This is the default method, authentication is disabled (auth(insecure())).
Prunes the unused space in the LogMessage representation
dir (string, optional)
Description: Defines the folder where the disk-buffer files are stored.
disk_buf_size (int64, required)
This is a required option. The maximum size of the disk-buffer in bytes. The minimum value is 1048576 bytes.
mem_buf_length (*int64, optional)
Use this option if the option reliable() is set to no. This option contains the number of messages stored in overflow queue.
mem_buf_size (*int64, optional)
Use this option if the option reliable() is set to yes. This option contains the size of the messages in bytes that is used in the memory part of the disk buffer.
q_out_size (*int64, optional)
The number of messages stored in the output buffer of the destination.
reliable (bool, required)
If set to yes, syslog-ng OSE cannot lose logs in case of reload/restart, unreachable destination or syslog-ng OSE crash. This solution provides a slower, but reliable disk-buffer option.
The group of the directories created by syslog-ng. To preserve the original properties of an existing directory, use the option without specifying an attribute: dir-group().
Default: Use the global settings
dir_owner (string, optional)
The owner of the directories created by syslog-ng. To preserve the original properties of an existing directory, use the option without specifying an attribute: dir-owner().
Default: Use the global settings
dir_perm (int, optional)
The permission mask of directories created by syslog-ng. Log directories are only created if a file after macro expansion refers to a non-existing directory, and directory creation is enabled (see also the create-dirs() option). For octal numbers prefix the number with 0, for example, use 0755 for rwxr-xr-x.
Default: Use the global settings
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
The body of the HTTP request, for example, body("${ISODATE} ${MESSAGE}"). You can use strings, macros, and template functions in the body. If not set, it will contain the message received from the source by default.
body-prefix (string, optional)
The string syslog-ng OSE puts at the beginning of the body of the HTTP request, before the log message.
body-suffix (string, optional)
The string syslog-ng OSE puts to the end of the body of the HTTP request, after the log message.
delimiter (string, optional)
By default, syslog-ng OSE separates the log messages of the batch with a newline character.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
headers ([]string, optional)
Custom HTTP headers to include in the request, for example, headers("HEADER1: header1", "HEADER2: header2").
Default: empty
log-fifo-size (int, optional)
The number of messages that the output queue can store.
method (string, optional)
Specifies the HTTP method to use when sending the message to the server. POST | PUT
password (secret.Secret, optional)
The password that syslog-ng OSE uses to authenticate on the server where it sends the messages.
persist_name (string, optional)
If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
timeout (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
Specifies the hostname or IP address and optionally the port number of the web service that can receive log data via HTTP. Use a colon (:) after the address to specify the port number of the server. For example: http://127.0.0.1:8000
user (string, optional)
The username that syslog-ng OSE uses to authenticate on the server where it sends the messages.
user-agent (string, optional)
The value of the USER-AGENT header in the messages sent to the server.
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
Batch
batch-bytes (int, optional)
Description: Sets the maximum size of payload in a batch. If the size of the messages reaches this value, syslog-ng OSE sends the batch to the destination even if the number of messages is less than the value of the batch-lines() option. Note that if the batch-timeout() option is enabled and the queue becomes empty, syslog-ng OSE flushes the messages only if batch-timeout() expires, or the batch reaches the limit set in batch-bytes().
batch-lines (int, optional)
Description: Specifies how many lines are flushed to a destination in one batch. The syslog-ng OSE application waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For example, if you set batch-lines() to 100, syslog-ng OSE waits for 100 messages.
batch-timeout (int, optional)
Description: Specifies the time syslog-ng OSE waits for lines to accumulate in the output buffer. The syslog-ng OSE application sends batches to the destinations evenly. The timer starts when the first message arrives in the buffer, so if only a few messages arrive, syslog-ng OSE sends messages to the destination at most once every batch-timeout() milliseconds.
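To illustrate how the http() destination and these batch options fit together in a SyslogNGOutput resource, here is a minimal, hedged sketch. The resource name, endpoint URL, and numeric values are placeholder assumptions; check the exact field names against the CRD reference for your operator version.

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: http-batch-sample
  namespace: default
spec:
  http:
    url: http://my-receiver.example.com:8000   # hypothetical HTTP receiver
    method: POST
    batch-lines: 100       # flush after 100 messages...
    batch-bytes: 4096      # ...or when the payload reaches 4096 bytes...
    batch-timeout: 10000   # ...or after 10 seconds (value in milliseconds)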
syslog-ng outputs
+
SyslogNGOutput and SyslogNGClusterOutput resources have almost the same structure as Output and ClusterOutput resources, with the main difference being the number and kind of supported destinations.
You can use the following syslog-ng outputs in your SyslogNGOutput and SyslogNGClusterOutput resources.
The loggly() destination sends log messages to the Loggly Logging-as-a-Service provider.
+You can send log messages over TCP, or encrypted with TLS for syslog-ng outputs.
A JSON object representing key-value pairs for the Event. These key-value pairs add structure to Events, making them easier to search. Attributes can be nested JSON objects, however, we recommend limiting the amount of nesting.
Default: "--scope rfc5424 --exclude MESSAGE --exclude DATE --leave-initial-dot"
batch_bytes (int, optional)
batch_lines (int, optional)
batch_timeout (int, optional)
body (string, optional)
content_type (string, optional)
This field specifies the content type of the log records being sent to Falcon’s LogScale.
Default: "application/json"
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
extra_headers (string, optional)
This field represents additional headers that can be included in the HTTP request when sending log records to Falcon’s LogScale.
Default: empty
persist_name (string, optional)
rawstring (string, optional)
The raw string representing the Event. The default display for an Event in LogScale is the rawstring. If you do not provide the rawstring field, then the response defaults to a JSON representation of the attributes field.
Default: empty
timezone (string, optional)
The timezone is only required if you specify the timestamp in milliseconds. The timezone specifies the local timezone for the event. Note that you must still specify the timestamp in UTC time.
token (*secret.Secret, optional)
An Ingest Token is a unique string that identifies a repository and allows you to send data to that repository.
Default: empty
url (*secret.Secret, optional)
Ingester URL is the URL of the Humio cluster you want to send data to.
Description: Specifies how many lines are flushed to a destination in one batch. The syslog-ng OSE application waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For example, if you set batch-lines() to 100, syslog-ng OSE waits for 100 messages.
batch-timeout (int, optional)
Description: Specifies the time syslog-ng OSE waits for lines to accumulate in the output buffer. The syslog-ng OSE application sends batches to the destinations evenly. The timer starts when the first message arrives in the buffer, so if only a few messages arrive, syslog-ng OSE sends messages to the destination at most once every batch-timeout() milliseconds.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
labels (filter.ArrowMap, optional)
Use the labels map to configure how Kubernetes labels are mapped to Loki labels. Example: {"app" : "$PROGRAM"}
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
If you receive the following error message during AxoSyslog startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See syslog-ng docs for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
template (string, optional)
Template for customizing the log message format.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
timestamp (string, optional)
The timestamp that will be applied to the outgoing messages (possible values: current | received | msg; default: current). Loki does not accept events in which the timestamp is not monotonically increasing.
url (string, optional)
Specifies the hostname or IP address and optionally the port number of the service that can receive log data via gRPC. Use a colon (:) after the address to specify the port number of the server. For example: grpc://127.0.0.1:8000
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
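As an illustration only, a SyslogNGOutput using the loki() options listed above might look like the following sketch; the Loki URL and the label mapping are assumptions, not values taken from this reference.

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: loki-sample
  namespace: default
spec:
  loki:
    url: grpc://loki.loki.svc:9095   # hypothetical in-cluster Loki gRPC endpoint
    labels:
      app: "$PROGRAM"                # map the syslog-ng PROGRAM macro to the "app" Loki label
    timestamp: current               # use the current time, as described above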
The name of the MongoDB collection where the log messages are stored (collections are similar to SQL tables). Note that the name of the collection must not start with a dollar sign ($), and that it may contain dot (.) characters.
dir (string, optional)
Defines the folder where the disk-buffer files are stored.
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
fallback-topic is used when syslog-ng cannot post a message to the originally defined topic (which can include invalid characters coming from templates).
qos (int, optional)
qos stands for quality of service and can take three values in the MQTT world. Its default value is 0, where there is no guarantee that the message is ever delivered.
template (string, optional)
Template where you can configure the message template sent to the MQTT broker. By default, the template is: $ISODATE $HOST $MSGHDR$MSG
topic (string, optional)
Topic defines in which topic syslog-ng stores the log message. You can also use templates here, and use, for example, the $HOST macro in the topic name hierarchy.
The password used for authentication on a password-protected Redis server.
command (StringList, optional)
Internal rendered form of the CommandAndArguments field
command_and_arguments ([]string, optional)
The Redis command to execute, for example, LPUSH, INCR, or HINCRBY. Using the HINCRBY command with an increment value of 1 allows you to create various statistics. For example, the command("HINCRBY" "${HOST}/programs" "${PROGRAM}" "1") command counts the number of log messages on each host for each program.
Default: ""
disk_buffer (*DiskBuffer, optional)
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
host (string, optional)
The hostname or IP address of the Redis server.
Default: 127.0.0.1
log-fifo-size (int, optional)
The number of messages that the output queue can store.
persist_name (string, optional)
Persistname
port (int, optional)
The port number of the Redis server.
Default: 6379
retries (int, optional)
If syslog-ng OSE cannot send a message, it will try again until the number of attempts reaches retries().
Default: 3
throttle (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
Default: 0
time-reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
Default: 60
workers (int, optional)
Specifies the number of worker threads (at least 1) that syslog-ng OSE uses to send messages to the server. Increasing the number of worker threads can drastically improve the performance of the destination.
The number of messages that the output queue can store.
max_object_size (int, optional)
Set the maximum object size.
Default: 5120GiB
max_pending_uploads (int, optional)
Set the maximum number of pending uploads.
Default: 32
object_key (string, optional)
The object_key for the S3 server.
object_key_timestamp (RawString, optional)
Set object_key_timestamp
persist_name (string, optional)
Persistname
region (string, optional)
Set the region option.
retries (int, optional)
The number of times syslog-ng OSE attempts to send a message to this destination. If syslog-ng OSE could not send a message, it will try again until the number of attempts reaches retries, then drops the message.
secret_key (*secret.Secret, optional)
The secret_key for the S3 server.
storage_class (string, optional)
Set the storage_class option.
template (RawString, optional)
Template
throttle (int, optional)
Sets the maximum number of messages sent to the destination per second. Use this output-rate-limiting functionality only when using disk-buffer as well to avoid the risk of losing messages. Specifying 0 or a lower value sets the output limit to unlimited.
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
headers ([]string, optional)
Custom HTTP headers to include in the request, for example, headers("HEADER1: header1", "HEADER2: header2").
Default: empty
persist_name (string, optional)
time_reopen (int, optional)
The time to wait in seconds before a dead connection is reestablished.
This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Default: false
persist_name (string, optional)
port (int, optional)
This option sets the port number of the Sumo Logic server to connect to.
Default: 6514
tag (string, optional)
This option specifies the list of tags to add as the tags fields of Sumo Logic messages. If not specified, syslog-ng OSE automatically adds the tags already assigned to the message. If you set the tag() option, only the tags you specify will be added to the messages.
By default, syslog-ng OSE closes destination sockets if it receives any input from the socket (for example, a reply). If this option is set to no, syslog-ng OSE just ignores the input, but does not close the socket. For details, see the documentation of the AxoSyslog syslog-ng distribution.
disk_buffer (*DiskBuffer, optional)
Enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the Syslog-ng DiskBuffer options.
Unique name for the syslog-ng driver. If you receive the following error message during syslog-ng startup, set the persist-name() option of the duplicate drivers: Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down. See the documentation of the AxoSyslog syslog-ng distribution for more information.
ca_dir (*secret.Secret, optional)
The name of a directory that contains a set of trusted CA certificates in PEM format. For details, see the AxoSyslog Core documentation.
ca_file (*secret.Secret, optional)
The name of a file that contains a set of trusted CA certificates in PEM format. (Optional) For details, see the AxoSyslog Core documentation.
cert_file (*secret.Secret, optional)
Name of a file, that contains an X.509 certificate (or a certificate chain) in PEM format, suitable as a TLS certificate, matching the private key set in the key-file() option. For details, see the AxoSyslog Core documentation.
cipher-suite (string, optional)
Description: Specifies the cipher, hash, and key-exchange algorithms used for the encryption, for example, ECDHE-ECDSA-AES256-SHA384. The list of available algorithms depends on the version of OpenSSL used to compile syslog-ng.
key_file (*secret.Secret, optional)
The name of a file that contains an unencrypted private key in PEM format, suitable as a TLS key. For details, see the AxoSyslog Core documentation.
use-system-cert-store (*bool, optional)
Use the certificate store of the system for verifying HTTPS certificates. For details, see the AxoSyslog Core documentation.
GrpcTLS
ca_file (*secret.Secret, optional)
The name of a file that contains a set of trusted CA certificates in PEM format. For details, see the AxoSyslog Core documentation.
cert_file (*secret.Secret, optional)
Name of a file that contains an X.509 certificate (or a certificate chain) in PEM format, suitable as a TLS certificate, matching the private key set in the key-file() option. For details, see the AxoSyslog Core documentation.
key_file (*secret.Secret, optional)
The name of a file that contains an unencrypted private key in PEM format, suitable as a TLS key. For details, see the AxoSyslog Core documentation.
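To show how these TLS options are typically wired to a Kubernetes Secret, here is a hedged sketch; the destination, the Secret name, and the key are assumptions, and only the ca_file option is shown.

apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: tls-syslog-sample
  namespace: default
spec:
  syslog:
    host: logs.example.com   # hypothetical remote syslog server
    transport: tls
    tls:
      ca_file:
        mountFrom:
          secretKeyRef:
            name: syslog-tls-ca   # hypothetical Secret holding the CA certificate
            key: ca.crt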
For developers
+
This documentation helps you set up a developer environment and write plugins for the Logging operator.
Setting up Kind
+
+
Install Kind on your computer
go get sigs.k8s.io/kind@v0.5.1
+
+
Create cluster
kind create cluster --name logging
+
+
Install prerequisites (this is a Kubebuilder makefile that will generate and install crds)
make install
+
+
Run the Operator
go run main.go
+
Writing a plugin
To add a plugin to the logging operator you need to define the plugin struct.
+
Note: Place your plugin in the corresponding directory pkg/sdk/logging/model/filter or pkg/sdk/logging/model/output
type MyExampleOutput struct {
    // Path that is required for the plugin
    Path string `json:"path,omitempty"`
}
+
The plugin uses the JSON tags to parse and validate the configuration. Without tags, the configuration is not valid. The Fluentd parameter name must match the JSON tag. Don’t forget to use omitempty for non-required parameters.
Implement ToDirective
To render the configuration you have to implement the ToDirective function.
The operator parses the docstrings to generate the documentation.
...
// AWS access key id
AwsAccessKey *secret.Secret `json:"aws_key_id,omitempty"`
...
+
Will generate the following Markdown
Variable Name    Default    Applied function
AwsAccessKey                AWS access key id
You can hint default values in the docstring via (default: value). This is useful if you don’t want to set the default explicitly with a tag. However, during rendering, defaults set in tags take priority over the docstring.
...
// The format of S3 object keys (default: %{path}%{time_slice}_%{index}.%{file_extension})
S3ObjectKeyFormat string `json:"s3_object_key_format,omitempty"`
...
+
Special docstrings
+
+docName:"Title for the plugin section"
+docLink:"Buffer,./buffer.md"
You can declare document title and description above the type _doc* interface{} variable declaration.
Example Document headings:
// +docName:"Amazon S3 plugin for Fluentd"
+// **s3** output plugin buffers event logs in local file and upload it to S3 periodically. This plugin splits files exactly by using the time of event logs (not the time when the logs are received). For example, a log '2011-01-02 message B' is reached, and then another log '2011-01-03 message B' is reached in this order, the former one is stored in "20110102.gz" file, and latter one in "20110103.gz" file.
type _docS3 interface{}
+
YAML files for simple logging flows with filter examples.
GeoIP filter
Parser and tag normalizer
Dedot filter
Multiple format
+
2 - Parsing custom date formats
By default, the syslog-ng aggregator uses the time when a message has been received on its input source as the timestamp. If you want to use the timestamp written in the message metadata, you can use a date-parser.
Available in Logging operator version 4.5 and later.
To use the timestamps written by the container runtime (cri or docker) and parsed by Fluent Bit, define the sourceDateParser in the syslog-ng spec.
3 - Store Nginx Access Logs in Amazon CloudWatch with Logging Operator
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to CloudWatch.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy the Logging operator and a demo Application
Install the Logging operator and a demo application using Helm.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
+
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
+
+
Install the Logging operator into the logging namespace:
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
+
Create AWS secret
+
If you have your $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY environment variables set, you can use the following snippet.
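The snippet itself is not reproduced here; as a sketch only, the credentials could be stored in a plain Kubernetes Secret like the one below. The Secret name and key names are assumptions and must match whatever your CloudWatch output configuration references.

apiVersion: v1
kind: Secret
metadata:
  name: logging-cloudwatch   # hypothetical name
  namespace: logging
type: Opaque
stringData:
  awsAccessKeyId: "<value of $AWS_ACCESS_KEY_ID>"         # hypothetical key name
  awsSecretAccessKey: "<value of $AWS_SECRET_ACCESS_KEY>"  # hypothetical key name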
4 - Transport all logs into Amazon S3 with Logging operator
This guide describes how to collect all the container logs in Kubernetes using the Logging operator, and how to send them to Amazon S3.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy the Logging operator
Install the Logging operator.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
+
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
+
+
Install the Logging operator into the logging namespace:
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Up until Logging operator version 4.3, Fluentd logs were written to the container filesystem; starting with version 4.4, they are written to stdout.
+See FluentOutLogrotate for why this was changed and how you can re-enable it if needed.
Check the output. The logs will be available in the bucket on a path like:
5 - Store NGINX access logs in Elasticsearch with Logging operator
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Elasticsearch.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy Elasticsearch
First, deploy Elasticsearch in your Kubernetes cluster. The following procedure is based on the Elastic Cloud on Kubernetes quickstart, but there are some minor configuration changes, and we install everything into the logging namespace.
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Up until Logging operator version 4.3, Fluentd logs were written to the container filesystem; starting with version 4.4, they are written to stdout.
+See FluentOutLogrotate for why this was changed and how you can re-enable it if needed.
+
Use the following command to retrieve the password of the elastic user:
kubectl -n logging get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}'| base64 --decode;echo
+
+
Enable port forwarding to the Kibana Dashboard Service.
Open the Kibana dashboard in your browser at https://localhost:5601 and login as elastic using the retrieved password.
+
By default, the Logging operator sends the incoming log messages into an index called fluentd. Create an Index Pattern that includes this index (for example, fluentd*), then select Menu > Kibana > Discover. You should see the dashboard and some sample log messages from the demo application.
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Splunk.
Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output (in this case, to Splunk). For more details about the Logging operator, see the Logging operator overview.
Deploy Splunk
First, deploy Splunk Standalone in your Kubernetes cluster. The following procedure is based on the Splunk on Kubernetes quickstart.
This guide walks you through a simple Sumo Logic setup using the Logging Operator.
+Sumo Logic has Prometheus and logging capabilities as well. Here, we focus only on the logging part.
Configuration
There are 3 crucial plugins needed for a proper Sumo Logic setup.
+
Kubernetes metadata enhancer
Sumo Logic filter
Sumo Logic output
Let’s set up the logging first.
GlobalFilters
The first thing we need to ensure is that the EnhanceK8s filter is present in the globalFilters section of the Logging spec.
+This adds additional data to the log lines (like deployment and service names).
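A minimal sketch of that section in the Logging resource could look like the following (the resource name and namespace are assumptions, and other fields are omitted):

apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: demo
spec:
  controlNamespace: logging
  globalFilters:
    - enhanceK8s: {}   # adds deployment and service metadata to the log lines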
Now we can create a ClusterFlow. Add the Sumo Logic filter to the filters section of the ClusterFlow spec.
+It uses the Kubernetes metadata and moves it to a special field called _sumo_metadata.
+All of those moved fields are sent as HTTP headers to the Sumo Logic endpoint.
+
Note: As we are using Fluent Bit to enrich Kubernetes metadata, we need to specify the field names where this data is stored.
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Configure the Logging operator
+
+
Create the logging resource with a persistent syslog-ng installation.
9 - Transport Nginx Access Logs into Kafka with Logging operator
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Kafka.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
10 - Store Nginx Access Logs in Grafana Loki with Logging operator
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Grafana Loki.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy Loki and Grafana
+
+
Add the chart repositories of Loki and Grafana using the following commands:
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Nodegroup-based multitenancy allows you to have multiple tenants (for example, different developer teams or customer environments) on the same cluster who can configure their own logging resources within their assigned namespaces residing on different node groups.
+These resources are isolated from the resources of the other tenants, so the configuration issues and performance characteristics of one tenant do not affect the others.
Sample setup
The following procedure creates two tenants (A and B) and their respective namespaces on a two-node cluster.
+
+
If you don’t already have a cluster, create one with your provider. For a quick test, you can use a local cluster, for example, using minikube:
minikube start --nodes=2
+
+
Set labels on the nodes that correspond to your tenants, for example, tenant-a and tenant-b.
Output metrics are added before the log reaches the destination, and are decorated with output metadata such as name, namespace, and scope. scope stores whether the output is a local or global one. For example:
Store Nginx Access Logs in Amazon CloudWatch with Logging Operator
+
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to CloudWatch.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy the Logging operator and a demo Application
Install the Logging operator and a demo application using Helm.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
+
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
+
+
Install the Logging operator into the logging namespace:
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
+
Create AWS secret
+
If you have your $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY environment variables set, you can use the following snippet.
Output metrics are added before the log reaches the destination, and are decorated with output metadata such as name, namespace, and scope. scope stores whether the output is a local or global one. For example:
Parsing custom date formats
+
By default, the syslog-ng aggregator uses the time when a message has been received on its input source as the timestamp. If you want to use the timestamp written in the message metadata, you can use a date-parser.
Available in Logging operator version 4.5 and later.
To use the timestamps written by the container runtime (cri or docker) and parsed by Fluent Bit, define the sourceDateParser in the syslog-ng spec.
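A minimal sketch of what that could look like (the resource name is an assumption, and the empty sourceDateParser block relies on its defaults; see the Logging CRD reference for the available sub-options):

apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: demo
spec:
  controlNamespace: logging
  syslogNG:
    sourceDateParser: {}   # parse the timestamp written by the container runtime instead of the arrival time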
Store NGINX access logs in Elasticsearch with Logging operator
+
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Elasticsearch.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy Elasticsearch
First, deploy Elasticsearch in your Kubernetes cluster. The following procedure is based on the Elastic Cloud on Kubernetes quickstart, but there are some minor configuration changes, and we install everything into the logging namespace.
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Up until Logging operator version 4.3, Fluentd logs were written to the container filesystem; starting with version 4.4, they are written to stdout.
+See FluentOutLogrotate for why this was changed and how you can re-enable it if needed.
+
Use the following command to retrieve the password of the elastic user:
kubectl -n logging get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}'| base64 --decode;echo
+
+
Enable port forwarding to the Kibana Dashboard Service.
Open the Kibana dashboard in your browser at https://localhost:5601 and login as elastic using the retrieved password.
+
By default, the Logging operator sends the incoming log messages into an index called fluentd. Create an Index Pattern that includes this index (for example, fluentd*), then select Menu > Kibana > Discover. You should see the dashboard and some sample log messages from the demo application.
Transport all logs into Amazon S3 with Logging operator
+
This guide describes how to collect all the container logs in Kubernetes using the Logging operator, and how to send them to Amazon S3.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy the Logging operator
Install the Logging operator.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
+
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
+
+
Install the Logging operator into the logging namespace:
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Up until Logging operator version 4.3, Fluentd logs were written to the container filesystem; starting with version 4.4, they are written to stdout.
+See FluentOutLogrotate for why this was changed and how you can re-enable it if needed.
Check the output. The logs will be available in the bucket on a path like:
Transport Nginx Access Logs into Kafka with Logging operator
+
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Kafka.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Store Nginx Access Logs in Grafana Loki with Logging operator
+
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Grafana Loki.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy Loki and Grafana
+
+
Add the chart repositories of Loki and Grafana using the following commands:
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Nodegroup-based multitenancy
+
Nodegroup-based multitenancy allows you to have multiple tenants (for example, different developer teams or customer environments) on the same cluster who can configure their own logging resources within their assigned namespaces residing on different node groups.
+These resources are isolated from the resources of the other tenants, so the configuration issues and performance characteristics of one tenant do not affect the others.
Sample setup
The following procedure creates two tenants (A and B) and their respective namespaces on a two-node cluster.
+
+
If you don’t already have a cluster, create one with your provider. For a quick test, you can use a local cluster, for example, using minikube:
minikube start --nodes=2
+
+
Set labels on the nodes that correspond to your tenants, for example, tenant-a and tenant-b.
Splunk operator with Logging operator
+
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Splunk.
Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output (in this case, to Splunk). For more details about the Logging operator, see the Logging operator overview.
Deploy Splunk
First, deploy Splunk Standalone in your Kubernetes cluster. The following procedure is based on the Splunk on Kubernetes quickstart.
Sumo Logic with Logging operator and Fluentd
+
This guide walks you through a simple Sumo Logic setup using the Logging Operator.
+Sumo Logic has Prometheus and logging capabilities as well. Here, we focus only on the logging part.
Configuration
There are 3 crucial plugins needed for a proper Sumo Logic setup.
+
Kubernetes metadata enhancer
Sumo Logic filter
Sumo Logic output
Let’s set up the logging first.
GlobalFilters
The first thing we need to ensure is that the EnhanceK8s filter is present in the globalFilters section of the Logging spec.
+This adds additional data to the log lines (like deployment and service names).
Now we can create a ClusterFlow. Add the Sumo Logic filter to the filters section of the ClusterFlow spec.
+It uses the Kubernetes metadata and moves it to a special field called _sumo_metadata.
+All of those moved fields are sent as HTTP headers to the Sumo Logic endpoint.
+
Note: As we are using Fluent Bit to enrich Kubernetes metadata, we need to specify the field names where this data is stored.
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
Configure the Logging operator
+
+
Create the logging resource with a persistent syslog-ng installation.
Logging operator
+
Welcome to the Logging operator documentation!
Overview
The Logging operator solves your logging-related problems in Kubernetes environments by automating the deployment and configuration of a Kubernetes logging pipeline.
+
The operator deploys and configures a log collector (currently a Fluent Bit DaemonSet) on every node to collect container and application logs from the node file system.
Fluent Bit queries the Kubernetes API and enriches the logs with metadata about the pods, and transfers both the logs and the metadata to a log forwarder instance.
The log forwarder instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. The Logging operator supports Fluentd and syslog-ng (via the AxoSyslog syslog-ng distribution) as log forwarders.
Your logs are always transferred on authenticated and encrypted channels.
This operator helps you bundle logging information with your applications: you can describe the behavior of your application in its charts, the Logging operator does the rest.
Feature highlights
+
Namespace isolation
Native Kubernetes label selectors
Secure communication (TLS)
Configuration validation
Multiple flow support (multiply logs for different transformations)
Multiple output support (store the same logs in multiple storage: S3, GCS, ES, Loki and more…)
Multiple logging system support (multiple Fluentd, Fluent Bit deployment on the same cluster)
Support for both syslog-ng and Fluentd as the central log routing component
Architecture
The Logging operator manages the log collectors and log forwarders of your logging infrastructure, and the routing rules that specify where you want to send your different log messages.
The log collectors are endpoint agents that collect the logs of your Kubernetes nodes and send them to the log forwarders. Logging operator currently uses Fluent Bit as log collector agents.
The log forwarder (also called log aggregator) instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. The Logging operator supports Fluentd and syslog-ng as log forwarders. Which log forwarder is best for you depends on your logging requirements. For tips, see Which log forwarder to use.
You can filter and process the incoming log messages using the flow custom resource of the log forwarder to route them to the appropriate output. The outputs are the destinations where you want to send your log messages, for example, Elasticsearch, or an Amazon S3 bucket. You can also define cluster-wide outputs and flows, for example, to use a centralized output that namespaced users can reference but cannot modify. Note that flows and outputs are specific to the type of log forwarder you use (Fluentd or syslog-ng).
You can configure the Logging operator using the following Custom Resource Definitions.
+
logging - The logging resource defines the logging infrastructure (the log collectors and forwarders) for your cluster that collects and transports your log messages. It can also contain configurations for Fluent Bit, Fluentd, and syslog-ng. (Starting with Logging operator version 4.5, you can also configure Fluent Bit, Fluentd, and syslog-ng as separate resources.)
CRDs for Fluentd:
+
+
output - Defines a Fluentd Output for a logging flow, where the log messages are sent using Fluentd. This is a namespaced resource. See also clusteroutput. To configure syslog-ng outputs, see SyslogNGOutput.
flow - Defines a Fluentd logging flow using filters and outputs. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also clusterflow. To configure syslog-ng flows, see SyslogNGFlow.
clusteroutput - Defines a Fluentd output that is available from all flows and clusterflows. The operator evaluates clusteroutputs in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true.
clusterflow - Defines a Fluentd logging flow that collects logs from all namespaces by default. The operator evaluates clusterflows in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true. To configure syslog-ng clusterflows, see SyslogNGClusterFlow.
CRDs for syslog-ng (these resources work like their Fluentd counterparts, but are tailored to features available via syslog-ng):
+
+
SyslogNGOutput - Defines a syslog-ng Output for a logging flow, where the log messages are sent using syslog-ng. This is a namespaced resource. See also SyslogNGClusterOutput. To configure Fluentd outputs, see output.
SyslogNGFlow - Defines a syslog-ng logging flow using filters and outputs. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also SyslogNGClusterFlow. To configure Fluentd flows, see flow.
SyslogNGClusterOutput - Defines a syslog-ng output that is available from all flows and clusterflows. The operator evaluates clusteroutputs in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true.
SyslogNGClusterFlow - Defines a syslog-ng logging flow that collects logs from all namespaces by default. The operator evaluates clusterflows in the controlNamespace only unless allowClusterResourcesFromAllNamespaces is set to true. To configure Fluentd clusterflows, see clusterflow.
For the detailed CRD documentation, see List of CRDs.
If you encounter problems while using the Logging operator that the documentation does not address, open an issue or talk to us on Discord or on the CNCF Slack.
With the 4.3.0 release, the chart is now distributed through an OCI registry.
+For instructions on how to interact with OCI registries, please take a look at Use OCI-based registries.
+For instructions on installing the previous 4.2.3 version, see Installation for 4.2.
Deploy Logging operator with Helm
+
To install the Logging operator using Helm, complete the following steps.
+
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
+
+
Install the Logging operator into the logging namespace:
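The exact command depends on your environment; a minimal sketch that matches the output below (chart version 4.3.0 pulled from the project's OCI registry) is:
helm upgrade --install --wait \
  --create-namespace --namespace logging \
  logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator \
  --version 4.3.0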
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
+
Note: By default, the Logging operator Helm chart doesn’t install the logging resource. If you want to install it with Helm, set the logging.enabled value to true.
For details on customizing the installation, see the Helm chart values.
Validate the deployment
To verify that the installation was successful, complete the following steps.
+
+
Check the status of the pods. You should see a new logging-operator pod.
kubectl -n logging get pods
+
Expected output:
NAME                                READY   STATUS    RESTARTS   AGE
logging-operator-5df66b87c9-wgsdf   1/1     Running   0          21s
+
+
Check the CRDs. You should see the following new CRDs.
kubectl get crd
+
Expected output:
NAME                                             CREATED AT
clusterflows.logging.banzaicloud.io              2023-08-10T12:05:04Z
clusteroutputs.logging.banzaicloud.io            2023-08-10T12:05:04Z
eventtailers.logging-extensions.banzaicloud.io   2023-08-10T12:05:04Z
flows.logging.banzaicloud.io                     2023-08-10T12:05:04Z
fluentbitagents.logging.banzaicloud.io           2023-08-10T12:05:04Z
hosttailers.logging-extensions.banzaicloud.io    2023-08-10T12:05:04Z
loggings.logging.banzaicloud.io                  2023-08-10T12:05:05Z
nodeagents.logging.banzaicloud.io                2023-08-10T12:05:05Z
outputs.logging.banzaicloud.io                   2023-08-10T12:05:05Z
syslogngclusterflows.logging.banzaicloud.io      2023-08-10T12:05:05Z
syslogngclusteroutputs.logging.banzaicloud.io    2023-08-10T12:05:05Z
syslogngflows.logging.banzaicloud.io             2023-08-10T12:05:05Z
syslogngoutputs.logging.banzaicloud.io           2023-08-10T12:05:06Z
+
Logging infrastructure setup
The following sections describe how to change the configuration of your logging infrastructure, that is, how to configure your log collectors and forwarders.
The logging resource defines the logging infrastructure for your cluster that collects and transports your log messages, and also contains configurations for the Fluent Bit log collector and the Fluentd and syslog-ng log forwarders. It also establishes the controlNamespace, the administrative namespace of the Logging operator. The Fluentd and syslog-ng statefulsets and the Fluent Bit daemonset are deployed in this namespace, and global resources like ClusterOutput and ClusterFlow are evaluated only in this namespace by default - they are ignored in any other namespace unless allowClusterResourcesFromAllNamespaces is set to true.
You can customize the configuration of Fluentd, syslog-ng, and Fluent Bit in the logging resource. The logging resource also declares watchNamespaces, which specifies the namespaces where Flow/SyslogNGFlow and Output/SyslogNGOutput resources are applied to Fluentd's/syslog-ng's configuration.
+
Note: By default, the Logging operator Helm chart doesn’t install the logging resource. If you want to install it with Helm, set the logging.enabled value to true.
For details on customizing the installation, see the Helm chart values.
You can customize the following sections of the logging resource:
+
Generic parameters of the logging resource. For the list of available parameters, see LoggingSpec.
The fluentd statefulset that Logging operator deploys. For a list of parameters, see FluentdSpec. For examples on customizing the Fluentd configuration, see Configure Fluentd.
The syslogNG statefulset that Logging operator deploys. For a list of parameters, see SyslogNGSpec. For examples on customizing the syslog-ng configuration, see Configure syslog-ng.
The fluentbit field is deprecated. Fluent Bit should now be configured separately, see Fluent Bit log collector.
The following example snippets use the logging namespace. To create this namespace if it does not already exist, run:
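For example, with kubectl:
kubectl create namespace logging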
Starting with Logging operator version 4.3, you can use the watchNamespaceSelector selector to select the watched namespaces based on their label, or an expression, for example:
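A minimal sketch of such a selector in the Logging resource (the tenant: team-a label is only illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  watchNamespaceSelector:
    matchLabels:
      tenant: team-a   # hypothetical label that the watched namespaces carry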
Using the standalone FluentdConfig CRD. This method is only available in Logging operator version 4.5 and newer, and the specification of the CRD is compatible with the spec.fluentd configuration method. That way you can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
The standalone FluentdConfig is a namespaced resource that allows the configuration of the Fluentd aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
For the detailed list of available parameters, see FluentdSpec.
Migrating from spec.fluentd to FluentdConfig
The standalone FluentdConfig CRD is only available in Logging operator version 4.5 and newer. Its specification and logic are identical to those of the spec.fluentd configuration method. Using the FluentdConfig CRD allows you to remove the spec.fluentd section from the Logging CRD, which has the following benefits.
+
RBAC control over the FluentdConfig CRD, so you can have separate roles that can manage the Logging resource and the FluentdConfig resource (that is, the Fluentd deployment).
It reduces the size of the Logging resource, which can grow big enough to reach the annotation size limit in certain scenarios (e.g. when using kubectl apply).
You can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
To migrate your spec.fluentd configuration from the Logging resource to a separate FluentdConfig CRD, complete the following steps.
+
+
Open your Logging resource and find the spec.fluentd section. For example:
Create a new FluentdConfig CRD. For the value of metadata.name, use the name of the Logging resource, for example:
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentdConfig
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
  # Use the control namespace of the logging resource
  namespace: logging
+
+
Copy the spec.fluentd section from the Logging resource into the spec section of the FluentdConfig CRD, then fix the indentation. For example:
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentdConfig
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
  # Use the control namespace of the logging resource
  namespace: logging
spec:
  scaling:
    replicas: 2
+
+
Delete the spec.fluentd section from the Logging resource, then apply the Logging and the FluentdConfig CRDs.
Using the standalone FluentdConfig resource
The standalone FluentdConfig is a namespaced resource that allows the configuration of the Fluentd aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
A Logging resource can have only one FluentdConfig at a time. The controller registers the active FluentdConfig resource into the Logging resource’s status under fluentdConfigName, and also registers the Logging resource name under logging in the FluentdConfig resource’s status, for example:
kubectl get logging example -o jsonpath='{.status}' | jq .
{
  "configCheckResults": {
    "ac2d4553": true
  },
  "fluentdConfigName": "example"
}
+
kubectl get fluentdconfig example -o jsonpath='{.status}' | jq .
{
  "active": true,
  "logging": "example"
}
+
If there is a conflict, the controller adds a problem to both resources so that both the operations team and the tenant users can notice the problem. For example, if a FluentdConfig is already registered to a Logging resource and you create another FluentdConfig resource in the same namespace, then the first FluentdConfig is left intact, while the second one should have the following status:
kubectl get fluentdconfig example2 -o jsonpath='{.status}' | jq .
{
  "active": false,
  "problems": [
    "logging already has a detached fluentd configuration, remove excess configuration objects"
  ],
  "problemsCount": 1
}
+
The Logging resource will also show the issue:
kubectl get logging example -o jsonpath='{.status}' | jq .
{
  "configCheckResults": {
    "ac2d4553": true
  },
  "fluentdConfigName": "example",
  "problems": [
    "multiple fluentd configurations found, couldn't associate it with logging"
  ],
  "problemsCount": 1
}
+
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd:
    disablePvc: true
    bufferStorageVolume:
      hostPath:
        path: "" # leave it empty to automatically generate: /opt/logging-operator/default-logging-simple/default-logging-simple-fluentd-buffer
  fluentbit: {}
  controlNamespace: logging
+
FluentOutLogrotate
The following snippet redirects Fluentd’s stdout to a file and configures rotation settings.
This mechanism was used prior to version 4.4 to avoid Fluent Bit rereading Fluentd's logs and causing an exponentially growing amount of redundant logs.
Example configuration used by the operator in version 4.3 and earlier (keep 10 files, 10M each):
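A sketch of what that configuration can look like in the fluentd section of the Logging resource; the field layout assumes the fluentOutLogrotate settings use enabled, path, age, and size keys, so verify it against FluentdSpec before relying on it:
spec:
  fluentd:
    fluentOutLogrotate:
      enabled: true
      path: /fluentd/log/out   # redirect Fluentd's stdout to this file
      age: 10                  # keep 10 rotated files
      size: 10485760           # rotate at 10M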
Fluentd logs are now excluded using the fluentbit.io/exclude: "true" annotation.
Scaling
You can scale the Fluentd deployment manually by changing the number of replicas in the fluentd section of the Logging custom resource. For example:
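A minimal sketch (the resource name and replica count are illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  controlNamespace: logging
  fluentd:
    scaling:
      replicas: 3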
While you can scale down the Fluentd deployment by decreasing the number of replicas in the fluentd section of the Logging custom resource, the scale-down won't automatically be graceful: the controller stops the extra replica pods without waiting for any remaining buffers to be flushed.
You can enable graceful draining in the scaling subsection:
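A sketch of the relevant part of the fluentd section, assuming the switch is drain.enabled under scaling:
spec:
  fluentd:
    scaling:
      replicas: 1
      drain:
        enabled: true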
When graceful draining is enabled, the operator starts drainer jobs for any undrained volumes.
The drainer job flushes any remaining buffers before terminating, and the operator marks the associated volume (the PVC, actually) as drained until it gets used again.
The drainer job has a template very similar to that of the Fluentd deployment, with the addition of a sidecar container that oversees the buffers and signals Fluentd to terminate when all buffers are gone.
Pods created by the job are labeled so that they do not receive any further logs, thus the buffers eventually clear out.
If you want, you can specify a custom drainer job sidecar image in the drain subsection:
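Assuming the drain subsection accepts the same repository and tag image fields as the other components (see the image parameters table later in this section), a sketch could look like this; the image reference is a placeholder:
spec:
  fluentd:
    scaling:
      drain:
        enabled: true
        image:
          repository: registry.example.com/fluentd-drain-watch   # placeholder image
          tag: v1.0.0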
In addition to the drainer job, the operator also creates a placeholder pod with the same name as the terminated pod of the Fluentd deployment to keep the deployment from recreating that pod which would result in concurrent access of the volume.
The placeholder pod just runs a pause container, and goes away as soon as the job has finished successfully, or when the deployment is scaled back up and explicitly flushing the buffers is no longer necessary because the newly created replica takes care of processing them.
You can mark volumes that should be ignored by the drain logic by adding the label logging.banzaicloud.io/drain: no to the PVC.
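For example, for a hypothetical PVC named fluentd-buffer-example-fluentd-1:
kubectl label pvc fluentd-buffer-example-fluentd-1 logging.banzaicloud.io/drain=no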
Autoscaling with HPA
To configure autoscaling of the Fluentd deployment using Horizontal Pod Autoscaler (HPA), complete the following steps.
Install Prometheus and the Prometheus Adapter if you don’t already have them installed on the cluster. Adjust the default Prometheus address values as needed for your environment (set prometheus.url, prometheus.port, and prometheus.path to the appropriate values).
+
(Optional) Install metrics-server to access basic metrics. If the readiness of the metrics-server pod fails with HTTP 500, try adding the --kubelet-insecure-tls flag to the container.
+
If you want to use a custom metric for autoscaling Fluentd and the necessary metric is not available in Prometheus, define a Prometheus recording rule:
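A hypothetical rule that derives the buffer_space_usage_ratio metric used later in this guide from node_filesystem metrics scoped to the /buffers mount; the metric names and label matchers are assumptions, so adapt them to what your buffer metrics exporter actually exposes:
groups:
  - name: fluentd-buffer
    rules:
      - record: buffer_space_usage_ratio
        expr: |
          1 - (
            node_filesystem_avail_bytes{mountpoint="/buffers"}
            /
            node_filesystem_size_bytes{mountpoint="/buffers"}
          )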
Alternatively, you can define the derived metric as a configuration rule in the Prometheus Adapter’s config map.
+
If it’s not already installed, install the logging-operator and configure a logging resource with at least one flow. Make sure that the logging resource has buffer volume metrics monitoring enabled under spec.fluentd:
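A sketch of the relevant snippet, assuming the switch is the bufferVolumeMetrics field of the fluentd spec:
spec:
  fluentd:
    bufferVolumeMetrics:
      serviceMonitor: true   # also create a ServiceMonitor so Prometheus scrapes the buffer metrics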
Verify that the custom metric is available by running:
kubectl get --raw '/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/buffer_space_usage_ratio'
+
+
The logging-operator enforces the replica count of the stateful set based on the logging resource's replica count, even if it's not set explicitly. To allow HPA to control the replica count of the stateful set, this coupling has to be severed.
+Currently, the only way to do that is by deleting the logging-operator deployment.
+
Create a HPA resource. The following example tries to keep the average buffer volume usage of Fluentd instances at 80%.
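A sketch of such an HPA; the StatefulSet name and the replica bounds are illustrative, and the 800m target corresponds to 80% average usage of the buffer_space_usage_ratio custom metric verified above:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fluentd
  namespace: logging
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: example-fluentd   # name of the Fluentd statefulset, illustrative
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: buffer_space_usage_ratio
        target:
          type: AverageValue
          averageValue: 800m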
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container. You can configure a probe for Fluentd in the livenessProbe section of the Logging custom resource. For example:
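An illustrative probe using a standard Kubernetes exec handler; the command and timings are examples only:
spec:
  fluentd:
    livenessProbe:
      exec:
        command:
          - /bin/sh
          - -c
          - ls /buffers   # hypothetical check that the buffer directory is accessible
      initialDelaySeconds: 30
      periodSeconds: 15
      failureThreshold: 3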
You can deploy custom images by overriding the default images using the following parameters in the fluentd or fluentbit sections of the logging resource.
+
Name         Type     Default   Description
repository   string   ""        Image repository
tag          string   ""        Image tag
pullPolicy   string   ""        Always, IfNotPresent, Never
The following example deploys a custom fluentd image:
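A sketch using the parameters from the table above; the registry and tag are placeholders:
spec:
  fluentd:
    image:
      repository: registry.example.com/custom-fluentd   # placeholder repository
      tag: v1.16-custom                                 # placeholder tag
      pullPolicy: IfNotPresent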
Represents a host path mapped into a pod. If path is empty, it will automatically be set to /opt/logging-operator/<name of the logging CR>/<name of the volume>
Using the standalone syslogNGConfig CRD. This method is only available in Logging operator version 4.5 and newer, and the specification of the CRD is compatible with the spec.syslogNG configuration method. That way you can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
The standalone syslogNGConfig is a namespaced resource that allows the configuration of the syslog-ng aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
For the detailed list of available parameters, see SyslogNGSpec.
Migrating from spec.syslogNG to syslogNGConfig
The standalone syslogNGConfig CRD is only available in Logging operator version 4.5 and newer. Its specification and logic are identical to those of the spec.syslogNG configuration method. Using the syslogNGConfig CRD allows you to remove the spec.syslogNG section from the Logging CRD, which has the following benefits.
+
RBAC control over the syslogNGConfig CRD, so you can have separate roles that can manage the Logging resource and the syslogNGConfig resource (that is, the syslog-ng deployment).
It reduces the size of the Logging resource, which can grow big enough to reach the annotation size limit in certain scenarios (e.g. when using kubectl apply).
You can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
To migrate your spec.syslogNG configuration from the Logging resource to a separate syslogNGConfig CRD, complete the following steps.
+
+
Open your Logging resource and find the spec.syslogNG section. For example:
Create a new syslogNGConfig CRD. For the value of metadata.name, use the name of the Logging resource, for example:
apiVersion: logging.banzaicloud.io/v1beta1
kind: syslogNGConfig
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
  # Use the control namespace of the logging resource
  namespace: logging
+
+
Copy the spec.syslogNG section from the Logging resource into the spec section of the syslogNGConfig CRD, then fix the indentation. For example:
apiVersion: logging.banzaicloud.io/v1beta1
kind: syslogNGConfig
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
  # Use the control namespace of the logging resource
  namespace: logging
spec:
  scaling:
    replicas: 2
+
+
Delete the spec.syslogNG section from the Logging resource, then apply the Logging and the syslogNGConfig CRDs.
Using the standalone syslogNGConfig resource
The standalone syslogNGConfig is a namespaced resource that allows the configuration of the syslog-ng aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
A Logging resource can have only one syslogNGConfig at a time. The controller registers the active syslogNGConfig resource into the Logging resource’s status under syslogNGConfigName, and also registers the Logging resource name under logging in the syslogNGConfig resource’s status, for example:
kubectl get logging example -o jsonpath='{.status}' | jq .
{
  "configCheckResults": {
    "ac2d4553": true
  },
  "syslogNGConfigName": "example"
}
+
kubectl get syslogngconfig example -o jsonpath='{.status}' | jq .
{
  "active": true,
  "logging": "example"
}
+
If there is a conflict, the controller adds a problem to both resources so that both the operations team and the tenant users can notice the problem. For example, if a syslogNGConfig is already registered to a Logging resource and you create another syslogNGConfig resource in the same namespace, then the first syslogNGConfig is left intact, while the second one should have the following status:
kubectl get syslogngconfig example2 -o jsonpath='{.status}' | jq .
{
  "active": false,
  "problems": [
    "logging already has a detached syslog-ng configuration, remove excess configuration objects"
  ],
  "problemsCount": 1
}
+
The Logging resource will also show the issue:
kubectl get logging example -o jsonpath='{.status}' | jq .
{
  "configCheckResults": {
    "ac2d4553": true
  },
  "syslogNGConfigName": "example",
  "problems": [
    "multiple syslog-ng configurations found, couldn't associate it with logging"
  ],
  "problemsCount": 1
}
+
Volume mount for buffering
The following example sets a volume mount that syslog-ng can use for buffering messages on the disk (if Disk buffer is configured in the output).
To adjust the CPU and memory limits and requests of the pods managed by Logging operator, see CPU and memory requirements.
Probe
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container. You can configure a probe for syslog-ng in the livenessProbe section of the Logging custom resource. For example:
Fluent Bit log collector
Fluent Bit is an open source and multi-platform Log Processor and Forwarder which allows you to collect data/logs from different sources, unify and send them to multiple destinations.
Logging operator uses Fluent Bit as a log collector agent: Logging operator deploys Fluent Bit to your Kubernetes nodes where it collects and enriches the local logs and transfers them to a log forwarder instance.
Ways to configure Fluent Bit
There are three ways to configure the Fluent Bit daemonset:
+
Using the spec.fluentbit section of the Logging custom resource. This method is deprecated and will be removed in the next major release.
Using the standalone FluentbitAgent CRD. This method is only available in Logging operator version 4.2 and newer, and the specification of the CRD is compatible with the spec.fluentbit configuration method.
Using the spec.nodeagents section of the Logging custom resource. This method is deprecated and will be removed from the Logging operator. (Note that this configuration isn’t compatible with the FluentbitAgent CRD.)
For the detailed list of available parameters, see FluentbitSpec.
Migrating from spec.fluentbit to FluentbitAgent
The standalone FluentbitAgent CRD is only available in Logging operator version 4.2 and newer. Its specification and logic are identical to those of the spec.fluentbit configuration method. Using the FluentbitAgent CRD allows you to remove the spec.fluentbit section from the Logging CRD, which has the following benefits.
+
RBAC control over the FluentbitAgent CRD, so you can have separate roles that can manage the Logging resource and the FluentbitAgent resource (that is, the Fluent Bit deployment).
It reduces the size of the Logging resource, which can grow big enough to reach the annotation size limit in certain scenarios (e.g. when using kubectl apply).
Create a new FluentbitAgent CRD. For the value of metadata.name, use the name of the Logging resource, for example:
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
+
+
Copy the spec.fluentbit section from the Logging resource into the spec section of the FluentbitAgent CRD, then fix the indentation.
+
Specify the paths for the positiondb and the bufferStorageVolume. If you used the default settings in the spec.fluentbit configuration, set empty strings as paths, like in the following example. This is needed to retain the existing buffers of the deployment, otherwise data loss may occur.
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  # Use the name of the logging resource
  name: example-logging-resource
spec:
  inputTail:
    storage.type: filesystem
  positiondb:
    hostPath:
      path: ""
  bufferStorageVolume:
    hostPath:
      path: ""
+
+
Delete the spec.fluentbit section from the Logging resource, then apply the Logging and the FluentbitAgent CRDs.
Examples
The following sections show you some examples on configuring Fluent Bit. For the detailed list of available parameters, see FluentbitSpec.
+
Note: These examples use the traditional method that configures the Fluent Bit deployment using the spec.fluentbit section of the Logging custom resource.
Filters
Kubernetes (filterKubernetes)
Fluent Bit Kubernetes Filter allows you to enrich your log files with Kubernetes metadata. For example:
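A sketch that enables the filter through the deprecated spec.fluentbit section; the Kube_URL value shown is the in-cluster API server default:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  controlNamespace: logging
  fluentbit:
    filterKubernetes:
      Kube_URL: "https://kubernetes.default.svc:443"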
For the detailed list of available parameters for this plugin, see InputTail.
+More Info.
Buffering
Buffering in Fluent Bit places the processed data into a temporary location until it is sent to Fluentd. By default, the Logging operator sets storage.path to /buffers and leaves the Fluent Bit defaults for the other options.
Represents a host path mapped into a pod. If path is empty, it will automatically be set to /opt/logging-operator/<name of the logging CR>/<name of the volume>
To adjust the CPU and memory limits and requests of the pods managed by Logging operator, see CPU and memory requirements.
Probe
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container. You can configure a probe for Fluent Bit in the livenessProbe section of the Logging custom resource. For example:
Multiple Fluent Bit agents in the cluster
There can be at least two different use cases where one might need multiple sets of node agents running with different configurations while still forwarding logs to the same aggregator.
One specific example is when there is a need for a configuration change in a rolling upgrade manner. As new nodes come up, they need to run with a new configuration, while old nodes use the previous configuration.
The other use case is when there are different node groups in a cluster, for example, for multi-tenancy reasons. In that case, you might need different Fluent Bit configurations on the separate node groups.
Starting with Logging operator version 4.2, you can do that by using the FluentbitAgent CRD. This allows you to implement hard multitenancy on the node group level.
To configure multiple FluentbitAgent CRDs for a cluster, complete the following steps.
+
Note: The examples refer to a scenario where you have two node groups that have the Kubernetes labels nodeGroup=A and nodeGroup=B. These labels are fictional and are used only as examples. Node labels are not available in the log metadata; to have similar labels, you have to apply the node labels directly to the pods. How to do that is beyond the scope of this guide (for example, you can use a policy engine, like Kyverno).
Edit your existing FluentbitAgent CRD, and set the spec.nodeSelector field so it applies only to the node group you want to apply this Fluent Bit configuration on, for example, nodes that have the label nodeGroup=A. For details, see nodeSelector in the Kubernetes documentation.
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  # Use the same name as the logging resource does
  name: multi
spec:
  nodeSelector:
    nodeGroup: "A"
+
+
Note: If your Logging resource has its spec.loggingRef parameter set, set the same value in the spec.loggingRef parameter of the FluentbitAgent resource.
Create a new FluentbitAgent CRD, and set the spec.nodeSelector field so it applies only to the node group you want to apply this Fluent Bit configuration on, for example, nodes that have the label nodeGroup=B. For details, see nodeSelector in the Kubernetes documentation.
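Mirroring the previous example, the second resource could look like this (the name multi-b is only illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: multi-b
spec:
  nodeSelector:
    nodeGroup: "B"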
Note: If your Logging resource has its spec.loggingRef parameter set, set the same value in the spec.loggingRef parameter of the FluentbitAgent resource.
+
+
+
+
+
+
\ No newline at end of file
diff --git a/4.6/docs/logging-infrastructure/fluentbit-multiple/index.html b/4.6/docs/logging-infrastructure/fluentbit-multiple/index.html
new file mode 100644
index 000000000..67fb04cd1
--- /dev/null
+++ b/4.6/docs/logging-infrastructure/fluentbit-multiple/index.html
@@ -0,0 +1,664 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Multiple Fluent Bit agents in the cluster | Logging operator
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Multiple Fluent Bit agents in the cluster
+
There can be at least two different use cases where one might need multiple sets of node agents running with different configuration while still forwarding logs to the same aggregator.
One specific example is when there is a need for a configuration change in a rolling upgrade manner. As new nodes come up, they need to run with a new configuration, while old nodes use the previous configuration.
The other use case is when there are different node groups in a cluster for multitenancy reasons for example. You might need different Fluent Bit configurations on the separate node groups in that case.
Starting with Logging operator version 4.2, you can do that by using the FluentbitAgent CRD. This allows you to implement hard multitenancy on the node group level.
To configure multiple FluentbitAgent CRDs for a cluster, complete the following steps.
+
Note: The examples refer to a scenario where you have two node groups that have the Kubernetes label nodeGroup=A and nodeGroup=B. These labels are fictional and are used only as examples. Node labels are not available in the log metadata, to have similar labels, you have to apply the node labels directly to the pods. How to do that is beyond the scope of this guide (for example, you can use a policy engine, like Kyverno).
Edit your existing FluentbitAgent CRD, and set the spec.nodeSelector field so it applies only to the node group you want to apply this Fluent Bit configuration on, for example, nodes that have the label nodeGroup=A. For details, see nodeSelector in the Kubernetes documentation.
apiVersion:logging.banzaicloud.io/v1beta1
+kind:FluentbitAgent
+metadata:
+# Use the same name as the logging resource does
+name:multi
+spec:
+nodeSelector:
+nodeGroup:"A"
+
+
Note: If your Logging resource has its spec.loggingRef parameter set, set the same value in the spec.loggingRef parameter of the FluentbitAgent resource.
Create a new FluentbitAgent CRD, and set the spec.nodeSelector field so it applies only to the node group you want to apply this Fluent Bit configuration on, for example, nodes that have the label nodeGroup=B. For details, see nodeSelector in the Kubernetes documentation.
Note: If your Logging resource has its spec.loggingRef parameter set, set the same value in the spec.loggingRef parameter of the FluentbitAgent resource.
+
+
+
+
+
+
\ No newline at end of file
diff --git a/4.6/docs/logging-infrastructure/fluentbit-multiple/releases.releases b/4.6/docs/logging-infrastructure/fluentbit-multiple/releases.releases
new file mode 100644
index 000000000..a16865115
--- /dev/null
+++ b/4.6/docs/logging-infrastructure/fluentbit-multiple/releases.releases
@@ -0,0 +1,8 @@
+
+ latest (4.6.0)
+ 4.5
+ 4.4
+ 4.3
+ 4.2
+ 4.0
+
\ No newline at end of file
diff --git a/4.6/docs/logging-infrastructure/fluentbit/index.html b/4.6/docs/logging-infrastructure/fluentbit/index.html
new file mode 100644
index 000000000..f8eef907d
--- /dev/null
+++ b/4.6/docs/logging-infrastructure/fluentbit/index.html
@@ -0,0 +1,775 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Fluent Bit log collector | Logging operator
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Fluent Bit log collector
+
Fluent Bit is an open source and multi-platform Log Processor and Forwarder which allows you to collect data/logs from different sources, unify and send them to multiple destinations.
Logging operator uses Fluent Bit as a log collector agent: Logging operator deploys Fluent Bit to your Kubernetes nodes where it collects and enriches the local logs and transfers them to a log forwarder instance.
Ways to configure Fluent Bit
There are three ways to configure the Fluent Bit daemonset:
+
Using the spec.fluentbit section of The Logging custom resource. This method is deprecated and will be removed in the next major release.
Using the standalone FluentbitAgent CRD. This method is only available in Logging operator version 4.2 and newer, and the specification of the CRD is compatible with the spec.fluentbit configuration method.
Using the spec.nodeagents section of The Logging custom resource. This method is deprecated and will be removed from the Logging operator. (Note that this configuration isn’t compatible with the FluentbitAgent CRD.)
For the detailed list of available parameters, see FluentbitSpec.
Migrating from spec.fluentbit to FluentbitAgent
The standalone FluentbitAgent CRD is only available in Logging operator version 4.2 and newer. Its specification and logic is identical with the spec.fluentbit configuration method. Using the FluentbitAgent CRD allows you to remove the spec.fluentbit section from the Logging CRD, which has the following benefits.
+
RBAC control over the FluentbitAgent CRD, so you can have separate roles that can manage the Logging resource and the FluentbitAgent resource (that is, the Fluent Bit deployment).
It reduces the size of the Logging resource, which can grow big enough to reach the annotation size limit in certain scenarios (e.g. when using kubectl apply).
Create a new FluentbitAgent CRD. For the value of metadata.name, use the name of the Logging resource, for example:
apiVersion:logging.banzaicloud.io/v1beta1
+kind:FluentbitAgent
+metadata:
+# Use the name of the logging resource
+name:example-logging-resource
+
+
Copy the the spec.fluentbit section from the Logging resource into the spec section of the FluentbitAgent CRD, then fix the indentation.
+
Specify the paths for the positiondb and the bufferStorageVolume. If you used the default settings in the spec.fluentbit configuration, set empty strings as paths, like in the following example. This is needed to retain the existing buffers of the deployment, otherwise data loss may occur.
apiVersion:logging.banzaicloud.io/v1beta1
+kind:FluentbitAgent
+metadata:
+# Use the name of the logging resource
+name:example-logging-resource
+spec:
+inputTail:
+storage.type:filesystem
+positiondb:
+hostPath:
+path:""
+bufferStorageVolume:
+hostPath:
+path:""
+
+
Delete the spec.fluentbit section from the Logging resource, then apply the Logging and the FluentbitAgent CRDs.
Examples
The following sections show you some examples on configuring Fluent Bit. For the detailed list of available parameters, see FluentbitSpec.
+
Note: These examples use the traditional method that configures the Fluent Bit deployment using spec.fluentbit section of The Logging custom resource.
Filters
Kubernetes (filterKubernetes)
Fluent Bit Kubernetes Filter allows you to enrich your log files with Kubernetes metadata. For example:
For the detailed list of available parameters for this plugin, see InputTail.
+More Info.
Buffering
Buffering in Fluent Bit places the processed data into a temporal location until is sent to Fluentd. By default, the Logging operator sets storage.path to /buffers and leaves fluent-bit defaults for the other options.
Represents a host path mapped into a pod. If path is empty, it will automatically be set to /opt/logging-operator/<name of the logging CR>/<name of the volume>
To adjust the CPU and memory limits and requests of the pods managed by Logging operator, see CPU and memory requirements.
Probe
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container. You can configure a probe for Fluent Bit in the livenessProbe section of the The Logging custom resource. For example:
Using the standalone FluentdConfig CRD. This method is only available in Logging operator version 4.5 and newer, and the specification of the CRD is compatible with the spec.fluentd configuration method. That way you can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
The standalone FluentdConfig is a namespaced resource that allows the configuration of the Fluentd aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
For the detailed list of available parameters, see FluentdSpec.
Migrating from spec.fluentd to FluentdConfig
The standalone FluentdConfig CRD is only available in Logging operator version 4.5 and newer. Its specification and logic is identical with the spec.fluentd configuration method. Using the FluentdConfig CRD allows you to remove the spec.fluentd section from the Logging CRD, which has the following benefits.
+
RBAC control over the FluentdConfig CRD, so you can have separate roles that can manage the Logging resource and the FluentdConfig resource (that is, the Fluentd deployment).
It reduces the size of the Logging resource, which can grow big enough to reach the annotation size limit in certain scenarios (e.g. when using kubectl apply).
You can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
To migrate your spec.fluentd configuration from the Logging resource to a separate FluentdConfig CRD, complete the following steps.
+
+
Open your Logging resource and find the spec.fluentd section. For example:
Create a new FluentdConfig CRD. For the value of metadata.name, use the name of the Logging resource, for example:
apiVersion:logging.banzaicloud.io/v1beta1
+kind:FluentdConfig
+metadata:
+# Use the name of the logging resource
+name:example-logging-resource
+# Use the control namespace of the logging resource
+namespace:logging
+
+
Copy the the spec.fluentd section from the Logging resource into the spec section of the FluentdConfig CRD, then fix the indentation. For example:
apiVersion:logging.banzaicloud.io/v1beta1
+kind:FluentdConfig
+metadata:
+# Use the name of the logging resource
+name:example-logging-resource
+# Use the control namespace of the logging resource
+namespace:logging
+spec:
+scaling:
+replicas:2
+
+
Delete the spec.fluentd section from the Logging resource, then apply the Logging and the FluentdConfig CRDs.
Using the standalone FluentdConfig resource
The standalone FluentdConfig is a namespaced resource that allows the configuration of the Fluentd aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
A Logging resource can have only one FluentdConfig at a time. The controller registers the active FluentdConfig resource into the Logging resource’s status under fluentdConfigName, and also registers the Logging resource name under logging in the FluentdConfig resource’s status, for example:
kubectl get logging example -o jsonpath='{.status}'| jq .
+{
+"configCheckResults": {
+"ac2d4553": true
+},
+"fluentdConfigName": "example"
+}
+
kubectl get fluentdconfig example -o jsonpath='{.status}'| jq .
+{
+"active": true,
+"logging": "example"
+}
+
If there is a conflict, the controller adds a problem to both resources so that both the operations team and the tenant users can notice the problem. For example, if a FluentdConfig is already registered to a Logging resource and you create another FluentdConfig resource in the same namespace, then the first FluentdConfig is left intact, while the second one should have the following status:
kubectl get fluentdconfig example2 -o jsonpath='{.status}'| jq .
+{
+"active": false,
+"problems": [
+"logging already has a detached fluentd configuration, remove excess configuration objects"
+],
+"problemsCount": 1
+}
+
The Logging resource will also show the issue:
kubectl get logging example -o jsonpath='{.status}'| jq .
+{
+"configCheckResults": {
+"ac2d4553": true
+},
+"fluentdConfigName": "example",
+"problems": [
+"multiple fluentd configurations found, couldn't associate it with logging"
+],
+"problemsCount": 1
+}
+
apiVersion:logging.banzaicloud.io/v1beta1
+kind:Logging
+metadata:
+name:default-logging-simple
+spec:
+fluentd:
+disablePvc:true
+bufferStorageVolume:
+hostPath:
+path:""# leave it empty to automatically generate: /opt/logging-operator/default-logging-simple/default-logging-simple-fluentd-buffer
+fluentbit:{}
+controlNamespace:logging
+
FluentOutLogrotate
The following snippet redirects Fluentd’s stdout to a file and configures rotation settings.
This mechanism was used prior to version 4.4 to avoid Fluent-bit rereading Fluentd’s logs and causing an exponentially growing amount of redundant logs.
Example configuration used by the operator in version 4.3 and earlier (keep 10 files, 10M each):
Fluentd logs are now excluded using the fluentbit.io/exclude: "true" annotation.
Scaling
You can scale the Fluentd deployment manually by changing the number of replicas in the fluentd section of the The Logging custom resource. For example:
While you can scale down the Fluentd deployment by decreasing the number of replicas in the fluentd section of the The Logging custom resource, it won’t automatically be graceful, as the controller will stop the extra replica pods without waiting for any remaining buffers to be flushed.
+You can enable graceful draining in the scaling subsection:
When graceful draining is enabled, the operator starts drainer jobs for any undrained volumes.
+The drainer job flushes any remaining buffers before terminating, and the operator marks the associated volume (the PVC, actually) as drained until it gets used again.
+The drainer job has a template very similar to that of the Fluentd deployment with the addition of a sidecar container that oversees the buffers and signals Fluentd to terminate when all buffers are gone.
+Pods created by the job are labeled as not to receive any further logs, thus buffers will clear out eventually.
If you want, you can specify a custom drainer job sidecar image in the drain subsection:
In addition to the drainer job, the operator also creates a placeholder pod with the same name as the terminated pod of the Fluentd deployment to keep the deployment from recreating that pod which would result in concurrent access of the volume.
+The placeholder pod just runs a pause container, and goes away as soon as the job has finished successfully or the deployment is scaled back up and explicitly flushing the buffers is no longer necessary because the newly created replica will take care of processing them.
You can mark volumes that should be ignored by the drain logic by adding the label logging.banzaicloud.io/drain: no to the PVC.
Autoscaling with HPA
To configure autoscaling of the Fluentd deployment using Horizontal Pod Autoscaler (HPA), complete the following steps.
Install Prometheus and the Prometheus Adapter if you don’t already have them installed on the cluster. Adjust the default Prometheus address values as needed for your environment (set prometheus.url, prometheus.port, and prometheus.path to the appropriate values).
+
(Optional) Install metrics-server to access basic metrics. If the readiness of the metrics-server pod fails with HTTP 500, try adding the --kubelet-insecure-tls flag to the container.
+
If you want to use a custom metric for autoscaling Fluentd and the necessary metric is not available in Prometheus, define a Prometheus recording rule:
Alternatively, you can define the derived metric as a configuration rule in the Prometheus Adapter’s config map.
+
If it’s not already installed, install the logging-operator and configure a logging resource with at least one flow. Make sure that the logging resource has buffer volume metrics monitoring enabled under spec.fluentd:
Verify that the custom metric is available by running:
kubectl get --raw '/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/buffer_space_usage_ratio'
+
+
The logging-operator enforces the replica count of the stateful set based on the logging resource’s replica count, even if it’s not set explicitly. To allow for HPA to control the replica count of the stateful set, this coupling has to be severed.
+Currently, the only way to do that is by deleting the logging-operator deployment.
+
Create a HPA resource. The following example tries to keep the average buffer volume usage of Fluentd instances at 80%.
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container. You can configure a probe for Fluentd in the livenessProbe section of the The Logging custom resource. For example:
You can deploy custom images by overriding the default images using the following parameters in the fluentd or fluentbit sections of the logging resource.
+
+
+
Name
Type
Default
Description
+
+
repository
string
""
Image repository
+
tag
string
""
Image tag
+
pullPolicy
string
""
Always, IfNotPresent, Never
The following example deploys a custom fluentd image:
Represents a host path mapped into a pod. If path is empty, it will automatically be set to /opt/logging-operator/<name of the logging CR>/<name of the volume>
+
+
+
+
+
+
\ No newline at end of file
diff --git a/4.6/docs/logging-infrastructure/fluentd/releases.releases b/4.6/docs/logging-infrastructure/fluentd/releases.releases
new file mode 100644
index 000000000..9598f4093
--- /dev/null
+++ b/4.6/docs/logging-infrastructure/fluentd/releases.releases
@@ -0,0 +1,8 @@
+
+ latest (4.6.0)
+ 4.5
+ 4.4
+ 4.3
+ 4.2
+ 4.0
+
\ No newline at end of file
diff --git a/4.6/docs/logging-infrastructure/index.html b/4.6/docs/logging-infrastructure/index.html
new file mode 100644
index 000000000..49e12daf1
--- /dev/null
+++ b/4.6/docs/logging-infrastructure/index.html
@@ -0,0 +1,628 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Logging infrastructure setup | Logging operator
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Logging infrastructure setup
+
The following sections describe how to change the configuration of your logging infrastructure, that is, how to configure your log collectors and forwarders.
The logging resource defines the logging infrastructure for your cluster that collects and transports your log messages, and also contains configurations for the Fluent Bit log collector and the Fluentd and syslog-ng log forwarders. It also establishes the controlNamespace, the administrative namespace of the Logging operator. The Fluentd and syslog-ng statefulsets and the Fluent Bit daemonset are deployed in this namespace, and global resources like ClusterOutput and ClusterFlow are evaluated only in this namespace by default - they are ignored in any other namespace unless allowClusterResourcesFromAllNamespaces is set to true.
You can customize the configuration of Fluentd, syslog-ng, and Fluent Bit in the logging resource. The logging resource also declares watchNamespaces, that specifies the namespaces where Flow/SyslogNGFlow and Output/SyslogNGOutput resources will be applied into Fluentd’s/syslog-ng’s configuration.
+
Note: By default, the Logging operator Helm chart doesn’t install the logging resource. If you want to install it with Helm, set the logging.enabled value to true.
For details on customizing the installation, see the Helm chart values.
You can customize the following sections of the logging resource:
+
Generic parameters of the logging resource. For the list of available parameters, see LoggingSpec.
The fluentd statefulset that Logging operator deploys. For a list of parameters, see FluentdSpec. For examples on customizing the Fluentd configuration, see Configure Fluentd.
The syslogNG statefulset that Logging operator deploys. For a list of parameters, see SyslogNGSpec. For examples on customizing the Fluentd configuration, see Configure syslog-ng.
The fluentbit field is deprecated. Fluent Bit should now be configured separately, see Fluent Bit log collector.
The following example snippets use the logging namespace. To create this namespace if it does not already exist, run:
Starting with Logging operator version 4.3, you can use the watchNamespaceSelector selector to select the watched namespaces based on their label, or an expression, for example:
Using the standalone syslogNGConfig CRD. This method is only available in Logging operator version 4.5 and newer, and the specification of the CRD is compatible with the spec.syslogNG configuration method. That way you can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
The standalone syslogNGConfig is a namespaced resource that allows the configuration of the syslog-ng aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
For the detailed list of available parameters, see SyslogNGSpec.
Migrating from spec.syslogNG to syslogNGConfig
The standalone syslogNGConfig CRD is only available in Logging operator version 4.5 and newer. Its specification and logic is identical with the spec.syslogNG configuration method. Using the syslogNGConfig CRD allows you to remove the spec.syslogNG section from the Logging CRD, which has the following benefits.
+
RBAC control over the syslogNGConfig CRD, so you can have separate roles that can manage the Logging resource and the syslogNGConfig resource (that is, the syslog-ng deployment).
It reduces the size of the Logging resource, which can grow big enough to reach the annotation size limit in certain scenarios (e.g. when using kubectl apply).
You can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
To migrate your spec.syslogNG configuration from the Logging resource to a separate syslogNGConfig CRD, complete the following steps.
+
+
Open your Logging resource and find the spec.syslogNG section. For example:
Create a new syslogNGConfig CRD. For the value of metadata.name, use the name of the Logging resource, for example:
apiVersion:logging.banzaicloud.io/v1beta1
+kind:syslogNGConfig
+metadata:
+# Use the name of the logging resource
+name:example-logging-resource
+# Use the control namespace of the logging resource
+namespace:logging
+
+
Copy the the spec.syslogNG section from the Logging resource into the spec section of the syslogNGConfig CRD, then fix the indentation. For example:
apiVersion:logging.banzaicloud.io/v1beta1
+kind:syslogNGConfig
+metadata:
+# Use the name of the logging resource
+name:example-logging-resource
+# Use the control namespace of the logging resource
+namespace:logging
+spec:
+scaling:
+replicas:2
+
+
Delete the spec.syslogNG section from the Logging resource, then apply the Logging and the syslogNGConfig CRDs.
Using the standalone syslogNGConfig resource
The standalone syslogNGConfig is a namespaced resource that allows the configuration of the syslog-ng aggregator in the control namespace, separately from the Logging resource. This allows you to use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team. For more information about the multi-tenancy model where the collector is capable of routing logs based on namespaces to individual aggregators and where aggregators are fully isolated, see this blog post about Multi-tenancy using Logging operator.
A Logging resource can have only one syslogNGConfig at a time. The controller registers the active syslogNGConfig resource into the Logging resource’s status under syslogNGConfigName, and also registers the Logging resource name under logging in the syslogNGConfig resource’s status, for example:
kubectl get logging example -o jsonpath='{.status}' | jq .
{
  "configCheckResults": {
    "ac2d4553": true
  },
  "syslogNGConfigName": "example"
}
+
kubectl get syslogngconfig example -o jsonpath='{.status}' | jq .
{
  "active": true,
  "logging": "example"
}
+
If there is a conflict, the controller adds a problem to both resources so that both the operations team and the tenant users can notice the problem. For example, if a syslogNGConfig is already registered to a Logging resource and you create another syslogNGConfig resource in the same namespace, then the first syslogNGConfig is left intact, while the second one should have the following status:
kubectl get syslogngconfig example2 -o jsonpath='{.status}' | jq .
{
  "active": false,
  "problems": [
    "logging already has a detached syslog-ng configuration, remove excess configuration objects"
  ],
  "problemsCount": 1
}
+
The Logging resource will also show the issue:
kubectl get logging example -o jsonpath='{.status}' | jq .
{
  "configCheckResults": {
    "ac2d4553": true
  },
  "syslogNGConfigName": "example",
  "problems": [
    "multiple syslog-ng configurations found, couldn't associate it with logging"
  ],
  "problemsCount": 1
}
+
Volume mount for buffering
The following example sets a volume mount that syslog-ng can use for buffering messages on the disk (if Disk buffer is configured in the output).
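For illustration, a sketch of overriding the syslog-ng statefulset to mount a persistent volume at the buffer path (the resource name and claim name are placeholders):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  syslogNG:
    statefulSet:
      spec:
        template:
          spec:
            containers:
              - name: syslog-ng
                volumeMounts:
                  - mountPath: /buffers
                    name: buffer
            volumes:
              - name: buffer
                persistentVolumeClaim:
                  # placeholder claim name, create the PVC separately
                  claimName: syslog-ng-buffer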
To adjust the CPU and memory limits and requests of the pods managed by Logging operator, see CPU and memory requirements.
Probe
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container. You can configure a probe for syslog-ng in the livenessProbe section of the Logging custom resource. For example:
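A sketch of such a probe (the command and timings are illustrative, not the operator defaults):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  syslogNG:
    livenessProbe:
      exec:
        # illustrative command; adjust to your environment
        command:
          - /usr/sbin/syslog-ng-ctl
          - stats
      initialDelaySeconds: 30
      periodSeconds: 60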
1 - Troubleshooting
Verify that the Logging operator pod is running. Issue the following command: kubectl get pods | grep logging-operator
The output should include a running pod, for example:
NAME READY STATUS RESTARTS AGE
+logging-demo-log-generator-6448d45cd9-z7zk8 1/1 Running 0 24m
+
+
Check the status of your resources. Beginning with Logging Operator 3.8, all custom resources have a Status and a Problems field. In a healthy system, the Problems field of the resources is empty, for example:
kubectl get clusteroutput -A
+
Sample output:
NAMESPACE   NAME      ACTIVE   PROBLEMS
default     nullout   true
+
The ACTIVE column indicates that the ClusterOutput has successfully passed the configcheck and is present in the current fluentd configuration. When no errors are reported, the PROBLEMS column is empty.
Take a look at another example, in which we have an incorrect ClusterFlow.
kubectl get clusterflow -o wide
+
Sample output:
NAME      ACTIVE   PROBLEMS
all-log   true
nullout   false    1
+
You can see that the nullout ClusterFlow is inactive and there is 1 problem with the configuration. To display the problem, check the status field of the object, for example:
kubectl get clusterflow nullout -o=jsonpath='{.status}'| jq
+
Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing.
+
1.1 - Troubleshooting Fluent Bit
The following sections help you troubleshoot the Fluent Bit component of the Logging operator.
Check the Fluent Bit daemonset
Verify that the Fluent Bit daemonset is available. Issue the following command: kubectl get daemonsets
+The output should include a Fluent Bit daemonset, for example:
NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
logging-demo-fluentbit   1         1         1       1            1           <none>          110s
+
Check the Fluent Bit configuration
You can display the current configuration of the Fluent Bit daemonset using the following command:
+kubectl get secret logging-demo-fluentbit -o jsonpath="{.data['fluent-bit\.conf']}" | base64 --decode
All Fluent Bit image tags have a debug version marked with the -debug suffix. You can install this debug version using the following command:
+kubectl edit loggings.logging.banzaicloud.io logging-demo
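In the editor, point the Fluent Bit image to the matching -debug tag; an illustrative sketch (the tag value is a placeholder):
spec:
  fluentbit:
    image:
      # placeholder tag; use the -debug variant of the tag you are running
      tag: 2.1.8-debug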
After deploying the debug version, you can kubectl exec into the pod using sh and look around. For example: kubectl exec -it logging-demo-fluentbit-778zg sh
Check the queued log messages
You can check the buffer directory if Fluent Bit is configured to buffer queued log messages to disk instead of in memory. (You can configure it through the InputTail fluentbit config, by setting the storage.type field to filesystem.)
kubectl exec -it logging-demo-fluentbit-9dpzg ls /buffers
Getting Support
If you encounter any problems that the documentation does not address, file an issue or talk to us on Discord or on the CNCF Slack.
Before asking for help, prepare the following information to make troubleshooting faster:
+
Logging operator version
kubernetes version
helm/chart version (if you installed the Logging operator with helm)
Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing.
+
1.2 - Troubleshooting Fluentd
The following sections help you troubleshoot the Fluentd statefulset component of the Logging operator.
Check Fluentd pod status (statefulset)
Verify that the Fluentd statefulset is available using the following command: kubectl get statefulsets
Expected output:
NAME READY AGE
+logging-demo-fluentd 1/1 1m
+
ConfigCheck
The Logging operator has a built-in mechanism that validates the generated Fluentd configuration before applying it to Fluentd. You should be able to see the configcheck pod and its log output. The result of the check is written into the status field of the corresponding Logging resource.
If the operator is stuck in an error state caused by a failed configcheck, restore the previous configuration by modifying or removing the invalid resources until the configcheck pod can complete successfully.
Check Fluentd configuration
Use the following command to display the configuration of Fluentd:
+kubectl get secret logging-demo-fluentd-app -o jsonpath="{.data['fluentd\.conf']}" | base64 --decode
Up to Logging operator version 4.3, Fluentd logs were written to the container filesystem; starting with version 4.4, they are written to stdout.
See FluentOutLogrotate for why this was changed and how you can re-enable the old behavior if needed.
+
Tip: If the logs include the error="can't create buffer file ..." error message, Fluentd cannot create the buffer file at the specified location. For example, the disk may be full, the filesystem may be read-only, or there may be a permission error. Check the buffer-related settings of your Fluentd configuration.
Set stdout as an output
You can use the stdout filter at any point in the flow to dump the log messages to the stdout of the Fluentd container. For example:
+kubectl edit loggings.logging.banzaicloud.io logging-demo
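For illustration, a Flow that uses the stdout filter might look like this (the namespace, match selector, and output reference are placeholders):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: stdout-debug
  namespace: default
spec:
  filters:
    # dump every matched message to the Fluentd container's stdout
    - stdout: {}
  match:
    - select: {}
  localOutputRefs:
    # placeholder, reference an Output that exists in this namespace
    - my-output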
kubectl exec -it logging-demo-fluentd-0 ls /buffers
Defaulting container name to fluentd.
+Use 'kubectl describe pod/logging-demo-fluentd-0 -n logging' to see all of the containers in this pod.
+logging_logging-demo-flow_logging-demo-output-minio_s3.b598f7eb0b2b34076b6da13a996ff2671.buffer
+logging_logging-demo-flow_logging-demo-output-minio_s3.b598f7eb0b2b34076b6da13a996ff2671.buffer.meta
+
Getting Support
If you encounter any problems that the documentation does not address, file an issue or talk to us on Discord or on the CNCF Slack.
Before asking for help, prepare the following information to make troubleshooting faster:
+
Logging operator version
kubernetes version
helm/chart version (if you installed the Logging operator with helm)
Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing.
+
1.3 - Troubleshooting syslog-ng
The following sections help you troubleshoot the syslog-ng statefulset component of the Logging operator.
Check syslog-ng pod status (statefulset)
Verify that the syslog-ng statefulset is available using the following command: kubectl get statefulsets
Expected output:
NAME READY AGE
+logging-demo-syslogng 1/1 1m
+
ConfigCheck
The Logging operator has a built-in mechanism that validates the generated syslog-ng configuration before applying it to syslog-ng. You should be able to see the configcheck pod and its log output. The result of the check is written into the status field of the corresponding Logging resource.
If the operator is stuck in an error state caused by a failed configcheck, restore the previous configuration by modifying or removing the invalid resources until the configcheck pod can complete successfully.
Check syslog-ng configuration
Use the following command to display the configuration of syslog-ng:
+kubectl get secret logging-demo-syslogng-app -o jsonpath="{.data['syslogng\.conf']}" | base64 --decode
Getting Support
If you encounter any problems that the documentation does not address, file an issue or talk to us on Discord or on the CNCF Slack.
Before asking for help, prepare the following information to make troubleshooting faster:
+
Logging operator version
kubernetes version
helm/chart version (if you installed the Logging operator with helm)
Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing.
+
2 - Monitor your logging pipeline with Prometheus Operator
You can configure the Logging operator to expose metrics endpoints for Fluentd, Fluent Bit, and syslog-ng using ServiceMonitor resources. That way, a Prometheus operator running in the same cluster can automatically fetch your logging metrics.
Metrics Variables
You can configure the following metrics-related options in the spec.fluentd.metrics, spec.syslogNG.metrics, and spec.fluentbit.metrics sections of your Logging resource.
+
+
+
Variable Name    Type     Required   Default   Description
interval         string   No         “15s”     Scrape Interval
timeout          string   No         “5s”      Scrape Timeout
port             int      No         -         Metrics Port.
path             int      No         -         Metrics Path.
serviceMonitor   bool     No         false     Enable to create ServiceMonitor for Prometheus operator
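For example, a sketch of enabling ServiceMonitor creation for the log forwarder and the collector (resource names are placeholders):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentd:
    metrics:
      serviceMonitor: true
  fluentbit:
    metrics:
      serviceMonitor: true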
Prometheus Operator Documentation
The prometheus-operator install may take a few more minutes. Please be patient.
The logging-operator metrics function depends on the prometheus-operator's resources. If those resources do not exist in the cluster, the logging-operator metrics features may not work correctly.
Install Logging Operator with Helm
+
+
Install the Logging operator into the logging namespace:
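A typical invocation looks like the following (the chart version matches the output below; adjust it to the release you want to install):
helm upgrade --install --wait \
  --create-namespace --namespace logging \
  logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator \
  --version=4.3.0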
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Wed Aug 9 11:02:12 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
+
Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public.
+Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
3 - Alerting
This section describes how to set alerts for your logging infrastructure. Alternatively, you can enable the default alerting rules that are provided by the Logging operator.
+
Note: Alerting based on the contents of the collected log messages is not covered here.
Prerequisites
Using alerting rules requires the following:
+
Logging operator 3.14.0 or newer installed on the cluster.
syslog-ng is supported only in Logging operator 4.0 or newer.
Enable the default alerting rules
Logging operator comes with a number of default alerting rules that help you monitor your logging environment and ensure that it’s working properly. To enable the default rules, complete the following steps.
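In short, the rules are generated when Prometheus rule creation is enabled in the metrics sections of the Logging resource; a sketch, assuming the prometheusRules flag of the metrics configuration:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentd:
    metrics:
      # prometheusRules is assumed here; check the Metrics spec of your operator version
      prometheusRules: true
  fluentbit:
    metrics:
      prometheusRules: true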
For the Fluent Bit agents, the default rules alert you, for example, when the number of Fluent Bit errors or retries is high.
For the Fluentd and syslog-ng log forwarders:
+
Prometheus cannot access the log forwarder node
The buffers of the log forwarder are filling up quickly
Traffic to the log forwarder is increasing at a high rate
The number of errors or retries is high on the log forwarder
The buffers are over 90% full
Currently, you cannot modify the default alerting rules, because they are generated from the source files. For the detailed list of alerts, see the source code:
For example, the Logging operator creates the following alerting rule to detect if a Fluentd node is down:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: logging-demo-fluentd-metrics
  namespace: logging
spec:
  groups:
    - name: fluentd
      rules:
        - alert: FluentdNodeDown
          annotations:
            description: Prometheus could not scrape {{ "{{ $labels.job }}" }} for more than 30 minutes
            summary: fluentd cannot be scraped
          expr: up{job="logging-demo-fluentd-metrics", namespace="logging"} == 0
          for: 10m
          labels:
            service: fluentd
            severity: critical
+
On the Prometheus web interface, this rule looks like:
+
4 - Readiness probe
This section describes how to configure readiness probes for your Fluentd and syslog-ng pods. If you don’t configure custom readiness probes, Logging operator uses the default probes.
Prerequisites
+
Configuring readiness probes requires Logging operator 3.14.0 or newer installed on the cluster.
+
syslog-ng is supported only in Logging operator 4.0 or newer.
Overview of default readiness probes
By default, Logging operator performs the following readiness checks:
+
Number of buffer files is too high (higher than 5000)
Fluentd buffers are over 90% full
syslog-ng buffers are over 90% full
The parameters of the readiness probes and of pod failure are set using the usual Kubernetes probe configuration parameters. Instead of the Kubernetes defaults, the Logging operator uses the following values for these parameters:
Currently, you cannot modify the default readiness probes, because they are generated from the source files. For the detailed list of readiness probes, see the Default readiness probes. However, you can customize their values in the Logging custom resource, separately for the Fluentd and syslog-ng log forwarders. For example:
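A sketch of overriding these values for Fluentd, assuming the readinessDefaultCheck fields of the forwarder spec (the values shown are illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentd:
    # field names assumed from the ReadinessDefaultCheck spec of the operator
    readinessDefaultCheck:
      bufferFileNumber: true
      bufferFileNumberMax: 5000
      bufferFreeSpace: true
      bufferFreeSpaceThreshold: 90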
The Logging operator applies the following readiness probe by default:
readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - FREESPACE_THRESHOLD=90
      - FREESPACE_CURRENT=$(df -h $BUFFER_PATH | grep / | awk '{ print $5}' | sed 's/%//g')
      - if [ "$FREESPACE_CURRENT" -gt "$FREESPACE_THRESHOLD" ] ; then exit 1; fi
      - MAX_FILE_NUMBER=5000
      - FILE_NUMBER_CURRENT=$(find $BUFFER_PATH -type f -name *.buffer | wc -l)
      - if [ "$FILE_NUMBER_CURRENT" -gt "$MAX_FILE_NUMBER" ] ; then exit 1; fi
  failureThreshold: 1
  initialDelaySeconds: 5
  periodSeconds: 30
  successThreshold: 3
  timeoutSeconds: 3
+
Add custom readiness probes
You can add your own custom readiness probes to the spec.ReadinessProbe section of the logging custom resource. For details on the format of readiness probes, see the official Kubernetes documentation.
+
CAUTION:
If you set any custom readiness probes, they completely override the default probes.
+
+
5 - Collect Fluentd errors
This section describes how to collect Fluentd error messages (messages that are sent to the @ERROR label from another plugin in Fluentd).
+
Note: Which messages are sent to the @ERROR label depends on the specific plugin implementation. For example, a parsing plugin that fails to parse a line could send that line to the @ERROR label.
Prerequisites
Collecting the error messages requires Logging operator 3.14.0 or newer installed on the cluster.
Configure error output
To collect the error messages of Fluentd, complete the following steps.
+
+
Create a ClusterOutput that receives logs from every logging flow where an error happens. For example, create a file output. For details on creating outputs, see Output and ClusterOutput.
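For illustration, such a ClusterOutput based on the file output plugin might look like this (the name and path are placeholders; the namespace must be the control namespace):
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  # placeholder name, referenced from errorOutputRef below
  name: error-file
  namespace: default
spec:
  file:
    path: /tmp/error.log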
Set the errorOutputRef in the Logging resource to your preferred ClusterOutput.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: default
  enableRecreateWorkloadOnImmutableFieldChange: true
  errorOutputRef: error-file
  fluentbit:
    bufferStorage: {}
    bufferStorageVolume:
      hostPath:
        path: ""
    filterKubernetes: {}
  # rest of the resource is omitted
+
You cannot apply filters for this specific error flow.
+
Apply the ClusterOutput and Logging to your cluster.
+
6 - Optimization
Watch specific resources
The Logging operator watches resources in all namespaces, which is required because it manages cluster-scoped objects, and also objects in multiple namespaces.
However, in a large-scale infrastructure where the number of resources is large, it makes sense to limit the scope of resources monitored by the Logging operator: it saves a considerable amount of memory and avoids unnecessary container restarts.
Starting with Logging operator version 3.12.0, you can do this by passing the following command-line arguments to the operator (an illustrative invocation follows the list).
+
watch-namespace: Watch only objects in this namespace. Note that even if the watch-namespace option is set, the operator must watch certain objects (like Flows and Outputs) in every namespace.
watch-logging-name: Logging resource name to optionally filter the list of watched objects based on which logging they belong to by checking the app.kubernetes.io/managed-by label.
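An illustrative sketch of setting these flags on the operator Deployment (the flag syntax and values are assumptions, not verified defaults):
# excerpt from the logging-operator Deployment (sketch)
containers:
  - name: logging-operator
    args:
      # flag syntax assumed; namespace and logging name are placeholders
      - -watch-namespace=logging
      - -watch-logging-name=example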
+
7 - Scaling
+
Note: When multiple instances send logs to the same output, the output can receive chunks of messages out of order. Some outputs tolerate this (for example, Elasticsearch), some do not, some require fine tuning (for example, Loki).
Scaling Fluentd
In a large-scale infrastructure, the logging components can come under high load as well. The typical sign of this is when fluentd cannot keep up with the growth of its buffer directory for longer than the configured or calculated (timekey + timekey_wait) flush interval. In this case, you can scale the fluentd statefulset.
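A minimal sketch of scaling the Fluentd statefulset through the Logging resource (the replica count is illustrative):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentd:
    scaling:
      # illustrative value, size it to your load
      replicas: 3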
The Logging Operator supports scaling a Fluentd aggregator statefulset up and down. Scaling statefulset pods down is challenging, because we need to take care of the underlying volumes with buffered data that hasn’t been sent, but the Logging Operator supports that use case as well.
The details for that, and how to configure an HPA, are described in the following documents:
syslog-ng can be scaled up as well, but persistent disk buffers are not processed automatically when scaling the statefulset down. That is currently a manual process.
+
8 - CPU and memory requirements
The resource requirements and limits of your Logging operator deployment must match the size of your cluster and the logging workloads. By default, the Logging operator uses the following configuration.
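To tune these values for your workload, you can set standard Kubernetes resource requests and limits on the forwarder spec; an illustrative sketch (the values are placeholders, not the operator defaults):
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentd:
    resources:
      # illustrative values, not the defaults
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi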
Quickstarts
Try out the Logging operator with these quick start guides that show you the basics of the Logging operator.
For other detailed examples using different outputs, see Examples.
+
1 - Single app, one destination
This guide shows you how to collect application and container logs in Kubernetes using the Logging operator.
The Logging operator itself doesn’t store any logs. For demonstration purposes, it can deploy a special workload to the cluster to let you observe the logs flowing through the system.
The Logging operator collects all logs from the cluster, selects the specific logs based on pod labels, and sends the selected log messages to the output.
+For more details about the Logging operator, see the Logging operator overview.
+
Note: This example aims to be simple enough to understand the basic capabilities of the operator. For a production ready setup, see Logging infrastructure setup and Operation.
In this tutorial, you will:
+
Install the Logging operator on a cluster.
Configure the Logging operator to collect logs from a namespace and send them to a sample output.
Install a sample application (log-generator) and collect its logs.
Check the collected logs.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
+
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
This command installs the latest stable Logging operator and an extra workload (service and deployment). This workload is called logging-operator-test-receiver. It listens on an HTTP port, receives JSON messages, and writes them to the standard output (stdout) so that it is trivial to observe.
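A typical invocation (assuming the chart exposes a testReceiver.enabled value for the extra workload; the namespace matches the rest of this guide):
helm upgrade --install --wait \
  --create-namespace --namespace logging \
  logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator \
  --set testReceiver.enabled=true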
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Tue Aug 15 15:58:41 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
After the installation, check that the following pods and services are running:
kubectl get deploy -n logging
+
Expected output:
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
logging-operator                 1/1     1            1           15m
logging-operator-test-receiver   1/1     1            1           15m
+
kubectl get svc -n logging
+
Expected output:
NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
logging-operator                 ClusterIP   None           <none>        8080/TCP   15m
logging-operator-test-receiver   ClusterIP   10.99.77.113   <none>        8080/TCP   15m
+
Configure the Logging operator
+
+
Create a Logging resource to deploy syslog-ng or Fluentd as the central log aggregator and forwarder. You can complete this quick start guide with either of them, but they have different features, so they are not equivalent. For details, see Which log forwarder to use.
Note: The control namespace is where the Logging operator deploys the forwarder’s resources, like the StatefulSet and the configuration secrets. Usually it’s called logging.
By default, this namespace is used to define the cluster-wide resources: SyslogNGClusterOutput, SyslogNGClusterFlow, ClusterOutput, and ClusterFlow. For details, see Configure log routing.
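A sketch of creating the Logging resource for the Fluentd variant (create the syslog-ng equivalent instead if that is your choice):
kubectl apply --namespace logging -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: quickstart
spec:
  controlNamespace: logging
  fluentd: {}
EOF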
Expected output:
logging.logging.banzaicloud.io/quickstart created
+
+
Create a FluentbitAgent resource to collect logs from all containers. No special configuration is required for now.
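A sketch of the FluentbitAgent resource (the name matches the Logging resource created above):
kubectl apply --namespace logging -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: quickstart
spec: {}
EOF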
fluentbitagent.logging.banzaicloud.io/quickstart created
+
+
Check that the resources were created successfully so far. Run the following command:
kubectl get pod --namespace logging --selector app.kubernetes.io/managed-by=quickstart
+
You should already see a completed configcheck pod that validates the forwarder’s configuration before the actual statefulset starts.
There should also be a running fluentbit instance per node that is already sending all logs to the forwarder.
NAME READY STATUS RESTARTS AGE
+quickstart-fluentbit-jvdp5 1/1 Running 0 3m5s
+quickstart-syslog-ng-0 2/2 Running 0 3m5s
+quickstart-syslog-ng-configcheck-8197c552 0/1 Completed 0 3m42s
+
+
NAME READY STATUS RESTARTS AGE
+quickstart-fluentbit-nk9ms 1/1 Running 0 19s
+quickstart-fluentd-0 2/2 Running 0 19s
+quickstart-fluentd-configcheck-ac2d4553 0/1 Completed 0 60s
+
+
Create a namespace (for example, quickstart) from where you want to collect the logs.
kubectl create namespace quickstart
+
Expected output:
namespace/quickstart created
+
+
Create a flow and an output resource in the same namespace (quickstart). The flow resource routes logs from the namespace to a specific output. In this example, the output is called http. The flow resources are called SyslogNGFlow and Flow, the output resources are SyslogNGOutput and Output for syslog-ng and Fluentd, respectively.
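For illustration, the Fluentd-flavored resources might look like this (the endpoint points at the test receiver service installed earlier; the buffer settings are illustrative):
kubectl apply --namespace quickstart -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: http
spec:
  http:
    endpoint: http://logging-operator-test-receiver.logging:8080
    content_type: application/json
    buffer:
      # illustrative buffer settings for quick feedback
      type: memory
      timekey: 1s
      timekey_wait: 0s
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: log-generator
spec:
  match:
    - select:
        labels:
          app.kubernetes.io/name: log-generator
  localOutputRefs:
    - http
EOF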
NAME AGE
+fluentbitagent.logging.banzaicloud.io/quickstart 10m
+
+NAME AGE
+logging.logging.banzaicloud.io/quickstart 10m
+
+NAME ACTIVE PROBLEMS
+syslogngoutput.logging.banzaicloud.io/http true
+
+NAME ACTIVE PROBLEMS
+syslogngflow.logging.banzaicloud.io/log-generator true
+
+
NAME ACTIVE PROBLEMS
+flow.logging.banzaicloud.io/log-generator true
+
+NAME ACTIVE PROBLEMS
+output.logging.banzaicloud.io/http true
+
+NAME AGE
+logging.logging.banzaicloud.io/quickstart 3m12s
+
+NAME AGE
+fluentbitagent.logging.banzaicloud.io/quickstart 3m2s
+
+
Install log-generator to produce logs with the label app.kubernetes.io/name: log-generator
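A typical invocation (the chart reference matches the output below):
helm upgrade --install --wait \
  --create-namespace --namespace quickstart \
  log-generator oci://ghcr.io/kube-logging/helm-charts/log-generator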
Release "log-generator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/log-generator:0.7.0
+Digest: sha256:0eba2c5c3adfc33deeec1d1612839cd1a0aa86f30022672ee022beab22436e04
+NAME: log-generator
+LAST DEPLOYED: Tue Aug 15 16:21:40 2023
+NAMESPACE: quickstart
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
The log-generator application starts to create HTTP access logs. Logging operator collects these log messages and sends them to the test-receiver pod defined in the output custom resource.
+
Check that the logs are delivered to the test-receiver pod output. First, run the following command to get the name of the test-receiver pod:
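One way to do this (a sketch; substitute the pod name reported by the first command):
kubectl get pods --namespace logging | grep test-receiver
kubectl logs --namespace logging -f <test-receiver-pod-name>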
The log messages include the usual information of the access logs, and also Kubernetes-specific information like the pod name, labels, and so on.
+
(Optional) If you want to retry this guide with the other log forwarder on the same cluster, run the following command to delete the forwarder-specific resources:
If you have completed this guide, you have made the following changes to your cluster:
+
+
Installed the Fluent Bit agent on every node of the cluster to collect the logs and the labels from the node.
+
Installed syslog-ng or Fluentd on the cluster to receive the logs from the Fluent Bit agents, to filter, parse, and transform them as needed, and to route the incoming logs to an output. To learn more about routing and filtering, see Routing your logs with syslog-ng or Routing your logs with Fluentd match directives.
Created the following resources that configure the Logging operator and the components it manages:
+
Logging to configure the logging infrastructure, like the details of the Fluent Bit and the syslog-ng or Fluentd deployment. To learn more about configuring the logging infrastructure, see Logging infrastructure setup.
SyslogNGFlow or Flow that processes the incoming messages and routes them to the appropriate output. To learn more, see syslog-ng flows or Flow and ClusterFlow.
+
Installed a simple receiver to act as the destination of the logs, and configured the log forwarder to send the logs from the quickstart namespace to this destination.
+
Installed a log-generator application to generate sample log messages, and verified that the logs of this application arrive to the output.
Getting Support
If you encounter any problems that the documentation does not address, file an issue or talk to us on Discord or on the CNCF Slack.
Before asking for help, prepare the following information to make troubleshooting faster:
+
Logging operator version
kubernetes version
helm/chart version (if you installed the Logging operator with helm)
diff --git a/4.6/docs/quickstarts/releases.releases b/4.6/docs/quickstarts/releases.releases
new file mode 100644
index 000000000..25b0e2679
--- /dev/null
+++ b/4.6/docs/quickstarts/releases.releases
@@ -0,0 +1,8 @@
+
+ latest (4.6.0)
+ 4.5
+ 4.4
+ 4.3
+ 4.2
+ 4.0
+
\ No newline at end of file
diff --git a/4.6/docs/quickstarts/single/index.html b/4.6/docs/quickstarts/single/index.html
new file mode 100644
index 000000000..c46bd300b
--- /dev/null
+++ b/4.6/docs/quickstarts/single/index.html
@@ -0,0 +1,926 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Single app, one destination | Logging operator
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Single app, one destination
+
This guide shows you how to collect application and container logs in Kubernetes using the Logging operator.
The Logging operator itself doesn’t store any logs. For demonstration purposes, it can deploy a special workload will to the cluster to let you observe the logs flowing through the system.
The Logging operator collects all logs from the cluster, selects the specific logs based on pod labels, and sends the selected log messages to the output.
+For more details about the Logging operator, see the Logging operator overview.
+
Note: This example aims to be simple enough to understand the basic capabilities of the operator. For a production ready setup, see Logging infrastructure setup and Operation.
In this tutorial, you will:
+
Install the Logging operator on a cluster.
Configure Logging operator to collect logs from a namespace and send it to an sample output.
Install a sample application (log-generator) to collect its logs.
Check the collected logs.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
+
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
This command installs the latest stable Logging operator and an extra workload (service and deployment). This workload is called logging-operator-test-receiver. It listens on an HTTP port, receives JSON messages, and writes them to the standard output (stdout) so that it is trivial to observe.
Release "logging-operator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
+Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
+NAME: logging-operator
+LAST DEPLOYED: Tue Aug 15 15:58:41 2023
+NAMESPACE: logging
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
After the installation, check that the following pods and services are running:
kubectl get deploy -n logging
+
Expected output:
NAME READY UP-TO-DATE AVAILABLE AGE
+logging-operator 1/1 1 1 15m
+logging-operator-test-receiver 1/1 1 1 15m
+
kubectl get svc -n logging
+
Expected output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+logging-operator ClusterIP None <none> 8080/TCP 15m
+logging-operator-test-receiver ClusterIP 10.99.77.113 <none> 8080/TCP 15m
+
Configure the Logging operator
+
+
Create a Logging resource to deploy syslog-ng or Fluentd as the central log aggregator and forwarder. You can complete this quick start guide with any of them, but they have different features, so they are not equivalent. For details, see Which log forwarder to use.
Note: The control namespace is where the Logging operator deploys the forwarder’s resources, like the StatefulSet and the configuration secrets. Usually it’s called logging.
By default, this namespace is used to define the cluster-wide resources: SyslogNGClusterOutput, SyslogNGClusterFlow, ClusterOutput, and ClusterFlow. For details, see Configure log routing.
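A minimal Logging resource might look like the following sketch (shown with syslog-ng as the forwarder; replace syslogNG: {} with fluentd: {} to deploy Fluentd instead):
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: quickstart
spec:
  controlNamespace: logging
  syslogNG: {}
EOF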
Expected output:
logging.logging.banzaicloud.io/quickstart created
+
+
Create a FluentbitAgent resource to collect logs from all containers. No special configuration is required for now.
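For example, a minimal sketch with an empty spec (the defaults are sufficient for this guide):
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: quickstart
spec: {}
EOF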
fluentbitagent.logging.banzaicloud.io/quickstart created
+
+
Check that the resources were created successfully so far. Run the following command:
kubectl get pod --namespace logging --selector app.kubernetes.io/managed-by=quickstart
+
You should already see a completed configcheck pod that validates the forwarder’s configuration before the actual StatefulSet starts.
There should also be a running Fluent Bit instance on each node that is already sending all logs to the forwarder.
NAME READY STATUS RESTARTS AGE
+quickstart-fluentbit-jvdp5 1/1 Running 0 3m5s
+quickstart-syslog-ng-0 2/2 Running 0 3m5s
+quickstart-syslog-ng-configcheck-8197c552 0/1 Completed 0 3m42s
+
+
NAME READY STATUS RESTARTS AGE
+quickstart-fluentbit-nk9ms 1/1 Running 0 19s
+quickstart-fluentd-0 2/2 Running 0 19s
+quickstart-fluentd-configcheck-ac2d4553 0/1 Completed 0 60s
+
+
Create a namespace (for example, quickstart) from where you want to collect the logs.
kubectl create namespace quickstart
+
Expected output:
namespace/quickstart created
+
+
Create a flow and an output resource in the same namespace (quickstart). The flow resource routes logs from the namespace to a specific output. In this example, the output is called http. The flow resources are called SyslogNGFlow and Flow, the output resources are SyslogNGOutput and Output for syslog-ng and Fluentd, respectively.
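For example, with Fluentd the two resources might look like the following sketch (the http output fields and the buffer settings are illustrative assumptions; the SyslogNGFlow and SyslogNGOutput equivalents use a different syntax):
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: log-generator
  namespace: quickstart
spec:
  match:
    - select:
        labels:
          app.kubernetes.io/name: log-generator
  localOutputRefs:
    - http
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: http
  namespace: quickstart
spec:
  http:
    endpoint: http://logging-operator-test-receiver.logging:8080
    content_type: application/json
    buffer:
      type: memory
      timekey: 1s
      timekey_wait: 0s
EOF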
NAME AGE
+fluentbitagent.logging.banzaicloud.io/quickstart 10m
+
+NAME AGE
+logging.logging.banzaicloud.io/quickstart 10m
+
+NAME ACTIVE PROBLEMS
+syslogngoutput.logging.banzaicloud.io/http true
+
+NAME ACTIVE PROBLEMS
+syslogngflow.logging.banzaicloud.io/log-generator true
+
+
NAME ACTIVE PROBLEMS
+flow.logging.banzaicloud.io/log-generator true
+
+NAME ACTIVE PROBLEMS
+output.logging.banzaicloud.io/http true
+
+NAME AGE
+logging.logging.banzaicloud.io/quickstart 3m12s
+
+NAME AGE
+fluentbitagent.logging.banzaicloud.io/quickstart 3m2s
+
+
Install the log-generator application to produce logs with the label app.kubernetes.io/name: log-generator.
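For example (a sketch; the chart version you pull may differ from the one shown in the output below):
helm upgrade --install --wait \
  --create-namespace --namespace quickstart \
  log-generator oci://ghcr.io/kube-logging/helm-charts/log-generator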
Release "log-generator" does not exist. Installing it now.
+Pulled: ghcr.io/kube-logging/helm-charts/log-generator:0.7.0
+Digest: sha256:0eba2c5c3adfc33deeec1d1612839cd1a0aa86f30022672ee022beab22436e04
+NAME: log-generator
+LAST DEPLOYED: Tue Aug 15 16:21:40 2023
+NAMESPACE: quickstart
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+
The log-generator application starts to create HTTP access logs. Logging operator collects these log messages and sends them to the test-receiver pod defined in the output custom resource.
+
Check that the logs are delivered to the test-receiver pod. First, run the following command to get the name of the test-receiver pod:
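For example (a sketch; you can equally copy the pod name from kubectl get pods by hand):
kubectl get pods -n logging | grep test-receiver
Then tail the logs of that pod to see the forwarded messages:
kubectl logs -n logging <name-of-test-receiver-pod> -f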
The log messages include the usual information of the access logs, and also Kubernetes-specific information like the pod name, labels, and so on.
+
(Optional) If you want to retry this guide with the other log forwarder on the same cluster, run the following command to delete the forwarder-specific resources:
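For example, something along these lines (a sketch; adjust it to the forwarder and resources you actually created):
kubectl delete logging quickstart
kubectl delete syslogngflow,syslogngoutput --all -n quickstart   # if you used syslog-ng
kubectl delete flow,output --all -n quickstart                   # if you used Fluentd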
If you have completed this guide, you have made the following changes to your cluster:
+
+
Installed the Fluent Bit agent on every node of the cluster to collect the logs and the labels from the node.
+
Installed syslog-ng or Fluentd on the cluster to receive the logs from the Fluent Bit agents, filter, parse, and transform them as needed, and route the incoming logs to an output. To learn more about routing and filtering, see Routing your logs with syslog-ng or Routing your logs with Fluentd match directives.
Created the following resources that configure the Logging operator and the components it manages:
+
Logging to configure the logging infrastructure, like the details of the Fluent Bit and the syslog-ng or Fluentd deployment. To learn more about configuring the logging infrastructure, see Logging infrastructure setup.
SyslogNGFlow or Flow that processes the incoming messages and routes them to the appropriate output. To learn more, see syslog-ng flows or Flow and ClusterFlow.
+
Installed a simple receiver to act as the destination of the logs, and configured the log forwarder to send the logs from the quickstart namespace to this destination.
+
Installed a log-generator application to generate sample log messages, and verified that the logs of this application arrive at the output.
Getting Support
If you encounter any problems that the documentation does not address, file an issue or talk to us on Discord or on the CNCF Slack.
Before asking for help, prepare the following information to make troubleshooting faster:
+
Logging operator version
Kubernetes version
Helm chart version (if you installed the Logging operator with Helm)
As a Fluent Bit restart can take a long time when there are many files to index, Logging operator now supports hot reload for Fluent Bit to reload its configuration on the fly.
You can enable hot reloads under the Logging’s spec.fluentbit.configHotReload (legacy method) option, or the new FluentbitAgent’s spec.configHotReload option:
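A minimal sketch of the FluentbitAgent variant (whether an empty configHotReload object enables the reloader with defaults, or whether you need to set subfields such as an image or resources, is an assumption here):
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: hot-reload-example
spec:
  configHotReload: {}
EOF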
Many thanks to @zrobisho for contributing this feature!
Kubernetes namespace labels and annotations
Logging operator 4.6 supports the new Fluent Bit Kubernetes filter options that will be released in Fluent Bit 3.0. That way you’ll be able to enrich your logs with Kubernetes namespace labels and annotations right at the source of the log messages.
Fluent Bit 3.0 hasn’t been released yet (at the time of this writing), but you can use a developer image to test the feature, using a FluentbitAgent resource like this:
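A purely hypothetical sketch (both the developer image tag and the namespaceLabels/namespaceAnnotations field names are placeholders, not verified CRD fields):
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: namespace-metadata-demo
spec:
  image:
    tag: 3.0.0-dev             # placeholder developer tag
  filterKubernetes:
    namespaceLabels: "On"      # hypothetical field name
    namespaceAnnotations: "On" # hypothetical field name
EOF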
You can now specify the event tailer image to use in the logging-operator chart.
Fluent Bit can now automatically delete irrecoverable chunks.
The Fluentd statefulset and its components created by the Logging operator now include the whole securityContext object.
The Elasticsearch output of the syslog-ng aggregator now supports the template option.
To avoid problems that might occur when a tenant has a faulty output and backpressure kicks in, Logging operator now creates a dedicated tail input for each tenant.
Removed feature
We have removed support for Pod Security Policies (PSPs), which were deprecated in Kubernetes v1.21, and removed from Kubernetes in v1.25.
Note that the API was left intact; it just doesn’t do anything.
Version 4.5
The following are the highlights and main changes of Logging operator 4.5. For a complete list of changes and bugfixes, see the Logging operator 4.5 releases page.
Standalone FluentdConfig and SyslogNGConfig CRDs
Starting with Logging operator version 4.5, you can either configure Fluentd in the Logging CR, or you can use a standalone FluentdConfig CR. Similarly, you can use a standalone SyslogNGConfig CRD to configure syslog-ng.
These standalone CRDs are namespaced resources that allow you to configure the Fluentd/syslog-ng aggregator in the control namespace, separately from the Logging resource. That way you can use a multi-tenant model, where tenant owners are responsible for operating their own aggregator, while the Logging resource is in control of the central operations team.
The following are the highlights and main changes of Logging operator 4.4. For a complete list of changes and bugfixes, see the Logging operator 4.4 releases page.
New syslog-ng features
When using syslog-ng as the log aggregator, you can now use the following new outputs:
On a side note, nodegroup-level isolation for hard multitenancy is also supported; see the Nodegroup-based multitenancy example.
Forwarder logs
Fluent Bit no longer processes the logs of the Fluentd and syslog-ng forwarders by default, to avoid infinitely growing message loops. With this change, you can access the Fluentd and syslog-ng logs simply by running kubectl logs <name-of-forwarder-pod>.
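For example, assuming a Fluentd-based forwarder pod named quickstart-fluentd-0 in the logging namespace (if the pod has more than one container, you may also need to pick one with -c):
kubectl logs -n logging quickstart-fluentd-0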
In a future Logging operator version the logs of the aggregators will also be available for routing to external outputs.
Timeout-based configuration checks
Timeout-based configuration checks are different from the normal method: they start a Fluentd or syslog-ng instance without the dry-run or syntax-check flags, so output plugins or destination drivers actually try to establish connections, and will fail if there are any issues, for example, with the credentials.
For jobs and individual pods that run to completion, Istio sidecar injection needs to be disabled; otherwise, the affected pods would live forever with the running sidecar container. Configuration checkers and Fluentd drainer pods can be configured with the label sidecar.istio.io/inject set to false. You can configure the Fluentd drainer labels in the Logging spec.
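A sketch of what this could look like in the Logging spec (the scaling.drain.labels field path is inferred from the sentence above, not a verified CRD path):
kubectl apply -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: drain-label-example
spec:
  controlNamespace: logging
  fluentd:
    scaling:
      drain:
        enabled: true
        labels:
          sidecar.istio.io/inject: "false"
EOF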
Improved buffer metrics
The buffer metrics are now available for both the Fluentd-based and the syslog-ng-based aggregators.
The sidecar configuration has been rewritten to add a new metric and improve performance by avoiding unnecessary cardinality.
The name of the metric has been changed as well, but the original metric was kept in place to avoid breaking existing clients.
Metrics currently supported by the sidecar
Old
+# HELP node_buffer_size_bytes Disk space used [deprecated]
+# TYPE node_buffer_size_bytes gauge
+node_buffer_size_bytes{entity="/buffers"} 32253
+
New
+# HELP logging_buffer_files File count
+# TYPE logging_buffer_files gauge
+logging_buffer_files{entity="/buffers",host="all-to-file-fluentd-0"} 2
+# HELP logging_buffer_size_bytes Disk space used
+# TYPE logging_buffer_size_bytes gauge
+logging_buffer_size_bytes{entity="/buffers",host="all-to-file-fluentd-0"} 32253
+
Other improvements
+
You can now configure the resources of the buffer metrics sidecar.
You can now rerun failed configuration checks if there is no configcheck pod.
The Logging operator solves your logging-related problems in Kubernetes environments by automating the deployment and configuration of a Kubernetes logging pipeline.
+
+
+
+
+The Logging operator manages the log collectors and log forwarders of your logging infrastructure, and the routing rules that specify where you want to send your different log messages. You can filter and process the incoming log messages using the flow custom resource of the log forwarder to route them to the appropriate output. The outputs are the destinations where you want to send your log messages, for example, Elasticsearch, or an Amazon S3 bucket. You can also define cluster-wide outputs and flows, for example, to use a centralized output that namespaced users can reference but cannot modify.
+