2 - Output and ClusterOutput
The Output resource defines an output where your Fluentd Flows can send the log messages. Output is a namespaced resource, which means that only a Flow within the same namespace can access it. You can use secrets in these definitions, but they must also be in the same namespace. Outputs are the final stage of a logging flow. You can define multiple outputs and attach them to multiple flows.
ClusterOutput defines an Output without namespace restrictions. By default it is evaluated only in the controlNamespace, unless allowClusterResourcesFromAllNamespaces is set to true.
Note: A Flow can be connected to an Output or a ClusterOutput, but a ClusterFlow can be attached only to a ClusterOutput.
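As an illustration of the note above, a ClusterOutput looks just like an Output apart from its kind and from living in the control namespace. A minimal sketch (the nullout plugin and the logging namespace are assumptions chosen for this example):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: null-output-sample
  namespace: logging   # assumed controlNamespace; ClusterOutputs are evaluated here by default
spec:
  nullout: {}          # discards all matched logs; used here only for illustration
```

Because it is a ClusterOutput, both Flows (via globalOutputRefs) and ClusterFlows can reference it.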
- For the details of the supported output plugins, see Fluentd outputs.
- For the details of the Output custom resource, see OutputSpec.
- For the details of the ClusterOutput custom resource, see ClusterOutput.
Fluentd S3 output example
The following snippet defines an Amazon S3 bucket as an output.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output-sample
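The snippet above is truncated in this extract; a complete S3 Output might be sketched as follows (the bucket name, region, secret name, and buffer settings are hypothetical):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output-sample
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-secret            # hypothetical secret in the same namespace
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsSecretAccessKey
    s3_bucket: example-logging-bucket   # hypothetical bucket name
    s3_region: eu-central-1             # hypothetical region
    path: logs/${tag}/%Y/%m/%d/
    buffer:
      timekey: 10m
      timekey_wait: 30s
      timekey_use_utc: true
```

Note that the referenced secret must be in the same namespace as the Output, as described above.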
8.3.9 - Parser
Parser Filter
Overview
Parses a string field in event records and mutates its event record with the parsed result.
Configuration
ParserConfig
key_name (string, optional)
Specify the field name in the record to parse. If you leave it empty, the container runtime default is used.
Default: -
reserve_time (bool, optional)
Keep original event time in parsed result.
Default: -
reserve_data (bool, optional)
Keep original key-value pair in parsed result.
Default: -
remove_key_name_field (bool, optional)
Remove the key_name field when parsing succeeds.
Default: -
replace_invalid_sequence (bool, optional)
If true, invalid strings are replaced with safe characters and re-parsed.
Default: -
inject_key_prefix (string, optional)
Store parsed values with specified key name prefix.
Default: -
hash_value_field (string, optional)
Store parsed values as a hash value in a field.
Default: -
emit_invalid_record_to_error (*bool, optional)
Emit invalid record to @ERROR label. Invalid cases are: key not exist, format is not matched, unexpected error
Default: -
parse (ParseSection, optional)
Default: -
parsers ([]ParseSection, optional)
Deprecated, use parse
instead
Default: -
Parse Section
type (string, optional)
Parse type: apache2, apache_error, nginx, syslog, csv, tsv, ltsv, json, multiline, none, logfmt, grok, multiline_grok
Default: -
expression (string, optional)
Regexp expression to evaluate
Default: -
time_key (string, optional)
Specify time field for event time. If the event doesn’t have this field, current time is used.
Default: -
keys (string, optional)
Names for fields on each line (separated by commas).
Default: -
null_value_pattern (string, optional)
Specify null value pattern.
Default: -
null_empty_string (bool, optional)
If true, empty string field is replaced with nil
Default: -
estimate_current_event (bool, optional)
If true, use Fluent::EventTime.now(current time) as a timestamp when time_key is specified.
Default: -
keep_time_key (bool, optional)
If true, keep time field in the record.
Default: -
types (string, optional)
Type casting of fields to proper types, for example: field1:type, field2:type
Default: -
time_format (string, optional)
Process the value using the specified format. This is available only when time_type is string.
Default: -
time_type (string, optional)
Parse/format the value according to this type. Available values: float, unixtime, string
Default: string
local_time (bool, optional)
If true, use local time. Otherwise, UTC is used. This is exclusive with utc.
Default: true
utc (bool, optional)
If true, use UTC. Otherwise, local time is used. This is exclusive with local_time.
Default: false
timezone (string, optional)
Use the specified timezone. One can parse/format the time value in the specified timezone.
Default: nil
format (string, optional)
Only available when using type: multi_format
Default: -
format_firstline (string, optional)
Only available when using type: multi_format
Default: -
delimiter (string, optional)
Only available when using type: ltsv
Default: “\t”
delimiter_pattern (string, optional)
Only available when using type: ltsv
Default: -
label_delimiter (string, optional)
Only available when using type: ltsv
Default: “:”
multiline ([]string, optional)
The multiline parser plugin parses multiline logs.
Default: -
patterns ([]SingleParseSection, optional)
Only available when using type: multi_format Parse Section
Default: -
grok_pattern (string, optional)
Only available when using type: grok, multiline_grok. The grok pattern. You cannot specify multiple grok patterns with this.
Default: -
custom_pattern_path (*secret.Secret, optional)
Only available when using type: grok, multiline_grok. File that includes custom grok patterns.
Default: -
grok_failure_key (string, optional)
Only available when using type: grok, multiline_grok. The key that stores the grok failure reason.
Default: -
grok_name_key (string, optional)
Only available when using type: grok, multiline_grok. The key name to store grok section’s name.
Default: -
multiline_start_regexp (string, optional)
Only available when using type: multiline_grok. The regexp to match the beginning of a multiline entry.
Default: -
grok_patterns ([]GrokSection, optional)
Only available when using type: grok, multiline_grok. Specify a grok pattern series set (see Grok Section).
Default: -
Parse Section (single)
type (string, optional)
Parse type: apache2, apache_error, nginx, syslog, csv, tsv, ltsv, json, multiline, none, logfmt, grok, multiline_grok
Default: -
expression (string, optional)
Regexp expression to evaluate
Default: -
time_key (string, optional)
Specify time field for event time. If the event doesn’t have this field, current time is used.
Default: -
null_value_pattern (string, optional)
Specify null value pattern.
Default: -
null_empty_string (bool, optional)
If true, empty string field is replaced with nil
Default: -
estimate_current_event (bool, optional)
If true, use Fluent::EventTime.now(current time) as a timestamp when time_key is specified.
Default: -
keep_time_key (bool, optional)
If true, keep time field in the record.
Default: -
types (string, optional)
Type casting of fields to proper types, for example: field1:type, field2:type
Default: -
time_format (string, optional)
Process the value using the specified format. This is available only when time_type is string.
Default: -
time_type (string, optional)
Parse/format the value according to this type. Available values: float, unixtime, string
Default: string
local_time (bool, optional)
If true, use local time. Otherwise, UTC is used. This is exclusive with utc.
Default: true
utc (bool, optional)
If true, use UTC. Otherwise, local time is used. This is exclusive with local_time.
Default: false
timezone (string, optional)
Use the specified timezone. One can parse/format the time value in the specified timezone.
Default: nil
format (string, optional)
Only available when using type: multi_format
Default: -
grok_pattern (string, optional)
Only available when using format: grok, multiline_grok. The grok pattern. You cannot specify multiple grok patterns with this.
Default: -
custom_pattern_path (*secret.Secret, optional)
Only available when using format: grok, multiline_grok. File that includes custom grok patterns.
Default: -
grok_failure_key (string, optional)
Only available when using format: grok, multiline_grok. The key that stores the grok failure reason.
Default: -
grok_name_key (string, optional)
Only available when using format: grok, multiline_grok. The key name to store the grok section’s name.
Default: -
multiline_start_regexp (string, optional)
Only available when using format: multiline_grok. The regexp to match the beginning of a multiline entry.
Default: -
grok_patterns ([]GrokSection, optional)
Only available when using format: grok, multiline_grok. Specify a grok pattern series set (see Grok Section).
Default: -
Grok Section
name (string, optional)
The name of grok section.
Default: -
pattern (string, required)
The pattern of grok.
Default: -
keep_time_key (bool, optional)
If true, keep time field in the record.
Default: -
time_key (string, optional)
Specify time field for event time. If the event doesn’t have this field, current time is used.
Default: time
time_format (string, optional)
Process value using specified format. This is available only when time_type is string.
Default: -
timezone (string, optional)
Use the specified timezone. One can parse/format the time value in the specified timezone.
Default: -
Example Parser
filter configurations
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: demo-flow
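The example is truncated at this point in the extract; a complete Flow using the parser filter might be sketched as follows (the nginx parse type, label selector, and output reference are illustrative assumptions):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: demo-flow
spec:
  filters:
    - parser:
        remove_key_name_field: true   # drop the raw message field after parsing
        reserve_data: true            # keep the original key-value pairs as well
        parse:
          type: nginx                 # one of the parse types listed above
  match:
    - select:
        labels:
          app: nginx                  # illustrative selector
  localOutputRefs:
    - s3-output-sample                # hypothetical Output in the same namespace
```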
Replace Directive
Specify replace rule. This directive contains three parameters.
key (string, required)
Key to search for
Default: -
expression (string, required)
Regular expression
Default: -
replace (string, required)
Value to replace with
Default: -
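Based on the three parameters above, a Record Modifier filter with a replace directive might be sketched like this (the message key, the password-masking pattern, and the output reference are illustrative assumptions):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: replace-demo
spec:
  filters:
    - record_modifier:
        replaces:
          - key: message              # key to search for
            expression: /password=\S+/   # regular expression to match
            replace: password=***        # value to replace with
  match:
    - select: {}
  localOutputRefs:
    - s3-output-sample   # hypothetical Output in the same namespace
```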
8.3.12 - Record Transformer
Record Transformer
Overview
Mutates/transforms incoming event streams.
Configuration
RecordTransformer
remove_keys (string, optional)
A comma-delimited list of keys to delete
Default: -
keep_keys (string, optional)
A comma-delimited list of keys to keep.
Default: -
renew_record (bool, optional)
Create new Hash to transform incoming data
Default: false
renew_time_key (string, optional)
Specify field name of the record to overwrite the time of events. Its value must be unix time.
Default: -
enable_ruby (bool, optional)
When set to true, the full Ruby syntax is enabled in the ${…} expression.
Default: false
auto_typecast (bool, optional)
Use original value type.
Default: true
records ([]Record, optional)
Add records. For details, see https://docs.fluentd.org/filter/record_transformer. Records are represented as maps: key: value
Default: -
Example Record Transformer
filter configurations
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
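The example above is truncated at this point in the extract; a complete record_transformer Flow might be sketched as follows (the added record keys and the output reference are illustrative assumptions):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: demo-flow
spec:
  filters:
    - record_transformer:
        records:
          - foo: "bar"            # add a static field to every record
            service: "demo"       # illustrative key: value pair
  match:
    - select: {}
  localOutputRefs:
    - s3-output-sample   # hypothetical Output in the same namespace
```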
8.4 - Fluentd outputs
8.4.1 - Alibaba Cloud
Aliyun OSS plugin for Fluentd
Overview
Fluent OSS output plugin buffers event logs in local files and uploads them to OSS periodically in background threads.
This plugin splits events by the timestamp of the event logs. For example, if a log ‘2019-04-09 message Hello’ arrives, followed by ‘2019-04-10 message World’, the former is stored in the “20190409.gz” file and the latter in the “20190410.gz” file.
The Fluent OSS input plugin reads data from OSS periodically.
This plugin uses MNS in the same region as the OSS bucket. You must set up MNS and OSS event notifications before using this plugin.
This document shows how to set up MNS and OSS event notifications.
This plugin polls events from the MNS queue, extracts object keys from these events, and then reads those objects from OSS. For more information, see https://github.com/aliyun/fluent-plugin-oss
Configuration
Output Config
endpoint (string, required)
OSS endpoint to connect to.
Default: -
bucket (string, required)
Your bucket name
Default: -
access_key_id (*secret.Secret, required)
Your access key id Secret
Default: -
aaccess_key_secret (*secret.Secret, required)
Your access secret key Secret
Default: -
path (string, optional)
Path prefix of the files on OSS
Default: fluent/logs
upload_crc_enable (bool, optional)
Upload crc enabled
Default: true
download_crc_enable (bool, optional)
Download crc enabled
Default: true
open_timeout (int, optional)
Timeout for open connections
Default: 10
read_timeout (int, optional)
Timeout for read response
Default: 120
oss_sdk_log_dir (string, optional)
OSS SDK log directory
Default: /var/log/td-agent
key_format (string, optional)
The format of OSS object keys
Default: %{path}/%{time_slice}%{index}%{thread_id}.%{file_extension}
store_as (string, optional)
Archive format on OSS: gzip, json, text, lzo, lzma2
Default: gzip
auto_create_bucket (bool, optional)
Create the OSS bucket if it does not exist.
Default: false
overwrite (bool, optional)
Overwrite already existing path
Default: false
check_bucket (bool, optional)
Check whether the bucket exists.
Default: true
check_object (bool, optional)
Check object before creation
Default: true
hex_random_length (int, optional)
The length of the %{hex_random} placeholder (4-16)
Default: 4
index_format (string, optional)
sprintf format for %{index}
Default: %d
warn_for_delay (string, optional)
A threshold for treating events as delayed; if delayed events are put into OSS, warning logs are emitted.
Default: -
format (*Format, optional)
Default: -
buffer (*Buffer, optional)
Default: -
slow_flush_log_threshold (string, optional)
The threshold for the chunk flush performance check. The parameter type is float, not time; default: 20.0 (seconds). If a chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the fluentd_output_status_slow_flush_count metric.
Default: -
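Following the parameter reference above, a minimal Output using this plugin might be sketched as follows (the endpoint, bucket, and secret names are hypothetical; the credential field names follow the parameter list above):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: oss-output-sample
spec:
  oss:
    endpoint: oss-cn-hangzhou.aliyuncs.com   # hypothetical OSS endpoint
    bucket: example-logging-bucket           # hypothetical bucket name
    access_key_id:
      valueFrom:
        secretKeyRef:
          name: oss-secret                   # hypothetical secret in the same namespace
          key: accessKeyId
    aaccess_key_secret:
      valueFrom:
        secretKeyRef:
          name: oss-secret
          key: accessKeySecret
    buffer:
      timekey: 1m
      timekey_wait: 30s
```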
8.4.2 - Amazon CloudWatch
CloudWatch output plugin for Fluentd
Overview
This plugin has been designed to output logs or metrics to Amazon CloudWatch.
EventTailer
EventTailerSpec
EventTailerSpec defines the desired state of EventTailer
controlNamespace (string, required)
The resources of EventTailer will be placed into this namespace
Default: -
positionVolume (volume.KubernetesVolume, optional)
Volume definition for tracking fluentbit file positions (optional)
Default: -
workloadMetaOverrides (*types.MetaBase, optional)
Override metadata of the created resources
Default: -
workloadOverrides (*types.PodSpecBase, optional)
Override podSpec fields for the given statefulset
Default: -
containerOverrides (*types.ContainerBase, optional)
Override container fields for the given statefulset
Default: -
EventTailerStatus
EventTailerStatus defines the observed state of EventTailer
EventTailer
EventTailer is the Schema for the eventtailers API
(metav1.TypeMeta, required)
Default: -
metadata (metav1.ObjectMeta, optional)
Default: -
spec (EventTailerSpec, optional)
Default: -
status (EventTailerStatus, optional)
Default: -
EventTailerList
EventTailerList contains a list of EventTailer
(metav1.TypeMeta, required)
Default: -
metadata (metav1.ListMeta, optional)
Default: -
items ([]EventTailer, required)
Default: -
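Putting the spec fields above together, a minimal EventTailer resource might be sketched as follows (the resource name and the logging namespace are illustrative assumptions):

```yaml
apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: EventTailer
metadata:
  name: sample-eventtailer
spec:
  controlNamespace: logging   # namespace where the tailer's resources are created
```

All other fields (positionVolume and the override fields) are optional and can be added to customize the generated statefulset.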