Updates generated docs
Robert Fekete authored and Robert Fekete committed Aug 14, 2024
1 parent 58005e1 commit ddf59a8
Showing 9 changed files with 186 additions and 2 deletions.
41 changes: 41 additions & 0 deletions content/docs/configuration/crds/v1beta1/common_types.md
@@ -40,6 +40,9 @@ Metrics defines the service monitor endpoints
### prometheusRules (bool, optional) {#metrics-prometheusrules}


### prometheusRulesOverride ([]PrometheusRulesOverride, optional) {#metrics-prometheusrulesoverride}


### serviceMonitor (bool, optional) {#metrics-servicemonitor}


@@ -50,6 +53,44 @@



## PrometheusRulesOverride

### alert (string, optional) {#prometheusrulesoverride-alert}

Name of the alert. Must be a valid label value. Exactly one of `record` and `alert` must be set.


### annotations (map[string]string, optional) {#prometheusrulesoverride-annotations}

Annotations to add to each alert. Only valid for alerting rules.


### expr (*intstr.IntOrString, optional) {#prometheusrulesoverride-expr}

PromQL expression to evaluate.


### for (*v1.Duration, optional) {#prometheusrulesoverride-for}

Alerts are considered firing once they have been returned for this long.


### keep_firing_for (*v1.NonEmptyDuration, optional) {#prometheusrulesoverride-keep_firing_for}

KeepFiringFor defines how long an alert will continue firing after the condition that triggered it has cleared.


### labels (map[string]string, optional) {#prometheusrulesoverride-labels}

Labels to add or overwrite.


### record (string, optional) {#prometheusrulesoverride-record}

Name of the time series to output to. Must be a valid metric name. Exactly one of `record` and `alert` must be set.

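For illustration, a minimal sketch of how these fields fit into a `Logging` resource (the alert name, expression, and placement under `spec.fluentd.metrics` are illustrative assumptions, not generated defaults):

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  fluentd:
    metrics:
      prometheusRules: true
      prometheusRulesOverride:
        # Hypothetical override: adjust the wait time and severity of an alert
        - alert: FluentdNodeDown
          expr: up{job="fluentd"} == 0
          for: 10m
          labels:
            severity: critical
{{</ highlight >}}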


## BufferMetrics

BufferMetrics defines the service monitor endpoints
1 change: 1 addition & 0 deletions content/docs/configuration/crds/v1beta1/fluentbit_types.md
@@ -749,6 +749,7 @@ Configurable TTL for K8s cached namespace metadata. (15m)

Include Kubernetes namespace labels on every record

Default: On

### Regex_Parser (string, optional) {#filterkubernetes-regex_parser}

8 changes: 8 additions & 0 deletions content/docs/configuration/crds/v1beta1/logging_types.md
@@ -34,6 +34,14 @@ Namespace for cluster wide configuration resources like ClusterFlow and ClusterO
Default flow for unmatched logs. This Flow configuration collects all logs that didn't match any other Flow.


### enableDockerParserCompatibilityForCRI (bool, optional) {#loggingspec-enabledockerparsercompatibilityforcri}

Enables a log parser that is compatible with the docker parser. This has the following benefits (see the configuration sketch after this list):

- automatically parses JSON logs using the Merge_Log feature
- downstream parsers can use the `log` field instead of the `message` field, just like with the Docker runtime
- the `concat` and `parser` filters are automatically set to use the `log` field
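
A minimal sketch of enabling this option on a `Logging` resource (names are illustrative):

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example
spec:
  controlNamespace: logging
  # Make CRI (containerd/CRI-O) log parsing behave like the docker parser
  enableDockerParserCompatibilityForCRI: true
  fluentd: {}
  fluentbit: {}
{{</ highlight >}}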

### enableRecreateWorkloadOnImmutableFieldChange (bool, optional) {#loggingspec-enablerecreateworkloadonimmutablefieldchange}

EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resources in the future) in case there is a change in an immutable field that otherwise couldn't be managed with a simple update.
@@ -60,7 +60,6 @@ Enumerate all loggings with all the destination namespaces expanded

## LoggingRoute

LoggingRoute (experimental)
Connects a log collector with log aggregators from other logging domains and routes relevant logs based on watch namespaces

### (metav1.TypeMeta, required) {#loggingroute-}
@@ -11,6 +11,10 @@ SyslogNGOutputSpec defines the desired state of SyslogNGOutput
### elasticsearch (*output.ElasticsearchOutput, optional) {#syslogngoutputspec-elasticsearch}


### elasticsearch-datastream (*output.ElasticsearchDatastreamOutput, optional) {#syslogngoutputspec-elasticsearch-datastream}

Available in Logging operator version 4.9 and later.

### file (*output.FileOutput, optional) {#syslogngoutputspec-file}


@@ -37,6 +41,11 @@ Available in Logging operator version 4.4 and later.
### mongodb (*output.MongoDB, optional) {#syslogngoutputspec-mongodb}


### opentelemetry (*output.OpenTelemetryOutput, optional) {#syslogngoutputspec-opentelemetry}

Available in Logging operator version 4.9 and later.


### openobserve (*output.OpenobserveOutput, optional) {#syslogngoutputspec-openobserve}

Available in Logging operator version 4.5 and later.
4 changes: 4 additions & 0 deletions content/docs/configuration/plugins/outputs/forward.md
@@ -111,6 +111,10 @@ Server definitions at least one is required [Server](#fluentd-server)
The threshold for the chunk flush performance check. The parameter type is float, not time. Default: 20.0 (seconds). If the chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the `fluentd_output_status_slow_flush_count` metric.


### time_as_integer (bool, optional) {#forwardoutput-time_as_integer}

Format the time of forwarded events as an epoch integer with second resolution. Useful when forwarding to old (<= 0.12) Fluentd servers.
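
For example, a sketch of a forward `Output` targeting a legacy Fluentd server (host and names are illustrative):

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: forward-legacy
spec:
  forward:
    servers:
      - host: old-fluentd.example.com
        port: 24224
    # Send event time as an epoch integer for old (<= 0.12) Fluentd servers
    time_as_integer: true
{{</ highlight >}}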

### tls_allow_self_signed_cert (bool, optional) {#forwardoutput-tls_allow_self_signed_cert}

Allow self-signed certificates or not.
7 changes: 6 additions & 1 deletion content/docs/configuration/plugins/outputs/kafka.md
@@ -33,7 +33,7 @@ spec:
## Configuration
## Kafka
Send your logs to Kafka. Set `use_rdkafka` to `true` to use the rdkafka2 client, which offers higher performance than ruby-kafka.

### ack_timeout (int, optional) {#kafka-ack_timeout}

@@ -240,6 +240,11 @@ Use default for unknown topics

Default: false

### use_rdkafka (bool, optional) {#kafka-use_rdkafka}

Use rdkafka2 instead of the legacy kafka2 output plugin. This plugin requires Fluentd image version v1.16-4.9-full or higher.
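
A sketch of a Kafka `Output` switching to the rdkafka2 client (broker address and topic are illustrative):

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: kafka-output
spec:
  kafka:
    brokers: kafka-headless.kafka.svc.cluster.local:29092
    default_topic: topic
    format:
      type: json
    # Requires Fluentd image v1.16-4.9-full or higher
    use_rdkafka: true
{{</ highlight >}}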


### username (*secret.Secret, optional) {#kafka-username}

Username when using PLAIN/SCRAM SASL authentication
@@ -0,0 +1,49 @@
---
title: Elasticsearch datastream
weight: 200
generated_file: true
---

## Overview

Based on the [Elasticsearch datastream destination of AxoSyslog](https://axoflow.com/docs/axosyslog-core/chapter-destinations/configuring-destinations-elasticsearch-datastream/).

Available in Logging operator version 4.9 and later.

## Example

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: elasticsearch-datastream
spec:
  elasticsearch-datastream:
    url: "https://elastic-endpoint:9200/my-data-stream/_bulk"
    user: "username"
    password:
      valueFrom:
        secretKeyRef:
          name: elastic
          key: password
{{</ highlight >}}


## Configuration
## ElasticsearchDatastreamOutput

### (HTTPOutput, required) {#elasticsearchdatastreamoutput-}


### disk_buffer (*DiskBuffer, optional) {#elasticsearchdatastreamoutput-disk_buffer}

This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the [Syslog-ng DiskBuffer options](../disk_buffer/).

Default: false

### record (string, optional) {#elasticsearchdatastreamoutput-record}

Arguments to the `$format-json()` template function. Default: `"--scope rfc5424 --exclude DATE --key ISODATE @timestamp=${ISODATE}"`

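As a sketch, the example above extended with a disk buffer (the buffer values are illustrative; see the DiskBuffer options for the full list of fields):

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: elasticsearch-datastream
spec:
  elasticsearch-datastream:
    url: "https://elastic-endpoint:9200/my-data-stream/_bulk"
    # Buffer messages on disk to survive a destination-side outage
    disk_buffer:
      disk_buf_size: 512000000
      dir: /buffers
      reliable: true
{{</ highlight >}}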


@@ -0,0 +1,68 @@
---
title: OpenTelemetry output
weight: 200
generated_file: true
---

## Overview

Sends messages over OpenTelemetry gRPC. For details on the available options of the output, see the [documentation of AxoSyslog](https://axoflow.com/docs/axosyslog-core/chapter-destinations/opentelemetry/).

Available in Logging operator version 4.9 and later.

## Example

A simple example sending logs over OpenTelemetry gRPC to a remote OpenTelemetry endpoint:

{{< highlight yaml >}}
kind: SyslogNGOutput
apiVersion: logging.banzaicloud.io/v1beta1
metadata:
  name: otlp
spec:
  opentelemetry:
    url: otel-server
    port: 4379
{{</ highlight >}}



## Configuration
## OpenTelemetryOutput

### (Batch, required) {#opentelemetryoutput-}

Batching parameters

<!-- FIXME -->


### auth (*Auth, optional) {#opentelemetryoutput-auth}

Authentication configuration, see the [documentation of the AxoSyslog syslog-ng distribution](https://axoflow.com/docs/axosyslog-core/chapter-destinations/destination-syslog-ng-otlp/#auth).


### channel_args (filter.ArrowMap, optional) {#opentelemetryoutput-channel_args}

Adds gRPC channel arguments. For details, see the [AxoSyslog documentation](https://axoflow.com/docs/axosyslog-core/chapter-destinations/opentelemetry/#channel-args).
<!-- FIXME -->


### compression (*bool, optional) {#opentelemetryoutput-compression}

Enable or disable compression.

Default: false

### disk_buffer (*DiskBuffer, optional) {#opentelemetryoutput-disk_buffer}

This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the [Syslog-ng DiskBuffer options](../disk_buffer/).

Default: false

### url (string, required) {#opentelemetryoutput-url}

Specifies the hostname or IP address and optionally the port number of the web service that can receive log data via HTTP. Use a colon (:) after the address to specify the port number of the server. For example: `http://127.0.0.1:8000`

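Putting these options together, a sketch with compression enabled and insecure (plaintext) auth for an in-cluster collector (the endpoint and auth choice are illustrative assumptions):

{{< highlight yaml >}}
kind: SyslogNGOutput
apiVersion: logging.banzaicloud.io/v1beta1
metadata:
  name: otlp-example
spec:
  opentelemetry:
    url: "otel-collector.observability.svc.cluster.local:4317"
    # Enable compression on the gRPC connection
    compression: true
    auth:
      insecure: {}
{{</ highlight >}}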

