From c2371c53e7cc1ba841f123bdb668c136070200c0 Mon Sep 17 00:00:00 2001 From: "mergify[bot]" <37929162+mergify[bot]@users.noreply.github.com> Date: Wed, 14 Aug 2024 17:38:31 +0000 Subject: [PATCH] [8.12](backport #4147) Add tags to fix chunking in obs guide (#4152) * Add tags to fix chunking in obs guide (#4147) * Fix chunking * update some leveloffsets --------- Co-authored-by: Colleen McGinnis (cherry picked from commit 048c755d4ec5e6fc47a8219517c4ae41ab9fb562) # Conflicts: # docs/en/observability/apm-ui/api.asciidoc # docs/en/observability/apm/configure/outputs/elasticsearch.asciidoc # docs/en/observability/apm/configure/outputs/kafka.asciidoc # docs/en/observability/apm/configure/outputs/logstash.asciidoc # docs/en/observability/apm/configure/outputs/redis.asciidoc # docs/en/observability/apm/sampling.asciidoc * Resolve conflicts * 2nd pass --------- Co-authored-by: DeDe Morton --- .../apm/configure/outputs/console.asciidoc | 6 ++++ .../configure/outputs/elasticsearch.asciidoc | 29 +++++++++++++++-- .../apm/configure/outputs/kafka.asciidoc | 32 +++++++++++++++++++ .../apm/configure/outputs/logstash.asciidoc | 21 ++++++++++++ .../configure/outputs/output-cloud.asciidoc | 3 +- .../apm/configure/outputs/redis.asciidoc | 22 ++++++++++++- docs/en/observability/apm/sampling.asciidoc | 9 ++++-- 7 files changed, 115 insertions(+), 7 deletions(-) diff --git a/docs/en/observability/apm/configure/outputs/console.asciidoc b/docs/en/observability/apm/configure/outputs/console.asciidoc index c50c4825d5..ce636682e6 100644 --- a/docs/en/observability/apm/configure/outputs/console.asciidoc +++ b/docs/en/observability/apm/configure/outputs/console.asciidoc @@ -33,10 +33,12 @@ ifdef::apm-server[] include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] endif::[] +[float] === Configuration options You can specify the following `output.console` options in the +{beatname_lc}.yml+ config file: +[float] ==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set @@ -44,16 +46,19 @@ to false, the output is disabled. The default value is `true`. +[float] ==== `pretty` If `pretty` is set to true, events written to stdout will be nicely formatted. The default is false. +[float] ==== `codec` Output codec configuration. If the `codec` section is missing, events will be JSON encoded using the `pretty` option. See <> for more information. +[float] ==== `bulk_max_size` The maximum number of events to buffer internally during publishing. The default is 2048. @@ -65,4 +70,5 @@ Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch. +[float] include::codec.asciidoc[leveloffset=+1] \ No newline at end of file diff --git a/docs/en/observability/apm/configure/outputs/elasticsearch.asciidoc b/docs/en/observability/apm/configure/outputs/elasticsearch.asciidoc index 63e68a6336..eccbc39d12 100644 --- a/docs/en/observability/apm/configure/outputs/elasticsearch.asciidoc +++ b/docs/en/observability/apm/configure/outputs/elasticsearch.asciidoc @@ -61,16 +61,19 @@ output.elasticsearch: See <> for details on each authentication method. +[float] === Compatibility This output works with all compatible versions of {es}. See the https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support Matrix]. 
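For orientation, here is a minimal sketch of a secured {es} output assembled from the options documented below; the endpoint and the `id:api_key` pair are placeholders, not working values:

["source","yaml"]
------------------------------------------------------------------------------
output.elasticsearch:
  # Placeholder endpoint; substitute your own {es} URL.
  hosts: ["https://localhost:9200"]
  # One of the authentication methods described below; the value is an
  # illustrative "id:api_key" pair, not a real credential.
  api_key: "example_id:example_api_key"
------------------------------------------------------------------------------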
+[float] === Configuration options You can specify the following options in the `elasticsearch` section of the +{beatname_lc}.yml+ config file: +[float] ==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set @@ -78,7 +81,7 @@ to `false`, the output is disabled. The default value is `true`. - +[float] [[hosts-option]] ==== `hosts` @@ -102,6 +105,7 @@ output.elasticsearch: In the previous example, the {es} nodes are available at `https://10.45.3.2:9220/elasticsearch` and `https://10.45.3.1:9230/elasticsearch`. +[float] ==== `compression_level` The gzip compression level. Setting this value to `0` disables compression. @@ -111,12 +115,14 @@ Increasing the compression level will reduce the network usage but will increase The default value is `0`. +[float] ==== `escape_html` Configure escaping of HTML in strings. Set to `true` to enable escaping. The default value is `false`. +[float] ==== `api_key` Instead of using a username and password, you can use API keys to secure communication @@ -124,6 +130,7 @@ with {es}. The value must be the ID of the API key and the API key joined by a c See <> for more information. +[float] ==== `username` The basic authentication username for connecting to {es}. @@ -131,14 +138,17 @@ The basic authentication username for connecting to {es}. This user needs the privileges required to publish events to {es}. To create a user like this, see <>. +[float] ==== `password` The basic authentication password for connecting to {es}. +[float] ==== `parameters` Dictionary of HTTP parameters to pass within the URL with index operations. +[float] [[protocol-option]] ==== `protocol` @@ -147,6 +157,7 @@ The name of the protocol {es} is reachable on. The options are: <>, the value of `protocol` is overridden by whatever scheme you specify in the URL. +[float] [[path-option]] ==== `path` @@ -154,6 +165,7 @@ An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where {es} listens behind an HTTP reverse proxy that exports the API under a custom prefix. +[float] ==== `headers` Custom HTTP headers to add to each request created by the {es} output. @@ -168,6 +180,7 @@ output.elasticsearch.headers: It is possible to specify multiple header values for the same header name by separating them with a comma. +[float] ==== `proxy_url` The URL of the proxy to use when connecting to the {es} servers. The @@ -303,6 +316,7 @@ values. You cannot specify format strings within the mapping pairs. endif::apm-server[] ifndef::no_ilm[] +[float] [[ilm-es]] ==== `ilm` @@ -312,6 +326,7 @@ See <> for more information. endif::no_ilm[] ifndef::no-pipeline[] +[float] [[pipeline-option-es]] ==== `pipeline` @@ -347,6 +362,7 @@ TIP: To learn how to add custom fields to events, see the See the <> setting for other ways to set the ingest node pipeline dynamically. +[float] [[pipelines-option-es]] ==== `pipelines` @@ -418,6 +434,7 @@ For more information about ingest node pipelines, see endif::[] +[float] ==== `max_retries` ifdef::ignores_max_retries[] @@ -433,16 +450,19 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. endif::[] +[float] ==== `flush_bytes` The bulk request size threshold, in bytes, before flushing to {es}. The value must have a suffix, e.g. `"2MB"`. The default is `1MB`. +[float] ==== `flush_interval` The maximum duration to accumulate events for a bulk request before being flushed to {es}. The value must have a duration suffix, e.g. `"5s"`. The default is `1s`. 
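To illustrate how the two flush settings above interact, here is a sketch with non-default values; the endpoint is a placeholder:

["source","yaml"]
------------------------------------------------------------------------------
output.elasticsearch:
  hosts: ["https://localhost:9200"]  # placeholder endpoint
  flush_bytes: "2MB"    # flush once the buffered bulk request reaches 2MB...
  flush_interval: "5s"  # ...or after 5s, whichever threshold is reached first
------------------------------------------------------------------------------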
+[float] ==== `backoff.init` The number of seconds to wait before trying to reconnect to {es} after @@ -451,16 +471,18 @@ reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`. - +[float] ==== `backoff.max` The maximum number of seconds to wait before attempting to connect to {es} after a network error. The default is `60s`. +[float] ==== `timeout` The HTTP request timeout in seconds for the {es} request. The default is 90. +[float] ==== `ssl` Configuration options for SSL parameters like the certificate authority to use @@ -471,4 +493,5 @@ See the <> or <> for more information. // Elasticsearch security -include::{apm-server-dir}/https.asciidoc[] +[float] +include::{apm-server-dir}/https.asciidoc[leveloffset=+1] diff --git a/docs/en/observability/apm/configure/outputs/kafka.asciidoc b/docs/en/observability/apm/configure/outputs/kafka.asciidoc index 72c32eeeda..b7d952a62a 100644 --- a/docs/en/observability/apm/configure/outputs/kafka.asciidoc +++ b/docs/en/observability/apm/configure/outputs/kafka.asciidoc @@ -44,16 +44,20 @@ ifdef::apm-server[] include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] endif::[] +[float] [[kafka-compatibility]] === Compatibility This output works with all Kafka versions in between 0.11 and 2.2.2. Older versions might work as well, but are not supported. +[float] === Configuration options You can specify the following options in the `kafka` section of the +{beatname_lc}.yml+ config file: + +[float] ==== `enabled` The `enabled` config is a boolean setting to enable or disable the output. If set @@ -66,11 +70,13 @@ ifdef::apm-server[] The default value is `false`. endif::[] +[float] ==== `hosts` The list of Kafka broker addresses from where to fetch the cluster metadata. The cluster metadata contain the actual Kafka brokers events are published to. +[float] ==== `version` Kafka version {beatname_lc} is assumed to run against. Defaults to 1.0.0. @@ -81,15 +87,18 @@ Valid values are all Kafka releases in between `0.8.2.0` and `2.0.0`. See <> for information on supported versions. +[float] ==== `username` The username for connecting to Kafka. If username is configured, the password must be configured as well. +[float] ==== `password` The password for connecting to Kafka. +[float] ==== `sasl.mechanism` beta[] @@ -104,6 +113,7 @@ If `sasl.mechanism` is not set, `PLAIN` is used if `username` and `password` are provided. Otherwise, SASL authentication is disabled. +[float] [[topic-option-kafka]] ==== `topic` @@ -121,6 +131,7 @@ topic: '%{[fields.log_topic]}' See the <> setting for other ways to set the topic dynamically. +[float] [[topics-option-kafka]] ==== `topics` @@ -170,6 +181,7 @@ output.kafka: This configuration results in topics named +critical-{version}+, +error-{version}+, and +logs-{version}+. +[float] ==== `key` Optional formatted string specifying the Kafka event key. If configured, the @@ -178,6 +190,7 @@ event key can be extracted from the event using a format string. See the Kafka documentation for the implications of a particular choice of key; by default, the key is chosen by the Kafka cluster. +[float] ==== `partition` Kafka output broker event partitioning strategy. Must be one of `random`, @@ -204,20 +217,24 @@ available partitions only. NOTE: Publishing to a subset of available partitions potentially increases resource usage because events may become unevenly distributed. 
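As a sketch of the partitioning behavior described above; the broker address and topic are placeholders, and the `round_robin` sub-option name is an assumption based on the libbeat partitioner conventions rather than text from this section:

["source","yaml"]
------------------------------------------------------------------------------
output.kafka:
  hosts: ["kafka1:9092"]  # placeholder broker
  topic: "apm"            # placeholder topic
  partition.round_robin:
    # Publish to all partitions, accepting blocking when one is unreachable,
    # instead of restricting output to reachable partitions only.
    reachable_only: false
------------------------------------------------------------------------------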
+[float] ==== `client_id` The configurable client ID used for logging, debugging, and auditing purposes. The default is "beats". +[float] ==== `worker` The number of concurrent load-balanced Kafka output workers. +[float] ==== `codec` Output codec configuration. If the `codec` section is missing, events will be JSON encoded. See <> for more information. +[float] ==== `metadata` Kafka metadata update settings. The metadata do contain information about @@ -233,6 +250,7 @@ metadata for the configured topics. The default is false. *`retry.backoff`*:: Waiting time between retries during leader elections. Default is `250ms`. +[float] ==== `max_retries` ifdef::ignores_max_retries[] @@ -248,6 +266,7 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. endif::[] +[float] ==== `backoff.init` The number of seconds to wait before trying to republish to Kafka @@ -256,36 +275,44 @@ tries to republish. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful publish, the backoff timer is reset. The default is `1s`. +[float] ==== `backoff.max` The maximum number of seconds to wait before attempting to republish to Kafka after a network error. The default is `60s`. +[float] ==== `bulk_max_size` The maximum number of events to bulk in a single Kafka request. The default is 2048. +[float] ==== `bulk_flush_frequency` Duration to wait before sending bulk Kafka request. 0 is no delay. The default is 0. +[float] ==== `timeout` The number of seconds to wait for responses from the Kafka brokers before timing out. The default is 30 (seconds). +[float] ==== `broker_timeout` The maximum duration a broker will wait for number of required ACKs. The default is `10s`. +[float] ==== `channel_buffer_size` Per Kafka broker number of messages buffered in output pipeline. The default is 256. +[float] ==== `keep_alive` The keep-alive period for an active network connection. If `0s`, keep-alives are disabled. The default is `0s`. +[float] ==== `compression` Sets the output compression codec. Must be one of `none`, `snappy`, `lz4` and `gzip`. The default is `gzip`. @@ -296,6 +323,7 @@ Sets the output compression codec. Must be one of `none`, `snappy`, `lz4` and `g When targeting Azure Event Hub for Kafka, set `compression` to `none` as the provided codecs are not supported. ==== +[float] ==== `compression_level` Sets the compression level used by gzip. Setting this value to 0 disables compression. @@ -305,23 +333,27 @@ Increasing the compression level will reduce the network usage but will increase The default value is 4. +[float] [[kafka-max_message_bytes]] ==== `max_message_bytes` The maximum permitted size of JSON-encoded messages. Bigger messages will be dropped. The default value is 1000000 (bytes). This value should be equal to or less than the broker's `message.max.bytes`. +[float] ==== `required_acks` The ACK reliability level required from broker. 0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1. Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently on error. +[float] ==== `enable_krb5_fast` beta[] Enable Kerberos FAST authentication. This may conflict with some Active Directory installations. It is separate from the standard Kerberos settings because this flag only applies to the Kafka output. The default is `false`. +[float] ==== `ssl` Configuration options for SSL parameters like the root CA for Kafka connections. 
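A sketch of a TLS-secured Kafka output; the broker address and certificate paths are placeholders, and the `ssl.*` keys are assumed to follow the standard libbeat SSL options referenced above:

["source","yaml"]
------------------------------------------------------------------------------
output.kafka:
  hosts: ["kafka1:9093"]  # placeholder TLS listener
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]  # placeholder CA bundle
  ssl.certificate: "/etc/pki/client/cert.pem"            # placeholder client cert
  ssl.key: "/etc/pki/client/cert.key"                    # placeholder client key
------------------------------------------------------------------------------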
diff --git a/docs/en/observability/apm/configure/outputs/logstash.asciidoc b/docs/en/observability/apm/configure/outputs/logstash.asciidoc index 087a12ef0c..b37dced2f8 100644 --- a/docs/en/observability/apm/configure/outputs/logstash.asciidoc +++ b/docs/en/observability/apm/configure/outputs/logstash.asciidoc @@ -160,17 +160,20 @@ output { <5> In this example, `cloud_id` and `cloud_auth` are stored as {logstash-ref}/environment-variables.html[environment variables] <6> For all other event types, index data directly into the predefined APM data streams +[float] === Compatibility This output works with all compatible versions of {ls}. See the https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support Matrix]. +[float] === Configuration options You can specify the following options in the `logstash` section of the +{beatname_lc}.yml+ config file: +[float] ==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set @@ -178,6 +181,7 @@ to false, the output is disabled. The default value is `false`. +[float] [[hosts]] ==== `hosts` The list of known {ls} servers to connect to. If load balancing is disabled, but multiple hosts are configured, one host is selected randomly (there is no precedence). If one host becomes unreachable, another one is selected randomly. All entries in this list can contain a port number. The default port number 5044 will be used if no number is given. +[float] ==== `compression_level` The gzip compression level. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression). Increasing the compression level will reduce the network usage but will increase The default value is 3. +[float] ==== `escape_html` Configure escaping of HTML in strings. Set to `true` to enable escaping. The default value is `false`. +[float] ==== `worker` The number of workers per configured host publishing events to {ls}. This is best used with load balancing mode enabled. Example: If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host). +[float] [[loadbalance]] ==== `loadbalance` If set to true and multiple {ls} hosts are configured, the output plugin load balances published events onto all {ls} hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch to another host if the selected one becomes unresponsive. The default value is false. ["source","yaml"] ------------------------------------------------------------------------------ output.logstash: hosts: ["localhost:5044", "localhost:5045"] loadbalance: true index: {beatname_lc} ------------------------------------------------------------------------------ +[float] ==== `ttl` Time to live for a connection to {ls} after which the connection will be re-established. Useful when {ls} hosts represent load balancers. Since the connections to {ls} hosts are sticky, operating behind load balancers can lead to uneven load distribution between the instances. Specifying a TTL on the connection allows to achieve equal connection distribution between the instances. Specifying a TTL of 0 will disable this feature. The default value is 0. NOTE: The "ttl" option is not yet supported on an asynchronous {ls} client (one with the "pipelining" option set). +[float] ==== `pipelining` Configures the number of batches to be sent asynchronously to {ls} while waiting for ACK from {ls}. Output only becomes blocking once number of `pipelining` batches have been written. Pipelining is disabled if a value of 0 is configured. The default value is 2. +[float] ==== `proxy_url` The URL of the SOCKS5 proxy to use when connecting to the {ls} servers. The value must be a URL with a scheme of `socks5://`. The protocol used to communicate to {ls} is not based on HTTP so a web-proxy cannot be used. If the SOCKS5 proxy server requires client authentication, then a username and password can be embedded in the URL as shown in the example. When using a proxy, hostnames are resolved on the proxy server instead of on the client. You can change this behavior by setting the <> option. ["source","yaml"] ------------------------------------------------------------------------------ output.logstash: hosts: ["remote-host:5044"] proxy_url: socks5://user:password@socks5-proxy:2233 ------------------------------------------------------------------------------ +[float] [[logstash-proxy-use-local-resolver]] ==== `proxy_use_local_resolver` The `proxy_use_local_resolver` option determines if {ls} hostnames are resolved locally when using a proxy. The default value is false, which means that when a proxy is used the name resolution occurs on the proxy server. +[float] [[logstash-index]] ==== `index` The index name added to the events metadata for use by {ls}. The default is "{beatname_lc}". A good practice is to set the index name to match the name of your application and environment, which makes it easier to filter indices (for example, +"{beat_default_index_prefix}-{version}-2017.04.26"+). NOTE: This parameter's value will be assigned to the `metadata.beat` field. It can then be accessed in {ls}'s output section as `%{[@metadata][beat]}`.
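To illustrate the note above, a sketch of setting a custom `index` value on the {beats} side; the host and index name are placeholders. The configured value then surfaces in the {ls} pipeline as `%{[@metadata][beat]}`:

["source","yaml"]
------------------------------------------------------------------------------
output.logstash:
  hosts: ["localhost:5044"]  # placeholder {ls} host
  index: "apm"               # accessible in {ls} as %{[@metadata][beat]}
------------------------------------------------------------------------------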
+[float] ==== `ssl` Configuration options for SSL parameters like the root CA for {ls} connections. See <> for more information. To use SSL, you must also configure the {logstash-ref}/plugins-inputs-beats.html[{beats} input plugin for {ls}] to use SSL/TLS. +[float] ==== `timeout` The number of seconds to wait for responses from the {ls} server before timing out. The default is 30 (seconds). +[float] ==== `max_retries` The number of times to retry publishing an event after a publishing failure. @@ -299,6 +315,7 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. +[float] ==== `bulk_max_size` The maximum number of events to bulk in a single {ls} request. The default is 2048. @@ -316,6 +333,7 @@ Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch. +[float] ==== `slow_start` If enabled, only a subset of events in a batch of events is transferred per transaction. @@ -324,6 +342,7 @@ On error, the number of events per transaction is reduced again. The default is `false`. +[float] ==== `backoff.init` The number of seconds to wait before trying to reconnect to {ls} after @@ -332,10 +351,12 @@ reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`. +[float] ==== `backoff.max` The maximum number of seconds to wait before attempting to connect to {ls} after a network error. The default is `60s`. // Logstash security +[float] include::{apm-server-dir}/shared-ssl-logstash-config.asciidoc[] diff --git a/docs/en/observability/apm/configure/outputs/output-cloud.asciidoc b/docs/en/observability/apm/configure/outputs/output-cloud.asciidoc index 5b5e0ff12e..c8ec4ea220 100644 --- a/docs/en/observability/apm/configure/outputs/output-cloud.asciidoc +++ b/docs/en/observability/apm/configure/outputs/output-cloud.asciidoc @@ -40,13 +40,14 @@ These settings can be also specified at the command line, like this: {beatname_lc} -e -E cloud.id="" -E cloud.auth="" ------------------------------------------------------------------------------ - +[float] === `cloud.id` The Cloud ID, which can be found in the {ess} web console, is used by {beatname_uc} to resolve the {es} and {kib} URLs. This setting overwrites the `output.elasticsearch.hosts` and `setup.kibana.host` settings. +[float] === `cloud.auth` When specified, the `cloud.auth` overwrites the `output.elasticsearch.username` and diff --git a/docs/en/observability/apm/configure/outputs/redis.asciidoc b/docs/en/observability/apm/configure/outputs/redis.asciidoc index 013a981101..75153951df 100644 --- a/docs/en/observability/apm/configure/outputs/redis.asciidoc +++ b/docs/en/observability/apm/configure/outputs/redis.asciidoc @@ -37,15 +37,18 @@ ifdef::apm-server[] include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] endif::[] +[float] === Compatibility This output is expected to work with all Redis versions between 3.2.4 and 5.0.8. Other versions might work as well, but are not supported. +[float] === Configuration options You can specify the following `output.redis` options in the +{beatname_lc}.yml+ config file: +[float] ==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set @@ -53,6 +56,7 @@ to false, the output is disabled. The default value is `true`. 
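Before walking through the individual options, here is a minimal sketch of an `output.redis` section using settings documented below; the host and key are placeholders:

["source","yaml"]
------------------------------------------------------------------------------
output.redis:
  hosts: ["localhost:6379"]  # placeholder Redis server
  key: "apm"                 # placeholder list key; see `key` below
  db: 0                      # default Redis database
  timeout: 5                 # connection timeout in seconds
------------------------------------------------------------------------------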
+[float] ==== `hosts` The list of Redis servers to connect to. If load balancing is enabled, the events are @@ -67,10 +71,12 @@ The `redis` scheme will disable the `ssl` settings for the host, while `rediss` will enforce TLS. If `rediss` is specified and no `ssl` settings are configured, the output uses the system certificate store. +[float] ==== `index` The index name added to the events metadata for use by {ls}. The default is "{beatname_lc}". +[float] [[key-option-redis]] ==== `key` @@ -91,6 +97,7 @@ output.redis: See the <> setting for other ways to set the key dynamically. +[float] [[keys-option-redis]] ==== `keys` @@ -138,14 +145,17 @@ output.redis: mysql: "backend_list" ------------------------------------------------------------------------------ +[float] ==== `password` The password to authenticate with. The default is no authentication. +[float] ==== `db` The Redis database number where the events are published. The default is 0. +[float] ==== `datatype` The Redis data type to use for publishing events. If the data type is `list`, the @@ -154,27 +164,32 @@ If the data type `channel` is used, the Redis `PUBLISH` command is used and mean are pushed to the pub/sub mechanism of Redis. The name of the channel is the one defined under `key`. The default value is `list`. +[float] ==== `codec` Output codec configuration. If the `codec` section is missing, events will be JSON encoded. See <> for more information. +[float] ==== `worker` The number of workers to use for each host configured to publish events to Redis. Use this setting along with the `loadbalance` option. For example, if you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host). +[float] ==== `loadbalance` If set to true and multiple hosts or workers are configured, the output plugin load balances published events onto all Redis hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch to another host if the currently selected one becomes unreachable. The default value is true. +[float] ==== `timeout` The Redis connection timeout in seconds. The default is 5 seconds. +[float] ==== `backoff.init` The number of seconds to wait before trying to reconnect to Redis after @@ -183,11 +198,13 @@ reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`. +[float] ==== `backoff.max` The maximum number of seconds to wait before attempting to connect to Redis after a network error. The default is `60s`. +[float] ==== `max_retries` ifdef::ignores_max_retries[] @@ -203,7 +220,7 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. endif::[] - +[float] ==== `bulk_max_size` The maximum number of events to bulk in a single Redis request or pipeline. The default is 2048. @@ -221,12 +238,14 @@ Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch. +[float] ==== `ssl` Configuration options for SSL parameters like the root CA for Redis connections guarded by SSL proxies (for example https://www.stunnel.org[stunnel]). See <> for more information. +[float] ==== `proxy_url` The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The value must be a URL with a scheme of `socks5://`. The protocol used to communicate to Redis is not based on HTTP so a web-proxy cannot be used. If the SOCKS5 proxy server requires client authentication, then a username and password can be embedded in the URL. When using a proxy, hostnames are resolved on the proxy server instead of on the client.
You can change this behavior by setting the <> option. +[float] [[redis-proxy-use-local-resolver]] ==== `proxy_use_local_resolver` diff --git a/docs/en/observability/apm/sampling.asciidoc b/docs/en/observability/apm/sampling.asciidoc index 071805d6eb..1c374a0b4e 100644 --- a/docs/en/observability/apm/sampling.asciidoc +++ b/docs/en/observability/apm/sampling.asciidoc @@ -135,17 +135,20 @@ Regardless of the above, cost conscious customers are likely to be fine with a l There are three ways to adjust the head-based sampling rate of your APM agents: +[float] ===== Dynamic configuration The transaction sample rate can be changed dynamically (no redeployment necessary) on a per-service and per-environment basis with {kibana-ref}/agent-configuration.html[{apm-agent} Configuration] in {kib}. +[float] ===== {kib} API configuration {apm-agent} configuration exposes an API that can be used to programmatically change your agents' sampling rate. An example is provided in the {kibana-ref}/agent-config-api.html[Agent configuration API reference]. +[float] ===== {apm-agent} configuration Each agent provides a configuration value used to set the transaction sample rate. @@ -178,6 +181,7 @@ IMPORTANT: Please note that from version `8.3.1` APM Server implements a default but, due to how the limit is calculated and enforced the actual disk space may still grow slightly over the limit. +[float] ===== Example configuration This example defines three tail-based sampling policies: @@ -197,14 +201,15 @@ This example defines three tail-based sampling policies: <3> Default policy to sample all remaining traces at 10%, e.g. traces in a different environment, like `dev`, or traces with any other name +[float] ===== Configuration reference **Top-level tail-based sampling settings:** -:leveloffset: +3 +:leveloffset: +4 include::./configure/sampling.asciidoc[tag=tbs-top] **Policy settings:** include::./configure/sampling.asciidoc[tag=tbs-policy] -:leveloffset: -3 +:leveloffset: -4
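For orientation, a sketch of what such a policy list might look like in `apm-server.yml`; the `sampling.tail` keys are assumed from the configuration reference included above, and the environments, trace name, and rates are illustrative, not prescriptive:

["source","yaml"]
------------------------------------------------------------------------------
apm-server:
  sampling:
    tail:
      enabled: true
      interval: 1m
      policies:
        # Hypothetical high-value transaction kept at a higher rate.
        - service.environment: production
          trace.name: "checkout"
          sample_rate: 0.5
        # Everything else in production at a lower rate.
        - service.environment: production
          sample_rate: 0.2
        # Default catch-all policy: sample remaining traces at 10%.
        - sample_rate: 0.1
------------------------------------------------------------------------------

Policies are matched top to bottom, so the catch-all entry with only `sample_rate` must come last.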