From 1c5a73664128ed8ecb06c0a51c63830e6a0fafed Mon Sep 17 00:00:00 2001 From: J Stickler Date: Mon, 8 Apr 2024 15:41:08 -0400 Subject: [PATCH] docs: proper fix for #12510 (#12516) --- .../2020-02-Promtail-Push-API.md | 2 +- docs/sources/configure/bp-configure.md | 2 +- docs/sources/configure/storage.md | 10 ++--- docs/sources/get-started/_index.md | 16 ++++---- .../get-started/labels/structured-metadata.md | 12 +++--- docs/sources/get-started/quick-start.md | 12 +++--- .../operations/query-acceleration-blooms.md | 14 +++---- .../request-validation-rate-limits.md | 26 ++++++------ docs/sources/operations/storage/_index.md | 18 ++++----- docs/sources/operations/storage/retention.md | 20 +++++----- docs/sources/operations/troubleshooting.md | 2 +- docs/sources/operations/upgrade.md | 4 +- docs/sources/query/logcli.md | 2 +- docs/sources/release-notes/v2-3.md | 4 +- docs/sources/release-notes/v2-5.md | 2 +- docs/sources/release-notes/v2-9.md | 4 +- docs/sources/send-data/fluentbit/_index.md | 4 +- .../send-data/lambda-promtail/_index.md | 2 +- docs/sources/send-data/otel/_index.md | 2 +- .../send-data/promtail/cloud/ecs/_index.md | 6 +-- .../send-data/promtail/cloud/eks/_index.md | 2 +- .../install/helm/configure-storage/_index.md | 2 +- .../install/helm/install-scalable/_index.md | 2 +- docs/sources/setup/install/tanka.md | 2 +- .../setup/migrate/migrate-to-tsdb/_index.md | 4 +- docs/sources/setup/upgrade/_index.md | 40 +++++++++---------- 26 files changed, 108 insertions(+), 108 deletions(-) diff --git a/docs/sources/community/design-documents/2020-02-Promtail-Push-API.md b/docs/sources/community/design-documents/2020-02-Promtail-Push-API.md index 77268ee5f55c..9aa46c5434ee 100644 --- a/docs/sources/community/design-documents/2020-02-Promtail-Push-API.md +++ b/docs/sources/community/design-documents/2020-02-Promtail-Push-API.md @@ -66,7 +66,7 @@ rejected pushes. 
Users are recommended to do one of the following: ## Implementation As discussed in this document, this feature will be implemented by copying the -existing [Loki Push API](/docs/loki/latest/api/#post-lokiapiv1push) +existing [Loki Push API](/docs/loki/<LOKI_VERSION>/api/#post-lokiapiv1push) and exposing it via Promtail. ## Considered Alternatives diff --git a/docs/sources/configure/bp-configure.md b/docs/sources/configure/bp-configure.md index 23175d7c1b1a..4b51ef36c118 100644 --- a/docs/sources/configure/bp-configure.md +++ b/docs/sources/configure/bp-configure.md @@ -46,7 +46,7 @@ What can we do about this? What if this was because the sources of these logs we {job="syslog", instance="host2"} 00:00:02 i'm a syslog! <- Accepted, still in order for stream 2 ``` -But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/latest/send-data/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself. +But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself. It's also worth noting that the batching nature of the Loki push API can lead to some instances of out of order errors being received which are really false positives. (Perhaps a batch partially succeeded and was present; or anything that previously succeeded would return an out of order entry; or anything new would be accepted.) 
diff --git a/docs/sources/configure/storage.md b/docs/sources/configure/storage.md index c14483e46dab..bf2faa295a15 100644 --- a/docs/sources/configure/storage.md +++ b/docs/sources/configure/storage.md @@ -12,16 +12,16 @@ even locally on the filesystem. A small index and highly compressed chunks simplifies the operation and significantly lowers the cost of Loki. Loki 2.8 introduced TSDB as a new mode for the Single Store and is now the recommended way to persist data in Loki. -More detailed information about TSDB can be found under the [manage section](https://grafana.com/docs/loki/latest/operations/storage/tsdb/). +More detailed information about TSDB can be found under the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/). Loki 2.0 introduced an index mechanism named 'boltdb-shipper' and is what we now call [Single Store](#single-store). This type only requires one store, the object store, for both the index and chunks. -More detailed information about 'boltdb-shipper' can be found under the [manage section](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/). +More detailed information about 'boltdb-shipper' can be found under the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/). Prior to Loki 2.0, chunks and index data were stored in separate backends: object storage (or filesystem) for chunk data and NoSQL/Key-Value databases for index data. These "multistore" backends have been deprecated, as noted below. -You can find more detailed information about all of the storage options in the [manage section](https://grafana.com/docs/loki/latest/operations/storage/). +You can find more detailed information about all of the storage options in the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/). 
## Single Store @@ -29,7 +29,7 @@ Single Store refers to using object storage as the storage medium for both Loki' ### TSDB (recommended) -Starting in Loki 2.8, the [TSDB index store](https://grafana.com/docs/loki/latest/operations/storage/tsdb/) improves query performance, reduces TCO and has the same feature parity as "boltdb-shipper". +Starting in Loki 2.8, the [TSDB index store](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/) improves query performance, reduces TCO and has the same feature parity as "boltdb-shipper". ### BoltDB (deprecated) @@ -91,7 +91,7 @@ This storage type for chunks is deprecated and may be removed in future major ve ### Cassandra (deprecated) -Cassandra can also be utilized for the index store and aside from the [boltdb-shipper](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering. +Cassandra can also be utilized for the index store and aside from the [boltdb-shipper](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering. {{< collapse title="Title of hidden content" >}} This storage type for indexes is deprecated and may be removed in future major versions of Loki. 
diff --git a/docs/sources/get-started/_index.md b/docs/sources/get-started/_index.md index 5860ec1cc4fa..213a03a490b8 100644 --- a/docs/sources/get-started/_index.md +++ b/docs/sources/get-started/_index.md @@ -17,26 +17,26 @@ To collect logs and view your log data generally involves the following steps: ![Loki implementation steps](loki-install.png) -1. Install Loki on Kubernetes in simple scalable mode, using the recommended [Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/install-scalable/). Supply the Helm chart with your object storage authentication details. - - [Storage options](https://grafana.com/docs/loki/latest/operations/storage/) - - [Configuration reference](https://grafana.com/docs/loki/latest/configure/) - - There are [examples](https://grafana.com/docs/loki/latest/configure/examples/) for specific Object Storage providers that you can modify. +1. Install Loki on Kubernetes in simple scalable mode, using the recommended [Helm chart](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/install-scalable/). Supply the Helm chart with your object storage authentication details. + - [Storage options](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/) + - [Configuration reference](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/) + - There are [examples](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/examples/) for specific Object Storage providers that you can modify. 1. Deploy the [Grafana Agent](https://grafana.com/docs/agent/latest/flow/) to collect logs from your applications. 1. On Kubernetes, deploy the Grafana Agent using the Helm chart. Configure Grafana Agent to scrape logs from your Kubernetes cluster, and add your Loki endpoint details. See the following section for an example Grafana Agent Flow configuration file. - 1. Add [labels](https://grafana.com/docs/loki/latest/get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki/latest/get-started/labels/bp-labels/). 
Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.). + 1. Add [labels](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/bp-labels/). Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.). 1. Deploy [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/) or [Grafana Cloud](https://grafana.com/docs/grafana-cloud/quickstart/) and configure a [Loki datasource](https://grafana.com/docs/grafana/latest/datasources/loki/configure-loki-data-source/). 1. Select the [Explore feature](https://grafana.com/docs/grafana/latest/explore/) in the Grafana main menu. To [view logs in Explore](https://grafana.com/docs/grafana/latest/explore/logs-integration/): 1. Pick a time range. 1. Choose the Loki datasource. - 1. Use [LogQL](https://grafana.com/docs/loki/latest/query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button. + 1. Use [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button. -**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki/latest/query/). +**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/). ## Example Grafana Agent configuration file to ship Kubernetes Pod logs to Loki To deploy Grafana Agent to collect Pod logs from your Kubernetes cluster and ship them to Loki, you an use the Grafana Agent Helm chart, and a `values.yaml` file. -1. 
Install Loki with the [Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/install-scalable/). +1. Install Loki with the [Helm chart](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/install-scalable/). 1. Deploy the Grafana Agent, using the [Grafana Agent Helm chart](https://grafana.com/docs/agent/latest/flow/setup/install/kubernetes/) and this example `values.yaml` file updating the value for `forward_to = [loki.write.endpoint.receiver]`: ```yaml diff --git a/docs/sources/get-started/labels/structured-metadata.md b/docs/sources/get-started/labels/structured-metadata.md index bd66e19c2bf9..f734cc70bd0f 100644 --- a/docs/sources/get-started/labels/structured-metadata.md +++ b/docs/sources/get-started/labels/structured-metadata.md @@ -6,7 +6,7 @@ description: Describes how to enable structure metadata for logs and how to quer # What is structured metadata {{% admonition type="warning" %}} -Structured metadata was added to chunk format V4 which is used if the schema version is greater or equal to `13`. See [Schema Config](https://grafana.com/docs/loki/latest/configure/storage/#schema-config) for more details about schema versions. +Structured metadata was added to chunk format V4 which is used if the schema version is greater or equal to `13`. See [Schema Config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#schema-config) for more details about schema versions. {{% /admonition %}} Selecting proper, low cardinality labels is critical to operating and querying Loki effectively. Some metadata, especially infrastructure related metadata, can be difficult to embed in log lines, and is too high cardinality to effectively store as indexed labels (and therefore reducing performance of the index). 
@@ -29,12 +29,12 @@ It is an antipattern to extract information that already exists in your log line ## Attaching structured metadata to log lines You have the option to attach structured metadata to log lines in the push payload along with each log line and the timestamp. -For more information on how to push logs to Loki via the HTTP endpoint, refer to the [HTTP API documentation](https://grafana.com/docs/loki/latest/reference/api/#ingest-logs). +For more information on how to push logs to Loki via the HTTP endpoint, refer to the [HTTP API documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/api/#ingest-logs). Alternatively, you can use the Grafana Agent or Promtail to extract and attach structured metadata to your log lines. -See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/latest/send-data/promtail/stages/structured_metadata/) for more information. +See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/structured_metadata/) for more information. -With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/latest/send-data/logstash/). +With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/logstash/). {{% admonition type="warning" %}} There are defaults for how much structured metadata can be attached per log line. @@ -52,7 +52,7 @@ There are defaults for how much structured metadata can be attached per log line ## Querying structured metadata Structured metadata is extracted automatically for each returned log line and added to the labels returned for the query. -You can use labels of structured metadata to filter log line using a [label filter expression](https://grafana.com/docs/loki/latest/query/log_queries/#label-filter-expression). 
+You can use labels of structured metadata to filter log line using a [label filter expression](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#label-filter-expression). For example, if you have a label `pod` attached to some of your log lines as structured metadata, you can filter log lines using: @@ -66,7 +66,7 @@ Of course, you can filter by multiple labels of structured metadata at the same {job="example"} | pod="myservice-abc1234-56789" | trace_id="0242ac120002" ``` -Note that since structured metadata is extracted automatically to the results labels, some metric queries might return an error like `maximum of series (50000) reached for a single query`. You can use the [Keep](https://grafana.com/docs/loki/latest/query/log_queries/#keep-labels-expression) and [Drop](https://grafana.com/docs/loki/latest/query/log_queries/#drop-labels-expression) stages to filter out labels that you don't need. +Note that since structured metadata is extracted automatically to the results labels, some metric queries might return an error like `maximum of series (50000) reached for a single query`. You can use the [Keep](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#keep-labels-expression) and [Drop](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#drop-labels-expression) stages to filter out labels that you don't need. For example: ```logql diff --git a/docs/sources/get-started/quick-start.md b/docs/sources/get-started/quick-start.md index b4213e233546..98a966a2d0a9 100644 --- a/docs/sources/get-started/quick-start.md +++ b/docs/sources/get-started/quick-start.md @@ -7,7 +7,7 @@ description: How to create and use a simple local Loki cluster for testing and e # Quickstart to run Loki locally -If you want to experiment with Loki, you can run Loki locally using the Docker Compose file that ships with Loki. 
It runs Loki in a [monolithic deployment](https://grafana.com/docs/loki/latest/get-started/deployment-modes/#monolithic-mode) mode and includes a sample application to generate logs. +If you want to experiment with Loki, you can run Loki locally using the Docker Compose file that ships with Loki. It runs Loki in a [monolithic deployment](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#monolithic-mode) mode and includes a sample application to generate logs. The Docker Compose configuration instantiates the following components, each in its own container: @@ -76,7 +76,7 @@ This quickstart assumes you are running Linux. ## Viewing your logs in Grafana -Once you have collected logs, you will want to view them. You can view your logs using the command line interface, [LogCLI](/docs/loki/latest/query/logcli/), but the easiest way to view your logs is with Grafana. +Once you have collected logs, you will want to view them. You can view your logs using the command line interface, [LogCLI](/docs/loki/<LOKI_VERSION>/query/logcli/), but the easiest way to view your logs is with Grafana. 1. Use Grafana to query the Loki data source. @@ -86,7 +86,7 @@ Once you have collected logs, you will want to view them. You can view your log 1. From the Grafana main menu, click the **Explore** icon (1) to launch the Explore tab. To learn more about Explore, refer the [Explore](https://grafana.com/docs/grafana/latest/explore/) documentation. -1. From the menu in the dashboard header, select the Loki data source (2). This displays the Loki query editor. In the query editor you use the Loki query language, [LogQL](https://grafana.com/docs/loki/latest/query/), to query your logs. +1. From the menu in the dashboard header, select the Loki data source (2). This displays the Loki query editor. In the query editor you use the Loki query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/), to query your logs. 
To learn more about the query editor, refer to the [query editor documentation](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/). 1. The Loki query editor has two modes (3): @@ -106,7 +106,7 @@ Once you have collected logs, you will want to view them. You can view your log {container="evaluate-loki-flog-1"} ``` - In Loki, this is called a log stream. Loki uses [labels](https://grafana.com/docs/loki/latest/get-started/labels/) as metadata to describe log streams. Loki queries always start with a label selector. In the query above, the label selector is `container`. + In Loki, this is called a log stream. Loki uses [labels](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/) as metadata to describe log streams. Loki queries always start with a label selector. In the query above, the label selector is `container`. 1. To view all the log lines which have the container label "grafana": @@ -140,7 +140,7 @@ Once you have collected logs, you will want to view them. You can view your log 1. Select the first choice, **Parse log lines with logfmt parser**, by clicking **Use this query**. 1. On the Explore tab, click **Label browser**, in the dialog select a container and click **Show logs**. -For a thorough introduction to LogQL, refer to the [LogQL reference](https://grafana.com/docs/loki/latest/query/). +For a thorough introduction to LogQL, refer to the [LogQL reference](https://grafana.com/docs/loki/<LOKI_VERSION>/query/). ## Sample queries (code view) @@ -178,7 +178,7 @@ To see every log line that does not contain the value 401: {container="evaluate-loki-flog-1"} != "401" ``` -For more examples, refer to the [query documentation](https://grafana.com/docs/loki/latest/query/query_examples/). +For more examples, refer to the [query documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/query/query_examples/). 
## Complete metrics, logs, traces, and profiling example diff --git a/docs/sources/operations/query-acceleration-blooms.md b/docs/sources/operations/query-acceleration-blooms.md index af1acab835d7..b3ba38a63d27 100644 --- a/docs/sources/operations/query-acceleration-blooms.md +++ b/docs/sources/operations/query-acceleration-blooms.md @@ -196,7 +196,7 @@ Loki will check blooms for any log filtering expression within a query that sati whereas `|~ "f.*oo"` would not be simplifiable. - The filtering expression is a match (`|=`) or regex match (`|~`) filter. We don’t use blooms for not equal (`!=`) or not regex (`!~`) expressions. - For example, `|= "level=error"` would use blooms but `!= "level=error"` would not. -- The filtering expression is placed before a [line format expression](https://grafana.com/docs/loki/latest/query/log_queries/#line-format-expression). +- The filtering expression is placed before a [line format expression](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#line-format-expression). - For example, with `|= "level=error" | logfmt | line_format "ERROR {{.err}}" |= "traceID=3ksn8d4jj3"`, the first filter (`|= "level=error"`) will benefit from blooms but the second one (`|= "traceID=3ksn8d4jj3"`) will not. @@ -213,9 +213,9 @@ Query acceleration introduces a new sharding strategy: `bounded`, which uses blo processed right away during the planning phase in the query frontend, as well as evenly distributes the amount of chunks each sharded query will need to process. 
-[ring]: https://grafana.com/docs/loki/latest/get-started/hash-rings/ -[tenant-limits]: https://grafana.com/docs/loki/latest/configure/#limits_config -[gateway-cfg]: https://grafana.com/docs/loki/latest/configure/#bloom_gateway -[compactor-cfg]: https://grafana.com/docs/loki/latest/configure/#bloom_compactor -[microservices]: https://grafana.com/docs/loki/latest/get-started/deployment-modes/#microservices-mode -[ssd]: https://grafana.com/docs/loki/latest/get-started/deployment-modes/#simple-scalable \ No newline at end of file +[ring]: https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/hash-rings/ +[tenant-limits]: https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config +[gateway-cfg]: https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#bloom_gateway +[compactor-cfg]: https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#bloom_compactor +[microservices]: https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#microservices-mode +[ssd]: https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#simple-scalable \ No newline at end of file diff --git a/docs/sources/operations/request-validation-rate-limits.md b/docs/sources/operations/request-validation-rate-limits.md index 726d8af570f0..108e02598c36 100644 --- a/docs/sources/operations/request-validation-rate-limits.md +++ b/docs/sources/operations/request-validation-rate-limits.md @@ -28,11 +28,11 @@ Rate-limits are enforced when Loki cannot handle more requests from a tenant. This rate-limit is enforced when a tenant has exceeded their configured log ingestion rate-limit. -One solution if you're seeing samples dropped due to `rate_limited` is simply to increase the rate limits on your Loki cluster. These limits can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. The config options to use are `ingestion_rate_mb` and `ingestion_burst_size_mb`. 
+One solution if you're seeing samples dropped due to `rate_limited` is simply to increase the rate limits on your Loki cluster. These limits can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. The config options to use are `ingestion_rate_mb` and `ingestion_burst_size_mb`. Note that you'll want to make sure your Loki cluster has sufficient resources provisioned to be able to accommodate these higher limits. Otherwise your cluster may experience performance degradation as it tries to handle this higher volume of log lines to ingest. - Another option to address samples being dropped due to `rate_limits` is simply to decrease the rate of log lines being sent to your Loki cluster. Consider collecting logs from fewer targets or setting up `drop` stages in Promtail to filter out certain log lines. Promtail's [limits configuration](/docs/loki/latest/send-data/promtail/configuration/#limits_config) also gives you the ability to control the volume of logs Promtail remote writes to your Loki cluster. + Another option to address samples being dropped due to `rate_limits` is simply to decrease the rate of log lines being sent to your Loki cluster. Consider collecting logs from fewer targets or setting up `drop` stages in Promtail to filter out certain log lines. Promtail's [limits configuration](/docs/loki/<LOKI_VERSION>/send-data/promtail/configuration/#limits_config) also gives you the ability to control the volume of logs Promtail remote writes to your Loki cluster. | Property | Value | @@ -50,9 +50,9 @@ This limit is enforced when a single stream reaches its rate-limit. Each stream has a rate-limit applied to it to prevent individual streams from overwhelming the set of ingesters it is distributed to (the size of that set is equal to the `replication_factor` value). 
-This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. The config options to adjust are `per_stream_rate_limit` and `per_stream_rate_limit_burst`. +This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. The config options to adjust are `per_stream_rate_limit` and `per_stream_rate_limit_burst`. -Another option you could consider to decrease the rate of samples dropped due to `per_stream_rate_limit` is to split the stream that is getting rate limited into several smaller streams. A third option is to use Promtail's [limit stage](/docs/loki/latest/send-data/promtail/stages/limit/#limit-stage) to limit the rate of samples sent to the stream hitting the `per_stream_rate_limit`. +Another option you could consider to decrease the rate of samples dropped due to `per_stream_rate_limit` is to split the stream that is getting rate limited into several smaller streams. A third option is to use Promtail's [limit stage](/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/limit/#limit-stage) to limit the rate of samples sent to the stream hitting the `per_stream_rate_limit`. We typically recommend setting `per_stream_rate_limit` no higher than 5MB, and `per_stream_rate_limit_burst` no higher than 20MB. @@ -71,7 +71,7 @@ This limit is enforced when a tenant reaches their maximum number of active stre Active streams are held in memory buffers in the ingesters, and if this value becomes sufficiently large then it will cause the ingesters to run out of memory. 
-This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. To increase the allowable active streams, adjust `max_global_streams_per_user`. Alternatively, the number of active streams can be reduced by removing extraneous labels or removing excessive unique label values. +This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. To increase the allowable active streams, adjust `max_global_streams_per_user`. Alternatively, the number of active streams can be reduced by removing extraneous labels or removing excessive unique label values. | Property | Value | |-------------------------|-------------------------| @@ -90,7 +90,7 @@ Validation errors occur when a request violates a validation rule defined by Lok This error occurs when a log line exceeds the maximum allowable length in bytes. The HTTP response will include the stream to which the offending log line belongs as well as its size in bytes. -This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. To increase the maximum line size, adjust `max_line_size`. We recommend that you do not increase this value above 256kb for performance reasons. Alternatively, Loki can be configured to ingest truncated versions of log lines over the length limit by using the `max_line_size_truncate` option. 
+This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. To increase the maximum line size, adjust `max_line_size`. We recommend that you do not increase this value above 256kb for performance reasons. Alternatively, Loki can be configured to ingest truncated versions of log lines over the length limit by using the `max_line_size_truncate` option. | Property | Value | |-------------------------|------------------| @@ -129,9 +129,9 @@ This validation error is returned when a stream is submitted without any labels. The `too_far_behind` and `out_of_order` reasons are identical. Loki clusters with `unordered_writes=true` (the default value as of Loki v2.4) use `reason=too_far_behind`. Loki clusters with `unordered_writes=false` use `reason=out_of_order`. -This validation error is returned when a stream is submitted out of order. More details can be found [here](/docs/loki/latest/configuration/#accept-out-of-order-writes) about Loki's ordering constraints. +This validation error is returned when a stream is submitted out of order. More details can be found [here](/docs/loki/<LOKI_VERSION>/configuration/#accept-out-of-order-writes) about Loki's ordering constraints. -The `unordered_writes` config value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file, whereas `max_chunk_age` is a global configuration. 
This problem can be solved by ensuring that log delivery is configured correctly, or by increasing the `max_chunk_age` value. @@ -148,7 +148,7 @@ It is recommended to resist modifying the default value of `max_chunk_age` as th If the `reject_old_samples` config option is set to `true` (it is by default), then samples will be rejected with `reason=greater_than_max_sample_age` if they are older than the `reject_old_samples_max_age` value. You should not see samples rejected for `reason=greater_than_max_sample_age` if `reject_old_samples=false`. -This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `reject_old_samples_max_age` value, or investigating why log delivery is delayed for this particular stream. The stream in question will be returned in the body of the HTTP response. +This value can be modified globally in the [`limits_config`](/docs/loki //configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki //configuration/#runtime-configuration-file) file. This error can be solved by increasing the `reject_old_samples_max_age` value, or investigating why log delivery is delayed for this particular stream. The stream in question will be returned in the body of the HTTP response. | Property | Value | |-------------------------|-------------------| @@ -163,7 +163,7 @@ This value can be modified globally in the [`limits_config`](/docs/loki/latest/c If a sample's timestamp is greater than the current timestamp, Loki allows for a certain grace period during which samples will be accepted. If the grace period is exceeded, the error will occur. 
-This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `creation_grace_period` value, or investigating why this particular stream has a timestamp too far into the future. The stream in question will be returned in the body of the HTTP response.
+This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `creation_grace_period` value, or investigating why this particular stream has a timestamp too far into the future. The stream in question will be returned in the body of the HTTP response.
 
 | Property                | Value             |
 |-------------------------|-------------------|
@@ -178,7 +178,7 @@ This value can be modified globally in the [`limits_config`](/docs/loki/latest/c
 
 If a sample is submitted with more labels than Loki has been configured to allow, it will be rejected with the `max_label_names_per_series` reason. Note that 'series' is the same thing as a 'stream' in Loki - the 'series' term is a legacy name.
 
-This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_names_per_series` value. The stream to which the offending sample (i.e. the one with too many label names) belongs will be returned in the body of the HTTP response.
+This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_names_per_series` value. The stream to which the offending sample (i.e. the one with too many label names) belongs will be returned in the body of the HTTP response.
 
 | Property                | Value             |
 |-------------------------|-------------------|
@@ -193,7 +193,7 @@ This value can be modified globally in the [`limits_config`](/docs/loki/latest/c
 
 If a sample is sent with a label name that has a length in bytes greater than Loki has been configured to allow, it will be rejected with the `label_name_too_long` reason.
 
-This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_name_length` value, though we do not recommend raising it significantly above the default value of `1024` for performance reasons. The offending stream will be returned in the body of the HTTP response.
+This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_name_length` value, though we do not recommend raising it significantly above the default value of `1024` for performance reasons. The offending stream will be returned in the body of the HTTP response.
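The label-related limits above could be raised together in one block. A hedged sketch — option names are from the text; verify the exact defaults against the configuration reference for your Loki version before copying values:

```yaml
# Illustrative limits_config fragment for label validation (example values)
limits_config:
  max_label_names_per_series: 30  # a 'series' here means a stream
  max_label_name_length: 1024     # bytes; raising far above 1024 is discouraged
  max_label_value_length: 2048    # bytes
```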
 | Property                | Value             |
 |-------------------------|-------------------|
@@ -208,7 +208,7 @@ This value can be modified globally in the [`limits_config`](/docs/loki/latest/c
 
 If a sample has a label value with a length in bytes greater than Loki has been configured to allow, it will be rejected for the `label_value_too_long` reason.
 
-This value can be modified globally in the [`limits_config`](/docs/loki/latest/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/latest/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_value_length` value. The offending stream will be returned in the body of the HTTP response.
+This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. This error can be solved by increasing the `max_label_value_length` value. The offending stream will be returned in the body of the HTTP response.
 
 | Property                | Value             |
 |-------------------------|-------------------|
diff --git a/docs/sources/operations/storage/_index.md b/docs/sources/operations/storage/_index.md
index 44f08d91bfb0..d73902ea627e 100644
--- a/docs/sources/operations/storage/_index.md
+++ b/docs/sources/operations/storage/_index.md
@@ -6,7 +6,7 @@ weight:
 ---
 # Manage storage
 
-You can read a high level overview of Loki storage [here](https://grafana.com/docs/loki/latest/configure/storage/)
+You can read a high level overview of Loki storage [here](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/)
 
 Grafana Loki needs to store two different types of data: **chunks** and **indexes**.
 
@@ -18,21 +18,21 @@ format](#chunk-format) for how chunks are stored internally.
 
 The **index** stores each stream's label set and links them to the individual chunks.
-Refer to Loki's [configuration](https://grafana.com/docs/loki/latest/configure/) for details on
+Refer to Loki's [configuration](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/) for details on
 how to configure the storage and the index.
 
 For more information:
 
-- [Table Manager](https://grafana.com/docs/loki/latest/operations/storage/table-manager/)
-- [Retention](https://grafana.com/docs/loki/latest/operations/storage/retention/)
-- [Logs Deletion](https://grafana.com/docs/loki/latest/operations/storage/logs-deletion/)
+- [Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/)
+- [Retention](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/retention/)
+- [Logs Deletion](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/logs-deletion/)
 
 ## Supported Stores
 
 The following are supported for the index:
 
-- [TSDB](https://grafana.com/docs/loki/latest/operations/storage/tsdb/) index store which stores TSDB index files in the object store. This is the recommended index store for Loki 2.8 and newer.
-- [Single Store (boltdb-shipper)](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/) index store which stores boltdb index files in the object store.
+- [TSDB](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/) index store which stores TSDB index files in the object store. This is the recommended index store for Loki 2.8 and newer.
+- [Single Store (boltdb-shipper)](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/) index store which stores boltdb index files in the object store.
 - [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
 - [Google Bigtable](https://cloud.google.com/bigtable)
 - [Apache Cassandra](https://cassandra.apache.org)
@@ -76,7 +76,7 @@ When using S3 as object storage, the following permissions are needed:
 
 Resources: `arn:aws:s3:::<bucket_name>`, `arn:aws:s3:::<bucket_name>/*`
 
-See the [AWS deployment section](https://grafana.com/docs/loki/latest/configure/storage/#aws-deployment-s3-single-store) on the storage page for a detailed setup guide.
+See the [AWS deployment section](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#aws-deployment-s3-single-store) on the storage page for a detailed setup guide.
 
 ### DynamoDB
 
@@ -134,7 +134,7 @@ Resources: `arn:aws:iam::<account_id>:role/<role_name>`
 
 When using IBM Cloud Object Storage (COS) as object storage, IAM `Writer` role is needed.
 
-See the [IBM Cloud Object Storage section](https://grafana.com/docs/loki/latest/configure/storage/#ibm-deployment-cos-single-store) on the storage page for a detailed setup guide.
+See the [IBM Cloud Object Storage section](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#ibm-deployment-cos-single-store) on the storage page for a detailed setup guide.
 
 ## Chunk Format
 
diff --git a/docs/sources/operations/storage/retention.md b/docs/sources/operations/storage/retention.md
index de898282c1eb..9d65b3331658 100644
--- a/docs/sources/operations/storage/retention.md
+++ b/docs/sources/operations/storage/retention.md
@@ -16,7 +16,7 @@ If you have a lifecycle policy configured on the object store, please ensure tha
 Granular retention policies to apply retention at per tenant or per stream level are also supported by the Compactor.
 
 {{% admonition type="note" %}}
-The Compactor does not support retention on [legacy index types](https://grafana.com/docs/loki/latest/configure/storage/#index-storage). Please use the [Table Manager](https://grafana.com/docs/loki/latest/operations/storage/table-manager/) when using legacy index types.
+The Compactor does not support retention on [legacy index types](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#index-storage). Please use the [Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/) when using legacy index types.
 Both the Table manager and legacy index types are deprecated and may be removed in future major versions of Loki.
 {{% /admonition %}}
 
@@ -100,7 +100,7 @@ Retention is only available if the index period is 24h. Single store TSDB and si
 
 #### Configuring the retention period
 
-Retention period is configured within the [`limits_config`](https://grafana.com/docs/loki/latest/configure/#limits_config) configuration section.
+Retention period is configured within the [`limits_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config) configuration section.
 
 There are two ways of setting retention policies:
 
@@ -129,7 +129,7 @@ limits_config:
 You can only use label matchers in the `selector` field of a `retention_stream` definition. Arbitrary LogQL expressions are not supported.
 {{% /admonition %}}
 
-Per tenant retention can be defined by configuring [runtime overrides](https://grafana.com/docs/loki/latest/configure/#runtime-configuration-file). For example:
+Per tenant retention can be defined by configuring [runtime overrides](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#runtime-configuration-file). For example:
 
 ```yaml
 overrides:
@@ -181,13 +181,13 @@ The example configurations defined above will result in the following retention
 
 ## Table Manager (deprecated)
 
-Retention through the [Table Manager](https://grafana.com/docs/loki/latest/operations/storage/table-manager/) is
+Retention through the [Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/) is
 achieved by relying on the object store TTL feature, and will work for both
-[boltdb-shipper](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/) store and chunk/index stores.
+[boltdb-shipper](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/) store and chunk/index stores.
 
 In order to enable the retention support, the Table Manager needs to be
 configured to enable deletions and a retention period. Please refer to the
-[`table_manager`](https://grafana.com/docs/loki/latest/configure/#table_manager)
+[`table_manager`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#table_manager)
 section of the Loki configuration reference for all available options.
 Alternatively, the `table-manager.retention-period` and
 `table-manager.retention-deletes-enabled` command line flags can be used. The
@@ -196,13 +196,13 @@ can be parsed using the Prometheus common model [ParseDuration](https://pkg.go.d
 
 {{% admonition type="warning" %}}
 The retention period must be a multiple of the index and chunks table
-`period`, configured in the [`period_config`](https://grafana.com/docs/loki/latest/configure/#period_config) block.
-See the [Table Manager](https://grafana.com/docs/loki/latest/operations/storage/table-manager/#retention) documentation for
+`period`, configured in the [`period_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#period_config) block.
+See the [Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/#retention) documentation for
 more information.
 {{% /admonition %}}
 
 {{% admonition type="note" %}}
-To avoid querying of data beyond the retention period,`max_query_lookback` config in [`limits_config`](https://grafana.com/docs/loki/latest/configure/#limits_config) must be set to a value less than or equal to what is set in `table_manager.retention_period`.
+To avoid querying of data beyond the retention period, `max_query_lookback` config in [`limits_config`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config) must be set to a value less than or equal to what is set in `table_manager.retention_period`.
 {{% /admonition %}}
 
 When using S3 or GCS, the bucket storing the chunks needs to have the expiry
@@ -223,7 +223,7 @@ intact; you will still be able to see related labels but will be unable to
 retrieve the deleted log content.
 
 For further details on the Table Manager internals, refer to the
-[Table Manager](https://grafana.com/docs/loki/latest/operations/storage/table-manager/) documentation.
+[Table Manager](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/table-manager/) documentation.
 
 ## Example Configuration
 
diff --git a/docs/sources/operations/troubleshooting.md b/docs/sources/operations/troubleshooting.md
index 68127a222891..f14b587381db 100644
--- a/docs/sources/operations/troubleshooting.md
+++ b/docs/sources/operations/troubleshooting.md
@@ -81,7 +81,7 @@ Loki cache generation number errors(Loki >= 2.6)
   - Check the metric `loki_delete_cache_gen_load_failures_total` on `/metrics`, which is an indicator for the occurrence of the problem. If the value is greater than 1, it means that there is a problem with that component.
   - Try Http GET request to route: /loki/api/v1/cache/generation_numbers
-    - If response is equal as `"deletion is not available for this tenant"`, this means the deletion API is not enabled for the tenant. To enable this api, set `allow_deletes: true` for this tenant via the configuration settings. Check more docs: /docs/loki/latest/operations/storage/logs-deletion/
+    - If response is equal as `"deletion is not available for this tenant"`, this means the deletion API is not enabled for the tenant. To enable this api, set `allow_deletes: true` for this tenant via the configuration settings.
Check more docs: /docs/loki/<LOKI_VERSION>/operations/storage/logs-deletion/
 
 ## Troubleshooting targets
 
diff --git a/docs/sources/operations/upgrade.md b/docs/sources/operations/upgrade.md
index 5a0be8626e6a..e3dcc03c0e11 100644
--- a/docs/sources/operations/upgrade.md
+++ b/docs/sources/operations/upgrade.md
@@ -6,6 +6,6 @@ weight:
 
 # Upgrade
 
-- [Upgrade](https://grafana.com/docs/loki/latest/setup/upgrade/) from one Loki version to a newer version.
+- [Upgrade](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/) from one Loki version to a newer version.
 
-- [Upgrade Helm](https://grafana.com/docs/loki/latest/setup/upgrade/) from Helm v2.x to Helm v3.x.
+- [Upgrade Helm](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/) from Helm v2.x to Helm v3.x.
diff --git a/docs/sources/query/logcli.md b/docs/sources/query/logcli.md
index 297730a589ee..d22d2bafe668 100644
--- a/docs/sources/query/logcli.md
+++ b/docs/sources/query/logcli.md
@@ -229,7 +229,7 @@ Commands:
     For more information about log queries and metric queries, refer to the
     LogQL documentation:
 
-    https://grafana.com/docs/loki/latest/logql/
+    https://grafana.com/docs/loki/<LOKI_VERSION>/logql/
 
   labels [] [