docs: backport #12516 to 3.0 branch #12521

Merged 2 commits on Apr 8, 2024
@@ -66,7 +66,7 @@ rejected pushes. Users are recommended to do one of the following:
## Implementation

As discussed in this document, this feature will be implemented by copying the
existing [Loki Push API](/docs/loki/latest/api/#post-lokiapiv1push)
existing [Loki Push API](/docs/loki/<LOKI_VERSION>/api/#post-lokiapiv1push)
and exposing it via Promtail.
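
As an illustration of what the copied API looks like when exposed by Promtail, here is a minimal sketch of a `loki_push_api` scrape config; the job name, port, and labels are assumptions for illustration, not values taken from the proposal.

```yaml
# Hedged sketch of a Promtail scrape config exposing the Loki Push API target.
# Job name, port, and labels are illustrative assumptions.
scrape_configs:
  - job_name: push_receiver
    loki_push_api:
      server:
        http_listen_port: 3500     # assumption: any free port
      labels:
        source: loki_push_api      # static label attached to received logs
      use_incoming_timestamp: true # keep the timestamp sent by the client
```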

## Considered Alternatives
2 changes: 1 addition & 1 deletion docs/sources/configure/bp-configure.md
@@ -46,7 +46,7 @@ What can we do about this? What if this was because the sources of these logs we
{job="syslog", instance="host2"} 00:00:02 i'm a syslog! <- Accepted, still in order for stream 2
```

But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/latest/send-data/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.
But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.

It's also worth noting that the batching nature of the Loki push API can lead to some instances of out of order errors being received which are really false positives. (Perhaps a batch partially succeeded and was present; or anything that previously succeeded would return an out of order entry; or anything new would be accepted.)
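
As a concrete illustration of the timestamp stage mentioned above, a minimal Promtail pipeline sketch follows; the log path, JSON field name, and format are assumptions. Dropping the `timestamp` stage is what lets Promtail assign its own timestamps instead.

```yaml
# Minimal sketch: extract the timestamp from a JSON log field.
# Removing the timestamp stage makes Promtail use the scrape time instead.
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app/*.log   # assumption: example log path
    pipeline_stages:
      - json:
          expressions:
            ts: time                     # assumption: JSON field named "time"
      - timestamp:
          source: ts
          format: RFC3339
```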

10 changes: 5 additions & 5 deletions docs/sources/configure/storage.md
@@ -12,24 +12,24 @@ even locally on the filesystem. A small index and highly compressed chunks
simplifies the operation and significantly lowers the cost of Loki.

Loki 2.8 introduced TSDB as a new mode for the Single Store and is now the recommended way to persist data in Loki.
More detailed information about TSDB can be found under the [manage section](https://grafana.com/docs/loki/latest/operations/storage/tsdb/).
More detailed information about TSDB can be found under the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/).

Loki 2.0 introduced an index mechanism named 'boltdb-shipper' and is what we now call [Single Store](#single-store).
This type only requires one store, the object store, for both the index and chunks.
More detailed information about 'boltdb-shipper' can be found under the [manage section](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/).
More detailed information about 'boltdb-shipper' can be found under the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/).

Prior to Loki 2.0, chunks and index data were stored in separate backends:
object storage (or filesystem) for chunk data and NoSQL/Key-Value databases for index data. These "multistore" backends have been deprecated, as noted below.

You can find more detailed information about all of the storage options in the [manage section](https://grafana.com/docs/loki/latest/operations/storage/).
You can find more detailed information about all of the storage options in the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/).

## Single Store

Single Store refers to using object storage as the storage medium for both Loki's index as well as its data ("chunks"). There are two supported modes:

### TSDB (recommended)

Starting in Loki 2.8, the [TSDB index store](https://grafana.com/docs/loki/latest/operations/storage/tsdb/) improves query performance, reduces TCO and has the same feature parity as "boltdb-shipper".
Starting in Loki 2.8, the [TSDB index store](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/) improves query performance, reduces TCO and has the same feature parity as "boltdb-shipper".
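
To make the TSDB recommendation concrete, a minimal single-store sketch follows; the schema date, paths, filesystem object store, and `v13` schema version are assumptions for illustration rather than required values.

```yaml
# Hedged sketch of a TSDB single-store setup. Dates, paths, and the
# filesystem object store are placeholders; substitute your own backend.
schema_config:
  configs:
    - from: 2024-04-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h
storage_config:
  tsdb_shipper:
    active_index_directory: /loki/tsdb-index
    cache_location: /loki/tsdb-cache
  filesystem:
    directory: /loki/chunks
```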

### BoltDB (deprecated)

@@ -91,7 +91,7 @@ This storage type for chunks is deprecated and may be removed in future major versions of Loki.

### Cassandra (deprecated)

Cassandra can also be utilized for the index store and aside from the [boltdb-shipper](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering.
Cassandra can also be utilized for the index store and aside from the [boltdb-shipper](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering.

{{< collapse title="Title of hidden content" >}}
This storage type for indexes is deprecated and may be removed in future major versions of Loki.
16 changes: 8 additions & 8 deletions docs/sources/get-started/_index.md
@@ -17,26 +17,26 @@ To collect logs and view your log data generally involves the following steps:

![Loki implementation steps](loki-install.png)

1. Install Loki on Kubernetes in simple scalable mode, using the recommended [Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/install-scalable/). Supply the Helm chart with your object storage authentication details.
- [Storage options](https://grafana.com/docs/loki/latest/operations/storage/)
- [Configuration reference](https://grafana.com/docs/loki/latest/configure/)
- There are [examples](https://grafana.com/docs/loki/latest/configure/examples/) for specific Object Storage providers that you can modify.
1. Install Loki on Kubernetes in simple scalable mode, using the recommended [Helm chart](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/install-scalable/). Supply the Helm chart with your object storage authentication details.
- [Storage options](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/)
- [Configuration reference](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/)
- There are [examples](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/examples/) for specific Object Storage providers that you can modify.
1. Deploy the [Grafana Agent](https://grafana.com/docs/agent/latest/flow/) to collect logs from your applications.
1. On Kubernetes, deploy the Grafana Agent using the Helm chart. Configure Grafana Agent to scrape logs from your Kubernetes cluster, and add your Loki endpoint details. See the following section for an example Grafana Agent Flow configuration file.
1. Add [labels](https://grafana.com/docs/loki/latest/get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki/latest/get-started/labels/bp-labels/). Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.).
1. Add [labels](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/bp-labels/). Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.).
1. Deploy [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/) or [Grafana Cloud](https://grafana.com/docs/grafana-cloud/quickstart/) and configure a [Loki datasource](https://grafana.com/docs/grafana/latest/datasources/loki/configure-loki-data-source/).
1. Select the [Explore feature](https://grafana.com/docs/grafana/latest/explore/) in the Grafana main menu. To [view logs in Explore](https://grafana.com/docs/grafana/latest/explore/logs-integration/):
1. Pick a time range.
1. Choose the Loki datasource.
1. Use [LogQL](https://grafana.com/docs/loki/latest/query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button.
1. Use [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button.

**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki/latest/query/).
**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).
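
The data source step above can also be provisioned as code; the sketch below only illustrates Grafana's provisioning format, and the URL assumes the Helm chart's default `loki-gateway` service in a `loki` namespace.

```yaml
# Hedged sketch of a Grafana provisioning file, e.g. provisioning/datasources/loki.yaml.
# The URL is an assumption based on the Helm chart's gateway service.
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki-gateway.loki.svc.cluster.local:80
```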

## Example Grafana Agent configuration file to ship Kubernetes Pod logs to Loki

To deploy Grafana Agent to collect Pod logs from your Kubernetes cluster and ship them to Loki, you can use the Grafana Agent Helm chart and a `values.yaml` file.

1. Install Loki with the [Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/install-scalable/).
1. Install Loki with the [Helm chart](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/install-scalable/).
1. Deploy the Grafana Agent, using the [Grafana Agent Helm chart](https://grafana.com/docs/agent/latest/flow/setup/install/kubernetes/) and this example `values.yaml` file updating the value for `forward_to = [loki.write.endpoint.receiver]`:

12 changes: 6 additions & 6 deletions docs/sources/get-started/quick-start.md
@@ -7,7 +7,7 @@ description: How to create and use a simple local Loki cluster for testing and e

# Quickstart to run Loki locally

If you want to experiment with Loki, you can run Loki locally using the Docker Compose file that ships with Loki. It runs Loki in a [monolithic deployment](https://grafana.com/docs/loki/latest/get-started/deployment-modes/#monolithic-mode) mode and includes a sample application to generate logs.
If you want to experiment with Loki, you can run Loki locally using the Docker Compose file that ships with Loki. It runs Loki in a [monolithic deployment](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#monolithic-mode) mode and includes a sample application to generate logs.

The Docker Compose configuration instantiates the following components, each in its own container:

@@ -76,7 +76,7 @@ This quickstart assumes you are running Linux.

## Viewing your logs in Grafana

Once you have collected logs, you will want to view them. You can view your logs using the command line interface, [LogCLI](/docs/loki/latest/query/logcli/), but the easiest way to view your logs is with Grafana.
Once you have collected logs, you will want to view them. You can view your logs using the command line interface, [LogCLI](/docs/loki/<LOKI_VERSION>/query/logcli/), but the easiest way to view your logs is with Grafana.

1. Use Grafana to query the Loki data source.

@@ -86,7 +86,7 @@ Once you have collected logs, you will want to view them. You can view your log

1. From the Grafana main menu, click the **Explore** icon (1) to launch the Explore tab. To learn more about Explore, refer the [Explore](https://grafana.com/docs/grafana/latest/explore/) documentation.

1. From the menu in the dashboard header, select the Loki data source (2). This displays the Loki query editor. In the query editor you use the Loki query language, [LogQL](https://grafana.com/docs/loki/latest/query/), to query your logs.
1. From the menu in the dashboard header, select the Loki data source (2). This displays the Loki query editor. In the query editor you use the Loki query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/), to query your logs.
To learn more about the query editor, refer to the [query editor documentation](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/).

1. The Loki query editor has two modes (3):
@@ -106,7 +106,7 @@ Once you have collected logs, you will want to view them. You can view your log
{container="evaluate-loki-flog-1"}
```

In Loki, this is called a log stream. Loki uses [labels](https://grafana.com/docs/loki/latest/get-started/labels/) as metadata to describe log streams. Loki queries always start with a label selector. In the query above, the label selector is `container`.
In Loki, this is called a log stream. Loki uses [labels](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/) as metadata to describe log streams. Loki queries always start with a label selector. In the query above, the label selector is `container`.

1. To view all the log lines which have the container label "grafana":

@@ -140,7 +140,7 @@ Once you have collected logs, you will want to view them. You can view your log
1. Select the first choice, **Parse log lines with logfmt parser**, by clicking **Use this query**.
1. On the Explore tab, click **Label browser**, in the dialog select a container and click **Show logs**.

For a thorough introduction to LogQL, refer to the [LogQL reference](https://grafana.com/docs/loki/latest/query/).
For a thorough introduction to LogQL, refer to the [LogQL reference](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).

## Sample queries (code view)

@@ -178,7 +178,7 @@ To see every log line that does not contain the value 401:
{container="evaluate-loki-flog-1"} != "401"
```

For more examples, refer to the [query documentation](https://grafana.com/docs/loki/latest/query/query_examples/).
For more examples, refer to the [query documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/query/query_examples/).

## Complete metrics, logs, traces, and profiling example

14 changes: 7 additions & 7 deletions docs/sources/operations/query-acceleration-blooms.md
@@ -196,7 +196,7 @@ Loki will check blooms for any log filtering expression within a query that sati
whereas `|~ "f.*oo"` would not be simplifiable.
- The filtering expression is a match (`|=`) or regex match (`|~`) filter. We don’t use blooms for not equal (`!=`) or not regex (`!~`) expressions.
- For example, `|= "level=error"` would use blooms but `!= "level=error"` would not.
- The filtering expression is placed before a [line format expression](https://grafana.com/docs/loki/latest/query/log_queries/#line-format-expression).
- The filtering expression is placed before a [line format expression](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#line-format-expression).
- For example, with `|= "level=error" | logfmt | line_format "ERROR {{.err}}" |= "traceID=3ksn8d4jj3"`,
the first filter (`|= "level=error"`) will benefit from blooms but the second one (`|= "traceID=3ksn8d4jj3"`) will not.

@@ -213,9 +213,9 @@ Query acceleration introduces a new sharding strategy: `bounded`, which uses blo
processed right away during the planning phase in the query frontend,
as well as evenly distributes the amount of chunks each sharded query will need to process.

[ring]: https://grafana.com/docs/loki/latest/get-started/hash-rings/
[tenant-limits]: https://grafana.com/docs/loki/latest/configure/#limits_config
[gateway-cfg]: https://grafana.com/docs/loki/latest/configure/#bloom_gateway
[compactor-cfg]: https://grafana.com/docs/loki/latest/configure/#bloom_compactor
[microservices]: https://grafana.com/docs/loki/latest/get-started/deployment-modes/#microservices-mode
[ssd]: https://grafana.com/docs/loki/latest/get-started/deployment-modes/#simple-scalable
[ring]: https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/hash-rings/
[tenant-limits]: https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#limits_config
[gateway-cfg]: https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#bloom_gateway
[compactor-cfg]: https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#bloom_compactor
[microservices]: https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#microservices-mode
[ssd]: https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#simple-scalable
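
For orientation, a hedged sketch of switching on the components referenced by these links follows; the option names assume the Loki 3.0 configuration reference and should be verified against it before use.

```yaml
# Hedged sketch: enable the experimental bloom components and the matching
# per-tenant limits. Verify option names against the configuration reference.
bloom_compactor:
  enabled: true
bloom_gateway:
  enabled: true
limits_config:
  bloom_compactor_enable_compaction: true
  bloom_gateway_enable_filtering: true
```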