docs: proper fix for #12510 (#12516)
JStickler authored Apr 8, 2024
1 parent 0925f3a commit 1c5a736
Showing 26 changed files with 108 additions and 108 deletions.
Original file line number Diff line number Diff line change
@@ -66,7 +66,7 @@ rejected pushes. Users are recommended to do one of the following:
## Implementation

As discussed in this document, this feature will be implemented by copying the
existing [Loki Push API](/docs/loki/latest/api/#post-lokiapiv1push)
existing [Loki Push API](/docs/loki/<LOKI_VERSION>/api/#post-lokiapiv1push)
and exposing it via Promtail.

## Considered Alternatives
2 changes: 1 addition & 1 deletion docs/sources/configure/bp-configure.md
@@ -46,7 +46,7 @@ What can we do about this? What if this was because the sources of these logs we
{job="syslog", instance="host2"} 00:00:02 i'm a syslog! <- Accepted, still in order for stream 2
```

But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/latest/send-data/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.
But what if the application itself generated logs that were out of order? Well, I'm afraid this is a problem. If you are extracting the timestamp from the log line with something like [the Promtail pipeline stage](/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/timestamp/), you could instead _not_ do this and let Promtail assign a timestamp to the log lines. Or you can hopefully fix it in the application itself.
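The timestamp-extraction setup this paragraph warns about looks roughly like the following Promtail pipeline sketch (the job name, file path, and regex are illustrative, not from this document). Removing the `timestamp` stage lets Promtail assign its own timestamp at scrape time, which avoids out-of-order entries from the application:

```yaml
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app/*.log   # illustrative path
    pipeline_stages:
      # Extract an RFC3339 timestamp from the start of each line...
      - regex:
          expression: '^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z)'
      # ...and use it as the entry's timestamp. Deleting this stage
      # makes Promtail use the scrape time instead.
      - timestamp:
          source: ts
          format: RFC3339
```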

It's also worth noting that the batching nature of the Loki push API can lead to some instances of out of order errors being received which are really false positives. (Perhaps a batch partially succeeded and was retried; anything that previously succeeded would return an out-of-order error, while anything new would be accepted.)

10 changes: 5 additions & 5 deletions docs/sources/configure/storage.md
@@ -12,24 +12,24 @@ even locally on the filesystem. A small index and highly compressed chunks
simplifies the operation and significantly lowers the cost of Loki.

Loki 2.8 introduced TSDB as a new mode for the Single Store and is now the recommended way to persist data in Loki.
More detailed information about TSDB can be found under the [manage section](https://grafana.com/docs/loki/latest/operations/storage/tsdb/).
More detailed information about TSDB can be found under the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/).

Loki 2.0 introduced an index mechanism named 'boltdb-shipper' and is what we now call [Single Store](#single-store).
This type only requires one store, the object store, for both the index and chunks.
More detailed information about 'boltdb-shipper' can be found under the [manage section](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/).
More detailed information about 'boltdb-shipper' can be found under the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/).

Prior to Loki 2.0, chunks and index data were stored in separate backends:
object storage (or filesystem) for chunk data and NoSQL/Key-Value databases for index data. These "multistore" backends have been deprecated, as noted below.

You can find more detailed information about all of the storage options in the [manage section](https://grafana.com/docs/loki/latest/operations/storage/).
You can find more detailed information about all of the storage options in the [manage section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/).

## Single Store

Single Store refers to using object storage as the storage medium for both Loki's index as well as its data ("chunks"). There are two supported modes:

### TSDB (recommended)

Starting in Loki 2.8, the [TSDB index store](https://grafana.com/docs/loki/latest/operations/storage/tsdb/) improves query performance, reduces TCO, and has feature parity with "boltdb-shipper".
Starting in Loki 2.8, the [TSDB index store](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/) improves query performance, reduces TCO, and has feature parity with "boltdb-shipper".
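As a rough sketch of what enabling TSDB involves (the date, paths, and object store below are placeholders, not values from this commit), the Loki config pairs a `tsdb` schema entry with `tsdb_shipper` storage settings:

```yaml
schema_config:
  configs:
    - from: "2024-04-01"      # placeholder start date for the new period
      store: tsdb             # use the TSDB single-store index
      object_store: s3        # placeholder backend
      schema: v13
      index:
        prefix: index_
        period: 24h

storage_config:
  tsdb_shipper:
    active_index_directory: /loki/tsdb-index   # placeholder paths
    cache_location: /loki/tsdb-cache
```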

### BoltDB (deprecated)

@@ -91,7 +91,7 @@ This storage type for chunks is deprecated and may be removed in future major ve

### Cassandra (deprecated)

Cassandra can also be utilized for the index store and aside from the [boltdb-shipper](https://grafana.com/docs/loki/latest/operations/storage/boltdb-shipper/), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering.
Cassandra can also be utilized for the index store and aside from the [boltdb-shipper](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering.

{{< collapse title="Title of hidden content" >}}
This storage type for indexes is deprecated and may be removed in future major versions of Loki.
16 changes: 8 additions & 8 deletions docs/sources/get-started/_index.md
@@ -17,26 +17,26 @@ To collect logs and view your log data generally involves the following steps:

![Loki implementation steps](loki-install.png)

1. Install Loki on Kubernetes in simple scalable mode, using the recommended [Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/install-scalable/). Supply the Helm chart with your object storage authentication details.
- [Storage options](https://grafana.com/docs/loki/latest/operations/storage/)
- [Configuration reference](https://grafana.com/docs/loki/latest/configure/)
- There are [examples](https://grafana.com/docs/loki/latest/configure/examples/) for specific Object Storage providers that you can modify.
1. Install Loki on Kubernetes in simple scalable mode, using the recommended [Helm chart](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/install-scalable/). Supply the Helm chart with your object storage authentication details.
- [Storage options](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/)
- [Configuration reference](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/)
- There are [examples](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/examples/) for specific Object Storage providers that you can modify.
1. Deploy the [Grafana Agent](https://grafana.com/docs/agent/latest/flow/) to collect logs from your applications.
1. On Kubernetes, deploy the Grafana Agent using the Helm chart. Configure Grafana Agent to scrape logs from your Kubernetes cluster, and add your Loki endpoint details. See the following section for an example Grafana Agent Flow configuration file.
1. Add [labels](https://grafana.com/docs/loki/latest/get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki/latest/get-started/labels/bp-labels/). Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.).
1. Add [labels](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/) to your logs following our [best practices](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/bp-labels/). Most Loki users start by adding labels which describe where the logs are coming from (region, cluster, environment, etc.).
1. Deploy [Grafana](https://grafana.com/docs/grafana/latest/setup-grafana/) or [Grafana Cloud](https://grafana.com/docs/grafana-cloud/quickstart/) and configure a [Loki datasource](https://grafana.com/docs/grafana/latest/datasources/loki/configure-loki-data-source/).
1. Select the [Explore feature](https://grafana.com/docs/grafana/latest/explore/) in the Grafana main menu. To [view logs in Explore](https://grafana.com/docs/grafana/latest/explore/logs-integration/):
1. Pick a time range.
1. Choose the Loki datasource.
1. Use [LogQL](https://grafana.com/docs/loki/latest/query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button.
1. Use [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/) in the [query editor](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/), use the Builder view to explore your labels, or select from sample pre-configured queries using the **Kick start your query** button.

**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki/latest/query/).
**Next steps:** Learn more about Loki’s query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).

## Example Grafana Agent configuration file to ship Kubernetes Pod logs to Loki

To deploy Grafana Agent to collect Pod logs from your Kubernetes cluster and ship them to Loki, you can use the Grafana Agent Helm chart and a `values.yaml` file.

1. Install Loki with the [Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/install-scalable/).
1. Install Loki with the [Helm chart](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/install-scalable/).
1. Deploy the Grafana Agent, using the [Grafana Agent Helm chart](https://grafana.com/docs/agent/latest/flow/setup/install/kubernetes/) and this example `values.yaml` file updating the value for `forward_to = [loki.write.endpoint.receiver]`:

```yaml
12 changes: 6 additions & 6 deletions docs/sources/get-started/labels/structured-metadata.md
@@ -6,7 +6,7 @@ description: Describes how to enable structure metadata for logs and how to quer
# What is structured metadata

{{% admonition type="warning" %}}
Structured metadata was added to chunk format V4, which is used if the schema version is greater than or equal to `13`. See [Schema Config](https://grafana.com/docs/loki/latest/configure/storage/#schema-config) for more details about schema versions.
Structured metadata was added to chunk format V4, which is used if the schema version is greater than or equal to `13`. See [Schema Config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#schema-config) for more details about schema versions.
{{% /admonition %}}
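For orientation, a schema entry that satisfies the `v13` requirement might look like the following sketch (the date and object store are placeholders, not values from this document):

```yaml
schema_config:
  configs:
    - from: "2024-04-01"       # placeholder date for the new period
      store: tsdb
      object_store: filesystem # placeholder backend
      schema: v13              # v13 or later supports structured metadata
      index:
        prefix: index_
        period: 24h
```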

Selecting proper, low cardinality labels is critical to operating and querying Loki effectively. Some metadata, especially infrastructure related metadata, can be difficult to embed in log lines, and is too high cardinality to effectively store as indexed labels (and therefore reducing performance of the index).
@@ -29,12 +29,12 @@ It is an antipattern to extract information that already exists in your log line
## Attaching structured metadata to log lines

You have the option to attach structured metadata to log lines in the push payload along with each log line and the timestamp.
For more information on how to push logs to Loki via the HTTP endpoint, refer to the [HTTP API documentation](https://grafana.com/docs/loki/latest/reference/api/#ingest-logs).
For more information on how to push logs to Loki via the HTTP endpoint, refer to the [HTTP API documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/api/#ingest-logs).
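In the push payload, structured metadata rides along as an optional third element of each `values` entry. A minimal JSON sketch (the stream labels, timestamp, and metadata values are illustrative):

```json
{
  "streams": [
    {
      "stream": { "job": "example" },
      "values": [
        [
          "1712500000000000000",
          "level=info msg=\"request served\"",
          { "trace_id": "0242ac120002" }
        ]
      ]
    }
  ]
}
```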

Alternatively, you can use the Grafana Agent or Promtail to extract and attach structured metadata to your log lines.
See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/latest/send-data/promtail/stages/structured_metadata/) for more information.
See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/structured_metadata/) for more information.

With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/latest/send-data/logstash/).
With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/logstash/).

{{% admonition type="warning" %}}
There are defaults for how much structured metadata can be attached per log line.
@@ -52,7 +52,7 @@ There are defaults for how much structured metadata can be attached per log line
## Querying structured metadata

Structured metadata is extracted automatically for each returned log line and added to the labels returned for the query.
You can use labels of structured metadata to filter log lines using a [label filter expression](https://grafana.com/docs/loki/latest/query/log_queries/#label-filter-expression).
You can use labels of structured metadata to filter log lines using a [label filter expression](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#label-filter-expression).

For example, if you have a label `pod` attached to some of your log lines as structured metadata, you can filter log lines using:

@@ -66,7 +66,7 @@ Of course, you can filter by multiple labels of structured metadata at the same
{job="example"} | pod="myservice-abc1234-56789" | trace_id="0242ac120002"
```

Note that since structured metadata is automatically extracted into the returned labels, some metric queries might return an error like `maximum of series (50000) reached for a single query`. You can use the [Keep](https://grafana.com/docs/loki/latest/query/log_queries/#keep-labels-expression) and [Drop](https://grafana.com/docs/loki/latest/query/log_queries/#drop-labels-expression) stages to filter out labels that you don't need.
Note that since structured metadata is automatically extracted into the returned labels, some metric queries might return an error like `maximum of series (50000) reached for a single query`. You can use the [Keep](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#keep-labels-expression) and [Drop](https://grafana.com/docs/loki/<LOKI_VERSION>/query/log_queries/#drop-labels-expression) stages to filter out labels that you don't need.
For example:

```logql
12 changes: 6 additions & 6 deletions docs/sources/get-started/quick-start.md
@@ -7,7 +7,7 @@ description: How to create and use a simple local Loki cluster for testing and e

# Quickstart to run Loki locally

If you want to experiment with Loki, you can run Loki locally using the Docker Compose file that ships with Loki. It runs Loki in a [monolithic deployment](https://grafana.com/docs/loki/latest/get-started/deployment-modes/#monolithic-mode) mode and includes a sample application to generate logs.
If you want to experiment with Loki, you can run Loki locally using the Docker Compose file that ships with Loki. It runs Loki in a [monolithic deployment](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#monolithic-mode) mode and includes a sample application to generate logs.

The Docker Compose configuration instantiates the following components, each in its own container:

@@ -76,7 +76,7 @@ This quickstart assumes you are running Linux.

## Viewing your logs in Grafana

Once you have collected logs, you will want to view them. You can view your logs using the command line interface, [LogCLI](/docs/loki/latest/query/logcli/), but the easiest way to view your logs is with Grafana.
Once you have collected logs, you will want to view them. You can view your logs using the command line interface, [LogCLI](/docs/loki/<LOKI_VERSION>/query/logcli/), but the easiest way to view your logs is with Grafana.

1. Use Grafana to query the Loki data source.

@@ -86,7 +86,7 @@ Once you have collected logs, you will want to view them. You can view your log

1. From the Grafana main menu, click the **Explore** icon (1) to launch the Explore tab. To learn more about Explore, refer the [Explore](https://grafana.com/docs/grafana/latest/explore/) documentation.

1. From the menu in the dashboard header, select the Loki data source (2). This displays the Loki query editor. In the query editor you use the Loki query language, [LogQL](https://grafana.com/docs/loki/latest/query/), to query your logs.
1. From the menu in the dashboard header, select the Loki data source (2). This displays the Loki query editor. In the query editor you use the Loki query language, [LogQL](https://grafana.com/docs/loki/<LOKI_VERSION>/query/), to query your logs.
To learn more about the query editor, refer to the [query editor documentation](https://grafana.com/docs/grafana/latest/datasources/loki/query-editor/).

1. The Loki query editor has two modes (3):
@@ -106,7 +106,7 @@ Once you have collected logs, you will want to view them. You can view your log
{container="evaluate-loki-flog-1"}
```

In Loki, this is called a log stream. Loki uses [labels](https://grafana.com/docs/loki/latest/get-started/labels/) as metadata to describe log streams. Loki queries always start with a label selector. In the query above, the label selector is `container`.
In Loki, this is called a log stream. Loki uses [labels](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/labels/) as metadata to describe log streams. Loki queries always start with a label selector. In the query above, the label selector is `container`.

1. To view all the log lines which have the container label "grafana":

@@ -140,7 +140,7 @@ Once you have collected logs, you will want to view them. You can view your log
1. Select the first choice, **Parse log lines with logfmt parser**, by clicking **Use this query**.
1. On the Explore tab, click **Label browser**, in the dialog select a container and click **Show logs**.

For a thorough introduction to LogQL, refer to the [LogQL reference](https://grafana.com/docs/loki/latest/query/).
For a thorough introduction to LogQL, refer to the [LogQL reference](https://grafana.com/docs/loki/<LOKI_VERSION>/query/).

## Sample queries (code view)

@@ -178,7 +178,7 @@ To see every log line that does not contain the value 401:
{container="evaluate-loki-flog-1"} != "401"
```
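Building on the filter above, a metric query can count matching lines over time. A sketch using the same container label (the `401` filter and the 5-minute range are illustrative):

```logql
sum(count_over_time({container="evaluate-loki-flog-1"} |= "401" [5m]))
```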

For more examples, refer to the [query documentation](https://grafana.com/docs/loki/latest/query/query_examples/).
For more examples, refer to the [query documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/query/query_examples/).

## Complete metrics, logs, traces, and profiling example
