diff --git a/docs/sources/operations/query-acceleration-blooms.md b/docs/sources/operations/query-acceleration-blooms.md
index 45fe3c76142e..d2d12c183ffb 100644
--- a/docs/sources/operations/query-acceleration-blooms.md
+++ b/docs/sources/operations/query-acceleration-blooms.md
@@ -123,7 +123,7 @@ The sharding of the data is performed on the client side using DNS discovery of
 and the [jumphash](https://arxiv.org/abs/1406.2294) algorithm for consistent hashing
 and even distribution of the stream fingerprints across Bloom Gateway instances.
 
-You can find all the configuration options for this component in the Configure section for the [Bloom Gateways][gateway-cfg].
+You can find all the configuration options for this component in the Configure section for the [Bloom Gateways][bloom-gateway-cfg].
 Refer to the [Enable Query Acceleration with Blooms](#enable-query-acceleration-with-blooms) section below for a configuration snippet enabling this feature.
 
 ### Sizing
@@ -139,7 +139,7 @@ Example calculation for storage requirements of blooms for a single tenant.
 Since reading blooms depends heavily on disk IOPS, Bloom Gateways should make use of
 multiple, locally attached SSD disks (NVMe) to increase i/o throughput.
-Multiple directories on different disk mounts can be specified using the `-bloom.shipper.working-directory` [setting][gateway-cfg]
+Multiple directories on different disk mounts can be specified using the `-bloom.shipper.working-directory` [setting][storage-config-cfg]
 when using a comma separated list of mount points, for example:
 ```
 -bloom.shipper.working-directory="/mnt/data0,/mnt/data1,/mnt/data2,/mnt/data3"
 ```
@@ -209,9 +209,9 @@ Loki will check blooms for any log filtering expression within a query that sati
 the first filter (`|= "level=error"`) will benefit from blooms but the second one (`|= "traceID=3ksn8d4jj3"`) will not.
 
 ## Query sharding
-Query acceleration does not just happen while processing chunks,
-but also happens from the query planning phase where the query frontend applies [query sharding](https://lokidex.com/posts/tsdb/#sharding).
-Loki 3.0 introduces a new {per-tenant configuration][tenant-limits] flag `tsdb_sharding_strategy` which defaults to computing
+Query acceleration does not just happen while processing chunks, but also happens from the query planning phase where
+the query frontend applies [query sharding](https://lokidex.com/posts/tsdb/#sharding).
+Loki 3.0 introduces a new [per-tenant configuration][tenant-limits] flag `tsdb_sharding_strategy` which defaults to computing
 shards as in previous versions of Loki by using the index stats to come up with the closest power of two
 that would optimistically divide the data to process in shards of roughly the same size.
 Unfortunately, the amount of data each stream has is often unbalanced with the rest,
@@ -223,7 +223,8 @@ as well as evenly distributes the amount of chunks each sharded query will need
 
 [ring]: https://grafana.com/docs/loki//get-started/hash-rings/
 [tenant-limits]: https://grafana.com/docs/loki//configure/#limits_config
-[gateway-cfg]: https://grafana.com/docs/loki//configure/#bloom_gateway
-[compactor-cfg]: https://grafana.com/docs/loki//configure/#bloom_compactor
+[bloom-gateway-cfg]: https://grafana.com/docs/loki//configure/#bloom_gateway
+[bloom-build-cfg]: https://grafana.com/docs/loki//configure/#bloom_build
+[storage-config-cfg]: https://grafana.com/docs/loki//configure/#storage_config
 [microservices]: https://grafana.com/docs/loki//get-started/deployment-modes/#microservices-mode
 [ssd]: https://grafana.com/docs/loki//get-started/deployment-modes/#simple-scalable
\ No newline at end of file