diff --git a/docs/sources/_index.md b/docs/sources/_index.md
index 3428c662cd50f..822c01fd6a7ce 100644
--- a/docs/sources/_index.md
+++ b/docs/sources/_index.md
@@ -1,19 +1,48 @@
---
-title: Grafana Loki documentation
-description: "Technical documentation for Grafana Loki"
+title: Grafana Loki
+description: Grafana Loki is a set of open source components that can be composed into a fully featured logging stack.
aliases:
- /docs/loki/
weight: 100
+hero:
+ title: Grafana Loki
+ level: 1
+ image: /media/docs/loki/logo-grafana-loki.png
+ width: 110
+ height: 110
+  description: Grafana Loki is a set of open source components that can be composed into a fully featured logging stack. A small index and highly compressed chunks simplify operations and significantly lower the cost of Loki.
+cards:
+ title_class: pt-0 lh-1
+ items:
+ - title: Learn about Loki
+ href: /docs/loki/latest/get-started/
+ description: Learn about the Loki architecture and components, the various deployment modes, and best practices for labels.
+ - title: Set up Loki
+ href: /docs/loki/latest/setup/
+ description: View instructions for how to configure and install Loki, migrate from previous deployments, and upgrade your Loki environment.
+ - title: Configure Loki
+ href: /docs/loki/latest/configure/
+ description: View the Loki configuration reference and configuration examples.
+ - title: Send logs to Loki
+ href: /docs/loki/latest/send-data/
+ description: Select one or more clients to use to send your logs to Loki.
+ - title: Manage Loki
+ href: /docs/loki/latest/operations/
+ description: Learn how to manage tenants, log ingestion, storage, queries, and more.
+ - title: Query with LogQL
+ href: /docs/loki/latest/query/
+ description: Inspired by PromQL, LogQL is Grafana Loki’s query language. LogQL uses labels and operators for filtering.
---
-# Grafana Loki documentation
+{{< docs/hero-simple key="hero" >}}
-
+---
-Grafana Loki is a set of components that can be composed into a fully featured logging stack.
+## Overview
-Unlike other logging systems, Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels).
+Unlike other logging systems, Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels).
Log data itself is then compressed and stored in chunks in object stores such as Amazon Simple Storage Service (S3) or Google Cloud Storage (GCS), or even locally on the filesystem.
-A small index and highly compressed chunks simplifies the operation and significantly lowers the cost of Loki.
-For more information, see the [Loki overview]({{< relref "./get-started/overview" >}}).
+## Explore
+
+{{< card-grid key="cards" type="simple" >}}
diff --git a/docs/sources/setup/install/helm/install-microservices/_index.md b/docs/sources/setup/install/helm/install-microservices/_index.md
index 71f94673fe53c..9e0eb4d3307e6 100644
--- a/docs/sources/setup/install/helm/install-microservices/_index.md
+++ b/docs/sources/setup/install/helm/install-microservices/_index.md
@@ -48,73 +48,73 @@ It is not recommended to run scalable mode with `filesystem` storage. For the pu
3. Create the configuration file `values.yaml`. The example below illustrates how to deploy Loki in test mode using MinIO as storage:
```yaml
- loki:
- schemaConfig:
- configs:
- - from: 2024-04-01
- store: tsdb
- object_store: s3
- schema: v13
- index:
- prefix: loki_index_
- period: 24h
- ingester:
- chunk_encoding: snappy
- tracing:
- enabled: true
- querier:
- # Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
- max_concurrent: 4
-
- #gateway:
- # ingress:
- # enabled: true
- # hosts:
- # - host: FIXME
- # paths:
- # - path: /
- # pathType: Prefix
-
- deploymentMode: Distributed
-
- ingester:
- replicas: 3
- querier:
- replicas: 3
- maxUnavailable: 2
- queryFrontend:
- replicas: 2
- maxUnavailable: 1
- queryScheduler:
- replicas: 2
- distributor:
- replicas: 3
- maxUnavailable: 2
- compactor:
- replicas: 1
- indexGateway:
- replicas: 2
- maxUnavailable: 1
-
- bloomCompactor:
- replicas: 0
- bloomGateway:
- replicas: 0
-
- # Enable minio for storage
- minio:
- enabled: true
-
- # Zero out replica counts of other deployment modes
- backend:
- replicas: 0
- read:
- replicas: 0
- write:
- replicas: 0
-
- singleBinary:
- replicas: 0
+ loki:
+ schemaConfig:
+ configs:
+ - from: 2024-04-01
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ ingester:
+ chunk_encoding: snappy
+ tracing:
+ enabled: true
+ querier:
+ # Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
+ max_concurrent: 4
+
+ #gateway:
+ # ingress:
+ # enabled: true
+ # hosts:
+ # - host: FIXME
+ # paths:
+ # - path: /
+ # pathType: Prefix
+
+ deploymentMode: Distributed
+
+ ingester:
+ replicas: 3
+ querier:
+ replicas: 3
+ maxUnavailable: 2
+ queryFrontend:
+ replicas: 2
+ maxUnavailable: 1
+ queryScheduler:
+ replicas: 2
+ distributor:
+ replicas: 3
+ maxUnavailable: 2
+ compactor:
+ replicas: 1
+ indexGateway:
+ replicas: 2
+ maxUnavailable: 1
+
+ bloomCompactor:
+ replicas: 0
+ bloomGateway:
+ replicas: 0
+
+ # Enable minio for storage
+ minio:
+ enabled: true
+
+ # Zero out replica counts of other deployment modes
+ backend:
+ replicas: 0
+ read:
+ replicas: 0
+ write:
+ replicas: 0
+
+ singleBinary:
+ replicas: 0
```
4. Install or upgrade the Loki deployment.
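   Step 4 typically amounts to a standard Helm invocation against the values file from step 3. The release name `loki` and the Grafana chart repository below are the usual defaults, shown here as an assumption:

   ```shell
   # Add the Grafana chart repository (skip if already added)
   helm repo add grafana https://grafana.github.io/helm-charts
   helm repo update

   # Install, or upgrade an existing release, using the values.yaml from step 3
   helm upgrade --install loki grafana/loki --values values.yaml
   ```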
diff --git a/docs/sources/setup/install/helm/install-scalable/_index.md b/docs/sources/setup/install/helm/install-scalable/_index.md
index e27f544b28f0c..fed56e339d969 100644
--- a/docs/sources/setup/install/helm/install-scalable/_index.md
+++ b/docs/sources/setup/install/helm/install-scalable/_index.md
@@ -50,68 +50,68 @@ It is not recommended to run scalable mode with `filesystem` storage. For the pu
3. Create the configuration file `values.yaml`. The example below illustrates how to deploy Loki in test mode using MinIO as storage:
```yaml
- loki:
- schemaConfig:
- configs:
- - from: 2024-04-01
- store: tsdb
- object_store: s3
- schema: v13
- index:
- prefix: loki_index_
- period: 24h
- ingester:
- chunk_encoding: snappy
- tracing:
- enabled: true
- querier:
- # Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
- max_concurrent: 4
-
- #gateway:
- # ingress:
- # enabled: true
- # hosts:
- # - host: FIXME
- # paths:
- # - path: /
- # pathType: Prefix
-
- deploymentMode: SimpleScalable
-
- backend:
- replicas: 3
- read:
- replicas: 3
- write:
- replicas: 3
-
- # Enable minio for storage
- minio:
- enabled: true
-
- # Zero out replica counts of other deployment modes
- singleBinary:
- replicas: 0
-
- ingester:
- replicas: 0
- querier:
- replicas: 0
- queryFrontend:
- replicas: 0
- queryScheduler:
- replicas: 0
- distributor:
- replicas: 0
- compactor:
- replicas: 0
- indexGateway:
- replicas: 0
- bloomCompactor:
- replicas: 0
- bloomGateway:
- replicas: 0
+ loki:
+ schemaConfig:
+ configs:
+ - from: 2024-04-01
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ ingester:
+ chunk_encoding: snappy
+ tracing:
+ enabled: true
+ querier:
+ # Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
+ max_concurrent: 4
+
+ #gateway:
+ # ingress:
+ # enabled: true
+ # hosts:
+ # - host: FIXME
+ # paths:
+ # - path: /
+ # pathType: Prefix
+
+ deploymentMode: SimpleScalable
+
+ backend:
+ replicas: 3
+ read:
+ replicas: 3
+ write:
+ replicas: 3
+
+ # Enable minio for storage
+ minio:
+ enabled: true
+
+ # Zero out replica counts of other deployment modes
+ singleBinary:
+ replicas: 0
+
+ ingester:
+ replicas: 0
+ querier:
+ replicas: 0
+ queryFrontend:
+ replicas: 0
+ queryScheduler:
+ replicas: 0
+ distributor:
+ replicas: 0
+ compactor:
+ replicas: 0
+ indexGateway:
+ replicas: 0
+ bloomCompactor:
+ replicas: 0
+ bloomGateway:
+ replicas: 0
```
4. Install or upgrade the Loki deployment.
@@ -131,162 +131,162 @@ After testing Loki with MinIO, it is recommended to configure Loki with an objec
{{< code >}}
```s3
- loki:
- schemaConfig:
- configs:
- - from: 2024-04-01
- store: tsdb
- object_store: s3
- schema: v13
- index:
- prefix: loki_index_
- period: 24h
- ingester:
- chunk_encoding: snappy
- tracing:
- enabled: true
- querier:
- max_concurrent: 4
-
- storage:
- type: s3
- bucketNames:
- chunks: "chunks"
- ruler: "ruler"
- admin: "admin"
- s3:
- # s3 URL can be used to specify the endpoint, access key, secret key, and bucket name
- s3: s3://access_key:secret_access_key@custom_endpoint/bucket_name
- # AWS endpoint URL
- endpoint:
- # AWS region where the S3 bucket is located
- region:
- # AWS secret access key
- secretAccessKey:
- # AWS access key ID
- accessKeyId:
- # AWS signature version (e.g., v2 or v4)
- signatureVersion:
- # Forces the path style for S3 (true/false)
- s3ForcePathStyle: false
- # Allows insecure (HTTP) connections (true/false)
- insecure: false
- # HTTP configuration settings
- http_config: {}
-
- deploymentMode: SimpleScalable
-
- backend:
- replicas: 3
- read:
- replicas: 3
- write:
- replicas: 3
-
- # Disable minio storage
- minio:
- enabled: false
-
- # Zero out replica counts of other deployment modes
- singleBinary:
- replicas: 0
-
- ingester:
- replicas: 0
- querier:
- replicas: 0
- queryFrontend:
- replicas: 0
- queryScheduler:
- replicas: 0
- distributor:
- replicas: 0
- compactor:
- replicas: 0
- indexGateway:
- replicas: 0
- bloomCompactor:
- replicas: 0
- bloomGateway:
- replicas: 0
+loki:
+ schemaConfig:
+ configs:
+ - from: 2024-04-01
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ ingester:
+ chunk_encoding: snappy
+ tracing:
+ enabled: true
+ querier:
+ max_concurrent: 4
+
+ storage:
+ type: s3
+ bucketNames:
+ chunks: "chunks"
+ ruler: "ruler"
+ admin: "admin"
+ s3:
+ # s3 URL can be used to specify the endpoint, access key, secret key, and bucket name
+ s3: s3://access_key:secret_access_key@custom_endpoint/bucket_name
+ # AWS endpoint URL
+ endpoint:
+ # AWS region where the S3 bucket is located
+ region:
+ # AWS secret access key
+ secretAccessKey:
+ # AWS access key ID
+ accessKeyId:
+ # AWS signature version (e.g., v2 or v4)
+ signatureVersion:
+ # Forces the path style for S3 (true/false)
+ s3ForcePathStyle: false
+ # Allows insecure (HTTP) connections (true/false)
+ insecure: false
+ # HTTP configuration settings
+ http_config: {}
+
+deploymentMode: SimpleScalable
+
+backend:
+ replicas: 3
+read:
+ replicas: 3
+write:
+ replicas: 3
+
+# Disable minio storage
+minio:
+ enabled: false
+
+# Zero out replica counts of other deployment modes
+singleBinary:
+ replicas: 0
+
+ingester:
+ replicas: 0
+querier:
+ replicas: 0
+queryFrontend:
+ replicas: 0
+queryScheduler:
+ replicas: 0
+distributor:
+ replicas: 0
+compactor:
+ replicas: 0
+indexGateway:
+ replicas: 0
+bloomCompactor:
+ replicas: 0
+bloomGateway:
+ replicas: 0
```
```azure
- loki:
- schemaConfig:
- configs:
- - from: 2024-04-01
- store: tsdb
- object_store: azure
- schema: v13
- index:
- prefix: loki_index_
- period: 24h
- ingester:
- chunk_encoding: snappy
- tracing:
- enabled: true
- querier:
- max_concurrent: 4
-
- storage:
- type: azure
- azure:
- # Name of the Azure Blob Storage account
- accountName:
- # Key associated with the Azure Blob Storage account
- accountKey:
- # Comprehensive connection string for Azure Blob Storage account (Can be used to replace endpoint, accountName, and accountKey)
- connectionString:
- # Flag indicating whether to use Azure Managed Identity for authentication
- useManagedIdentity: false
- # Flag indicating whether to use a federated token for authentication
- useFederatedToken: false
- # Client ID of the user-assigned managed identity (if applicable)
- userAssignedId:
- # Timeout duration for requests made to the Azure Blob Storage account (in seconds)
- requestTimeout:
- # Domain suffix of the Azure Blob Storage service endpoint (e.g., core.windows.net)
- endpointSuffix:
- bucketNames:
- chunks: "chunks"
- ruler: "ruler"
- admin: "admin"
-
- deploymentMode: SimpleScalable
-
- backend:
- replicas: 3
- read:
- replicas: 3
- write:
- replicas: 3
-
- # Disable minio storage
- minio:
- enabled: false
-
- # Zero out replica counts of other deployment modes
- singleBinary:
- replicas: 0
-
- ingester:
- replicas: 0
- querier:
- replicas: 0
- queryFrontend:
- replicas: 0
- queryScheduler:
- replicas: 0
- distributor:
- replicas: 0
- compactor:
- replicas: 0
- indexGateway:
- replicas: 0
- bloomCompactor:
- replicas: 0
- bloomGateway:
- replicas: 0
+loki:
+ schemaConfig:
+ configs:
+ - from: 2024-04-01
+ store: tsdb
+ object_store: azure
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ ingester:
+ chunk_encoding: snappy
+ tracing:
+ enabled: true
+ querier:
+ max_concurrent: 4
+
+ storage:
+ type: azure
+ azure:
+ # Name of the Azure Blob Storage account
+ accountName:
+ # Key associated with the Azure Blob Storage account
+ accountKey:
+ # Comprehensive connection string for Azure Blob Storage account (Can be used to replace endpoint, accountName, and accountKey)
+ connectionString:
+ # Flag indicating whether to use Azure Managed Identity for authentication
+ useManagedIdentity: false
+ # Flag indicating whether to use a federated token for authentication
+ useFederatedToken: false
+ # Client ID of the user-assigned managed identity (if applicable)
+ userAssignedId:
+ # Timeout duration for requests made to the Azure Blob Storage account (in seconds)
+ requestTimeout:
+ # Domain suffix of the Azure Blob Storage service endpoint (e.g., core.windows.net)
+ endpointSuffix:
+ bucketNames:
+ chunks: "chunks"
+ ruler: "ruler"
+ admin: "admin"
+
+deploymentMode: SimpleScalable
+
+backend:
+ replicas: 3
+read:
+ replicas: 3
+write:
+ replicas: 3
+
+# Disable minio storage
+minio:
+ enabled: false
+
+# Zero out replica counts of other deployment modes
+singleBinary:
+ replicas: 0
+
+ingester:
+ replicas: 0
+querier:
+ replicas: 0
+queryFrontend:
+ replicas: 0
+queryScheduler:
+ replicas: 0
+distributor:
+ replicas: 0
+compactor:
+ replicas: 0
+indexGateway:
+ replicas: 0
+bloomCompactor:
+ replicas: 0
+bloomGateway:
+ replicas: 0
```
{{< /code >}}
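The `s3:` connection-string form used in the S3 example above packs the endpoint, credentials, and bucket into a single URL; the discrete `endpoint`/`accessKeyId`/`secretAccessKey` fields carry the same information individually. A minimal sketch of how that URL decomposes, using Go's `net/url` (the helper name is illustrative, and credentials containing special characters must be URL-escaped in this form):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parseS3URL splits an s3://access_key:secret@endpoint/bucket URL into the
// same pieces the discrete endpoint/accessKeyId/secretAccessKey/bucketNames
// fields of the values file carry individually.
func parseS3URL(raw string) (accessKey, secret, endpoint, bucket string, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", "", "", "", err
	}
	secret, _ = u.User.Password()
	return u.User.Username(), secret, u.Host, strings.TrimPrefix(u.Path, "/"), nil
}

func main() {
	ak, _, ep, bucket, _ := parseS3URL("s3://access_key:secret_access_key@custom_endpoint/bucket_name")
	fmt.Println(ak, ep, bucket) // prints "access_key custom_endpoint bucket_name"
}
```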
diff --git a/docs/sources/shared/configuration.md b/docs/sources/shared/configuration.md
index cae0094873a84..b287bdea5f37f 100644
--- a/docs/sources/shared/configuration.md
+++ b/docs/sources/shared/configuration.md
@@ -2752,7 +2752,23 @@ lifecycler:
# CLI flag: -ingester.flush-check-period
[flush_check_period: <duration> | default = 30s]
-# The timeout before a flush is cancelled.
+flush_op_backoff:
+ # Minimum backoff period when a flush fails. Each concurrent flush has its own
+ # backoff, see `ingester.concurrent-flushes`.
+ # CLI flag: -ingester.flush-op-backoff-min-period
+  [min_period: <duration> | default = 10s]
+
+ # Maximum backoff period when a flush fails. Each concurrent flush has its own
+ # backoff, see `ingester.concurrent-flushes`.
+ # CLI flag: -ingester.flush-op-backoff-max-period
+  [max_period: <duration> | default = 1m]
+
+ # Maximum retries for failed flushes.
+ # CLI flag: -ingester.flush-op-backoff-retries
+  [max_retries: <int> | default = 10]
+
+# The timeout for an individual flush. Failed flushes are retried up to
+# `flush-op-backoff-retries` times.
# CLI flag: -ingester.flush-op-timeout
[flush_op_timeout: <duration> | default = 10m]
diff --git a/pkg/bloombuild/builder/batch.go b/pkg/bloombuild/builder/batch.go
index 3ff52327b4c30..4b5fcdb00ad2e 100644
--- a/pkg/bloombuild/builder/batch.go
+++ b/pkg/bloombuild/builder/batch.go
@@ -168,9 +168,9 @@ func newBatchedBlockLoader(
}
// compiler checks
-var _ v1.Iterator[*v1.SeriesWithBloom] = &blockLoadingIter{}
-var _ v1.CloseableIterator[*v1.SeriesWithBloom] = &blockLoadingIter{}
-var _ v1.ResettableIterator[*v1.SeriesWithBloom] = &blockLoadingIter{}
+var _ v1.Iterator[*v1.SeriesWithBlooms] = &blockLoadingIter{}
+var _ v1.CloseableIterator[*v1.SeriesWithBlooms] = &blockLoadingIter{}
+var _ v1.ResettableIterator[*v1.SeriesWithBlooms] = &blockLoadingIter{}
// TODO(chaudum): testware
func newBlockLoadingIter(ctx context.Context, blocks []bloomshipper.BlockRef, fetcher FetchFunc[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier], batchSize int) *blockLoadingIter {
@@ -196,13 +196,13 @@ type blockLoadingIter struct {
// internals
initialized bool
err error
- iter v1.Iterator[*v1.SeriesWithBloom]
+ iter v1.Iterator[*v1.SeriesWithBlooms]
loader *batchedLoader[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier, *bloomshipper.CloseableBlockQuerier]
loaded map[io.Closer]struct{}
}
// At implements v1.Iterator.
-func (i *blockLoadingIter) At() *v1.SeriesWithBloom {
+func (i *blockLoadingIter) At() *v1.SeriesWithBlooms {
if !i.initialized {
panic("iterator not initialized")
}
@@ -229,7 +229,7 @@ func (i *blockLoadingIter) init() {
i.overlapping = overlappingBlocksIter(i.inputs)
// set initial iter
- i.iter = v1.NewEmptyIter[*v1.SeriesWithBloom]()
+ i.iter = v1.NewEmptyIter[*v1.SeriesWithBlooms]()
// set "match all" filter function if not present
if i.filter == nil {
@@ -249,14 +249,14 @@ func (i *blockLoadingIter) loadNext() bool {
loader := newBatchedBlockLoader(i.ctx, i.fetcher, blockRefs, i.batchSize)
filtered := v1.NewFilterIter[*bloomshipper.CloseableBlockQuerier](loader, i.filter)
- iters := make([]v1.PeekingIterator[*v1.SeriesWithBloom], 0, len(blockRefs))
+ iters := make([]v1.PeekingIterator[*v1.SeriesWithBlooms], 0, len(blockRefs))
for filtered.Next() {
bq := filtered.At()
i.loaded[bq] = struct{}{}
iter, err := bq.SeriesIter()
if err != nil {
i.err = err
- i.iter = v1.NewEmptyIter[*v1.SeriesWithBloom]()
+ i.iter = v1.NewEmptyIter[*v1.SeriesWithBlooms]()
return false
}
iters = append(iters, iter)
@@ -264,7 +264,7 @@ func (i *blockLoadingIter) loadNext() bool {
if err := filtered.Err(); err != nil {
i.err = err
- i.iter = v1.NewEmptyIter[*v1.SeriesWithBloom]()
+ i.iter = v1.NewEmptyIter[*v1.SeriesWithBlooms]()
return false
}
@@ -278,12 +278,12 @@ func (i *blockLoadingIter) loadNext() bool {
// two overlapping blocks can conceivably have the same series, so we need to dedupe,
// preferring the one with the most chunks already indexed since we'll have
// to add fewer chunks to the bloom
- i.iter = v1.NewDedupingIter[*v1.SeriesWithBloom, *v1.SeriesWithBloom](
- func(a, b *v1.SeriesWithBloom) bool {
+ i.iter = v1.NewDedupingIter[*v1.SeriesWithBlooms, *v1.SeriesWithBlooms](
+ func(a, b *v1.SeriesWithBlooms) bool {
return a.Series.Fingerprint == b.Series.Fingerprint
},
- v1.Identity[*v1.SeriesWithBloom],
- func(a, b *v1.SeriesWithBloom) *v1.SeriesWithBloom {
+ v1.Identity[*v1.SeriesWithBlooms],
+ func(a, b *v1.SeriesWithBlooms) *v1.SeriesWithBlooms {
if len(a.Series.Chunks) > len(b.Series.Chunks) {
return a
}
@@ -294,7 +294,7 @@ func (i *blockLoadingIter) loadNext() bool {
return i.iter.Next()
}
- i.iter = v1.NewEmptyIter[*v1.SeriesWithBloom]()
+ i.iter = v1.NewEmptyIter[*v1.SeriesWithBlooms]()
i.err = i.overlapping.Err()
return false
}
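The merge function passed to `NewDedupingIter` above implements the rule from the comment: when two overlapping blocks carry the same series, keep the copy with the most chunks already indexed, so fewer chunks remain to be added to its bloom. Stripped of the iterator machinery, the rule can be sketched as:

```go
package main

import "fmt"

type series struct {
	fingerprint uint64
	chunks      []string
}

// dedupe collapses consecutive entries with the same fingerprint, preferring
// the entry with more chunks indexed. The input is assumed sorted by
// fingerprint, as the merged block iterator's output is.
func dedupe(in []series) []series {
	var out []series
	for _, s := range in {
		if n := len(out); n > 0 && out[n-1].fingerprint == s.fingerprint {
			if len(s.chunks) > len(out[n-1].chunks) {
				out[n-1] = s
			}
			continue
		}
		out = append(out, s)
	}
	return out
}

func main() {
	merged := dedupe([]series{
		{fingerprint: 1, chunks: []string{"a"}},
		{fingerprint: 1, chunks: []string{"a", "b"}}, // same series seen in an overlapping block
		{fingerprint: 2, chunks: []string{"c"}},
	})
	fmt.Println(len(merged), len(merged[0].chunks)) // prints "2 2"
}
```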
diff --git a/pkg/bloombuild/builder/batch_test.go b/pkg/bloombuild/builder/batch_test.go
index b2616a37dc1ec..19de5354fb14b 100644
--- a/pkg/bloombuild/builder/batch_test.go
+++ b/pkg/bloombuild/builder/batch_test.go
@@ -5,6 +5,7 @@ import (
"errors"
"testing"
+ "github.com/prometheus/common/model"
"github.com/stretchr/testify/require"
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
@@ -208,3 +209,12 @@ func TestOverlappingBlocksIter(t *testing.T) {
})
}
}
+
+func genBlockRef(min, max model.Fingerprint) bloomshipper.BlockRef {
+ bounds := v1.NewBounds(min, max)
+ return bloomshipper.BlockRef{
+ Ref: bloomshipper.Ref{
+ Bounds: bounds,
+ },
+ }
+}
diff --git a/pkg/bloombuild/builder/builder.go b/pkg/bloombuild/builder/builder.go
index 3a6d6ce4e1532..cbbd737a83190 100644
--- a/pkg/bloombuild/builder/builder.go
+++ b/pkg/bloombuild/builder/builder.go
@@ -368,7 +368,7 @@ func (b *Builder) loadWorkForGap(
tenant string,
id tsdb.Identifier,
gap protos.GapWithBlocks,
-) (v1.Iterator[*v1.Series], v1.CloseableResettableIterator[*v1.SeriesWithBloom], error) {
+) (v1.Iterator[*v1.Series], v1.CloseableResettableIterator[*v1.SeriesWithBlooms], error) {
// load a series iterator for the gap
seriesItr, err := b.tsdbStore.LoadTSDB(ctx, table, tenant, id, gap.Bounds)
if err != nil {
diff --git a/pkg/bloombuild/builder/spec.go b/pkg/bloombuild/builder/spec.go
index a56918b0344de..284c0c6d7fc44 100644
--- a/pkg/bloombuild/builder/spec.go
+++ b/pkg/bloombuild/builder/spec.go
@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"io"
- "time"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
@@ -45,7 +44,7 @@ type SimpleBloomGenerator struct {
userID string
store v1.Iterator[*v1.Series]
chunkLoader ChunkLoader
- blocksIter v1.ResettableIterator[*v1.SeriesWithBloom]
+ blocksIter v1.ResettableIterator[*v1.SeriesWithBlooms]
// options to build blocks with
opts v1.BlockOptions
@@ -68,7 +67,7 @@ func NewSimpleBloomGenerator(
opts v1.BlockOptions,
store v1.Iterator[*v1.Series],
chunkLoader ChunkLoader,
- blocksIter v1.ResettableIterator[*v1.SeriesWithBloom],
+ blocksIter v1.ResettableIterator[*v1.SeriesWithBlooms],
readWriterFn func() (v1.BlockWriter, v1.BlockReader),
reporter func(model.Fingerprint),
metrics *Metrics,
@@ -98,44 +97,30 @@ func NewSimpleBloomGenerator(
}
}
-func (s *SimpleBloomGenerator) populator(ctx context.Context) func(series *v1.Series, bloom *v1.Bloom) (int, bool, error) {
- return func(series *v1.Series, bloom *v1.Bloom) (int, bool, error) {
- start := time.Now()
+func (s *SimpleBloomGenerator) populator(ctx context.Context) v1.BloomPopulatorFunc {
+ return func(
+ series *v1.Series,
+ srcBlooms v1.SizedIterator[*v1.Bloom],
+ toAdd v1.ChunkRefs,
+ ch chan *v1.BloomCreation,
+ ) {
level.Debug(s.logger).Log(
"msg", "populating bloom filter",
"stage", "before",
"fp", series.Fingerprint,
"chunks", len(series.Chunks),
)
- chunkItersWithFP, err := s.chunkLoader.Load(ctx, s.userID, series)
- if err != nil {
- return 0, false, errors.Wrapf(err, "failed to load chunks for series: %+v", series)
- }
-
- bytesAdded, skip, err := s.tokenizer.Populate(
- &v1.SeriesWithBloom{
- Series: series,
- Bloom: bloom,
- },
- chunkItersWithFP.itr,
- )
+ chunkItersWithFP := s.chunkLoader.Load(ctx, s.userID, &v1.Series{
+ Fingerprint: series.Fingerprint,
+ Chunks: toAdd,
+ })
- level.Debug(s.logger).Log(
- "msg", "populating bloom filter",
- "stage", "after",
- "fp", series.Fingerprint,
- "chunks", len(series.Chunks),
- "series_bytes", bytesAdded,
- "duration", time.Since(start),
- "err", err,
- )
+ s.tokenizer.Populate(srcBlooms, chunkItersWithFP.itr, ch)
if s.reporter != nil {
s.reporter(series.Fingerprint)
}
- return bytesAdded, skip, err
}
-
}
func (s *SimpleBloomGenerator) Generate(ctx context.Context) *LazyBlockBuilderIterator {
@@ -179,10 +164,10 @@ type LazyBlockBuilderIterator struct {
ctx context.Context
opts v1.BlockOptions
metrics *Metrics
- populate func(*v1.Series, *v1.Bloom) (int, bool, error)
+ populate v1.BloomPopulatorFunc
readWriterFn func() (v1.BlockWriter, v1.BlockReader)
series v1.PeekingIterator[*v1.Series]
- blocks v1.ResettableIterator[*v1.SeriesWithBloom]
+ blocks v1.ResettableIterator[*v1.SeriesWithBlooms]
bytesAdded int
curr *v1.Block
@@ -193,10 +178,10 @@ func NewLazyBlockBuilderIterator(
ctx context.Context,
opts v1.BlockOptions,
metrics *Metrics,
- populate func(*v1.Series, *v1.Bloom) (int, bool, error),
+ populate v1.BloomPopulatorFunc,
readWriterFn func() (v1.BlockWriter, v1.BlockReader),
series v1.PeekingIterator[*v1.Series],
- blocks v1.ResettableIterator[*v1.SeriesWithBloom],
+ blocks v1.ResettableIterator[*v1.SeriesWithBlooms],
) *LazyBlockBuilderIterator {
return &LazyBlockBuilderIterator{
ctx: ctx,
@@ -270,7 +255,7 @@ type ChunkItersByFingerprint struct {
// ChunkLoader loads chunks from a store
type ChunkLoader interface {
- Load(ctx context.Context, userID string, series *v1.Series) (*ChunkItersByFingerprint, error)
+ Load(ctx context.Context, userID string, series *v1.Series) *ChunkItersByFingerprint
}
// StoreChunkLoader loads chunks from a store
@@ -286,7 +271,7 @@ func NewStoreChunkLoader(fetcherProvider stores.ChunkFetcherProvider, metrics *M
}
}
-func (s *StoreChunkLoader) Load(ctx context.Context, userID string, series *v1.Series) (*ChunkItersByFingerprint, error) {
+func (s *StoreChunkLoader) Load(ctx context.Context, userID string, series *v1.Series) *ChunkItersByFingerprint {
// NB(owen-d): This is probably unnecessary as we should only have one fetcher
// because we'll only be working on a single index period at a time, but this should protect
// us in the case of refactoring/changing this and likely isn't a perf bottleneck.
@@ -317,5 +302,5 @@ func (s *StoreChunkLoader) Load(ctx context.Context, userID string, series *v1.S
return &ChunkItersByFingerprint{
fp: series.Fingerprint,
itr: newBatchedChunkLoader(ctx, fetchers, inputs, s.metrics, batchedLoaderDefaultBatchSize),
- }, nil
+ }
}
diff --git a/pkg/bloombuild/builder/spec_test.go b/pkg/bloombuild/builder/spec_test.go
index 40225dc45865b..e6b47b1442a6e 100644
--- a/pkg/bloombuild/builder/spec_test.go
+++ b/pkg/bloombuild/builder/spec_test.go
@@ -15,19 +15,19 @@ import (
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
)
-func blocksFromSchema(t *testing.T, n int, options v1.BlockOptions) (res []*v1.Block, data []v1.SeriesWithBloom, refs []bloomshipper.BlockRef) {
+func blocksFromSchema(t *testing.T, n int, options v1.BlockOptions) (res []*v1.Block, data []v1.SeriesWithBlooms, refs []bloomshipper.BlockRef) {
return blocksFromSchemaWithRange(t, n, options, 0, 0xffff)
}
// splits 100 series across `n` non-overlapping blocks.
// uses options to build blocks with.
-func blocksFromSchemaWithRange(t *testing.T, n int, options v1.BlockOptions, fromFP, throughFp model.Fingerprint) (res []*v1.Block, data []v1.SeriesWithBloom, refs []bloomshipper.BlockRef) {
+func blocksFromSchemaWithRange(t *testing.T, n int, options v1.BlockOptions, fromFP, throughFp model.Fingerprint) (res []*v1.Block, data []v1.SeriesWithBlooms, refs []bloomshipper.BlockRef) {
if 100%n != 0 {
panic("100 series must be evenly divisible by n")
}
numSeries := 100
- data, _ = v1.MkBasicSeriesWithBlooms(numSeries, 0, fromFP, throughFp, 0, 10000)
+ data, _ = v1.MkBasicSeriesWithBlooms(numSeries, fromFP, throughFp, 0, 10000)
seriesPerBlock := numSeries / n
@@ -46,7 +46,7 @@ func blocksFromSchemaWithRange(t *testing.T, n int, options v1.BlockOptions, fro
minIdx, maxIdx := i*seriesPerBlock, (i+1)*seriesPerBlock
- itr := v1.NewSliceIter[v1.SeriesWithBloom](data[minIdx:maxIdx])
+ itr := v1.NewSliceIter[v1.SeriesWithBlooms](data[minIdx:maxIdx])
_, err = builder.BuildFrom(itr)
require.Nil(t, err)
@@ -62,11 +62,11 @@ func blocksFromSchemaWithRange(t *testing.T, n int, options v1.BlockOptions, fro
// doesn't actually load any chunks
type dummyChunkLoader struct{}
-func (dummyChunkLoader) Load(_ context.Context, _ string, series *v1.Series) (*ChunkItersByFingerprint, error) {
+func (dummyChunkLoader) Load(_ context.Context, _ string, series *v1.Series) *ChunkItersByFingerprint {
return &ChunkItersByFingerprint{
fp: series.Fingerprint,
itr: v1.NewEmptyIter[v1.ChunkRefWithIter](),
- }, nil
+ }
}
func dummyBloomGen(t *testing.T, opts v1.BlockOptions, store v1.Iterator[*v1.Series], blocks []*v1.Block, refs []bloomshipper.BlockRef) *SimpleBloomGenerator {
@@ -132,9 +132,9 @@ func TestSimpleBloomGenerator(t *testing.T) {
} {
t.Run(fmt.Sprintf("%s/%s", tc.desc, enc), func(t *testing.T) {
sourceBlocks, data, refs := blocksFromSchemaWithRange(t, 2, tc.fromSchema, 0x00000, 0x6ffff)
- storeItr := v1.NewMapIter[v1.SeriesWithBloom, *v1.Series](
- v1.NewSliceIter[v1.SeriesWithBloom](data),
- func(swb v1.SeriesWithBloom) *v1.Series {
+ storeItr := v1.NewMapIter[v1.SeriesWithBlooms, *v1.Series](
+ v1.NewSliceIter[v1.SeriesWithBlooms](data),
+ func(swb v1.SeriesWithBlooms) *v1.Series {
return swb.Series
},
)
@@ -150,9 +150,9 @@ func TestSimpleBloomGenerator(t *testing.T) {
// Check all the input series are present in the output blocks.
expectedRefs := v1.PointerSlice(data)
- outputRefs := make([]*v1.SeriesWithBloom, 0, len(data))
+ outputRefs := make([]*v1.SeriesWithBlooms, 0, len(data))
for _, block := range outputBlocks {
- bq := v1.NewBlockQuerier(block, false, v1.DefaultMaxPageSize)
+ bq := v1.NewBlockQuerier(block, false, v1.DefaultMaxPageSize).Iter()
for bq.Next() {
outputRefs = append(outputRefs, bq.At())
}
@@ -164,13 +164,5 @@ func TestSimpleBloomGenerator(t *testing.T) {
})
}
}
-}
-func genBlockRef(min, max model.Fingerprint) bloomshipper.BlockRef {
- bounds := v1.NewBounds(min, max)
- return bloomshipper.BlockRef{
- Ref: bloomshipper.Ref{
- Bounds: bounds,
- },
- }
}
diff --git a/pkg/bloombuild/planner/metrics.go b/pkg/bloombuild/planner/metrics.go
index 3f68ab5206303..165f530f1709c 100644
--- a/pkg/bloombuild/planner/metrics.go
+++ b/pkg/bloombuild/planner/metrics.go
@@ -32,6 +32,9 @@ type Metrics struct {
buildCompleted *prometheus.CounterVec
buildTime *prometheus.HistogramVec
+ blocksDeleted prometheus.Counter
+ metasDeleted prometheus.Counter
+
tenantsDiscovered prometheus.Counter
}
@@ -107,6 +110,19 @@ func NewMetrics(
Buckets: prometheus.DefBuckets,
}, []string{"status"}),
+ blocksDeleted: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Namespace: metricsNamespace,
+ Subsystem: metricsSubsystem,
+ Name: "blocks_deleted_total",
+ Help: "Number of blocks deleted",
+ }),
+ metasDeleted: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Namespace: metricsNamespace,
+ Subsystem: metricsSubsystem,
+ Name: "metas_deleted_total",
+ Help: "Number of metas deleted",
+ }),
+
tenantsDiscovered: promauto.With(r).NewCounter(prometheus.CounterOpts{
Namespace: metricsNamespace,
Subsystem: metricsSubsystem,
diff --git a/pkg/bloombuild/planner/planner.go b/pkg/bloombuild/planner/planner.go
index 287a859745f5a..ea2ea5db531b2 100644
--- a/pkg/bloombuild/planner/planner.go
+++ b/pkg/bloombuild/planner/planner.go
@@ -3,6 +3,7 @@ package planner
import (
"context"
"fmt"
+ "math"
"sort"
"sync"
"time"
@@ -156,6 +157,17 @@ func (p *Planner) running(ctx context.Context) error {
}
}
+type tenantTableTaskResults struct {
+ tasksToWait int
+ originalMetas []bloomshipper.Meta
+ resultsCh chan *protos.TaskResult
+}
+
+type tenantTable struct {
+ table config.DayTable
+ tenant string
+}
+
func (p *Planner) runOne(ctx context.Context) error {
var (
start = time.Now()
@@ -171,39 +183,79 @@ func (p *Planner) runOne(ctx context.Context) error {
tables := p.tables(time.Now())
level.Debug(p.logger).Log("msg", "loaded tables", "tables", tables.TotalDays())
- work, err := p.loadWork(ctx, tables)
+ work, err := p.loadTenantWork(ctx, tables)
if err != nil {
return fmt.Errorf("error loading work: %w", err)
}
+ // For deletion, we need to aggregate the results for each table and tenant tuple
+ // We cannot delete the returned tombstoned metas as soon as a task finishes since
+ // other tasks may still be using the now tombstoned metas
+ tasksResultForTenantTable := make(map[tenantTable]tenantTableTaskResults)
var totalTasks int
- for _, w := range work {
- logger := log.With(p.logger, "tenant", w.tenant, "table", w.table.Addr(), "ownership", w.ownershipRange.String())
- gaps, err := p.findGapsForBounds(ctx, w.tenant, w.table, w.ownershipRange)
- if err != nil {
- level.Error(logger).Log("msg", "error finding gaps", "err", err)
- continue
- }
+ for table, tenants := range work {
+ for tenant, ownershipRanges := range tenants {
+ logger := log.With(p.logger, "tenant", tenant, "table", table.Addr())
+ tt := tenantTable{
+ tenant: tenant,
+ table: table,
+ }
- now := time.Now()
- for _, gap := range gaps {
- totalTasks++
+ tasks, existingMetas, err := p.computeTasks(ctx, table, tenant, ownershipRanges)
+ if err != nil {
+ level.Error(logger).Log("msg", "error computing tasks", "err", err)
+ continue
+ }
- task := NewTask(
- ctx, now,
- protos.NewTask(w.table, w.tenant, w.ownershipRange, gap.tsdb, gap.gaps),
- )
+ var tenantTableEnqueuedTasks int
+ resultsCh := make(chan *protos.TaskResult, len(tasks))
- if err := p.enqueueTask(task); err != nil {
- level.Error(logger).Log("msg", "error enqueuing task", "err", err)
- continue
+ now := time.Now()
+ for _, task := range tasks {
+ queueTask := NewQueueTask(ctx, now, task, resultsCh)
+ if err := p.enqueueTask(queueTask); err != nil {
+ level.Error(logger).Log("msg", "error enqueuing task", "err", err)
+ continue
+ }
+
+ totalTasks++
+ tenantTableEnqueuedTasks++
+ }
+
+ tasksResultForTenantTable[tt] = tenantTableTaskResults{
+ tasksToWait: tenantTableEnqueuedTasks,
+ originalMetas: existingMetas,
+ resultsCh: resultsCh,
}
+
+ level.Debug(logger).Log("msg", "enqueued tasks", "tasks", tenantTableEnqueuedTasks)
}
}
level.Debug(p.logger).Log("msg", "planning completed", "tasks", totalTasks)
+	// Create a goroutine to process the results for each table-tenant tuple
+ // TODO(salvacorts): This may end up creating too many goroutines.
+ // Create a pool of workers to process table-tenant tuples.
+ var wg sync.WaitGroup
+ for tt, results := range tasksResultForTenantTable {
+ wg.Add(1)
+ go func(table config.DayTable, tenant string, results tenantTableTaskResults) {
+ defer wg.Done()
+
+ if err := p.processTenantTaskResults(
+ ctx, table, tenant,
+ results.originalMetas, results.tasksToWait, results.resultsCh,
+ ); err != nil {
+ level.Error(p.logger).Log("msg", "failed to process tenant task results", "err", err)
+ }
+ }(tt.table, tt.tenant, results)
+ }
+
+ level.Debug(p.logger).Log("msg", "waiting for all tasks to be completed", "tasks", totalTasks, "tenantTables", len(tasksResultForTenantTable))
+ wg.Wait()
+
status = statusSuccess
level.Info(p.logger).Log(
"msg", "bloom build iteration completed",
@@ -212,6 +264,177 @@ func (p *Planner) runOne(ctx context.Context) error {
return nil
}
+// computeTasks computes the tasks for a given table, tenant, and set of ownership ranges.
+// It returns the tasks to be executed along with the existing metas relevant to those ranges.
+func (p *Planner) computeTasks(
+ ctx context.Context,
+ table config.DayTable,
+ tenant string,
+ ownershipRanges []v1.FingerprintBounds,
+) ([]*protos.Task, []bloomshipper.Meta, error) {
+ var tasks []*protos.Task
+ logger := log.With(p.logger, "table", table.Addr(), "tenant", tenant)
+
+ // Fetch source metas to be used in both build and cleanup of out-of-date metas+blooms
+ metas, err := p.bloomStore.FetchMetas(
+ ctx,
+ bloomshipper.MetaSearchParams{
+ TenantID: tenant,
+ Interval: bloomshipper.NewInterval(table.Bounds()),
+ Keyspace: v1.NewBounds(0, math.MaxUint64),
+ },
+ )
+ if err != nil {
+ return nil, nil, fmt.Errorf("failed to get metas: %w", err)
+ }
+
+ for _, ownershipRange := range ownershipRanges {
+ logger := log.With(logger, "ownership", ownershipRange.String())
+
+ // Filter only the metas that overlap in the ownership range
+ metasInBounds := bloomshipper.FilterMetasOverlappingBounds(metas, ownershipRange)
+ level.Debug(logger).Log("msg", "found relevant metas", "metas", len(metasInBounds))
+
+ // Find gaps in the TSDBs for this tenant/table
+ gaps, err := p.findOutdatedGaps(ctx, tenant, table, ownershipRange, metasInBounds, logger)
+ if err != nil {
+ level.Error(logger).Log("msg", "failed to find outdated gaps", "err", err)
+ continue
+ }
+
+ for _, gap := range gaps {
+ tasks = append(tasks, protos.NewTask(table, tenant, ownershipRange, gap.tsdb, gap.gaps))
+ }
+ }
+
+ return tasks, metas, nil
+}
+
+func (p *Planner) processTenantTaskResults(
+ ctx context.Context,
+ table config.DayTable,
+ tenant string,
+ originalMetas []bloomshipper.Meta,
+ totalTasks int,
+ resultsCh <-chan *protos.TaskResult,
+) error {
+	logger := log.With(p.logger, "table", table.Addr(), "tenant", tenant)
+ level.Debug(logger).Log("msg", "waiting for all tasks to be completed", "tasks", totalTasks)
+
+ newMetas := make([]bloomshipper.Meta, 0, totalTasks)
+ for i := 0; i < totalTasks; i++ {
+ select {
+ case <-ctx.Done():
+ if err := ctx.Err(); err != nil && !errors.Is(err, context.Canceled) {
+ level.Error(logger).Log("msg", "planner context done with error", "err", err)
+ return err
+ }
+
+ // No error or context canceled, just return
+ level.Debug(logger).Log("msg", "context done while waiting for task results")
+ return nil
+ case result := <-resultsCh:
+ if result == nil {
+ level.Error(logger).Log("msg", "received nil task result")
+ continue
+ }
+ if result.Error != nil {
+ level.Error(logger).Log(
+ "msg", "task failed",
+ "err", result.Error,
+ "task", result.TaskID,
+ )
+ continue
+ }
+
+ newMetas = append(newMetas, result.CreatedMetas...)
+ }
+ }
+
+ level.Debug(logger).Log(
+ "msg", "all tasks completed",
+ "tasks", totalTasks,
+ "originalMetas", len(originalMetas),
+ "newMetas", len(newMetas),
+ )
+
+ if len(newMetas) == 0 {
+ // No new metas were created, nothing to delete
+ // Note: this would only happen if all tasks failed
+ return nil
+ }
+
+ combined := append(originalMetas, newMetas...)
+ outdated := outdatedMetas(combined)
+ level.Debug(logger).Log("msg", "found outdated metas", "outdated", len(outdated))
+
+ if err := p.deleteOutdatedMetasAndBlocks(ctx, table, tenant, outdated); err != nil {
+ return fmt.Errorf("failed to delete outdated metas: %w", err)
+ }
+
+ return nil
+}
+
+func (p *Planner) deleteOutdatedMetasAndBlocks(
+ ctx context.Context,
+ table config.DayTable,
+ tenant string,
+ metas []bloomshipper.Meta,
+) error {
+ logger := log.With(p.logger, "table", table.Addr(), "tenant", tenant)
+
+ client, err := p.bloomStore.Client(table.ModelTime())
+ if err != nil {
+ level.Error(logger).Log("msg", "failed to get client", "err", err)
+ return errors.Wrap(err, "failed to get client")
+ }
+
+ var (
+ deletedMetas int
+ deletedBlocks int
+ )
+ defer func() {
+ p.metrics.metasDeleted.Add(float64(deletedMetas))
+ p.metrics.blocksDeleted.Add(float64(deletedBlocks))
+ }()
+
+ for _, meta := range metas {
+ for _, block := range meta.Blocks {
+ if err := client.DeleteBlocks(ctx, []bloomshipper.BlockRef{block}); err != nil {
+ if client.IsObjectNotFoundErr(err) {
+ level.Debug(logger).Log("msg", "block not found while attempting delete, continuing", "block", block.String())
+ } else {
+ level.Error(logger).Log("msg", "failed to delete block", "err", err, "block", block.String())
+ return errors.Wrap(err, "failed to delete block")
+ }
+ }
+
+ deletedBlocks++
+ level.Debug(logger).Log("msg", "removed outdated block", "block", block.String())
+ }
+
+ err = client.DeleteMetas(ctx, []bloomshipper.MetaRef{meta.MetaRef})
+ if err != nil {
+ if client.IsObjectNotFoundErr(err) {
+ level.Debug(logger).Log("msg", "meta not found while attempting delete, continuing", "meta", meta.MetaRef.String())
+ } else {
+ level.Error(logger).Log("msg", "failed to delete meta", "err", err, "meta", meta.MetaRef.String())
+ return errors.Wrap(err, "failed to delete meta")
+ }
+ }
+ deletedMetas++
+ level.Debug(logger).Log("msg", "removed outdated meta", "meta", meta.MetaRef.String())
+ }
+
+ level.Debug(logger).Log(
+ "msg", "deleted outdated metas and blocks",
+ "metas", deletedMetas,
+ "blocks", deletedBlocks,
+ )
+
+ return nil
+}
+
func (p *Planner) tables(ts time.Time) *dayRangeIterator {
// adjust the minimum by one to make it inclusive, which is more intuitive
// for a configuration variable
@@ -228,21 +451,15 @@ func (p *Planner) tables(ts time.Time) *dayRangeIterator {
return newDayRangeIterator(fromDay, throughDay, p.schemaCfg)
}
-type tenantTableRange struct {
- tenant string
- table config.DayTable
- ownershipRange v1.FingerprintBounds
+type work map[config.DayTable]map[string][]v1.FingerprintBounds
- // TODO: Add tracking
- //finished bool
- //queueTime, startTime, endTime time.Time
-}
-
-func (p *Planner) loadWork(
+// loadTenantWork loads the work for each table and tenant tuple.
+// The work is the set of fingerprint ranges that need to be indexed in bloom filters.
+func (p *Planner) loadTenantWork(
ctx context.Context,
tables *dayRangeIterator,
-) ([]tenantTableRange, error) {
- var work []tenantTableRange
+) (work, error) {
+ tenantTableWork := make(map[config.DayTable]map[string][]v1.FingerprintBounds, tables.TotalDays())
for tables.Next() && tables.Err() == nil && ctx.Err() == nil {
table := tables.At()
@@ -252,7 +469,12 @@ func (p *Planner) loadWork(
if err != nil {
return nil, fmt.Errorf("error loading tenants: %w", err)
}
- level.Debug(p.logger).Log("msg", "loaded tenants", "table", table, "tenants", tenants.Len())
+ level.Debug(p.logger).Log("msg", "loaded tenants", "table", table, "tenants", tenants.Remaining())
+
+		// If this is the first time we see this table, initialize the map
+ if tenantTableWork[table] == nil {
+ tenantTableWork[table] = make(map[string][]v1.FingerprintBounds, tenants.Remaining())
+ }
for tenants.Next() && tenants.Err() == nil && ctx.Err() == nil {
p.metrics.tenantsDiscovered.Inc()
@@ -265,13 +487,7 @@ func (p *Planner) loadWork(
splitFactor := p.limits.BloomSplitSeriesKeyspaceBy(tenant)
bounds := SplitFingerprintKeyspaceByFactor(splitFactor)
- for _, bounds := range bounds {
- work = append(work, tenantTableRange{
- tenant: tenant,
- table: table,
- ownershipRange: bounds,
- })
- }
+ tenantTableWork[table][tenant] = bounds
level.Debug(p.logger).Log("msg", "loading work for tenant", "table", table, "tenant", tenant, "splitFactor", splitFactor)
}
@@ -286,7 +502,7 @@ func (p *Planner) loadWork(
return nil, fmt.Errorf("error iterating tables: %w", err)
}
- return work, ctx.Err()
+ return tenantTableWork, ctx.Err()
}
func (p *Planner) tenants(ctx context.Context, table config.DayTable) (*v1.SliceIter[string], error) {
@@ -298,47 +514,6 @@ func (p *Planner) tenants(ctx context.Context, table config.DayTable) (*v1.Slice
return v1.NewSliceIter(tenants), nil
}
-/*
-Planning works as follows, split across many functions for clarity:
- 1. Fetch all meta.jsons for the given tenant and table which overlap the ownership range of this compactor.
- 2. Load current TSDBs for this tenant/table.
- 3. For each live TSDB (there should be only 1, but this works with multiple), find any gaps
- (fingerprint ranges) which are not up-to-date, determined by checking other meta.json files and comparing
- the TSDBs they were generated from as well as their ownership ranges.
-*/
-func (p *Planner) findGapsForBounds(
- ctx context.Context,
- tenant string,
- table config.DayTable,
- ownershipRange v1.FingerprintBounds,
-) ([]blockPlan, error) {
- logger := log.With(p.logger, "org_id", tenant, "table", table.Addr(), "ownership", ownershipRange.String())
-
- // Fetch source metas to be used in both build and cleanup of out-of-date metas+blooms
- metas, err := p.bloomStore.FetchMetas(
- ctx,
- bloomshipper.MetaSearchParams{
- TenantID: tenant,
- Interval: bloomshipper.NewInterval(table.Bounds()),
- Keyspace: ownershipRange,
- },
- )
- if err != nil {
- level.Error(logger).Log("msg", "failed to get metas", "err", err)
- return nil, fmt.Errorf("failed to get metas: %w", err)
- }
-
- level.Debug(logger).Log("msg", "found relevant metas", "metas", len(metas))
-
- // Find gaps in the TSDBs for this tenant/table
- gaps, err := p.findOutdatedGaps(ctx, tenant, table, ownershipRange, metas, logger)
- if err != nil {
- return nil, fmt.Errorf("failed to find outdated gaps: %w", err)
- }
-
- return gaps, nil
-}
-
// blockPlan is a plan for all the work needed to build a meta.json
// It includes:
// - the tsdb (source of truth) which contains all the series+chunks
@@ -507,11 +682,11 @@ func blockPlansForGaps(tsdbs []tsdbGaps, metas []bloomshipper.Meta) ([]blockPlan
return plans, nil
}
-func (p *Planner) addPendingTask(task *Task) {
+func (p *Planner) addPendingTask(task *QueueTask) {
p.pendingTasks.Store(task.ID, task)
}
-func (p *Planner) removePendingTask(task *Task) {
+func (p *Planner) removePendingTask(task *QueueTask) {
p.pendingTasks.Delete(task.ID)
}
@@ -523,7 +698,7 @@ func (p *Planner) totalPendingTasks() (total int) {
return total
}
-func (p *Planner) enqueueTask(task *Task) error {
+func (p *Planner) enqueueTask(task *QueueTask) error {
p.activeUsers.UpdateUserTimestamp(task.Tenant, time.Now())
return p.tasksQueue.Enqueue(task.Tenant, nil, task, func() {
task.timesEnqueued++
@@ -570,7 +745,8 @@ func (p *Planner) BuilderLoop(builder protos.PlannerForBuilder_BuilderLoopServer
return fmt.Errorf("dequeue() call resulted in nil response. builder: %s", builderID)
}
- task := item.(*Task)
+ task := item.(*QueueTask)
+ logger := log.With(logger, "task", task.ID)
queueTime := time.Since(task.queueTime)
p.metrics.queueDuration.Observe(queueTime.Seconds())
@@ -582,7 +758,8 @@ func (p *Planner) BuilderLoop(builder protos.PlannerForBuilder_BuilderLoopServer
continue
}
- if err := p.forwardTaskToBuilder(builder, builderID, task); err != nil {
+ result, err := p.forwardTaskToBuilder(builder, builderID, task)
+ if err != nil {
maxRetries := p.limits.BloomTaskMaxRetries(task.Tenant)
if maxRetries > 0 && task.timesEnqueued >= maxRetries {
p.metrics.tasksFailed.Inc()
@@ -593,6 +770,10 @@ func (p *Planner) BuilderLoop(builder protos.PlannerForBuilder_BuilderLoopServer
"maxRetries", maxRetries,
"err", err,
)
+ task.resultsChannel <- &protos.TaskResult{
+ TaskID: task.ID,
+ Error: fmt.Errorf("task failed after max retries (%d): %w", maxRetries, err),
+ }
continue
}
@@ -601,13 +782,31 @@ func (p *Planner) BuilderLoop(builder protos.PlannerForBuilder_BuilderLoopServer
p.metrics.taskLost.Inc()
p.removePendingTask(task)
level.Error(logger).Log("msg", "error re-enqueuing task. this task will be lost", "err", err)
+ task.resultsChannel <- &protos.TaskResult{
+ TaskID: task.ID,
+ Error: fmt.Errorf("error re-enqueuing task: %w", err),
+ }
continue
}
p.metrics.tasksRequeued.Inc()
- level.Error(logger).Log("msg", "error forwarding task to builder, Task requeued", "err", err)
+ level.Error(logger).Log(
+ "msg", "error forwarding task to builder, Task requeued",
+ "retries", task.timesEnqueued,
+ "err", err,
+ )
+ continue
}
+ level.Debug(logger).Log(
+ "msg", "task completed",
+ "duration", time.Since(task.queueTime).Seconds(),
+ "retries", task.timesEnqueued,
+ )
+ p.removePendingTask(task)
+
+ // Send the result back to the task. The channel is buffered, so this should not block.
+ task.resultsChannel <- result
}
return errPlannerIsNotRunning
@@ -616,16 +815,14 @@ func (p *Planner) BuilderLoop(builder protos.PlannerForBuilder_BuilderLoopServer
func (p *Planner) forwardTaskToBuilder(
builder protos.PlannerForBuilder_BuilderLoopServer,
builderID string,
- task *Task,
-) error {
- defer p.removePendingTask(task)
-
+ task *QueueTask,
+) (*protos.TaskResult, error) {
msg := &protos.PlannerToBuilder{
Task: task.ToProtoTask(),
}
if err := builder.Send(msg); err != nil {
- return fmt.Errorf("error sending task to builder (%s): %w", builderID, err)
+ return nil, fmt.Errorf("error sending task to builder (%s): %w", builderID, err)
}
// Launch a goroutine to wait for the response from the builder so we can
@@ -651,12 +848,14 @@ func (p *Planner) forwardTaskToBuilder(
select {
case result := <-resultsCh:
- // TODO: Return metas forward via channel
- return result.Error
+ // Note: Errors from the result are not returned here since we don't retry tasks
+ // that return with an error. I.e. we won't retry errors forwarded from the builder.
+ // TODO(salvacorts): Filter and return errors that can be retried.
+ return result, nil
case err := <-errCh:
- return err
+ return nil, err
case <-timeout:
- return fmt.Errorf("timeout waiting for response from builder (%s)", builderID)
+ return nil, fmt.Errorf("timeout waiting for response from builder (%s)", builderID)
}
}
@@ -666,7 +865,7 @@ func (p *Planner) forwardTaskToBuilder(
func (p *Planner) receiveResultFromBuilder(
builder protos.PlannerForBuilder_BuilderLoopServer,
builderID string,
- task *Task,
+ task *QueueTask,
) (*protos.TaskResult, error) {
// If connection is closed, Recv() will return an error
res, err := builder.Recv()
diff --git a/pkg/bloombuild/planner/planner_test.go b/pkg/bloombuild/planner/planner_test.go
index b46b987de751c..c76ef0e4d2679 100644
--- a/pkg/bloombuild/planner/planner_test.go
+++ b/pkg/bloombuild/planner/planner_test.go
@@ -3,12 +3,16 @@ package planner
import (
"context"
"fmt"
+ "io"
+ "math"
+ "sync"
"testing"
"time"
"github.com/go-kit/log"
"github.com/grafana/dskit/flagext"
"github.com/grafana/dskit/services"
+ "github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/model"
"github.com/stretchr/testify/require"
@@ -17,6 +21,7 @@ import (
"github.com/grafana/loki/v3/pkg/bloombuild/protos"
"github.com/grafana/loki/v3/pkg/storage"
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
+ "github.com/grafana/loki/v3/pkg/storage/chunk/cache"
"github.com/grafana/loki/v3/pkg/storage/chunk/client/local"
"github.com/grafana/loki/v3/pkg/storage/config"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
@@ -25,6 +30,9 @@ import (
"github.com/grafana/loki/v3/pkg/storage/types"
)
+var testDay = parseDayTime("2023-09-01")
+var testTable = config.NewDayTable(testDay, "index_")
+
func tsdbID(n int) tsdb.SingleTenantTSDBIdentifier {
return tsdb.SingleTenantTSDBIdentifier{
TS: time.Unix(int64(n), 0),
@@ -35,7 +43,9 @@ func genMeta(min, max model.Fingerprint, sources []int, blocks []bloomshipper.Bl
m := bloomshipper.Meta{
MetaRef: bloomshipper.MetaRef{
Ref: bloomshipper.Ref{
- Bounds: v1.NewBounds(min, max),
+ TenantID: "fakeTenant",
+ TableName: testTable.Addr(),
+ Bounds: v1.NewBounds(min, max),
},
},
Blocks: blocks,
@@ -141,14 +151,26 @@ func Test_gapsBetweenTSDBsAndMetas(t *testing.T) {
}
func genBlockRef(min, max model.Fingerprint) bloomshipper.BlockRef {
- bounds := v1.NewBounds(min, max)
+ startTS, endTS := testDay.Bounds()
return bloomshipper.BlockRef{
Ref: bloomshipper.Ref{
- Bounds: bounds,
+ TenantID: "fakeTenant",
+ TableName: testTable.Addr(),
+ Bounds: v1.NewBounds(min, max),
+ StartTimestamp: startTS,
+ EndTimestamp: endTS,
+ Checksum: 0,
},
}
}
+func genBlock(ref bloomshipper.BlockRef) bloomshipper.Block {
+ return bloomshipper.Block{
+ BlockRef: ref,
+ Data: &DummyReadSeekCloser{},
+ }
+}
+
func Test_blockPlansForGaps(t *testing.T) {
for _, tc := range []struct {
desc string
@@ -333,13 +355,14 @@ func Test_blockPlansForGaps(t *testing.T) {
}
}
-func createTasks(n int) []*Task {
- tasks := make([]*Task, 0, n)
+func createTasks(n int, resultsCh chan *protos.TaskResult) []*QueueTask {
+ tasks := make([]*QueueTask, 0, n)
// Enqueue tasks
for i := 0; i < n; i++ {
- task := NewTask(
+ task := NewQueueTask(
context.Background(), time.Now(),
- protos.NewTask(config.NewDayTable(config.NewDayTime(0), "fake"), "fakeTenant", v1.NewBounds(0, 10), tsdbID(1), nil),
+ protos.NewTask(config.NewDayTable(testDay, "fake"), "fakeTenant", v1.NewBounds(0, 10), tsdbID(1), nil),
+ resultsCh,
)
tasks = append(tasks, task)
}
@@ -385,7 +408,12 @@ func createPlanner(
}
reg := prometheus.NewPedanticRegistry()
- planner, err := New(cfg, limits, schemaCfg, storageCfg, storage.ClientMetrics{}, nil, logger, reg)
+ metasCache := cache.NewNoopCache()
+ blocksCache := bloomshipper.NewFsBlocksCache(storageCfg.BloomShipperConfig.BlocksCache, reg, logger)
+ bloomStore, err := bloomshipper.NewBloomStore(schemaCfg.Configs, storageCfg, storage.ClientMetrics{}, metasCache, blocksCache, reg, logger)
+ require.NoError(t, err)
+
+ planner, err := New(cfg, limits, schemaCfg, storageCfg, storage.ClientMetrics{}, bloomStore, logger, reg)
require.NoError(t, err)
return planner
@@ -432,9 +460,8 @@ func Test_BuilderLoop(t *testing.T) {
modifyBuilder: func(builder *fakeBuilder) {
builder.SetReturnErrorMsg(true)
},
- resetBuilder: func(builder *fakeBuilder) {
- builder.SetReturnErrorMsg(false)
- },
+ // We don't retry on error messages from the builder
+ shouldConsumeAfterModify: true,
},
{
name: "exceed max retries",
@@ -487,7 +514,8 @@ func Test_BuilderLoop(t *testing.T) {
})
// Enqueue tasks
- tasks := createTasks(nTasks)
+ resultsCh := make(chan *protos.TaskResult, nTasks)
+ tasks := createTasks(nTasks, resultsCh)
for _, task := range tasks {
err = planner.enqueueTask(task)
require.NoError(t, err)
@@ -517,6 +545,11 @@ func Test_BuilderLoop(t *testing.T) {
// Finally, the queue should be empty
require.Equal(t, 0, planner.totalPendingTasks())
+			// Consume all task results to free up the channel for the next round of tasks
+ for i := 0; i < nTasks; i++ {
+ <-resultsCh
+ }
+
if tc.modifyBuilder != nil {
// Configure builders to return errors
for _, builder := range builders {
@@ -568,6 +601,213 @@ func Test_BuilderLoop(t *testing.T) {
}
}
+func putMetas(bloomClient bloomshipper.Client, metas []bloomshipper.Meta) error {
+ for _, meta := range metas {
+ err := bloomClient.PutMeta(context.Background(), meta)
+ if err != nil {
+ return err
+ }
+
+ for _, block := range meta.Blocks {
+ err := bloomClient.PutBlock(context.Background(), genBlock(block))
+ if err != nil {
+ return err
+ }
+ }
+ }
+ return nil
+}
+
+func Test_processTenantTaskResults(t *testing.T) {
+ for _, tc := range []struct {
+ name string
+
+ originalMetas []bloomshipper.Meta
+ taskResults []*protos.TaskResult
+ expectedMetas []bloomshipper.Meta
+ }{
+ {
+ name: "errors",
+ originalMetas: []bloomshipper.Meta{
+ genMeta(0, 10, []int{0}, []bloomshipper.BlockRef{genBlockRef(0, 10)}),
+ genMeta(10, 20, []int{0}, []bloomshipper.BlockRef{genBlockRef(10, 20)}),
+ },
+ taskResults: []*protos.TaskResult{
+ {
+ TaskID: "1",
+ Error: errors.New("fake error"),
+ },
+ {
+ TaskID: "2",
+ Error: errors.New("fake error"),
+ },
+ },
+ expectedMetas: []bloomshipper.Meta{
+ // The original metas should remain unchanged
+ genMeta(0, 10, []int{0}, []bloomshipper.BlockRef{genBlockRef(0, 10)}),
+ genMeta(10, 20, []int{0}, []bloomshipper.BlockRef{genBlockRef(10, 20)}),
+ },
+ },
+ {
+ name: "no new metas",
+ originalMetas: []bloomshipper.Meta{
+ genMeta(0, 10, []int{0}, []bloomshipper.BlockRef{genBlockRef(0, 10)}),
+ genMeta(10, 20, []int{0}, []bloomshipper.BlockRef{genBlockRef(10, 20)}),
+ },
+ taskResults: []*protos.TaskResult{
+ {
+ TaskID: "1",
+ },
+ {
+ TaskID: "2",
+ },
+ },
+ expectedMetas: []bloomshipper.Meta{
+ // The original metas should remain unchanged
+ genMeta(0, 10, []int{0}, []bloomshipper.BlockRef{genBlockRef(0, 10)}),
+ genMeta(10, 20, []int{0}, []bloomshipper.BlockRef{genBlockRef(10, 20)}),
+ },
+ },
+ {
+ name: "no original metas",
+ taskResults: []*protos.TaskResult{
+ {
+ TaskID: "1",
+ CreatedMetas: []bloomshipper.Meta{
+ genMeta(0, 10, []int{0}, []bloomshipper.BlockRef{genBlockRef(0, 10)}),
+ },
+ },
+ {
+ TaskID: "2",
+ CreatedMetas: []bloomshipper.Meta{
+ genMeta(10, 20, []int{0}, []bloomshipper.BlockRef{genBlockRef(10, 20)}),
+ },
+ },
+ },
+ expectedMetas: []bloomshipper.Meta{
+ genMeta(0, 10, []int{0}, []bloomshipper.BlockRef{genBlockRef(0, 10)}),
+ genMeta(10, 20, []int{0}, []bloomshipper.BlockRef{genBlockRef(10, 20)}),
+ },
+ },
+ {
+ name: "single meta covers all original",
+ originalMetas: []bloomshipper.Meta{
+ genMeta(0, 5, []int{0}, []bloomshipper.BlockRef{genBlockRef(0, 5)}),
+ genMeta(6, 10, []int{0}, []bloomshipper.BlockRef{genBlockRef(6, 10)}),
+ },
+ taskResults: []*protos.TaskResult{
+ {
+ TaskID: "1",
+ CreatedMetas: []bloomshipper.Meta{
+ genMeta(0, 10, []int{1}, []bloomshipper.BlockRef{genBlockRef(0, 10)}),
+ },
+ },
+ },
+ expectedMetas: []bloomshipper.Meta{
+ genMeta(0, 10, []int{1}, []bloomshipper.BlockRef{genBlockRef(0, 10)}),
+ },
+ },
+ {
+ name: "multi version ordering",
+ originalMetas: []bloomshipper.Meta{
+ genMeta(0, 5, []int{0}, []bloomshipper.BlockRef{genBlockRef(0, 5)}),
+ genMeta(0, 10, []int{1}, []bloomshipper.BlockRef{genBlockRef(0, 10)}), // only part of the range is outdated, must keep
+ },
+ taskResults: []*protos.TaskResult{
+ {
+ TaskID: "1",
+ CreatedMetas: []bloomshipper.Meta{
+ genMeta(8, 10, []int{2}, []bloomshipper.BlockRef{genBlockRef(8, 10)}),
+ },
+ },
+ },
+ expectedMetas: []bloomshipper.Meta{
+ genMeta(0, 10, []int{1}, []bloomshipper.BlockRef{genBlockRef(0, 10)}),
+ genMeta(8, 10, []int{2}, []bloomshipper.BlockRef{genBlockRef(8, 10)}),
+ },
+ },
+ } {
+ t.Run(tc.name, func(t *testing.T) {
+ logger := log.NewNopLogger()
+ //logger := log.NewLogfmtLogger(os.Stdout)
+
+ cfg := Config{
+ PlanningInterval: 1 * time.Hour,
+ MaxQueuedTasksPerTenant: 10000,
+ }
+ planner := createPlanner(t, cfg, &fakeLimits{}, logger)
+
+ bloomClient, err := planner.bloomStore.Client(testDay.ModelTime())
+ require.NoError(t, err)
+
+ // Create original metas and blocks
+ err = putMetas(bloomClient, tc.originalMetas)
+ require.NoError(t, err)
+
+ ctx, ctxCancel := context.WithCancel(context.Background())
+ defer ctxCancel()
+ resultsCh := make(chan *protos.TaskResult, len(tc.taskResults))
+
+ var wg sync.WaitGroup
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+
+ err = planner.processTenantTaskResults(
+ ctx,
+ testTable,
+ "fakeTenant",
+ tc.originalMetas,
+ len(tc.taskResults),
+ resultsCh,
+ )
+ require.NoError(t, err)
+ }()
+
+ for _, taskResult := range tc.taskResults {
+ if len(taskResult.CreatedMetas) > 0 {
+ // Emulate builder putting new metas to obj store
+ err = putMetas(bloomClient, taskResult.CreatedMetas)
+ require.NoError(t, err)
+ }
+
+ resultsCh <- taskResult
+ }
+
+ // Wait for all tasks to be processed and outdated metas/blocks deleted
+ wg.Wait()
+
+ // Get all metas
+ metas, err := planner.bloomStore.FetchMetas(
+ context.Background(),
+ bloomshipper.MetaSearchParams{
+ TenantID: "fakeTenant",
+ Interval: bloomshipper.NewInterval(testTable.Bounds()),
+ Keyspace: v1.NewBounds(0, math.MaxUint64),
+ },
+ )
+ require.NoError(t, err)
+
+			// TODO(salvacorts): Fix this
+			// For some reason, when the tests run in CI, the `loc` of model.Time is not encoded for each TSDB.
+			// As a result, the fetched metas have an empty loc whereas the original metas do not, so the
+			// comparison fails. As a workaround, we manually normalize the TS of the sources in the
+			// fetched metas.
+ for i := range metas {
+ for j := range metas[i].Sources {
+ sec := metas[i].Sources[j].TS.Unix()
+ nsec := metas[i].Sources[j].TS.Nanosecond()
+ metas[i].Sources[j].TS = time.Unix(sec, int64(nsec))
+ }
+ }
+
+ // Compare metas
+ require.Equal(t, len(tc.expectedMetas), len(metas))
+ require.ElementsMatch(t, tc.expectedMetas, metas)
+ })
+ }
+}
+
type fakeBuilder struct {
id string
tasks []*protos.Task
@@ -709,3 +949,17 @@ func parseDayTime(s string) config.DayTime {
Time: model.TimeFromUnix(t.Unix()),
}
}
+
+type DummyReadSeekCloser struct{}
+
+func (d *DummyReadSeekCloser) Read(_ []byte) (n int, err error) {
+ return 0, io.EOF
+}
+
+func (d *DummyReadSeekCloser) Seek(_ int64, _ int) (int64, error) {
+ return 0, nil
+}
+
+func (d *DummyReadSeekCloser) Close() error {
+ return nil
+}
diff --git a/pkg/bloombuild/planner/task.go b/pkg/bloombuild/planner/task.go
index 1da39cea6bfd7..8580dd12a655f 100644
--- a/pkg/bloombuild/planner/task.go
+++ b/pkg/bloombuild/planner/task.go
@@ -7,19 +7,27 @@ import (
"github.com/grafana/loki/v3/pkg/bloombuild/protos"
)
-type Task struct {
+type QueueTask struct {
*protos.Task
+ resultsChannel chan *protos.TaskResult
+
// Tracking
timesEnqueued int
queueTime time.Time
ctx context.Context
}
-func NewTask(ctx context.Context, queueTime time.Time, task *protos.Task) *Task {
- return &Task{
- Task: task,
- ctx: ctx,
- queueTime: queueTime,
+func NewQueueTask(
+ ctx context.Context,
+ queueTime time.Time,
+ task *protos.Task,
+ resultsChannel chan *protos.TaskResult,
+) *QueueTask {
+ return &QueueTask{
+ Task: task,
+ resultsChannel: resultsChannel,
+ ctx: ctx,
+ queueTime: queueTime,
}
}
diff --git a/pkg/bloombuild/planner/versioned_range.go b/pkg/bloombuild/planner/versioned_range.go
new file mode 100644
index 0000000000000..578b5d7ef83a6
--- /dev/null
+++ b/pkg/bloombuild/planner/versioned_range.go
@@ -0,0 +1,261 @@
+package planner
+
+import (
+ "sort"
+
+ "github.com/prometheus/common/model"
+
+ v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
+ "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
+)
+
+type tsdbToken struct {
+ through model.Fingerprint // inclusive
+ version int // TSDB version
+}
+
+// a ring of token ranges used to identify old metas.
+// each token represents that a TSDB version has covered the entire range
+// up to that point from the previous token.
+type tsdbTokenRange []tsdbToken
+
+func (t tsdbTokenRange) Len() int {
+ return len(t)
+}
+
+func (t tsdbTokenRange) Less(i, j int) bool {
+ return t[i].through < t[j].through
+}
+
+func (t tsdbTokenRange) Swap(i, j int) {
+ t[i], t[j] = t[j], t[i]
+}
+
+// Add ensures a versioned set of bounds is added to the range. If the bounds are already
+// covered by a more up-to-date version, it returns false.
+func (t tsdbTokenRange) Add(version int, bounds v1.FingerprintBounds) (res tsdbTokenRange, added bool) {
+ // allows attempting to join neighboring token ranges with identical versions
+ // that aren't known until the end of the function
+ var shouldReassemble bool
+ var reassembleFrom int
+ defer func() {
+ if shouldReassemble {
+ res = res.reassemble(reassembleFrom)
+ }
+ }()
+
+ // special case: first token
+ if len(t) == 0 {
+ tok := tsdbToken{through: bounds.Max, version: version}
+ // special case: first token is included in bounds, no need to fill negative space
+ if bounds.Min == 0 {
+ return append(t, tok), true
+ }
+ // Use a negative version to indicate that the range is not covered by any version.
+ return append(t, tsdbToken{through: bounds.Min - 1, version: -1}, tok), true
+ }
+
+ // For non-nil token ranges, we continually update the range with newer versions.
+ for {
+ // find first token that covers the start of the range
+ i := sort.Search(len(t), func(i int) bool {
+ return t[i].through >= bounds.Min
+ })
+
+ if i == len(t) {
+ tok := tsdbToken{through: bounds.Max, version: version}
+
+ // edge case: there is no gap between the previous token range
+ // and the new one;
+ // skip adding a negative token
+ if t[len(t)-1].through == bounds.Min-1 {
+ return append(t, tok), true
+ }
+
+ // the range is not covered by any version and we are at the end of the range.
+ // Add a negative token and the new token.
+ negative := tsdbToken{through: bounds.Min - 1, version: -1}
+ return append(t, negative, tok), true
+ }
+
+ // Otherwise, we've found a token that covers the start of the range.
+ newer := t[i].version < version
+ preExisting := t.boundsForToken(i)
+ if !newer {
+ if bounds.Within(preExisting) {
+				// The range is already covered by a more up-to-date version; no need
+				// to add anything, but preserve `added` in case an earlier token was added
+ return t, added
+ }
+
+ // The range is partially covered by a more up to date version;
+ // update the range we need to check and continue
+ bounds = v1.NewBounds(preExisting.Max+1, bounds.Max)
+ continue
+ }
+
+ // If we need to update the range, there are 5 cases:
+		// 1. `equal`: the incoming range equals an existing range
+ // ------ # addition
+ // ------ # src
+ // 2. `subset`: the incoming range is a subset of an existing range
+ // ------ # addition
+ // -------- # src
+ // 3. `overflow_both_sides`: the incoming range is a superset of an existing range. This is not possible
+ // because the first token in the ring implicitly covers the left bound (zero) of all possible fps.
+ // Therefore, we can skip this case.
+ // ------ # addition
+ // ---- # src
+ // 4. `right_overflow`: the incoming range overflows the right side of an existing range
+ // ------ # addition
+ // ------ # src
+		// 5. `left_overflow`: the incoming range overflows the left side of an existing range. This can be skipped
+		//    for the same reason as `overflow_both_sides`.
+ // ------ # addition
+ // ------ # src
+
+ // 1) (`equal`): we're replacing the same bounds
+ if bounds.Equal(preExisting) {
+ t[i].version = version
+ return t, true
+ }
+
+ // 2) (`subset`): the incoming range is a subset of an existing range
+ if bounds.Within(preExisting) {
+ // 2a) the incoming range touches the existing range's minimum bound
+ if bounds.Min == preExisting.Min {
+ tok := tsdbToken{through: bounds.Max, version: version}
+ t = append(t, tsdbToken{})
+ copy(t[i+1:], t[i:])
+ t[i] = tok
+ return t, true
+ }
+ // 2b) the incoming range touches the existing range's maximum bound
+ if bounds.Max == preExisting.Max {
+ t[i].through = bounds.Min - 1
+ tok := tsdbToken{through: bounds.Max, version: version}
+ t = append(t, tsdbToken{})
+ copy(t[i+2:], t[i+1:])
+ t[i+1] = tok
+ return t, true
+ }
+
+ // 2c) the incoming range does not touch either edge;
+ // add two tokens (the new one and a new left-bound for the old range)
+ tok := tsdbToken{through: bounds.Max, version: version}
+ t = append(t, tsdbToken{}, tsdbToken{})
+ copy(t[i+2:], t[i:])
+ t[i+1] = tok
+ t[i].through = bounds.Min - 1
+ return t, true
+ }
+
+ // 4) (`right_overflow`): the incoming range overflows the right side of an existing range
+
+ // 4a) shortcut: the incoming range is a right-overlapping superset of the existing range.
+ // replace the existing token's version, update reassembly targets for merging neighboring ranges
+ // w/ the same version, and continue
+ if preExisting.Min == bounds.Min {
+ t[i].version = version
+ bounds.Min = preExisting.Max + 1
+ added = true
+ if !shouldReassemble {
+ reassembleFrom = i
+ shouldReassemble = true
+ }
+ continue
+ }
+
+ // 4b) the incoming range overlaps the right side of the existing range but
+ // does not touch the left side;
+ // add a new token for the right side of the existing range then update the reassembly targets
+ // and continue
+ overlap := tsdbToken{through: t[i].through, version: version}
+ t[i].through = bounds.Min - 1
+ t = append(t, tsdbToken{})
+ copy(t[i+2:], t[i+1:])
+ t[i+1] = overlap
+ added = true
+ bounds.Min = overlap.through + 1
+ if !shouldReassemble {
+ reassembleFrom = i + 1
+ shouldReassemble = true
+ }
+ continue
+ }
+}
+
+func (t tsdbTokenRange) boundsForToken(i int) v1.FingerprintBounds {
+ if i == 0 {
+ return v1.FingerprintBounds{Min: 0, Max: t[i].through}
+ }
+ return v1.FingerprintBounds{Min: t[i-1].through + 1, Max: t[i].through}
+}
+
+// reassemble merges neighboring tokens with the same version
+func (t tsdbTokenRange) reassemble(from int) tsdbTokenRange {
+ reassembleTo := from
+ for i := from; i < len(t)-1; i++ {
+ if t[i].version != t[i+1].version {
+ break
+ }
+ reassembleTo = i + 1
+ }
+
+ if reassembleTo == from {
+ return t
+ }
+ t[from].through = t[reassembleTo].through
+ copy(t[from+1:], t[reassembleTo+1:])
+ return t[:len(t)-(reassembleTo-from)]
+}
+
+func outdatedMetas(metas []bloomshipper.Meta) []bloomshipper.Meta {
+ var outdated []bloomshipper.Meta
+
+ // Sort metas descending by most recent source when checking
+ // for outdated metas (older metas are discarded if they don't change the range).
+ sort.Slice(metas, func(i, j int) bool {
+ a, aExists := metas[i].MostRecentSource()
+ b, bExists := metas[j].MostRecentSource()
+
+ if !aExists && !bExists {
+ // stable sort two sourceless metas by their bounds (easier testing)
+ return metas[i].Bounds.Less(metas[j].Bounds)
+ }
+
+ if !aExists {
+ // If a meta has no sources, it's out of date by definition.
+ // By convention we sort it to the beginning of the list and will mark it for removal later
+ return true
+ }
+
+ if !bExists {
+ // if a exists but b does not, mark b as lesser, sorting b to the
+ // front
+ return false
+ }
+ return !a.TS.Before(b.TS)
+ })
+
+ var (
+ tokenRange tsdbTokenRange
+ added bool
+ )
+
+ for _, meta := range metas {
+ mostRecent, exists := meta.MostRecentSource()
+ if !exists {
+ // if the meta exists but does not reference a TSDB, it's out of date
+ // TODO(owen-d): this shouldn't happen, figure out why
+ outdated = append(outdated, meta)
+ continue
+ }
+ version := int(model.TimeFromUnixNano(mostRecent.TS.UnixNano()))
+ tokenRange, added = tokenRange.Add(version, meta.Bounds)
+ if !added {
+ outdated = append(outdated, meta)
+ }
+ }
+
+ return outdated
+}
diff --git a/pkg/bloombuild/planner/versioned_range_test.go b/pkg/bloombuild/planner/versioned_range_test.go
new file mode 100644
index 0000000000000..e58f143842f1c
--- /dev/null
+++ b/pkg/bloombuild/planner/versioned_range_test.go
@@ -0,0 +1,322 @@
+package planner
+
+import (
+ "testing"
+
+ "github.com/prometheus/common/model"
+ "github.com/stretchr/testify/require"
+
+ v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
+ "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
+ "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
+)
+
+func Test_TsdbTokenRange(t *testing.T) {
+ type addition struct {
+ version int
+ bounds v1.FingerprintBounds
+ }
+ type exp struct {
+ added bool
+ err bool
+ }
+ mk := func(version int, min, max model.Fingerprint) addition {
+ return addition{version, v1.FingerprintBounds{Min: min, Max: max}}
+ }
+ tok := func(version int, through model.Fingerprint) tsdbToken {
+ return tsdbToken{version: version, through: through}
+ }
+
+ for _, tc := range []struct {
+ desc string
+ additions []addition
+ exp []bool
+ result tsdbTokenRange
+ }{
+ {
+ desc: "ascending versions",
+ additions: []addition{
+ mk(1, 0, 10),
+ mk(2, 11, 20),
+ mk(3, 15, 25),
+ },
+ exp: []bool{true, true, true},
+ result: tsdbTokenRange{
+ tok(1, 10),
+ tok(2, 14),
+ tok(3, 25),
+ },
+ },
+ {
+ desc: "descending versions",
+ additions: []addition{
+ mk(3, 15, 25),
+ mk(2, 11, 20),
+ mk(1, 0, 10),
+ },
+ exp: []bool{true, true, true},
+ result: tsdbTokenRange{
+ tok(1, 10),
+ tok(2, 14),
+ tok(3, 25),
+ },
+ },
+ {
+ desc: "simple",
+ additions: []addition{
+ mk(3, 0, 10),
+ mk(2, 11, 20),
+ mk(1, 15, 25),
+ },
+ exp: []bool{true, true, true},
+ result: tsdbTokenRange{
+ tok(3, 10),
+ tok(2, 20),
+ tok(1, 25),
+ },
+ },
+ {
+ desc: "simple replacement",
+ additions: []addition{
+ mk(3, 10, 20),
+ mk(2, 0, 9),
+ },
+ exp: []bool{true, true},
+ result: tsdbTokenRange{
+ tok(2, 9),
+ tok(3, 20),
+ },
+ },
+ {
+ desc: "complex",
+ additions: []addition{
+ mk(5, 30, 50),
+ mk(4, 20, 45),
+ mk(3, 25, 70),
+ mk(2, 10, 20),
+ mk(1, 1, 5),
+ },
+ exp: []bool{true, true, true, true, true},
+ result: tsdbTokenRange{
+ tok(-1, 0),
+ tok(1, 5),
+ tok(-1, 9),
+ tok(2, 19),
+ tok(4, 29),
+ tok(5, 50),
+ tok(3, 70),
+ },
+ },
+ {
+ desc: "neighboring upper range",
+ additions: []addition{
+ mk(5, 30, 50),
+ mk(4, 51, 60),
+ },
+ exp: []bool{true, true},
+ result: tsdbTokenRange{
+ tok(-1, 29),
+ tok(5, 50),
+ tok(4, 60),
+ },
+ },
+ {
+ desc: "non-neighboring upper range",
+ additions: []addition{
+ mk(5, 30, 50),
+ mk(4, 55, 60),
+ },
+ exp: []bool{true, true},
+ result: tsdbTokenRange{
+ tok(-1, 29),
+ tok(5, 50),
+ tok(-1, 54),
+ tok(4, 60),
+ },
+ },
+ {
+ desc: "earlier version within",
+ additions: []addition{
+ mk(5, 30, 50),
+ mk(4, 40, 45),
+ },
+ exp: []bool{true, false},
+ result: tsdbTokenRange{
+ tok(-1, 29),
+ tok(5, 50),
+ },
+ },
+ {
+ desc: "earlier version right overlapping",
+ additions: []addition{
+ mk(5, 10, 20),
+ mk(4, 15, 25),
+ },
+ exp: []bool{true, true},
+ result: tsdbTokenRange{
+ tok(-1, 9),
+ tok(5, 20),
+ tok(4, 25),
+ },
+ },
+ {
+ desc: "older version overlaps two",
+ additions: []addition{
+ mk(3, 10, 20),
+ mk(2, 21, 30),
+ mk(1, 15, 25),
+ },
+ exp: []bool{true, true, false},
+ result: tsdbTokenRange{
+ tok(-1, 9),
+ tok(3, 20),
+ tok(2, 30),
+ },
+ },
+ {
+ desc: "older version overlaps two w middle",
+ additions: []addition{
+ mk(3, 10, 20),
+ mk(2, 22, 30),
+ mk(1, 15, 25),
+ },
+ exp: []bool{true, true, true},
+ result: tsdbTokenRange{
+ tok(-1, 9),
+ tok(3, 20),
+ tok(1, 21),
+ tok(2, 30),
+ },
+ },
+ {
+ desc: "newer right overflow",
+ additions: []addition{
+ mk(1, 30, 50),
+ mk(2, 40, 60),
+ },
+ exp: []bool{true, true},
+ result: tsdbTokenRange{
+ tok(-1, 29),
+ tok(1, 39),
+ tok(2, 60),
+ },
+ },
+ {
+ desc: "newer right overflow superset",
+ additions: []addition{
+ mk(1, 30, 50),
+ mk(2, 30, 60),
+ },
+ exp: []bool{true, true},
+ result: tsdbTokenRange{
+ tok(-1, 29),
+ tok(2, 60),
+ },
+ },
+ {
+ desc: "newer right overflow partial",
+ additions: []addition{
+ mk(1, 30, 50),
+ mk(2, 40, 60),
+ },
+ exp: []bool{true, true},
+ result: tsdbTokenRange{
+ tok(-1, 29),
+ tok(1, 39),
+ tok(2, 60),
+ },
+ },
+ } {
+ t.Run(tc.desc, func(t *testing.T) {
+ var (
+ tr tsdbTokenRange
+ added bool
+ )
+ for i, a := range tc.additions {
+ tr, added = tr.Add(a.version, a.bounds)
+ exp := tc.exp[i]
+ require.Equal(t, exp, added, "on iteration %d", i)
+ }
+ require.Equal(t, tc.result, tr)
+ })
+ }
+}
+
+func Test_OutdatedMetas(t *testing.T) {
+ gen := func(bounds v1.FingerprintBounds, tsdbTimes ...model.Time) (meta bloomshipper.Meta) {
+ for _, tsdbTime := range tsdbTimes {
+ meta.Sources = append(meta.Sources, tsdb.SingleTenantTSDBIdentifier{TS: tsdbTime.Time()})
+ }
+ meta.Bounds = bounds
+ return meta
+ }
+
+ for _, tc := range []struct {
+ desc string
+ metas []bloomshipper.Meta
+ exp []bloomshipper.Meta
+ }{
+ {
+ desc: "no metas",
+ metas: nil,
+ exp: nil,
+ },
+ {
+ desc: "single meta",
+ metas: []bloomshipper.Meta{
+ gen(v1.NewBounds(0, 10), 0),
+ },
+ exp: nil,
+ },
+ {
+ desc: "single outdated meta",
+ metas: []bloomshipper.Meta{
+ gen(v1.NewBounds(0, 10), 0),
+ gen(v1.NewBounds(0, 10), 1),
+ },
+ exp: []bloomshipper.Meta{
+ gen(v1.NewBounds(0, 10), 0),
+ },
+ },
+ {
+ desc: "single outdated via partitions",
+ metas: []bloomshipper.Meta{
+ gen(v1.NewBounds(0, 5), 0),
+ gen(v1.NewBounds(6, 10), 0),
+ gen(v1.NewBounds(0, 10), 1),
+ },
+ exp: []bloomshipper.Meta{
+ gen(v1.NewBounds(6, 10), 0),
+ gen(v1.NewBounds(0, 5), 0),
+ },
+ },
+ {
+ desc: "same tsdb versions",
+ metas: []bloomshipper.Meta{
+ gen(v1.NewBounds(0, 5), 0),
+ gen(v1.NewBounds(6, 10), 0),
+ gen(v1.NewBounds(0, 10), 1),
+ },
+ exp: []bloomshipper.Meta{
+ gen(v1.NewBounds(6, 10), 0),
+ gen(v1.NewBounds(0, 5), 0),
+ },
+ },
+ {
+ desc: "multi version ordering",
+ metas: []bloomshipper.Meta{
+ gen(v1.NewBounds(0, 5), 0),
+ gen(v1.NewBounds(0, 10), 1), // only part of the range is outdated, must keep
+ gen(v1.NewBounds(8, 10), 2),
+ },
+ exp: []bloomshipper.Meta{
+ gen(v1.NewBounds(0, 5), 0),
+ },
+ },
+ } {
+ t.Run(tc.desc, func(t *testing.T) {
+ outdated := outdatedMetas(tc.metas)
+ require.Equal(t, tc.exp, outdated)
+ })
+ }
+}
diff --git a/pkg/bloomcompactor/batch.go b/pkg/bloomcompactor/batch.go
index 4247fc1e4b52c..4525bca006a07 100644
--- a/pkg/bloomcompactor/batch.go
+++ b/pkg/bloomcompactor/batch.go
@@ -168,9 +168,9 @@ func newBatchedBlockLoader(
}
// compiler checks
-var _ v1.Iterator[*v1.SeriesWithBloom] = &blockLoadingIter{}
-var _ v1.CloseableIterator[*v1.SeriesWithBloom] = &blockLoadingIter{}
-var _ v1.ResettableIterator[*v1.SeriesWithBloom] = &blockLoadingIter{}
+var _ v1.Iterator[*v1.SeriesWithBlooms] = &blockLoadingIter{}
+var _ v1.CloseableIterator[*v1.SeriesWithBlooms] = &blockLoadingIter{}
+var _ v1.ResettableIterator[*v1.SeriesWithBlooms] = &blockLoadingIter{}
// TODO(chaudum): testware
func newBlockLoadingIter(ctx context.Context, blocks []bloomshipper.BlockRef, fetcher FetchFunc[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier], batchSize int) *blockLoadingIter {
@@ -196,13 +196,13 @@ type blockLoadingIter struct {
// internals
initialized bool
err error
- iter v1.Iterator[*v1.SeriesWithBloom]
+ iter v1.Iterator[*v1.SeriesWithBlooms]
loader *batchedLoader[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier, *bloomshipper.CloseableBlockQuerier]
loaded map[io.Closer]struct{}
}
// At implements v1.Iterator.
-func (i *blockLoadingIter) At() *v1.SeriesWithBloom {
+func (i *blockLoadingIter) At() *v1.SeriesWithBlooms {
if !i.initialized {
panic("iterator not initialized")
}
@@ -229,7 +229,7 @@ func (i *blockLoadingIter) init() {
i.overlapping = overlappingBlocksIter(i.inputs)
// set initial iter
- i.iter = v1.NewEmptyIter[*v1.SeriesWithBloom]()
+ i.iter = v1.NewEmptyIter[*v1.SeriesWithBlooms]()
// set "match all" filter function if not present
if i.filter == nil {
@@ -249,14 +249,14 @@ func (i *blockLoadingIter) loadNext() bool {
loader := newBatchedBlockLoader(i.ctx, i.fetcher, blockRefs, i.batchSize)
filtered := v1.NewFilterIter[*bloomshipper.CloseableBlockQuerier](loader, i.filter)
- iters := make([]v1.PeekingIterator[*v1.SeriesWithBloom], 0, len(blockRefs))
+ iters := make([]v1.PeekingIterator[*v1.SeriesWithBlooms], 0, len(blockRefs))
for filtered.Next() {
bq := filtered.At()
i.loaded[bq] = struct{}{}
iter, err := bq.SeriesIter()
if err != nil {
i.err = err
- i.iter = v1.NewEmptyIter[*v1.SeriesWithBloom]()
+ i.iter = v1.NewEmptyIter[*v1.SeriesWithBlooms]()
return false
}
iters = append(iters, iter)
@@ -264,7 +264,7 @@ func (i *blockLoadingIter) loadNext() bool {
if err := filtered.Err(); err != nil {
i.err = err
- i.iter = v1.NewEmptyIter[*v1.SeriesWithBloom]()
+ i.iter = v1.NewEmptyIter[*v1.SeriesWithBlooms]()
return false
}
@@ -278,12 +278,12 @@ func (i *blockLoadingIter) loadNext() bool {
// two overlapping blocks can conceivably have the same series, so we need to dedupe,
// preferring the one with the most chunks already indexed since we'll have
// to add fewer chunks to the bloom
- i.iter = v1.NewDedupingIter[*v1.SeriesWithBloom, *v1.SeriesWithBloom](
- func(a, b *v1.SeriesWithBloom) bool {
+ i.iter = v1.NewDedupingIter[*v1.SeriesWithBlooms, *v1.SeriesWithBlooms](
+ func(a, b *v1.SeriesWithBlooms) bool {
return a.Series.Fingerprint == b.Series.Fingerprint
},
- v1.Identity[*v1.SeriesWithBloom],
- func(a, b *v1.SeriesWithBloom) *v1.SeriesWithBloom {
+ v1.Identity[*v1.SeriesWithBlooms],
+ func(a, b *v1.SeriesWithBlooms) *v1.SeriesWithBlooms {
if len(a.Series.Chunks) > len(b.Series.Chunks) {
return a
}
@@ -294,7 +294,7 @@ func (i *blockLoadingIter) loadNext() bool {
return i.iter.Next()
}
- i.iter = v1.NewEmptyIter[*v1.SeriesWithBloom]()
+ i.iter = v1.NewEmptyIter[*v1.SeriesWithBlooms]()
i.err = i.overlapping.Err()
return false
}
diff --git a/pkg/bloomcompactor/bloomcompactor.go b/pkg/bloomcompactor/bloomcompactor.go
index b46ec1cba7c87..acfb5ba01f355 100644
--- a/pkg/bloomcompactor/bloomcompactor.go
+++ b/pkg/bloomcompactor/bloomcompactor.go
@@ -303,7 +303,7 @@ func (c *Compactor) loadWork(
if err != nil {
return errors.Wrap(err, "getting tenants")
}
- nTenants := tenants.Len()
+ nTenants := tenants.Remaining()
type ownedTenant struct {
tenant string
diff --git a/pkg/bloomcompactor/controller.go b/pkg/bloomcompactor/controller.go
index f9defdc1fdfbc..277d040d688b9 100644
--- a/pkg/bloomcompactor/controller.go
+++ b/pkg/bloomcompactor/controller.go
@@ -287,7 +287,7 @@ func (s *SimpleBloomController) loadWorkForGap(
tenant string,
id tsdb.Identifier,
gap gapWithBlocks,
-) (v1.Iterator[*v1.Series], v1.CloseableResettableIterator[*v1.SeriesWithBloom], error) {
+) (v1.Iterator[*v1.Series], v1.CloseableResettableIterator[*v1.SeriesWithBlooms], error) {
// load a series iterator for the gap
seriesItr, err := s.tsdbStore.LoadTSDB(ctx, table, tenant, id, gap.bounds)
if err != nil {
diff --git a/pkg/bloomcompactor/spec.go b/pkg/bloomcompactor/spec.go
index 229efe9c16935..2cb16eac02eae 100644
--- a/pkg/bloomcompactor/spec.go
+++ b/pkg/bloomcompactor/spec.go
@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"io"
- "time"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
@@ -45,7 +44,7 @@ type SimpleBloomGenerator struct {
userID string
store v1.Iterator[*v1.Series]
chunkLoader ChunkLoader
- blocksIter v1.ResettableIterator[*v1.SeriesWithBloom]
+ blocksIter v1.ResettableIterator[*v1.SeriesWithBlooms]
// options to build blocks with
opts v1.BlockOptions
@@ -68,7 +67,7 @@ func NewSimpleBloomGenerator(
opts v1.BlockOptions,
store v1.Iterator[*v1.Series],
chunkLoader ChunkLoader,
- blocksIter v1.ResettableIterator[*v1.SeriesWithBloom],
+ blocksIter v1.ResettableIterator[*v1.SeriesWithBlooms],
readWriterFn func() (v1.BlockWriter, v1.BlockReader),
reporter func(model.Fingerprint),
metrics *Metrics,
@@ -98,44 +97,30 @@ func NewSimpleBloomGenerator(
}
}
-func (s *SimpleBloomGenerator) populator(ctx context.Context) func(series *v1.Series, bloom *v1.Bloom) (int, bool, error) {
- return func(series *v1.Series, bloom *v1.Bloom) (int, bool, error) {
- start := time.Now()
+func (s *SimpleBloomGenerator) populator(ctx context.Context) v1.BloomPopulatorFunc {
+ return func(
+ series *v1.Series,
+ srcBlooms v1.SizedIterator[*v1.Bloom],
+ toAdd v1.ChunkRefs,
+ ch chan *v1.BloomCreation,
+ ) {
level.Debug(s.logger).Log(
"msg", "populating bloom filter",
"stage", "before",
"fp", series.Fingerprint,
"chunks", len(series.Chunks),
)
- chunkItersWithFP, err := s.chunkLoader.Load(ctx, s.userID, series)
- if err != nil {
- return 0, false, errors.Wrapf(err, "failed to load chunks for series: %+v", series)
- }
-
- bytesAdded, skip, err := s.tokenizer.Populate(
- &v1.SeriesWithBloom{
- Series: series,
- Bloom: bloom,
- },
- chunkItersWithFP.itr,
- )
+ chunkItersWithFP := s.chunkLoader.Load(ctx, s.userID, &v1.Series{
+ Fingerprint: series.Fingerprint,
+ Chunks: toAdd,
+ })
- level.Debug(s.logger).Log(
- "msg", "populating bloom filter",
- "stage", "after",
- "fp", series.Fingerprint,
- "chunks", len(series.Chunks),
- "series_bytes", bytesAdded,
- "duration", time.Since(start),
- "err", err,
- )
+ s.tokenizer.Populate(srcBlooms, chunkItersWithFP.itr, ch)
if s.reporter != nil {
s.reporter(series.Fingerprint)
}
- return bytesAdded, skip, err
}
-
}
func (s *SimpleBloomGenerator) Generate(ctx context.Context) *LazyBlockBuilderIterator {
@@ -179,10 +164,10 @@ type LazyBlockBuilderIterator struct {
ctx context.Context
opts v1.BlockOptions
metrics *Metrics
- populate func(*v1.Series, *v1.Bloom) (int, bool, error)
+ populate v1.BloomPopulatorFunc
readWriterFn func() (v1.BlockWriter, v1.BlockReader)
series v1.PeekingIterator[*v1.Series]
- blocks v1.ResettableIterator[*v1.SeriesWithBloom]
+ blocks v1.ResettableIterator[*v1.SeriesWithBlooms]
bytesAdded int
curr *v1.Block
@@ -193,10 +178,10 @@ func NewLazyBlockBuilderIterator(
ctx context.Context,
opts v1.BlockOptions,
metrics *Metrics,
- populate func(*v1.Series, *v1.Bloom) (int, bool, error),
+ populate v1.BloomPopulatorFunc,
readWriterFn func() (v1.BlockWriter, v1.BlockReader),
series v1.PeekingIterator[*v1.Series],
- blocks v1.ResettableIterator[*v1.SeriesWithBloom],
+ blocks v1.ResettableIterator[*v1.SeriesWithBlooms],
) *LazyBlockBuilderIterator {
return &LazyBlockBuilderIterator{
ctx: ctx,
@@ -270,7 +255,7 @@ type ChunkItersByFingerprint struct {
// ChunkLoader loads chunks from a store
type ChunkLoader interface {
- Load(ctx context.Context, userID string, series *v1.Series) (*ChunkItersByFingerprint, error)
+ Load(ctx context.Context, userID string, series *v1.Series) *ChunkItersByFingerprint
}
// StoreChunkLoader loads chunks from a store
@@ -286,7 +271,7 @@ func NewStoreChunkLoader(fetcherProvider stores.ChunkFetcherProvider, metrics *M
}
}
-func (s *StoreChunkLoader) Load(ctx context.Context, userID string, series *v1.Series) (*ChunkItersByFingerprint, error) {
+func (s *StoreChunkLoader) Load(ctx context.Context, userID string, series *v1.Series) *ChunkItersByFingerprint {
// NB(owen-d): This is probably unnecessary as we should only have one fetcher
// because we'll only be working on a single index period at a time, but this should protect
// us in the case of refactoring/changing this and likely isn't a perf bottleneck.
@@ -317,5 +302,5 @@ func (s *StoreChunkLoader) Load(ctx context.Context, userID string, series *v1.S
return &ChunkItersByFingerprint{
fp: series.Fingerprint,
itr: newBatchedChunkLoader(ctx, fetchers, inputs, s.metrics, batchedLoaderDefaultBatchSize),
- }, nil
+ }
}
diff --git a/pkg/bloomcompactor/spec_test.go b/pkg/bloomcompactor/spec_test.go
index 7e39b8dec57f0..f887d32053226 100644
--- a/pkg/bloomcompactor/spec_test.go
+++ b/pkg/bloomcompactor/spec_test.go
@@ -15,19 +15,19 @@ import (
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
)
-func blocksFromSchema(t *testing.T, n int, options v1.BlockOptions) (res []*v1.Block, data []v1.SeriesWithBloom, refs []bloomshipper.BlockRef) {
+func blocksFromSchema(t *testing.T, n int, options v1.BlockOptions) (res []*v1.Block, data []v1.SeriesWithBlooms, refs []bloomshipper.BlockRef) {
return blocksFromSchemaWithRange(t, n, options, 0, 0xffff)
}
// splits 100 series across `n` non-overlapping blocks.
// uses options to build blocks with.
-func blocksFromSchemaWithRange(t *testing.T, n int, options v1.BlockOptions, fromFP, throughFp model.Fingerprint) (res []*v1.Block, data []v1.SeriesWithBloom, refs []bloomshipper.BlockRef) {
+func blocksFromSchemaWithRange(t *testing.T, n int, options v1.BlockOptions, fromFP, throughFp model.Fingerprint) (res []*v1.Block, data []v1.SeriesWithBlooms, refs []bloomshipper.BlockRef) {
if 100%n != 0 {
panic("100 series must be evenly divisible by n")
}
numSeries := 100
- data, _ = v1.MkBasicSeriesWithBlooms(numSeries, 0, fromFP, throughFp, 0, 10000)
+ data, _ = v1.MkBasicSeriesWithBlooms(numSeries, fromFP, throughFp, 0, 10000)
seriesPerBlock := numSeries / n
@@ -46,7 +46,7 @@ func blocksFromSchemaWithRange(t *testing.T, n int, options v1.BlockOptions, fro
minIdx, maxIdx := i*seriesPerBlock, (i+1)*seriesPerBlock
- itr := v1.NewSliceIter[v1.SeriesWithBloom](data[minIdx:maxIdx])
+ itr := v1.NewSliceIter[v1.SeriesWithBlooms](data[minIdx:maxIdx])
_, err = builder.BuildFrom(itr)
require.Nil(t, err)
@@ -62,11 +62,11 @@ func blocksFromSchemaWithRange(t *testing.T, n int, options v1.BlockOptions, fro
// doesn't actually load any chunks
type dummyChunkLoader struct{}
-func (dummyChunkLoader) Load(_ context.Context, _ string, series *v1.Series) (*ChunkItersByFingerprint, error) {
+func (dummyChunkLoader) Load(_ context.Context, _ string, series *v1.Series) *ChunkItersByFingerprint {
return &ChunkItersByFingerprint{
fp: series.Fingerprint,
itr: v1.NewEmptyIter[v1.ChunkRefWithIter](),
- }, nil
+ }
}
func dummyBloomGen(t *testing.T, opts v1.BlockOptions, store v1.Iterator[*v1.Series], blocks []*v1.Block, refs []bloomshipper.BlockRef) *SimpleBloomGenerator {
@@ -132,9 +132,9 @@ func TestSimpleBloomGenerator(t *testing.T) {
} {
t.Run(fmt.Sprintf("%s/%s", tc.desc, enc), func(t *testing.T) {
sourceBlocks, data, refs := blocksFromSchemaWithRange(t, 2, tc.fromSchema, 0x00000, 0x6ffff)
- storeItr := v1.NewMapIter[v1.SeriesWithBloom, *v1.Series](
- v1.NewSliceIter[v1.SeriesWithBloom](data),
- func(swb v1.SeriesWithBloom) *v1.Series {
+ storeItr := v1.NewMapIter[v1.SeriesWithBlooms, *v1.Series](
+ v1.NewSliceIter[v1.SeriesWithBlooms](data),
+ func(swb v1.SeriesWithBlooms) *v1.Series {
return swb.Series
},
)
@@ -150,9 +150,9 @@ func TestSimpleBloomGenerator(t *testing.T) {
// Check all the input series are present in the output blocks.
expectedRefs := v1.PointerSlice(data)
- outputRefs := make([]*v1.SeriesWithBloom, 0, len(data))
+ outputRefs := make([]*v1.SeriesWithBlooms, 0, len(data))
for _, block := range outputBlocks {
- bq := v1.NewBlockQuerier(block, false, v1.DefaultMaxPageSize)
+ bq := v1.NewBlockQuerier(block, false, v1.DefaultMaxPageSize).Iter()
for bq.Next() {
outputRefs = append(outputRefs, bq.At())
}
diff --git a/pkg/bloomcompactor/versioned_range.go b/pkg/bloomcompactor/versioned_range.go
index 03da12f1d7da5..8af56a0754cc3 100644
--- a/pkg/bloomcompactor/versioned_range.go
+++ b/pkg/bloomcompactor/versioned_range.go
@@ -214,13 +214,24 @@ func outdatedMetas(metas []bloomshipper.Meta) (outdated []bloomshipper.Meta, err
// Sort metas descending by most recent source when checking
// for outdated metas (older metas are discarded if they don't change the range).
sort.Slice(metas, func(i, j int) bool {
- a, err := metas[i].MostRecentSource()
- if err != nil {
- panic(err.Error())
+ a, aExists := metas[i].MostRecentSource()
+ b, bExists := metas[j].MostRecentSource()
+
+ if !aExists && !bExists {
+ // stable sort two sourceless metas by their bounds (easier testing)
+ return metas[i].Bounds.Less(metas[j].Bounds)
}
- b, err := metas[j].MostRecentSource()
- if err != nil {
- panic(err.Error())
+
+ if !aExists {
+ // If a meta has no sources, it's out of date by definition.
+ // By convention we sort it to the beginning of the list and will mark it for removal later
+ return true
+ }
+
+ if !bExists {
+ // if a exists but b does not, mark b as lesser, sorting b to the
+ // front
+ return false
}
return !a.TS.Before(b.TS)
})
@@ -231,9 +242,11 @@ func outdatedMetas(metas []bloomshipper.Meta) (outdated []bloomshipper.Meta, err
)
for _, meta := range metas {
- mostRecent, err := meta.MostRecentSource()
- if err != nil {
- return nil, err
+ mostRecent, exists := meta.MostRecentSource()
+ if !exists {
+ // if the meta exists but does not reference a TSDB, it's out of date
+ // TODO(owen-d): this shouldn't happen, figure out why
+ outdated = append(outdated, meta)
+ continue
}
version := int(model.TimeFromUnixNano(mostRecent.TS.UnixNano()))
tokenRange, added = tokenRange.Add(version, meta.Bounds)
diff --git a/pkg/bloomcompactor/versioned_range_test.go b/pkg/bloomcompactor/versioned_range_test.go
index a85418bc6e1e5..67db348036ffa 100644
--- a/pkg/bloomcompactor/versioned_range_test.go
+++ b/pkg/bloomcompactor/versioned_range_test.go
@@ -313,6 +313,35 @@ func Test_OutdatedMetas(t *testing.T) {
gen(v1.NewBounds(0, 5), 0),
},
},
+ {
+ desc: "metas without sources are removed",
+ metas: []bloomshipper.Meta{
+ gen(v1.NewBounds(0, 5), 0),
+ gen(v1.NewBounds(6, 10), 0),
+ gen(v1.NewBounds(0, 10), 1),
+ gen(v1.NewBounds(11, 15)), // Meta without sources
+ },
+ exp: []bloomshipper.Meta{
+ gen(v1.NewBounds(11, 15)), // Meta without sources
+ gen(v1.NewBounds(6, 10), 0),
+ gen(v1.NewBounds(0, 5), 0),
+ },
+ },
+ {
+ desc: "metas without sources are interleaved",
+ metas: []bloomshipper.Meta{
+ gen(v1.NewBounds(0, 5), 0),
+ gen(v1.NewBounds(6, 10)), // Meta without sources
+ gen(v1.NewBounds(0, 10), 1),
+ gen(v1.NewBounds(11, 15)), // Meta without sources
+ gen(v1.NewBounds(16, 20), 2),
+ },
+ exp: []bloomshipper.Meta{
+ gen(v1.NewBounds(6, 10)), // Meta without sources
+ gen(v1.NewBounds(11, 15)), // Meta without sources
+ gen(v1.NewBounds(0, 5), 0),
+ },
+ },
} {
t.Run(tc.desc, func(t *testing.T) {
outdated, err := outdatedMetas(tc.metas)
diff --git a/pkg/bloomgateway/bloomgateway_test.go b/pkg/bloomgateway/bloomgateway_test.go
index 15c9ca2be2d85..fdcd7df117f3f 100644
--- a/pkg/bloomgateway/bloomgateway_test.go
+++ b/pkg/bloomgateway/bloomgateway_test.go
@@ -325,7 +325,7 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
{
Fingerprint: uint64(1000 + 100*idx),
UserID: tenantID,
- From: now.Add(-24 * time.Hour),
+ From: now.Add(-4 * time.Hour),
Through: now,
Checksum: uint32(idx),
},
@@ -335,7 +335,7 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
TenantID: tenantID,
TableName: "table_1",
Bounds: v1.NewBounds(0, 10000),
- StartTimestamp: now.Add(-24 * time.Hour),
+ StartTimestamp: now.Add(-4 * time.Hour),
EndTimestamp: now,
Checksum: uint32(idx),
},
@@ -343,7 +343,7 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
expr, err := syntax.ParseExpr(`{foo="bar"} |= "foo"`)
require.NoError(t, err)
req := &logproto.FilterChunkRefRequest{
- From: now.Add(-24 * time.Hour),
+ From: now.Add(-4 * time.Hour),
Through: now,
Refs: groupRefs(t, chunkRefs),
Plan: plan.QueryPlan{AST: expr},
diff --git a/pkg/bloomgateway/querier.go b/pkg/bloomgateway/querier.go
index c92d6fad30f73..23de7a15e2be7 100644
--- a/pkg/bloomgateway/querier.go
+++ b/pkg/bloomgateway/querier.go
@@ -23,6 +23,7 @@ import (
type querierMetrics struct {
chunksTotal prometheus.Counter
chunksFiltered prometheus.Counter
+ chunksSkipped prometheus.Counter
seriesTotal prometheus.Counter
seriesFiltered prometheus.Counter
seriesSkipped prometheus.Counter
@@ -42,6 +43,12 @@ func newQuerierMetrics(registerer prometheus.Registerer, namespace, subsystem st
Name: "chunks_filtered_total",
Help: "Total amount of chunks that have been filtered out. Does not count chunks in failed requests.",
}),
+ chunksSkipped: promauto.With(registerer).NewCounter(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ Name: "chunks_skipped_total",
+ Help: "Total amount of chunks that have been skipped and returned unfiltered, because no block matched the series.",
+ }),
seriesTotal: promauto.With(registerer).NewCounter(prometheus.CounterOpts{
Namespace: namespace,
Subsystem: subsystem,
@@ -137,6 +144,7 @@ func (bq *BloomQuerier) FilterChunkRefs(ctx context.Context, tenant string, from
}
}
+ var skippedGrps [][]*logproto.GroupedChunkRefs
responses := make([][]*logproto.GroupedChunkRefs, 0, 2)
// We can perform requests sequentially, because most of the time the request
// only covers a single day, and if not, it's at most two days.
@@ -152,9 +160,19 @@ func (bq *BloomQuerier) FilterChunkRefs(ctx context.Context, tenant string, from
return nil, err
}
- // add chunk refs from series that were not mapped to any blocks
+ skippedGrps = append(skippedGrps, skipped)
responses = append(responses, refs, skipped)
- bq.metrics.seriesSkipped.Add(float64(len(skipped)))
+ }
+
+ // add chunk refs from series that were not mapped to any blocks
+ skippedDeduped, err := mergeSeries(skippedGrps, nil)
+ if err != nil {
+ return nil, errors.Wrap(err, "failed to dedupe skipped series")
+ }
+
+ var chunksSkipped int
+ for _, skippedSeries := range skippedDeduped {
+ chunksSkipped += len(skippedSeries.Refs)
}
deduped, err := mergeSeries(responses, nil)
@@ -185,15 +203,19 @@ func (bq *BloomQuerier) FilterChunkRefs(ctx context.Context, tenant string, from
"responses", len(responses),
"preFilterChunks", preFilterChunks,
"postFilterChunks", postFilterChunks,
+ "skippedChunks", chunksSkipped,
"filteredChunks", preFilterChunks-postFilterChunks,
"preFilterSeries", preFilterSeries,
"postFilterSeries", postFilterSeries,
+ "skippedSeries", len(skippedDeduped),
"filteredSeries", preFilterSeries-postFilterSeries,
)
bq.metrics.chunksTotal.Add(float64(preFilterChunks))
+ bq.metrics.chunksSkipped.Add(float64(chunksSkipped))
bq.metrics.chunksFiltered.Add(float64(preFilterChunks - postFilterChunks))
bq.metrics.seriesTotal.Add(float64(preFilterSeries))
+ bq.metrics.seriesSkipped.Add(float64(len(skippedDeduped)))
bq.metrics.seriesFiltered.Add(float64(preFilterSeries - postFilterSeries))
return result, nil
diff --git a/pkg/bloomgateway/util.go b/pkg/bloomgateway/util.go
index 9617202b948c3..df3a93fcafeda 100644
--- a/pkg/bloomgateway/util.go
+++ b/pkg/bloomgateway/util.go
@@ -2,6 +2,7 @@ package bloomgateway
import (
"sort"
+ "time"
"github.com/prometheus/common/model"
"golang.org/x/exp/slices"
@@ -102,38 +103,30 @@ func partitionSeriesByDay(from, through model.Time, seriesWithChunks []*logproto
fromDay, throughDay := truncateDay(from), truncateDay(through)
- for day := fromDay; day.Equal(throughDay) || day.Before(throughDay); day = day.Add(Day) {
+ // because through is exclusive, if it's equal to the truncated day, it means it's the start of the day
+ // and we should not include it in the range
+ if through.Equal(throughDay) {
+ throughDay = throughDay.Add(-24 * time.Hour)
+ }
+
+ for day := fromDay; !throughDay.Before(day); day = day.Add(Day) {
minTs, maxTs := model.Latest, model.Earliest
- nextDay := day.Add(Day)
res := make([]*logproto.GroupedChunkRefs, 0, len(seriesWithChunks))
for _, series := range seriesWithChunks {
chunks := series.Refs
- min := sort.Search(len(chunks), func(i int) bool {
- return chunks[i].From >= day
- })
-
- max := sort.Search(len(chunks), func(i int) bool {
- return chunks[i].From >= nextDay
- })
+ var relevantChunks []*logproto.ShortRef
+ minTs, maxTs, relevantChunks = overlappingChunks(day, day.Add(Day), minTs, maxTs, chunks)
- // All chunks fall outside of the range
- if min == len(chunks) || max == 0 || min == max {
+ if len(relevantChunks) == 0 {
continue
}
- if chunks[min].From < minTs {
- minTs = chunks[min].From
- }
- if chunks[max-1].Through > maxTs {
- maxTs = chunks[max-1].Through
- }
-
res = append(res, &logproto.GroupedChunkRefs{
Fingerprint: series.Fingerprint,
Tenant: series.Tenant,
- Refs: chunks[min:max],
+ Refs: relevantChunks,
})
}
@@ -152,3 +145,28 @@ func partitionSeriesByDay(from, through model.Time, seriesWithChunks []*logproto
return result
}
+
+func overlappingChunks(from, through, minTs, maxTs model.Time, chunks []*logproto.ShortRef) (model.Time, model.Time, []*logproto.ShortRef) {
+
+	// Chunks are ordered by `From`, so we can disregard all chunks
+	// that start after the search range ends.
+ maxIdx := sort.Search(len(chunks), func(i int) bool {
+ return chunks[i].From > through
+ })
+
+ res := make([]*logproto.ShortRef, 0, len(chunks[:maxIdx]))
+
+ for _, chunk := range chunks[:maxIdx] {
+ // if chunk ends before the search range starts, skip
+ if from.After(chunk.Through) {
+ continue
+ }
+
+ // Bound min & max ranges to the search range
+ minTs = max(min(minTs, chunk.From), from)
+ maxTs = min(max(maxTs, chunk.Through), through)
+ res = append(res, chunk)
+ }
+
+ return minTs, maxTs, res
+}
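The new `overlappingChunks` helper depends on the refs being sorted by `From`: a single `sort.Search` cuts off every chunk that starts after the range ends, and a linear pass drops chunks that end before the range starts. A stdlib-only sketch of the same two-step shape (the `ref` type and `overlapping` function are hypothetical stand-ins, not part of the patch):

```go
package main

import (
	"fmt"
	"sort"
)

// ref is a stand-in for logproto.ShortRef: a chunk's time span.
type ref struct{ From, Through int64 }

// overlapping returns the refs whose [From, Through] span intersects
// [from, through], assuming refs are sorted by From. sort.Search finds
// the first ref that starts strictly after the range ends; everything
// past that index can be disregarded. The remaining refs only need a
// check that they do not end before the range starts.
func overlapping(from, through int64, refs []ref) []ref {
	maxIdx := sort.Search(len(refs), func(i int) bool {
		return refs[i].From > through
	})
	out := make([]ref, 0, maxIdx)
	for _, r := range refs[:maxIdx] {
		if from > r.Through { // ends before the range starts: skip
			continue
		}
		out = append(out, r)
	}
	return out
}

func main() {
	refs := []ref{{0, 2}, {3, 5}, {6, 8}, {10, 12}, {14, 16}}
	// The first four refs overlap [0, 10]; {14, 16} starts after it ends.
	fmt.Println(overlapping(0, 10, refs))
}
```

Note the asymmetry: `From > through` is a strict comparison, so a chunk that starts exactly when the range ends still counts as overlapping, matching the inclusive bounds used in the patch's test cases.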
diff --git a/pkg/bloomgateway/util_test.go b/pkg/bloomgateway/util_test.go
index a3f219c326efd..849f3a30bbfc0 100644
--- a/pkg/bloomgateway/util_test.go
+++ b/pkg/bloomgateway/util_test.go
@@ -201,7 +201,7 @@ func TestPartitionRequest(t *testing.T) {
{
Fingerprint: 0x00,
Refs: []*logproto.ShortRef{
- {From: ts.Add(-13 * time.Hour), Through: ts.Add(-12 * time.Hour)},
+ {From: ts.Add(-14 * time.Hour), Through: ts.Add(-13 * time.Hour)},
{From: ts.Add(13 * time.Hour), Through: ts.Add(14 * time.Hour)},
},
},
@@ -306,35 +306,69 @@ func TestPartitionRequest(t *testing.T) {
{
Fingerprint: 0x00,
Refs: []*logproto.ShortRef{
- {From: ts.Add(-14 * time.Hour), Through: ts.Add(-13 * time.Hour)},
- {From: ts.Add(-13 * time.Hour), Through: ts.Add(-11 * time.Hour)},
- {From: ts.Add(-11 * time.Hour), Through: ts.Add(-10 * time.Hour)},
+ {From: ts.Add(-14 * time.Hour), Through: ts.Add(-13 * time.Hour)}, // previous day
+ {From: ts.Add(-13 * time.Hour), Through: ts.Add(-11 * time.Hour)}, // previous & target day
+ {From: ts.Add(-11 * time.Hour), Through: ts.Add(-10 * time.Hour)}, // target day
},
},
},
},
exp: []seriesWithInterval{
+ // previous day
{
- interval: bloomshipper.Interval{Start: ts.Add(-14 * time.Hour), End: ts.Add(-11 * time.Hour)},
+ interval: bloomshipper.Interval{Start: ts.Add(-14 * time.Hour), End: ts.Add(-12 * time.Hour)},
day: config.NewDayTime(mktime("2024-01-23 00:00")),
series: []*logproto.GroupedChunkRefs{
{
Fingerprint: 0x00,
Refs: []*logproto.ShortRef{
- {From: ts.Add(-14 * time.Hour), Through: ts.Add(-13 * time.Hour)},
- {From: ts.Add(-13 * time.Hour), Through: ts.Add(-11 * time.Hour)},
+ {From: ts.Add(-14 * time.Hour), Through: ts.Add(-13 * time.Hour)}, // previous day
+ {From: ts.Add(-13 * time.Hour), Through: ts.Add(-11 * time.Hour)}, // previous & target day
+ },
+ },
+ },
+ },
+ // target day
+ {
+ interval: bloomshipper.Interval{Start: ts.Add(-12 * time.Hour), End: ts.Add(-10 * time.Hour)},
+ day: config.NewDayTime(mktime("2024-01-24 00:00")),
+ series: []*logproto.GroupedChunkRefs{
+ {
+ Fingerprint: 0x00,
+ Refs: []*logproto.ShortRef{
+ {From: ts.Add(-13 * time.Hour), Through: ts.Add(-11 * time.Hour)}, // previous & target day
+ {From: ts.Add(-11 * time.Hour), Through: ts.Add(-10 * time.Hour)}, // target day
},
},
},
},
+ },
+ },
+
+ "through target day inclusion": {
+ inp: &logproto.FilterChunkRefRequest{
+			// Search only the target day, but ensure chunks whose Through (but not From)
+			// falls on the target day are included
+ From: ts.Add(-1 * time.Hour),
+ Through: ts,
+ Refs: []*logproto.GroupedChunkRefs{
+ {
+ Fingerprint: 0x00,
+ Refs: []*logproto.ShortRef{
+ {From: ts.Add(-13 * time.Hour), Through: ts.Add(-1 * time.Hour)}, // previous & target day
+ },
+ },
+ },
+ },
+ exp: []seriesWithInterval{
{
- interval: bloomshipper.Interval{Start: ts.Add(-11 * time.Hour), End: ts.Add(-10 * time.Hour)},
+ interval: bloomshipper.Interval{Start: ts.Add(-12 * time.Hour), End: ts.Add(-1 * time.Hour)},
day: config.NewDayTime(mktime("2024-01-24 00:00")),
series: []*logproto.GroupedChunkRefs{
{
Fingerprint: 0x00,
Refs: []*logproto.ShortRef{
- {From: ts.Add(-11 * time.Hour), Through: ts.Add(-10 * time.Hour)},
+ {From: ts.Add(-13 * time.Hour), Through: ts.Add(-1 * time.Hour)}, // inherited from the chunk
},
},
},
@@ -358,13 +392,13 @@ func TestPartitionRequest(t *testing.T) {
}
}
-func createBlocks(t *testing.T, tenant string, n int, from, through model.Time, minFp, maxFp model.Fingerprint) ([]bloomshipper.BlockRef, []bloomshipper.Meta, []*bloomshipper.CloseableBlockQuerier, [][]v1.SeriesWithBloom) {
+func createBlocks(t *testing.T, tenant string, n int, from, through model.Time, minFp, maxFp model.Fingerprint) ([]bloomshipper.BlockRef, []bloomshipper.Meta, []*bloomshipper.CloseableBlockQuerier, [][]v1.SeriesWithBlooms) {
t.Helper()
blockRefs := make([]bloomshipper.BlockRef, 0, n)
metas := make([]bloomshipper.Meta, 0, n)
queriers := make([]*bloomshipper.CloseableBlockQuerier, 0, n)
- series := make([][]v1.SeriesWithBloom, 0, n)
+ series := make([][]v1.SeriesWithBlooms, 0, n)
step := (maxFp - minFp) / model.Fingerprint(n)
for i := 0; i < n; i++ {
@@ -410,7 +444,7 @@ func createBlocks(t *testing.T, tenant string, n int, from, through model.Time,
return blockRefs, metas, queriers, series
}
-func createQueryInputFromBlockData(t *testing.T, tenant string, data [][]v1.SeriesWithBloom, nthSeries int) []*logproto.ChunkRef {
+func createQueryInputFromBlockData(t *testing.T, tenant string, data [][]v1.SeriesWithBlooms, nthSeries int) []*logproto.ChunkRef {
t.Helper()
n := 0
res := make([]*logproto.ChunkRef, 0)
@@ -449,3 +483,78 @@ func createBlockRefsFromBlockData(t *testing.T, tenant string, data []*bloomship
}
return res
}
+
+func TestOverlappingChunks(t *testing.T) {
+ mkRef := func(from, through model.Time) *logproto.ShortRef {
+ return &logproto.ShortRef{From: from, Through: through}
+ }
+
+ for _, tc := range []struct {
+ desc string
+ from, through model.Time
+ input []*logproto.ShortRef
+ exp []*logproto.ShortRef
+ expMin, expMax model.Time
+ }{
+ {
+ desc: "simple ordered",
+ from: 0, through: 10,
+ input: []*logproto.ShortRef{
+ mkRef(0, 2),
+ mkRef(3, 5),
+ mkRef(6, 8),
+ mkRef(10, 12),
+ mkRef(14, 16),
+ },
+ exp: []*logproto.ShortRef{
+ mkRef(0, 2),
+ mkRef(3, 5),
+ mkRef(6, 8),
+ mkRef(10, 12),
+ },
+ expMin: 0, expMax: 10,
+ },
+ {
+			desc: "ref Through timestamps aren't in monotonic order",
+ from: 0, through: 10,
+ input: []*logproto.ShortRef{
+ mkRef(0, 2),
+ mkRef(3, 5),
+ mkRef(6, 8),
+ mkRef(10, 12),
+ mkRef(14, 16),
+ },
+ exp: []*logproto.ShortRef{
+ mkRef(0, 2),
+ mkRef(3, 5),
+ mkRef(6, 8),
+ mkRef(10, 12),
+ },
+ expMin: 0, expMax: 10,
+ },
+ {
+ desc: "expMin & expMax are within from/through",
+ from: 10, through: 20,
+ input: []*logproto.ShortRef{
+ mkRef(0, 2),
+ mkRef(3, 5),
+ mkRef(6, 8),
+ mkRef(14, 16),
+ mkRef(17, 19),
+ mkRef(21, 30),
+ },
+ exp: []*logproto.ShortRef{
+ mkRef(14, 16),
+ mkRef(17, 19),
+ },
+ expMin: 14, expMax: 19,
+ },
+ } {
+ t.Run(tc.desc, func(t *testing.T) {
+ minTs, maxTs, got := overlappingChunks(tc.from, tc.through, model.Latest, model.Earliest, tc.input)
+ require.Equal(t, tc.expMin, minTs)
+ require.Equal(t, tc.expMax, maxTs)
+ require.Equal(t, tc.exp, got)
+ })
+ }
+}
diff --git a/pkg/distributor/http.go b/pkg/distributor/http.go
index 00c3ba53a2806..ec0660b91bc01 100644
--- a/pkg/distributor/http.go
+++ b/pkg/distributor/http.go
@@ -23,7 +23,26 @@ func (d *Distributor) PushHandler(w http.ResponseWriter, r *http.Request) {
}
func (d *Distributor) OTLPPushHandler(w http.ResponseWriter, r *http.Request) {
- d.pushHandler(w, r, push.ParseOTLPRequest)
+ interceptor := newOtelErrorHeaderInterceptor(w)
+ d.pushHandler(interceptor, r, push.ParseOTLPRequest)
+}
+
+// otelErrorHeaderInterceptor maps 500 errors to 503.
+// According to the OTLP specification, clients never retry 500 errors, but they do retry 503 errors.
+type otelErrorHeaderInterceptor struct {
+ http.ResponseWriter
+}
+
+func newOtelErrorHeaderInterceptor(w http.ResponseWriter) *otelErrorHeaderInterceptor {
+ return &otelErrorHeaderInterceptor{ResponseWriter: w}
+}
+
+func (i *otelErrorHeaderInterceptor) WriteHeader(statusCode int) {
+ if statusCode == http.StatusInternalServerError {
+ statusCode = http.StatusServiceUnavailable
+ }
+
+ i.ResponseWriter.WriteHeader(statusCode)
}
func (d *Distributor) pushHandler(w http.ResponseWriter, r *http.Request, pushRequestParser push.RequestParser) {
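The interceptor works because `http.ResponseWriter` is an interface: embedding the original writer and overriding only `WriteHeader` lets every other call (headers, body writes) pass through unchanged. A minimal, self-contained sketch of the same pattern, exercised through `httptest` (the `statusRewriter` name and `rewrittenStatus` helper are illustrative, not from the patch):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// statusRewriter embeds http.ResponseWriter and rewrites 500 to 503,
// mirroring the interceptor added in this patch: OTLP clients retry
// on 503 but treat 500 as permanent, so remapping makes transient
// server-side failures retryable.
type statusRewriter struct {
	http.ResponseWriter
}

func (s *statusRewriter) WriteHeader(code int) {
	if code == http.StatusInternalServerError {
		code = http.StatusServiceUnavailable
	}
	s.ResponseWriter.WriteHeader(code)
}

// rewrittenStatus sends an error through the wrapper and reports the
// status code the client would observe.
func rewrittenStatus(code int) int {
	rec := httptest.NewRecorder()
	http.Error(&statusRewriter{ResponseWriter: rec}, "error", code)
	return rec.Code
}

func main() {
	fmt.Println(rewrittenStatus(http.StatusInternalServerError)) // prints 503
	fmt.Println(rewrittenStatus(http.StatusBadRequest))          // prints 400
}
```

Because `http.Error` ultimately calls `WriteHeader` on whatever writer it is handed, the remap applies to every handler wrapped this way without touching the handler's own logic.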
diff --git a/pkg/distributor/http_test.go b/pkg/distributor/http_test.go
index 0ecf70fa9a498..b6281b81bf3d7 100644
--- a/pkg/distributor/http_test.go
+++ b/pkg/distributor/http_test.go
@@ -82,6 +82,38 @@ func TestRequestParserWrapping(t *testing.T) {
require.True(t, called)
}
+func Test_OtelErrorHeaderInterceptor(t *testing.T) {
+ for _, tc := range []struct {
+ name string
+ inputCode int
+ expectedCode int
+ }{
+ {
+ name: "500",
+ inputCode: http.StatusInternalServerError,
+ expectedCode: http.StatusServiceUnavailable,
+ },
+ {
+ name: "400",
+ inputCode: http.StatusBadRequest,
+ expectedCode: http.StatusBadRequest,
+ },
+ {
+ name: "204",
+ inputCode: http.StatusNoContent,
+ expectedCode: http.StatusNoContent,
+ },
+ } {
+ t.Run(tc.name, func(t *testing.T) {
+ r := httptest.NewRecorder()
+ i := newOtelErrorHeaderInterceptor(r)
+
+ http.Error(i, "error", tc.inputCode)
+ require.Equal(t, tc.expectedCode, r.Code)
+ })
+ }
+}
+
func stubParser(_ string, _ *http.Request, _ push.TenantsRetention, _ push.Limits, _ push.UsageTracker) (*logproto.PushRequest, *push.Stats, error) {
return &logproto.PushRequest{}, &push.Stats{}, nil
}
diff --git a/pkg/ingester/flush.go b/pkg/ingester/flush.go
index 00aad05475495..81407abcb2e25 100644
--- a/pkg/ingester/flush.go
+++ b/pkg/ingester/flush.go
@@ -7,7 +7,9 @@ import (
"sync"
"time"
+ "github.com/go-kit/log"
"github.com/go-kit/log/level"
+ "github.com/grafana/dskit/backoff"
"github.com/grafana/dskit/ring"
"github.com/grafana/dskit/user"
"github.com/prometheus/client_golang/prometheus"
@@ -135,8 +137,9 @@ func (i *Ingester) sweepStream(instance *instance, stream *stream, immediate boo
}
func (i *Ingester) flushLoop(j int) {
+ l := log.With(i.logger, "loop", j)
defer func() {
- level.Debug(i.logger).Log("msg", "Ingester.flushLoop() exited")
+ level.Debug(l).Log("msg", "Ingester.flushLoop() exited")
i.flushQueuesDone.Done()
}()
@@ -147,9 +150,10 @@ func (i *Ingester) flushLoop(j int) {
}
op := o.(*flushOp)
- err := i.flushUserSeries(op.userID, op.fp, op.immediate)
+ m := util_log.WithUserID(op.userID, l)
+ err := i.flushOp(m, op)
if err != nil {
- level.Error(util_log.WithUserID(op.userID, i.logger)).Log("msg", "failed to flush", "err", err)
+ level.Error(m).Log("msg", "failed to flush", "err", err)
}
// If we're exiting & we failed to flush, put the failed operation
@@ -161,7 +165,23 @@ func (i *Ingester) flushLoop(j int) {
}
}
-func (i *Ingester) flushUserSeries(userID string, fp model.Fingerprint, immediate bool) error {
+func (i *Ingester) flushOp(l log.Logger, op *flushOp) error {
+ ctx, cancelFunc := context.WithCancel(context.Background())
+ defer cancelFunc()
+
+ b := backoff.New(ctx, i.cfg.FlushOpBackoff)
+ for b.Ongoing() {
+ err := i.flushUserSeries(ctx, op.userID, op.fp, op.immediate)
+ if err == nil {
+ break
+ }
+ level.Error(l).Log("msg", "failed to flush", "retries", b.NumRetries(), "err", err)
+ b.Wait()
+ }
+ return b.Err()
+}
+
+func (i *Ingester) flushUserSeries(ctx context.Context, userID string, fp model.Fingerprint, immediate bool) error {
instance, ok := i.getInstanceByID(userID)
if !ok {
return nil
@@ -175,9 +195,9 @@ func (i *Ingester) flushUserSeries(userID string, fp model.Fingerprint, immediat
lbs := labels.String()
level.Info(i.logger).Log("msg", "flushing stream", "user", userID, "fp", fp, "immediate", immediate, "num_chunks", len(chunks), "labels", lbs)
- ctx := user.InjectOrgID(context.Background(), userID)
- ctx, cancel := context.WithTimeout(ctx, i.cfg.FlushOpTimeout)
- defer cancel()
+ ctx = user.InjectOrgID(ctx, userID)
+ ctx, cancelFunc := context.WithTimeout(ctx, i.cfg.FlushOpTimeout)
+ defer cancelFunc()
err := i.flushChunks(ctx, fp, labels, chunks, chunkMtx)
if err != nil {
return fmt.Errorf("failed to flush chunks: %w, num_chunks: %d, labels: %s", err, len(chunks), lbs)
diff --git a/pkg/ingester/flush_test.go b/pkg/ingester/flush_test.go
index 6fd52bafa066f..edd6084a2741b 100644
--- a/pkg/ingester/flush_test.go
+++ b/pkg/ingester/flush_test.go
@@ -1,6 +1,7 @@
package ingester
import (
+ "errors"
"fmt"
"os"
"sort"
@@ -102,6 +103,67 @@ func Benchmark_FlushLoop(b *testing.B) {
}
}
+func Test_FlushOp(t *testing.T) {
+ t.Run("no error", func(t *testing.T) {
+ cfg := defaultIngesterTestConfig(t)
+ cfg.FlushOpBackoff.MinBackoff = time.Second
+ cfg.FlushOpBackoff.MaxBackoff = 10 * time.Second
+ cfg.FlushOpBackoff.MaxRetries = 1
+ cfg.FlushCheckPeriod = 100 * time.Millisecond
+
+ _, ing := newTestStore(t, cfg, nil)
+
+ ctx := user.InjectOrgID(context.Background(), "foo")
+ ins, err := ing.GetOrCreateInstance("foo")
+ require.NoError(t, err)
+
+ lbs := makeRandomLabels()
+ req := &logproto.PushRequest{Streams: []logproto.Stream{{
+ Labels: lbs.String(),
+ Entries: entries(5, time.Now()),
+ }}}
+ require.NoError(t, ins.Push(ctx, req))
+
+ time.Sleep(cfg.FlushCheckPeriod)
+ require.NoError(t, ing.flushOp(gokitlog.NewNopLogger(), &flushOp{
+ immediate: true,
+ userID: "foo",
+ fp: ins.getHashForLabels(lbs),
+ }))
+ })
+
+ t.Run("max retries exceeded", func(t *testing.T) {
+ cfg := defaultIngesterTestConfig(t)
+ cfg.FlushOpBackoff.MinBackoff = time.Second
+ cfg.FlushOpBackoff.MaxBackoff = 10 * time.Second
+ cfg.FlushOpBackoff.MaxRetries = 1
+ cfg.FlushCheckPeriod = 100 * time.Millisecond
+
+ store, ing := newTestStore(t, cfg, nil)
+ store.onPut = func(_ context.Context, _ []chunk.Chunk) error {
+ return errors.New("failed to write chunks")
+ }
+
+ ctx := user.InjectOrgID(context.Background(), "foo")
+ ins, err := ing.GetOrCreateInstance("foo")
+ require.NoError(t, err)
+
+ lbs := makeRandomLabels()
+ req := &logproto.PushRequest{Streams: []logproto.Stream{{
+ Labels: lbs.String(),
+ Entries: entries(5, time.Now()),
+ }}}
+ require.NoError(t, ins.Push(ctx, req))
+
+ time.Sleep(cfg.FlushCheckPeriod)
+ require.EqualError(t, ing.flushOp(gokitlog.NewNopLogger(), &flushOp{
+ immediate: true,
+ userID: "foo",
+ fp: ins.getHashForLabels(lbs),
+ }), "terminated after 1 retries")
+ })
+}
+
func Test_Flush(t *testing.T) {
var (
store, ing = newTestStore(t, defaultIngesterTestConfig(t), nil)
@@ -297,6 +359,10 @@ func defaultIngesterTestConfig(t testing.TB) Config {
cfg := Config{}
flagext.DefaultValues(&cfg)
+ cfg.FlushOpBackoff.MinBackoff = 100 * time.Millisecond
+ cfg.FlushOpBackoff.MaxBackoff = 10 * time.Second
+ cfg.FlushOpBackoff.MaxRetries = 1
+ cfg.FlushOpTimeout = 15 * time.Second
cfg.FlushCheckPeriod = 99999 * time.Hour
cfg.MaxChunkIdle = 99999 * time.Hour
cfg.ConcurrentFlushes = 1
diff --git a/pkg/ingester/ingester.go b/pkg/ingester/ingester.go
index 41b358906e0a1..1a89aebe6ef9f 100644
--- a/pkg/ingester/ingester.go
+++ b/pkg/ingester/ingester.go
@@ -21,6 +21,7 @@ import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
+ "github.com/grafana/dskit/backoff"
"github.com/grafana/dskit/concurrency"
"github.com/grafana/dskit/modules"
"github.com/grafana/dskit/multierror"
@@ -34,6 +35,8 @@ import (
"github.com/prometheus/prometheus/model/labels"
"google.golang.org/grpc/health/grpc_health_v1"
+ server_util "github.com/grafana/loki/v3/pkg/util/server"
+
"github.com/grafana/loki/v3/pkg/analytics"
"github.com/grafana/loki/v3/pkg/chunkenc"
"github.com/grafana/loki/v3/pkg/distributor/writefailures"
@@ -82,6 +85,7 @@ type Config struct {
ConcurrentFlushes int `yaml:"concurrent_flushes"`
FlushCheckPeriod time.Duration `yaml:"flush_check_period"`
+ FlushOpBackoff backoff.Config `yaml:"flush_op_backoff"`
FlushOpTimeout time.Duration `yaml:"flush_op_timeout"`
RetainPeriod time.Duration `yaml:"chunk_retain_period"`
MaxChunkIdle time.Duration `yaml:"chunk_idle_period"`
@@ -127,7 +131,10 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
f.IntVar(&cfg.ConcurrentFlushes, "ingester.concurrent-flushes", 32, "How many flushes can happen concurrently from each stream.")
f.DurationVar(&cfg.FlushCheckPeriod, "ingester.flush-check-period", 30*time.Second, "How often should the ingester see if there are any blocks to flush. The first flush check is delayed by a random time up to 0.8x the flush check period. Additionally, there is +/- 1% jitter added to the interval.")
- f.DurationVar(&cfg.FlushOpTimeout, "ingester.flush-op-timeout", 10*time.Minute, "The timeout before a flush is cancelled.")
+ f.DurationVar(&cfg.FlushOpBackoff.MinBackoff, "ingester.flush-op-backoff-min-period", 10*time.Second, "Minimum backoff period when a flush fails. Each concurrent flush has its own backoff, see `ingester.concurrent-flushes`.")
+ f.DurationVar(&cfg.FlushOpBackoff.MaxBackoff, "ingester.flush-op-backoff-max-period", time.Minute, "Maximum backoff period when a flush fails. Each concurrent flush has its own backoff, see `ingester.concurrent-flushes`.")
+ f.IntVar(&cfg.FlushOpBackoff.MaxRetries, "ingester.flush-op-backoff-retries", 10, "Maximum retries for failed flushes.")
+ f.DurationVar(&cfg.FlushOpTimeout, "ingester.flush-op-timeout", 10*time.Minute, "The timeout for an individual flush. Will be retried up to `flush-op-backoff-retries` times.")
f.DurationVar(&cfg.RetainPeriod, "ingester.chunks-retain-period", 0, "How long chunks should be retained in-memory after they've been flushed.")
f.DurationVar(&cfg.MaxChunkIdle, "ingester.chunks-idle-period", 30*time.Minute, "How long chunks should sit in-memory with no updates before being flushed if they don't hit the max block size. This means that half-empty chunks will still be flushed after a certain period as long as they receive no further activity.")
f.IntVar(&cfg.BlockSize, "ingester.chunks-block-size", 256*1024, "The targeted _uncompressed_ size in bytes of a chunk block When this threshold is exceeded the head block will be cut and compressed inside the chunk.")
@@ -155,6 +162,15 @@ func (cfg *Config) Validate() error {
return err
}
+ if cfg.FlushOpBackoff.MinBackoff > cfg.FlushOpBackoff.MaxBackoff {
+ return errors.New("invalid flush op min backoff: cannot be larger than max backoff")
+ }
+ if cfg.FlushOpBackoff.MaxRetries <= 0 {
+ return fmt.Errorf("invalid flush op max retries: %d", cfg.FlushOpBackoff.MaxRetries)
+ }
+ if cfg.FlushOpTimeout <= 0 {
+ return fmt.Errorf("invalid flush op timeout: %s", cfg.FlushOpTimeout)
+ }
if cfg.IndexShards <= 0 {
return fmt.Errorf("invalid ingester index shard factor: %d", cfg.IndexShards)
}
@@ -1041,6 +1057,13 @@ func (i *Ingester) asyncStoreMaxLookBack() time.Duration {
// GetChunkIDs is meant to be used only when using an async store like boltdb-shipper or tsdb.
func (i *Ingester) GetChunkIDs(ctx context.Context, req *logproto.GetChunkIDsRequest) (*logproto.GetChunkIDsResponse, error) {
+ gcr, err := i.getChunkIDs(ctx, req)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return gcr, err
+}
+
+// getChunkIDs is meant to be used only when using an async store like boltdb-shipper or tsdb.
+func (i *Ingester) getChunkIDs(ctx context.Context, req *logproto.GetChunkIDsRequest) (*logproto.GetChunkIDsResponse, error) {
orgID, err := tenant.TenantID(ctx)
if err != nil {
return nil, err
@@ -1168,6 +1191,12 @@ func (i *Ingester) Label(ctx context.Context, req *logproto.LabelRequest) (*logp
// Series queries the ingester for log stream identifiers (label sets) matching a set of matchers
func (i *Ingester) Series(ctx context.Context, req *logproto.SeriesRequest) (*logproto.SeriesResponse, error) {
+ sr, err := i.series(ctx, req)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return sr, err
+}
+
+func (i *Ingester) series(ctx context.Context, req *logproto.SeriesRequest) (*logproto.SeriesResponse, error) {
instanceID, err := tenant.TenantID(ctx)
if err != nil {
return nil, err
@@ -1331,6 +1360,11 @@ func (i *Ingester) getInstances() []*instance {
// Tail logs matching given query
func (i *Ingester) Tail(req *logproto.TailRequest, queryServer logproto.Querier_TailServer) error {
+ err := i.tail(req, queryServer)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return err
+}
+
+func (i *Ingester) tail(req *logproto.TailRequest, queryServer logproto.Querier_TailServer) error {
select {
case <-i.tailersQuit:
return errors.New("Ingester is stopping")
@@ -1376,6 +1410,12 @@ func (i *Ingester) Tail(req *logproto.TailRequest, queryServer logproto.Querier_
// TailersCount returns count of active tail requests from a user
func (i *Ingester) TailersCount(ctx context.Context, _ *logproto.TailersCountRequest) (*logproto.TailersCountResponse, error) {
+ tcr, err := i.tailersCount(ctx)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return tcr, err
+}
+
+func (i *Ingester) tailersCount(ctx context.Context) (*logproto.TailersCountResponse, error) {
instanceID, err := tenant.TenantID(ctx)
if err != nil {
return nil, err
@@ -1431,6 +1471,12 @@ func (i *Ingester) GetDetectedFields(_ context.Context, r *logproto.DetectedFiel
// GetDetectedLabels returns map of detected labels and unique values from this ingester
func (i *Ingester) GetDetectedLabels(ctx context.Context, req *logproto.DetectedLabelsRequest) (*logproto.LabelToValuesResponse, error) {
+ lvr, err := i.getDetectedLabels(ctx, req)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return lvr, err
+}
+
+func (i *Ingester) getDetectedLabels(ctx context.Context, req *logproto.DetectedLabelsRequest) (*logproto.LabelToValuesResponse, error) {
userID, err := tenant.TenantID(ctx)
if err != nil {
return nil, err
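The ingester changes apply one pattern repeatedly: each exported RPC method becomes a thin wrapper that delegates to an unexported implementation, then passes the error through `server_util.ClientGrpcStatusAndError` exactly once at the API boundary. A minimal sketch of that delegation shape (`mapErr`, `Series`, and `series` here are hypothetical stand-ins; the real helper maps errors to client-facing gRPC status codes):

```go
package main

import (
	"errors"
	"fmt"
)

// mapErr stands in for server_util.ClientGrpcStatusAndError: a single
// choke point that normalizes errors before they cross the RPC
// boundary. This illustrative version just wraps with a marker.
func mapErr(err error) error {
	if err == nil {
		return nil
	}
	return fmt.Errorf("client-facing: %w", err)
}

// Series is the exported entry point. It delegates to the unexported
// implementation and maps the error once, mirroring the shape the
// patch applies to GetChunkIDs, Series, Tail, TailersCount, and others.
func Series(q string) ([]string, error) {
	res, err := series(q)
	return res, mapErr(err)
}

// series holds the actual logic and returns raw internal errors.
func series(q string) ([]string, error) {
	if q == "" {
		return nil, errors.New("empty query")
	}
	return []string{q}, nil
}

func main() {
	_, err := Series("")
	fmt.Println(err)
}
```

The benefit of the split is that internal callers and tests can invoke the unexported method and observe raw errors, while every external caller sees consistently mapped ones.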
diff --git a/pkg/ingester/ingester_test.go b/pkg/ingester/ingester_test.go
index 1c438bd6bf2c0..6bb27ad645cc9 100644
--- a/pkg/ingester/ingester_test.go
+++ b/pkg/ingester/ingester_test.go
@@ -12,6 +12,7 @@ import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
+ "github.com/grafana/dskit/backoff"
"github.com/grafana/dskit/flagext"
"github.com/grafana/dskit/httpgrpc"
"github.com/grafana/dskit/middleware"
@@ -676,57 +677,119 @@ func TestIngester_asyncStoreMaxLookBack(t *testing.T) {
func TestValidate(t *testing.T) {
for i, tc := range []struct {
- in Config
- err bool
- expected Config
+ in Config
+ expected Config
+ expectedErr string
}{
{
in: Config{
- MaxChunkAge: time.Minute,
ChunkEncoding: chunkenc.EncGZIP.String(),
- IndexShards: index.DefaultIndexShards,
+ FlushOpBackoff: backoff.Config{
+ MinBackoff: 100 * time.Millisecond,
+ MaxBackoff: 10 * time.Second,
+ MaxRetries: 1,
+ },
+ FlushOpTimeout: 15 * time.Second,
+ IndexShards: index.DefaultIndexShards,
+ MaxChunkAge: time.Minute,
},
expected: Config{
+ ChunkEncoding: chunkenc.EncGZIP.String(),
+ FlushOpBackoff: backoff.Config{
+ MinBackoff: 100 * time.Millisecond,
+ MaxBackoff: 10 * time.Second,
+ MaxRetries: 1,
+ },
+ FlushOpTimeout: 15 * time.Second,
+ IndexShards: index.DefaultIndexShards,
MaxChunkAge: time.Minute,
- ChunkEncoding: chunkenc.EncGZIP.String(),
parsedEncoding: chunkenc.EncGZIP,
- IndexShards: index.DefaultIndexShards,
},
},
{
in: Config{
ChunkEncoding: chunkenc.EncSnappy.String(),
- IndexShards: index.DefaultIndexShards,
+ FlushOpBackoff: backoff.Config{
+ MinBackoff: 100 * time.Millisecond,
+ MaxBackoff: 10 * time.Second,
+ MaxRetries: 1,
+ },
+ FlushOpTimeout: 15 * time.Second,
+ IndexShards: index.DefaultIndexShards,
},
expected: Config{
- ChunkEncoding: chunkenc.EncSnappy.String(),
- parsedEncoding: chunkenc.EncSnappy,
+ ChunkEncoding: chunkenc.EncSnappy.String(),
+ FlushOpBackoff: backoff.Config{
+ MinBackoff: 100 * time.Millisecond,
+ MaxBackoff: 10 * time.Second,
+ MaxRetries: 1,
+ },
+ FlushOpTimeout: 15 * time.Second,
IndexShards: index.DefaultIndexShards,
+ parsedEncoding: chunkenc.EncSnappy,
},
},
{
in: Config{
- IndexShards: index.DefaultIndexShards,
ChunkEncoding: "bad-enc",
+ FlushOpBackoff: backoff.Config{
+ MinBackoff: 100 * time.Millisecond,
+ MaxBackoff: 10 * time.Second,
+ MaxRetries: 1,
+ },
+ FlushOpTimeout: 15 * time.Second,
+ IndexShards: index.DefaultIndexShards,
+ },
+ expectedErr: "invalid encoding: bad-enc, supported: none, gzip, lz4-64k, snappy, lz4-256k, lz4-1M, lz4, flate, zstd",
+ },
+ {
+ in: Config{
+ ChunkEncoding: chunkenc.EncGZIP.String(),
+ FlushOpBackoff: backoff.Config{
+ MinBackoff: 100 * time.Millisecond,
+ MaxBackoff: 10 * time.Second,
+ },
+ FlushOpTimeout: 15 * time.Second,
+ IndexShards: index.DefaultIndexShards,
+ MaxChunkAge: time.Minute,
+ },
+ expectedErr: "invalid flush op max retries: 0",
+ },
+ {
+ in: Config{
+ ChunkEncoding: chunkenc.EncGZIP.String(),
+ FlushOpBackoff: backoff.Config{
+ MinBackoff: 100 * time.Millisecond,
+ MaxBackoff: 10 * time.Second,
+ MaxRetries: 1,
+ },
+ IndexShards: index.DefaultIndexShards,
+ MaxChunkAge: time.Minute,
},
- err: true,
+ expectedErr: "invalid flush op timeout: 0s",
},
{
in: Config{
- MaxChunkAge: time.Minute,
ChunkEncoding: chunkenc.EncGZIP.String(),
+ FlushOpBackoff: backoff.Config{
+ MinBackoff: 100 * time.Millisecond,
+ MaxBackoff: 10 * time.Second,
+ MaxRetries: 1,
+ },
+ FlushOpTimeout: 15 * time.Second,
+ MaxChunkAge: time.Minute,
},
- err: true,
+ expectedErr: "invalid ingester index shard factor: 0",
},
} {
t.Run(fmt.Sprint(i), func(t *testing.T) {
err := tc.in.Validate()
- if tc.err {
- require.NotNil(t, err)
- return
+ if tc.expectedErr != "" {
+ require.EqualError(t, err, tc.expectedErr)
+ } else {
+ require.NoError(t, err)
+ require.Equal(t, tc.expected, tc.in)
}
- require.Nil(t, err)
- require.Equal(t, tc.expected, tc.in)
})
}
}
diff --git a/pkg/ingester/instance.go b/pkg/ingester/instance.go
index 7f1ec78601fff..ecef3f10347b8 100644
--- a/pkg/ingester/instance.go
+++ b/pkg/ingester/instance.go
@@ -49,6 +49,7 @@ import (
"github.com/grafana/loki/v3/pkg/util/deletion"
util_log "github.com/grafana/loki/v3/pkg/util/log"
mathutil "github.com/grafana/loki/v3/pkg/util/math"
+ server_util "github.com/grafana/loki/v3/pkg/util/server"
"github.com/grafana/loki/v3/pkg/validation"
)
@@ -441,6 +442,12 @@ func (i *instance) getLabelsFromFingerprint(fp model.Fingerprint) labels.Labels
}
func (i *instance) Query(ctx context.Context, req logql.SelectLogParams) (iter.EntryIterator, error) {
+ it, err := i.query(ctx, req)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return it, err
+}
+
+func (i *instance) query(ctx context.Context, req logql.SelectLogParams) (iter.EntryIterator, error) {
expr, err := req.LogSelector()
if err != nil {
return nil, err
@@ -495,6 +502,12 @@ func (i *instance) Query(ctx context.Context, req logql.SelectLogParams) (iter.E
}
func (i *instance) QuerySample(ctx context.Context, req logql.SelectSampleParams) (iter.SampleIterator, error) {
+ it, err := i.querySample(ctx, req)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return it, err
+}
+
+func (i *instance) querySample(ctx context.Context, req logql.SelectSampleParams) (iter.SampleIterator, error) {
expr, err := req.Expr()
if err != nil {
return nil, err
@@ -556,6 +569,12 @@ func (i *instance) QuerySample(ctx context.Context, req logql.SelectSampleParams
// If label matchers are given only the matching streams are fetched from the index.
// The label names or values are then retrieved from those matching streams.
func (i *instance) Label(ctx context.Context, req *logproto.LabelRequest, matchers ...*labels.Matcher) (*logproto.LabelResponse, error) {
+ lr, err := i.label(ctx, req, matchers...)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return lr, err
+}
+
+func (i *instance) label(ctx context.Context, req *logproto.LabelRequest, matchers ...*labels.Matcher) (*logproto.LabelResponse, error) {
if len(matchers) == 0 {
var labels []string
if req.Values {
@@ -709,6 +728,12 @@ func (i *instance) Series(ctx context.Context, req *logproto.SeriesRequest) (*lo
}
func (i *instance) GetStats(ctx context.Context, req *logproto.IndexStatsRequest) (*logproto.IndexStatsResponse, error) {
+ isr, err := i.getStats(ctx, req)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return isr, err
+}
+
+func (i *instance) getStats(ctx context.Context, req *logproto.IndexStatsRequest) (*logproto.IndexStatsResponse, error) {
matchers, err := syntax.ParseMatchers(req.Matchers, true)
if err != nil {
return nil, err
@@ -765,6 +790,12 @@ func (i *instance) GetStats(ctx context.Context, req *logproto.IndexStatsRequest
}
func (i *instance) GetVolume(ctx context.Context, req *logproto.VolumeRequest) (*logproto.VolumeResponse, error) {
+ vr, err := i.getVolume(ctx, req)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return vr, err
+}
+
+func (i *instance) getVolume(ctx context.Context, req *logproto.VolumeRequest) (*logproto.VolumeResponse, error) {
matchers, err := syntax.ParseMatchers(req.Matchers, true)
if err != nil && req.Matchers != seriesvolume.MatchAny {
return nil, err
diff --git a/pkg/ingester/instance_test.go b/pkg/ingester/instance_test.go
index 7f7dc30361d6a..3055a7fb0c5b7 100644
--- a/pkg/ingester/instance_test.go
+++ b/pkg/ingester/instance_test.go
@@ -18,6 +18,7 @@ import (
"github.com/grafana/loki/v3/pkg/logql/log"
+ "github.com/grafana/dskit/backoff"
"github.com/grafana/dskit/flagext"
"github.com/pkg/errors"
"github.com/prometheus/common/model"
@@ -40,9 +41,15 @@ import (
func defaultConfig() *Config {
cfg := Config{
- BlockSize: 512,
- ChunkEncoding: "gzip",
- IndexShards: 32,
+ BlockSize: 512,
+ ChunkEncoding: "gzip",
+ IndexShards: 32,
+ FlushOpTimeout: 15 * time.Second,
+ FlushOpBackoff: backoff.Config{
+ MinBackoff: 100 * time.Millisecond,
+ MaxBackoff: 10 * time.Second,
+ MaxRetries: 1,
+ },
}
if err := cfg.Validate(); err != nil {
panic(errors.Wrap(err, "error building default test config"))
diff --git a/pkg/logcli/output/loki.go b/pkg/logcli/output/loki.go
new file mode 100644
index 0000000000000..ad89311bbcb34
--- /dev/null
+++ b/pkg/logcli/output/loki.go
@@ -0,0 +1 @@
+package output
diff --git a/pkg/logproto/compat.go b/pkg/logproto/compat.go
index a11467584b58f..4a296fd8e43b6 100644
--- a/pkg/logproto/compat.go
+++ b/pkg/logproto/compat.go
@@ -506,6 +506,33 @@ func (m *ShardsRequest) LogToSpan(sp opentracing.Span) {
sp.LogFields(fields...)
}
+func (m *DetectedFieldsRequest) GetCachingOptions() (res definitions.CachingOptions) { return }
+
+func (m *DetectedFieldsRequest) WithStartEnd(start, end time.Time) definitions.Request {
+ clone := *m
+ clone.Start = start
+ clone.End = end
+ return &clone
+}
+
+func (m *DetectedFieldsRequest) WithQuery(query string) definitions.Request {
+ clone := *m
+ clone.Query = query
+ return &clone
+}
+
+func (m *DetectedFieldsRequest) LogToSpan(sp opentracing.Span) {
+ fields := []otlog.Field{
+ otlog.String("query", m.GetQuery()),
+ otlog.String("start", m.Start.String()),
+ otlog.String("end", m.End.String()),
+ otlog.String("step", time.Duration(m.Step).String()),
+ otlog.String("field_limit", fmt.Sprintf("%d", m.FieldLimit)),
+ otlog.String("line_limit", fmt.Sprintf("%d", m.LineLimit)),
+ }
+ sp.LogFields(fields...)
+}
+
func (m *QueryPatternsRequest) GetCachingOptions() (res definitions.CachingOptions) { return }
func (m *QueryPatternsRequest) WithStartEnd(start, end time.Time) definitions.Request {
@@ -534,3 +561,33 @@ func (m *QueryPatternsRequest) LogToSpan(sp opentracing.Span) {
}
sp.LogFields(fields...)
}
+
+func (m *DetectedLabelsRequest) GetStep() int64 { return 0 }
+
+func (m *DetectedLabelsRequest) GetCachingOptions() (res definitions.CachingOptions) { return }
+
+func (m *DetectedLabelsRequest) WithStartEnd(start, end time.Time) definitions.Request {
+ clone := *m
+ clone.Start = start
+ clone.End = end
+ return &clone
+}
+
+func (m *DetectedLabelsRequest) WithQuery(query string) definitions.Request {
+ clone := *m
+ clone.Query = query
+ return &clone
+}
+
+func (m *DetectedLabelsRequest) WithStartEndForCache(start, end time.Time) resultscache.Request {
+ return m.WithStartEnd(start, end).(resultscache.Request)
+}
+
+func (m *DetectedLabelsRequest) LogToSpan(sp opentracing.Span) {
+ fields := []otlog.Field{
+ otlog.String("query", m.GetQuery()),
+ otlog.String("start", m.Start.String()),
+ otlog.String("end", m.End.String()),
+ }
+ sp.LogFields(fields...)
+}
diff --git a/pkg/logql/syntax/ast.go b/pkg/logql/syntax/ast.go
index 6e3f18b7cc8e6..e5e80b4d0c172 100644
--- a/pkg/logql/syntax/ast.go
+++ b/pkg/logql/syntax/ast.go
@@ -366,14 +366,6 @@ func newLineFilterExpr(ty log.LineMatchType, op, match string) *LineFilterExpr {
func newOrLineFilter(left, right *LineFilterExpr) *LineFilterExpr {
right.Ty = left.Ty
- if left.Ty == log.LineMatchEqual || left.Ty == log.LineMatchRegexp || left.Ty == log.LineMatchPattern {
- left.Or = right
- right.IsOrChild = true
- return left
- }
-
- // !(left or right) == (!left and !right).
-
// NOTE: Consider, we have chain of "or", != "foo" or "bar" or "baz"
// we parse from right to left, so first time left="bar", right="baz", and we don't know the actual `Ty` (equal: |=, notequal: !=, regex: |~, etc). So
// it will have default (0, LineMatchEqual).
@@ -385,6 +377,13 @@ func newOrLineFilter(left, right *LineFilterExpr) *LineFilterExpr {
tmp = tmp.Or
}
+ if left.Ty == log.LineMatchEqual || left.Ty == log.LineMatchRegexp || left.Ty == log.LineMatchPattern {
+ left.Or = right
+ right.IsOrChild = true
+ return left
+ }
+
+ // !(left or right) == (!left and !right).
return newNestedLineFilterExpr(left, right)
}
diff --git a/pkg/logql/syntax/ast_test.go b/pkg/logql/syntax/ast_test.go
index 9090fc98b7558..d75ff2d0261b6 100644
--- a/pkg/logql/syntax/ast_test.go
+++ b/pkg/logql/syntax/ast_test.go
@@ -545,11 +545,18 @@ func Test_FilterMatcher(t *testing.T) {
[]linecheck{{"foo", false}, {"bar", true}, {"127.0.0.2", true}, {"127.0.0.1", false}},
},
{
- `{app="foo"} |> "foo" or "bar"`,
+ `{app="foo"} |> "<_>foo<_>" or "<_>bar<_>"`,
[]*labels.Matcher{
mustNewMatcher(labels.MatchEqual, "app", "foo"),
},
- []linecheck{{"foo", true}, {"bar", true}, {"none", false}},
+ []linecheck{{"test foo test", true}, {"test bar test", true}, {"none", false}},
+ },
+ {
+ `{app="foo"} |> "<_>foo<_>" or "<_>bar<_>" or "<_>baz<_>"`,
+ []*labels.Matcher{
+ mustNewMatcher(labels.MatchEqual, "app", "foo"),
+ },
+ []linecheck{{"test foo test", true}, {"test bar test", true}, {"test baz test", true}, {"none", false}},
},
{
`{app="foo"} !> "foo" or "bar"`,
@@ -618,6 +625,18 @@ func TestOrLineFilterTypes(t *testing.T) {
_ = newOrLineFilter(left, right)
require.Equal(t, tt.ty, right.Ty)
+ require.Equal(t, tt.ty, left.Ty)
+ })
+
+ t.Run("right inherits left's type with multiple or filters", func(t *testing.T) {
+ f1 := &LineFilterExpr{LineFilter: LineFilter{Ty: tt.ty, Match: "something"}}
+ f2 := &LineFilterExpr{LineFilter: LineFilter{Ty: log.LineMatchEqual, Match: "something"}}
+ f3 := &LineFilterExpr{LineFilter: LineFilter{Ty: log.LineMatchEqual, Match: "something"}}
+
+ _ = newOrLineFilter(f1, newOrLineFilter(f2, f3))
+ require.Equal(t, tt.ty, f1.Ty)
+ require.Equal(t, tt.ty, f2.Ty)
+ require.Equal(t, tt.ty, f3.Ty)
})
}
}
diff --git a/pkg/logql/syntax/parser_test.go b/pkg/logql/syntax/parser_test.go
index f12309f2b24a5..4c2a85203938b 100644
--- a/pkg/logql/syntax/parser_test.go
+++ b/pkg/logql/syntax/parser_test.go
@@ -3173,6 +3173,66 @@ var ParseTestCases = []struct {
},
},
},
+ {
+ in: `{app="foo"} |= "foo" or "bar" or "baz"`,
+ exp: &PipelineExpr{
+ Left: newMatcherExpr([]*labels.Matcher{mustNewMatcher(labels.MatchEqual, "app", "foo")}),
+ MultiStages: MultiStageExpr{
+ &LineFilterExpr{
+ LineFilter: LineFilter{
+ Ty: log.LineMatchEqual,
+ Match: "foo",
+ },
+ Or: newOrLineFilter(
+ &LineFilterExpr{
+ LineFilter: LineFilter{
+ Ty: log.LineMatchEqual,
+ Match: "bar",
+ },
+ IsOrChild: true,
+ },
+ &LineFilterExpr{
+ LineFilter: LineFilter{
+ Ty: log.LineMatchEqual,
+ Match: "baz",
+ },
+ IsOrChild: true,
+ }),
+ IsOrChild: false,
+ },
+ },
+ },
+ },
+ {
+ in: `{app="foo"} |> "foo" or "bar" or "baz"`,
+ exp: &PipelineExpr{
+ Left: newMatcherExpr([]*labels.Matcher{mustNewMatcher(labels.MatchEqual, "app", "foo")}),
+ MultiStages: MultiStageExpr{
+ &LineFilterExpr{
+ LineFilter: LineFilter{
+ Ty: log.LineMatchPattern,
+ Match: "foo",
+ },
+ Or: newOrLineFilter(
+ &LineFilterExpr{
+ LineFilter: LineFilter{
+ Ty: log.LineMatchPattern,
+ Match: "bar",
+ },
+ IsOrChild: true,
+ },
+ &LineFilterExpr{
+ LineFilter: LineFilter{
+ Ty: log.LineMatchPattern,
+ Match: "baz",
+ },
+ IsOrChild: true,
+ }),
+ IsOrChild: false,
+ },
+ },
+ },
+ },
}
func TestParse(t *testing.T) {
diff --git a/pkg/pattern/drain/drain.go b/pkg/pattern/drain/drain.go
index 4d7c52bebf0c6..784beabb2a876 100644
--- a/pkg/pattern/drain/drain.go
+++ b/pkg/pattern/drain/drain.go
@@ -25,7 +25,9 @@ package drain
import (
"math"
"strconv"
+ "strings"
"unicode"
+ "unsafe"
"github.com/hashicorp/golang-lru/v2/simplelru"
"github.com/prometheus/common/model"
@@ -139,7 +141,7 @@ func DefaultConfig() *Config {
// MaxClusterDepth and SimTh, the less the chance that there will be
// "similar" clusters, but the greater the footprint.
SimTh: 0.3,
- MaxChildren: 100,
+ MaxChildren: 15,
ParamString: `<_>`,
MaxClusters: 300,
}
@@ -156,22 +158,24 @@ func New(config *Config, metrics *Metrics) *Drain {
}
d := &Drain{
- config: config,
- rootNode: createNode(),
- idToCluster: createLogClusterCache(config.MaxClusters, evictFn),
- metrics: metrics,
- tokenizer: splittingTokenizer{}, // Default to this for now
+ config: config,
+ rootNode: createNode(),
+ idToCluster: createLogClusterCache(config.MaxClusters, evictFn),
+ metrics: metrics,
+ tokenizer: newPunctuationTokenizer(),
+ maxAllowedLineLength: 3000,
}
return d
}
type Drain struct {
- config *Config
- rootNode *Node
- idToCluster *LogClusterCache
- clustersCounter int
- metrics *Metrics
- tokenizer LineTokenizer
+ config *Config
+ rootNode *Node
+ idToCluster *LogClusterCache
+ clustersCounter int
+ metrics *Metrics
+ tokenizer LineTokenizer
+ maxAllowedLineLength int
}
func (d *Drain) Clusters() []*LogCluster {
@@ -183,10 +187,14 @@ func (d *Drain) TrainTokens(tokens []string, stringer func([]string) string, ts
}
func (d *Drain) Train(content string, ts int64) *LogCluster {
- return d.train(d.tokenizer.Tokenize(content), d.tokenizer.Join, ts)
+ if len(content) > d.maxAllowedLineLength {
+ return nil
+ }
+ tokens, state := d.tokenizer.Tokenize(content)
+ return d.train(tokens, state, ts)
}
-func (d *Drain) train(tokens []string, stringer func([]string) string, ts int64) *LogCluster {
+func (d *Drain) train(tokens []string, state interface{}, ts int64) *LogCluster {
if len(tokens) < 4 {
return nil
}
@@ -196,11 +204,12 @@ func (d *Drain) train(tokens []string, stringer func([]string) string, ts int64)
d.clustersCounter++
clusterID := d.clustersCounter
matchCluster = &LogCluster{
- Tokens: tokens,
- id: clusterID,
- Size: 1,
- Stringer: stringer,
- Chunks: Chunks{},
+ Tokens: tokens,
+ TokenState: state,
+ id: clusterID,
+ Size: 1,
+ Stringer: d.tokenizer.Join,
+ Chunks: Chunks{},
}
matchCluster.append(model.TimeFromUnixNano(ts))
d.idToCluster.Set(clusterID, matchCluster)
@@ -219,15 +228,16 @@ func (d *Drain) train(tokens []string, stringer func([]string) string, ts int64)
}
func (d *Drain) TrainPattern(content string, samples []*logproto.PatternSample) *LogCluster {
- tokens := deduplicatePlaceholders(d.tokenizer.Tokenize(content), d.config.ParamString)
+ tokens, state := d.tokenizer.Tokenize(content)
matchCluster := d.treeSearch(d.rootNode, tokens, d.config.SimTh, true)
// Match no existing log cluster
if matchCluster == nil {
d.clustersCounter++
clusterID := d.clustersCounter
matchCluster = &LogCluster{
- Tokens: tokens,
- id: clusterID,
+ Tokens: tokens,
+ TokenState: state,
+ id: clusterID,
}
d.idToCluster.Set(clusterID, matchCluster)
d.addSeqToPrefixTree(d.rootNode, matchCluster)
@@ -241,24 +251,33 @@ func (d *Drain) TrainPattern(content string, samples []*logproto.PatternSample)
return matchCluster
}
-func deduplicatePlaceholders(tokens []string, param string) []string {
- if len(tokens) < 2 {
- return tokens
+func deduplicatePlaceholders(line string, placeholder string) string {
+ first := strings.Index(line, "<_><_>")
+ if first == -1 {
+ return line
}
- i := 1
- for k := 1; k < len(tokens); k++ {
- if tokens[k] != param || tokens[k] != tokens[k-1] {
- if i != k {
- tokens[i] = tokens[k]
+ builder := make([]byte, 0, len(line))
+ low := 0
+ for i := first; i < len(line)-5; i++ {
+ if line[i:i+len(placeholder)] == placeholder {
+ high := i + 3
+ for ; high < len(line)-2; high += 3 {
+ if line[high:high+len(placeholder)] != placeholder {
+ break
+ }
}
- i++
+ builder = append(builder, line[low:i+len(placeholder)]...)
+ low = high
+ i = high
}
}
- return tokens[:i]
+ builder = append(builder, line[low:]...)
+
+ return unsafe.String(unsafe.SliceData(builder), len(builder))
}
func (d *Drain) PatternString(c *LogCluster) string {
- s := d.tokenizer.Join(deduplicatePlaceholders(c.Tokens, d.config.ParamString))
+ s := deduplicatePlaceholders(d.tokenizer.Join(c.Tokens, c.TokenState), d.config.ParamString)
if s == d.config.ParamString {
return ""
}
@@ -271,7 +290,7 @@ func (d *Drain) Delete(cluster *LogCluster) {
// Match against an already existing cluster. Match shall be perfect (sim_th=1.0). New cluster will not be created as a result of this call, nor any cluster modifications.
func (d *Drain) Match(content string) *LogCluster {
- contentTokens := d.tokenizer.Tokenize(content)
+ contentTokens, _ := d.tokenizer.Tokenize(content)
matchCluster := d.treeSearch(d.rootNode, contentTokens, 1.0, true)
return matchCluster
}
@@ -413,6 +432,7 @@ func (d *Drain) addSeqToPrefixTree(rootNode *Node, cluster *LogCluster) {
// if token not matched in this layer of existing tree.
if _, ok = curNode.keyToChildNode[token]; !ok {
if !d.hasNumbers(token) {
+ // No numbers in the token: use the token itself as the key to traverse
if _, ok = curNode.keyToChildNode[d.config.ParamString]; ok {
if len(curNode.keyToChildNode) < d.config.MaxChildren {
newNode := createNode()
@@ -435,6 +455,7 @@ func (d *Drain) addSeqToPrefixTree(rootNode *Node, cluster *LogCluster) {
}
}
} else {
+ // Token contains numbers: prioritize the param string path
if _, ok = curNode.keyToChildNode[d.config.ParamString]; !ok {
newNode := createNode()
curNode.keyToChildNode[d.config.ParamString] = newNode
diff --git a/pkg/pattern/drain/drain_benchmark_test.go b/pkg/pattern/drain/drain_benchmark_test.go
index e03770f613c04..35ec024af138e 100644
--- a/pkg/pattern/drain/drain_benchmark_test.go
+++ b/pkg/pattern/drain/drain_benchmark_test.go
@@ -39,8 +39,8 @@ func BenchmarkDrain_TrainExtractsPatterns(b *testing.B) {
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
+ drain := New(DefaultConfig(), nil)
for _, line := range lines {
- drain := New(DefaultConfig(), nil)
drain.Train(line, 0)
}
}
diff --git a/pkg/pattern/drain/drain_test.go b/pkg/pattern/drain/drain_test.go
index cc16f0b7fd64c..34bcf8b4c12a5 100644
--- a/pkg/pattern/drain/drain_test.go
+++ b/pkg/pattern/drain/drain_test.go
@@ -4,6 +4,7 @@ import (
"bufio"
"fmt"
"os"
+ "strings"
"testing"
"github.com/stretchr/testify/require"
@@ -27,34 +28,34 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
drain: New(DefaultConfig(), nil),
inputFile: `testdata/agent-logfmt.txt`,
patterns: []string{
- `ts=2024-04-16T15:10:42.556278698Z caller=filetargetmanager.go:361 level=info component=logs logs_config=default msg="Adding target" key="/var/log/pods/*b92ee988-5c26-4c64-bba3-ff6a01723759/grafana/*.log:{app=\"grafana\", conprof=\"true\", container=\"grafana\", instanceId=\"i1111\", job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=\"orgnamehere\", plan=\"free\", pod=\"orgnamehere-grafana-7c65678f86-9zhlb\", pod_template_hash=\"7c65678f86\", resource_version=\"143638246\", slug=\"orgnamehere\", stackId=\"866772\"}"`,
- `ts=2024-04-16T15:10:42.556706613Z caller=filetargetmanager.go:361 level=info component=logs logs_config=default msg="Adding target" key="/var/log/pods/*b92ee988-5c26-4c64-bba3-ff6a01723759/hgrun/*.log:{app=\"grafana\", conprof=\"true\", container=\"hgrun\", instanceId=\"i1111\", job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=\"orgnamehere\", plan=\"free\", pod=\"orgnamehere-grafana-7c65678f86-9zhlb\", pod_template_hash=\"7c65678f86\", resource_version=\"143638246\", slug=\"orgnamehere\", stackId=\"866772\"}"`,
- `ts=2024-04-16T15:10:42.556930066Z caller=filetargetmanager.go:361 level=info component=logs logs_config=default msg="Adding target" key="/var/log/pods/*b92ee988-5c26-4c64-bba3-ff6a01723759/hg-plugins/*.log:{app=\"grafana\", conprof=\"true\", container=\"hg-plugins\", instanceId=\"i1111\", job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=\"orgnamehere\", plan=\"free\", pod=\"orgnamehere-grafana-7c65678f86-9zhlb\", pod_template_hash=\"7c65678f86\", resource_version=\"143638246\", slug=\"orgnamehere\", stackId=\"866772\"}"`,
- `ts=2024-04-16T15:10:42.557102408Z caller=filetargetmanager.go:361 level=info component=logs logs_config=default msg="Adding target" key="/var/log/pods/*b92ee988-5c26-4c64-bba3-ff6a01723759/hosted-grafana-security/*.log:{app=\"grafana\", conprof=\"true\", container=\"hosted-grafana-security\", instanceId=\"i1111\", job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=\"orgnamehere\", plan=\"free\", pod=\"orgnamehere-grafana-7c65678f86-9zhlb\", pod_template_hash=\"7c65678f86\", resource_version=\"143638246\", slug=\"orgnamehere\", stackId=\"866772\"}"`,
+ `ts=2024-04-16T15:10:42.<_> level=info msg="finished node evaluation" controller_id=module.http.cloudwatch_pipelines node_id=prometheus.scrape.<_> duration=<_>.<_>`,
`ts=2024-04-16T15:10:43.192290389Z caller=filetargetmanager.go:361 level=info component=logs logs_config=default msg="Adding target" key="/var/log/pods/*19a1cce8-5f04-46e0-a124-292b0dd9b343/testcoordinator/*.log:{batch_kubernetes_io_controller_uid=\"25ec5edf-f78e-468b-b6f3-3b9685f0cc8f\", batch_kubernetes_io_job_name=\"testcoordinator-job-2665838\", container=\"testcoordinator\", controller_uid=\"25ec5edf-f78e-468b-b6f3-3b9685f0cc8f\", job=\"k6-cloud/testcoordinator\", job_name=\"testcoordinator-job-2665838\", name=\"testcoordinator\", namespace=\"k6-cloud\", pod=\"testcoordinator-job-2665838-9g8ds\"}"`,
- `ts=2024-04-16T15:10:43.551543875Z caller=filetargetmanager.go:397 level=info component=logs logs_config=default msg="Removing target" key="/var/log/pods/*35649bfd-52ff-4281-9294-5f65fd5a89fc/marketplaces-api/*.log:{container=\"marketplaces-api\", job=\"grafana-com/marketplaces-api\", name=\"marketplaces-api\", namespace=\"grafana-com\", pod=\"marketplaces-api-f67ff7567-gqrvb\", pod_template_hash=\"f67ff7567\"}"`,
- `ts=<_> caller=filetarget.go:192 level=info component=logs logs_config=default msg="filetarget:watcher closed, tailer stopped, positions saved" path=<_>`,
- `ts=<_> caller=filetarget.go:313 level=info component=logs logs_config=default msg="watching new directory" directory=<_>`,
- `ts=<_> caller=filetarget.go:326 level=info component=logs logs_config=default msg="removing directory from watcher" directory=<_>`,
- `ts=<_> caller=filetargetmanager.go:181 level=info component=logs logs_config=default msg="received file watcher event" name=<_> op=CREATE`,
- `ts=<_> caller=filetargetmanager.go:361 level=info component=logs logs_config=default msg="Adding target" key=<_> \"kube-proxy\", container=\"kube-proxy\", job=<_> namespace=\"kube-system\", pod=\"kube-proxy-gke-ops-us-east-0-main-n2s32-1-1dd39c-32ae1dde-hmhw\", tier=\"node\"}"`,
- `ts=<_> caller=filetargetmanager.go:397 level=info component=logs logs_config=default msg="Removing target" key=<_> \"grafana\", conprof=\"true\", container=\"grafana\", instanceId=<_> job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=<_> plan=\"free\", pod=<_> pod_template_hash=<_> resource_version=<_> slug=<_> stackId=<_>`,
- `ts=<_> caller=filetargetmanager.go:397 level=info component=logs logs_config=default msg="Removing target" key=<_> \"grafana\", conprof=\"true\", container=\"hg-plugins\", instanceId=<_> job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=<_> plan=\"free\", pod=<_> pod_template_hash=<_> resource_version=<_> slug=<_> stackId=<_>`,
- `ts=<_> caller=filetargetmanager.go:397 level=info component=logs logs_config=default msg="Removing target" key=<_> \"grafana\", conprof=\"true\", container=\"hgrun\", instanceId=<_> job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=<_> plan=\"free\", pod=<_> pod_template_hash=<_> resource_version=<_> slug=<_> stackId=<_>`,
- `ts=<_> caller=filetargetmanager.go:397 level=info component=logs logs_config=default msg="Removing target" key=<_> \"grafana\", conprof=\"true\", container=\"hosted-grafana-security\", instanceId=<_> job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=<_> plan=\"free\", pod=<_> pod_template_hash=<_> resource_version=<_> slug=<_> stackId=<_>`,
- `ts=<_> caller=log.go:168 component=logs logs_config=default level=info msg="Re-opening moved/deleted file <_> ..."`,
- `ts=<_> caller=log.go:168 component=logs logs_config=default level=info msg="Seeked <_> - &{Offset:0 Whence:0}"`,
- `ts=<_> caller=log.go:168 component=logs logs_config=default level=info msg="Successfully reopened <_>`,
- `ts=<_> caller=log.go:168 component=logs logs_config=default level=info msg="Waiting for <_> to appear..."`,
- `ts=<_> caller=logfmt.go:139 level=error component=logs logs_config=default component=file_pipeline component=stage type=logfmt msg="failed to decode logfmt" err="bufio.Scanner:token too long"`,
- `ts=<_> caller=logfmt.go:139 level=error component=logs logs_config=default component=file_pipeline component=stage type=logfmt msg="failed to decode logfmt" err="logfmt syntax error at pos <_> on line 1:unexpected '\"'"`,
- `ts=<_> caller=tailer.go:118 level=info component=logs logs_config=default component=tailer msg="position timer:exited" path=<_>`,
- `ts=<_> caller=tailer.go:147 level=info component=logs logs_config=default component=tailer msg="tail routine:started" path=<_>`,
- `ts=<_> caller=tailer.go:155 level=info component=logs logs_config=default component=tailer msg="tail routine:exited" path=<_>`,
- `ts=<_> caller=tailer.go:164 level=info component=logs logs_config=default component=tailer msg="tail routine:tail channel closed, stopping tailer" path=<_> reason=null`,
- `ts=<_> caller=tailer.go:207 level=info component=logs logs_config=default component=tailer msg="skipping update of position for a file which does not currently exist" path=<_>`,
- `ts=<_> caller=tailer.go:245 level=info component=logs logs_config=default component=tailer msg="stopped tailing file" path=<_>`,
- `ts=<_> level=info msg="finished node evaluation" controller_id=module.http.cloudwatch_pipelines node_id=<_> duration=<_>`,
+ `ts=2024-04-16T15:10:43.551782223Z caller=tailer.go:245 level=info component=logs logs_config=default component=tailer msg="stopped tailing file" path=/var/log/pods/grafana-com_marketplaces-api-f67ff7567-gqrvb_35649bfd-52ff-4281-9294-5f65fd5a89fc/marketplaces-api/0.log`,
+ `ts=2024-04-16T15:10:43.<_> caller=filetargetmanager.go:<_> level=info component=logs logs_config=default msg="<_> target" key="/var/log/pods/*<_>/<_>/*.log:{<_>=\"<_>\", <_>=\"<_><_><_><_><_><_> <_><_><_><_><_>\", namespace=\"<_>\", pod=\"<_>\", <_>=\"<_>\"}"`,
+ `ts=2024-04-16T15:10:43.<_> caller=tailer.go:<_> level=info component=logs logs_config=default component=tailer msg="<_> <_><_> <_> <_> <_><_> <_> <_><_> <_><_><_><_><_><_><_><_><_><_><_><_><_><_><_><_> <_><_><_>`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=filetarget.go:192 level=info component=logs logs_config=default msg="filetarget: watcher closed, tailer stopped, positions saved" path=/var/log/pods/*<_>/<_>/*.log`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=filetarget.go:313 level=info component=logs logs_config=default msg="watching new directory" directory=/var/log/pods/<_>/<_>`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=filetarget.go:313 level=info component=logs logs_config=default msg="watching new directory" directory=/var/log/pods/hosted-grafana_.<_>/<_>`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=filetarget.go:326 level=info component=logs logs_config=default msg="removing directory from watcher" directory=/var/log/pods/hosted-grafana_.<_>/<_>`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=filetargetmanager.go:181 level=info component=logs logs_config=default msg="received file watcher event" name=/var/log/pods/<_>/<_>/<_>.log op=CREATE`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=filetargetmanager.go:181 level=info component=logs logs_config=default msg="received file watcher event" name=/var/log/pods/<_><_><_>/<_><_><_>.<_> op=CREATE`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=filetargetmanager.go:181 level=info component=logs logs_config=default msg="received file watcher event" name=/var/log/pods/<_><_><_>/<_><_><_>.<_>.<_> op=CREATE`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=filetargetmanager.go:181 level=info component=logs logs_config=default msg="received file watcher event" name=/var/log/pods/hosted-grafana_.<_>/<_>/0.log.<_>.<_> op=CREATE`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=filetargetmanager.go:<_> level=info component=logs logs_config=default msg="<_> target" key="/var/log/pods/*<_>/<_>/*.log:{app=\"grafana\", conprof=\"true\", container=\"<_>\", instanceId=\"<_>\", job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=\"<_>\", plan=\"free\", pod=\"<_>\", pod_template_hash=\"<_>\", resource_version=\"<_>\", slug=\"<_>\", stackId=\"<_>\"}"`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Re-opening moved/deleted file /var/log/pods/<_>/<_>/<_>.log ..."`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Re-opening moved/deleted file /var/log/pods/hosted-grafana_.<_>/<_>/0.log ..."`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Seeked /var/log/pods/<_>/<_>/0.log - &{Offset:0 Whence:0}"`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Seeked /var/log/pods/hosted-grafana_.<_>/<_>/0.log - &{Offset:0 Whence:0}"`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Successfully reopened /var/log/pods/<_>/<_>/<_>.log"`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Successfully reopened /var/log/pods/hosted-grafana_.<_>/<_>/0.log"`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Waiting for /var/log/pods/<_>/<_>/0.log to appear..."`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Waiting for /var/log/pods/hosted-grafana_.<_>/<_>/0.log to appear..."`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=logfmt.go:139 level=error component=logs logs_config=default component=file_pipeline component=stage type=logfmt msg="failed to decode logfmt" err="bufio.Scanner: token too long"`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=logfmt.go:139 level=error component=logs logs_config=default component=file_pipeline component=stage type=logfmt msg="failed to decode logfmt" err="logfmt syntax error at pos <_> on line 1: unexpected '\"'"`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=tailer.go:245 level=info component=logs logs_config=default component=tailer msg="stopped tailing file" path=/var/log/pods/hosted-grafana_.<_>/<_>/0.log`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=tailer.go:<_> level=info component=logs logs_config=default component=tailer msg="<_> <_>: <_>" path=/var/log/pods/<_>/<_>/0.log`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=tailer.go:<_> level=info component=logs logs_config=default component=tailer msg="<_> <_>: <_>" path=/var/log/pods/hosted-grafana_.<_>/<_>/0.log`,
+ `ts=2024-04-16T15:10:<_>.<_> caller=tailer.go:<_> level=info component=logs logs_config=default component=tailer msg="<_> <_><_> <_> <_> <_><_> <_> <_><_> <_><_><_><_><_><_><_><_><_><_><_><_><_><_><_><_><_><_> <_><_><_>`,
},
},
{
@@ -62,126 +63,103 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
inputFile: `testdata/ingester-logfmt.txt`,
patterns: []string{
`ts=2024-04-17T09:52:46.363974185Z caller=http.go:194 level=debug traceID=1b48f5156a61ca69 msg="GET /debug/pprof/delta_mutex (200) 1.161082ms"`,
- `ts=<_> caller=head.go:216 level=debug tenant=987678 msg="profile is empty after delta computation" metricName=memory`,
- `ts=<_> caller=http.go:194 level=debug traceID=<_> orgID=<_> msg="POST /ingester.v1.IngesterService/Push (200) <_>`,
+ `ts=2024-04-17T09:52:46.<_> caller=head.go:216 level=debug tenant=987678 msg="profile is empty after delta computation" metricName=memory`,
+ `ts=2024-04-17T09:52:46.<_> caller=http.go:194 level=debug traceID=<_> orgID=<_> msg="POST /ingester.v1.IngesterService/Push (200) <_>.<_>"`,
},
},
{
drain: New(DefaultConfig(), nil),
inputFile: `testdata/drone-json.txt`,
patterns: []string{
- `{"duration":<_> "debug","method":"GET","msg":"request completed","referer":"","remote":"10.136.105.40:52702","request":"/metrics","status":200,"time":<_> <_> <_> "GrafanaAgent/v0.40.3 (flow; linux; helm)"}`,
- `{"id":<_> "debug","max-pool":4,"min-pool":0,"msg":"check capacity","pending-builds":0,"running-builds":0,"server-buffer":0,"server-capacity":0,"server-count":0,"time":<_> <_> <_>`,
- `{"id":<_> "debug","msg":"calculate server capacity","time":<_> <_> <_>`,
- `{"id":<_> "debug","msg":"calculate unfinished jobs","time":<_> <_> <_>`,
- `{"id":<_> "debug","msg":"check capacity complete","time":<_> <_> <_>`,
- `{"id":<_> "debug","msg":"no capacity changes required","time":<_> <_> <_>`,
+ `{"duration":<_>,"level":"debug","method":"GET","msg":"request completed","referer":"","remote":"10.136.105.40:52702","request":"/metrics","status":200,"time":"<_>:<_>:<_>","user-agent":"GrafanaAgent/v0.40.3 (flow; linux; helm)"}`,
+ `{"id":"<_>","level":"debug","max-pool":4,"min-pool":0,"msg":"check capacity","pending-builds":0,"running-builds":0,"server-buffer":0,"server-capacity":0,"server-count":0,"time":"<_>:<_>:<_>"}`,
+ `{"id":"<_>","level":"debug","msg":"calculate server capacity","time":"<_>:<_>:<_>"}`,
+ `{"id":"<_>","level":"debug","msg":"calculate unfinished jobs","time":"<_>:<_>:<_>"}`,
+ `{"id":"<_>","level":"debug","msg":"check capacity complete","time":"<_>:<_>:<_>"}`,
+ `{"id":"<_>","level":"debug","msg":"no capacity changes required","time":"<_>:<_>:<_>"}`,
},
},
{
drain: New(DefaultConfig(), nil),
inputFile: "testdata/distributor-logfmt.txt",
patterns: []string{
- `ts=2024-05-02T12:17:22.115385619Z caller=http.go:194 level=debug traceID=7836a12bb7f1964e orgID=75 msg="POST /ingest?aggregationType=sum&from=1714652227107641016&name=checkoutservice%7B__session_id__%3D294b9729f5a7de95%2Cnamespace%3Dotel-demo%7D&sampleRate=100&spyName=gospy&units=samples&until=1714652242109516917 (200) 1.562143ms"`,
- `ts=2024-05-02T12:17:22.242343806Z caller=http.go:194 level=debug traceID=404c6a83a18e66a4 orgID=75 msg="POST /ingest?aggregationType=average&from=1714652227232613927&name=checkoutservice%7B__session_id__%3D294b9729f5a7de95%2Cnamespace%3Dotel-demo%7D&sampleRate=0&spyName=gospy&units=goroutines&until=1714652242232506798 (200) 2.902485ms"`,
- `ts=<_> caller=http.go:194 level=debug traceID=<_> orgID=1819 msg="POST /pyroscope/ingest?aggregationType=sum&from=1714652230&name=<_> 0&spyName=scrape&units=samples&until=1714652240 (200) <_>`,
- `ts=<_> caller=http.go:194 level=debug traceID=<_> orgID=75 msg="POST /ingest?aggregationType=&from=1714652227232613927&name=checkoutservice%7B__session_id__%3D294b9729f5a7de95%2Cnamespace%3Dotel-demo%7D&sampleRate=<_> gospy&units=&until=1714652242232506798 (200) <_>`,
- `ts=<_> caller=http.go:194 level=debug traceID=<_> orgID=<_> msg="POST /push.v1.PusherService/Push <_> <_>`,
+ `ts=2024-05-02T12:17:22.851228301Z caller=http.go:194 level=debug traceID=1e1fe5ba1756bc38 orgID=1819 msg="POST /pyroscope/ingest?aggregationType=sum&from=1714652230&name=flamegraph.com%7Bapp_kubernetes_io_instance%3Dflamegraph-com%2Capp_kubernetes_io_name%3Dflamegraph-com%2Ccluster%3Dflamegraph.com%2Cinstance%3D10.0.11.146%3A8001%2Cjob%3Dkubernetes-pods%2Cnamespace%3Dflamegraph-com%2Cpod%3Dflamegraph-com-backend-79c858c7bf-jw2hn%2Cpod_template_hash%3D79c858c7bf%2Cpyroscope_tenant%3Dpyroscope%2Ctier%3Dbackend%7D&sampleRate=0&spyName=scrape&units=samples&until=1714652240 (200) 22.345191ms"`,
+ `ts=2024-05-02T12:17:22.<_> caller=http.go:194 level=debug traceID=<_> orgID=75 msg="POST /ingest?aggregationType=&from=1714652227232613927&name=checkoutservice%7B__session_id__%3D294b9729f5a7de95%2Cnamespace%3Dotel-demo%7D&sampleRate=<_>&spyName=gospy&units=&until=1714652242232506798 (200) <_>.<_>"`,
+ `ts=2024-05-02T12:17:22.<_> caller=http.go:194 level=debug traceID=<_> orgID=75 msg="POST /ingest?aggregationType=<_>&from=<_>&name=checkoutservice%7B__session_id__%3D294b9729f5a7de95%2Cnamespace%3Dotel-demo%7D&sampleRate=<_>&spyName=gospy&units=<_>&until=<_> (200) <_>.<_>"`,
+ `ts=2024-05-02T12:17:<_>.<_> caller=http.go:194 level=debug traceID=<_> orgID=1819 msg="POST /pyroscope/ingest?aggregationType=sum&from=1714652230&name=flamegraph.com.frontend%7Bapp_kubernetes_io_instance%3Dflamegraph-com%2Capp_kubernetes_io_name%3Dflamegraph-com%2Ccluster%3Dflamegraph.com%2Cinstance%3D10.0.9.115%3A9091%2Cjob%3Dkubernetes-pods%2Cnamespace%3Dflamegraph-com%2Cpod%3Dflamegraph-com-frontend-6fb87f8785-pd87k%2Cpod_template_hash%3D6fb87f8785%2Cpyroscope_tenant%3Dpyroscope%2Ctier%3Dfrontend%7D&sampleRate=0&spyName=scrape&units=samples&until=1714652240 (200) <_>.<_>"`,
+ `ts=2024-05-02T12:17:<_>.<_> caller=http.go:194 level=debug traceID=<_> orgID=<_> msg="POST /push.v1.PusherService/Push (<_>) <_>.<_>"`,
},
},
{
drain: New(DefaultConfig(), nil),
inputFile: "testdata/journald.txt",
patterns: []string{
- ` exec /bin/hgrun -log.level=debug launch -bundledPluginsManifest /proc/$(pidof plugins-pause)/root/manifest.json -bundledPluginsDir /proc/$(pidof plugins-pause)/root/plugins],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10000,Protocol:TCP,HostIP:,},ContainerPort{Name:profiling,HostPort:0,ContainerPort:6060,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:HG_API,Value:http://hosted-grafana-api,ValueFrom:nil,},EnvVar{Name:HG_INSTANCE_SLUG,Value:<_> nil,},EnvVar{Name:HG_INSTANCE_SECRET,Value:<_> nil,},EnvVar{Name:EXTRA_OPTIONS,Value:-profile -profile-port=6060 -profile-addr=0.0.0.0,ValueFrom:nil,},EnvVar{Name:HG_CREATE_TIME_MS,Value:<_> nil,},EnvVar{Name:HG_PULL_POLICY,Value:Always,ValueFrom:nil,},EnvVar{Name:HG_START_REASON,Value:active,ValueFrom:nil,},EnvVar{Name:HGRUN_SECURE_PLUGINS,Value:false,ValueFrom:nil,},EnvVar{Name:HGRUN_PLUGIN_RUNNER_ROOT_CA,Value:false,ValueFrom:nil,},EnvVar{Name:OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,Value:http://jaeger-agent.jaeger.svc.cluster.local:4317,ValueFrom:nil,},EnvVar{Name:JAEGER_SAMPLER_PARAM,Value:1,ValueFrom:nil,},EnvVar{Name:OTEL_RESOURCE_ATTRIBUTES,Value:cluster=dev-us-central-0,namespace=hosted-grafana,ValueFrom:nil,},EnvVar{Name:HG_PROBE_PATH,Value:/api/health,ValueFrom:nil,},EnvVar{Name:HGRUN_EXIT_ON_PLUGIN_FAIL,Value:true,ValueFrom:nil,},EnvVar{Name:HGRUN_PLUGIN_INSTALL_RETRIES,Value:2,ValueFrom:nil,},EnvVar{Name:HGRUN_PLUGIN_INSTALL_CONCURRENCY,Value:1,ValueFrom:nil,},EnvVar{Name:HGRUN_LAUNCH_TIMEOUT,Value:3m0s,ValueFrom:nil,},EnvVar{Name:GOMEMLIMIT,Value:429496730,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{26 -3} {} 26m DecimalSI},memory: {{293601280 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/api/health,Port:{0 80 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/hgrun check],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/hgrun drain -timeout 1m0s -waitTime 55s],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_PTRACE],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod <_> ErrImagePull: [rpc error: code =NotFound desc =failed to pull and unpack image "us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> failed to resolve reference "us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> not found, failed to pull and unpack image "us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> failed to resolve reference "us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> unexpected status from HEAD request to https:<_> 403 Forbidden]`,
` ln --force -s /proc/$(pidof hgrun-pause)/root/bin/hgrun /bin/hgrun;`,
` while [ "$(pidof plugins-pause)" = "" ]; do sleep 0.5; done;`,
` ts=2024-05-07T11:59:32.025687537Z level=error caller=http_client.go:56 app=hgrun hgrun_version=0.1.453-59-gf3f63162a msg="request`,
- ` ts=2024-05-07T11:59:<_> level=error caller=http_client.go:56 app=hgrun <_> msg="request failed" error="Get \"http://127.0.0.1:3000/api/health\": dial tcp 127.0.0.1:3000: connect: connection refused" method=GET url=http://127.0.0.1:3000/api/health`,
+ ` ts=2024-05-07T11:59:<_>.<_> level=error caller=http_client.go:56 app=hgrun hgrun_version=0.1.<_> msg="request failed" error="Get \"http://127.0.0.1:3000/api/health\": dial tcp 127.0.0.1:3000: connect: connection refused" method=GET url=http://127.0.0.1:3000/api/health`,
`2024-05-07T11:59:43.484606Z INFO ExtHandler ExtHandler Downloading agent manifest`,
- `2024-05-07T11:59:<_> INFO TelemetryEventsCollector ExtHandler Collected 2 events for extension: Microsoft.Azure.Extensions.CustomScript`,
- `<_> Consumed <_> CPU time.`,
- `<_> Deactivated successfully.`,
+ `2024-05-07T11:59:<_>.<_> INFO TelemetryEventsCollector ExtHandler Collected 2 events for extension: Microsoft.Azure.Extensions.CustomScript`,
+ `<_>.scope: Consumed <_>.<_> CPU time.`,
+ `<_>.scope: Deactivated successfully.`,
`AVC apparmor="DENIED" operation="ptrace" profile="cri-containerd.apparmor.d" pid=<_> comm="pidof" requested_mask="read" denied_mask="read" peer="unconfined"`,
- `E0507 11:59:29.725681 3089 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"azure-resourcemanager-exporter\" with CrashLoopBackOff:\"back-off 5m0s restarting failed container=azure-resourcemanager-exporter pod=azure-resourcemanager-exporter-6b5b58c666-rsttd_infra-exporters(5a95f801-309c-4f33-864a-406262c6ece6)\"" pod="infra-exporters/azure-resourcemanager-exporter-6b5b58c666-rsttd" podUID="5a95f801-309c-4f33-864a-406262c6ece6"`,
- `E0507 11:59:31.554203 4531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frontend\" with CrashLoopBackOff:\"back-off 5m0s restarting failed container=frontend pod=otel-demo-alt-dev-frontend-79ccf98858-mbj4x_otel-demo-alt(d08e620e-00d0-49f1-a195-820a62e8de8f)\"" pod="otel-demo-alt/otel-demo-alt-dev-frontend-79ccf98858-mbj4x" podUID="d08e620e-00d0-49f1-a195-820a62e8de8f"`,
- `E0507 11:59:31.928148 4734 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[terraform-drift-detector-data], unattached volumes=[terraform-drift-detector-data], failed to process volumes=[]:context deadline exceeded" pod="terraform-drift-detector/terraform-drift-detector-d68b4c545-jg2vj" podUID="6c607496-ef26-454e-b2f2-4cb75b233fa3"`,
- `E0507 11:59:34.856101 4727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"grafana-render-security\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/hosted-grafana/hosted-grafana-security:0.1.181\\\"\"" pod="integration/grafana-render-service-cbff479fc-cj9tp" podUID="0e3114d1-2f3a-49d6-a71d-dbc75050d8e0"`,
+ `E0507 11:59:31.928148 4734 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[terraform-drift-detector-data], unattached volumes=[terraform-drift-detector-data], failed to process volumes=[]: context deadline exceeded" pod="terraform-drift-detector/terraform-drift-detector-d68b4c545-jg2vj" podUID="6c607496-ef26-454e-b2f2-4cb75b233fa3"`,
`E0507 11:59:34.923938 3027 kuberuntime_manager.go:1261] container &Container{Name:mysqld-exporter,Image:prom/mysqld-exporter:v0.13.0,Command:[],Args:[--collect.info_schema.innodb_metrics],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_USER,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:testcrossplane-user-exporter,},Key:username,Optional:nil,},},},EnvVar{Name:MYSQL_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:testcrossplane-user-exporter,},Key:password,Optional:nil,},},},EnvVar{Name:MYSQL_HOST,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:testcrossplane-user-exporter,},Key:endpoint,Optional:nil,},},},EnvVar{Name:MYSQL_PORT,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:testcrossplane-user-exporter,},Key:port,Optional:nil,},},},EnvVar{Name:MYSQL_TLS_MODE,Value:preferred,ValueFrom:nil,},EnvVar{Name:DATA_SOURCE_NAME,Value:$(MYSQL_USER):$(MYSQL_PASSWORD)@tcp($(MYSQL_HOST):$(MYSQL_PORT))/?tls=$(MYSQL_TLS_MODE),ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dzx7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:
[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod testcrossplane-exporter-c67cfc58f-vbzl4_crossplane-playground(3d49134d-3378-4ec3-824c-5ff4ea2590a5): CreateContainerConfigError: secret "testcrossplane-user-exporter" not found`,
- `E0507 11:59:34.923984 3027 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysqld-exporter\" with CreateContainerConfigError: \"secret \\\"testcrossplane-user-exporter\\\" not found\"" pod="crossplane-playground/testcrossplane-exporter-c67cfc58f-vbzl4" podUID="3d49134d-3378-4ec3-824c-5ff4ea2590a5"`,
- `E0507 11:59:35.928465 4734 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[custom-grafana-agent], unattached volumes=[], failed to process volumes=[]:context deadline exceeded" pod="loki-dev-010/custom-grafana-agent-856948968f-6jfks" podUID="17b244cc-ecb9-4fbc-beaa-8fa47fafe013"`,
- `E0507 11:59:37.252214 4736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ksm\" with CrashLoopBackOff:\"back-off 5m0s restarting failed container=ksm pod=new-relic-nri-bundle-nrk8s-ksm-6c785668f5-jcxh2_integration(f7cc3cca-2ffb-4fde-a73e-a4ba8b0f6b3c)\"" pod="integration/new-relic-nri-bundle-nrk8s-ksm-6c785668f5-jcxh2" podUID="f7cc3cca-2ffb-4fde-a73e-a4ba8b0f6b3c"`,
- `E0507 11:59:39.149450 4729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-agent\" with CrashLoopBackOff:\"back-off 5m0s restarting failed container=cluster-agent pod=appdynamics-cluster-agent-appdynamics-cluster-agent-56667dmbnkv_integration(69bc5e6c-0451-443e-af8a-c831871afbb8)\"" pod="integration/appdynamics-cluster-agent-appdynamics-cluster-agent-56667dmbnkv" podUID="69bc5e6c-0451-443e-af8a-c831871afbb8"`,
- `E0507 11:59:41.375655 4736 kuberuntime_manager.go:1256] container &Container{Name:ruler,Image:grafana/enterprise-metrics:v2.12.0,Command:[],Args:[-target=ruler -config.expand-env=true -config.file=/etc/mimir/mimir.yaml -distributor.remote-timeout=10s],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:memberlist,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:JAEGER_AGENT_HOST,Value:alloy-otlp.alloy-otlp.svc.cluster.local.,ValueFrom:nil,},EnvVar{Name:JAEGER_TAGS,Value:namespace=ge-metrics-federation,cluster=dev-us-central-0,ValueFrom:nil,},EnvVar{Name:JAEGER_SAMPLER_MANAGER_HOST_PORT,Value:http://alloy-otlp.alloy-otlp.svc.cluster.local.:5778/sampling,ValueFrom:nil,},EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/var/secrets/google/credentials.json,ValueFrom:nil,},EnvVar{Name:AM_TOKEN,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:ruler-alertmanager-token,},Key:token,Optional:nil,},},},EnvVar{Name:JAEGER_REPORTER_MAX_QUEUE_SIZE,Value:1000,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:gcs-credentials,ReadOnly:false,MountPath:/var/secrets/google/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/mimir,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:license,ReadOnly:false,MountPath:/license,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:runtime-config,ReadOnly:false,MountPath:/var/mimir,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/data,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:active-queries,ReadOnly:false,MountPath:/active-query-tracker,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jtnbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{1 0 http-metrics},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:45,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod gem-mimir-ruler-5f56f7846b-fgxdm_ge-metrics-federation(07c06e21-137b-4fdd-b7d3-703f0a567720): CreateContainerConfigError: secret "ruler-alertmanager-token" not found`,
- `E0507 11:59:<_> 4731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"overrides-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/kubernetes-dev/enterprise-logs:callum-shard-firstlast-08\\\"\"" pod="loki-dev-010/overrides-exporter-98c77fd66-6zj6m" podUID="1ff5bf3e-5856-4f6f-ae04-273f2dee170b"`,
- `E0507 11:59:<_> <_> kuberuntime_manager.go:1256] container &Container{Name:grafana,Image:us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> [/bin/sh],Args:[-c set -e; while [ "$(pidof hgrun-pause)" ="" ]; do sleep 0.5; done;`,
- `E0507 11:59:<_> <_> kuberuntime_manager.go:1256] container &Container{Name:pdc,Image:us.gcr.io/hosted-grafana/pdc:0.1.415,Command:[],Args:[-proxy.auth.ca-keys-dir=/var/run/secrets/pdc-certs -proxy.socks-server.addr=:10443 -proxy.ssh-server.addr=:2222 -proxy.use-socks-username-for-routing -proxy.api.http-address=:9182 -proxy.check-connpool-address-in-ring -memberlist.join=dns+gossip-ring.pdc.svc.cluster.local:7946 -api.http-address=:11443 -distributor.enabled=true -distributor.addr=:10444 -distributor.use-socks-username-for-routing -gateway.enabled=true -gateway.addr=:2244 -log.level=debug -certs.ca-private-key-file=/var/run/secrets/pdc-certs/ca.key -certs.ca-cert-file=/var/run/secrets/pdc-certs/ca.crt -certs.ca-pub-file=/var/run/secrets/pdc-certs/ca.pub -certs.cluster=local-k8s -shard-size=3 -graceful-shutdown-period=30s -enable-multiple-networks],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:socks,HostPort:0,ContainerPort:10443,Protocol:TCP,HostIP:,},ContainerPort{Name:ssh,HostPort:0,ContainerPort:2222,Protocol:TCP,HostIP:,},ContainerPort{Name:distributor,HostPort:0,ContainerPort:10444,Protocol:TCP,HostIP:,},ContainerPort{Name:gateway,HostPort:0,ContainerPort:2244,Protocol:TCP,HostIP:,},ContainerPort{Name:api,HostPort:0,ContainerPort:11443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{250 -3} {} 250m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:pdc-certs,ReadOnly:true,MountPath:/var/run/secrets/pdc-certs,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:<_> 
true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 11443 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:40,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/sleep 5],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod <_> ErrImageNeverPull: Container image "us.gcr.io/hosted-grafana/pdc:0.1.415" is not present with pull policy of Never`,
- `E0507 11:59:<_> <_> kuberuntime_manager.go:1256] container &Container{Name:ruler,Image:grafana/enterprise-metrics:v2.11.1,Command:[],Args:[-target=ruler -config.expand-env=true -config.file=/etc/mimir/mimir.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:memberlist,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:JAEGER_AGENT_HOST,Value:<_> nil,},EnvVar{Name:JAEGER_TAGS,Value:namespace=ge-metrics-federation,cluster=dev-us-central-0,ValueFrom:nil,},EnvVar{Name:JAEGER_SAMPLER_MANAGER_HOST_PORT,Value:http:<_> 5778/sampling,ValueFrom:nil,},EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/var/secrets/google/credentials.json,ValueFrom:nil,},EnvVar{Name:AM_TOKEN,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:ruler-alertmanager-token,},Key:token,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:gcs-credentials,ReadOnly:false,MountPath:/var/secrets/google/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/mimir,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:license,ReadOnly:false,MountPath:/license,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:runtime-config,ReadOnly:false,MountPath:/var/mimir,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/data,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:active-queries,ReadOnly:false,MountPath:/active-query-tracker,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:<_> 
true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{1 0 http-metrics},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:45,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod <_> CreateContainerConfigError: secret "ruler-alertmanager-token" not found`,
- `E0507 11:59:<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gcom-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/kubernetes-dev/frontend-monitoring:6a8eb5a\\\"\"" <_> <_>`,
- `E0507 11:59:<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"grafana\" with ErrImagePull: \"[rpc error: code =NotFound desc =failed to pull and unpack image \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> failed to resolve reference \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> not found, failed to pull and unpack image \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> failed to resolve reference \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> unexpected status from HEAD request to https:<_> 403 Forbidden]\"" <_> <_>`,
- `E0507 11:59:<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"grafana\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> <_> <_>`,
- `E0507 11:59:<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pdc\" with ErrImageNeverPull: \"Container image \\\"us.gcr.io/hosted-grafana/pdc:0.1.415\\\" is not present with pull policy of Never\"" <_> <_>`,
- `E0507 11:59:<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ruler\" with CreateContainerConfigError: \"secret \\\"ruler-alertmanager-token\\\" not found\"" <_> <_>`,
- `E0507 11:59:<_> <_> prober.go:104] "Probe errored" err="rpc error: code =NotFound desc =failed to exec in container: failed to load task: no running task found: task <_> not found: not found" probeType="Readiness" <_> <_> containerName="grafana"`,
- `E0507 11:59:<_> <_> remote_image.go:180] "PullImage from image service failed" err="rpc error: code =NotFound desc =failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> not found" image="us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>`,
- `E0507 11:59:<_> <_> remote_image.go:180] "PullImage from image service failed" err="rpc error: code =Unknown desc =failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> unexpected status from HEAD request to https:<_> 403 Forbidden" image="us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>`,
- `E0507 11:59:<_> <_> remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code =NotFound desc =an error occurred when try to find container <_> not found" <_>`,
- `E0507 11:59:<_> <_> remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code =NotFound desc =failed to exec in container: failed to load task: no running task found: task <_> not found: not found" <_> cmd=["/bin/hgrun","check"]`,
- `E0507 <_> 4733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with CrashLoopBackOff:\"back-off 5m0s restarting failed container=prometheus pod=bryan-prometheus-0_bryan-prometheus(6dadfe71-eb19-4231-a96e-c64bb5499a1e)\"" pod="bryan-prometheus/bryan-prometheus-0" podUID="6dadfe71-eb19-4231-a96e-c64bb5499a1e"`,
- `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"agent\" with CrashLoopBackOff:\"back-off 5m0s restarting failed container=agent pod=<_> pod=<_> podUID=<_>`,
- `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cortex-gw\" with CrashLoopBackOff:\"back-off 5m0s restarting failed container=cortex-gw pod=<_> pod=<_> podUID=<_>`,
- `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldpinger\" with CrashLoopBackOff:\"back-off 5m0s restarting failed container=goldpinger pod=<_> pod=<_> podUID=<_>`,
- `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"grafana\" with CrashLoopBackOff:\"back-off <_> restarting failed container=grafana pod=<_> pod=<_> podUID=<_>`,
- `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"support-agent\" with CrashLoopBackOff:\"back-off 5m0s restarting failed container=support-agent pod=<_> pod=<_> podUID=<_>`,
- `E0507 <_> <_> prober.go:239] "Unable to write all bytes from execInContainer" err="short write" expectedBytes=<_> actualBytes=10240`,
- `I0507 11:59:29.320184 1537502 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="logs-endpoint-dev-005/kafka-controller-0" secret="" err="secret \"not-needed\" not found"`,
+ `E0507 11:59:35.928465 4734 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[custom-grafana-agent], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="loki-dev-010/custom-grafana-agent-856948968f-6jfks" podUID="17b244cc-ecb9-4fbc-beaa-8fa47fafe013"`,
+ `E0507 11:59:<_>.<_> <_> kuberuntime_manager.go:1256] container &Container{Name:grafana,Image:us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>,Command:[/bin/sh],Args:[-c set -e; while [ "$(pidof hgrun-pause)" = "" ]; do sleep 0.5; done;`,
+ `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"<_>\" with CrashLoopBackOff: \"back-off <_> restarting failed container=<_> pod=<_>(<_>)\"" pod="<_>/<_>" podUID="<_>"`,
+ `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"<_>\" with CreateContainerConfigError: \"secret \\\"<_>\\\" not found\"" pod="<_>/<_>" podUID="<_>"`,
+ `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"<_>\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/hosted-grafana/<_>:<_>.<_>.<_>\\\"\"" pod="<_>/<_>" podUID="<_>"`,
+ `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"<_>\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/kubernetes-dev/<_>:<_>\\\"\"" pod="<_>/<_>" podUID="<_>"`,
+ `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"grafana\" with ErrImagePull: \"[rpc error: code = NotFound desc = failed to pull and unpack image \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\\\": failed to resolve reference \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\\\": us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>: not found, failed to pull and unpack image \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\\\": failed to resolve reference \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\\\": unexpected status from HEAD request to https://us.gcr.io/v2/hosted-grafana/hosted-grafana-pro/manifests/<_>.1.<_>: 403 Forbidden]\"" pod="hosted-grafana/<_>" podUID="<_>"`,
+ `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pdc\" with ErrImageNeverPull: \"Container image \\\"us.gcr.io/hosted-grafana/pdc:0.1.415\\\" is not present with pull policy of Never\"" pod="pdc/<_>" podUID="<_>"`,
+ `E0507 11:59:<_>.<_> <_> prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task <_> not found: not found" probeType="Readiness" pod="hosted-grafana/<_>" podUID="<_>" containerName="grafana"`,
+ `E0507 11:59:<_>.<_> <_> prober.go:239] "Unable to write all bytes from execInContainer" err="short write" expectedBytes=<_> actualBytes=10240`,
+ `E0507 11:59:<_>.<_> <_> remote_image.go:180] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>: not found" image="us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>"`,
+ `E0507 11:59:<_>.<_> <_> remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": unexpected status from HEAD request to https://us.gcr.io/v2/hosted-grafana/hosted-grafana-pro/manifests/<_>.1.<_>: 403 Forbidden" image="us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>"`,
+ `E0507 11:59:<_>.<_> <_> remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"<_>\": not found" containerID="<_>"`,
+ `E0507 11:59:<_>.<_> <_> remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task <_> not found: not found" containerID="<_>" cmd=["/bin/hgrun","check"]`,
`I0507 11:59:31.815514 2791 azure_credentials.go:220] image(us.gcr.io/hosted-grafana/hosted-grafana-pro) is not from ACR, return empty authentication`,
- `I0507 11:59:32.409568 581823 cache.go:40] re-using cached key and certificate`,
- `I0507 11:59:33.422254 1537502 kubelet_getters.go:187] "Pod status updated" pod="kube-system/kube-proxy-gke-dev-us-central-0-main-n2s16-3-1dd-9b502d96-x28r" status="Running"`,
`I0507 11:59:34.518822 3224 kuberuntime_container.go:745] "Killing container with a grace period" pod="hosted-grafana/hosted-grafana-api-7b6bd9b949-9csb4" podUID="25cb986c-3d6c-4ed0-abf3-ee59ed6175f9" containerName="hgapi" containerID="containerd://c91436db00920ec961b9d5d6b4859d80a912e862e34fb5c45d8a85684fe6a97e" gracePeriod=30`,
`I0507 11:59:34.834734 3224 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95j2t\" (UniqueName: \"kubernetes.io/projected/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-kube-api-access-95j2t\") pod \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\" (UID: \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\") "`,
`I0507 11:59:34.834794 3224 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pdc-certs\" (UniqueName: \"kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-pdc-certs\") pod \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\" (UID: \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\") "`,
`I0507 11:59:34.834835 3224 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcs-serviceaccount\" (UniqueName: \"kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-gcs-serviceaccount\") pod \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\" (UID: \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\") "`,
- `I0507 11:59:34.836955 3224 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-pdc-certs" (OuterVolumeSpecName: "pdc-certs") pod "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9" (UID: "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9"). InnerVolumeSpecName "pdc-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""`,
`I0507 11:59:34.841404 3224 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-kube-api-access-95j2t" (OuterVolumeSpecName: "kube-api-access-95j2t") pod "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9" (UID: "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9"). InnerVolumeSpecName "kube-api-access-95j2t". PluginName "kubernetes.io/projected", VolumeGidValue ""`,
- `I0507 11:59:34.841447 3224 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-gcs-serviceaccount" (OuterVolumeSpecName: "gcs-serviceaccount") pod "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9" (UID: "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9"). InnerVolumeSpecName "gcs-serviceaccount". PluginName "kubernetes.io/secret", VolumeGidValue ""`,
- `I0507 11:59:34.854084 4727 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="integration/grafana-render-service-cbff479fc-cj9tp" secret="" err="secret \"us-gcr-io-hosted-grafana\" not found"`,
`I0507 11:59:34.936025 3224 reconciler_common.go:300] "Volume detached for volume \"pdc-certs\" (UniqueName: \"kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-pdc-certs\") on node \"ip-10-60-2-58.us-east-2.compute.internal\" DevicePath \"\""`,
- `I0507 11:59:37.133005 3782 prober.go:107] "Probe failed" probeType="Readiness" pod="loki-dev-014/loki-dev-014-rollout-operator-58fc68b876-2qhmp" podUID="e6504036-2514-4ecc-b78c-c47061f60c9f" containerName="rollout-operator" probeResult="failure" output="HTTP probe failed with statuscode:500"`,
- `I0507 11:59:37.915108 4726 prober.go:107] "Probe failed" probeType="Readiness" pod="agent-management-dev-002/agent-management-api-7ff7b9b9-k9nft" podUID="9893f9ac-f3e4-41fb-8da7-592061d2386c" containerName="agent-management-api" probeResult="failure" output="HTTP probe failed with statuscode:400"`,
+ `I0507 11:59:34.<_> 3224 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/<_>" (OuterVolumeSpecName: "<_>") pod "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9" (UID: "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9"). InnerVolumeSpecName "<_>". PluginName "kubernetes.io/secret", VolumeGidValue ""`,
+ `I0507 11:59:34.<_> 3224 reconciler_common.go:300] "Volume detached for volume \"<_>\" (UniqueName: \"kubernetes.io/<_>/<_>\") on node \"ip-10-60-2-58.us-east-2.compute.internal\" DevicePath \"\""`,
+ `I0507 11:59:37.<_> <_> prober.go:107] "Probe failed" probeType="Readiness" pod="<_>/<_>" podUID="<_>" containerName="<_>" probeResult="failure" output="HTTP probe failed with statuscode: <_>"`,
`I0507 11:59:38.116658 2791 azure_credentials.go:220] image(us.gcr.io/hosted-grafana/hg-plugins) is not from ACR, return empty authentication`,
`I0507 11:59:39.168633 2776 kubelet.go:2493] "SyncLoop (probe)" probe="readiness" status="" pod="hosted-grafana/dafdeveuwest2-grafana-7845d969b5-f8h5q"`,
- `I0507 11:59:39.560605 4739 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="logs-endpoint-dev-005/kafka-exporter-766c6757b5-bggf6" secret="" err="secret \"not-needed\" not found"`,
- `I0507 11:59:<_> 2791 azure_credentials.go:220] image(us.gcr.io/hosted-grafana/hgrun) is not from ACR, return empty authentication`,
- `I0507 11:59:<_> 3224 reconciler_common.go:300] "Volume detached for volume <_> (UniqueName: <_> on node \"ip-10-60-2-58.us-east-2.compute.internal\" DevicePath \"\""`,
- `I0507 11:59:<_> 6247 prober.go:107] "Probe failed" probeType="Readiness" pod="grafana-agent/grafana-agent-helm-4" podUID="c36c5200-1cd6-4093-893c-c022f91af996" containerName="grafana-agent" probeResult="failure" output="Get \"http://10.0.99.125:3090/-/ready\": dial tcp 10.0.99.125:3090: connect: connection refused"`,
- `I0507 11:59:<_> <_> generic.go:334] "Generic (PLEG): container finished" <_> <_> exitCode=1`,
- `I0507 11:59:<_> <_> kubelet.go:<_> "SyncLoop (PLEG): event for pod" <_> event={"ID":<_> "ContainerDied","Data":<_>`,
- `I0507 11:59:<_> <_> kubelet.go:<_> "SyncLoop (PLEG): event for pod" <_> event={"ID":<_> "ContainerStarted","Data":<_>`,
- `I0507 11:59:<_> <_> kubelet.go:<_> "SyncLoop DELETE" source="api" <_>`,
- `I0507 11:59:<_> <_> kubelet.go:<_> "SyncLoop REMOVE" source="api" <_>`,
- `I0507 11:59:<_> <_> kubelet_getters.go:187] "Pod status updated" <_> status="Running"`,
- `I0507 11:59:<_> <_> kubelet_volumes.go:<_> "Cleaned up orphaned pod volumes dir" <_> <_>`,
- `I0507 11:59:<_> <_> pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":<_> err="failed to get container status <_> rpc error: code =NotFound desc =an error occurred when try to find container <_> not found"`,
- `I0507 11:59:<_> <_> scope.go:117] "RemoveContainer" <_>`,
- `I0507 11:59:<_> <_> cache.go:40] re-using cached key and certificate`,
- `I0507 <_> <_> kubelet.go:2498] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod=<_>`,
- `I0507 <_> <_> kubelet.go:2498] "SyncLoop (probe)" probe="readiness" status="ready" pod=<_>`,
- `I0507 <_> <_> kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod=<_> secret="" err="secret \"dockerhub\" not found"`,
- `I0507 <_> <_> kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod=<_> secret="" err="secret \"gcr\" not found"`,
- `I0507 <_> <_> prober.go:107] "Probe failed" probeType="Readiness" pod=<_> podUID=<_> containerName="grafana" probeResult="failure" output=<`,
- `IPv4: martian source <_> from <_> on dev eth0`,
+ `I0507 11:59:<_>.<_> 2791 azure_credentials.go:220] image(us.gcr.io/hosted-grafana/hgrun) is not from ACR, return empty authentication`,
+ `I0507 11:59:<_>.<_> 6247 prober.go:107] "Probe failed" probeType="Readiness" pod="grafana-agent/grafana-agent-helm-4" podUID="c36c5200-1cd6-4093-893c-c022f91af996" containerName="grafana-agent" probeResult="failure" output="Get \"http://10.0.99.125:3090/-/ready\": dial tcp 10.0.99.125:3090: connect: connection refused"`,
+ `I0507 11:59:<_>.<_> <_> generic.go:334] "Generic (PLEG): container finished" podID="<_>" containerID="<_>" exitCode=1`,
+ `I0507 11:59:<_>.<_> <_> kubelet.go:2498] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hosted-grafana/<_>"`,
+ `I0507 11:59:<_>.<_> <_> kubelet.go:2498] "SyncLoop (probe)" probe="readiness" status="ready" pod="hosted-grafana/<_>"`,
+ `I0507 11:59:<_>.<_> <_> kubelet.go:<_>] "SyncLoop (PLEG): event for pod" pod="<_>/<_>" event={"ID":"<_>","Type":"<_>","Data":"<_>"}`,
+ `I0507 11:59:<_>.<_> <_> kubelet.go:<_>] "SyncLoop DELETE" source="api" pods=["hosted-grafana/<_>"]`,
+ `I0507 11:59:<_>.<_> <_> kubelet.go:<_>] "SyncLoop REMOVE" source="api" pods=["hosted-grafana/<_>"]`,
+ `I0507 11:59:<_>.<_> <_> kubelet_getters.go:187] "Pod status updated" pod="kube-system/<_>" status="Running"`,
+ `I0507 11:59:<_>.<_> <_> kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="<_>/<_>" secret="" err="secret \"<_>\" not found"`,
+ `I0507 11:59:<_>.<_> <_> kubelet_volumes.go:<_>] "Cleaned up orphaned pod volumes dir" podUID="<_>" path="/var/lib/kubelet/pods/<_>/volumes"`,
+ `I0507 11:59:<_>.<_> <_> pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"<_>"} err="failed to get container status \"<_>\": rpc error: code = NotFound desc = an error occurred when try to find container \"<_>\": not found"`,
+ `I0507 11:59:<_>.<_> <_> prober.go:107] "Probe failed" probeType="Readiness" pod="hosted-grafana/<_>" podUID="<_>" containerName="grafana" probeResult="failure" output=<`,
+ `I0507 11:59:<_>.<_> <_> scope.go:117] "RemoveContainer" containerID="<_>"`,
+ `I0507 11:59:<_>.<_> <_> cache.go:40] re-using cached key and certificate`,
+ `IPv4: martian source 10.132.<_>.<_> from 10.132.<_>.<_>, on dev eth0`,
`PRC: Renewing lease on eth0.`,
`RCV: Reply message on eth0 from fe80::e9:7eff:fedf:3d37.`,
`Removed slice libcontainer container kubepods-burstable-pod25cb986c_3d6c_4ed0_abf3_ee59ed6175f9.slice.`,
- `Started libcontainer container <_>`,
+ `Started cri-containerd-95bf586cd79d43120ff44582d4dbd2476de61744411f8515b9b2c527a41fd5d9.scope.`,
+ `Started libcontainer container <_>.`,
`XMT: Renew on eth0, interval 9700ms.`,
- `XMT: Solicit on eth0, interval <_>`,
- `audit:type=1400 <_> apparmor="DENIED" operation="ptrace" profile="cri-containerd.apparmor.d" pid=<_> comm="pidof" requested_mask="read" denied_mask="read" peer="unconfined"`,
+ `XMT: Solicit on eth0, interval <_>.`,
+ `audit: type=1400 audit(<_>.<_>:<_>): apparmor="DENIED" operation="ptrace" profile="cri-containerd.apparmor.d" pid=<_> comm="pidof" requested_mask="read" denied_mask="read" peer="unconfined"`,
`kauditd_printk_skb: <_> callbacks suppressed`,
`ll header: 00000000: 42 01 0a 80 00 <_> 42 01 0a 80 00 01 08 00`,
`net_ratelimit: 2 callbacks suppressed`,
- `time="2024-05-07T11:59:32.755926053Z" level=info msg="CreateContainer within sandbox \"81e019a0248a0300a328fd59f9939c3eaa1b98aa7f325a7f6e00592633275ef6\" for container &ContainerMetadata{Name:checkoutservice,Attempt:3417,}"`,
+ `run-containerd-io.containerd.runtime.v2.task-k8s.<_>.mount: Deactivated successfully.`,
+ `run-containerd-runc-k8s.io-e5f17d69eee483ec8d43b26d5d628246984ba92f794ee5f3748935f5b6448b9b-runc.6eAyHn.mount: Deactivated successfully.`,
`time="2024-05-07T11:59:34.519591759Z" level=info msg="StopContainer for \"c91436db00920ec961b9d5d6b4859d80a912e862e34fb5c45d8a85684fe6a97e\" with timeout 30 (s)"`,
`time="2024-05-07T11:59:34.520032214Z" level=info msg="Stop container \"c91436db00920ec961b9d5d6b4859d80a912e862e34fb5c45d8a85684fe6a97e\" with signal terminated"`,
`time="2024-05-07T11:59:34.591282703Z" level=info msg="StopContainer for \"c91436db00920ec961b9d5d6b4859d80a912e862e34fb5c45d8a85684fe6a97e\" returns successfully"`,
@@ -189,34 +167,33 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
`time="2024-05-07T11:59:34.592084495Z" level=info msg="Container to stop \"c91436db00920ec961b9d5d6b4859d80a912e862e34fb5c45d8a85684fe6a97e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""`,
`time="2024-05-07T11:59:34.706960850Z" level=info msg="TearDown network for sandbox \"c605ad2cdc74c6b5288f2532ad71cce81a28ef6965f97a89ff6609deb825553a\" successfully"`,
`time="2024-05-07T11:59:34.707025668Z" level=info msg="StopPodSandbox for \"c605ad2cdc74c6b5288f2532ad71cce81a28ef6965f97a89ff6609deb825553a\" returns successfully"`,
- `time="2024-05-07T11:59:36.177858616Z" level=info msg="CreateContainer within sandbox \"81e019a0248a0300a328fd59f9939c3eaa1b98aa7f325a7f6e00592633275ef6\" for &ContainerMetadata{Name:checkoutservice,Attempt:3417,} returns container id \"95bf586cd79d43120ff44582d4dbd2476de61744411f8515b9b2c527a41fd5d9\""`,
- `time="2024-05-07T11:59:38.484586527Z" level=error msg="Failed to delete exec process \"d9e0a1867ce73695ad859f2b0a76fe8f5053db8a5e49142d747e53a445729bd4\" for container \"6ad3e55547f2192f865518e75009243418b177091c1c781236e2ac8f0324b408\"" error="ttrpc:closed:unknown"`,
- `time="2024-05-07T11:59:43.941729092Z" level=info msg="CreateContainer within sandbox \"ee9dc07bca79ef7dffe2a6eb326e27236e9e97c35913c7aae16ee0a62632fc25\" for container &ContainerMetadata{Name:cortex-gw,Attempt:1660,}"`,
- `time="2024-05-07T11:59:43.954289531Z" level=info msg="CreateContainer within sandbox \"ee9dc07bca79ef7dffe2a6eb326e27236e9e97c35913c7aae16ee0a62632fc25\" for &ContainerMetadata{Name:cortex-gw,Attempt:1660,} returns container id \"93fa5decd62691912f90c9b27526f5e00183239bfa4d3f4ea8578a7873b9c2b4\""`,
- `time="2024-05-07T11:59:<_> level=error msg="ExecSync for <_> failed" error="rpc error: code =NotFound desc =failed to exec in container: failed to load task: no running task found: task <_> not found: not found"`,
- `time="2024-05-07T11:59:<_> level=error msg="PullImage \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> failed" error="failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> unexpected status from HEAD request to https:<_> 403 Forbidden"`,
- `time="2024-05-07T11:59:<_> level=error msg="PullImage \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> failed" error="rpc error: code =NotFound desc =failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> us.gcr.io/hosted-grafana/hosted-grafana-pro:<_> not found"`,
- `time="2024-05-07T11:59:<_> level=info msg="CreateContainer within sandbox <_> for &ContainerMetadata{Name:grafana,Attempt:<_> returns container id <_>`,
- `time="2024-05-07T11:59:<_> level=info msg="CreateContainer within sandbox <_> for &ContainerMetadata{Name:hgrun,Attempt:0,} returns container id <_>`,
- `time="2024-05-07T11:59:<_> level=info msg="CreateContainer within sandbox <_> for container &ContainerMetadata{Name:grafana,Attempt:<_>`,
- `time="2024-05-07T11:59:<_> level=info msg="CreateContainer within sandbox <_> for container &ContainerMetadata{Name:hgrun,Attempt:0,}"`,
- `time="2024-05-07T11:59:<_> level=info msg="ImageCreate event name:<_> <_> labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
- `time="2024-05-07T11:59:<_> level=info msg="ImageUpdate event name:<_> <_> labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
- `time="2024-05-07T11:59:<_> level=info msg="PullImage \"us.gcr.io/hosted-grafana/hgrun:0.1.452\" returns image reference \"sha256:9fb1bce3e4a228f50768d21842cd7d7fafc1d586eaa0326c9d3c86d79a36868a\""`,
- `time="2024-05-07T11:59:<_> level=info msg="PullImage \"us.gcr.io/hosted-grafana/hosted-grafana-pro:11.1.0-70397\" returns image reference \"sha256:0036b00b52fc547c944c1c820817d91fba6e20775cbf4e6c3e09ad2e682dbd73\""`,
- `time="2024-05-07T11:59:<_> level=info msg="Pulled image \"us.gcr.io/hosted-grafana/hgrun:0.1.452\" with image id \"sha256:9fb1bce3e4a228f50768d21842cd7d7fafc1d586eaa0326c9d3c86d79a36868a\", repo tag \"us.gcr.io/hosted-grafana/hgrun:0.1.452\", repo digest \"us.gcr.io/hosted-grafana/hgrun@sha256:b492dbbbee9faf9dba63c9fd89e6f9e148239765454c6a54c4284a2828dec153\", size \"19109699\" in <_>`,
- `time="2024-05-07T11:59:<_> level=info msg="Pulled image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:11.1.0-70397\" with image id \"sha256:0036b00b52fc547c944c1c820817d91fba6e20775cbf4e6c3e09ad2e682dbd73\", repo tag \"us.gcr.io/hosted-grafana/hosted-grafana-pro:11.1.0-70397\", repo digest \"us.gcr.io/hosted-grafana/hosted-grafana-pro@sha256:0853965a142fb95648de3281a7c71de0d05fb51616bc32b523dc2f1da6ca06dc\", size \"173405048\" in <_>`,
- `time=<_> level=error msg="ContainerStatus for <_> failed" error="rpc error:code = NotFound desc = an error occurred when try to find container <_> not found"`,
- `time=<_> level=info msg="PullImage <_>`,
- `time=<_> level=info msg="RemoveContainer for <_>`,
- `time=<_> level=info msg="RemoveContainer for <_> returns successfully"`,
- `time=<_> level=info msg="StartContainer for <_>`,
- `time=<_> level=info msg="StartContainer for <_> returns successfully"`,
- `time=<_> level=info msg="cleaning up dead shim" namespace=k8s.io`,
- `time=<_> level=info msg="shim disconnected" id=<_> namespace=k8s.io`,
- `time=<_> level=info msg="stop pulling image <_> active requests=0, bytes read=<_>`,
- `time=<_> level=info msg="trying next host - response was http.StatusNotFound" host=us.gcr.io`,
- `time=<_> level=warning msg="cleaning up after shim disconnected" id=<_> namespace=k8s.io`,
+ `time="2024-05-07T11:59:38.117772842Z" level=info msg="PullImage \"us.gcr.io/hosted-grafana/hg-plugins:2024-05-07-v545244-f51851984\""`,
+ `time="2024-05-07T11:59:38.484586527Z" level=error msg="Failed to delete exec process \"d9e0a1867ce73695ad859f2b0a76fe8f5053db8a5e49142d747e53a445729bd4\" for container \"6ad3e55547f2192f865518e75009243418b177091c1c781236e2ac8f0324b408\"" error="ttrpc: closed: unknown"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=error msg="ContainerStatus for \"<_>\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"<_>\": not found"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=error msg="ExecSync for \"<_>\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task <_> not found: not found"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=error msg="PullImage \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\" failed" error="failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": unexpected status from HEAD request to https://us.gcr.io/v2/hosted-grafana/hosted-grafana-pro/manifests/<_>.1.<_>: 403 Forbidden"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=error msg="PullImage \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>: not found"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="CreateContainer within sandbox \"<_>\" for &ContainerMetadata{Name:<_>,Attempt:<_>,} returns container id \"<_>\""`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="CreateContainer within sandbox \"<_>\" for container &ContainerMetadata{Name:<_>,Attempt:<_>,}"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageCreate event name:\"sha256:<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageCreate event name:\"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageCreate event name:\"us.gcr.io/hosted-grafana/<_>@sha256:<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageUpdate event name:\"sha256:<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageUpdate event name:\"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageUpdate event name:\"us.gcr.io/hosted-grafana/<_>@sha256:<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="PullImage \"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\" returns image reference \"sha256:<_>\""`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="PullImage \"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\""`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="Pulled image \"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\" with image id \"sha256:<_>\", repo tag \"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\", repo digest \"us.gcr.io/hosted-grafana/<_>@sha256:<_>\", size \"<_>\" in <_>.<_>"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="RemoveContainer for \"<_>\" returns successfully"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="RemoveContainer for \"<_>\""`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="StartContainer for \"<_>\" returns successfully"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="StartContainer for \"<_>\""`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="cleaning up dead shim" namespace=k8s.io`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="shim disconnected" id=<_> namespace=k8s.io`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="stop pulling image us.gcr.io/hosted-grafana/<_>:<_>.1.<_>: active requests=0, bytes read=<_>"`,
+ `time="2024-05-07T11:59:<_>.<_>" level=info msg="trying next host - response was http.StatusNotFound" host=us.gcr.io`,
+ `time="2024-05-07T11:59:<_>.<_>" level=warning msg="cleaning up after shim disconnected" id=<_> namespace=k8s.io`,
+ `var-lib-containerd-tmpmounts-containerd\<_>.mount: Deactivated successfully.`,
},
},
{
@@ -224,21 +201,19 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
inputFile: "testdata/kafka.txt",
patterns: []string{
`[2024-05-07 10:55:40,626] INFO [LocalLog partition=ingest-6, dir=/bitnami/kafka/data] Deleting segment files LogSegment(baseOffset=180391157, size=16991045, lastModifiedTime=1715075754780, largestRecordTimestamp=Some(1715075754774)),LogSegment(baseOffset=180393429, size=16997692, lastModifiedTime=1715075760206, largestRecordTimestamp=Some(1715075760186)),LogSegment(baseOffset=180395889, size=16998200, lastModifiedTime=1715075765542, largestRecordTimestamp=Some(1715075765526)),LogSegment(baseOffset=180398373, size=16977347, lastModifiedTime=1715075770515, largestRecordTimestamp=Some(1715075770504)) (kafka.log.LocalLog$)`,
- `[2024-05-07 10:55:40,638] INFO [LocalLog partition=ingest-6, dir=/bitnami/kafka/data] Deleting segment files LogSegment(baseOffset=180400817, size=16997594, lastModifiedTime=1715075775780, largestRecordTimestamp=Some(1715075775771)),LogSegment(baseOffset=180403261, size=16992344, lastModifiedTime=1715075781053, largestRecordTimestamp=Some(1715075781021)),LogSegment(baseOffset=180405723, size=16989895, lastModifiedTime=1715075786205, largestRecordTimestamp=Some(1715075786174)),LogSegment(baseOffset=180408118, size=16998698, lastModifiedTime=1715075791681, largestRecordTimestamp=Some(1715075791673)),LogSegment(baseOffset=180410608, size=16995676, lastModifiedTime=1715075796438, largestRecordTimestamp=Some(1715075796430)),LogSegment(baseOffset=180412733, size=16963278, lastModifiedTime=1715075800534, largestRecordTimestamp=Some(1715075800511)),LogSegment(baseOffset=180414883, size=16984328, lastModifiedTime=1715075805272, largestRecordTimestamp=Some(1715075805230)),LogSegment(baseOffset=180417063, size=16989109, lastModifiedTime=1715075810381, largestRecordTimestamp=Some(1715075810372)),LogSegment(baseOffset=180419267, size=16996871, lastModifiedTime=1715075815153, largestRecordTimestamp=Some(1715075815125)),LogSegment(baseOffset=180421560, size=16988558, lastModifiedTime=1715075819785, largestRecordTimestamp=Some(1715075819763)),LogSegment(baseOffset=180424008, size=16999292, lastModifiedTime=1715075825336, largestRecordTimestamp=Some(1715075825303)),LogSegment(baseOffset=180426459, size=16990595, lastModifiedTime=1715075830839, largestRecordTimestamp=Some(1715075830827)),LogSegment(baseOffset=180428944, size=16995859, lastModifiedTime=1715075835942, largestRecordTimestamp=Some(1715075835904)),LogSegment(baseOffset=180431327, size=16992294, lastModifiedTime=1715075841219, largestRecordTimestamp=Some(1715075841214)),LogSegment(baseOffset=180433867, size=16966736, lastModifiedTime=1715075846443, largestRecordTimestamp=Some(1715075846401)),LogSegment(baseOffset=180436204, size=16894731, lastModifiedTime=1715075853273, largestRecordTimestamp=Some(1715075853244)),LogSegment(baseOffset=180438984, size=16983529, lastModifiedTime=1715075858911, largestRecordTimestamp=Some(1715075858891)),LogSegment(baseOffset=180441466, size=16996933, lastModifiedTime=1715075863566, largestRecordTimestamp=Some(1715075863554)),LogSegment(baseOffset=180443778, size=16999841, lastModifiedTime=1715075866199, largestRecordTimestamp=Some(1715075866185)),LogSegment(baseOffset=180445367, size=16992471, lastModifiedTime=1715075870385, largestRecordTimestamp=Some(1715075870347)),LogSegment(baseOffset=180447366, size=16999996, lastModifiedTime=1715075875102, largestRecordTimestamp=Some(1715075875091)),LogSegment(baseOffset=180449601, size=16994426, lastModifiedTime=1715075879927, largestRecordTimestamp=Some(1715075879926)),LogSegment(baseOffset=180452079, size=16998020, lastModifiedTime=1715075885293, largestRecordTimestamp=Some(1715075885263)),LogSegment(baseOffset=180454546, size=16992231, lastModifiedTime=1715075890424, largestRecordTimestamp=Some(1715075890409)),LogSegment(baseOffset=180456986, size=16970315, lastModifiedTime=1715075895719, largestRecordTimestamp=Some(1715075895690)),LogSegment(baseOffset=180459366, size=16990785, lastModifiedTime=1715075900996, largestRecordTimestamp=Some(1715075900985)),LogSegment(baseOffset=180461885, size=16996655, lastModifiedTime=1715075905847, largestRecordTimestamp=Some(1715075905841)),LogSegment(baseOffset=180464299, size=16982181, lastModifiedTime=1715075911052, largestRecordTimestamp=Some(1715075911028)),LogSegment(baseOffset=180466821, size=16997630, lastModifiedTime=1715075915962, largestRecordTimestamp=Some(1715075915953)),LogSegment(baseOffset=180468968, size=16995723, lastModifiedTime=1715075920325, largestRecordTimestamp=Some(1715075920308)),LogSegment(baseOffset=180471046, size=16979316, lastModifiedTime=1715075924724, largestRecordTimestamp=Some(1715075924697)),LogSegment(baseOffset=180473259, size=16995238, lastModifiedTime=1715075929645, largestRecordTimestamp=Some(1715075929624)),LogSegment(baseOffset=180475486, size=16988461, lastModifiedTime=1715075934288, largestRecordTimestamp=Some(1715075934283)),LogSegment(baseOffset=180477735, size=16993767, lastModifiedTime=1715075939277, largestRecordTimestamp=Some(1715075939270)),LogSegment(baseOffset=180480095, size=16995409, lastModifiedTime=1715075944639, largestRecordTimestamp=Some(1715075944635)),LogSegment(baseOffset=180482560, size=16992784, lastModifiedTime=1715075949760, largestRecordTimestamp=Some(1715075949760)),LogSegment(baseOffset=180484967, size=16990838, lastModifiedTime=1715075954937, largestRecordTimestamp=Some(1715075954929)),LogSegment(baseOffset=180487377, size=16976794, lastModifiedTime=1715075960151, largestRecordTimestamp=Some(1715075960119)),LogSegment(baseOffset=180489919, size=16997379, lastModifiedTime=1715075965116, largestRecordTimestamp=Some(1715075965085)),LogSegment(baseOffset=180492304, size=16956613, lastModifiedTime=1715075970448, largestRecordTimestamp=Some(1715075970424)),LogSegment(baseOffset=180494832, size=16895640, lastModifiedTime=1715075975354, largestRecordTimestamp=Some(1715075975341)),LogSegment(baseOffset=180496930, size=16998328, lastModifiedTime=1715075979813, largestRecordTimestamp=Some(1715075979796)),LogSegment(baseOffset=180499079, size=16995699, lastModifiedTime=1715075984309, largestRecordTimestamp=Some(1715075984285)),LogSegment(baseOffset=180501183, size=16993785, lastModifiedTime=1715075989086, largestRecordTimestamp=Some(1715075989064)),LogSegment(baseOffset=180503431, size=16989600, lastModifiedTime=1715075993713, largestRecordTimestamp=Some(1715075993683)),LogSegment(baseOffset=180505674, size=16984790, lastModifiedTime=1715075998337, largestRecordTimestamp=Some(1715075998318)),LogSegment(baseOffset=180508022, size=16982630, lastModifiedTime=1715076003671, largestRecordTimestamp=Some(1715076003660)),LogSegment(baseOffset=180510439, size=16999488, lastModifiedTime=1715076009000, largestRecordTimestamp=Some(1715076008996)),LogSegment(baseOffset=180512848, size=16997845, lastModifiedTime=1715076014033, largestRecordTimestamp=Some(1715076014032)),LogSegment(baseOffset=180515281, size=16990661, lastModifiedTime=1715076019245, largestRecordTimestamp=Some(1715076019216)),LogSegment(baseOffset=180517815, size=16996244, lastModifiedTime=1715076023989, largestRecordTimestamp=Some(1715076023963)),LogSegment(baseOffset=180520112, size=16992012, lastModifiedTime=1715076029243, largestRecordTimestamp=Some(1715076029231)) (kafka.log.LocalLog$)`,
`[2024-05-07 10:55:53,038] INFO [LocalLog partition=mimir-dev-09-aggregations-offsets-1, dir=/bitnami/kafka/data] Deleting segment files LogSegment(baseOffset=447957, size=948, lastModifiedTime=1715059232052, largestRecordTimestamp=Some(1715059232002)),LogSegment(baseOffset=447969, size=948, lastModifiedTime=1715059424352, largestRecordTimestamp=Some(1715059424301)) (kafka.log.LocalLog$)`,
- `[2024-05-07 10:55:<_> INFO Deleted log <_> (kafka.log.LogSegment)`,
- `[2024-05-07 10:55:<_> INFO Deleted offset index <_> (kafka.log.LogSegment)`,
- `[2024-05-07 10:55:<_> INFO Deleted producer state snapshot <_> (kafka.log.SnapshotFile)`,
- `[2024-05-07 10:55:<_> INFO Deleted time index <_> (kafka.log.LogSegment)`,
- `[2024-05-07 10:55:<_> INFO [ProducerStateManager <_> Wrote producer snapshot at offset <_> with 0 producer ids in <_> ms. (kafka.log.ProducerStateManager)`,
- `[2024-05-07 <_> INFO [LocalLog partition=<_> dir=/bitnami/kafka/data] Deleting segment files LogSegment(baseOffset=<_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> (kafka.log.LocalLog$)`,
- `[2024-05-07 <_> INFO [LocalLog partition=<_> dir=/bitnami/kafka/data] Rolled new log segment at offset <_> in <_> ms. (kafka.log.LocalLog)`,
- `[2024-05-07 <_> INFO [LocalLog partition=mimir-dev-09-aggregations-offsets-0, dir=/bitnami/kafka/data] Deleting segment files LogSegment(baseOffset=<_> size=948, lastModifiedTime=<_> largestRecordTimestamp=<_> (kafka.log.LocalLog$)`,
- `[2024-05-07 <_> INFO [UnifiedLog partition=<_> dir=/bitnami/kafka/data] Deleting segment LogSegment(baseOffset=<_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> due to retention size <_> breach. Log size after deletion will be <_> (kafka.log.UnifiedLog)`,
- `[2024-05-07 <_> INFO [UnifiedLog partition=<_> dir=/bitnami/kafka/data] Deleting segments due to log start offset <_> breach:LogSegment(baseOffset=<_> size=948, lastModifiedTime=<_> largestRecordTimestamp=<_> <_> size=948, lastModifiedTime=<_> largestRecordTimestamp=<_> (kafka.log.UnifiedLog)`,
- `[2024-05-07 <_> INFO [UnifiedLog partition=<_> dir=/bitnami/kafka/data] Deleting segments due to log start offset <_> breach:LogSegment(baseOffset=<_> size=<_> lastModifiedTime=<_> largestRecordTimestamp=<_> (kafka.log.UnifiedLog)`,
- `[2024-05-07 <_> INFO [UnifiedLog partition=<_> dir=/bitnami/kafka/data] Incremented log start offset to <_> due to leader offset increment (kafka.log.UnifiedLog)`,
- `[2024-05-07 <_> INFO [UnifiedLog partition=<_> dir=/bitnami/kafka/data] Incremented log start offset to <_> due to segment deletion (kafka.log.UnifiedLog)`,
+ `[2024-05-07 10:55:53,<_>] INFO [LocalLog partition=mimir-dev-09-aggregations-offsets-0, dir=/bitnami/kafka/data] Deleting segment files LogSegment(baseOffset=<_>, size=948, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) (kafka.log.LocalLog$)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO Deleted log /bitnami/kafka/data/<_>/<_>.log.deleted. (kafka.log.LogSegment)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO Deleted offset index /bitnami/kafka/data/<_>/<_>.index.deleted. (kafka.log.LogSegment)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO Deleted producer state snapshot /bitnami/kafka/data/<_>/<_>.snapshot.deleted (kafka.log.SnapshotFile)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO Deleted time index /bitnami/kafka/data/<_>/<_>.timeindex.deleted. (kafka.log.LogSegment)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO [LocalLog partition=<_>, dir=/bitnami/kafka/data] Rolled new log segment at offset <_> in <_> ms. (kafka.log.LocalLog)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO [ProducerStateManager partition=<_>] Wrote producer snapshot at offset <_> with 0 producer ids in <_> ms. (kafka.log.ProducerStateManager)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Deleting segment LogSegment(baseOffset=<_>, size=<_>, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) due to retention size <_> breach. Log size after deletion will be <_>. (kafka.log.UnifiedLog)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Deleting segments due to log start offset <_> breach: LogSegment(baseOffset=<_>, size=948, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)),LogSegment(baseOffset=<_>, size=948, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) (kafka.log.UnifiedLog)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Deleting segments due to log start offset <_> breach: LogSegment(baseOffset=<_>, size=<_>, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) (kafka.log.UnifiedLog)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Incremented log start offset to <_> due to leader offset increment (kafka.log.UnifiedLog)`,
+ `[2024-05-07 10:55:<_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Incremented log start offset to <_> due to segment deletion (kafka.log.UnifiedLog)`,
},
},
{
@@ -246,20 +221,22 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
inputFile: "testdata/kubernetes.txt",
patterns: []string{
`I0507 12:02:27.947830 1 nodeutilization.go:274] "Evicting pods based on priority, if they have same priority, they'll be evicted based on QoS tiers"`,
- `I0507 12:02:<_> 1 defaultevictor.go:163] "pod does not fit on any other node because of nodeSelector(s), Taint(s), or nodes marked as unschedulable" <_>`,
- `I0507 12:02:<_> 1 defaultevictor.go:202] "Pod fails the following checks" <_> checks="[pod is a DaemonSet pod, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:02:<_> 1 defaultevictor.go:202] "Pod fails the following checks" <_> checks="pod has local storage and descheduler is not configured with evictLocalStoragePods"`,
- `I0507 12:02:<_> 1 defaultevictor.go:202] "Pod fails the following checks" <_> checks="pod is a DaemonSet pod"`,
- `I0507 12:02:<_> 1 node.go:157] "Pod does not fit on any other node" pod:<_> node:<_> error:="[pod node selector does not match the node label, insufficient <_>`,
- `I0507 12:02:<_> 1 node.go:157] "Pod does not fit on any other node" pod:<_> node:<_> error:="[pod node selector does not match the node label, insufficient <_> insufficient <_>`,
- `I0507 12:02:<_> 1 node.go:157] "Pod does not fit on any other node" pod:<_> node:<_> error:="[pod node selector does not match the node label, insufficient <_> insufficient <_> insufficient pods]"`,
- `I0507 12:02:<_> 1 node.go:157] "Pod does not fit on any other node" pod:<_> node:<_> error:="[pod node selector does not match the node label, pod does not tolerate taints on the node, insufficient <_>`,
- `I0507 12:02:<_> 1 node.go:157] "Pod does not fit on any other node" pod:<_> node:<_> error:="[pod node selector does not match the node label, pod does not tolerate taints on the node, insufficient <_> insufficient <_>`,
- `I0507 12:02:<_> 1 node.go:157] "Pod does not fit on any other node" pod:<_> node:<_> error:="[pod node selector does not match the node label, pod does not tolerate taints on the node]"`,
- `I0507 12:02:<_> 1 node.go:157] "Pod does not fit on any other node" pod:<_> node:<_> error:="insufficient cpu"`,
- `I0507 12:02:<_> 1 node.go:157] "Pod does not fit on any other node" pod:="loki-dev-005/querier-burst-6b5f6db455-5zvkm" node:<_> error:="[insufficient <_> insufficient <_>`,
- `I0507 12:02:<_> 1 node.go:157] "Pod does not fit on any other node" pod:="loki-dev-005/querier-burst-6b5f6db455-5zvkm" node:<_> error:="pod node selector does not match the node label"`,
- `I0507 12:02:<_> 1 node.go:339] "no Pod antiaffinity rule found" <_>`,
+ `I0507 12:02:27.<_> 1 defaultevictor.go:163] "pod does not fit on any other node because of nodeSelector(s), Taint(s), or nodes marked as unschedulable" pod="<_>/<_>"`,
+ `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="<_>/<_>" checks="pod has local storage and descheduler is not configured with evictLocalStoragePods"`,
+ `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="ge-logs/<_>" checks="[pod is a DaemonSet pod, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="insight-logs/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="loki-dev-ssd/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="promtail-ops/<_>" checks="[pod is a DaemonSet pod, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="pyroscope-ebpf/<_>" checks="pod is a DaemonSet pod"`,
+ `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, <_> <_><_> <_> <_><_> <_> <_>]"`,
+ `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, insufficient <_>, insufficient <_>]"`,
+ `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, insufficient <_>]"`,
+ `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, pod does not tolerate taints on the node, insufficient <_>, insufficient <_>]"`,
+ `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, pod does not tolerate taints on the node, insufficient <_>]"`,
+ `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="insufficient cpu"`,
+ `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="loki-dev-005/querier-burst-6b5f6db455-5zvkm" node:="<_>" error:="[insufficient <_>, insufficient <_>]"`,
+ `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="loki-dev-005/querier-burst-6b5f6db455-5zvkm" node:="<_>" error:="pod node selector does not match the node label"`,
+ `I0507 12:02:27.<_> 1 node.go:339] "no Pod antiaffinity rule found" pod="<_>/<_>"`,
`I0507 12:04:17.595169 1 descheduler.go:155] Building a pod evictor`,
`I0507 12:04:17.596431 1 nodeutilization.go:204] "Node is underutilized" node="gke-dev-eu-west-3-main-n2s8-1-1dd39c-d1c92061-4z2l" usage={"cpu":"984m","memory":"611Mi","pods":"16"} usagePercentage={"cpu":12.44,"memory":2.15,"pods":25}`,
`I0507 12:04:17.596484 1 highnodeutilization.go:107] "Criteria for a node below target utilization" CPU=50 Mem=50 Pods=100`,
@@ -267,26 +244,33 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
`I0507 12:04:17.596528 1 nodeutilization.go:260] "Total capacity to be moved" CPU=5060 Mem=112216292800 Pods=163`,
`I0507 12:04:17.596651 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/metrics-server-v0.6.3-68f5b7c4d5-t5mz8" checks="[pod has system critical priority, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
`I0507 12:04:17.596803 1 defaultevictor.go:202] "Pod fails the following checks" pod="gadget/gadget-zjjts" checks="[pod is a DaemonSet pod, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:04:<_> 1 nodeutilization.go:207] "Node is overutilized" <_> usage={"cpu":<_> <_> <_> usagePercentage={"cpu":<_> <_> <_>`,
- `I0507 12:<_> <_> 1 defaultevictor.go:202] "Pod fails the following checks" <_> checks="[pod has system critical priority, pod has higher priority than specified priority class threshold]"`,
- `I0507 12:<_> <_> 1 defaultevictor.go:202] "Pod fails the following checks" <_> checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:<_> <_> 1 defaultevictor.go:202] "Pod fails the following checks" <_> checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
- `I0507 12:<_> <_> 1 defaultevictor.go:202] "Pod fails the following checks" <_> checks="[pod is a DaemonSet pod, pod has system critical priority, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:<_> <_> 1 defaultevictor.go:202] "Pod fails the following checks" <_> checks="[pod is a DaemonSet pod, pod has system critical priority, pod has higher priority than specified priority class threshold]"`,
- `I0507 12:<_> <_> 1 defaultevictor.go:202] "Pod fails the following checks" <_> checks="[pod is a mirror pod, pod is a static pod, pod has system critical priority, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:<_> <_> 1 descheduler.go:<_> "Number of evicted pods" <_>`,
- `I0507 12:<_> <_> 1 nodeutilization.go:<_> "Evicting pods from node" <_> usage={"cpu":<_> <_> <_>`,
- `I0507 12:<_> <_> 1 nodeutilization.go:<_> "No removable pods on node, try next node" <_>`,
- `I0507 12:<_> <_> 1 profile.go:<_> "Total number of pods evicted" extension point="Balance" <_>`,
- `I0507 12:<_> <_> 1 reflector.go:<_> k8s.io/client-go/informers/factory.go:<_> Watch close - <_> total <_> items received`,
- `I0507 <_> 1 <_> "Pods on node" node=<_> allPods=<_> nonRemovablePods=<_> removablePods=<_>`,
+ `I0507 12:04:17.<_> 1 nodeutilization.go:207] "Node is overutilized" node="<_>" usage={"cpu":"<_>","memory":"<_>","pods":"<_>"} usagePercentage={"cpu":<_>.<_>,"memory":<_>.<_>,"pods":<_>.<_>}`,
+ `I0507 12:04:17.<_> 1 nodeutilization.go:207] "Node is overutilized" node="<_>" usage={"cpu":"<_>","memory":"<_>","pods":"<_>"} usagePercentage={"cpu":<_>.<_>,"memory":<_>.<_>,"pods":<_>}`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="agent-logs/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="conntrack-exporter/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="goldpinger/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod has system critical priority, pod has higher priority than specified priority class threshold]"`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod is a DaemonSet pod, pod has system critical priority, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod is a DaemonSet pod, pod has system critical priority, pod has higher priority than specified priority class threshold]"`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod is a mirror pod, pod is a static pod, pod has system critical priority, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="netfilter-exporter/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="node-exporter/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="promtail-ops/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
+ `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="startup/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
+ `I0507 12:<_>:<_>.<_> 1 descheduler.go:<_>] "Number of evicted pods" totalEvicted=<_>`,
+ `I0507 12:<_>:<_>.<_> 1 nodeutilization.go:<_>] "Evicting pods from node" node="<_>" usage={"cpu":"<_>","memory":"<_>","pods":"<_>"}`,
+ `I0507 12:<_>:<_>.<_> 1 nodeutilization.go:<_>] "No removable pods on node, try next node" node="<_>"`,
+ `I0507 12:<_>:<_>.<_> 1 nodeutilization.go:<_>] "Pods on node" node="<_>" allPods=<_> nonRemovablePods=<_> removablePods=<_>`,
+ `I0507 12:<_>:<_>.<_> 1 profile.go:<_>] "Total number of pods evicted" extension point="Balance" evictedPods=<_>`,
+ `I0507 12:<_>:<_>.<_> 1 reflector.go:<_>] k8s.io/client-go/informers/factory.go:<_>: Watch close - *v1.<_> total <_> items received`,
},
},
{
drain: New(DefaultConfig(), nil),
inputFile: "testdata/vault.txt",
patterns: []string{
- `2024-05-07T10:<_> <_> [INFO] expiration: revoked lease: <_>`,
+ `2024-05-07T10:56:38.667Z [INFO] expiration: revoked lease: lease_id=auth/gcp/login/h4c031a99aa555040a0dd99864d828e946c6d4e31f4f5178757183def61f9d104`,
+ `2024-05-07T10:<_>:<_>.<_> [INFO] expiration: revoked lease: lease_id=auth/kubernetes/<_>/login/<_>`,
},
},
{
@@ -294,86 +278,129 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
inputFile: "testdata/calico.txt",
patterns: []string{
`2024-05-08 15:23:56.403 [DEBUG][615489] felix/table.go 699: Finished loading iptables state ipVersion=0x4 table="filter"`,
+ `2024-05-08 15:23:56.403 [INFO][615489] felix/summary.go 100: Summarising 1 dataplane reconciliation loops over 600ms: avg=119ms longest=119ms (resync-filter-v4)`,
`2024-05-08 15:23:56.614 [DEBUG][76] felix/int_dataplane.go 1777: Refreshing routes`,
`2024-05-08 15:23:56.615 [DEBUG][76] felix/route_rule.go 179: Queueing a resync of routing rules. ipVersion=4`,
- `2024-05-08 15:23:56.615 [DEBUG][76] felix/route_table.go 480:Queueing a resync of routing table. ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 15:23:56.615 [DEBUG][76] felix/route_table.go 480:Queueing a resync of routing table. ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1`,
+ `2024-05-08 15:23:56.615 [DEBUG][76] felix/route_table.go 480: Queueing a resync of routing table. ifaceRegex="<_>.<_>" ipVersion=0x4 tableIndex=<_>`,
`2024-05-08 15:23:56.615 [DEBUG][76] felix/route_table.go 533: Check interfaces matching regex`,
`2024-05-08 15:23:56.615 [DEBUG][76] felix/wireguard.go 605: Queueing a resync of wireguard configuration ipVersion=0x4`,
`2024-05-08 15:23:56.615 [DEBUG][76] felix/wireguard.go 654: Wireguard is not in-sync - verifying wireguard configuration is removed ipVersion=0x4`,
`2024-05-08 15:23:56.617 [DEBUG][76] felix/wireguard.go 1503: Wireguard is disabled and does not exist ifaceName="wireguard.cali" ipVersion=0x4`,
`2024-05-08 15:23:56.619 [DEBUG][76] felix/route_table.go 584: Flag no OIF for full re-sync`,
- `2024-05-08 15:23:56.619 [DEBUG][76] felix/route_table.go 614:Synchronised routes on interface ifaceName="*NoOIF*" ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1`,
- `2024-05-08 15:23:56.619 [DEBUG][76] felix/route_table.go 661:Syncing interface routes ifaceName="*NoOIF*" ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1`,
- `2024-05-08 15:23:56.619 [DEBUG][76] felix/route_table.go 686:Reconcile against kernel programming ifaceName="*NoOIF*" ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1`,
- `2024-05-08 15:23:57.942 [WARNING][56] felix/table.go 654:Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "", "", "", "", "", "", "", "", "tVnHkvAo15HuiPy0", "", ""} chainName="OUTPUT" expectedRuleIDs=[]string{"tVnHkvAo15HuiPy0", "", "", "", "", "", "", "", "", "", "", "", "", "", ""} ipVersion=0x4 table="raw"`,
- `2024-05-08 15:23:57.942 [WARNING][56] felix/table.go 654:Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "6gwbT8clXdHdC1b1"} chainName="PREROUTING" expectedRuleIDs=[]string{"6gwbT8clXdHdC1b1", "", "", "", ""} ipVersion=0x4 table="raw"`,
- `2024-05-08 15:23:57.969 [WARNING][56] felix/table.go 654:Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "Cz_u1IQiXIMmKD4c", "", "", "", "", "", "", "", "", "", "", "", ""} chainName="INPUT" expectedRuleIDs=[]string{"Cz_u1IQiXIMmKD4c", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", ""} ipVersion=0x4 table="filter"`,
- `2024-05-08 15:23:57.969 [WARNING][56] felix/table.go 654:Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "tVnHkvAo15HuiPy0", "", "", "", "", ""} chainName="OUTPUT" expectedRuleIDs=[]string{"tVnHkvAo15HuiPy0", "", "", "", "", "", "", "", "", ""} ipVersion=0x4 table="filter"`,
+ `2024-05-08 15:23:56.619 [DEBUG][76] felix/route_table.go 614: Synchronised routes on interface ifaceName="*NoOIF*" ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1`,
+ `2024-05-08 15:23:56.619 [DEBUG][76] felix/route_table.go 661: Syncing interface routes ifaceName="*NoOIF*" ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1`,
+ `2024-05-08 15:23:56.619 [DEBUG][76] felix/route_table.go 686: Reconcile against kernel programming ifaceName="*NoOIF*" ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1`,
+ `2024-05-08 15:23:56.624 [INFO][76] felix/summary.go 100: Summarising 1 dataplane reconciliation loops over 200ms: avg=10ms longest=10ms (resync-routes-v4,resync-routes-v4,resync-rules-v4,resync-wg)`,
+ `2024-05-08 15:23:56.<_> [DEBUG][615489] felix/table.go 677: Skipping expected chain chainName="<_>" ipVersion=0x4 table="filter"`,
+ `2024-05-08 15:23:56.<_> [DEBUG][615489] felix/table.go 677: Skipping expected chain chainName="<_>.<_>" ipVersion=0x4 table="filter"`,
+ `2024-05-08 15:23:56.<_> [DEBUG][615489] felix/table.go 677: Skipping expected chain chainName="cali-pro-ksa.<_>.<_>" ipVersion=0x4 table="filter"`,
+ `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 557: Resync: found calico-owned interface ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 614: Synchronised routes on interface ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 661: Syncing interface routes ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 686: Reconcile against kernel programming ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 880: Processing route: 254 <_> 10.68.10.<_>/32 ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 915: Route is correct dest=10.68.10.<_>/32 ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 15:23:57.942 [WARNING][56] felix/table.go 654: Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "", "", "", "", "", "", "", "", "tVnHkvAo15HuiPy0", "", ""} chainName="OUTPUT" expectedRuleIDs=[]string{"tVnHkvAo15HuiPy0", "", "", "", "", "", "", "", "", "", "", "", "", "", ""} ipVersion=0x4 table="raw"`,
+ `2024-05-08 15:23:57.942 [WARNING][56] felix/table.go 654: Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "6gwbT8clXdHdC1b1"} chainName="PREROUTING" expectedRuleIDs=[]string{"6gwbT8clXdHdC1b1", "", "", "", ""} ipVersion=0x4 table="raw"`,
+ `2024-05-08 15:23:57.969 [WARNING][56] felix/table.go 654: Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "Cz_u1IQiXIMmKD4c", "", "", "", "", "", "", "", "", "", "", "", ""} chainName="INPUT" expectedRuleIDs=[]string{"Cz_u1IQiXIMmKD4c", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", ""} ipVersion=0x4 table="filter"`,
+ `2024-05-08 15:23:57.969 [WARNING][56] felix/table.go 654: Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "tVnHkvAo15HuiPy0", "", "", "", "", ""} chainName="OUTPUT" expectedRuleIDs=[]string{"tVnHkvAo15HuiPy0", "", "", "", "", "", "", "", "", ""} ipVersion=0x4 table="filter"`,
+ `2024-05-08 15:23:58.169 [INFO][2333] felix/summary.go 100: Summarising 35 dataplane reconciliation loops over 1m2s: avg=12ms longest=46ms (resync-filter-v4,resync-filter-v6,resync-mangle-v4,resync-mangle-v6,update-filter-v4,update-filter-v6)`,
`2024-05-08 15:23:58.566 [DEBUG][3576126] felix/int_dataplane.go 957: Examining link for MTU calculation mtu=1500 name="eth0"`,
`2024-05-08 15:23:58.680 [DEBUG][216945] felix/int_dataplane.go 1785: Reschedule kick received`,
`2024-05-08 15:23:58.681 [DEBUG][216945] felix/feature_detect.go 112: Refreshing detected iptables features`,
- `2024-05-08 15:23:58.681 [DEBUG][216945] felix/table.go 944:Invalidating dataplane cache ipVersion=0x4 reason="refresh timer" table="nat"`,
+ `2024-05-08 15:23:58.681 [DEBUG][216945] felix/table.go 944: Invalidating dataplane cache ipVersion=0x4 reason="refresh timer" table="nat"`,
`2024-05-08 15:23:58.684 [DEBUG][216945] felix/feature_detect.go 242: Ran iptables --version rawVersion="iptables v1.8.4 (legacy)\n"`,
`2024-05-08 15:23:58.684 [DEBUG][216945] felix/feature_detect.go 255: Parsed iptables version version=1.8.4`,
`2024-05-08 15:23:58.684 [DEBUG][216945] felix/table.go 604: Loading current iptables state and checking it is correct. ipVersion=0x4 table="nat"`,
`2024-05-08 15:23:58.684 [DEBUG][216945] felix/versionparse.go 110: Raw kernel version rawVersion="Linux version 5.15.0-1057-azure (buildd@lcy02-amd64-033) (gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #65-Ubuntu SMP Fri Feb 9 18:39:24 UTC 2024\n"`,
`2024-05-08 15:23:58.684 [DEBUG][216945] felix/versionparse.go 118: Parsed kernel version version=5.15.0-1057`,
`2024-05-08 15:23:58.715 [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line="# Generated by iptables-nft-save v1.8.4 on Wed May 8 15:23:58 2024" table="nat"`,
- `2024-05-08 15:23:58.716 [DEBUG][216945] felix/table.go 851:Parsing line ipVersion=0x4 line="*nat" table="nat"`,
+ `2024-05-08 15:23:58.716 [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line="*nat" table="nat"`,
`2024-05-08 15:23:58.716 [DEBUG][216945] felix/table.go 881: Not an append, skipping ipVersion=0x4 line="# Generated by iptables-nft-save v1.8.4 on Wed May 8 15:23:58 2024" table="nat"`,
- `2024-05-08 15:23:58.716 [DEBUG][216945] felix/table.go 881:Not an append, skipping ipVersion=0x4 line="*nat" table="nat"`,
- `2024-05-08 15:23:58.717 [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line=":POSTROUTING ACCEPT [0:0]" table="nat"`,
- `2024-05-08 15:23:58.717 [DEBUG][216945] felix/table.go 870: Found forward-reference chainName="POSTROUTING" ipVersion=0x4 line=":POSTROUTING ACCEPT [0:0]" table="nat"`,
- `2024-05-08 15:23:58.718 [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line=":OUTPUT ACCEPT [0:0]" table="nat"`,
- `2024-05-08 15:23:58.718 [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line=":PREROUTING ACCEPT [0:0]" table="nat"`,
- `2024-05-08 15:23:58.718 [DEBUG][216945] felix/table.go 870: Found forward-reference chainName="OUTPUT" ipVersion=0x4 line=":OUTPUT ACCEPT [0:0]" table="nat"`,
- `2024-05-08 15:23:58.718 [DEBUG][216945] felix/table.go 870: Found forward-reference chainName="PREROUTING" ipVersion=0x4 line=":PREROUTING ACCEPT [0:0]" table="nat"`,
- `2024-05-08 15:23:<_> <_> felix/endpoint_mgr.go 443: Reporting endpoint status. dirtyEndpoints=set.Set{}`,
- `2024-05-08 15:23:<_> <_> felix/health.go 167: Health: <_>`,
- `2024-05-08 15:23:<_> <_> felix/health.go 196: Checking state of reporter reporter=&health.reporterState{name:"async_calc_graph", reports:health.HealthReport{Live:true, Ready:true, Detail:""}, timeout:20000000000, latest:health.HealthReport{Live:true, Ready:true, Detail:""}, timestamp:time.Time{wall:<_> ext:<_> loc:(*time.Location)(0x4ce3aa0)}}`,
- `2024-05-08 15:23:<_> <_> felix/health.go 196: Checking state of reporter reporter=&health.reporterState{name:"felix-startup", reports:health.HealthReport{Live:true, Ready:true, Detail:""}, timeout:0, latest:health.HealthReport{Live:true, Ready:true, Detail:""}, timestamp:time.Time{wall:<_> ext:<_> loc:(*time.Location)(0x4ce3aa0)}}`,
- `2024-05-08 15:23:<_> <_> felix/health.go 196: Checking state of reporter reporter=&health.reporterState{name:"int_dataplane", reports:health.HealthReport{Live:true, Ready:true, Detail:""}, timeout:90000000000, latest:health.HealthReport{Live:true, Ready:true, Detail:""}, timestamp:time.Time{wall:<_> ext:<_> loc:(*time.Location)(0x4ce3aa0)}}`,
- `2024-05-08 15:23:<_> <_> felix/health.go 245: Calculated health summary healthResult=&health.HealthReport{Live:true, Ready:true, Detail:"+------------------+---------+----------------+-----------------+--------+\n| COMPONENT | TIMEOUT | LIVENESS | READINESS | DETAIL |\n+------------------+---------+----------------+-----------------+--------+\n| async_calc_graph | 20s | reporting live | reporting ready | |\n| felix-startup | 0s | reporting live | reporting ready | |\n| int_dataplane | 1m30s | reporting live | reporting ready | |\n+------------------+---------+----------------+-----------------+--------+"}`,
- `2024-05-08 15:23:<_> <_> felix/health.go <_> GET <_>`,
- `2024-05-08 15:23:<_> <_> felix/int_dataplane.go 1773: Refreshing IP sets state`,
- `2024-05-08 15:23:<_> <_> felix/int_dataplane.go 1807: Applying dataplane updates`,
- `2024-05-08 15:23:<_> <_> felix/int_dataplane.go 2080: Asked to reschedule. <_>`,
- `2024-05-08 15:23:<_> <_> felix/ipsets.go 234: Asked to resync with the dataplane on next update. family="inet"`,
- `2024-05-08 15:23:<_> <_> felix/ipsets.go 314: Resyncing ipsets with dataplane. family="inet"`,
- `2024-05-08 15:23:<_> <_> felix/ipsets.go 426: Parsing IP set. family="inet" <_>`,
- `2024-05-08 15:23:<_> <_> felix/ipsets.go 607: Skipping expected Calico IP set. family="inet" <_>`,
- `2024-05-08 15:23:<_> <_> felix/ipsets.go 643: No dirty IP sets. family="inet"`,
- `2024-05-08 15:23:<_> <_> felix/summary.go 100: Summarising <_> dataplane reconciliation loops over <_> <_> <_> <_>`,
- `2024-05-08 15:23:<_> <_> felix/sync_client.go 347: Ping received from Typha connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} type=""`,
- `2024-05-08 15:23:<_> <_> felix/sync_client.go 356: Pong sent to Typha connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} type=""`,
- `2024-05-08 15:23:<_> <_> felix/sync_client.go 434: New message from Typha. connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} envelope=syncproto.Envelope{Message:syncproto.MsgPing{Timestamp:time.Date(2024, time.May, 8, 15, 23, <_> <_> time.Local)}} type=""`,
- `2024-05-08 15:23:<_> <_> felix/table.go 1233: In nftables mode, restarting transaction between updates and deletions. ipVersion=0x4 <_>`,
- `2024-05-08 15:23:<_> <_> felix/table.go 1263: Update ended up being no-op, skipping call to ip(6)tables-restore. ipVersion=0x4 <_>`,
- `2024-05-08 15:23:<_> <_> felix/wireguard.go 652: Wireguard is not enabled, skipping sync ipVersion=0x4`,
- `2024-05-08 15:23:<_> <_> felix/xdp_state.go 1004: Updating ipsetIDsToMembers cache. family=4`,
- `2024-05-08 15:23:<_> <_> felix/xdp_state.go 1043: Processing pending diff state. cs=&intdataplane.xdpSystemState{IfaceNameToData:map[string]intdataplane.xdpIfaceData{}, XDPEligiblePolicies:map[proto.PolicyID]intdataplane.xdpRules{}} family=4`,
- `2024-05-08 15:23:<_> <_> felix/xdp_state.go 1270: Finished processing pending diff state. bpfActions=intdataplane.xdpBPFActions{CreateMap:set.Typed[string]{}, RemoveMap:set.Typed[string]{}, AddToMap:map[string]map[string]uint32{}, RemoveFromMap:map[string]map[string]uint32{}, InstallXDP:set.Typed[string]{}, UninstallXDP:set.Typed[string]{}, MembersToDrop:map[string]map[string]uint32{}, MembersToAdd:map[string]map[string]uint32{}} family=4 newCS=&intdataplane.xdpSystemState{IfaceNameToData:map[string]intdataplane.xdpIfaceData{}, XDPEligiblePolicies:map[proto.PolicyID]intdataplane.xdpRules{}}`,
- `2024-05-08 15:23:<_> <_> felix/xdp_state.go 1605: Getting member changes. family=4 oldMembers=map[string]set.Set[string]{}`,
- `2024-05-08 15:23:<_> <_> felix/xdp_state.go 1798: Processing BPF actions. family="ipv4"`,
- `2024-05-08 15:23:<_> <_> felix/xdp_state.go 1932: Finished processing BPF actions. family="ipv4"`,
- `2024-05-08 15:23:<_> <_> felix/xdp_state.go 968: Processing member updates. family=4`,
- `2024-05-08 15:23:<_> [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line=":<_> - [0:0]" table="nat"`,
- `2024-05-08 15:23:<_> [DEBUG][216945] felix/table.go 870: Found forward-reference <_> ipVersion=0x4 line=":<_> - [0:0]" table="nat"`,
- `2024-05-08 15:23:<_> [DEBUG][3576126] felix/int_dataplane.go 954: Skipping interface for MTU detection <_> <_>`,
- `2024-05-08 <_> <_> felix/ipsets.go 366:Finished IPSets resync family="inet" numInconsistenciesFound=0 resyncDuration=<_>`,
- `2024-05-08 <_> <_> felix/ipsets.go 467:Found member in dataplane canon=<_> family="inet" member=<_> setID="this-host"`,
- `2024-05-08 <_> <_> felix/ipsets.go 589:Whitelisting IP sets. ID="all-ipam-pools" family="inet" mainName="cali40all-ipam-pools"`,
- `2024-05-08 <_> <_> felix/ipsets.go 589:Whitelisting IP sets. ID="masq-ipam-pools" family="inet" mainName="cali40masq-ipam-pools"`,
- `2024-05-08 <_> <_> felix/ipsets.go 589:Whitelisting IP sets. ID="this-host" family="inet" mainName="cali40this-host"`,
- `2024-05-08 <_> [DEBUG][615489] felix/table.go 677:Skipping expected chain chainName=<_> ipVersion=0x4 table="filter"`,
- `2024-05-08 <_> [DEBUG][76] felix/route_table.go 557:Resync:found calico-owned interface ifaceName=<_> ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 <_> [DEBUG][76] felix/route_table.go 614:Synchronised routes on interface ifaceName=<_> ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 <_> [DEBUG][76] felix/route_table.go 661:Syncing interface routes ifaceName=<_> ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 <_> [DEBUG][76] felix/route_table.go 686:Reconcile against kernel programming ifaceName=<_> ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 <_> [DEBUG][76] felix/route_table.go 880:Processing route:254 <_> <_> ifaceName=<_> ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 <_> [DEBUG][76] felix/route_table.go 915:Route is correct dest=<_> ifaceName=<_> ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `bird: Netlink: No route to host`,
+ `2024-05-08 15:23:58.716 [DEBUG][216945] felix/table.go 881: Not an append, skipping ipVersion=0x4 line="*nat" table="nat"`,
+ `2024-05-08 15:23:58.<_> [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line=":<_> <_> [0:0]" table="nat"`,
+ `2024-05-08 15:23:58.<_> [DEBUG][216945] felix/table.go 870: Found forward-reference chainName="<_>" ipVersion=0x4 line=":<_> <_> [0:0]" table="nat"`,
+ `2024-05-08 15:23:58.<_> [DEBUG][3576126] felix/int_dataplane.go 954: Skipping interface for MTU detection mtu=<_> name="<_>"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/endpoint_mgr.go 443: Reporting endpoint status. dirtyEndpoints=set.Set{}`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/health.go 167: Health: <_>`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/health.go 196: Checking state of reporter reporter=&health.reporterState{name:"<_>", reports:health.HealthReport{Live:true, Ready:true, Detail:""}, timeout:<_>, latest:health.HealthReport{Live:true, Ready:true, Detail:""}, timestamp:time.Time{wall:<_>, ext:<_>, loc:(*time.Location)(0x4ce3aa0)}}`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/health.go 245: Calculated health summary healthResult=&health.HealthReport{Live:true, Ready:true, Detail:"+------------------+---------+----------------+-----------------+--------+\n| COMPONENT | TIMEOUT | LIVENESS | READINESS | DETAIL |\n+------------------+---------+----------------+-----------------+--------+\n| async_calc_graph | 20s | reporting live | reporting ready | |\n| felix-startup | 0s | reporting live | reporting ready | |\n| int_dataplane | 1m30s | reporting live | reporting ready | |\n+------------------+---------+----------------+-----------------+--------+"}`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/health.go <_>: GET /<_>`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/int_dataplane.go 1773: Refreshing IP sets state`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/int_dataplane.go 1807: Applying dataplane updates`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/int_dataplane.go 2080: Asked to reschedule. delay=<_>.<_>`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 234: Asked to resync with the dataplane on next update. family="inet"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 314: Resyncing ipsets with dataplane. family="inet"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 366: Finished IPSets resync family="inet" numInconsistenciesFound=0 resyncDuration=<_>.<_>`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 426: Parsing IP set. family="inet" setName="<_>"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 467: Found member in dataplane canon=<_>.<_>.<_>.<_> family="inet" member="<_>.<_>.<_>.<_>" setID="this-host"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 589: Whitelisting IP sets. ID="<_>" family="inet" mainName="<_>"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 607: Skipping expected Calico IP set. family="inet" setName="<_>"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 643: No dirty IP sets. family="inet"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/sync_client.go 347: Ping received from Typha connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} type=""`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/sync_client.go 356: Pong sent to Typha connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} type=""`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/sync_client.go 434: New message from Typha. connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} envelope=syncproto.Envelope{Message:syncproto.MsgPing{Timestamp:time.Date(2024, time.May, 8, 15, 23, <_>, <_>, time.Local)}} type=""`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/table.go 1233: In nftables mode, restarting transaction between updates and deletions. ipVersion=0x4 table="<_>"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/table.go 1263: Update ended up being no-op, skipping call to ip(6)tables-restore. ipVersion=0x4 table="<_>"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/wireguard.go 652: Wireguard is not enabled, skipping sync ipVersion=0x4`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1004: Updating ipsetIDsToMembers cache. family=4`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1043: Processing pending diff state. cs=&intdataplane.xdpSystemState{IfaceNameToData:map[string]intdataplane.xdpIfaceData{}, XDPEligiblePolicies:map[proto.PolicyID]intdataplane.xdpRules{}} family=4`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1270: Finished processing pending diff state. bpfActions=intdataplane.xdpBPFActions{CreateMap:set.Typed[string]{}, RemoveMap:set.Typed[string]{}, AddToMap:map[string]map[string]uint32{}, RemoveFromMap:map[string]map[string]uint32{}, InstallXDP:set.Typed[string]{}, UninstallXDP:set.Typed[string]{}, MembersToDrop:map[string]map[string]uint32{}, MembersToAdd:map[string]map[string]uint32{}} family=4 newCS=&intdataplane.xdpSystemState{IfaceNameToData:map[string]intdataplane.xdpIfaceData{}, XDPEligiblePolicies:map[proto.PolicyID]intdataplane.xdpRules{}}`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1605: Getting member changes. family=4 oldMembers=map[string]set.Set[string]{}`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1798: Processing BPF actions. family="ipv4"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1932: Finished processing BPF actions. family="ipv4"`,
+ `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 968: Processing member updates. family=4`,
+ `2024-05-08 15:23:<_>.<_> [INFO][<_>] felix/summary.go 100: Summarising <_> dataplane reconciliation loops over <_>.<_>: avg=<_> longest=<_> (<_>)`,
+ "bird: Netlink: No route to host",
+ },
+ },
+ {
+ drain: New(DefaultConfig(), nil),
+ inputFile: "testdata/grafana-ruler.txt",
+ patterns: []string{
+ `level=debug ts=2024-05-29T13:44:15.804597912Z caller=remote_instance_store.go:51 user=297794 slug=leanix msg="calling SaveAlertInstance"`,
+ `level=debug ts=2024-05-29T13:44:15.<_> caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"`,
+ `level=debug ts=2024-05-29T13:44:15.<_> caller=remote_instance_store.go:51 user=<_> slug=<_> msg="calling SaveAlertInstance"`,
+ `logger=ngalert.scheduler user=102553 slug=flownative version=1 fingerprint=4ad9e35be0f80ca3 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.79499903Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.794695854s EvaluationString:}]" duration=116.038803ms`,
+ `logger=ngalert.scheduler user=473762 slug=intentiq version=35 fingerprint=0bc4b6f46a852420 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.788200731Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.787878355s EvaluationString:}]" duration=15.345212ms`,
+ `logger=ngalert.scheduler user=70430 slug=dapperlabs version=1 fingerprint=65a68c433031b4e0 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.790598463Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.78875161s EvaluationString:}]" duration=1.693079007s`,
+ `logger=ngalert.state.manager user=102553 slug=flownative instance= t=2024-05-29T13:44:15.795103234Z level=debug msg="Setting next state" handler=resultNormal`,
+ `logger=ngalert.state.manager user=15338 slug=rstsoftwarerc instance= t=2024-05-29T13:44:15.790951656Z level=debug msg="Keeping state" state=Alerting previous_ends_at=2024-05-29T13:47:00Z next_ends_at=2024-05-29T13:48:00Z`,
+ `logger=ngalert.state.manager user=172772 slug=ppbtradingtribe instance="datasource_uid=p06gSxS7k, ref_id=A" t=2024-05-29T13:44:15.793080651Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=172772 slug=ppbtradingtribe t=2024-05-29T13:44:15.79304032Z level=debug msg="State manager processing evaluation results" resultCount=1`,
+ `logger=ngalert.state.manager user=228733 slug=csmoney instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.796750449Z level=debug msg="Setting next state" handler=resultNoData`,
+ `logger=ngalert.state.manager user=371756 slug=asapp instance="company_marker=dish" t=2024-05-29T13:44:15.788780219Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=371756 slug=asapp instance="company_marker=optimumfixed" t=2024-05-29T13:44:15.788904162Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=371756 slug=asapp instance="company_marker=rcn" t=2024-05-29T13:44:15.789011178Z level=debug msg="Setting next state" handler=resultNormal`,
+ `logger=ngalert.state.manager user=412141 slug=sharethrough instance="datasource_uid=pFBylkiVz, ref_id=Swap Usage for Alert" t=2024-05-29T13:44:15.792756002Z level=debug msg="Setting next state" handler=resultNoData`,
+ `logger=ngalert.state.manager user=412141 slug=sharethrough instance="datasource_uid=pFBylkiVz, ref_id=Swap Usage for Alert" t=2024-05-29T13:44:15.792775073Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.799932951Z level=debug msg="Setting next state" handler=resultNormal`,
+ `logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.799945019Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.<_> level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData`,
+ `logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.<_> level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.<_> level=debug msg="Setting next state" handler=resultNoData`,
+ `logger=ngalert.state.manager user=473762 slug=intentiq t=2024-05-29T13:44:15.788261794Z level=debug msg="State manager processing evaluation results" resultCount=1`,
+ `logger=ngalert.state.manager user=630397 slug=tatin instance= t=2024-05-29T13:44:15.795542988Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=679029 slug=joveoprodaws instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.800327814Z level=debug msg="Setting next state" handler=resultNoData`,
+ `logger=ngalert.state.manager user=692010 slug=mercariusprod instance="datasource_uid=gfds-prometheus-wrapper, ref_id=B" t=2024-05-29T13:44:15.791100679Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData`,
+ `logger=ngalert.state.manager user=692010 slug=mercariusprod instance="datasource_uid=gfds-prometheus-wrapper, ref_id=B" t=2024-05-29T13:44:15.791114955Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=692010 slug=mercariusprod instance="datasource_uid=gfds-prometheus-wrapper, ref_id=B" t=2024-05-29T13:44:15.791129917Z level=debug msg="Setting next state" handler=resultNoData`,
+ `logger=ngalert.state.manager user=84535 slug=arweave instance= t=2024-05-29T13:44:15.796640981Z level=debug msg="Setting next state" handler=resultNormal`,
+ `logger=ngalert.state.manager user=84535 slug=arweave t=2024-05-29T13:44:15.796542294Z level=debug msg="State manager processing evaluation results" resultCount=1`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=<_>, instance=172.30.<_>.<_>:8080, job=integrations/kubernetes/kube-state-metrics, namespace=<_>, pod=<_>, uid=<_>" t=2024-05-29T13:44:15.<_> level=debug msg="Setting next state" handler=resultNormal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d" t=2024-05-29T13:44:15.78870732Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=crs-app, instance=172.30.<_>.<_>:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=<_>, uid=<_>" t=2024-05-29T13:44:15.<_> level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-fr7t4, uid=2b8456c8-297f-4763-8f00-f8076b542d7c" t=2024-05-29T13:44:15.790564871Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=node, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=exo-devca-cicd-288-zcl2b-9ws4z-nzgt7, uid=ca99b6a7-f08f-475a-adf6-dcf8c8936eed" t=2024-05-29T13:44:15.791738618Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a" t=2024-05-29T13:44:15.79227249Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.<_>.<_>:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=<_>, uid=<_>" t=2024-05-29T13:44:15.<_> level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=tdsdevauthts-utils, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthts-utils-7f54f8d7b4-njddr, uid=352d7df2-7832-41f3-ad3e-cbe1a060c968" t=2024-05-29T13:44:15.793846886Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=tdsqalivets-utils, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivets-utils-75b748978f-r2vkj, uid=1d39d0d7-d483-427b-ba91-45d897674698" t=2024-05-29T13:44:15.794284465Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-app, instance=172.30.<_>.<_>:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=<_>, uid=<_>" t=2024-05-29T13:44:15.<_> level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-web, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthts-web-57f5b6f56b-bdmh9, uid=8f6b5224-94ce-4f5d-ba08-03f9fc2f572f" t=2024-05-29T13:44:15.795397351Z level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager.persist user=14927 slug=rstsoftware t=2024-05-29T13:44:15.798496844Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=26.340653ms`,
+ `logger=ngalert.state.manager.persist user=20177 slug=paddledash t=2024-05-29T13:44:15.806655602Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1`,
+ `logger=ngalert.state.manager.persist user=<_> slug=<_> t=2024-05-29T13:44:15.<_> level=debug msg="Saving alert states" count=<_> max_state_save_concurrency=1`,
},
},
}
@@ -454,7 +481,6 @@ func TestDrain_TrainGeneratesMatchablePatterns(t *testing.T) {
}
})
}
-
}
func TestDrain_TrainGeneratesPatternsMatchableByLokiPatternFilter(t *testing.T) {
@@ -509,16 +535,17 @@ func TestDrain_TrainGeneratesPatternsMatchableByLokiPatternFilter(t *testing.T)
},
},
{
- name: "Unicode characters are matchable",
+ name: "Scheduler patterns are matchable",
drain: New(DefaultConfig(), nil),
inputLines: []string{
- `13:25:18.033470 ▶ INFO route ops sending to dest https://graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics: service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_gauge.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_B.what_is_FlushSize.type_is_manual.stat_is_max_999 0.00 1717075518`,
- `13:25:18.033422 ▶ INFO route ops sending to dest https://graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics: service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_gauge.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_B.what_is_FlushSize.type_is_manual.stat_is_max_99 0.00 1717075518`,
- `13:25:18.033394 ▶ INFO route ops sending to dest https://graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics: service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_gauge.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_B.what_is_FlushSize.type_is_manual.stat_is_max_95 0.00 1717075518`,
- `13:25:18.033364 ▶ INFO route ops sending to dest https://graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics: service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_gauge.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_B.what_is_FlushSize.type_is_manual.stat_is_max_75 0.00 1717075518`,
- `13:25:18.033335 ▶ INFO route ops sending to dest https://graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics: service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_gauge.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_B.what_is_FlushSize.type_is_manual.stat_is_max_50 0.00 1717075518`,
- `13:25:18.033304 ▶ INFO route ops sending to dest https://graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics: service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_gauge.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_B.what_is_FlushSize.type_is_manual.stat_is_std 0.00 1717075518`,
- `13:25:18.033281 ▶ INFO route ops sending to dest https://graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics: service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_gauge.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_B.what_is_FlushSize.type_is_manual.stat_is_mean 0.00 1717075518`,
+ `ts=2024-05-30T12:50:36.648377186Z caller=scheduler_processor.go:143 level=warn msg="error contacting scheduler" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: EOF\"" addr=10.0.151.101:9095`,
+ `ts=2024-05-30T12:50:36.350575929Z caller=scheduler_processor.go:143 level=warn msg="error contacting scheduler" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: EOF\"" addr=10.0.151.101:9095`,
+ `ts=2024-05-30T12:50:36.335784477Z caller=scheduler_processor.go:143 level=warn msg="error contacting scheduler" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: EOF\"" addr=10.0.151.101:9095`,
+ `ts=2024-05-30T12:50:36.250406732Z caller=scheduler_processor.go:143 level=warn msg="error contacting scheduler" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: EOF\"" addr=10.0.151.101:9095`,
+ `ts=2024-05-30T12:50:36.248030329Z caller=scheduler_processor.go:143 level=warn msg="error contacting scheduler" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: EOF\"" addr=10.0.45.239:9095`,
+ `ts=2024-05-30T12:50:36.176344754Z caller=scheduler_processor.go:143 level=warn msg="error contacting scheduler" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: EOF\"" addr=10.0.151.101:9095`,
+ `ts=2024-05-30T12:50:36.174730772Z caller=scheduler_processor.go:143 level=warn msg="error contacting scheduler" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: EOF\"" addr=10.0.151.101:9095`,
+ `ts=2024-05-30T12:50:36.076517207Z caller=scheduler_processor.go:143 level=warn msg="error contacting scheduler" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: EOF\"" addr=10.0.45.239:9095`,
},
},
}
@@ -541,5 +568,60 @@ func TestDrain_TrainGeneratesPatternsMatchableByLokiPatternFilter(t *testing.T)
}
})
}
+}
+
+func TestDeduplicatePlaceholders(t *testing.T) {
+	type dedupCase struct {
+ line string
+ want string
+ }
+ cases := []dedupCase{
+ {
+ line: "abcd",
+ want: "abcd",
+ },
+ {
+ line: "<_><_>abcd",
+ want: "<_>abcd",
+ },
+ {
+ line: strings.Repeat("<_>", 100),
+ want: "<_>",
+ },
+ {
+ line: "<_> <_>",
+ want: "<_> <_>",
+ },
+ {
+ line: strings.Repeat("<_> ", 100),
+ want: strings.Repeat("<_> ", 100),
+ },
+ {
+ line: "<_><<_>",
+ want: "<_><<_>",
+ },
+ {
+ line: "<_><->",
+ want: "<_><->",
+ },
+ {
+ line: strings.Repeat(strings.Repeat("<_>", 100)+" ", 100),
+ want: strings.Repeat("<_> ", 100),
+ },
+ {
+ line: "<<<<<<<_><_>>>>>>>>",
+ want: "<<<<<<<_>>>>>>>>",
+ },
+ {
+ line: strings.Repeat("A", 100) + "<_><_>",
+ want: strings.Repeat("A", 100) + "<_>",
+ },
+ }
+ for i, tc := range cases {
+		t.Run(fmt.Sprintf("Dedup %d", i), func(t *testing.T) {
+ got := deduplicatePlaceholders(tc.line, `<_>`)
+ require.Equal(t, tc.want, got)
+ })
+ }
}
diff --git a/pkg/pattern/drain/line_tokenizer.go b/pkg/pattern/drain/line_tokenizer.go
index 1317fbe3fca88..89bf34a5569b5 100644
--- a/pkg/pattern/drain/line_tokenizer.go
+++ b/pkg/pattern/drain/line_tokenizer.go
@@ -1,25 +1,96 @@
package drain
-import "strings"
+import (
+ "strings"
+ "unicode"
+)
type LineTokenizer interface {
- Tokenize(line string) []string
- Join(tokens []string) string
+ Tokenize(line string) ([]string, interface{})
+ Join(tokens []string, state interface{}) string
}
type spacesTokenizer struct{}
-func (spacesTokenizer) Tokenize(line string) []string {
- return strings.Split(line, " ")
+func (spacesTokenizer) Tokenize(line string) ([]string, interface{}) {
+ return strings.Split(line, " "), nil
}
-func (spacesTokenizer) Join(tokens []string) string {
+func (spacesTokenizer) Join(tokens []string, _ interface{}) string {
return strings.Join(tokens, " ")
}
+type punctuationTokenizer struct {
+ includeDelimiters [128]rune
+ excludeDelimiters [128]rune
+}
+
+func newPunctuationTokenizer() *punctuationTokenizer {
+ var included [128]rune
+ var excluded [128]rune
+ included['='] = 1
+ excluded['_'] = 1
+ excluded['-'] = 1
+ return &punctuationTokenizer{
+ includeDelimiters: included,
+ excludeDelimiters: excluded,
+ }
+}
+
+func (p *punctuationTokenizer) Tokenize(line string) ([]string, interface{}) {
+	tokens := make([]string, len(line))                  // Maximum size: every character could be punctuation
+ spacesAfter := make([]int, strings.Count(line, " ")) // Could be a bitmap, but it's not worth it for a few bytes.
+
+ start := 0
+ nextTokenIdx := 0
+ nextSpaceIdx := 0
+ for i, char := range line {
+ if unicode.IsLetter(char) || unicode.IsNumber(char) || char < 128 && p.excludeDelimiters[char] != 0 {
+ continue
+ }
+ included := char < 128 && p.includeDelimiters[char] != 0
+ if char == ' ' || included || unicode.IsPunct(char) {
+ if i > start {
+ tokens[nextTokenIdx] = line[start:i]
+ nextTokenIdx++
+ }
+ if char == ' ' {
+ spacesAfter[nextSpaceIdx] = nextTokenIdx - 1
+ nextSpaceIdx++
+ } else {
+ tokens[nextTokenIdx] = line[i : i+1]
+ nextTokenIdx++
+ }
+ start = i + 1
+ }
+ }
+
+ if start < len(line) {
+ tokens[nextTokenIdx] = line[start:]
+ nextTokenIdx++
+ }
+
+ return tokens[:nextTokenIdx], spacesAfter[:nextSpaceIdx]
+}
+
+func (p *punctuationTokenizer) Join(tokens []string, state interface{}) string {
+ spacesAfter := state.([]int)
+ strBuilder := strings.Builder{}
+ spacesIdx := 0
+ for i, token := range tokens {
+ strBuilder.WriteString(token)
+ for spacesIdx < len(spacesAfter) && i == spacesAfter[spacesIdx] {
+ // One entry for each space following the token
+ strBuilder.WriteRune(' ')
+ spacesIdx++
+ }
+ }
+ return strBuilder.String()
+}
+
type splittingTokenizer struct{}
-func (splittingTokenizer) Tokenize(line string) []string {
+func (splittingTokenizer) Tokenize(line string) ([]string, interface{}) {
numEquals := strings.Count(line, "=")
numColons := strings.Count(line, ":")
numSpaces := strings.Count(line, " ")
@@ -32,24 +103,31 @@ func (splittingTokenizer) Tokenize(line string) []string {
}
tokens := make([]string, 0, expectedTokens)
+ spacesAfter := make([]int, 0, strings.Count(line, " "))
for _, token := range strings.SplitAfter(line, keyvalSeparator) {
- tokens = append(tokens, strings.Split(token, " ")...)
+ words := strings.Split(token, " ")
+ for i, entry := range words {
+ tokens = append(tokens, entry)
+ if i == len(words)-1 {
+ continue
+ }
+ spacesAfter = append(spacesAfter, len(tokens)-1)
+ }
}
- return tokens
+ return tokens, spacesAfter
}
-func (splittingTokenizer) Join(tokens []string) string {
- var builder strings.Builder
- for _, token := range tokens {
- if strings.HasSuffix(token, "=") || strings.HasSuffix(token, ":") {
- builder.WriteString(token)
- } else {
- builder.WriteString(token + " ")
+func (splittingTokenizer) Join(tokens []string, state interface{}) string {
+ spacesAfter := state.([]int)
+ strBuilder := strings.Builder{}
+ spacesIdx := 0
+ for i, token := range tokens {
+ strBuilder.WriteString(token)
+ for spacesIdx < len(spacesAfter) && i == spacesAfter[spacesIdx] {
+ // One entry for each space following the token
+ strBuilder.WriteRune(' ')
+ spacesIdx++
}
}
- output := builder.String()
- if output[len(output)-1] == ' ' {
- return output[:len(output)-1]
- }
- return output
+ return strBuilder.String()
}
diff --git a/pkg/pattern/drain/line_tokenizer_test.go b/pkg/pattern/drain/line_tokenizer_test.go
index 8cb541a61b629..1eda1b51068a3 100644
--- a/pkg/pattern/drain/line_tokenizer_test.go
+++ b/pkg/pattern/drain/line_tokenizer_test.go
@@ -1,59 +1,163 @@
package drain
import (
- "reflect"
"testing"
+
+ "github.com/stretchr/testify/require"
)
-func TestSplittingTokenizer_Tokenize(t *testing.T) {
- tokenizer := splittingTokenizer{}
+type TestCase struct {
+ name string
+ line string
+ want map[string][]string
+}
- tests := []struct {
- name string
- line string
- want []string
- }{
- {
- name: "Test with equals sign",
- line: "key1=value1 key2=value2",
- want: []string{"key1=", "value1", "key2=", "value2"},
+const typePunctuation = "punctuation"
+const typeSplitting = "splitting"
+
+var testCases = []TestCase{
+ {
+ name: "Test with equals sign",
+ line: "key1=value1 key2=value2",
+ want: map[string][]string{
+ typePunctuation: {"key1", "=", "value1", "key2", "=", "value2"},
+ typeSplitting: {"key1=", "value1", "key2=", "value2"},
},
- {
- name: "Test with colon",
- line: "key1:value1 key2:value2",
- want: []string{"key1:", "value1", "key2:", "value2"},
+ },
+ {
+ name: "Test with colon",
+ line: "key1:value1 key2:value2",
+ want: map[string][]string{
+ typePunctuation: {"key1", ":", "value1", "key2", ":", "value2"},
+ typeSplitting: {"key1:", "value1", "key2:", "value2"},
},
- {
- name: "Test with mixed delimiters, more = than :",
- line: "key1=value1 key2:value2 key3=value3",
- want: []string{"key1=", "value1", "key2:value2", "key3=", "value3"},
+ },
+ {
+ name: "Test with mixed delimiters, more = than :",
+ line: "key1=value1 key2:value2 key3=value3",
+ want: map[string][]string{
+ typePunctuation: {"key1", "=", "value1", "key2", ":", "value2", "key3", "=", "value3"},
+ typeSplitting: {"key1=", "value1", "key2:value2", "key3=", "value3"},
},
+ },
+ {
+ name: "Test with mixed delimiters, more : than =",
+ line: "key1:value1 key2:value2 key3=value3",
+ want: map[string][]string{
+ typePunctuation: {"key1", ":", "value1", "key2", ":", "value2", "key3", "=", "value3"},
+ typeSplitting: {"key1:", "value1", "key2:", "value2", "key3=value3"},
+ },
+ },
+ {
+ name: "Dense json",
+ line: `{"key1":"value1","key2":"value2","key3":"value3"}`,
+ want: map[string][]string{
+ typePunctuation: {`{`, `"`, `key1`, `"`, `:`, `"`, `value1`, `"`, `,`, `"`, `key2`, `"`, `:`, `"`, `value2`, `"`, `,`, `"`, `key3`, `"`, `:`, `"`, `value3`, `"`, `}`},
+ typeSplitting: {`{"key1":`, `"value1","key2":`, `"value2","key3":`, `"value3"}`},
+ },
+ },
+ {
+ name: "json with spaces",
+ line: `{"key1":"value1", "key2":"value2", "key3":"value3"}`,
+ want: map[string][]string{
+ typePunctuation: {`{`, `"`, `key1`, `"`, `:`, `"`, `value1`, `"`, `,`, `"`, `key2`, `"`, `:`, `"`, `value2`, `"`, `,`, `"`, `key3`, `"`, `:`, `"`, `value3`, `"`, `}`},
+ typeSplitting: {`{"key1":`, `"value1",`, `"key2":`, `"value2",`, `"key3":`, `"value3"}`},
+ },
+ },
+ {
+ name: "logfmt multiword values",
+ line: `key1=value1 key2=value2 msg="this is a message"`,
+ want: map[string][]string{
+ typePunctuation: {"key1", "=", "value1", "key2", "=", "value2", "msg", "=", `"`, `this`, "is", "a", `message`, `"`},
+ typeSplitting: {"key1=", "value1", "key2=", "value2", "msg=", `"this`, "is", "a", `message"`},
+ },
+ },
+ {
+ name: "longer line",
+ line: "09:17:38.033366 ▶ INFO route ops sending to dest https://graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics: service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_counter.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_Metric.action_is_drop.reason_is_queue_full 0 1717060658",
+ want: map[string][]string{
+ typePunctuation: {`09`, `:`, `17`, `:`, `38`, `.`, `033366`, `▶`, `INFO`, `route`, `ops`, `sending`, `to`, `dest`, `https`, `:`, `/`, `/`, `graphite-cortex-ops-blocks-us-east4`, `.`, `grafana`, `.`, `net`, `/`, `graphite`, `/`, `metrics`, `:`, `service_is_carbon-relay-ng`, `.`, `instance_is_carbon-relay-ng-c665b7b-j2trk`, `.`, `mtype_is_counter`, `.`, `dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics`, `.`, `unit_is_Metric`, `.`, `action_is_drop`, `.`, `reason_is_queue_full`, `0`, `1717060658`},
+ typeSplitting: {`09:`, `17:`, `38.033366`, `▶`, `INFO`, ``, `route`, `ops`, `sending`, `to`, `dest`, `https:`, `//graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics:`, ``, `service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_counter.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_Metric.action_is_drop.reason_is_queue_full`, `0`, `1717060658`},
+ },
+ },
+ {
+ name: "Consecutive splits points: equals followed by space",
+ line: `ts=2024-05-30T12:50:36.648377186Z caller=scheduler_processor.go:143 level=warn msg="error contacting scheduler" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: EOF\"" addr=10.0.151.101:9095`,
+ want: map[string][]string{
+ typePunctuation: {`ts`, `=`, `2024-05-30T12`, `:`, `50`, `:`, `36`, `.`, `648377186Z`, `caller`, `=`, `scheduler_processor`, `.`, `go`, `:`, `143`, `level`, `=`, `warn`, `msg`, `=`, `"`, `error`, `contacting`, `scheduler`, `"`, `err`, `=`, `"`, `rpc`, `error`, `:`, `code`, `=`, `Unavailable`, `desc`, `=`, `connection`, `error`, `:`, `desc`, `=`, `\`, `"`, `error`, `reading`, `server`, `preface`, `:`, `EOF`, `\`, `"`, `"`, `addr`, `=`, `10`, `.`, `0`, `.`, `151`, `.`, `101`, `:`, `9095`},
+ typeSplitting: {"ts=", "2024-05-30T12:50:36.648377186Z", "caller=", "scheduler_processor.go:143", "level=", "warn", "msg=", "\"error", "contacting", "scheduler\"", "err=", "\"rpc", "error:", "code", "=", ``, "Unavailable", "desc", "=", ``, "connection", "error:", "desc", "=", ``, `\"error`, "reading", "server", "preface:", `EOF\""`, "addr=", "10.0.151.101:9095"},
+ },
+ },
+ {
+			name: "Only punctuation",
+ line: `!@£$%^&*()`,
+ want: map[string][]string{
+ typePunctuation: {`!`, `@`, `£$`, `%`, `^`, `&`, `*`, `(`, `)`},
+ typeSplitting: {`!@£$%^&*()`},
+ },
+ },
+}
+
+func TestTokenizer_Tokenize(t *testing.T) {
+ tests := []struct {
+ name string
+ tokenizer LineTokenizer
+ }{
{
- name: "Test with mixed delimiters, more : than =",
- line: "key1:value1 key2:value2 key3=value3",
- want: []string{"key1:", "value1", "key2:", "value2", "key3=value3"},
+ name: typePunctuation,
+ tokenizer: newPunctuationTokenizer(),
},
{
- name: "Dense json",
- line: `{"key1":"value1","key2":"value2","key3":"value3"}`,
- want: []string{`{"key1":`, `"value1","key2":`, `"value2","key3":`, `"value3"}`},
+ name: typeSplitting,
+ tokenizer: splittingTokenizer{},
},
+ }
+
+ for _, tt := range tests {
+ for _, tc := range testCases {
+ t.Run(tt.name+":"+tc.name, func(t *testing.T) {
+ got, _ := tt.tokenizer.Tokenize(tc.line)
+ require.Equal(t, tc.want[tt.name], got)
+ })
+ }
+ }
+}
+
+func TestTokenizer_TokenizeAndJoin(t *testing.T) {
+ tests := []struct {
+ name string
+ tokenizer LineTokenizer
+ }{
{
- name: "json with spaces",
- line: `{"key1":"value1", "key2":"value2", "key3":"value3"}`,
- want: []string{`{"key1":`, `"value1",`, `"key2":`, `"value2",`, `"key3":`, `"value3"}`},
+ name: typePunctuation,
+ tokenizer: newPunctuationTokenizer(),
},
{
- name: "logfmt multiword values",
- line: `key1=value1 key2=value2 msg="this is a message"`,
- want: []string{"key1=", "value1", "key2=", "value2", "msg=", `"this`, "is", "a", `message"`},
+ name: typeSplitting,
+ tokenizer: splittingTokenizer{},
},
}
for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- if got := tokenizer.Tokenize(tt.line); !reflect.DeepEqual(got, tt.want) {
- t.Errorf("splittingTokenizer.Tokenize() = %v, want %v", got, tt.want)
+ for _, tc := range testCases {
+ t.Run(tt.name+":"+tc.name, func(t *testing.T) {
+ got := tt.tokenizer.Join(tt.tokenizer.Tokenize(tc.line))
+ require.Equal(t, tc.line, got)
+ })
+ }
+ }
+}
+
+func BenchmarkPunctuationTokenizer(b *testing.B) {
+	tokenizer := newPunctuationTokenizer()
+
+ for _, tt := range testCases {
+ tc := tt
+ b.Run(tc.name, func(b *testing.B) {
+ b.ResetTimer()
+ b.ReportAllocs()
+ for i := 0; i < b.N; i++ {
+ tokenizer.Tokenize(tc.line)
}
})
}
diff --git a/pkg/pattern/drain/log_cluster.go b/pkg/pattern/drain/log_cluster.go
index af5932d16f706..cffff3abe5215 100644
--- a/pkg/pattern/drain/log_cluster.go
+++ b/pkg/pattern/drain/log_cluster.go
@@ -11,16 +11,18 @@ import (
)
type LogCluster struct {
- id int
- Size int
- Tokens []string
- Stringer func([]string) string
- Chunks Chunks
+ id int
+ Size int
+ Tokens []string
+ TokenState interface{}
+ Stringer func([]string, interface{}) string
+
+ Chunks Chunks
}
func (c *LogCluster) String() string {
if c.Stringer != nil {
- return c.Stringer(c.Tokens)
+ return c.Stringer(c.Tokens, c.TokenState)
}
return strings.Join(c.Tokens, " ")
}
diff --git a/pkg/pattern/drain/testdata/grafana-ruler.txt b/pkg/pattern/drain/testdata/grafana-ruler.txt
new file mode 100644
index 0000000000000..54b6854d9e172
--- /dev/null
+++ b/pkg/pattern/drain/testdata/grafana-ruler.txt
@@ -0,0 +1,50000 @@
+logger=ngalert.state.manager.persist user=20177 slug=paddledash t=2024-05-29T13:44:15.806655602Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.805113753Z caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=103548 slug=gen2 t=2024-05-29T13:44:15.805016017Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.804597912Z caller=remote_instance_store.go:51 user=297794 slug=leanix msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.802571162Z caller=remote_instance_store.go:51 user=900395 slug=jcla1234 msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.801740193Z caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=679029 slug=joveoprodaws instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.800327814Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.799945019Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.799932951Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager.persist user=612525 slug=adleyeview t=2024-05-29T13:44:15.799982989Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager.persist user=679831 slug=joveostageaws t=2024-05-29T13:44:15.798839218Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager.persist user=14927 slug=rstsoftware t=2024-05-29T13:44:15.798496844Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=26.340653ms
+level=debug ts=2024-05-29T13:44:15.797668756Z caller=remote_instance_store.go:51 user=516613 slug=blackrocktp msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.797275166Z caller=remote_instance_store.go:51 user=868411 slug=cmpladnp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=228733 slug=csmoney instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.796750449Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=84535 slug=arweave instance= t=2024-05-29T13:44:15.796640981Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=84535 slug=arweave t=2024-05-29T13:44:15.796542294Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=wcs9-tds-devus-jenkins-w, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-jenkins-w-6c6cb984d8-qrpm7, uid=d229ff35-bf4d-4bb5-8791-60b0a3bebca8" t=2024-05-29T13:44:15.796130498Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=vault, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65" t=2024-05-29T13:44:15.796062736Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=vault, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d" t=2024-05-29T13:44:15.795990925Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.795593051Z caller=remote_instance_store.go:51 user=636704 slug=nmartin2 msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-web, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivets-web-6fc5b6f9c5-6spps, uid=b75b2425-e66c-4869-94f7-cfecc5d4c935" t=2024-05-29T13:44:15.795680228Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=630397 slug=tatin instance= t=2024-05-29T13:44:15.795542988Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-web, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthts-web-57f5b6f56b-bdmh9, uid=8f6b5224-94ce-4f5d-ba08-03f9fc2f572f" t=2024-05-29T13:44:15.795397351Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=102553 slug=flownative instance= t=2024-05-29T13:44:15.795103234Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.scheduler user=102553 slug=flownative version=1 fingerprint=4ad9e35be0f80ca3 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.79499903Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.794695854s EvaluationString:}]" duration=116.038803ms
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-app, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthts-app-989d79dbb-lwc9p, uid=a6cfb6f8-edfe-4c28-8435-acb6d54f3599" t=2024-05-29T13:44:15.795068084Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-app, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivets-app-5b7ff985b6-c59n4, uid=4d533dcf-4e6c-4ffe-a0fc-caa6e617c8c8" t=2024-05-29T13:44:15.794992842Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-app, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivets-app-5b7ff985b6-4nw98, uid=855af10e-bb32-49c1-8a47-0fba814e437c" t=2024-05-29T13:44:15.794979122Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthts-app-989d79dbb-lwc9p, uid=a6cfb6f8-edfe-4c28-8435-acb6d54f3599" t=2024-05-29T13:44:15.794753977Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivets-app-5b7ff985b6-4nw98, uid=855af10e-bb32-49c1-8a47-0fba814e437c" t=2024-05-29T13:44:15.794631294Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=tdsqausauthts-utils, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthts-utils-59f788556b-xrfpx, uid=d195032e-df70-4672-bc90-79692b1411af" t=2024-05-29T13:44:15.794322337Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=tdsqalivets-utils, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivets-utils-75b748978f-r2vkj, uid=1d39d0d7-d483-427b-ba91-45d897674698" t=2024-05-29T13:44:15.794284465Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=tdsdevauthts-utils, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthts-utils-7f54f8d7b4-njddr, uid=352d7df2-7832-41f3-ad3e-cbe1a060c968" t=2024-05-29T13:44:15.793876757Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=tdsdevauthts-utils, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthts-utils-7f54f8d7b4-njddr, uid=352d7df2-7832-41f3-ad3e-cbe1a060c968" t=2024-05-29T13:44:15.793846886Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-slave-5558869975-m6fb5, uid=ae9f0c0b-7cd7-4591-81f4-3e4ba7b1edbf" t=2024-05-29T13:44:15.793416796Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60" t=2024-05-29T13:44:15.793216421Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=172772 slug=ppbtradingtribe instance="datasource_uid=p06gSxS7k, ref_id=A" t=2024-05-29T13:44:15.793080651Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=172772 slug=ppbtradingtribe t=2024-05-29T13:44:15.79304032Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833" t=2024-05-29T13:44:15.792980836Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227" t=2024-05-29T13:44:15.792956616Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833" t=2024-05-29T13:44:15.792793782Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=412141 slug=sharethrough t=2024-05-29T13:44:15.79278731Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=412141 slug=sharethrough instance="datasource_uid=pFBylkiVz, ref_id=Swap Usage for Alert" t=2024-05-29T13:44:15.792775073Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=412141 slug=sharethrough instance="datasource_uid=pFBylkiVz, ref_id=Swap Usage for Alert" t=2024-05-29T13:44:15.792756002Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a" t=2024-05-29T13:44:15.79227249Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7" t=2024-05-29T13:44:15.791954212Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d" t=2024-05-29T13:44:15.791863631Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=node, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=exo-devca-cicd-288-zcl2b-9ws4z-nzgt7, uid=ca99b6a7-f08f-475a-adf6-dcf8c8936eed" t=2024-05-29T13:44:15.791738618Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=mesh, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-mesh-669cd5c69d-nxcdx, uid=9211d299-849a-4c56-95be-633b10fffe3c" t=2024-05-29T13:44:15.791660547Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=kaniko1, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=exo-devca-cicd-288-zcl2b-9ws4z-nzgt7, uid=ca99b6a7-f08f-475a-adf6-dcf8c8936eed" t=2024-05-29T13:44:15.791526073Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.791206493Z caller=remote_instance_store.go:51 user=439643 slug=swirldslabspreproduction msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=jnlp, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=exo-devca-cicd-288-zcl2b-9ws4z-nzgt7, uid=ca99b6a7-f08f-475a-adf6-dcf8c8936eed" t=2024-05-29T13:44:15.791456811Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=helm, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=exo-devca-cicd-288-zcl2b-9ws4z-nzgt7, uid=ca99b6a7-f08f-475a-adf6-dcf8c8936eed" t=2024-05-29T13:44:15.79134478Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.791225391Z caller=remote_instance_store.go:51 user=692010 slug=mercariusprod msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=692010 slug=mercariusprod instance="datasource_uid=gfds-prometheus-wrapper, ref_id=B" t=2024-05-29T13:44:15.791129917Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=692010 slug=mercariusprod instance="datasource_uid=gfds-prometheus-wrapper, ref_id=B" t=2024-05-29T13:44:15.791114955Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=692010 slug=mercariusprod instance="datasource_uid=gfds-prometheus-wrapper, ref_id=B" t=2024-05-29T13:44:15.791100679Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.791027617Z caller=remote_instance_store.go:51 user=662363 slug=facephi msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=15338 slug=rstsoftwarerc instance= t=2024-05-29T13:44:15.790951656Z level=debug msg="Keeping state" state=Alerting previous_ends_at=2024-05-29T13:47:00Z next_ends_at=2024-05-29T13:48:00Z
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7" t=2024-05-29T13:44:15.791010011Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.scheduler user=70430 slug=dapperlabs version=1 fingerprint=65a68c433031b4e0 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.790598463Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.78875161s EvaluationString:}]" duration=1.693079007s
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-vjfzl, uid=15c097da-a56b-4fbd-a66d-477c24638f49" t=2024-05-29T13:44:15.790593572Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-fr7t4, uid=2b8456c8-297f-4763-8f00-f8076b542d7c" t=2024-05-29T13:44:15.790564871Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=crs-app, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-742jp, uid=cfbfe1d6-c33f-4618-8876-ffec0b52cb52" t=2024-05-29T13:44:15.790229164Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=crs-app, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-zjtfs, uid=81ec285c-745f-47c5-9ae8-771f4a5ba74c" t=2024-05-29T13:44:15.790085591Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=crs-app, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-fclrn, uid=9c835179-b911-4296-a481-705af4228a18" t=2024-05-29T13:44:15.79004016Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=crs-app, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-w8mlk, uid=451a3b1f-50fa-4418-bb67-e273772ceced" t=2024-05-29T13:44:15.789860646Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-742jp, uid=cfbfe1d6-c33f-4618-8876-ffec0b52cb52" t=2024-05-29T13:44:15.78960996Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-c9gzg, uid=f20540a5-8b7c-48c2-96a2-a264404f0afa" t=2024-05-29T13:44:15.789407005Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-w8mlk, uid=451a3b1f-50fa-4418-bb67-e273772ceced" t=2024-05-29T13:44:15.789216261Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager.persist user=371756 slug=asapp t=2024-05-29T13:44:15.789039986Z level=debug msg="Saving alert states" count=3 max_state_save_concurrency=1
+logger=ngalert.state.manager user=371756 slug=asapp instance="company_marker=rcn" t=2024-05-29T13:44:15.789011178Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=371756 slug=asapp instance="company_marker=optimumfixed" t=2024-05-29T13:44:15.788904162Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788771442Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788761161Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788725479Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=371756 slug=asapp instance="company_marker=dish" t=2024-05-29T13:44:15.788780219Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788701028Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788691799Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.78866505Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788646347Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788639897Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.7886198Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d" t=2024-05-29T13:44:15.78870732Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.7885482Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.78854173Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788522663Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788502704Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788472468Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788464205Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.78841334Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788374794Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788330559Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788320822Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.788310995Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=473762 slug=intentiq t=2024-05-29T13:44:15.788261794Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.scheduler user=473762 slug=intentiq version=35 fingerprint=0bc4b6f46a852420 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.788200731Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.787878355s EvaluationString:}]" duration=15.345212ms
+logger=ngalert.scheduler user=893151 slug=cmtdsnp version=1 fingerprint=0db5016ab8b43d15 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.781149137Z level=debug msg="Alert rule evaluated" results="[{Instance:cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761 Value:0xc03af9a7a0} B:{Var:B Labels:cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761 Value:0xc03af9a800} C:{Var:C Labels:cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761 Value:0xc03af9a860}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764303759s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761} value=0 ], [ var='C' 
labels={cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761} value=0 ]} {Instance:cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761 Value:0xc03af9a910} B:{Var:B Labels:cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761 Value:0xc03af9a958} C:{Var:C Labels:cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761 Value:0xc03af9a9c0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.76433916s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, 
uid=ed798931-5824-4e7d-9f54-3225a6307761} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=argocd-application-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-application-controller-0, uid=ed798931-5824-4e7d-9f54-3225a6307761} value=0 ]} {Instance:cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2 Value:0xc03af9aa70} B:{Var:B Labels:cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2 Value:0xc03af9ae60} C:{Var:C Labels:cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2 Value:0xc03af9aeb0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764355151s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2} value=0 ], [ var='B' labels={cluster=tds-np-cluster, 
container=argocd-applicationset-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2} value=0 ]} {Instance:cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2 Value:0xc03af9afa8} B:{Var:B Labels:cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2 Value:0xc03af9b008} C:{Var:C Labels:cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2 Value:0xc03af9b070}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764373772s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.58.138:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=argocd-applicationset-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-applicationset-controller-5877955b59-h8bhh, uid=805c5578-2751-48e3-8be3-baadb00840c2} value=0 ]} {Instance:cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e Value:0xc03af9b118} B:{Var:B Labels:cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e Value:0xc03af9b178} C:{Var:C Labels:cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e Value:0xc03af9b1e8}] EvaluatedAt:2024-05-29 
13:44:10 +0000 UTC EvaluationDuration:5.764387162s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e} value=0 ]} {Instance:cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e Value:0xc030f5c0d0} B:{Var:B Labels:cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e Value:0xc030f5c160} C:{Var:C Labels:cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.58.138:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e Value:0xc030f5c1c8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764408623s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=argocd-notifications-controller, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-notifications-controller-64bb8dcf46-trlct, uid=5089875c-5641-46ab-b20e-ce2aa25c7f2e} value=0 ]} {Instance:cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a Value:0xc030f5c3e8} B:{Var:B Labels:cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a Value:0xc030f5c4f8} C:{Var:C 
Labels:cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a Value:0xc030f5c360}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764423283s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a} value=0 ]} {Instance:cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a Value:0xc030f5c680} B:{Var:B Labels:cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a Value:0xc030f5c6e8} C:{Var:C 
Labels:cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a Value:0xc030f5c5d8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764439703s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=argocd-repo-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-repo-server-665d6b7b59-m5929, uid=c3b49347-3f87-4aad-842c-77225cad682a} value=0 ]} {Instance:cluster=tds-np-cluster, container=argocd-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=argocd-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026 Value:0xc030f5c7d0} B:{Var:B Labels:cluster=tds-np-cluster, container=argocd-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026 Value:0xc030f5c840} C:{Var:C Labels:cluster=tds-np-cluster, 
container=argocd-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026 Value:0xc030f5c8a0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764453224s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=argocd-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=argocd-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=argocd-server, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026} value=0 ]} {Instance:cluster=tds-np-cluster, container=argocd-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=argocd-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026 Value:0xc030f5c9f8} B:{Var:B Labels:cluster=tds-np-cluster, container=argocd-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026 Value:0xc030f5ca80} C:{Var:C Labels:cluster=tds-np-cluster, container=argocd-server, instance=172.30.58.138:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026 Value:0xc030f5c968}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764468124s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=argocd-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=argocd-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=argocd-server, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-server-5986f74c99-p8nsr, uid=dbf73d6f-0e51-458d-8d72-a63982e78026} value=0 ]} {Instance:cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50 Value:0xc030f5cba0} B:{Var:B Labels:cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50 Value:0xc030f5cc08} C:{Var:C Labels:cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50 Value:0xc030f5cb40}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764487155s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50} value=0 ]} {Instance:cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50 Value:0xc030f5ccd8} B:{Var:B Labels:cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50 Value:0xc030f5cd40} C:{Var:C Labels:cluster=tds-np-cluster, container=cdap-sandbox, 
instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50 Value:0xc030f5cda8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764503805s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=cdap-sandbox, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlive-cdap-sandbox-deployment-855b79f56b-2ll62, uid=6b9f044a-473a-4c6d-8934-3647088e4e50} value=0 ]} {Instance:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd Value:0xc030f5cf88} B:{Var:B Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd Value:0xc030f5ce50} C:{Var:C Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd Value:0xc030f5cf38}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764517896s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd} value=0 ]} {Instance:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d Value:0xc030f5d158} B:{Var:B Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d Value:0xc030f5d1b0} C:{Var:C Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d Value:0xc030f5d208}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764531786s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d} value=0 ]} {Instance:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0 Value:0xc030f5d2a8} B:{Var:B Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0 Value:0xc030f5d420} C:{Var:C Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, 
uid=2142b2fa-c391-4493-9f17-ecab47b386c0 Value:0xc030f5d480}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764558377s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0} value=0 ]} {Instance:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce Value:0xc030f5d520} B:{Var:B Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce Value:0xc030f5d578} C:{Var:C Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce Value:0xc030f5d5c8}] EvaluatedAt:2024-05-29 
13:44:10 +0000 UTC EvaluationDuration:5.764572477s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce} value=0 ]} {Instance:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65 Value:0xc030f5d670} B:{Var:B Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65 Value:0xc030f5d6c8} C:{Var:C Labels:cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65 Value:0xc030f5d718}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764585898s EvaluationString:[ var='A' 
labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65} value=0 ]} {Instance:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd Value:0xc030f5d7c0} B:{Var:B Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd Value:0xc030f5d820} C:{Var:C Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd Value:0xc030f5d920}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764600318s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-vault-consu-76f5467596-mm9qn, uid=58f8624a-e2fd-4f88-bc15-5d8e8b90febd} value=0 ]} {Instance:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d Value:0xc030f5d9d0} B:{Var:B Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d Value:0xc030f5da30} C:{Var:C Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d Value:0xc030f5da80}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764613268s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d} value=0 ]} {Instance:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0 Value:0xc030f5db20} B:{Var:B Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0 Value:0xc030f5db80} C:{Var:C Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0 Value:0xc030f5dbd0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764631609s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, 
uid=2142b2fa-c391-4493-9f17-ecab47b386c0} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-vault-consul-758ff7bfdd-8dzmg, uid=2142b2fa-c391-4493-9f17-ecab47b386c0} value=0 ]} {Instance:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce Value:0xc030f5dd70} B:{Var:B Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce Value:0xc030f5ddc8} C:{Var:C Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce Value:0xc030f5dc80}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764645279s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce} value=0 ], [ var='B' labels={cluster=tds-np-cluster, 
container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-vault-consul-86db767f5-ttpm8, uid=546be242-3313-46c4-b57c-8a87f4e320ce} value=0 ]} {Instance:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65 Value:0xc030f5def8} B:{Var:B Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65 Value:0xc030f5df50} C:{Var:C Labels:cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65 Value:0xc030f5dfa8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.76465865s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, 
namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=consul, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-vault-cons-85f6c4f87d-tpdmj, uid=2aa0ac70-24e0-4323-a7c7-61fead7b0c65} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthcrs-app-56bc9488c4-tkppb, uid=19609211-799f-4cc4-a64c-3362d923f769 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthcrs-app-56bc9488c4-tkppb, uid=19609211-799f-4cc4-a64c-3362d923f769 Value:0xc007df22f8} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthcrs-app-56bc9488c4-tkppb, uid=19609211-799f-4cc4-a64c-3362d923f769 Value:0xc007df2170} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthcrs-app-56bc9488c4-tkppb, uid=19609211-799f-4cc4-a64c-3362d923f769 Value:0xc007df22a0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.76467379s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthcrs-app-56bc9488c4-tkppb, uid=19609211-799f-4cc4-a64c-3362d923f769} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthcrs-app-56bc9488c4-tkppb, uid=19609211-799f-4cc4-a64c-3362d923f769} value=0 ], [ var='C' 
labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthcrs-app-56bc9488c4-tkppb, uid=19609211-799f-4cc4-a64c-3362d923f769} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivecrs-app-55f8998d6c-qdgqv, uid=d87cce81-2d06-4007-b374-2a5a83761d1d State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivecrs-app-55f8998d6c-qdgqv, uid=d87cce81-2d06-4007-b374-2a5a83761d1d Value:0xc007df2448} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivecrs-app-55f8998d6c-qdgqv, uid=d87cce81-2d06-4007-b374-2a5a83761d1d Value:0xc007df2498} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivecrs-app-55f8998d6c-qdgqv, uid=d87cce81-2d06-4007-b374-2a5a83761d1d Value:0xc007df24e8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764687301s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivecrs-app-55f8998d6c-qdgqv, uid=d87cce81-2d06-4007-b374-2a5a83761d1d} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivecrs-app-55f8998d6c-qdgqv, uid=d87cce81-2d06-4007-b374-2a5a83761d1d} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=tdsdevlivecrs-app-55f8998d6c-qdgqv, uid=d87cce81-2d06-4007-b374-2a5a83761d1d} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-5l6mh, uid=9a76f4f5-2b38-47cf-a5a4-5a833d2759f2 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-5l6mh, uid=9a76f4f5-2b38-47cf-a5a4-5a833d2759f2 Value:0xc007df2b10} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-5l6mh, uid=9a76f4f5-2b38-47cf-a5a4-5a833d2759f2 Value:0xc007df2ed8} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-5l6mh, uid=9a76f4f5-2b38-47cf-a5a4-5a833d2759f2 Value:0xc007df3318}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764702761s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-5l6mh, uid=9a76f4f5-2b38-47cf-a5a4-5a833d2759f2} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-5l6mh, uid=9a76f4f5-2b38-47cf-a5a4-5a833d2759f2} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-5l6mh, uid=9a76f4f5-2b38-47cf-a5a4-5a833d2759f2} value=0 ]} {Instance:cluster=tds-np-cluster, 
container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-w8mlk, uid=451a3b1f-50fa-4418-bb67-e273772ceced State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-w8mlk, uid=451a3b1f-50fa-4418-bb67-e273772ceced Value:0xc007df3a90} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-w8mlk, uid=451a3b1f-50fa-4418-bb67-e273772ceced Value:0xc007df3e50} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-w8mlk, uid=451a3b1f-50fa-4418-bb67-e273772ceced Value:0xc039f66028}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764718062s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-w8mlk, uid=451a3b1f-50fa-4418-bb67-e273772ceced} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-w8mlk, uid=451a3b1f-50fa-4418-bb67-e273772ceced} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthcrs-app-778c79f6f6-w8mlk, uid=451a3b1f-50fa-4418-bb67-e273772ceced} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=tdsdevuslivecrs-app-6d66bccddd-hgx74, uid=8c4026f5-4ee1-4123-bc8f-57aeb412532a State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivecrs-app-6d66bccddd-hgx74, uid=8c4026f5-4ee1-4123-bc8f-57aeb412532a Value:0xc039f660c8} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivecrs-app-6d66bccddd-hgx74, uid=8c4026f5-4ee1-4123-bc8f-57aeb412532a Value:0xc039f66120} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivecrs-app-6d66bccddd-hgx74, uid=8c4026f5-4ee1-4123-bc8f-57aeb412532a Value:0xc039f66178}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764731882s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivecrs-app-6d66bccddd-hgx74, uid=8c4026f5-4ee1-4123-bc8f-57aeb412532a} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivecrs-app-6d66bccddd-hgx74, uid=8c4026f5-4ee1-4123-bc8f-57aeb412532a} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivecrs-app-6d66bccddd-hgx74, uid=8c4026f5-4ee1-4123-bc8f-57aeb412532a} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthcrs-app-76b8477c-sphlt, uid=9ce20c7e-8191-4dce-af1d-0a7635d43f53 State:Normal Error: Results:map[] 
Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthcrs-app-76b8477c-sphlt, uid=9ce20c7e-8191-4dce-af1d-0a7635d43f53 Value:0xc039f66318} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthcrs-app-76b8477c-sphlt, uid=9ce20c7e-8191-4dce-af1d-0a7635d43f53 Value:0xc039f66368} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthcrs-app-76b8477c-sphlt, uid=9ce20c7e-8191-4dce-af1d-0a7635d43f53 Value:0xc039f662c8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764763083s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthcrs-app-76b8477c-sphlt, uid=9ce20c7e-8191-4dce-af1d-0a7635d43f53} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthcrs-app-76b8477c-sphlt, uid=9ce20c7e-8191-4dce-af1d-0a7635d43f53} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthcrs-app-76b8477c-sphlt, uid=9ce20c7e-8191-4dce-af1d-0a7635d43f53} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-c9gzg, uid=f20540a5-8b7c-48c2-96a2-a264404f0afa State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-c9gzg, uid=f20540a5-8b7c-48c2-96a2-a264404f0afa Value:0xc039f66410} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-c9gzg, uid=f20540a5-8b7c-48c2-96a2-a264404f0afa Value:0xc039f66470} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-c9gzg, uid=f20540a5-8b7c-48c2-96a2-a264404f0afa Value:0xc039f664c0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764779893s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-c9gzg, uid=f20540a5-8b7c-48c2-96a2-a264404f0afa} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-c9gzg, uid=f20540a5-8b7c-48c2-96a2-a264404f0afa} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-c9gzg, uid=f20540a5-8b7c-48c2-96a2-a264404f0afa} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-r67pm, uid=70196c6b-2e71-44d9-adfc-63c54d9b1c05 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-r67pm, 
uid=70196c6b-2e71-44d9-adfc-63c54d9b1c05 Value:0xc039f66570} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-r67pm, uid=70196c6b-2e71-44d9-adfc-63c54d9b1c05 Value:0xc039f665c8} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-r67pm, uid=70196c6b-2e71-44d9-adfc-63c54d9b1c05 Value:0xc039f66620}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764794374s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-r67pm, uid=70196c6b-2e71-44d9-adfc-63c54d9b1c05} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-r67pm, uid=70196c6b-2e71-44d9-adfc-63c54d9b1c05} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivecrs-app-b9c454b74-r67pm, uid=70196c6b-2e71-44d9-adfc-63c54d9b1c05} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-fclrn, uid=9c835179-b911-4296-a481-705af4228a18 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-fclrn, uid=9c835179-b911-4296-a481-705af4228a18 Value:0xc039f666c8} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, 
instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-fclrn, uid=9c835179-b911-4296-a481-705af4228a18 Value:0xc039f66720} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-fclrn, uid=9c835179-b911-4296-a481-705af4228a18 Value:0xc039f66778}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764809464s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-fclrn, uid=9c835179-b911-4296-a481-705af4228a18} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-fclrn, uid=9c835179-b911-4296-a481-705af4228a18} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-fclrn, uid=9c835179-b911-4296-a481-705af4228a18} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-zjtfs, uid=81ec285c-745f-47c5-9ae8-771f4a5ba74c State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-zjtfs, uid=81ec285c-745f-47c5-9ae8-771f4a5ba74c Value:0xc039f66820} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-zjtfs, 
uid=81ec285c-745f-47c5-9ae8-771f4a5ba74c Value:0xc039f66878} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-zjtfs, uid=81ec285c-745f-47c5-9ae8-771f4a5ba74c Value:0xc039f668d0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764827125s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-zjtfs, uid=81ec285c-745f-47c5-9ae8-771f4a5ba74c} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-zjtfs, uid=81ec285c-745f-47c5-9ae8-771f4a5ba74c} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthcrs-app-59b767688b-zjtfs, uid=81ec285c-745f-47c5-9ae8-771f4a5ba74c} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivecrs-app-c9d7c4c9-jqjpv, uid=07c9df8a-44c1-4891-9b39-b83b86e8919e State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivecrs-app-c9d7c4c9-jqjpv, uid=07c9df8a-44c1-4891-9b39-b83b86e8919e Value:0xc039f66970} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivecrs-app-c9d7c4c9-jqjpv, uid=07c9df8a-44c1-4891-9b39-b83b86e8919e Value:0xc039f669d0} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivecrs-app-c9d7c4c9-jqjpv, uid=07c9df8a-44c1-4891-9b39-b83b86e8919e Value:0xc039f66a28}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764844805s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivecrs-app-c9d7c4c9-jqjpv, uid=07c9df8a-44c1-4891-9b39-b83b86e8919e} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivecrs-app-c9d7c4c9-jqjpv, uid=07c9df8a-44c1-4891-9b39-b83b86e8919e} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivecrs-app-c9d7c4c9-jqjpv, uid=07c9df8a-44c1-4891-9b39-b83b86e8919e} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-742jp, uid=cfbfe1d6-c33f-4618-8876-ffec0b52cb52 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-742jp, uid=cfbfe1d6-c33f-4618-8876-ffec0b52cb52 Value:0xc039f66b40} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-742jp, uid=cfbfe1d6-c33f-4618-8876-ffec0b52cb52 Value:0xc039f66b90} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-742jp, uid=cfbfe1d6-c33f-4618-8876-ffec0b52cb52 
Value:0xc039f66ae0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764859176s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-742jp, uid=cfbfe1d6-c33f-4618-8876-ffec0b52cb52} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-742jp, uid=cfbfe1d6-c33f-4618-8876-ffec0b52cb52} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-742jp, uid=cfbfe1d6-c33f-4618-8876-ffec0b52cb52} value=0 ]} {Instance:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-fcljp, uid=66b34df8-8f91-497d-874e-e78827970bdc State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-fcljp, uid=66b34df8-8f91-497d-874e-e78827970bdc Value:0xc039f66ce8} B:{Var:B Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-fcljp, uid=66b34df8-8f91-497d-874e-e78827970bdc Value:0xc039f66c40} C:{Var:C Labels:cluster=tds-np-cluster, container=crs-app, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthcrs-app-cc98b9c59-fcljp, uid=66b34df8-8f91-497d-874e-e78827970bdc Value:0xc039f66c98}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.764873126s EvaluationString:[ var='A' 
[... repeated alert instance entries elided: further pods in namespaces tds and tds-devops on cluster=tds-np-cluster (crs-app, dex, and frontend containers scraped via job=integrations/kubernetes/kube-state-metrics), all reporting State:Normal with value=0 for vars A, B, and C, EvaluatedAt 2024-05-29 13:44:10 +0000 UTC ...]
namespace=tds, pod=tdsdevcaauth-exo-frontend-5d5766c56b-t88vn, uid=e9432221-3850-408e-be4a-37c1f06cceec} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5d5766c56b-t88vn, uid=e9432221-3850-408e-be4a-37c1f06cceec} value=0 ]} {Instance:cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14 Value:0xc03d389038} B:{Var:B Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14 Value:0xc03d389098} C:{Var:C Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14 Value:0xc03d3890f8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.765212627s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, 
uid=735d03dc-29e0-4a03-9058-371077b57f14} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14} value=0 ]} {Instance:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-f4bwj, uid=86d8728e-14ab-409b-adc8-4d0d8c89f0d1 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-f4bwj, uid=86d8728e-14ab-409b-adc8-4d0d8c89f0d1 Value:0xc03d3892f8} B:{Var:B Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-f4bwj, uid=86d8728e-14ab-409b-adc8-4d0d8c89f0d1 Value:0xc03d389240} C:{Var:C Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-f4bwj, uid=86d8728e-14ab-409b-adc8-4d0d8c89f0d1 Value:0xc03d3892a0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.765227147s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-f4bwj, uid=86d8728e-14ab-409b-adc8-4d0d8c89f0d1} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-f4bwj, uid=86d8728e-14ab-409b-adc8-4d0d8c89f0d1} value=0 ], [ var='C' 
labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-f4bwj, uid=86d8728e-14ab-409b-adc8-4d0d8c89f0d1} value=0 ]} {Instance:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-mpmqf, uid=0bc92410-f1bd-4cb0-951e-533bae3780ea State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-mpmqf, uid=0bc92410-f1bd-4cb0-951e-533bae3780ea Value:0xc03d3893f0} B:{Var:B Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-mpmqf, uid=0bc92410-f1bd-4cb0-951e-533bae3780ea Value:0xc03d389458} C:{Var:C Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-mpmqf, uid=0bc92410-f1bd-4cb0-951e-533bae3780ea Value:0xc03d3894b8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.765246268s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-mpmqf, uid=0bc92410-f1bd-4cb0-951e-533bae3780ea} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-mpmqf, uid=0bc92410-f1bd-4cb0-951e-533bae3780ea} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=frontend, 
instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-57bb64c945-mpmqf, uid=0bc92410-f1bd-4cb0-951e-533bae3780ea} value=0 ]} {Instance:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-fr7t4, uid=2b8456c8-297f-4763-8f00-f8076b542d7c State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-fr7t4, uid=2b8456c8-297f-4763-8f00-f8076b542d7c Value:0xc03d389600} B:{Var:B Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-fr7t4, uid=2b8456c8-297f-4763-8f00-f8076b542d7c Value:0xc03d389660} C:{Var:C Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-fr7t4, uid=2b8456c8-297f-4763-8f00-f8076b542d7c Value:0xc03d389720}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.765262588s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-fr7t4, uid=2b8456c8-297f-4763-8f00-f8076b542d7c} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-fr7t4, uid=2b8456c8-297f-4763-8f00-f8076b542d7c} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, 
namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-fr7t4, uid=2b8456c8-297f-4763-8f00-f8076b542d7c} value=0 ]} {Instance:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-vjfzl, uid=15c097da-a56b-4fbd-a66d-477c24638f49 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-vjfzl, uid=15c097da-a56b-4fbd-a66d-477c24638f49 Value:0xc03d3898f8} B:{Var:B Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-vjfzl, uid=15c097da-a56b-4fbd-a66d-477c24638f49 Value:0xc03d389950} C:{Var:C Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-vjfzl, uid=15c097da-a56b-4fbd-a66d-477c24638f49 Value:0xc03d389898}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.765575208s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-vjfzl, uid=15c097da-a56b-4fbd-a66d-477c24638f49} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-vjfzl, uid=15c097da-a56b-4fbd-a66d-477c24638f49} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-vjfzl, 
uid=15c097da-a56b-4fbd-a66d-477c24638f49} value=0 ]} {Instance:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5d5766c56b-t88vn, uid=e9432221-3850-408e-be4a-37c1f06cceec State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5d5766c56b-t88vn, uid=e9432221-3850-408e-be4a-37c1f06cceec Value:0xc03d389a40} B:{Var:B Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5d5766c56b-t88vn, uid=e9432221-3850-408e-be4a-37c1f06cceec Value:0xc03d389ac0} C:{Var:C Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5d5766c56b-t88vn, uid=e9432221-3850-408e-be4a-37c1f06cceec Value:0xc03d389b28}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.765607309s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5d5766c56b-t88vn, uid=e9432221-3850-408e-be4a-37c1f06cceec} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5d5766c56b-t88vn, uid=e9432221-3850-408e-be4a-37c1f06cceec} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5d5766c56b-t88vn, uid=e9432221-3850-408e-be4a-37c1f06cceec} value=0 ]} 
{Instance:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14 Value:0xc03d389cb0} B:{Var:B Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14 Value:0xc03d389d00} C:{Var:C Labels:cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14 Value:0xc03d389c58}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.765621829s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=frontend, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauth-exo-frontend-749899bd65-rncb9, uid=735d03dc-29e0-4a03-9058-371077b57f14} value=0 ]} {Instance:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b Value:0xc01b17c000} B:{Var:B Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b Value:0xc03d389ea0} C:{Var:C Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b Value:0xc03d389f58}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.765635919s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b} value=0 ]} {Instance:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, 
uid=95295c01-4f77-4706-8fa1-6e894b1447b7 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7 Value:0xc01b17c120} B:{Var:B Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7 Value:0xc01b17c180} C:{Var:C Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7 Value:0xc01b17c0c0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.76564905s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7} value=0 ]} {Instance:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7 State:Normal Error: Results:map[] Values:map[A:{Var:A 
Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7 Value:0xc01b17c278} B:{Var:B Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7 Value:0xc01b17c2d0} C:{Var:C Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7 Value:0xc01b17c220}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.76566618s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7} value=0 ]} {Instance:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d Value:0xc01b17c378} B:{Var:B Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d Value:0xc01b17c3d0} C:{Var:C Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d Value:0xc01b17c420}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.76599224s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d} value=0 ]} {Instance:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, 
uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 Value:0xc01b17c510} B:{Var:B Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 Value:0xc01b17c570} C:{Var:C Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 Value:0xc01b17c4c0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.76601021s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2} value=0 ]} {Instance:cluster=tds-np-cluster, container=gitea, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b Value:0xc01b17c618} B:{Var:B Labels:cluster=tds-np-cluster, container=gitea, 
instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b Value:0xc01b17c668} C:{Var:C Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b Value:0xc01b17c6c0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766028111s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=gitea, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b} value=0 ]} {Instance:cluster=tds-np-cluster, container=gitea, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7 Value:0xc01b17c808} B:{Var:B Labels:cluster=tds-np-cluster, container=gitea, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7 Value:0xc0175143b0} C:{Var:C Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7 Value:0xc017514460}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.76635318s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7} value=0 ]} {Instance:cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d Value:0xc017514548} B:{Var:B Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d Value:0xc0175145c0} C:{Var:C Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d Value:0xc017514650}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766368291s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d} value=0 ]} {Instance:cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 Value:0xc017514728} B:{Var:B Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 
Value:0xc017514790} C:{Var:C Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 Value:0xc0175147f8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766381841s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2} value=0 ]} {Instance:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b Value:0xc017514918} B:{Var:B Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b Value:0xc0175149c8} C:{Var:C Labels:cluster=tds-np-cluster, container=postgres, 
instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b Value:0xc0175148b8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766396332s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-dev-git-gitea-6d9474cb7-4dg5m, uid=f44c20bf-791d-4c57-8e6d-81fdeaedec5b} value=0 ]} {Instance:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7 Value:0xc017514c70} B:{Var:B Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7 Value:0xc017514ab8} C:{Var:C Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7 Value:0xc017514bd8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766409542s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-git-gitea-5d9bbcc688-x4sv9, uid=95295c01-4f77-4706-8fa1-6e894b1447b7} value=0 ]} {Instance:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7 Value:0xc017514d38} B:{Var:B Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7 Value:0xc017514da0} C:{Var:C Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, 
uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7 Value:0xc017514e18}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766422132s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-preprod-git-git-7b5b648548-spgjg, uid=2b9b17f9-fbac-48bd-988f-31c6b76810d7} value=0 ]} {Instance:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d Value:0xc0175150d8} B:{Var:B Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d Value:0xc017514f48} C:{Var:C Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d Value:0xc017514fb8}] EvaluatedAt:2024-05-29 13:44:10 
+0000 UTC EvaluationDuration:5.766435853s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qa-git-gitea-654cd6bb87-h7jkc, uid=57d7a792-6fe8-429e-a37d-737acd090f4d} value=0 ]} {Instance:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 Value:0xc0175151a8} B:{Var:B Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 Value:0xc017515208} C:{Var:C Labels:cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2 Value:0xc017515278}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766450273s EvaluationString:[ var='A' 
labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=postgres, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-qaus-git-gitea-75dc8cd659-k86f4, uid=273f1ee9-4e21-4771-92ec-afd2b1721bb2} value=0 ]} {Instance:cluster=tds-np-cluster, container=postgresql, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=postgresql, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41 Value:0xc017515370} B:{Var:B Labels:cluster=tds-np-cluster, container=postgresql, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41 Value:0xc0175153e0} C:{Var:C Labels:cluster=tds-np-cluster, container=postgresql, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41 Value:0xc017515468}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766463784s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=postgresql, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=postgresql, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=postgresql, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41} value=0 ]} {Instance:cluster=tds-np-cluster, container=postgresql, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=postgresql, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41 Value:0xc017515558} B:{Var:B Labels:cluster=tds-np-cluster, container=postgresql, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41 Value:0xc017515610} C:{Var:C Labels:cluster=tds-np-cluster, container=postgresql, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41 Value:0xc017515710}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766478724s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=postgresql, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, 
uid=9985b825-e8e4-4f35-bbcc-287e655f0f41} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=postgresql, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=postgresql, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-strapi-postgresql-0, uid=9985b825-e8e4-4f35-bbcc-287e655f0f41} value=0 ]} {Instance:cluster=tds-np-cluster, container=redis, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=redis, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc Value:0xc017515980} B:{Var:B Labels:cluster=tds-np-cluster, container=redis, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc Value:0xc017515850} C:{Var:C Labels:cluster=tds-np-cluster, container=redis, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc Value:0xc017515900}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766493104s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=redis, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=redis, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=redis, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc} value=0 ]} {Instance:cluster=tds-np-cluster, container=redis, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=redis, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc Value:0xc017515b08} B:{Var:B Labels:cluster=tds-np-cluster, container=redis, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc Value:0xc017515b80} C:{Var:C Labels:cluster=tds-np-cluster, container=redis, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc Value:0xc017515a48}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766505865s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=redis, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=redis, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, 
uid=a3171dae-0648-434f-b9c9-068dd86699bc} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=redis, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=argocd-redis-7d8d46cc7f-m7gpl, uid=a3171dae-0648-434f-b9c9-068dd86699bc} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a Value:0xc017515c60} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a Value:0xc017515d48} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a Value:0xc017515e00}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766518805s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, 
uid=c4f14b2b-581a-4543-a848-af6e25ada58a} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b Value:0xc017515f80} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b Value:0xc017515fe0} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b Value:0xc017515ed8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766531756s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, 
uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5 Value:0xc010050190} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5 Value:0xc0100501e0} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5 Value:0xc010050238}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766543886s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd Value:0xc0100502f8} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd Value:0xc010050700} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd Value:0xc010050758}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766557896s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd Value:0xc010050850} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd Value:0xc0100508b0} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd Value:0xc010050800}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766571407s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a Value:0xc010050958} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a Value:0xc0100509b0} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a Value:0xc010051210}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766585347s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b Value:0xc0100512b8} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b Value:0xc010051310} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b Value:0xc010051368}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766601518s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevusauthsearch-app-master-546cd5cc-smnwh, uid=3bbaf094-2ea3-4764-bf8b-f9f3172e947b} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5 Value:0xc010051468} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5 Value:0xc0100514b8} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5 Value:0xc010051510}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766614698s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodauthsearch-app-master-68ffbdf94d-rbk2s, uid=a0661a71-a856-4072-b75e-9fb28aabf4d5} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd Value:0xc0100515b0} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd Value:0xc010051600} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd Value:0xc010051658}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766630558s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthsearch-app-master-77dbc97966-jhrfv, uid=8f84f0bc-5f32-4e3f-8670-9f60864759fd} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd Value:0xc0100517d0} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd Value:0xc010051730} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd Value:0xc010051780}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766647769s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-master, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqausauthsearch-app-master-6cdf9cbffc-bgsnv, uid=7cb4f9d2-b414-4070-ace3-ad51fb0f49cd} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0 Value:0xc010051c50} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0 Value:0xc010051cb0} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0 Value:0xc010051d40}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766662059s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-repeater, 
instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd Value:0xc02a7540c0} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd Value:0xc02a754110} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd Value:0xc02a754068}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.7666789s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd} value=0 ], [ var='B' 
labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679 Value:0xc02a754270} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679 Value:0xc02a7541d0} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679 Value:0xc02a754220}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.76669259s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, 
uid=e687930a-1c4b-43fd-97c3-4be17e79a679} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227 Value:0xc02a754310} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227 Value:0xc02a754368} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227 Value:0xc02a7543b8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766705791s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833 Value:0xc02a754458} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833 Value:0xc02a7544a8} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833 Value:0xc02a7544f8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766721281s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0 Value:0xc02a754600} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0 Value:0xc02a754650} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0 Value:0xc02a7545b0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766735071s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-repeater, 
instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-repeater-7f6d99ddf6-6m9wf, uid=b2e77d0f-e52b-4908-8b35-9fe3de9075a0} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd Value:0xc02a754768} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd Value:0xc02a7547c0} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd Value:0xc02a754710}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766748572s EvaluationString:[ var='A' 
labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-repeater-5596664d54-8pzh6, uid=f3687f83-6af2-4e37-b69d-9b564a2739fd} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679 Value:0xc02a754870} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679 Value:0xc02a7548e8} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679 Value:0xc02a754940}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC 
EvaluationDuration:5.766771933s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-repeater-7dc846dd7b-qtc64, uid=e687930a-1c4b-43fd-97c3-4be17e79a679} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227 Value:0xc02a7549e0} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227 Value:0xc02a754a38} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227 Value:0xc02a754aa0}] 
EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766785663s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-repeater-665b664b99-lk8ws, uid=1f513acf-ba36-4abb-a435-ca6d5400b227} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833 Value:0xc02a754b50} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833 Value:0xc02a754ba8} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, 
uid=3452a789-78d7-4e95-b885-4e862d380833 Value:0xc02a754c00}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766821454s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-repeater-5d5fdc8d98-bphrx, uid=3452a789-78d7-4e95-b885-4e862d380833} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60 Value:0xc02a754cd8} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60 Value:0xc02a754d40} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60 Value:0xc02a754db8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766832464s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a Value:0xc02a754eb0} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a Value:0xc02a754f10} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, 
pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a Value:0xc02a754f78}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766844145s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058 Value:0xc02a755090} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058 Value:0xc02a7550f8} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, 
namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058 Value:0xc02a755030}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766856295s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561 Value:0xc02a7551c0} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561 Value:0xc02a755220} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561 Value:0xc02a755280}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766868135s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6 Value:0xc02a755340} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6 Value:0xc02a7553a0} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6 Value:0xc02a755400}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766879746s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-slave-5558869975-m6fb5, uid=ae9f0c0b-7cd7-4591-81f4-3e4ba7b1edbf State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-slave-5558869975-m6fb5, uid=ae9f0c0b-7cd7-4591-81f4-3e4ba7b1edbf Value:0xc02a7554c8} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-slave-5558869975-m6fb5, uid=ae9f0c0b-7cd7-4591-81f4-3e4ba7b1edbf Value:0xc02a755540} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-slave-5558869975-m6fb5, uid=ae9f0c0b-7cd7-4591-81f4-3e4ba7b1edbf Value:0xc02a7555a8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766893946s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-slave-5558869975-m6fb5, uid=ae9f0c0b-7cd7-4591-81f4-3e4ba7b1edbf} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-slave-5558869975-m6fb5, uid=ae9f0c0b-7cd7-4591-81f4-3e4ba7b1edbf} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-slave-5558869975-m6fb5, uid=ae9f0c0b-7cd7-4591-81f4-3e4ba7b1edbf} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60 Value:0xc02a755740} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60 Value:0xc02a755670} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60 Value:0xc02a7556d0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766908177s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevlivesearch-app-slave-5f9d7fd6bc-sxjt4, uid=b332559c-562b-4c45-94cd-27d40a864a60} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a Value:0xc02a7558f0} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a Value:0xc02a755818} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, 
job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a Value:0xc02a755880}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766922317s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevuslivesearch-app-slave-5955c58b9c-j6bqw, uid=034086a0-1104-4270-bac7-4588dfa3648a} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058 Value:0xc02a755a80} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058 Value:0xc02a7559c0} C:{Var:C Labels:cluster=tds-np-cluster, container=search-app-slave, 
instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058 Value:0xc02a755a20}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766935097s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-7j6vq, uid=ae51b866-27f7-4181-afc6-1afaf8d56058} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561 Value:0xc02a755b50} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561 Value:0xc02a755bc0} C:{Var:C Labels:cluster=tds-np-cluster, 
container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561 Value:0xc02a755c38}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766947418s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdspreprodlivesearch-app-slave-6f5f8dfd8b-wcfsp, uid=f3ec083a-27fb-463b-a60b-2e1842373561} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6 State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6 Value:0xc02a755d00} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6 Value:0xc02a755d60} C:{Var:C Labels:cluster=tds-np-cluster, 
container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6 Value:0xc02a755dc0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.766960678s EvaluationString:[ var='A' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6} value=0 ], [ var='B' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6} value=0 ], [ var='C' labels={cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivesearch-app-slave-557c7fb9b4-8j777, uid=7e2d1fa7-4e1c-41aa-a4fa-d21bd117b4e6} value=0 ]} {Instance:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-slave-5558869975-m6fb5, uid=ae9f0c0b-7cd7-4591-81f4-3e4ba7b1edbf State:Normal Error: Results:map[] Values:map[A:{Var:A Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqauslivesearch-app-slave-5558869975-m6fb5, uid=ae9f0c0b-7cd7-4591-81f4-3e4ba7b1edbf Value:0xc02a755e80} B:{Var:B Labels:cluster=tds-np-cluster, container=search-app-slave, instance=172.30.58.138:8080, job=integrations/kubernetes/kube-state-me
+level=debug ts=2024-05-29T13:44:15.788087186Z caller=remote_instance_store.go:51 user=151289 slug=everflow msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=277970 slug=teckresourcestest instance= t=2024-05-29T13:44:15.786018318Z level=debug msg="Execution error state is Normal" handler=resultNormal previous_handler=resultError
+logger=ngalert.state.manager user=615392 slug=shinemetrics instance="__name__=probe_success, config_version=1715008305715867392, instance=https://api.shine.fr/v2/referrals/liveness_check, job=Liveness Check referrals-v2, probe=Amsterdam" t=2024-05-29T13:44:15.785579974Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=396586 slug=opengov instance="cluster=production, environment=production, namespace=workforce, pod=wf-replica-service-7b4df8ff7f-k4mzn" t=2024-05-29T13:44:15.785573489Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=396586 slug=opengov instance="cluster=production, environment=production, namespace=workforce, pod=wf-replica-service-7b4df8ff7f-h8r69" t=2024-05-29T13:44:15.785527862Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=396586 slug=opengov instance="cluster=production, environment=production, namespace=workforce, pod=wf-replica-service-7b4df8ff7f-9gbcz" t=2024-05-29T13:44:15.785475171Z level=debug msg="Keeping state" state=Normal
+level=error ts=2024-05-29T13:44:15.785177173Z caller=remote_rule_evaluator.go:110 user=277970 slug=teckresourcestest msg="remote evaluate failed" code=Code(422) err="failed to build query 'A': data source not found"
+logger=ngalert.state.manager user=396586 slug=opengov instance="cluster=production, environment=production, namespace=workforce, pod=wf-import-service-5c95f8f985-fdzbq" t=2024-05-29T13:44:15.785285397Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=396586 slug=opengov instance="cluster=production, environment=production, namespace=workforce, pod=wf-forecast-service-75f5ddb88d-vprqs" t=2024-05-29T13:44:15.785102836Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=396586 slug=opengov instance="cluster=production, environment=production, namespace=workforce, pod=wf-forecast-service-75f5ddb88d-nwfg2" t=2024-05-29T13:44:15.784965233Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=396586 slug=opengov instance="cluster=production, environment=production, namespace=workforce, pod=wf-export-service-66dbcf8f5b-jrc4g" t=2024-05-29T13:44:15.784820075Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.784694596Z caller=remote_instance_store.go:51 user=696798 slug=mcv msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.784612816Z caller=remote_instance_store.go:51 user=94501 slug=datastax msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=696798 slug=mcv t=2024-05-29T13:44:15.784474628Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=396586 slug=opengov instance="cluster=production, environment=production, namespace=workforce, pod=wf-budget-service-5d9b6c54f8-wjf9k" t=2024-05-29T13:44:15.784299856Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=396586 slug=opengov instance="cluster=production, environment=production, namespace=workforce, pod=wf-budget-service-5d9b6c54f8-wjf9k" t=2024-05-29T13:44:15.784287937Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.784172425Z caller=remote_instance_store.go:51 user=668587 slug=brightacceptance msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=396586 slug=opengov instance="cluster=production, environment=production, namespace=workforce, pod=budget-gateway-service-5bf9899ddb-4hj4v" t=2024-05-29T13:44:15.783938316Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=396586 slug=opengov t=2024-05-29T13:44:15.783836757Z level=debug msg="State manager processing evaluation results" resultCount=24
+level=debug ts=2024-05-29T13:44:15.783603379Z caller=remote_instance_store.go:51 user=520342 slug=atrati msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.783596503Z caller=remote_instance_store.go:51 user=180994 slug=cgmonitor msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=163513 slug=dialpad t=2024-05-29T13:44:15.78346786Z level=debug msg="Saving alert states" count=2 max_state_save_concurrency=1
+logger=ngalert.state.manager user=163513 slug=dialpad instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.783454581Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=163513 slug=dialpad instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.78344714Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=163513 slug=dialpad instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.783417446Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=163513 slug=dialpad instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.783407599Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.scheduler user=163513 slug=dialpad version=35 fingerprint=c5a97915aa68b6b4 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.783300388Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-logs, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.782775149s EvaluationString:}]" duration=52.906714ms
+level=debug ts=2024-05-29T13:44:15.782165884Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=430961 slug=solifi t=2024-05-29T13:44:15.78157326Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=14.68816ms
+logger=ngalert.state.manager user=765158 slug=stellarmenus instance="__name__=up, instance=grafana-prod, job=Step Functions" t=2024-05-29T13:44:15.781423863Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=312340 slug=lakefs t=2024-05-29T13:44:15.780992116Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.scheduler user=312340 slug=lakefs version=100 fingerprint=78b1f02b6c94c6b4 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.78084665Z level=debug msg="Alert rule evaluated" results="[{Instance:TableName=control-plane-v2 State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:TableName=control-plane-v2 Value:0xc017202a68} C:{Var:C Labels:TableName=control-plane-v2 Value:0xc017202a60}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.780331055s EvaluationString:[ var='B' labels={TableName=control-plane-v2} value=0 ], [ var='C' labels={TableName=control-plane-v2} value=0 ]}]" duration=44.56025ms
+logger=ngalert.state.manager.persist user=206107 slug=hydrolix t=2024-05-29T13:44:15.780918271Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=24.50355ms
+level=debug ts=2024-05-29T13:44:15.780526613Z caller=remote_instance_store.go:51 user=756904 slug=orbdatanfr msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.779528076Z caller=remote_instance_store.go:51 user=349736 slug=elephanthealthcare msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.778315273Z caller=remote_instance_store.go:51 user=260796 slug=expressvpn msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.777510495Z caller=remote_instance_store.go:51 user=309009 slug=elestyle msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=237629 slug=ocrolus t=2024-05-29T13:44:15.777076516Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager.persist user=430961 slug=solifi t=2024-05-29T13:44:15.775967332Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=13.075377ms
+logger=ngalert.state.manager.persist user=656158 slug=muonspacegroundprod t=2024-05-29T13:44:15.775731596Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=656158 slug=muonspacegroundprod instance="datasource_uid=a27bb067-67c3-4636-aa16-ed387b9bc21e, ref_id=ssd_used" previous_handler=resultNoData t=2024-05-29T13:44:15.775712504Z level=debug msg="Execution keep last state is Normal" handler=resultNormal
+logger=ngalert.state.manager user=806229 slug=simplisafe instance="host=ip-10-91-5-100.us-west-2.compute.internal" t=2024-05-29T13:44:15.77284151Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=14927 slug=rstsoftware instance= t=2024-05-29T13:44:15.771474347Z level=debug msg="Keeping state" state=Alerting previous_ends_at=2024-05-29T13:47:00Z next_ends_at=2024-05-29T13:48:00Z
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770768917Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770729015Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770622945Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770613173Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770598955Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770579081Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770571348Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770550889Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770387812Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770365274Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770292505Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770271125Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770231852Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770221589Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdeyrm9s020owb, ref_id=A" t=2024-05-29T13:44:15.770163201Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.scheduler user=206107 slug=hydrolix version=3 fingerprint=4ecfee11a8a54653 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.7700428Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=fdeyrm9s020owb, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.769733229s EvaluationString:}]" duration=123.571101ms
+level=info ts=2024-05-29T13:44:15.769608655Z caller=remote_alert_sender.go:94 user=4947 slug=mediamath host=mediamath-grafana-http.hosted-grafana.svc.cluster.local.:10000 addr=10.145.156.57:10000 msg="sending alerts to grafana" rule_org_id=1 rule_uid=ddbhsq1zf0gsle alerts=1
+logger=ngalert.state.manager.persist user=250150 slug=bizagi t=2024-05-29T13:44:15.769599016Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=12.924338ms
+logger=ngalert.scheduler user=404375 slug=cbeanalytics version=2 fingerprint=ccf0a14cbed23fee attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.76775728Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.767411307s EvaluationString:}]" duration=16.800948ms
+level=info ts=2024-05-29T13:44:15.767623342Z caller=grafana.go:247 user=396586 slug=opengov msg="rules manager rule groups request" path=/prometheus/api/v1/rules grafana_org_id=1 query="limit_alerts=16" groups=40 alerts=0
+logger=ngalert.scheduler user=491157 slug=prd01wr version=2 fingerprint=165f2fee356ad8f8 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.767433563Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.76711132s EvaluationString:}]" duration=21.075989ms
+level=debug ts=2024-05-29T13:44:15.767074048Z caller=remote_instance_store.go:51 user=80938 slug=fispan msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.766874668Z level=debug msg="Keeping state" state=Normal
+level=error ts=2024-05-29T13:44:15.766358493Z caller=remote_rule_evaluator.go:110 user=432323 slug=lithic msg="remote evaluate failed" code=Code(422) err="failed to parse expression 'B': reduction avg not implemented"
+level=debug ts=2024-05-29T13:44:15.765364182Z caller=remote_instance_store.go:51 user=381989 slug=vanoordacf msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=328755 slug=infogrideu instance="ServiceName=sensor-planning-api" t=2024-05-29T13:44:15.763868946Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.764331698Z caller=remote_instance_store.go:51 user=430961 slug=solifi msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=430961 slug=solifi t=2024-05-29T13:44:15.764288258Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=453308 slug=hyperzodprod instance= t=2024-05-29T13:44:15.763944033Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=453308 slug=hyperzodprod instance= t=2024-05-29T13:44:15.763924718Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.763473101Z caller=remote_instance_store.go:51 user=687021 slug=heviai msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.762876862Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.762989195Z caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.762895982Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.762846442Z level=debug msg="Execution error state is Normal" handler=resultNormal previous_handler=resultError
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.762835984Z level=debug msg="Setting next state" handler=resultError
+level=debug ts=2024-05-29T13:44:15.76276045Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.761866075Z caller=remote_instance_store.go:51 user=882448 slug=bookbookspace1 msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.761779898Z caller=remote_instance_store.go:51 user=531208 slug=knosc msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=446790 slug=empowereco instance="instance=stargaze" t=2024-05-29T13:44:15.761521942Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=446790 slug=empowereco instance="instance=jackal" t=2024-05-29T13:44:15.761415347Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=446790 slug=empowereco instance="instance=jackal" t=2024-05-29T13:44:15.761399917Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.761178995Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.759196238Z caller=remote_instance_store.go:51 user=639839 slug=silae msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.759118674Z caller=remote_instance_store.go:51 user=260796 slug=expressvpn msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.758652888Z caller=remote_instance_store.go:51 user=868411 slug=cmpladnp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=642786 slug=sophoscomnsg instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.758263356Z level=debug msg="Keeping state" state=NoData previous_ends_at=2024-05-29T13:47:10Z next_ends_at=2024-05-29T13:48:10Z
+level=debug ts=2024-05-29T13:44:15.757924854Z caller=remote_instance_store.go:51 user=502468 slug=gmawater msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.757443058Z caller=remote_instance_store.go:51 user=414522 slug=scaleops msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="datasource_uid=fdg5sm3oacbnkc, ref_id=A" t=2024-05-29T13:44:15.756384898Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.scheduler user=206107 slug=hydrolix version=3 fingerprint=f72cb230217f9c02 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.756213335Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=fdg5sm3oacbnkc, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.755874102s EvaluationString:}]" duration=54.049352ms
+logger=ngalert.state.manager.persist user=328755 slug=infogrideu t=2024-05-29T13:44:15.755491937Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager.persist user=163513 slug=dialpad t=2024-05-29T13:44:15.755393099Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.753795376Z caller=remote_instance_store.go:51 user=18798 slug=smsportal msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.753194967Z caller=remote_instance_store.go:51 user=520342 slug=atrati msg="calling SaveAlertInstance"
+level=info ts=2024-05-29T13:44:15.753056343Z caller=grafana.go:247 user=884866 slug=cnonumerique msg="rules manager rule groups request" path=/prometheus/api/v1/rules grafana_org_id=1 query="limit_alerts=15&state=firing&state=pending&state=error" groups=10 alerts=0
+logger=ngalert.state.manager.persist user=698963 slug=lemonade t=2024-05-29T13:44:15.751862342Z level=debug msg="Saving alert states" count=2 max_state_save_concurrency=1
+logger=ngalert.state.manager user=698963 slug=lemonade instance="app=home-risk, pod=home-risk-668d54b448-jfx7r" t=2024-05-29T13:44:15.75185027Z level=debug msg="Keeping state" state=Normal
+Error parsing panelUID for alert annotation ruleID=433 dash= actual= error="strconv.ParseInt: parsing \"\": invalid syntax"
+logger=ngalert.scheduler user=698963 slug=lemonade version=5 fingerprint=b5b925e753db6a58 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.751653578Z level=debug msg="Alert rule evaluated" results="[{Instance:app=home-risk, pod=home-risk-668d54b448-f4hll State:Normal Error: Results:map[] Values:map[QUERY:{Var:QUERY Labels:app=home-risk, pod=home-risk-668d54b448-f4hll Value:0xc036c6dfc0} THRESHOLD:{Var:THRESHOLD Labels:app=home-risk, pod=home-risk-668d54b448-f4hll Value:0xc036c6df80}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.751349421s EvaluationString:[ var='QUERY' labels={app=home-risk, pod=home-risk-668d54b448-f4hll} value=0 ], [ var='THRESHOLD' labels={app=home-risk, pod=home-risk-668d54b448-f4hll} value=0 ]} {Instance:app=home-risk, pod=home-risk-668d54b448-jfx7r State:Normal Error: Results:map[] Values:map[QUERY:{Var:QUERY Labels:app=home-risk, pod=home-risk-668d54b448-jfx7r Value:0xc010006010} THRESHOLD:{Var:THRESHOLD Labels:app=home-risk, pod=home-risk-668d54b448-jfx7r Value:0xc010006070}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.751363919s EvaluationString:[ var='QUERY' labels={app=home-risk, pod=home-risk-668d54b448-f4hll} value=0 ], [ var='THRESHOLD' labels={app=home-risk, pod=home-risk-668d54b448-jfx7r} value=0 ]}]" duration=51.034272ms
+level=info ts=2024-05-29T13:44:15.75166588Z caller=remote_alert_sender.go:94 user=191376 slug=abalabuha host=abalabuha-grafana-http.hosted-grafana.svc.cluster.local.:10000 addr=10.152.120.77:10000 msg="sending alerts to grafana" rule_org_id=1 rule_uid=y-G__2Onk alerts=1
+logger=ngalert.state.manager.persist user=191376 slug=abalabuha t=2024-05-29T13:44:15.7515138Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=19.418151ms
+logger=ngalert.state.manager user=516847 slug=signit instance= t=2024-05-29T13:44:15.751108277Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.scheduler user=516847 slug=signit version=28 fingerprint=5e9b2f15ba72108e attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.750990512Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[A:{Var:A Labels: Value:0xc04cb68eb8} B:{Var:B Labels: Value:0xc04cb68f30} C:{Var:C Labels: Value:0xc04cb68f38}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.75063771s EvaluationString:[ var='A' labels={} value=25.314814814814525 ], [ var='B' labels={} value=25.314814814814525 ], [ var='C' labels={} value=0 ]}]" duration=21.376286ms
+logger=ngalert.state.manager.persist user=698963 slug=lemonade t=2024-05-29T13:44:15.750883505Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=214309 slug=spenmo instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.75064387Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=214309 slug=spenmo instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.750630523Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=250150 slug=bizagi instance= t=2024-05-29T13:44:15.750623536Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.scheduler user=214309 slug=spenmo version=277 fingerprint=fca3cec3a4df409e attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.750488744Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-logs, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.750146121s EvaluationString:}]" duration=39.999951ms
+logger=ngalert.scheduler user=250150 slug=bizagi version=58 fingerprint=5d2c306009ecd05a attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.750471677Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.75027071s EvaluationString:}]" duration=399.007287ms
+logger=ngalert.state.manager user=465668 slug=xpressinfra instance= t=2024-05-29T13:44:15.750541675Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.749221346Z caller=remote_instance_store.go:51 user=900395 slug=jcla1234 msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=856040 slug=kuady t=2024-05-29T13:44:15.748670215Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=10.801796ms
+level=debug ts=2024-05-29T13:44:15.746308437Z caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.746013121Z caller=remote_instance_store.go:51 user=308298 slug=xbto msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=642786 slug=sophoscomnsg instance="namespace=reserve-resource" t=2024-05-29T13:44:15.745871096Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=642786 slug=sophoscomnsg instance="namespace=komodor" t=2024-05-29T13:44:15.745702534Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=642786 slug=sophoscomnsg instance="namespace=argocd" t=2024-05-29T13:44:15.745590653Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=698963 slug=lemonade t=2024-05-29T13:44:15.745326664Z level=debug msg="Saving alert states done" count=2 max_state_save_concurrency=1 duration=107.085363ms
+level=debug ts=2024-05-29T13:44:15.745015631Z caller=remote_instance_store.go:51 user=633501 slug=y2engineering msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.744748336Z caller=remote_instance_store.go:51 user=412779 slug=microstrategy msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=882448 slug=bookbookspace1 instance="datasource_uid=grafanacloud-logs, ref_id=Number of Exception Logs" t=2024-05-29T13:44:15.743723748Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=882448 slug=bookbookspace1 instance="datasource_uid=grafanacloud-logs, ref_id=Number of Exception Logs" t=2024-05-29T13:44:15.743651906Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=882448 slug=bookbookspace1 instance="datasource_uid=grafanacloud-logs, ref_id=Number of Exception Logs" t=2024-05-29T13:44:15.743632896Z level=debug msg="Setting next state" handler=resultNoData
+level=info ts=2024-05-29T13:44:15.743522425Z caller=remote_alert_sender.go:94 user=622339 slug=lendbr host=lendbr-grafana-http.hosted-grafana.svc.cluster.local.:10000 addr=10.146.117.83:10000 msg="sending alerts to grafana" rule_org_id=1 rule_uid=ddk7e8hti9pmoc alerts=1
+logger=ngalert.state.manager.persist user=622339 slug=lendbr t=2024-05-29T13:44:15.742695906Z level=debug msg="Saving alert states done" count=3 max_state_save_concurrency=1 duration=48.313161ms
+level=debug ts=2024-05-29T13:44:15.742687997Z caller=remote_instance_store.go:51 user=447897 slug=mysten msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=421567 slug=nexx360 t=2024-05-29T13:44:15.742332798Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=4947 slug=mediamath instance="datasource_uid=000000020, ref_id=A,B" t=2024-05-29T13:44:15.742054174Z level=debug msg="Keeping state" state=NoData previous_ends_at=2024-05-29T13:47:10Z next_ends_at=2024-05-29T13:48:10Z
+logger=ngalert.scheduler user=4947 slug=mediamath version=1 fingerprint=dde0eed59739c93d attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.741957263Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=000000020, ref_id=A,B State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.741674269s EvaluationString:}]" duration=37.835978ms
+logger=ngalert.state.manager.persist user=426229 slug=accelbyte t=2024-05-29T13:44:15.741237667Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager.persist user=707607 slug=obi t=2024-05-29T13:44:15.738358799Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=301090 slug=racktopsystems instance="customer_number=CN00014B, is_vm=false, scope=public, stability=Release, system_serial=RT0001II, version=23.6.0.195" t=2024-05-29T13:44:15.740217929Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.738111564Z caller=remote_instance_store.go:51 user=729654 slug=bmsmonitoring msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.739998314Z caller=remote_instance_store.go:51 user=76255 slug=benzinga msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=76255 slug=benzinga t=2024-05-29T13:44:15.739955852Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=301090 slug=racktopsystems instance="customer_number=CN00011S, is_vm=false, scope=public, stability=release, system_serial=RT0001CI, version=23.6.1.263" t=2024-05-29T13:44:15.739502448Z level=debug msg="Setting next state" handler=resultNormal
+level=info ts=2024-05-29T13:44:15.739403982Z caller=remote_alert_sender.go:94 user=78401 slug=ayadav6 host=ayadav6-grafana-http.hosted-grafana.svc.cluster.local.:10000 addr=10.152.83.20:10000 msg="sending alerts to grafana" rule_org_id=1 rule_uid=EVMYgIw7k alerts=1
+logger=ngalert.state.manager user=475799 slug=dpdcz instance="stream_name=DX_CUSTOMERS" t=2024-05-29T13:44:15.738873417Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=301090 slug=racktopsystems instance="customer_number=CN0000XZ, is_vm=false, scope=public, stability=release, system_serial=RT0001AK, version=23.2.0.54" t=2024-05-29T13:44:15.738745928Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager.persist user=856040 slug=kuady t=2024-05-29T13:44:15.737868329Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager.persist user=265756 slug=vowfood t=2024-05-29T13:44:15.737426394Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=14.073081ms
+level=debug ts=2024-05-29T13:44:15.736607645Z caller=remote_instance_store.go:51 user=531208 slug=knosc msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=301090 slug=racktopsystems instance="customer_number=CN000001, is_vm=false, scope=public, stability=release, system_serial=RT0000XS, version=23.4.6.50" t=2024-05-29T13:44:15.7358482Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.732159013Z caller=remote_instance_store.go:51 user=668587 slug=brightacceptance msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.732103713Z caller=remote_instance_store.go:51 user=687021 slug=heviai msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=191376 slug=abalabuha t=2024-05-29T13:44:15.732089299Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=info ts=2024-05-29T13:44:15.732032482Z caller=remote_image_capturer.go:61 user=191376 slug=abalabuha rule_org_id=1 rule_uid=y-G__2Onk dashboard=PuXkhsuMk panel=46 msg="skipping screenshot for tenant" error="rpc error: code = Code(422) desc = screenshots unavailable"
+level=debug ts=2024-05-29T13:44:15.731345009Z caller=remote_instance_store.go:51 user=868411 slug=cmpladnp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=824501 slug=bendingspoons t=2024-05-29T13:44:15.73125327Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=191376 slug=abalabuha t=2024-05-29T13:44:15.73120469Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.731095634Z caller=remote_instance_store.go:51 user=196013 slug=inmediasoftware msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=371756 slug=asapp t=2024-05-29T13:44:15.73068826Z level=debug msg="Saving alert states done" count=4 max_state_save_concurrency=1 duration=75.358914ms
+level=debug ts=2024-05-29T13:44:15.729790503Z caller=remote_instance_store.go:51 user=297794 slug=leanix msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.726530699Z caller=remote_instance_store.go:51 user=469851 slug=yello msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.726228577Z caller=remote_instance_store.go:51 user=900395 slug=jcla1234 msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.726114377Z caller=remote_instance_store.go:51 user=260796 slug=expressvpn msg="calling SaveAlertInstance"
+logger=ngalert.scheduler user=696798 slug=mcv version=1 fingerprint=66174db2c4744fdc attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.725557472Z level=debug msg="Alert rule evaluated" results="[{Instance:name=keepLastValue(eadp.gos.torch.prod.bf-2021-xbsx-gen5.Users_in_Game,5) Query State:Normal Error: Results:map[] Values:map[Breaches:{Var:Breaches Labels: Value:0xc0b894f2c8} IgnoreBelow:{Var:IgnoreBelow Labels: Value:0xc0b894f2f0} Threshold:{Var:Threshold Labels: Value:0xc0b894f2f8} compare:{Var:compare Labels:name=keepLastValue(eadp.gos.torch.prod.bf-2021-xbsx-gen5.Users_in_Game,5) Query Value:0xc0b894f338} sum:{Var:sum Labels:name=keepLastValue(eadp.gos.torch.prod.bf-2021-xbsx-gen5.Users_in_Game,5) Query Value:0xc0b894f2b0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.723851807s EvaluationString:[ var='Breaches' labels={} value=1 ], [ var='IgnoreBelow' labels={} value=2000 ], [ var='Threshold' labels={} value=-6 ], [ var='compare' labels={name=keepLastValue(eadp.gos.torch.prod.bf-2021-xbsx-gen5.Users_in_Game,5) Query} value=0 ], [ var='sum' labels={name=keepLastValue(eadp.gos.torch.prod.bf-2021-xbsx-gen5.Users_in_Game,5) Query} value=0 ]}]" duration=52.989132ms
+level=debug ts=2024-05-29T13:44:15.724947703Z caller=remote_instance_store.go:51 user=472647 slug=planet msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=472647 slug=planet instance="service=urlsigning@file" t=2024-05-29T13:44:15.724890383Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=472647 slug=planet t=2024-05-29T13:44:15.724634441Z level=debug msg="State manager processing evaluation results" resultCount=6
+level=debug ts=2024-05-29T13:44:15.723391804Z caller=remote_instance_store.go:51 user=265756 slug=vowfood msg="calling SaveAlertInstance"
+level=info ts=2024-05-29T13:44:15.722156468Z caller=remote_alert_sender.go:94 user=681509 slug=momotfilip host=momotfilip-grafana-http.hosted-grafana.svc.cluster.local.:10000 addr=10.9.49.130:10000 msg="sending alerts to grafana" rule_org_id=1 rule_uid=c4e36770-a2b2-4f1d-b6fa-06d42192d97f alerts=1
+level=debug ts=2024-05-29T13:44:15.721337791Z caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.721145336Z caller=remote_instance_store.go:51 user=868411 slug=cmpladnp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-us-ewr-comcast-01" t=2024-05-29T13:44:15.720985738Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-sg-sin-gsl-03" t=2024-05-29T13:44:15.719138772Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=386776 slug=rcsworks t=2024-05-29T13:44:15.718788495Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=16.316604ms
+level=debug ts=2024-05-29T13:44:15.718505831Z caller=remote_instance_store.go:51 user=668587 slug=brightacceptance msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-sg-sin-gsl-01" t=2024-05-29T13:44:15.718531261Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.717782339Z caller=remote_instance_store.go:51 user=308298 slug=xbto msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-pl-wmi-dp-01" t=2024-05-29T13:44:15.716988805Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=78401 slug=ayadav6 instance= t=2024-05-29T13:44:15.716638922Z level=debug msg="Execution error state is Alerting" handler=resultAlerting previous_handler=resultError
+logger=ngalert.state.manager user=78401 slug=ayadav6 instance= t=2024-05-29T13:44:15.716627833Z level=debug msg="Setting next state" handler=resultError
+logger=ngalert.scheduler user=78401 slug=ayadav6 version=1 fingerprint=cfcc861d56cfafaa attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.716566198Z level=error msg="Failed to evaluate rule" error="failed to build query 'A': can not get data source by uid, uid is empty" duration=638.761µs
+level=error ts=2024-05-29T13:44:15.716534985Z caller=remote_rule_evaluator.go:110 user=78401 slug=ayadav6 msg="remote evaluate failed" code=Code(422) err="failed to build query 'A': can not get data source by uid, uid is empty"
+logger=ngalert.state.manager.persist user=250150 slug=bizagi t=2024-05-29T13:44:15.715890778Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=14.425512ms
+level=debug ts=2024-05-29T13:44:15.715719772Z caller=remote_instance_store.go:51 user=371756 slug=asapp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-no-osl-glesys-02" t=2024-05-29T13:44:15.71526246Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.714611148Z caller=remote_instance_store.go:51 user=502468 slug=gmawater msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=173730 slug=nikon t=2024-05-29T13:44:15.714090022Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=19.327805ms
+level=debug ts=2024-05-29T13:44:15.712676426Z caller=remote_instance_store.go:51 user=177465 slug=fairtiq msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-hk-hkg-dp-01" t=2024-05-29T13:44:15.711742041Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.71154909Z caller=remote_instance_store.go:51 user=636704 slug=nmartin2 msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.711403648Z caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=797387 slug=roadrunnerdev instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.711270568Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager.persist user=150145 slug=pleasant t=2024-05-29T13:44:15.710363139Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=14.209065ms
+logger=ngalert.state.manager user=398018 slug=joepegs instance= t=2024-05-29T13:44:15.709186657Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=432323 slug=lithic instance= t=2024-05-29T13:44:15.708952993Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=432323 slug=lithic instance= t=2024-05-29T13:44:15.708934947Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.708568187Z caller=remote_instance_store.go:51 user=196013 slug=inmediasoftware msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=620731 slug=masonite instance="resourceName=SLVAZQAINFMDMDQ" t=2024-05-29T13:44:15.708484406Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-fr-cdg-dp-02" t=2024-05-29T13:44:15.707892586Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.707825743Z caller=remote_instance_store.go:51 user=318387 slug=luarx msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.707785581Z caller=remote_instance_store.go:51 user=531208 slug=knosc msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-es-mad-dp-01" t=2024-05-29T13:44:15.707278492Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager.persist user=191103 slug=amazonadmin t=2024-05-29T13:44:15.707195829Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=28.966293ms
+logger=ngalert.state.manager user=681509 slug=momotfilip instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.706997882Z level=debug msg="Keeping state" state=NoData previous_ends_at=2024-05-29T13:47:10Z next_ends_at=2024-05-29T13:48:10Z
+logger=ngalert.state.manager user=681509 slug=momotfilip instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.706982413Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=681509 slug=momotfilip t=2024-05-29T13:44:15.706951481Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.705810331Z caller=remote_instance_store.go:51 user=206107 slug=hydrolix msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-ch-zrh-dp-01" t=2024-05-29T13:44:15.706010779Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-ca-yyz-dp-03" t=2024-05-29T13:44:15.705911747Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=432323 slug=lithic t=2024-05-29T13:44:15.70543289Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-br-sao-vultr-01" t=2024-05-29T13:44:15.705448335Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.7052521Z caller=remote_instance_store.go:51 user=527204 slug=lnrsusinsurancenonprod msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.705124811Z caller=remote_instance_store.go:51 user=516613 slug=blackrocktp msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.704662008Z caller=remote_instance_store.go:51 user=430961 slug=solifi msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=543654 slug=jobcloudprogrammaticprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.704406949Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=543654 slug=jobcloudprogrammaticprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.703834271Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-at-vie-dp-03" t=2024-05-29T13:44:15.703575127Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.703551954Z caller=remote_instance_store.go:51 user=868411 slug=cmpladnp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-at-vie-dp-01" t=2024-05-29T13:44:15.703241202Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.scheduler user=389502 slug=ciscoiot t=2024-05-29T13:44:15.702674797Z level=debug msg="Skip rule evaluation because it is paused"
+logger=ngalert.state.manager user=386776 slug=rcsworks instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.702432784Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=386776 slug=rcsworks instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.702408195Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=p2p-vultr-mex-ar-a01" t=2024-05-29T13:44:15.702396232Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.scheduler user=386776 slug=rcsworks version=2 fingerprint=e8763a41bede9687 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.702314783Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.702000443s EvaluationString:}]" duration=34.703757ms
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=p2p-linode-sin-id-a02" t=2024-05-29T13:44:15.701852935Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager.persist user=250150 slug=bizagi t=2024-05-29T13:44:15.701481986Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=134486 slug=podigee instance="cmd=exec, hostname=railspodigeecache-green-nbg1-02, instance=5.75.188.152:9121, job=consul_services, quantile=99, service=redis_exporter" t=2024-05-29T13:44:15.700990278Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=p2p-linode-ewr-us-a03" t=2024-05-29T13:44:15.700940238Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.700816961Z caller=remote_instance_store.go:51 user=502468 slug=gmawater msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=753403 slug=romich t=2024-05-29T13:44:15.700467507Z level=debug msg="Saving alert states done" count=28 max_state_save_concurrency=1 duration=648.573495ms
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=ipv6-linode-ewr-us-z01" t=2024-05-29T13:44:15.70050439Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=ipv6-linode-bom-in-a03" t=2024-05-29T13:44:15.7003573Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.70020854Z caller=remote_instance_store.go:51 user=756904 slug=orbdatanfr msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.700224149Z caller=remote_instance_store.go:51 user=716600 slug=microntechnology msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.699322841Z caller=remote_instance_store.go:51 user=456946 slug=menlosecurityredge msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=ipv4-vultr-cdg-fr-a06" t=2024-05-29T13:44:15.69914996Z level=debug msg="Keeping state" state=Normal
+level=debug component=discovery ts=2024-05-29T13:44:15.698798967Z caller=retry.go:58 user=529753 msg="retrying grpc request" method=/remoteruler.rules.v1.RulesService/GetByRuleGroup attempt=4
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=ipv4-m247-bru-be-a02" t=2024-05-29T13:44:15.696917347Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager.persist user=851297 slug=roadrunneruat t=2024-05-29T13:44:15.696609227Z level=debug msg="Saving alert states" count=2 max_state_save_concurrency=1
+logger=ngalert.state.manager user=851297 slug=roadrunneruat instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.696526756Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.696542311Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=851297 slug=roadrunneruat instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.696469875Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=851297 slug=roadrunneruat instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.696459244Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager.persist user=4947 slug=mediamath t=2024-05-29T13:44:15.696139788Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=38.279667ms
+logger=ngalert.state.manager user=150145 slug=pleasant instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.696129816Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=150145 slug=pleasant t=2024-05-29T13:44:15.696101307Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=ipv4-bt-fra-de-a02" t=2024-05-29T13:44:15.696083272Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.696025666Z caller=remote_instance_store.go:51 user=371756 slug=asapp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=ipv4-aruba-flr-it-b02" t=2024-05-29T13:44:15.695798386Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.695060806Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=173730 slug=nikon t=2024-05-29T13:44:15.694757251Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.694678058Z caller=remote_instance_store.go:51 user=94501 slug=datastax msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=173730 slug=nikon t=2024-05-29T13:44:15.694664217Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=622339 slug=lendbr instance="__name__=kube_horizontalpodautoscaler_status_current_replicas, cluster=prod-shared-palmdale, horizontalpodautoscaler=keda-hpa-worker-voldemort-process-cerc-optin-request, instance=grafana-cloud-monitoring-kube-state-metrics.grafana-cloud-monitoring.svc:8080, job=integrations/kubernetes/kube-state-metrics, namespace=voldemort" t=2024-05-29T13:44:15.693794553Z level=debug msg="Setting next state" handler=resultAlerting
+level=debug ts=2024-05-29T13:44:15.693430247Z caller=remote_instance_store.go:51 user=312340 slug=lakefs msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=402122 slug=leapwallet t=2024-05-29T13:44:15.69275558Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=19.231683ms
+logger=ngalert.state.manager user=697570 slug=carroteco t=2024-05-29T13:44:15.691502467Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=538355 slug=flogic t=2024-05-29T13:44:15.691176252Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.690567143Z caller=remote_instance_store.go:51 user=185895 slug=gradle msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=851297 slug=roadrunneruat instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.690415512Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.scheduler user=851297 slug=roadrunneruat version=1 fingerprint=9c1a80daca99034c attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.6902981Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.690044953s EvaluationString:}]" duration=7.138562ms
+logger=ngalert.state.manager.persist user=716600 slug=microntechnology t=2024-05-29T13:44:15.690140727Z level=debug msg="Saving alert states" count=2 max_state_save_concurrency=1
+logger=ngalert.state.manager.persist user=70430 slug=dapperlabs t=2024-05-29T13:44:15.690105787Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.scheduler user=716600 slug=microntechnology version=1 fingerprint=f8ddac07d74aede4 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.689947704Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.68981761s EvaluationString:}]" duration=8.845291ms
+logger=ngalert.state.manager user=756004 slug=jdsportsprd instance="agent_hostname=ip-10-0-101-115, instance=ip-10-0-101-115:9090, job=integrations/node_exporter" t=2024-05-29T13:44:15.689652309Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=245291 slug=pismo instance= t=2024-05-29T13:44:15.689378433Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=245291 slug=pismo t=2024-05-29T13:44:15.689361088Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.689149625Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=735588 slug=srepradnya t=2024-05-29T13:44:15.688925376Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.689000395Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.688979042Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+logger=ngalert.scheduler user=735588 slug=srepradnya version=5 fingerprint=d120aa2c0631ffcf attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.688849265Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.688612809s EvaluationString:}]" duration=7.679271ms
+level=debug ts=2024-05-29T13:44:15.688876216Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.687984649Z caller=remote_instance_store.go:51 user=668587 slug=brightacceptance msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.687856961Z caller=remote_instance_store.go:51 user=242310 slug=suzy msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=715708 slug=ggiprod t=2024-05-29T13:44:15.687779927Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=6.666104ms
+logger=ngalert.state.manager.persist user=615073 slug=origence t=2024-05-29T13:44:15.687549987Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=615073 slug=origence t=2024-05-29T13:44:15.687473282Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.687231608Z caller=remote_instance_store.go:51 user=469851 slug=yello msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.687259287Z caller=remote_instance_store.go:51 user=756904 slug=orbdatanfr msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.687212134Z caller=remote_instance_store.go:51 user=516613 slug=blackrocktp msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.687064744Z caller=remote_instance_store.go:51 user=531208 slug=knosc msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.686708769Z caller=remote_instance_store.go:51 user=636704 slug=nmartin2 msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.685900873Z caller=remote_instance_store.go:51 user=295631 slug=dapvizor msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=295631 slug=dapvizor t=2024-05-29T13:44:15.685836397Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=295631 slug=dapvizor instance="datasource_uid=ioFV1Jn4z, ref_id=A" t=2024-05-29T13:44:15.685798676Z level=debug msg="Setting next state" handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.685466716Z caller=remote_instance_store.go:51 user=900395 slug=jcla1234 msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=196413 slug=form3production instance="Region=-, ServiceLimit=Route 53 Max Health Checks, ServiceName=Route53" t=2024-05-29T13:44:15.684634669Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.684570409Z caller=remote_instance_store.go:51 user=810903 slug=vespaai msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.683241135Z caller=remote_instance_store.go:51 user=615392 slug=shinemetrics msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=344017 slug=descript t=2024-05-29T13:44:15.68282555Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=18.250146ms
+logger=ngalert.state.manager user=245291 slug=pismo instance= t=2024-05-29T13:44:15.68200537Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=111653 slug=theassociationmxp instance= t=2024-05-29T13:44:15.681443556Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.681390478Z caller=remote_instance_store.go:51 user=196013 slug=inmediasoftware msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=111653 slug=theassociationmxp t=2024-05-29T13:44:15.681400602Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.scheduler user=111653 slug=theassociationmxp version=1 fingerprint=201c78cc669a9dad attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.681321764Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.680873163s EvaluationString:}]" duration=44.157366ms
+logger=ngalert.state.manager user=715708 slug=ggiprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.68096289Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.scheduler user=715708 slug=ggiprod version=1 fingerprint=80fed88493f41399 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.680868089Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.680651243s EvaluationString:}]" duration=6.764154ms
+logger=ngalert.state.manager user=171235 slug=circleslabs instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.67968682Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.679617897Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=696798 slug=mcv t=2024-05-29T13:44:15.679412762Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=9.418322ms
+level=debug ts=2024-05-29T13:44:15.678348561Z caller=remote_instance_store.go:51 user=309009 slug=elestyle msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.678163434Z caller=remote_instance_store.go:51 user=350037 slug=morpho msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.67825407Z caller=remote_instance_store.go:51 user=191103 slug=amazonadmin msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=191103 slug=amazonadmin t=2024-05-29T13:44:15.678225911Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.scheduler user=250150 slug=bizagi version=1 fingerprint=55a1ecdf8e408796 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.677660052Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.677403174s EvaluationString:}]" duration=156.396427ms
+level=debug ts=2024-05-29T13:44:15.677444326Z caller=remote_instance_store.go:51 user=423441 slug=outgoinc msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.675315795Z caller=remote_instance_store.go:51 user=698963 slug=lemonade msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=698963 slug=lemonade t=2024-05-29T13:44:15.675271694Z level=debug msg="Saving alert states" count=2 max_state_save_concurrency=1
+logger=ngalert.state.manager user=698963 slug=lemonade instance="app=munic-device-management, pod=munic-device-management-7b66f56644-k2ntg" t=2024-05-29T13:44:15.675250196Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=698963 slug=lemonade instance="app=munic-device-management, pod=munic-device-management-7b66f56644-jtsf9" t=2024-05-29T13:44:15.675182621Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=698963 slug=lemonade t=2024-05-29T13:44:15.675119188Z level=debug msg="State manager processing evaluation results" resultCount=2
+logger=ngalert.scheduler user=698963 slug=lemonade version=1 fingerprint=404505bccc452ab1 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.675009104Z level=debug msg="Alert rule evaluated" results="[{Instance:app=munic-device-management, pod=munic-device-management-7b66f56644-jtsf9 State:Normal Error: Results:map[] Values:map[QUERY:{Var:QUERY Labels:app=munic-device-management, pod=munic-device-management-7b66f56644-jtsf9 Value:0xc036c6c2f8} THRESHOLD:{Var:THRESHOLD Labels:app=munic-device-management, pod=munic-device-management-7b66f56644-jtsf9 Value:0xc036c6c320}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.674583749s EvaluationString:[ var='QUERY' labels={app=munic-device-management, pod=munic-device-management-7b66f56644-jtsf9} value=0 ], [ var='THRESHOLD' labels={app=munic-device-management, pod=munic-device-management-7b66f56644-jtsf9} value=0 ]} {Instance:app=munic-device-management, pod=munic-device-management-7b66f56644-k2ntg State:Normal Error: Results:map[] Values:map[QUERY:{Var:QUERY Labels:app=munic-device-management, pod=munic-device-management-7b66f56644-k2ntg Value:0xc036c6c348} THRESHOLD:{Var:THRESHOLD Labels:app=munic-device-management, pod=munic-device-management-7b66f56644-k2ntg Value:0xc036c6c370}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.674599964s EvaluationString:[ var='QUERY' labels={app=munic-device-management, pod=munic-device-management-7b66f56644-k2ntg} value=0 ], [ var='THRESHOLD' labels={app=munic-device-management, pod=munic-device-management-7b66f56644-k2ntg} value=0 ]}]" duration=42.657383ms
+logger=ngalert.state.manager.persist user=20177 slug=paddledash t=2024-05-29T13:44:15.675013012Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=22.322388ms
+level=debug ts=2024-05-29T13:44:15.674611693Z caller=remote_instance_store.go:51 user=756904 slug=orbdatanfr msg="calling SaveAlertInstance"
+logger=ngalert.scheduler user=158536 slug=clearsaleantifraude version=22 fingerprint=980185d2a9b2b90f attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.674500456Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=alert_disater_recovery_connections_counter State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.674214799s EvaluationString:}]" duration=13.531697ms
+logger=ngalert.scheduler user=402122 slug=leapwallet version=41 fingerprint=810cea1fe67883aa attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.673406118Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.673060613s EvaluationString:}]" duration=22.498891ms
+level=debug ts=2024-05-29T13:44:15.672963586Z caller=remote_instance_store.go:51 user=521139 slug=adevintamobiledepro msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=114492 slug=railsbank instance="QueueName=PROD-PLAY-RB_QUEUE_STATE_MACHINE_LEDGER-SQS" t=2024-05-29T13:44:15.670814199Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.66903228Z caller=remote_instance_store.go:51 user=166705 slug=crossnokaye msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.668981158Z caller=remote_instance_store.go:51 user=94501 slug=datastax msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=55491 slug=demandbase t=2024-05-29T13:44:15.668801851Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=23.265211ms
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="persistentvolume=pvc-8a51615fd7ec486a, persistentvolumeclaim=data-zookeeper-0" t=2024-05-29T13:44:15.668764373Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="persistentvolume=pvc-8a51615fd7ec486a, persistentvolumeclaim=data-zookeeper-0" t=2024-05-29T13:44:15.668751741Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="persistentvolume=pvc-2db0546e54a0406f, persistentvolumeclaim=data-zookeeper-1" t=2024-05-29T13:44:15.668654571Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.667553644Z caller=remote_instance_store.go:51 user=18798 slug=smsportal msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.667299798Z caller=remote_instance_store.go:51 user=177465 slug=fairtiq msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=344017 slug=descript t=2024-05-29T13:44:15.664570521Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=96267 slug=dhlcamarafria2019 instance= t=2024-05-29T13:44:15.664122392Z level=debug msg="Setting next state" handler=resultError
+logger=ngalert.state.manager user=96267 slug=dhlcamarafria2019 t=2024-05-29T13:44:15.664080874Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.663771094Z caller=remote_instance_store.go:51 user=487988 slug=microstrategyits msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.66300213Z caller=remote_instance_store.go:51 user=22398 slug=sunfolding msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.662970569Z caller=remote_instance_store.go:51 user=612525 slug=adleyeview msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.662183408Z caller=remote_instance_store.go:51 user=738479 slug=gohero msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.662145529Z caller=remote_instance_store.go:51 user=442934 slug=arqit msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.662121694Z caller=remote_instance_store.go:51 user=687021 slug=heviai msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=844274 slug=tixity instance="diskmountid=/" t=2024-05-29T13:44:15.66206385Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.662104618Z caller=remote_image_capturer.go:54 user=22398 slug=sunfolding rule_org_id=1 rule_uid=edae5869-8fa6-4fb1-8011-9257895c3628 dashboard=UsnySUPZz panel=69 msg="rendering alert image with grafana"
+level=debug ts=2024-05-29T13:44:15.66206511Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=22398 slug=sunfolding instance="datasource_uid=grafanacloud-sunfolding, ref_id=B" t=2024-05-29T13:44:15.662015889Z level=debug msg="Execution no data state is Alerting" handler=resultAlerting previous_handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.661809738Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.661759037Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.661676906Z caller=remote_instance_store.go:51 user=350037 slug=morpho msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=22398 slug=sunfolding t=2024-05-29T13:44:15.661065735Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=359640 slug=swfseu instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.660739473Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.660687581Z caller=remote_instance_store.go:51 user=371756 slug=asapp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=4947 slug=mediamath t=2024-05-29T13:44:15.660555984Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=4947 slug=mediamath instance="datasource_uid=000000020, ref_id=A" t=2024-05-29T13:44:15.660543077Z level=debug msg="Keeping state" state=NoData previous_ends_at=2024-05-29T13:47:10Z next_ends_at=2024-05-29T13:48:10Z
+level=debug ts=2024-05-29T13:44:15.659823559Z caller=remote_instance_store.go:51 user=753403 slug=romich msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=890273 slug=cmhusqnp t=2024-05-29T13:44:15.659836767Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=12.536285ms
+level=debug ts=2024-05-29T13:44:15.659510429Z caller=remote_instance_store.go:51 user=147497 slug=rhodev msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.658739826Z caller=remote_instance_store.go:51 user=531208 slug=knosc msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.657758453Z caller=remote_instance_store.go:51 user=297794 slug=leanix msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.657916199Z caller=remote_instance_store.go:51 user=4947 slug=mediamath msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=4947 slug=mediamath instance= t=2024-05-29T13:44:15.657835317Z level=warn msg="Failed to take an image" dashboard=wG02QrzZk panel=120 error="rpc error: code = Code(422) desc = screenshots unavailable"
+level=debug ts=2024-05-29T13:44:15.657706367Z caller=remote_instance_store.go:51 user=668587 slug=brightacceptance msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.657034278Z caller=remote_image_capturer.go:54 user=4947 slug=mediamath rule_org_id=1 rule_uid=fdbhspzwx1hj7e dashboard=wG02QrzZk panel=120 msg="rendering alert image with grafana"
+logger=ngalert.scheduler user=4947 slug=mediamath version=1 fingerprint=bc3710097ab94a2e attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.656839828Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Alerting Error: Results:map[] Values:map[B0:{Var:B Labels:__name__=kube_node_spec_unschedulable, container=kube-state-metrics, endpoint=http, instance=10.233.75.27:8080, job=kube-state-metrics, namespace=prometheus, node=ord-mathco-prd023, pod=prometheus-kube-state-metrics-685b975bb7-n9tc8, service=prometheus-kube-state-metrics Value:0xc0351fe7f8} B1:{Var:B Labels:__name__=kube_node_spec_unschedulable, container=kube-state-metrics, endpoint=http, instance=10.233.75.27:8080, job=kube-state-metrics, namespace=prometheus, node=ord-mathco-prd035, pod=prometheus-kube-state-metrics-685b975bb7-n9tc8, service=prometheus-kube-state-metrics Value:0xc0351fe888}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.656547528s EvaluationString:[ var='B0' metric='kube_node_spec_unschedulable' labels={__name__=kube_node_spec_unschedulable, container=kube-state-metrics, endpoint=http, instance=10.233.75.27:8080, job=kube-state-metrics, namespace=prometheus, node=ord-mathco-prd023, pod=prometheus-kube-state-metrics-685b975bb7-n9tc8, service=prometheus-kube-state-metrics} value=1501 ], [ var='B1' metric='kube_node_spec_unschedulable' labels={__name__=kube_node_spec_unschedulable, container=kube-state-metrics, endpoint=http, instance=10.233.75.27:8080, job=kube-state-metrics, namespace=prometheus, node=ord-mathco-prd035, pod=prometheus-kube-state-metrics-685b975bb7-n9tc8, service=prometheus-kube-state-metrics} value=1501 ]}]" duration=74.557611ms
+level=debug ts=2024-05-29T13:44:15.656590808Z caller=remote_instance_store.go:51 user=206107 slug=hydrolix msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=206107 slug=hydrolix t=2024-05-29T13:44:15.656533959Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=206107 slug=hydrolix instance= t=2024-05-29T13:44:15.656504355Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=250150 slug=bizagi t=2024-05-29T13:44:15.656072492Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager.persist user=253106 slug=elenasmonitor t=2024-05-29T13:44:15.655787303Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=47.234785ms
+level=debug ts=2024-05-29T13:44:15.655543358Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=371756 slug=asapp instance="company_marker=spectrum-cable, uri=/rpc/com.asapp.schemas.product.chat.core.services.Core/PublishEvent" t=2024-05-29T13:44:15.655285113Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=371756 slug=asapp instance="company_marker=jetblue, uri=/rpc/com.asapp.schemas.product.chat.core.services.Core/PublishEvent" t=2024-05-29T13:44:15.655222173Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=371756 slug=asapp instance="company_marker=aizhomesol, uri=/rpc/com.asapp.schemas.product.chat.core.services.Core/PublishEvent" t=2024-05-29T13:44:15.655057172Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=ingress-nginx-internal-controller, namespace=ingress-nginx-internal" t=2024-05-29T13:44:15.654958991Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-rkrp-dataplatform-adapter-worker, namespace=fairtiq-rkrp-dataplatform-adapter" t=2024-05-29T13:44:15.654591697Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.65444881Z caller=remote_instance_store.go:51 user=900395 slug=jcla1234 msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-purchase-clearing-worker, namespace=fairtiq-purchase" t=2024-05-29T13:44:15.654329977Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-pricing-data-manager-web, namespace=fairtiq-pricing-data-manager" t=2024-05-29T13:44:15.65429408Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-payment-rkrp-web, namespace=fairtiq-payment-rkrp" t=2024-05-29T13:44:15.654207411Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-notification-gateway-pushnotification-worker, namespace=fairtiq-notification-gateway" t=2024-05-29T13:44:15.65387503Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-notification-gateway-pushnotification-worker, namespace=fairtiq-notification-gateway" t=2024-05-29T13:44:15.653859345Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.653739902Z caller=remote_instance_store.go:51 user=309009 slug=elestyle msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.653782908Z caller=remote_instance_store.go:51 user=180994 slug=cgmonitor msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-idle-tracker-notifier-notification-worker, namespace=fairtiq-idle-tracker-notifier" t=2024-05-29T13:44:15.653562046Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-hermes-worker, namespace=fairtiq-hermes" t=2024-05-29T13:44:15.653464684Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-hermes-worker, namespace=fairtiq-hermes" t=2024-05-29T13:44:15.653455884Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-go-worker, namespace=fairtiq-go" t=2024-05-29T13:44:15.653388396Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-fraudnonscalable-worker, namespace=fairtiq-fraud" t=2024-05-29T13:44:15.653117604Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-customer-care-worker, namespace=fairtiq-customer-care" t=2024-05-29T13:44:15.652960206Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-campaign-web, namespace=fairtiq-campaign" t=2024-05-29T13:44:15.652769084Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=20177 slug=paddledash instance="Component=currency-service, SLI=CurrencySettingsPatchAPILatency" t=2024-05-29T13:44:15.652770224Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=20177 slug=paddledash instance="Component=currency-service, SLI=CurrencySettingsPatchAPILatency" t=2024-05-29T13:44:15.652758381Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=20177 slug=paddledash t=2024-05-29T13:44:15.652719136Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.scheduler user=20177 slug=paddledash version=2 fingerprint=704656f72f8efcfe attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.652633379Z level=debug msg="Alert rule evaluated" results="[{Instance:Component=currency-service, SLI=CurrencySettingsPatchAPILatency State:Normal Error: Results:map[] Values:map[AlertCondition:{Var:AlertCondition Labels:Component=currency-service, SLI=CurrencySettingsPatchAPILatency Value:0xc0415f13e0} BurnRate:{Var:BurnRate Labels:Component=currency-service, SLI=CurrencySettingsPatchAPILatency Value:0xc0415f1420} GoodEvents:{Var:GoodEvents Labels:Component=currency-service, SLI=CurrencySettingsPatchAPILatency Value:0xc0415f1340} ValidEvents:{Var:ValidEvents Labels:Component=currency-service, SLI=CurrencySettingsPatchAPILatency Value:0xc0415f1390}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.652304888s EvaluationString:[ var='AlertCondition' labels={Component=currency-service, SLI=CurrencySettingsPatchAPILatency} value=0 ], [ var='BurnRate' labels={Component=currency-service, SLI=CurrencySettingsPatchAPILatency} value=NaN ], [ var='GoodEvents' labels={Component=currency-service, SLI=CurrencySettingsPatchAPILatency} value=0 ], [ var='ValidEvents' labels={Component=currency-service, SLI=CurrencySettingsPatchAPILatency} value=0 ]}]" duration=89.309657ms
+logger=ngalert.state.manager user=20177 slug=paddledash t=2024-05-29T13:44:15.652625724Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=fairtiq-activity-log-worker, namespace=fairtiq-activity-log" t=2024-05-29T13:44:15.652583934Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.652464516Z caller=remote_instance_store.go:51 user=297794 slug=leanix msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=external-secrets-webhook, namespace=external-secrets" t=2024-05-29T13:44:15.652486928Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=coredns, namespace=kube-system" t=2024-05-29T13:44:15.652331621Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=ciaco-pricing-pricing-worker, namespace=ciaco-pricing" t=2024-05-29T13:44:15.652255161Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=ciaco-pricing-data-installer-web, namespace=ciaco-pricing-data-installer" t=2024-05-29T13:44:15.652239073Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=ciaco-novapt-web, namespace=ciaco-novapt" t=2024-05-29T13:44:15.652157208Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=ciaco-novapt-web, namespace=ciaco-novapt" t=2024-05-29T13:44:15.652138177Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=ciaco-ml-lcidetector-worker, namespace=ciaco-ml" t=2024-05-29T13:44:15.652073787Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=ciaco-jm-worker, namespace=ciaco-jm" t=2024-05-29T13:44:15.65192881Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.651943376Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=ciaco-backend-web, namespace=ciaco-backend" t=2024-05-29T13:44:15.651810606Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.651869652Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=ciaco-access-web, namespace=ciaco-access-web" t=2024-05-29T13:44:15.651731864Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.651846379Z caller=remote_instance_store.go:51 user=536824 slug=forgerockit msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.651841071Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.651698267Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=argocd-dex-server, namespace=argocd" t=2024-05-29T13:44:15.651516908Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=staging, deployment=argocd-applicationset-controller, namespace=argocd" t=2024-05-29T13:44:15.651455946Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=sandbox, deployment=prometheus-blackbox-exporter, namespace=grafana-agent" t=2024-05-29T13:44:15.651349985Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=sandbox, deployment=kube-state-metrics, namespace=grafana-agent" t=2024-05-29T13:44:15.651174186Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=sandbox, deployment=kube-dns-autoscaler, namespace=kube-system" t=2024-05-29T13:44:15.651097648Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=sandbox, deployment=external-secrets-webhook, namespace=external-secrets" t=2024-05-29T13:44:15.650838823Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=sandbox, deployment=external-secrets, namespace=external-secrets" t=2024-05-29T13:44:15.650759574Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=sandbox, deployment=coredns, namespace=kube-system" t=2024-05-29T13:44:15.650644037Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.650634388Z caller=remote_instance_store.go:51 user=893158 slug=cmfollnp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=sandbox, deployment=argocd-commenter-controller-manager, namespace=argocd-commenter-system" t=2024-05-29T13:44:15.650350241Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=sandbox, deployment=argo-workflows-server, namespace=argo-workflows" t=2024-05-29T13:44:15.650237024Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=sandbox, deployment=amqp-sensor-5m5gq, namespace=fairtiq-hermes-ops" t=2024-05-29T13:44:15.650194694Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=sandbox, deployment=amqp-eventsource-2ggmx, namespace=fairtiq-hermes-ops" t=2024-05-29T13:44:15.650144987Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=surface, namespace=surface" t=2024-05-29T13:44:15.650061222Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=rabbitmq-exporter-prometheus-rabbitmq-exporter, namespace=rabbitmq-exporter" t=2024-05-29T13:44:15.650019593Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=733461 slug=lattice instance="instance=localhost:7400, job=sequencer-1, layer=l2, network=garnet, type=l2_safe" t=2024-05-29T13:44:15.649913802Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=733461 slug=lattice instance="instance=localhost:7400, job=follower-0, layer=l2, network=garnet, type=l2_safe" t=2024-05-29T13:44:15.64977443Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=277970 slug=teckresourcestest instance= t=2024-05-29T13:44:15.648125754Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.647943657Z caller=remote_instance_store.go:51 user=691102 slug=deluxeconfdev msg="calling SaveAlertInstance"
+level=error ts=2024-05-29T13:44:15.647956796Z caller=remote_rule_evaluator.go:110 user=277970 slug=teckresourcestest msg="remote evaluate failed" code=Code(422) err="failed to build query 'A': data source not found"
+logger=ngalert.state.manager user=691102 slug=deluxeconfdev instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.647782414Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=691102 slug=deluxeconfdev instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.647772164Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=691102 slug=deluxeconfdev t=2024-05-29T13:44:15.647758204Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.scheduler user=691102 slug=deluxeconfdev version=1 fingerprint=4bc6cea9ad43a1b0 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.647691532Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.647523313s EvaluationString:}]" duration=10.797075ms
+level=debug ts=2024-05-29T13:44:15.646441392Z caller=remote_instance_store.go:51 user=206107 slug=hydrolix msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=866972 slug=mitsubishi t=2024-05-29T13:44:15.646359339Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=11.208702ms
+logger=ngalert.state.manager user=55491 slug=demandbase instance="datasource_uid=000000350, ref_id=B,C" t=2024-05-29T13:44:15.645441926Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=journey-tracer, namespace=journey-tracer" t=2024-05-29T13:44:15.64469215Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-sts-web, namespace=fairtiq-sts" t=2024-05-29T13:44:15.644387282Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-rkrp-loyalty-web, namespace=fairtiq-rkrp-loyalty" t=2024-05-29T13:44:15.6441099Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.644148037Z caller=remote_instance_store.go:51 user=166705 slug=crossnokaye msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-realtime-tracker-metrics-connector-worker, namespace=fairtiq-realtime-tracker-metrics" t=2024-05-29T13:44:15.643959228Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.643783545Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-payment-worker, namespace=fairtiq-payment" t=2024-05-29T13:44:15.643683298Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.scheduler user=430961 slug=solifi version=4 fingerprint=d4e0cd5dd23ee714 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.643658456Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.643380509s EvaluationString:}]" duration=114.793013ms
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-payment-web, namespace=fairtiq-payment" t=2024-05-29T13:44:15.643654265Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.643627842Z caller=remote_instance_store.go:51 user=260796 slug=expressvpn msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-partner-worker, namespace=fairtiq-partner" t=2024-05-29T13:44:15.64354296Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-notification-worker, namespace=fairtiq-notification" t=2024-05-29T13:44:15.643477592Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-mobile-telemetry-web, namespace=fairtiq-mobile-telemetry" t=2024-05-29T13:44:15.643211523Z level=debug msg="Setting next state" handler=resultNormal
+level=info ts=2024-05-29T13:44:15.643139686Z caller=grafana.go:247 user=786662 slug=skycareaignoc msg="rules manager rule groups request" path=/prometheus/api/v1/rules grafana_org_id=1 query="dashboard_uid=c78d146f-55b3-43c2-bde7-5e0d2c9e9f46" groups=0 alerts=0
+logger=ngalert.state.manager user=624354 slug=truliooworkflow instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.643042979Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=624354 slug=truliooworkflow instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.643013669Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-jmlab-web, namespace=fairtiq-jmlab" t=2024-05-29T13:44:15.642712524Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=765874 slug=rhwstaging t=2024-05-29T13:44:15.642594706Z level=debug msg="Saving alert states done" count=2 max_state_save_concurrency=1 duration=15.043877ms
+level=debug ts=2024-05-29T13:44:15.642436634Z caller=remote_instance_store.go:51 user=639839 slug=silae msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-ftd-manager-web, namespace=fairtiq-ftd-manager" t=2024-05-29T13:44:15.642271922Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-ftd-manager-web, namespace=fairtiq-ftd-manager" t=2024-05-29T13:44:15.6422442Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-data-exporter, namespace=fairtiq-data-exporter" t=2024-05-29T13:44:15.641994405Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-customer-care-worker, namespace=fairtiq-customer-care" t=2024-05-29T13:44:15.641792471Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.641833477Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=fairtiq-activity-log-web, namespace=fairtiq-activity-log" t=2024-05-29T13:44:15.641386083Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=external-secrets-cert-controller, namespace=external-secrets" t=2024-05-29T13:44:15.641253207Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=ciaco-pricing-clearing-worker, namespace=ciaco-pricing" t=2024-05-29T13:44:15.640726272Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=ciaco-ml-lcidetector-worker, namespace=ciaco-ml" t=2024-05-29T13:44:15.640604507Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=ciaco-jm-worker, namespace=ciaco-jm" t=2024-05-29T13:44:15.6404576Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=ciaco-beout-simulation, namespace=ciaco-beout-simulation" t=2024-05-29T13:44:15.64037573Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=ciaco-backend-pricing-worker, namespace=ciaco-backend" t=2024-05-29T13:44:15.640249845Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.640170228Z caller=remote_instance_store.go:51 user=80938 slug=fispan msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=ciaco-access-web, namespace=ciaco-access-web" t=2024-05-29T13:44:15.6401982Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=ciaco-access, namespace=ciaco-access" t=2024-05-29T13:44:15.640167303Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=528849 slug=bitvavo t=2024-05-29T13:44:15.640136293Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.640080282Z caller=remote_instance_store.go:51 user=520342 slug=atrati msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=production, deployment=aws-load-balancer-controller, namespace=alb-controller" t=2024-05-29T13:44:15.640116508Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.639937063Z caller=remote_instance_store.go:51 user=618621 slug=sendamatic msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=fairtiq-travel-web, namespace=fairtiq-travel" t=2024-05-29T13:44:15.639456521Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.63960187Z caller=remote_instance_store.go:51 user=87052 slug=polystream msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=fairtiq-hermes-worker, namespace=fairtiq-hermes" t=2024-05-29T13:44:15.63937294Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=fairtiq-go-worker, namespace=fairtiq-go" t=2024-05-29T13:44:15.639291419Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=618621 slug=sendamatic t=2024-05-29T13:44:15.639059776Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=external-secrets-cert-controller, namespace=external-secrets" t=2024-05-29T13:44:15.638975975Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=856040 slug=kuady instance= t=2024-05-29T13:44:15.639041775Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=856040 slug=kuady instance= t=2024-05-29T13:44:15.639023035Z level=debug msg="Setting next state" handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.638902456Z caller=remote_image_capturer.go:54 user=87052 slug=polystream rule_org_id=1 rule_uid=G89Eyn_nk dashboard=Trb3KjvZz panel=43 msg="rendering alert image with grafana"
+logger=ngalert.state.manager user=87052 slug=polystream instance= t=2024-05-29T13:44:15.638839686Z level=debug msg="Setting next state" handler=resultError
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=ciaco-pricing-pricing-worker, namespace=ciaco-pricing" t=2024-05-29T13:44:15.638771269Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=ciaco-jm-worker, namespace=ciaco-jm" t=2024-05-29T13:44:15.63865192Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=ciaco-jm-web, namespace=ciaco-jm" t=2024-05-29T13:44:15.638613278Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=ciaco-backend-pricing-worker, namespace=ciaco-backend" t=2024-05-29T13:44:15.63850284Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=ciaco-access, namespace=ciaco-access" t=2024-05-29T13:44:15.638405403Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=aws-load-balancer-controller, namespace=alb-controller" t=2024-05-29T13:44:15.638366497Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=argocd-server, namespace=argocd" t=2024-05-29T13:44:15.638323881Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=argocd-notifications-controller, namespace=argocd" t=2024-05-29T13:44:15.638229576Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=177465 slug=fairtiq instance="cluster=loadtest, deployment=argocd-dex-server, namespace=argocd" t=2024-05-29T13:44:15.638180301Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.638293191Z caller=remote_instance_store.go:51 user=841587 slug=tfxprod msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.638323401Z caller=remote_instance_store.go:51 user=781424 slug=n1eko msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.638297311Z caller=remote_instance_store.go:51 user=729654 slug=bmsmonitoring msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.638032753Z caller=remote_instance_store.go:51 user=475799 slug=dpdcz msg="calling SaveAlertInstance"
+Error parsing panelUID for alert annotationruleID2665dashactualerrorstrconv.ParseInt: parsing "": invalid syntaxlogger=ngalert.scheduler user=698963 slug=lemonade version=4 fingerprint=7ddbf80cca564d67 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.638013293Z level=debug msg="Alert rule evaluated" results="[{Instance:app=arcs-api, pod=arcs-api-69cdf955-bx2x6 State:Normal Error: Results:map[] Values:map[QUERY:{Var:QUERY Labels:app=arcs-api, pod=arcs-api-69cdf955-bx2x6 Value:0xc02a9a8d58} THRESHOLD:{Var:THRESHOLD Labels:app=arcs-api, pod=arcs-api-69cdf955-bx2x6 Value:0xc02a9a8d90}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.637655643s EvaluationString:[ var='QUERY' labels={app=arcs-api, pod=arcs-api-69cdf955-bx2x6} value=0 ], [ var='THRESHOLD' labels={app=arcs-api, pod=arcs-api-69cdf955-bx2x6} value=0 ]} {Instance:app=arcs-api, pod=arcs-api-69cdf955-q7z28 State:Normal Error: Results:map[] Values:map[QUERY:{Var:QUERY Labels:app=arcs-api, pod=arcs-api-69cdf955-q7z28 Value:0xc02a9a8dd0} THRESHOLD:{Var:THRESHOLD Labels:app=arcs-api, pod=arcs-api-69cdf955-q7z28 Value:0xc02a9a8e10}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.637670298s EvaluationString:[ var='QUERY' labels={app=arcs-api, pod=arcs-api-69cdf955-q7z28} value=0 ], [ var='THRESHOLD' labels={app=arcs-api, pod=arcs-api-69cdf955-q7z28} value=0 ]}]" duration=74.661277ms
+level=debug ts=2024-05-29T13:44:15.637630696Z caller=remote_instance_store.go:51 user=269887 slug=blackrockdev msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.636897447Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=245291 slug=pismo instance= t=2024-05-29T13:44:15.636837038Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.636745144Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-us-ewr-gsl-02" t=2024-05-29T13:44:15.636389339Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-us-dca-latitude-02" t=2024-05-29T13:44:15.636009874Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-us-dca-latitude-02" t=2024-05-29T13:44:15.635999064Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.scheduler user=245291 slug=pismo version=652 fingerprint=0677a999737228a5 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.635594297Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.635351369s EvaluationString:}]" duration=401.610858ms
+logger=ngalert.state.manager.persist user=245291 slug=pismo t=2024-05-29T13:44:15.635282039Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.63521168Z caller=remote_instance_store.go:51 user=866972 slug=mitsubishi msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=866972 slug=mitsubishi t=2024-05-29T13:44:15.635148478Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=866972 slug=mitsubishi instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.635133379Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=866972 slug=mitsubishi instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.635108968Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=866972 slug=mitsubishi t=2024-05-29T13:44:15.635094737Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=25487 slug=cryptoview instance= t=2024-05-29T13:44:15.634850343Z level=debug msg="Setting next state" handler=resultError
+logger=ngalert.state.manager user=25487 slug=cryptoview t=2024-05-29T13:44:15.634562446Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.634012539Z caller=remote_instance_store.go:51 user=242310 slug=suzy msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.633555956Z caller=remote_instance_store.go:51 user=297794 slug=leanix msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.63302841Z caller=remote_instance_store.go:51 user=612213 slug=crl msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=612213 slug=crl t=2024-05-29T13:44:15.632960269Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=612213 slug=crl instance= t=2024-05-29T13:44:15.632946597Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.632406784Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.632346636Z caller=remote_instance_store.go:51 user=84360 slug=sib msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=608555 slug=ias t=2024-05-29T13:44:15.632319592Z level=info msg="Detected stale state entry" cacheID="[[\"Series\",\"query67221c8bc1874a12b7883a0c023ee3a2\"],[\"TargetGroup\",\"targetgroup/eng-ct-zh-zt-ml/7540ca45763622da\"],[\"__alert_rule_namespace_uid__\",\"a63484ae-feeb-4a65-a678-b4b6dd77dc42\"],[\"__alert_rule_uid__\",\"f681bf88-eab2-4dbf-b554-1e735e6f40eb\"],[\"alertname\",\"LTS High response time [NO]\"],[\"grafana_folder\",\"CRE\"],[\"team\",\"cre\"]]" state=Normal reason=
+logger=ngalert.state.manager user=608555 slug=ias instance="Series=query17f5ae661fb9489889ce2e64e413fb9c, TargetGroup=targetgroup/eng-ct-zh-zt-ml/7540ca45763622da" t=2024-05-29T13:44:15.632292596Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.631524411Z caller=remote_instance_store.go:51 user=668587 slug=brightacceptance msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.631513845Z caller=remote_instance_store.go:51 user=530405 slug=zetetic msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=530405 slug=zetetic instance="chain=Kusama, pool=Mermaid 1" t=2024-05-29T13:44:15.631461502Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=444728 slug=stgnextgen instance= t=2024-05-29T13:44:15.631329215Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=444728 slug=stgnextgen t=2024-05-29T13:44:15.631148964Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.scheduler user=444728 slug=stgnextgen version=2 fingerprint=e63cf2f1b06fb2fa attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.631032945Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.63070082s EvaluationString:}]" duration=362.430284ms
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-nz-akl-gsl-02" t=2024-05-29T13:44:15.631078364Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.629771663Z caller=remote_instance_store.go:51 user=916144 slug=cmjjilpd msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-nl-ams-gsl-01" t=2024-05-29T13:44:15.629620652Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.629423115Z caller=remote_instance_store.go:51 user=371756 slug=asapp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=426229 slug=accelbyte instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.62860727Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-mx-mex-vultr-01" t=2024-05-29T13:44:15.628091325Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.627945989Z caller=remote_instance_store.go:51 user=615392 slug=shinemetrics msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.62760564Z caller=remote_instance_store.go:51 user=765874 slug=rhwstaging msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=765874 slug=rhwstaging instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.627516338Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=765874 slug=rhwstaging instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.627479167Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-jp-tyo-dp-02" t=2024-05-29T13:44:15.627460827Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=662363 slug=facephi t=2024-05-29T13:44:15.627355215Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=12.853488ms
+logger=ngalert.scheduler user=313711 slug=julienbeduneau version=1 fingerprint=3d54c5e55847ac82 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.625639363Z level=debug msg="Alert rule evaluated" results="[{Instance:onprem=ALTEREGO State:Normal Error: Results:map[] Values:map[NB_LOGS_30_MIN:{Var:NB_LOGS_30_MIN Labels:onprem=ALTEREGO Value:0xc02567c9d0} NB_LOGS_BELOW_1:{Var:NB_LOGS_BELOW_1 Labels:onprem=ALTEREGO Value:0xc02567ca18} NB_LOGS_LAST_30M:{Var:NB_LOGS_LAST_30M Labels:onprem=ALTEREGO Value:0xc02567c998}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.625266858s EvaluationString:[ var='NB_LOGS_30_MIN' labels={onprem=ALTEREGO} value=3 ], [ var='NB_LOGS_BELOW_1' labels={onprem=ALTEREGO} value=0 ], [ var='NB_LOGS_LAST_30M' labels={onprem=ALTEREGO} value=3 ]}]" duration=62.546563ms
+level=debug ts=2024-05-29T13:44:15.625104477Z caller=remote_instance_store.go:51 user=475799 slug=dpdcz msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=250150 slug=bizagi t=2024-05-29T13:44:15.625072594Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=13.306162ms
+logger=ngalert.state.manager user=96036 slug=stivenc instance= t=2024-05-29T13:44:15.624756439Z level=debug msg="Execution error state is Alerting" handler=resultAlerting previous_handler=resultError
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-in-del-vultr-01" t=2024-05-29T13:44:15.624398597Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.624062997Z caller=remote_instance_store.go:51 user=530405 slug=zetetic msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.624115101Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.624023784Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.623220596Z caller=remote_instance_store.go:51 user=491157 slug=prd01wr msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=491157 slug=prd01wr t=2024-05-29T13:44:15.623162885Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=491157 slug=prd01wr instance="DatabaseClass=db.r5.4xlarge" t=2024-05-29T13:44:15.623141584Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager.persist user=155740 slug=routific t=2024-05-29T13:44:15.621760847Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-il-tlv-dp-01" t=2024-05-29T13:44:15.621665351Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager.persist user=27737 slug=edfmancapital t=2024-05-29T13:44:15.621476878Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=27737 slug=edfmancapital instance= t=2024-05-29T13:44:15.621461971Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=696798 slug=mcv instance= t=2024-05-29T13:44:15.621350998Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.620924381Z caller=remote_instance_store.go:51 user=868411 slug=cmpladnp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-gb-lon-ukserv-03" t=2024-05-29T13:44:15.619759711Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.619316823Z caller=remote_instance_store.go:51 user=504140 slug=chipotlestg msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.619019615Z caller=remote_instance_store.go:51 user=531208 slug=knosc msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.617752196Z caller=remote_instance_store.go:51 user=297794 slug=leanix msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.617445343Z caller=remote_instance_store.go:51 user=447897 slug=mysten msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.617322233Z caller=remote_instance_store.go:51 user=506300 slug=jostens msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.608576964Z caller=remote_instance_store.go:51 user=253106 slug=elenasmonitor msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.615823715Z caller=remote_instance_store.go:51 user=114492 slug=railsbank msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.615604327Z caller=remote_instance_store.go:51 user=325783 slug=bloxprod msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=114492 slug=railsbank t=2024-05-29T13:44:15.615725794Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.614826986Z caller=remote_instance_store.go:51 user=615392 slug=shinemetrics msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.614532847Z caller=remote_instance_store.go:51 user=662363 slug=facephi msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=139073 slug=cargo1 t=2024-05-29T13:44:15.614579311Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=139073 slug=cargo1 instance= t=2024-05-29T13:44:15.614549767Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=662363 slug=facephi t=2024-05-29T13:44:15.614446355Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=206107 slug=hydrolix instance="persistentvolume=pvc-76246a41d2c84388, persistentvolumeclaim=main-main-795j-pgdata" t=2024-05-29T13:44:15.613946676Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=206107 slug=hydrolix t=2024-05-29T13:44:15.613767101Z level=debug msg="State manager processing evaluation results" resultCount=9
+level=debug ts=2024-05-29T13:44:15.613747417Z caller=remote_instance_store.go:51 user=4947 slug=mediamath msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=4947 slug=mediamath t=2024-05-29T13:44:15.613687818Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.612806607Z caller=remote_instance_store.go:51 user=805026 slug=powwro11y msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=61907 slug=fullstory t=2024-05-29T13:44:15.612469987Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=67.806033ms
+Error parsing panelUID for alert annotationruleID2429dashactualerrorstrconv.ParseInt: parsing "": invalid syntaxlogger=ngalert.state.manager.persist user=698963 slug=lemonade t=2024-05-29T13:44:15.61255013Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=75.005451ms
+level=debug ts=2024-05-29T13:44:15.612194846Z caller=remote_instance_store.go:51 user=727299 slug=dellisgtechops msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.612028804Z caller=remote_instance_store.go:51 user=308298 slug=xbto msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.611532159Z caller=remote_instance_store.go:51 user=679831 slug=joveostageaws msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=489921 slug=statuscake t=2024-05-29T13:44:15.611482042Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=489921 slug=statuscake instance= t=2024-05-29T13:44:15.611466515Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=489921 slug=statuscake instance= t=2024-05-29T13:44:15.61145251Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-fr-mrs-gsl-02" t=2024-05-29T13:44:15.611158064Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.611100952Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=4947 slug=mediamath instance="datasource_uid=000000020, ref_id=campaign_margins_calc_query" t=2024-05-29T13:44:15.610803228Z level=debug msg="Execution no data state is Alerting" handler=resultAlerting previous_handler=resultNoData
+logger=ngalert.state.manager user=4947 slug=mediamath instance="datasource_uid=000000020, ref_id=campaign_margins_calc_query" t=2024-05-29T13:44:15.610793713Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager.persist user=716631 slug=sugatsune t=2024-05-29T13:44:15.610789713Z level=debug msg="Saving alert states" count=2 max_state_save_concurrency=1
+logger=ngalert.scheduler user=4947 slug=mediamath version=1 fingerprint=75f131f94e1a675d attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.610695796Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=000000020, ref_id=campaign_margins_calc_query State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.610433213s EvaluationString:}]" duration=1.647020763s
+logger=ngalert.state.manager user=716631 slug=sugatsune instance="__name__=windows_service_status, agent_hostname=sgst-qa-app01, instance=sgst-qa-app01:12345, job=integrations/windows_exporter, name=enable_pimql, status=ok" t=2024-05-29T13:44:15.610744822Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.610799605Z caller=remote_instance_store.go:51 user=430961 slug=solifi msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=430961 slug=solifi t=2024-05-29T13:44:15.610680557Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=716631 slug=sugatsune t=2024-05-29T13:44:15.61063679Z level=debug msg="State manager processing evaluation results" resultCount=2
+level=debug ts=2024-05-29T13:44:15.610402016Z caller=remote_instance_store.go:51 user=520342 slug=atrati msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.610220799Z caller=remote_instance_store.go:51 user=147497 slug=rhodev msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.608148938Z caller=remote_instance_store.go:51 user=148654 slug=tinybeans msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-fr-cdg-dp-02" t=2024-05-29T13:44:15.608381575Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.608351038Z caller=remote_instance_store.go:51 user=756904 slug=orbdatanfr msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.608208304Z caller=remote_instance_store.go:51 user=932433 slug=cmhdmxnp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=901230 slug=integromonitor instance= previous_handler=resultError t=2024-05-29T13:44:15.608044782Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=901230 slug=integromonitor instance= previous_handler=resultError t=2024-05-29T13:44:15.60803596Z level=debug msg="Execution keep last state is Normal" handler=resultNormal
+logger=ngalert.state.manager.persist user=716527 slug=newpigqa t=2024-05-29T13:44:15.607512757Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=716527 slug=newpigqa t=2024-05-29T13:44:15.607468076Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.scheduler user=716527 slug=newpigqa version=1 fingerprint=dec1ac71fe198463 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.607389965Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.60720621s EvaluationString:}]" duration=9.147846ms
+level=debug ts=2024-05-29T13:44:15.607373525Z caller=remote_instance_store.go:51 user=504140 slug=chipotlestg msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.607270503Z caller=remote_instance_store.go:51 user=94501 slug=datastax msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.605050951Z caller=remote_instance_store.go:51 user=447897 slug=mysten msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.604488143Z caller=remote_instance_store.go:51 user=800848 slug=flowfoundation msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=800848 slug=flowfoundation instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.604381208Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.scheduler user=800848 slug=flowfoundation version=2 fingerprint=17c3b456abf7e11e attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.604265322Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.603624143s EvaluationString:}]" duration=101.078902ms
+logger=ngalert.state.manager.persist user=698963 slug=lemonade t=2024-05-29T13:44:15.603877039Z level=debug msg="Saving alert states done" count=2 max_state_save_concurrency=1 duration=139.737818ms
+level=debug ts=2024-05-29T13:44:15.60168123Z caller=remote_instance_store.go:51 user=183214 slug=vectorizedio msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.601176756Z caller=remote_instance_store.go:51 user=527204 slug=lnrsusinsurancenonprod msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=615392 slug=shinemetrics instance="metric.label.memory=256, metric.label.status=error, metric.label.trigger_type=google.pubsub.topic.publish, resource.label.function_name=transactionsCreate, resource.label.project_id=shine-163816, resource.label.region=europe-west1, resource.type=cloud_function" t=2024-05-29T13:44:15.60101864Z level=debug msg="Setting next state" handler=resultAlerting
+logger=ngalert.state.manager.persist user=679831 slug=joveostageaws t=2024-05-29T13:44:15.600860408Z level=debug msg="Saving alert states" count=2 max_state_save_concurrency=1
+logger=ngalert.state.manager user=679831 slug=joveostageaws instance="datasource_uid=fe98eaba-ee1b-4198-8ef3-9181223fbc0d, ref_id=A" t=2024-05-29T13:44:15.600852209Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=679831 slug=joveostageaws instance="datasource_uid=fe98eaba-ee1b-4198-8ef3-9181223fbc0d, ref_id=A" t=2024-05-29T13:44:15.600837075Z level=debug msg="Setting next state" handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.600758677Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=615392 slug=shinemetrics instance="metric.label.memory=2048, metric.label.status=error, metric.label.trigger_type=HTTP_TRIGGER, resource.label.function_name=createBankAccount, resource.label.project_id=shine-163816, resource.label.region=europe-west1, resource.type=cloud_function" t=2024-05-29T13:44:15.60047074Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.599292668Z caller=remote_instance_store.go:51 user=112732 slug=gleamer msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=696798 slug=mcv t=2024-05-29T13:44:15.598191561Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager.persist user=714577 slug=readypactest t=2024-05-29T13:44:15.598128347Z level=debug msg="Saving alert states done" count=2 max_state_save_concurrency=1 duration=20.106554ms
+level=debug ts=2024-05-29T13:44:15.597739284Z caller=remote_instance_store.go:51 user=504140 slug=chipotlestg msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=844490 slug=prea t=2024-05-29T13:44:15.596768429Z level=debug msg="Saving alert states done" count=2 max_state_save_concurrency=1 duration=20.923654ms
+logger=ngalert.state.manager.persist user=214309 slug=spenmo t=2024-05-29T13:44:15.59616226Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=11.727891ms
+level=debug ts=2024-05-29T13:44:15.595823373Z caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.595815327Z caller=remote_instance_store.go:51 user=308298 slug=xbto msg="calling SaveAlertInstance"
+logger=ngalert.scheduler user=27737 slug=edfmancapital version=1 fingerprint=284ac7f980de3869 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.595176894Z level=debug msg="Alert rule evaluated" results="[{Instance:QueueName=os-updates-queue State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:QueueName=os-updates-queue Value:0xc00c786cc0} C:{Var:C Labels:QueueName=os-updates-queue Value:0xc00c786cc8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.594638329s EvaluationString:[ var='B' labels={QueueName=os-updates-queue} value=0 ], [ var='C' labels={QueueName=os-updates-queue} value=0 ]}]" duration=107.770697ms
+level=debug ts=2024-05-29T13:44:15.594284739Z caller=remote_instance_store.go:51 user=464973 slug=equansdatahub msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=464973 slug=equansdatahub t=2024-05-29T13:44:15.594192865Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.594088677Z caller=remote_instance_store.go:51 user=531208 slug=knosc msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=314067 slug=itsme t=2024-05-29T13:44:15.593322854Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=16.585595ms
+level=debug ts=2024-05-29T13:44:15.592470587Z caller=remote_instance_store.go:51 user=80938 slug=fispan msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=696798 slug=mcv t=2024-05-29T13:44:15.591518284Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.591393003Z caller=remote_instance_store.go:51 user=633501 slug=y2engineering msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.588912852Z caller=remote_instance_store.go:51 user=183214 slug=vectorizedio msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.588718643Z caller=remote_instance_store.go:51 user=297794 slug=leanix msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.587820947Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=430961 slug=solifi t=2024-05-29T13:44:15.587783415Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.587183858Z caller=remote_instance_store.go:51 user=447897 slug=mysten msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.586875614Z caller=remote_instance_store.go:51 user=615392 slug=shinemetrics msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.586328285Z caller=remote_instance_store.go:51 user=714577 slug=readypactest msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=698119 slug=simonsprod t=2024-05-29T13:44:15.585952839Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=698119 slug=simonsprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.585928529Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.scheduler user=698119 slug=simonsprod version=1 fingerprint=b2f2e19f869bfe7c attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.585817107Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.585645546s EvaluationString:}]" duration=6.573962ms
+level=debug ts=2024-05-29T13:44:15.585151916Z caller=remote_instance_store.go:51 user=871095 slug=cmcnginp msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.584479883Z caller=remote_instance_store.go:51 user=214309 slug=spenmo msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=214309 slug=spenmo instance="datasource_uid=grafanacloud-prom, ref_id=A,B" t=2024-05-29T13:44:15.584404032Z level=debug msg="Setting next state" handler=resultNoData
+level=info ts=2024-05-29T13:44:15.584418726Z caller=remote_alert_sender.go:94 user=54972 slug=zanglang host=zanglang-grafana-http.hosted-grafana.svc.cluster.local.:10000 addr=10.152.53.86:10000 msg="sending alerts to grafana" rule_org_id=1 rule_uid=XlGHxvWVk alerts=1
+logger=ngalert.state.manager.persist user=54972 slug=zanglang t=2024-05-29T13:44:15.584269573Z level=debug msg="Saving alert states done" count=16 max_state_save_concurrency=1 duration=236.612066ms
+level=debug ts=2024-05-29T13:44:15.582961222Z caller=remote_instance_store.go:51 user=636704 slug=nmartin2 msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=679029 slug=joveoprodaws instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.582764513Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=679029 slug=joveoprodaws instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.582754552Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=679029 slug=joveoprodaws instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.58272442Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.582512429Z caller=remote_instance_store.go:51 user=727299 slug=dellisgtechops msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=114286 slug=enverus t=2024-05-29T13:44:15.582515037Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.582284428Z caller=remote_instance_store.go:51 user=506300 slug=jostens msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.58183773Z caller=remote_instance_store.go:51 user=893158 slug=cmfollnp msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.580627408Z caller=remote_instance_store.go:51 user=196013 slug=inmediasoftware msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.579509773Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.579422204Z caller=remote_instance_store.go:51 user=308298 slug=xbto msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=698047 slug=gamesworkshop t=2024-05-29T13:44:15.579093583Z level=debug msg="Saving alert states done" count=2 max_state_save_concurrency=1 duration=18.385714ms
+level=debug ts=2024-05-29T13:44:15.578470533Z caller=remote_instance_store.go:51 user=868411 slug=cmpladnp msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.578355317Z caller=remote_instance_store.go:51 user=412779 slug=microstrategy msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.578149274Z caller=remote_instance_store.go:51 user=327842 slug=exabeam msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=714577 slug=readypactest t=2024-05-29T13:44:15.578018194Z level=debug msg="Saving alert states" count=2 max_state_save_concurrency=1
+logger=ngalert.state.manager user=714577 slug=readypactest instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.577967783Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=714577 slug=readypactest instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.577954412Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=430961 slug=solifi t=2024-05-29T13:44:15.577975026Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.577874945Z caller=remote_instance_store.go:51 user=932433 slug=cmhdmxnp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=430961 slug=solifi t=2024-05-29T13:44:15.577913723Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.577865125Z caller=remote_instance_store.go:51 user=821294 slug=bcpdesa msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=314067 slug=itsme t=2024-05-29T13:44:15.576672856Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=info ts=2024-05-29T13:44:15.576559517Z caller=remote_alert_sender.go:94 user=4947 slug=mediamath host=mediamath-grafana-http.hosted-grafana.svc.cluster.local.:10000 addr=10.145.156.57:10000 msg="sending alerts to grafana" rule_org_id=1 rule_uid=edbhspyyxf85jc alerts=1
+logger=ngalert.scheduler user=696798 slug=mcv version=1 fingerprint=b9c613047193b706 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.576343003Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=a7c4b457-6a7a-416c-b994-1407dd32ed34, ref_id=Query,QueryPrevious State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.575855469s EvaluationString:}]" duration=57.647083ms
+level=debug ts=2024-05-29T13:44:15.576077732Z caller=remote_instance_store.go:51 user=337951 slug=pawapay msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.575291792Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.575140597Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.575076873Z caller=remote_instance_store.go:51 user=527204 slug=lnrsusinsurancenonprod msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.575104014Z caller=remote_instance_store.go:51 user=530405 slug=zetetic msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.574536976Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.574105474Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=691059 slug=deluxeconfstg t=2024-05-29T13:44:15.572903187Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=691059 slug=deluxeconfstg instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.572883666Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.scheduler user=472647 slug=planet version=3 fingerprint=819450fa000f5e87 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.572799903Z level=debug msg="Alert rule evaluated" results="[{Instance:metric.name=value_num_undelivered_messages_max_max State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:metric.name=value_num_undelivered_messages_max_max Value:0xc01fd93668} C:{Var:C Labels:metric.name=value_num_undelivered_messages_max_max Value:0xc01fd936a0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.572422738s EvaluationString:[ var='B' labels={metric.name=value_num_undelivered_messages_max_max} value=0 ], [ var='C' labels={metric.name=value_num_undelivered_messages_max_max} value=0 ]}]" duration=113.773568ms
+logger=ngalert.state.historian backend=loki user=516446 slug=awarehqdev t=2024-05-29T13:44:15.570304699Z level=debug msg="Done saving alert state history batch"
+level=debug ts=2024-05-29T13:44:15.568499653Z caller=remote_instance_store.go:51 user=900395 slug=jcla1234 msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.567649428Z caller=remote_instance_store.go:51 user=668587 slug=brightacceptance msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.567454712Z caller=remote_instance_store.go:51 user=349736 slug=elephanthealthcare msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.567184469Z caller=remote_instance_store.go:51 user=308298 slug=xbto msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=689030 slug=simonsuat t=2024-05-29T13:44:15.566824783Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=9.616753ms
+level=debug ts=2024-05-29T13:44:15.566753932Z caller=remote_instance_store.go:51 user=715709 slug=mtbprod msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.566435461Z caller=remote_instance_store.go:51 user=636704 slug=nmartin2 msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-dk-cph-glesys-02" t=2024-05-29T13:44:15.56625081Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=307001 slug=hirerightdev t=2024-05-29T13:44:15.564829713Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=916144 slug=cmjjilpd instance="device=/dev/sda1, fstype=vfat, instance=puuswe1cjillwcadbs1001.jill.gcp.hclsw.internal, job=Auth-DB-Host-VM, mountpoint=/boot/efi" t=2024-05-29T13:44:15.564290762Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=916144 slug=cmjjilpd instance="device=/dev/mapper/vg01-lv01, fstype=ext4, instance=puuswe1bjillwcldbs1001.jill.gcp.hclsw.internal, job=Live-DB-Host-VM, mountpoint=/opt" t=2024-05-29T13:44:15.564137909Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=916144 slug=cmjjilpd instance="device=/dev/mapper/vg01-lv01, fstype=ext4, instance=puuswe1bjillwcldbs1001.jill.gcp.hclsw.internal, job=Live-DB-Host-VM, mountpoint=/db2inst" t=2024-05-29T13:44:15.564079818Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=916144 slug=cmjjilpd instance="device=/dev/mapper/vg01-lv01, fstype=ext4, instance=puuswe1bjillwcldbs1001.jill.gcp.hclsw.internal, job=Live-DB-Host-VM, mountpoint=/data" t=2024-05-29T13:44:15.564066428Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=916144 slug=cmjjilpd instance="device=/dev/mapper/vg01-lv01, fstype=ext4, instance=puuswe1ajillutlbst1001.jill.gcp.hclsw.internal, job=Bastion-VM-Host, mountpoint=/export" t=2024-05-29T13:44:15.563985896Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=916144 slug=cmjjilpd instance="device=/dev/mapper/vg01-lv01, fstype=ext4, instance=puuswe1ajillutlbst1001.jill.gcp.hclsw.internal, job=Bastion-VM-Host, mountpoint=/data" t=2024-05-29T13:44:15.563972566Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=472647 slug=planet t=2024-05-29T13:44:15.564082477Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-ch-zrh-dp-01" t=2024-05-29T13:44:15.563467248Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=806229 slug=simplisafe t=2024-05-29T13:44:15.563301317Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=806229 slug=simplisafe instance= t=2024-05-29T13:44:15.563277927Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.scheduler user=806229 slug=simplisafe version=41 fingerprint=c821e5b8447d5ba5 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.563151125Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[A:{Var:A Labels: Value:0xc0712e52d0} B:{Var:B Labels: Value:0xc0712e52d8}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.562228805s EvaluationString:[ var='A' labels={} value=0 ], [ var='B' labels={} value=0 ]}]" duration=48.771187ms
+level=debug ts=2024-05-29T13:44:15.562684646Z caller=remote_instance_store.go:51 user=520342 slug=atrati msg="calling SaveAlertInstance"
+logger=ngalert.state.historian backend=loki user=516446 slug=awarehqdev t=2024-05-29T13:44:15.562607707Z level=debug msg="Alert state changed creating annotation" newState="Normal (MissingSeries)" oldState=Pending
+logger=ngalert.state.historian backend=loki user=516446 slug=awarehqdev t=2024-05-29T13:44:15.562581207Z level=debug msg="Alert state changed creating annotation" newState=Pending oldState=Normal
+logger=ngalert.state.manager.persist user=4947 slug=mediamath t=2024-05-29T13:44:15.562219104Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=28.761329ms
+level=debug ts=2024-05-29T13:44:15.561869911Z caller=remote_instance_store.go:51 user=155740 slug=routific msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=155740 slug=routific instance= t=2024-05-29T13:44:15.561811986Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=155740 slug=routific instance= t=2024-05-29T13:44:15.561799685Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=155740 slug=routific t=2024-05-29T13:44:15.561751434Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-at-vie-dp-03" t=2024-05-29T13:44:15.560905487Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=698047 slug=gamesworkshop instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.560694618Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.scheduler user=698047 slug=gamesworkshop version=1 fingerprint=c4fecc013f465f0a attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.560578607Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.56042631s EvaluationString:}]" duration=6.321697ms
+logger=ngalert.state.manager user=696798 slug=mcv instance="datasource_uid=a7c4b457-6a7a-416c-b994-1407dd32ed34, ref_id=Query,QueryPrevious" t=2024-05-29T13:44:15.560584413Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=260796 slug=expressvpn instance="host=proxy-at-vie-dp-01" t=2024-05-29T13:44:15.560625874Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.scheduler user=269887 slug=blackrockdev t=2024-05-29T13:44:15.56008357Z level=debug msg="Skip rule evaluation because it is paused"
+level=debug ts=2024-05-29T13:44:15.559694524Z caller=remote_instance_store.go:51 user=723897 slug=inthepocket msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.559422601Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=350346 slug=restake t=2024-05-29T13:44:15.559114979Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=11.581134ms
+level=debug ts=2024-05-29T13:44:15.559125148Z caller=remote_instance_store.go:51 user=325146 slug=farseer msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.558937688Z caller=remote_instance_store.go:51 user=805026 slug=powwro11y msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.558773689Z caller=remote_instance_store.go:51 user=900395 slug=jcla1234 msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=245291 slug=pismo instance= t=2024-05-29T13:44:15.558026717Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager.persist user=923052 slug=magicairestricted t=2024-05-29T13:44:15.557701366Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=65.960028ms
+logger=ngalert.state.manager.persist user=689030 slug=simonsuat t=2024-05-29T13:44:15.557204299Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=689030 slug=simonsuat instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.557188779Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=689030 slug=simonsuat instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.557165879Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=715709 slug=mtbprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.556978425Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=715709 slug=mtbprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.556966214Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=715709 slug=mtbprod t=2024-05-29T13:44:15.556941774Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager.persist user=716630 slug=coapdev t=2024-05-29T13:44:15.556562148Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=7.261674ms
+level=debug ts=2024-05-29T13:44:15.556276088Z caller=remote_instance_store.go:51 user=137351 slug=pinnacle21 msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.55577084Z caller=remote_instance_store.go:51 user=80938 slug=fispan msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.555750678Z caller=remote_instance_store.go:51 user=170883 slug=datacontrol msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=146728 slug=dgc t=2024-05-29T13:44:15.555761407Z level=debug msg="Saving alert states done" count=2 max_state_save_concurrency=1 duration=116.259935ms
+level=debug ts=2024-05-29T13:44:15.554952315Z caller=remote_instance_store.go:51 user=527202 slug=lnrsusinsurancedev msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=901230 slug=integromonitor t=2024-05-29T13:44:15.554523154Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=13.407907ms
+level=debug ts=2024-05-29T13:44:15.554514546Z caller=remote_instance_store.go:51 user=521139 slug=adevintamobiledepro msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.553218015Z caller=remote_instance_store.go:51 user=430961 slug=solifi msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.553154537Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.552874755Z caller=remote_instance_store.go:51 user=932433 slug=cmhdmxnp msg="calling SaveAlertInstance"
+level=info ts=2024-05-29T13:44:15.552821217Z caller=remote_alert_sender.go:94 user=186562 slug=defier host=defier-grafana-http.hosted-grafana.svc.cluster.local.:10000 addr=10.152.62.20:10000 msg="sending alerts to grafana" rule_org_id=1 rule_uid=iKyxEnUnk alerts=1
+level=debug ts=2024-05-29T13:44:15.552404077Z caller=remote_instance_store.go:51 user=713314 slug=tpceunonprod msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=713314 slug=tpceunonprod t=2024-05-29T13:44:15.552338515Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=713314 slug=tpceunonprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.552309956Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.scheduler user=713314 slug=tpceunonprod version=1 fingerprint=a69f99a7576a4002 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.552227424Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.551971791s EvaluationString:}]" duration=7.450237ms
+level=debug ts=2024-05-29T13:44:15.551998773Z caller=remote_instance_store.go:51 user=183214 slug=vectorizedio msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=327842 slug=exabeam instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.551612858Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=327842 slug=exabeam instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.551604825Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=327842 slug=exabeam instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.551599135Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.scheduler user=327842 slug=exabeam version=357 fingerprint=78d0fe53863ad6f5 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.551515357Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.551271932s EvaluationString:}]" duration=30.490684ms
+logger=ngalert.state.manager user=707603 slug=canoneurope instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.550409227Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=707603 slug=canoneurope instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.550368297Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.scheduler user=707603 slug=canoneurope version=1 fingerprint=45b65a2d12d4383a attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.550208377Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.549924429s EvaluationString:}]" duration=10.69707ms
+level=debug ts=2024-05-29T13:44:15.550211598Z caller=remote_instance_store.go:51 user=504140 slug=chipotlestg msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.550137005Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+logger=ngalert.scheduler user=191103 slug=amazonadmin version=194 fingerprint=a6e9d70e99d781a8 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.549912853Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.549722846s EvaluationString:}]" duration=58.709103ms
+logger=ngalert.state.manager user=114492 slug=railsbank instance="datasource_uid=KZy8Z1O7k, ref_id=DLQ" t=2024-05-29T13:44:15.549924906Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.549784952Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.549648133Z caller=remote_instance_store.go:51 user=743579 slug=neotax msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=516446 slug=awarehqdev t=2024-05-29T13:44:15.549531781Z level=debug msg="Deleting alert states" count=1
+logger=ngalert.state.manager user=516446 slug=awarehqdev t=2024-05-29T13:44:15.549523281Z level=info msg="Detected stale state entry" cacheID="[[\"EndpointName\",\"screenshotdetect-deployment\"],[\"Series\",\"query206560f3514447f49eb8f0c066f71a2a\"],[\"__alert_rule_namespace_uid__\",\"D-8RyMx4z\"],[\"__alert_rule_uid__\",\"kLXBSfb4kz\"],[\"alertname\",\"screenshotdetect-deployment-no-invocations\"],[\"grafana_folder\",\"bi\"],[\"group\",\"SageMakerNoInvocations\"],[\"route\",\"team=bi\"],[\"team\",\"bi\"]]" state=Pending reason=
+logger=ngalert.state.manager user=516446 slug=awarehqdev instance="EndpointName=screenshotdetect-deployment, Series=query754aaa9a67e94850a7b1d780d85f296e" t=2024-05-29T13:44:15.54950148Z level=debug msg="Changing state" previous_state=Normal next_state=Pending previous_ends_at=2024-05-29T13:44:10Z next_ends_at=2024-05-29T13:48:10Z
+level=debug ts=2024-05-29T13:44:15.549556881Z caller=remote_instance_store.go:57 user=516446 slug=awarehqdev msg="calling DeleteAlertInstances - not implemented"
+level=debug ts=2024-05-29T13:44:15.549362927Z caller=remote_instance_store.go:51 user=516613 slug=blackrocktp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=716630 slug=coapdev t=2024-05-29T13:44:15.549253434Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.549280237Z caller=remote_instance_store.go:51 user=530405 slug=zetetic msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.549252324Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.549066482Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.548026644Z caller=remote_instance_store.go:51 user=263582 slug=prestowillis msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=350346 slug=restake t=2024-05-29T13:44:15.54752881Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager.persist user=277970 slug=teckresourcestest t=2024-05-29T13:44:15.547499213Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=35.443774ms
+logger=ngalert.state.manager user=350346 slug=restake t=2024-05-29T13:44:15.54747184Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=191103 slug=amazonadmin instance= t=2024-05-29T13:44:15.547297428Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.scheduler user=191103 slug=amazonadmin version=63 fingerprint=b8a6d2fcc07b3cc0 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.547182523Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.546947577s EvaluationString:}]" duration=172.720412ms
+level=debug ts=2024-05-29T13:44:15.546422624Z caller=remote_instance_store.go:51 user=430961 slug=solifi msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=430961 slug=solifi t=2024-05-29T13:44:15.546364761Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.546350373Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.546342184Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.546334533Z level=debug msg="Setting next state" handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.545968606Z caller=remote_instance_store.go:51 user=841587 slug=tfxprod msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.545210647Z caller=remote_instance_store.go:51 user=527204 slug=lnrsusinsurancenonprod msg="calling SaveAlertInstance"
+logger=ngalert.scheduler user=309009 slug=elestyle version=1 fingerprint=9b388181970bf5fd attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.545014176Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=zxr_3eR4z, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.544749642s EvaluationString:}]" duration=156.290885ms
+logger=ngalert.state.manager user=77750 slug=screenmeet instance= t=2024-05-29T13:44:15.544849885Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.scheduler user=77750 slug=screenmeet version=4 fingerprint=78e2a544267ce0cb attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.544766904Z level=debug msg="Alert rule evaluated" results="[{Instance: State:NoData Error: Results:map[] Values:map[B0:{Var:B Labels: Value:}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.544404172s EvaluationString:[ var='B0' metric='NoData' labels={} value=null ]}]" duration=59.062786ms
+logger=ngalert.state.manager.persist user=61907 slug=fullstory t=2024-05-29T13:44:15.544659537Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=61907 slug=fullstory t=2024-05-29T13:44:15.544566727Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager.persist user=245291 slug=pismo t=2024-05-29T13:44:15.544553134Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=26.464456ms
+logger=ngalert.scheduler user=61907 slug=fullstory version=1 fingerprint=c18f10eaa780028b attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.54444143Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[A:{Var:A Labels: Value:0xc0197d1750} B:{Var:B Labels: Value:0xc0197d1758}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.544018445s EvaluationString:[ var='A' labels={} value=71 ], [ var='B' labels={} value=0 ]}]" duration=111.768536ms
+logger=ngalert.state.manager user=332555 slug=omexomcs t=2024-05-29T13:44:15.543040161Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.542039788Z caller=remote_instance_store.go:51 user=806229 slug=simplisafe msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=901230 slug=integromonitor instance= t=2024-05-29T13:44:15.541073248Z level=debug msg="Setting next state" handler=resultError
+logger=ngalert.state.manager user=148654 slug=tinybeans instance="instance=https://accounts.tinybeans.com/health/alive" t=2024-05-29T13:44:15.540724664Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.540754674Z caller=remote_rule_evaluator.go:193 user=396586 slug=opengov msg="sending loaded metrics" count=0 reason="expression contains hysteresis command"
+level=debug ts=2024-05-29T13:44:15.540598138Z caller=remote_instance_store.go:51 user=371756 slug=asapp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=695885 slug=lululemonprod t=2024-05-29T13:44:15.539588919Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=10.884925ms
+level=info ts=2024-05-29T13:44:15.539569701Z caller=grafana.go:247 user=523256 slug=enovaafrica msg="rules manager rule groups request" path=/prometheus/api/v1/rules grafana_org_id=1 query="limit_alerts=16" groups=2 alerts=0
+level=debug ts=2024-05-29T13:44:15.5391315Z caller=remote_instance_store.go:51 user=781424 slug=n1eko msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.538975627Z caller=remote_instance_store.go:51 user=682219 slug=mcb1 msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=682219 slug=mcb1 instance="beta_kubernetes_io_arch=amd64, beta_kubernetes_io_os=linux, device=/dev/sda1, fstype=ext4, instance=10.1.9.183:9100, job=node, kubernetes_io_arch=amd64, kubernetes_io_hostname=mcb-paas-node-9, kubernetes_io_os=linux, microk8s_io_cluster=true, mountpoint=/, node_kubernetes_io_microk8s_controlplane=microk8s-controlplane" t=2024-05-29T13:44:15.538812433Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=781424 slug=n1eko instance="__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp8" t=2024-05-29T13:44:15.538806454Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=781424 slug=n1eko instance="__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp8" t=2024-05-29T13:44:15.538792232Z level=debug msg="Setting next state" handler=resultNormal
+logger=ngalert.state.manager user=781424 slug=n1eko instance="__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp6" t=2024-05-29T13:44:15.538704551Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=682219 slug=mcb1 instance="beta_kubernetes_io_arch=amd64, beta_kubernetes_io_os=linux, device=/dev/sda1, fstype=ext4, instance=10.1.82.67:9100, job=node, kubernetes_io_arch=amd64, kubernetes_io_hostname=mcb-paas-node-8, kubernetes_io_os=linux, microk8s_io_cluster=true, mountpoint=/, node_kubernetes_io_microk8s_controlplane=microk8s-controlplane" t=2024-05-29T13:44:15.538583609Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=781424 slug=n1eko instance="__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp2" t=2024-05-29T13:44:15.53860448Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=781424 slug=n1eko instance="__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp10" t=2024-05-29T13:44:15.538560668Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=781424 slug=n1eko instance="__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp1" t=2024-05-29T13:44:15.538506267Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.scheduler user=781424 slug=n1eko version=19 fingerprint=45eb457d264a4b2f attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.538088609Z level=debug msg="Alert rule evaluated" results="[{Instance:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp1 State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp1 Value:0xc067f80078} C:{Var:C Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp1 Value:0xc067f800c0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.527457579s EvaluationString:[ var='B' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp1} value=48 ], [ var='C' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp1} value=0 ]} {Instance:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp10 State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp10 Value:0xc067f80160} C:{Var:C Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp10 Value:0xc067f80198}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.527465609s EvaluationString:[ var='B' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp10} value=0 ], [ var='C' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp10} value=0 ]} {Instance:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp2 State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp2 Value:0xc067f80228} C:{Var:C Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp2 Value:0xc067f80270}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.527468779s EvaluationString:[ var='B' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp2} value=36.5 ], [ var='C' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp2} value=0 ]} {Instance:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp4 State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp4 Value:0xc067f802f0} C:{Var:C Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp4 Value:0xc067f80328}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.527473089s EvaluationString:[ var='B' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp4} value=32 ], [ var='C' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp4} value=0 ]} {Instance:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp6 State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp6 Value:0xc067f80398} C:{Var:C Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp6 Value:0xc067f803d0}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.52747595s EvaluationString:[ var='B' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp6} value=31 ], [ var='C' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp6} value=0 ]} {Instance:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp7 State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp7 Value:0xc067f80460} C:{Var:C Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp7 Value:0xc067f80498}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.527479449s EvaluationString:[ var='B' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp7} value=33.5 ], [ var='C' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp7} value=0 ]} {Instance:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp8 State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp8 Value:0xc067f80518} C:{Var:C Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp8 Value:0xc067f80550}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.5274848s EvaluationString:[ var='B' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, 
job=node, sensor=temp8} value=0 ], [ var='C' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp8} value=0 ]} {Instance:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp9 State:Normal Error: Results:map[] Values:map[B:{Var:B Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp9 Value:0xc067f805c8} C:{Var:C Labels:__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp9 Value:0xc067f80600}] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.52748992s EvaluationString:[ var='B' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp9} value=0 ], [ var='C' labels={__name__=node_hwmon_temp_celsius, chip=platform_nct6775_2592, instance=192.168.1.138:9100, job=node, sensor=temp9} value=0 ]}]" duration=63.57497ms
+logger=ngalert.state.manager user=682219 slug=mcb1 t=2024-05-29T13:44:15.538311843Z level=debug msg="State manager processing evaluation results" resultCount=3
+logger=ngalert.state.manager.persist user=538355 slug=flogic t=2024-05-29T13:44:15.537980183Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=35.443082ms
+level=debug ts=2024-05-29T13:44:15.537576509Z caller=remote_instance_store.go:51 user=698963 slug=lemonade msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=698963 slug=lemonade instance="app=underwriting-platform-events-worker, pod=underwriting-platform-events-worker-5658d54585-xbhkr" t=2024-05-29T13:44:15.537497049Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.537403529Z caller=remote_instance_store.go:51 user=35611 slug=play msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=78401 slug=ayadav6 t=2024-05-29T13:44:15.534889745Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=78401 slug=ayadav6 instance= t=2024-05-29T13:44:15.534030785Z level=debug msg="Setting next state" handler=resultError
+level=debug ts=2024-05-29T13:44:15.536792824Z caller=remote_instance_store.go:51 user=166705 slug=crossnokaye msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.536807244Z caller=remote_instance_store.go:51 user=668587 slug=brightacceptance msg="calling SaveAlertInstance"
+logger=ngalert.scheduler user=332555 slug=omexomcs version=52 fingerprint=f1e71cbc343b969f attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.536673669Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.536421343s EvaluationString:}]" duration=634.598249ms
+level=debug ts=2024-05-29T13:44:15.536715709Z caller=remote_instance_store.go:51 user=94501 slug=datastax msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=4947 slug=mediamath instance="datasource_uid=000000020, ref_id=A" t=2024-05-29T13:44:15.536468305Z level=debug msg="Keeping state" state=NoData previous_ends_at=2024-05-29T13:47:10Z next_ends_at=2024-05-29T13:48:10Z
+logger=ngalert.state.manager user=4947 slug=mediamath instance="datasource_uid=000000020, ref_id=A" t=2024-05-29T13:44:15.536454075Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=4947 slug=mediamath t=2024-05-29T13:44:15.536406819Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.53543564Z caller=remote_instance_store.go:51 user=527202 slug=lnrsusinsurancedev msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.535252642Z caller=remote_instance_store.go:51 user=696798 slug=mcv msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.535298711Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=109452 slug=deltarisk t=2024-05-29T13:44:15.535004572Z level=debug msg="State manager processing evaluation results" resultCount=1
+logger=ngalert.state.manager user=170883 slug=datacontrol instance="datasource_uid=fKD2mywGk, ref_id=A" t=2024-05-29T13:44:15.53506726Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=170883 slug=datacontrol instance="datasource_uid=fKD2mywGk, ref_id=A" t=2024-05-29T13:44:15.535062952Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+Error parsing panelUID for alert annotationruleID308dashactualerrorstrconv.ParseInt: parsing "": invalid syntaxlogger=ngalert.scheduler user=109452 slug=deltarisk version=14 fingerprint=3402c6fe099f610d attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.534893673Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.534320873s EvaluationString:}]" duration=128.610011ms
+logger=ngalert.state.manager user=170883 slug=datacontrol instance="datasource_uid=fKD2mywGk, ref_id=A" t=2024-05-29T13:44:15.535039393Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=170883 slug=datacontrol instance="datasource_uid=fKD2mywGk, ref_id=A" t=2024-05-29T13:44:15.535014496Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=170883 slug=datacontrol instance="datasource_uid=fKD2mywGk, ref_id=A" t=2024-05-29T13:44:15.535001365Z level=debug msg="Setting next state" handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.533844133Z caller=remote_instance_store.go:51 user=261837 slug=empowercloud msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.533527946Z caller=remote_instance_store.go:51 user=4947 slug=mediamath msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=4947 slug=mediamath t=2024-05-29T13:44:15.533453476Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.533360266Z caller=remote_instance_store.go:51 user=343704 slug=haendlerbund msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.533401783Z caller=remote_instance_store.go:51 user=456946 slug=menlosecurityredge msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=849222 slug=franv2dev instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.53146871Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.scheduler user=849222 slug=franv2dev version=1 fingerprint=44f4b8f738802764 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.531339628Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error: Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.53112675s EvaluationString:}]" duration=8.824161ms
+logger=ngalert.state.manager.persist user=713299 slug=btcnonprod t=2024-05-29T13:44:15.529839092Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=7.076111ms
+logger=ngalert.state.manager.persist user=713315 slug=mtbnonprod t=2024-05-29T13:44:15.529414825Z level=debug msg="Saving alert states" count=2 max_state_save_concurrency=1
+level=debug ts=2024-05-29T13:44:15.529468696Z caller=remote_instance_store.go:51 user=713315 slug=mtbnonprod msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=713315 slug=mtbnonprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.529385674Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=713315 slug=mtbnonprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.529222972Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=713315 slug=mtbnonprod t=2024-05-29T13:44:15.529204552Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.528894407Z caller=remote_instance_store.go:51 user=297794 slug=leanix msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=696798 slug=mcv t=2024-05-29T13:44:15.528882132Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=8.831019ms
+logger=ngalert.state.manager.persist user=695885 slug=lululemonprod t=2024-05-29T13:44:15.528701313Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=695885 slug=lululemonprod t=2024-05-29T13:44:15.528628602Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.527809981Z caller=remote_instance_store.go:51 user=932433 slug=cmhdmxnp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=485988 slug=alfromentpro instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.527283578Z level=debug msg="Keeping state" state=NoData previous_ends_at=2024-05-29T13:59:10Z next_ends_at=2024-05-29T14:04:10Z
+logger=ngalert.state.manager user=316418 slug=workmotion instance="ApiId=1mr10216z5, Method=ANY, Resource=/service-fees/{proxy+}, Stage=$default" t=2024-05-29T13:44:15.527141974Z level=debug msg="Setting next state" handler=resultNormal
+level=debug ts=2024-05-29T13:44:15.527006032Z caller=remote_instance_store.go:51 user=727299 slug=dellisgtechops msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.526827869Z caller=remote_instance_store.go:51 user=465816 slug=metricgamingqa msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=82372 slug=fout t=2024-05-29T13:44:15.526109029Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=82372 slug=fout instance= t=2024-05-29T13:44:15.526082912Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=82372 slug=fout t=2024-05-29T13:44:15.52598453Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.525727103Z caller=remote_instance_store.go:51 user=183214 slug=vectorizedio msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.525220553Z caller=remote_instance_store.go:51 user=325146 slug=farseer msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.524477Z caller=remote_instance_store.go:51 user=82372 slug=fout msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=716519 slug=bradfordprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.524268887Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.523289555Z caller=remote_instance_store.go:51 user=516613 slug=blackrocktp msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=713299 slug=btcnonprod t=2024-05-29T13:44:15.522759681Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=713299 slug=btcnonprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.522726201Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.522826313Z caller=remote_instance_store.go:51 user=713299 slug=btcnonprod msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=713299 slug=btcnonprod instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.52271648Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.522499764Z caller=remote_instance_store.go:51 user=261837 slug=empowercloud msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.522535058Z caller=remote_instance_store.go:51 user=374423 slug=bitburst msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.522414257Z caller=remote_instance_store.go:51 user=668587 slug=brightacceptance msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=374423 slug=bitburst t=2024-05-29T13:44:15.522286749Z level=debug msg="State manager processing evaluation results" resultCount=1
+level=debug ts=2024-05-29T13:44:15.521527111Z caller=remote_instance_store.go:51 user=371756 slug=asapp msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.521457835Z caller=remote_instance_store.go:51 user=447897 slug=mysten msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.521404341Z caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"
+logger=ngalert.state.manager user=762770 slug=brainlab instance="__name__=vcenter_host_cpu_utilization_percent, bl_gc_lbac_tenants=;ww-itcis_virtualization, vcenter_cluster_name=vSphere_Farm_Japan, vcenter_host_name=jpspesx02.brainlab.net" t=2024-05-29T13:44:15.521350543Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=762770 slug=brainlab instance="__name__=vcenter_host_cpu_utilization_percent, bl_gc_lbac_tenants=;ww-itcis_virtualization, vcenter_cluster_name=vSphere_Farm_Japan, vcenter_host_name=jpspesx01.brainlab.net" t=2024-05-29T13:44:15.521325072Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager.persist user=373502 slug=stakeandrelax t=2024-05-29T13:44:15.521238038Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=12.358114ms
+logger=ngalert.state.manager user=762770 slug=brainlab instance="__name__=vcenter_host_cpu_utilization_percent, bl_gc_lbac_tenants=;ww-itcis_virtualization, vcenter_cluster_name=vSphere_Farm_Israel, vcenter_host_name=ilspesx02.brainlab.net" t=2024-05-29T13:44:15.521199859Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=762770 slug=brainlab instance="__name__=vcenter_host_cpu_utilization_percent, bl_gc_lbac_tenants=;ww-itcis_virtualization, vcenter_cluster_name=vSphere_Farm_HPC_Germany, vcenter_host_name=despesxhpc08.brainlab.net" t=2024-05-29T13:44:15.521144868Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=762770 slug=brainlab instance="__name__=vcenter_host_cpu_utilization_percent, bl_gc_lbac_tenants=;ww-itcis_virtualization, vcenter_cluster_name=vSphere_Farm_HPC_Germany, vcenter_host_name=despesxhpc07.brainlab.net" t=2024-05-29T13:44:15.521114547Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=762770 slug=brainlab instance="__name__=vcenter_host_cpu_utilization_percent, bl_gc_lbac_tenants=;ww-itcis_virtualization, vcenter_cluster_name=vSphere_Farm_HPC_Germany, vcenter_host_name=despesxhpc04.brainlab.net" t=2024-05-29T13:44:15.521034364Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.521100136Z caller=remote_instance_store.go:51 user=527202 slug=lnrsusinsurancedev msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=527202 slug=lnrsusinsurancedev t=2024-05-29T13:44:15.521047374Z level=debug msg="Saving alert states" count=3 max_state_save_concurrency=1
+logger=ngalert.state.manager.persist user=472647 slug=planet t=2024-05-29T13:44:15.520973163Z level=debug msg="Saving alert states done" count=20 max_state_save_concurrency=1 duration=509.845216ms
+logger=ngalert.state.manager user=762770 slug=brainlab instance="__name__=vcenter_host_cpu_utilization_percent, bl_gc_lbac_tenants=;ww-itcis_virtualization, vcenter_cluster_name=vSphere_Farm_HPC_Germany, vcenter_host_name=despesxhpc01.brainlab.net" t=2024-05-29T13:44:15.520966974Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=527202 slug=lnrsusinsurancedev instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.520987362Z level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData
+logger=ngalert.state.manager user=762770 slug=brainlab instance="__name__=vcenter_host_cpu_utilization_percent, bl_gc_lbac_tenants=;ww-itcis_virtualization, vcenter_cluster_name=vSphere_Farm_Austria, vcenter_host_name=atspesx01.brainlab.net" t=2024-05-29T13:44:15.52086264Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=527202 slug=lnrsusinsurancedev instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.520899907Z level=debug msg="Setting next state" handler=resultNoData
+logger=ngalert.state.manager user=762770 slug=brainlab instance="__name__=vcenter_host_cpu_utilization_percent, bl_gc_lbac_tenants=;ww-itcis_virtualization, vcenter_cluster_name=despesxml02, vcenter_host_name=despesxml02.brainlab.net" t=2024-05-29T13:44:15.5208065Z level=debug msg="Keeping state" state=Normal
+logger=ngalert.state.manager user=762770 slug=brainlab instance="__name__=vcenter_host_cpu_utilization_percent, bl_gc_lbac_tenants=;ww-itcis_virtualization, vcenter_cluster_name=despesxgrid05, vcenter_host_name=despesxgrid05.brainlab.net" t=2024-05-29T13:44:15.520752958Z level=debug msg="Keeping state" state=Normal
+level=debug ts=2024-05-29T13:44:15.520817136Z caller=remote_instance_store.go:51 user=608555 slug=ias msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=186562 slug=defier t=2024-05-29T13:44:15.52075505Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+logger=ngalert.state.manager user=186562 slug=defier instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.520710228Z level=debug msg="Execution no data state is Alerting" handler=resultAlerting previous_handler=resultNoData
+level=debug ts=2024-05-29T13:44:15.520690003Z caller=remote_instance_store.go:51 user=893158 slug=cmfollnp msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.520734661Z caller=remote_image_capturer.go:33 user=186562 slug=defier rule_org_id=1 rule_uid=iKyxEnUnk msg="Cannot take screenshot for alert rule as it is not associated with a dashboard"
+level=debug ts=2024-05-29T13:44:15.520042587Z caller=remote_instance_store.go:51 user=309009 slug=elestyle msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.519958549Z caller=remote_instance_store.go:51 user=242310 slug=suzy msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=245291 slug=pismo t=2024-05-29T13:44:15.519290351Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=14.633267ms
+level=debug ts=2024-05-29T13:44:15.519092021Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.518923411Z caller=remote_instance_store.go:51 user=459086 slug=metricgamingprd msg="calling SaveAlertInstance"
+level=debug ts=2024-05-29T13:44:15.518917419Z caller=remote_instance_store.go:51 user=449554 slug=metricgamingppe msg="calling SaveAlertInstance"
+logger=ngalert.state.manager.persist user=245291 slug=pismo t=2024-05-29T13:44:15.51807352Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1
+ logger=ngalert.state.manager user=245291 slug=pismo instance="datasource_uid=grafanacloud-logs, ref_id=Query" t=2024-05-29T13:44:15.518057087Z level=debug msg="Keeping state" state=Normal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=trustami-cache.haendlerbund.de" t=2024-05-29T13:44:15.518101629Z level=debug msg="Keeping state" state=Normal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=swarm-manager.haendlerbund.de" t=2024-05-29T13:44:15.517978904Z level=debug msg="Keeping state" state=Normal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=swarm-manager.haendlerbund.de" t=2024-05-29T13:44:15.51796928Z level=debug msg="Setting next state" handler=resultNormal
+ level=debug ts=2024-05-29T13:44:15.517860508Z caller=remote_instance_store.go:51 user=174927 slug=syndic82690 msg="calling SaveAlertInstance"
+ level=debug ts=2024-05-29T13:44:15.517766507Z caller=remote_instance_store.go:51 user=743579 slug=neotax msg="calling SaveAlertInstance"
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=shopauskunft.de" t=2024-05-29T13:44:15.517703155Z level=debug msg="Setting next state" handler=resultNormal
+ level=debug ts=2024-05-29T13:44:15.517796056Z caller=remote_instance_store.go:51 user=265692 slug=beekeeper msg="calling SaveAlertInstance"
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=rtvergleicher.haendlerbund.de" t=2024-05-29T13:44:15.517616388Z level=debug msg="Setting next state" handler=resultNormal
+ level=debug ts=2024-05-29T13:44:15.517502447Z caller=remote_instance_store.go:51 user=688926 slug=atriumhq msg="calling SaveAlertInstance"
+ level=debug ts=2024-05-29T13:44:15.517529592Z caller=remote_instance_store.go:51 user=900395 slug=jcla1234 msg="calling SaveAlertInstance"
+ level=info ts=2024-05-29T13:44:15.517508665Z caller=remote_alert_sender.go:94 user=114286 slug=enverus host=enverus-grafana-http.hosted-grafana.svc.cluster.local.:10000 addr=10.144.27.157:10000 msg="sending alerts to grafana" rule_org_id=1 rule_uid=bd5628ea-8ba9-4f55-b682-e9f191cd8d23 alerts=1
+ level=info ts=2024-05-29T13:44:15.517472955Z caller=remote_alert_sender.go:94 user=114286 slug=enverus host=enverus-grafana-http.hosted-grafana.svc.cluster.local.:10000 addr=10.144.89.182:10000 msg="sending alerts to grafana" rule_org_id=1 rule_uid=bd5628ea-8ba9-4f55-b682-e9f191cd8d23 alerts=1
+ level=debug ts=2024-05-29T13:44:15.517216651Z caller=remote_instance_store.go:51 user=639839 slug=silae msg="calling SaveAlertInstance"
+ logger=ngalert.state.manager.persist user=206107 slug=hydrolix t=2024-05-29T13:44:15.517075358Z level=debug msg="Saving alert states done" count=2 max_state_save_concurrency=1 duration=44.583618ms
+ level=debug ts=2024-05-29T13:44:15.517090128Z caller=remote_instance_store.go:51 user=778383 slug=nicolegbilaw msg="calling SaveAlertInstance"
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=marketsupply.com" t=2024-05-29T13:44:15.516990237Z level=debug msg="Keeping state" state=Normal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=marketplace.haendlerbund.de" t=2024-05-29T13:44:15.516926019Z level=debug msg="Keeping state" state=Normal
+ level=debug ts=2024-05-29T13:44:15.516745477Z caller=remote_instance_store.go:51 user=456946 slug=menlosecurityredge msg="calling SaveAlertInstance"
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=lta.haendlerbund.de" t=2024-05-29T13:44:15.51681501Z level=debug msg="Setting next state" handler=resultNormal
+ level=debug ts=2024-05-29T13:44:15.516799418Z caller=remote_instance_store.go:51 user=615392 slug=shinemetrics msg="calling SaveAlertInstance"
+ level=debug ts=2024-05-29T13:44:15.516724088Z caller=remote_instance_store.go:51 user=251760 slug=forgerock msg="calling SaveAlertInstance"
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=logistik-watchblog.de" t=2024-05-29T13:44:15.516707764Z level=debug msg="Setting next state" handler=resultNormal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=legaltext-cache.haendlerbund.de" t=2024-05-29T13:44:15.516647325Z level=debug msg="Setting next state" handler=resultNormal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=legal-connect.de" t=2024-05-29T13:44:15.51660167Z level=debug msg="Keeping state" state=Normal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=legal-connect.de" t=2024-05-29T13:44:15.516587851Z level=debug msg="Setting next state" handler=resultNormal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=leech.haendlerbund.de" t=2024-05-29T13:44:15.516535656Z level=debug msg="Keeping state" state=Normal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=karriere.haendlerbund.de" t=2024-05-29T13:44:15.516477595Z level=debug msg="Keeping state" state=Normal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=karriere.haendlerbund.de" t=2024-05-29T13:44:15.516467535Z level=debug msg="Setting next state" handler=resultNormal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=info-service.haendlerbund.de" t=2024-05-29T13:44:15.516296902Z level=debug msg="Setting next state" handler=resultNormal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=haendlerbund.de" t=2024-05-29T13:44:15.51623886Z level=debug msg="Keeping state" state=Normal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=haendlerbund.de" t=2024-05-29T13:44:15.5162279Z level=debug msg="Setting next state" handler=resultNormal
+ level=debug ts=2024-05-29T13:44:15.516227491Z caller=remote_instance_store.go:51 user=183214 slug=vectorizedio msg="calling SaveAlertInstance"
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=datenschutz-weiterleitungen.haendlerbund.de" t=2024-05-29T13:44:15.516056716Z level=debug msg="Setting next state" handler=resultNormal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=crm.haendlerbund.de" t=2024-05-29T13:44:15.515983996Z level=debug msg="Keeping state" state=Normal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=credit-check.haendlerbund.de" t=2024-05-29T13:44:15.515779792Z level=debug msg="Setting next state" handler=resultNormal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=cms.haendlerbund.de" t=2024-05-29T13:44:15.515669828Z level=debug msg="Keeping state" state=Normal
+ level=debug ts=2024-05-29T13:44:15.515645461Z caller=remote_instance_store.go:51 user=391538 slug=risknarrative msg="calling SaveAlertInstance"
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=auth0-logs-tools.haendlerbund.de" t=2024-05-29T13:44:15.515521744Z level=debug msg="Keeping state" state=Normal
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=api-docs-tools.haendlerbund.de" t=2024-05-29T13:44:15.515445035Z level=debug msg="Setting next state" handler=resultNormal
+ level=debug ts=2024-05-29T13:44:15.515447629Z caller=remote_instance_store.go:51 user=391538 slug=risknarrative msg="calling SaveAlertInstance"
+ logger=ngalert.state.manager user=343704 slug=haendlerbund instance="__name__=http_certificate_valid, cluster=hb-productive-droplets, instance=pythia:2617, job=pythia, server=amazon-watchblog.de" t=2024-05-29T13:44:15.515383607Z level=debug msg="Setting next state" handler=resultNormal
+ level=debug ts=2024-05-29T13:44:15.515275809Z caller=remote_instance_store.go:51 user=308298 slug=xbto msg="calling SaveAlertInstance"
+ level=debug ts=2024-05-29T13:44:15.515237496Z caller=remote_instance_store.go:51 user=349229 slug=kropyva msg="calling SaveAlertInstance"