Add alt text and fix broken image links (#6955)
* Add alt text and fix broken image links

* Match case to page title

(cherry picked from commit a834fa5)
clayton-cornell committed Jun 12, 2024
1 parent 47d6169 commit 6c0a844
Showing 3 changed files with 21 additions and 18 deletions.
16 changes: 10 additions & 6 deletions docs/sources/flow/tasks/debug.md
Original file line number Diff line number Diff line change
@@ -65,7 +65,7 @@ Follow these steps to debug issues with {{< param "PRODUCT_NAME" >}}:
### Home page

- ![](../../assets/ui_home_page.png)
+ ![The Agent UI home page showing a table of components.](/media/docs/agent/ui_home_page.png)

The home page shows a table of components defined in the configuration file and their health.

@@ -75,14 +75,14 @@ Click the {{< param "PRODUCT_ROOT_NAME" >}} logo to navigate back to the home page.

### Graph page

- ![](../../assets/ui_graph_page.png)
+ ![The Graph page showing a graph view of components.](/media/docs/agent/ui_graph_page.png)

The **Graph** page shows a graph view of components defined in the configuration file and their health.
Clicking a component in the graph navigates to the [Component detail page](#component-detail-page) for that component.

### Component detail page

- ![](../../assets/ui_component_detail_page.png)
+ ![The component detail page showing detailed information about the components.](/media/docs/agent/ui_component_detail_page.png)

The component detail page shows the following information for each component:

@@ -95,9 +95,9 @@ The component detail page shows the following information for each component:
### Clustering page

- ![](../../assets/ui_clustering_page.png)
+ ![The Clustering page showing detailed information about each cluster node.](/media/docs/agent/ui_clustering_page.png)

- The clustering page shows the following information for each cluster node:
+ The Clustering page shows the following information for each cluster node:

* The node's name.
* The node's advertised address.
@@ -139,4 +139,8 @@ To debug issues when using [clustering](ref:clustering), check for the following
- **Node stuck in terminating state**: The node attempted to gracefully shut down and set its state to Terminating, but it has not completely gone away.
Check the clustering page to view the state of the peers and verify that the terminating {{< param "PRODUCT_ROOT_NAME" >}} has been shut down.


{{< admonition type="note" >}}
Some issues that appear to be clustering issues may be symptoms of other issues.
For example, problems with scraping or service discovery can result in missing
metrics for an agent, which can be interpreted as a node not joining the cluster.
{{< /admonition >}}
11 changes: 5 additions & 6 deletions docs/sources/flow/tasks/opentelemetry-to-lgtm-stack.md
@@ -165,7 +165,7 @@ loki.write "default" {
To use Loki with basic-auth, which is required with Grafana Cloud Loki, you must configure the [loki.write](ref:loki.write) component.
You can get the Loki configuration from the Loki **Details** page in the [Grafana Cloud Portal][]:

- ![](../../../assets/tasks/loki-config.png)
+ ![The Loki Details page showing information about the Loki configuration.](/media/docs/agent/loki-config.png)
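The snippet in this diff is truncated by the viewer, so for context, a complete pair of components for this step might look like the sketch below. The endpoint URL, username, and password are placeholders, not values from this commit; copy the real ones from your Loki **Details** page.

```river
// Convert OTLP logs to Loki format and forward them on.
otelcol.exporter.loki "grafana_cloud_loki" {
  forward_to = [loki.write.grafana_cloud_loki.receiver]
}

// Write logs to Grafana Cloud Loki with basic auth.
loki.write "grafana_cloud_loki" {
  endpoint {
    // Placeholder URL: use the push URL from the Grafana Cloud Portal.
    url = "https://logs-prod-0xx.grafana.net/loki/api/v1/push"

    basic_auth {
      username = "<Loki username>"           // numeric instance ID
      password = "<Grafana Cloud API token>"
    }
  }
}
```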

```river
otelcol.exporter.loki "grafana_cloud_loki" {
```

@@ -200,7 +200,7 @@ otelcol.exporter.otlp "default" {
To use Tempo with basic-auth, which is required with Grafana Cloud Tempo, you must use the [otelcol.auth.basic](ref:otelcol.auth.basic) component.
You can get the Tempo configuration from the Tempo **Details** page in the [Grafana Cloud Portal][]:

- ![](../../../assets/tasks/tempo-config.png)
+ ![The Tempo Details page showing information about the Tempo configuration.](/media/docs/agent/tempo-config.png)
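Because the diff viewer truncates the snippet, here is a sketch of what the full pair of components for this step might look like. The endpoint and credentials are placeholders, not values from this commit; use the ones from your Tempo **Details** page.

```river
// Basic-auth credentials for Grafana Cloud Tempo (placeholder values).
otelcol.auth.basic "grafana_cloud_tempo" {
  username = "<Tempo username>"
  password = "<Grafana Cloud API token>"
}

// Export traces over OTLP/gRPC, authenticating with the block above.
otelcol.exporter.otlp "grafana_cloud_tempo" {
  client {
    // Placeholder endpoint: use the one shown on the Tempo Details page.
    endpoint = "tempo-prod-xx-prod-eu-west-0.grafana.net:443"
    auth     = otelcol.auth.basic.grafana_cloud_tempo.handler
  }
}
```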

```river
otelcol.exporter.otlp "grafana_cloud_tempo" {
```

@@ -237,7 +237,7 @@ prometheus.remote_write "default" {
To use Prometheus with basic-auth, which is required with Grafana Cloud Prometheus, you must configure the [prometheus.remote_write](ref:prometheus.remote_write) component.
You can get the Prometheus configuration from the Prometheus **Details** page in the [Grafana Cloud Portal][]:

- ![](../../../assets/tasks/prometheus-config.png)
+ ![The Prometheus Details page showing information about the Prometheus configuration.](/media/docs/agent/prometheus-config.png)
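Since the snippet below is truncated by the diff viewer, a complete pair of components for this step might look like this sketch. The URL and credentials are placeholders, not values from this commit; use the ones from your Prometheus **Details** page.

```river
// Convert OTLP metrics to Prometheus format and forward them on.
otelcol.exporter.prometheus "grafana_cloud_prometheus" {
  forward_to = [prometheus.remote_write.grafana_cloud_prometheus.receiver]
}

// Remote-write metrics to Grafana Cloud Prometheus with basic auth.
prometheus.remote_write "grafana_cloud_prometheus" {
  endpoint {
    // Placeholder URL: use the one shown on the Prometheus Details page.
    url = "https://prometheus-prod-xx-prod-eu-west-0.grafana.net/api/prom/push"

    basic_auth {
      username = "<Prometheus username>"
      password = "<Grafana Cloud API token>"
    }
  }
}
```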

```river
otelcol.exporter.prometheus "grafana_cloud_prometheus" {
```

@@ -361,14 +361,13 @@

```
ts=2023-05-09T09:37:15.304109Z component=otelcol.receiver.otlp.default level=info msg="Starting GRPC server" endpoint=0.0.0.0:4317
ts=2023-05-09T09:37:15.304234Z component=otelcol.receiver.otlp.default level=info msg="Starting HTTP server" endpoint=0.0.0.0:4318
```

- You can now check the pipeline graphically by visiting http://localhost:12345/graph
+ You can now check the pipeline graphically by visiting <http://localhost:12345/graph>

- ![](../../../assets/tasks/otlp-lgtm-graph.png)
+ ![The Graph page showing a graphical representation of the pipeline.](/media/docs/agent/otlp-lgtm-graph.png)

[OpenTelemetry]: https://opentelemetry.io
[Grafana Loki]: https://grafana.com/oss/loki/
[Grafana Tempo]: https://grafana.com/oss/tempo/
[Grafana Cloud Portal]: https://grafana.com/docs/grafana-cloud/account-management/cloud-portal#your-grafana-cloud-stack
[Prometheus Remote Write]: https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage
[Grafana Mimir]: https://grafana.com/oss/mimir/

@@ -355,27 +355,27 @@ pick the ones you need.
`length` controls how far back in time CloudWatch metrics are considered during each agent scrape.
If both settings are configured, the time parameters when calling CloudWatch APIs work as follows:

- ![](https://grafana.com/media/docs/agent/cloudwatch-period-and-length-time-model-2.png)
+ ![A diagram showing how the time parameters work when both period and length are configured.](/media/docs/agent/cloudwatch-period-and-length-time-model-2.png)

As noted above, if there is a different `period` or `length` across multiple metrics under the same static or discovery job,
the minimum of all periods, and maximum of all lengths is configured.

- On the other hand, if `length` is not configured, both period and length settings are calculated based on
+ On the other hand, if `length` isn't configured, both period and length settings are calculated based on
the required `period` configuration attribute.

If all metrics within a job (discovery or static) have the same `period` value configured, CloudWatch APIs will be
requested for metrics from the scrape time to `period` seconds in the past.
The values of these metrics are exported to Prometheus.

- ![](https://grafana.com/media/docs/agent/cloudwatch-single-period-time-model.png)
+ ![A diagram showing how the time parameters work when a single period is configured.](/media/docs/agent/cloudwatch-single-period-time-model.png)

On the other hand, if metrics with different `period`s are configured under an individual job, this works differently.
First, two variables are calculated aggregating all periods: `length`, taking the maximum value of all periods, and
the new `period` value, taking the minimum of all periods. Then, CloudWatch APIs will be requested for metrics from
`now - length` to `now`, aggregating each metric into samples of `period` seconds. For each metric, the most recent sample
is exported to Prometheus.

- ![](https://grafana.com/media/docs/agent/cloudwatch-multiple-period-time-model.png)
+ ![A diagram showing how the time parameters work when multiple periods are configured.](/media/docs/agent/cloudwatch-multiple-period-time-model.png)
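To illustrate the aggregation rule described above, consider a hypothetical job whose metrics declare different `period`s. The job name, queue name, and region below are made-up examples, not values from this commit:

```river
// Hypothetical static job with two metrics on different periods.
// The exporter requests CloudWatch data from `now - 300s` to `now`
// (length = maximum period) in samples of 60s (period = minimum period),
// and exports the most recent sample of each metric to Prometheus.
prometheus.exporter.cloudwatch "example" {
  sts_region = "us-east-2"

  static "example_queue" {
    regions   = ["us-east-2"]
    namespace = "AWS/SQS"
    dimensions = {
      "QueueName" = "my-queue",
    }

    metric {
      name       = "NumberOfMessagesSent"
      statistics = ["Sum"]
      period     = "1m" // minimum of all periods -> effective period
    }

    metric {
      name       = "ApproximateNumberOfMessagesVisible"
      statistics = ["Average"]
      period     = "5m" // maximum of all periods -> effective length
    }
  }
}
```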

## Supported services in discovery jobs

