Add logging configuration example to the Configure inputs page (#489)
* Update standalone input settings docs to match elastic-agent.yml

* fixup

* Move simplified input config to input page

* Re-add separate logging page

* Add input types tables

* Add input types tables; address comments

* touchup

* Remove list of agent inputs; I'll merge them as a separate PR

* Rebuild preview page

* Re-add inputs list link

* Rebuild
kilfoyle authored Jun 11, 2024
1 parent 6696f32 commit 0c56b67
Showing 1 changed file with 60 additions and 28 deletions.

The `inputs` section of the `elastic-agent.yml` file specifies how {agent} locates and processes input data.

* <<elastic-agent-input-configuration-sample-metrics>>
* <<elastic-agent-input-configuration-sample-logs>>

[discrete]
[[elastic-agent-input-configuration-sample-metrics]]
== Sample metrics input configuration

By default, {agent} collects system metrics, such as CPU, memory, network, and file system metrics, and sends them to the default output. For example, to define data streams for `cpu`, `memory`, `network`, and `filesystem` metrics, use a configuration like this:

["source","yaml"]
-----------------------------------------------------------------------
- type: system/metrics <1>
  id: unique-system-metrics-id <2>
  data_stream.namespace: default <3>
  use_output: default <4>
  streams:
    - metricsets: <5>
        - cpu
      data_stream.dataset: system.cpu <6>
    - metricsets:
        - memory
      data_stream.dataset: system.memory
    - metricsets:
        - network
      data_stream.dataset: system.network
    - metricsets:
        - filesystem
      data_stream.dataset: system.filesystem
-----------------------------------------------------------------------

<1> The name of the input. Refer to <<elastic-agent-inputs-list>> for the list of what's available.
<2> A unique ID for the input.
<3> A user-defined namespace.
<4> The name of the `output` to use. If not specified, `default` will be used.
<5> The set of enabled module metricsets.
+
Refer to the {metricbeat} {metricbeat-ref}/metricbeat-module-system.html[System module] for the list of available options. The metricset fields can be configured; see the sketch after this list.
<6> A user-defined dataset. It can contain anything that makes sense to signify the source of the data.
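
For example, the `cpu` stream shown above can be tuned with module settings such as `period` and `cpu.metrics`. The following is a minimal sketch; check the System module documentation for the settings that apply to each metricset:

["source","yaml"]
-----------------------------------------------------------------------
- type: system/metrics
  id: unique-system-metrics-id
  data_stream.namespace: default
  use_output: default
  streams:
    - metricsets:
        - cpu
      data_stream.dataset: system.cpu
      period: 10s                # how often the metricset is collected
      cpu.metrics:               # which CPU metric groups are reported
        - percentages
        - normalized_percentages
-----------------------------------------------------------------------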

[discrete]
[[elastic-agent-input-configuration-sample-logs]]
== Sample log files input configuration

To enable {agent} to collect log files, you can use a configuration like the following.

["source","yaml"]
-----------------------------------------------------------------------
- type: filestream <1>
  id: your-input-id <2>
  streams:
    - id: your-filestream-stream-id <3>
      data_stream: <4>
        dataset: generic
      paths:
        - /var/log/*.log
-----------------------------------------------------------------------

<1> The name of the input. Refer to <<elastic-agent-inputs-list>> for the list of what's available.
<2> A unique ID for the input.
<3> A unique ID for the data stream to track the state of the ingested files.
<4> The streams block is required only if multiple streams are used on the same input. Refer to the {filebeat} {filebeat-ref}/filebeat-input-filestream.html[filestream] documentation for a list of available options. For the `filestream` input type specifically, refer to <<elastic-agent-simplified-input-configuration,simplified log ingestion>> for an example of ingesting a set of logs specified as an array.

The input in this example harvests all files in the path `/var/log/*.log`, that is, all logs in the directory `/var/log/` that end with `.log`. All patterns supported by https://golang.org/pkg/path/filepath/#Glob[Go Glob] are also supported here.

To fetch all files from a predefined level of subdirectories, use this pattern:
`/var/log/*/*.log`. This fetches all `.log` files from the subfolders of `/var/log`. It does not fetch log files from the `/var/log` folder itself.
Currently it is not possible to recursively fetch all files in all subdirectories of a directory.
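
A single `filestream` input can also define multiple streams, for example to combine the two patterns above, with each stream tracking its own set of paths. The following is a minimal sketch; the stream IDs are placeholders and the `generic` dataset is reused from the example above:

["source","yaml"]
-----------------------------------------------------------------------
- type: filestream
  id: your-input-id
  streams:
    - id: your-toplevel-logs-stream-id   # placeholder stream ID
      data_stream:
        dataset: generic
      paths:
        - /var/log/*.log                 # .log files directly under /var/log
    - id: your-subdir-logs-stream-id     # placeholder stream ID
      data_stream:
        dataset: generic
      paths:
        - /var/log/*/*.log               # .log files one subdirectory level down
-----------------------------------------------------------------------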

