Learn more about the Telemetry module. Use it to enable observability for your application.
Fundamentally, "observability" is a measure of how well an application's internal states can be inferred from its external outputs. An application and its surrounding infrastructure expose these insights as metrics, traces, and logs; collectively, they are called "telemetry" or "signals".
- To implement Day-2 operations for a distributed application running in a container runtime, the application's individual components must expose these signals by employing modern instrumentation.
- Furthermore, the signals must be collected and enriched with infrastructural metadata before they are shipped to a target system.
- Instead of providing a one-size-fits-all backend solution, the Telemetry module supports you with instrumenting and shipping your telemetry data in a vendor-neutral way.
- This way, you can conveniently enable observability for your application by integrating it with your existing or desired backends. Pick your favorite among the many observability backends (available either as a service or as a self-manageable solution) that focus on different aspects and scenarios.
The Telemetry module focuses exactly on the aspects of instrumentation, collection, and shipment that happen in the runtime, and explicitly leaves the backends out of scope.
Tip
An enterprise-grade setup demands a central solution outside the cluster, so we recommend in-cluster solutions only for testing purposes. If you want to install lightweight in-cluster backends for demo or development purposes, see Integration Guides.
To support telemetry for your applications, the Telemetry module provides the following features:
- Tooling for collection, filtering, and shipment: Based on the OpenTelemetry Collector and Fluent Bit, you can configure basic pipelines to filter and ship telemetry data.
- Vendor-neutral integration with vendor-specific observability systems (traces and metrics only): Based on the OpenTelemetry Protocol (OTLP), you can integrate backend systems.
- Guidance for the instrumentation (traces and metrics only): Based on OpenTelemetry, you get community samples on how to instrument your code using the OpenTelemetry SDKs in nearly every programming language.
- Enrichment of telemetry data: Common attributes are added automatically, in compliance with established semantic conventions, ensuring that the enriched data adheres to industry best practices and is more meaningful for analysis (see the sketch after this list). For details, see Data Enrichment.
- Opt-out of features for advanced scenarios: At any time, you can opt out for each data type and use custom tooling to collect and ship the telemetry data.
- SAP BTP as first-class integration: Integration into SAP BTP Observability services, such as SAP Cloud Logging, is prioritized. For more information, see SAP Cloud Logging.
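For illustration, the following sketch shows the kind of resource attributes that such enrichment typically attaches to a signal. The exact set depends on the module version and the workload; treat the attribute names as examples following the OpenTelemetry semantic conventions, not as a guaranteed list:

```yaml
# Hypothetical enriched resource attributes for a signal emitted by a Pod
# of a Deployment "my-app" in namespace "prod" (names and values are examples):
k8s.namespace.name: prod
k8s.pod.name: my-app-5d9c7b6f4-abcde
k8s.deployment.name: my-app
k8s.node.name: worker-1
service.name: my-app   # derived from workload metadata if the SDK does not set it
```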
The Telemetry module focuses only on the signals of application logs, distributed traces, and metrics. Other kinds of signals are not considered, and audit logs are not in scope.
Supported integration scenarios are neutral to the vendor of the target system.
The Telemetry module ships Telemetry Manager as its core component. Telemetry Manager is a Kubernetes operator that implements the Kubernetes controller pattern and manages the whole lifecycle of all other components covered in the Telemetry module. It watches the user-created Kubernetes resources LogPipeline, TracePipeline, and MetricPipeline, in which you specify what data of a signal type to collect and where to ship it. When Telemetry Manager detects such a configuration, it deploys the related gateway and agent components accordingly and keeps them in sync with the requested pipeline definition.
For more information, see Telemetry Manager.
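On the API level, the module itself is represented by a Telemetry resource whose status reflects the health of the managed components. The following is a minimal sketch, assuming the operator.kyma-project.io/v1alpha1 API; the condition types shown are examples and may differ between module versions:

```yaml
apiVersion: operator.kyma-project.io/v1alpha1
kind: Telemetry
metadata:
  name: default
  namespace: kyma-system
status:
  state: Ready
  conditions:                        # condition types are examples; verify against your version
    - type: LogComponentsHealthy
      status: "True"
    - type: TraceComponentsHealthy
      status: "True"
    - type: MetricComponentsHealthy
      status: "True"
```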
The Traces and Metrics features share a common approach: they provide a gateway based on the OTel Collector, which acts as a central point in the cluster to which data is pushed in the OTLP format. From there, the data is enriched and filtered, and then dispatched as defined in the individual pipeline resources.
For more information, see Telemetry Gateways.
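For example, an instrumented workload can point its OTLP exporter at the in-cluster gateway with the standard OpenTelemetry environment variable. The sketch below assumes the conventional Kyma trace gateway Service (telemetry-otlp-traces in the kyma-system namespace, gRPC on port 4317) and a hypothetical application image; verify the endpoint against your module version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0   # hypothetical application image
          env:
            # Standard OpenTelemetry SDK variable; the endpoint assumes the
            # conventional Kyma trace gateway Service (gRPC).
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://telemetry-otlp-traces.kyma-system:4317
```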
The log agent is based on a Fluent Bit installation running as a DaemonSet. It reads all containers' logs in the runtime and ships them according to a LogPipeline configuration.
For more information, see Logs.
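A minimal sketch of such a configuration, assuming the telemetry.kyma-project.io/v1alpha1 API and a placeholder HTTP backend:

```yaml
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: LogPipeline
metadata:
  name: backend
spec:
  input:
    application:
      namespaces:
        exclude:                          # skip logs from system namespaces
          - kyma-system
  output:
    http:                                 # ship logs to an HTTP-based backend
      host:
        value: logs.backend.example.com   # placeholder host
      port: "443"
      uri: /
```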
The trace gateway is based on an OTel Collector Deployment. The gateway provides an OTLP-based endpoint to which applications can push the trace signals. According to a TracePipeline configuration, the gateway processes and ships the trace data to a target system.
For more information, see Traces and Telemetry Gateways.
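A minimal sketch, again assuming the v1alpha1 API and a placeholder OTLP backend:

```yaml
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: TracePipeline
metadata:
  name: backend
spec:
  output:
    otlp:
      endpoint:
        value: https://traces.backend.example.com:4317   # placeholder backend
```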
The metric gateway and agent are based on an OTel Collector Deployment and an OTel Collector DaemonSet, respectively. The gateway provides an OTLP-based endpoint to which applications can push the metric signals; the agent scrapes annotated Prometheus-based workloads. According to a MetricPipeline configuration, the gateway processes and ships the metric data to a target system.
For more information, see Metrics and Telemetry Gateways.
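A minimal sketch, assuming the v1alpha1 API: the pipeline enables the Prometheus input so that the agent scrapes workloads carrying the conventional prometheus.io annotations, and ships the data to a placeholder OTLP backend:

```yaml
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: MetricPipeline
metadata:
  name: backend
spec:
  input:
    prometheus:
      enabled: true                  # let the agent scrape annotated workloads
  output:
    otlp:
      endpoint:
        value: https://metrics.backend.example.com:4317   # placeholder backend
---
# Example Service annotated for scraping (the annotation set is assumed to be
# the conventional prometheus.io annotations; verify against your module version):
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: /metrics
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
```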
To learn about integration with SAP Cloud Logging, read SAP Cloud Logging.
For integration with other backends, such as Dynatrace, see:
To learn how to collect data from applications based on the OpenTelemetry Demo App, see:
The API of the Telemetry module is based on Kubernetes Custom Resource Definitions (CRD), which extend the Kubernetes API with custom additions. To inspect the specification of the Telemetry module API, see:
To learn more about the resources used by the Telemetry module, see Kyma Modules' Sizing.