There is a new receiver: the Kubernetes Objects Receiver, which can pull or watch any object from the Kubernetes API server. It will replace the Kubernetes Events Receiver in the future.
To migrate from the Kubernetes Events Receiver to the Kubernetes Objects Receiver, configure the clusterReceiver section of your values.yaml with:
k8sObjects:
  - mode: watch
    name: events
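For context, a minimal sketch of how this snippet nests inside a full values.yaml (only the clusterReceiver wrapper is added here):

clusterReceiver:
  k8sObjects:
    - mode: watch
      name: events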
There are differences in log record formatting between the previous k8s_events receiver and the newly adopted k8sobjects receiver.
The k8s_events receiver stores the event message in the log body, with the following fields added as attributes:
k8s.object.kind
k8s.object.name
k8s.object.uid
k8s.object.fieldpath
k8s.object.api_version
k8s.object.resource_version
k8s.event.reason
k8s.event.action
k8s.event.start_time
k8s.event.name
k8s.event.uid
k8s.namespace.name
With the k8sobjects receiver, the whole payload is stored in the log body, and object.message refers to the event message.
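As a purely illustrative sketch (all values below are made up, and the exact payload depends on your cluster and API version), the same event might be represented roughly as follows by each receiver:

# k8s_events receiver: the event message is the log body,
# object and event metadata are attached as attributes
body: 'Successfully pulled image "nginx"'
attributes:
  k8s.object.kind: Event
  k8s.event.reason: Pulled
  k8s.namespace.name: default

# k8sobjects receiver: the whole payload is the log body,
# and object.message holds the event message
body:
  object:
    kind: Event
    reason: Pulled
    message: 'Successfully pulled image "nginx"'
    metadata:
      namespace: default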
You can monitor more Kubernetes objects by configuring clusterReceiver.k8sObjects according to the instructions in the Kubernetes Objects Receiver documentation.
Remember to define rbac.customRules
when needed. For example, when configuring:
objectsEnabled: true
k8sObjects:
  - name: events
    mode: watch
    group: events.k8s.io
    namespaces: [default]
You should add the events.k8s.io API group to rbac.customRules:
rbac:
  customRules:
    - apiGroups:
        - "events.k8s.io"
      resources:
        - events
      verbs:
        - get
        - list
        - watch
[receiver/filelogreceiver] The data type of force_flush_period and poll_interval was changed from map to string.
If you are using a custom filelog receiver plugin, you need to change the config from:
filelog:
  poll_interval:
    duration: 200ms
  force_flush_period:
    duration: "0"
to:
filelog:
  poll_interval: 200ms
  force_flush_period: "0"
[receiver/filelogreceiver] The data type of force_flush_period and poll_interval was changed from string to map. Because of that, the default values in the Helm chart were causing problems (#519).
If you are using a custom filelog receiver plugin, you need to change the config from:
filelog:
  poll_interval: 200ms
  force_flush_period: "0"
to:
filelog:
  poll_interval:
    duration: 200ms
  force_flush_period:
    duration: "0"
If you were disabling this feature gate to keep the previous functionality, you will have to complete the steps in the upgrade guidelines for 0.47.0 to 0.47.1, since the feature gate no longer exists.
OTel Kubernetes receiver is now used for events collection instead of Signalfx events receiver
Before this change, if clusterReceiver.k8sEventsEnabled=true, Kubernetes events were collected by a Signalfx receiver and sent to both Splunk Observability Infrastructure Monitoring and Splunk Observability Log Observer.
Now we use a native OpenTelemetry receiver to collect Kubernetes events.
Therefore the clusterReceiver.k8sEventsEnabled option is now deprecated and replaced by the following two options:
- clusterReceiver.eventsEnabled: to send Kubernetes events in the new OTel format to Splunk Observability Log Observer (if splunkObservability.logsEnabled=true) or to Splunk Platform (if splunkPlatform.logsEnabled=true).
- splunkObservability.infrastructureMonitoringEventsEnabled: to collect Kubernetes events using the Signalfx Kubernetes events receiver and send them to Splunk Observability Infrastructure Monitoring.
If you have clusterReceiver.k8sEventsEnabled set to true to send Kubernetes events to both Splunk Observability Infrastructure Monitoring and Splunk Observability Log Observer, remove clusterReceiver.k8sEventsEnabled from your custom values.yaml and enable both the clusterReceiver.eventsEnabled and splunkObservability.infrastructureMonitoringEventsEnabled options. This will send the Kubernetes events to Splunk Observability Log Observer in the new OpenTelemetry format.
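A minimal values.yaml sketch for this scenario (assuming logs to Splunk Observability are enabled; merge with your existing configuration):

splunkObservability:
  logsEnabled: true
  infrastructureMonitoringEventsEnabled: true
clusterReceiver:
  eventsEnabled: true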
If you want to keep sending Kubernetes events to Splunk Observability Log Observer in the old Signalfx format, preserving exactly the same behavior as before, remove clusterReceiver.k8sEventsEnabled from your custom values.yaml and add the following configuration:
splunkObservability:
  logsEnabled: true
  infrastructureMonitoringEventsEnabled: true
clusterReceiver:
  config:
    exporters:
      splunk_hec/events:
        endpoint: https://ingest.<SPLUNK_OBSERVABILITY_REALM>.signalfx.com/v1/log
        log_data_enabled: true
        profiling_data_enabled: false
        source: kubelet
        sourcetype: kube:events
        token: ${SPLUNK_OBSERVABILITY_ACCESS_TOKEN}
    service:
      pipelines:
        logs/events:
          exporters:
            - signalfx
            - splunk_hec/events
where SPLUNK_OBSERVABILITY_REALM must be replaced by the splunkObservability.realm value.
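For example, if splunkObservability.realm were set to us0 (an illustrative value), the exporter endpoint would read:

endpoint: https://ingest.us0.signalfx.com/v1/log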
New releases of opentelemetry-log-collection (v0.29.0, v0.28.0) have breaking changes
Several of the logging receivers supported by the Splunk Otel Collector Chart were updated to use v0.29.0 instead of v0.27.2 of opentelemetry-log-collection.
- Check to see if you have any custom log monitoring setup with the extraFileLogs config, the logsCollection.containers.extraOperators config, or any of the affected receivers. If you don't have any custom log monitoring setup, you can stop here.
- Read the documentation for upgrading to opentelemetry-log-collection v0.29.0.
- If opentelemetry-log-collection v0.29.0 or v0.28.0 will break any of your custom log monitoring, update your log monitoring to accommodate the breaking changes.
If you haven't already completed the steps in the upgrade guidelines for 0.47.0 to 0.47.1, then complete them.
[receiver/k8sclusterreceiver] Fix k8s node and container cpu metrics not being reported properly
The Splunk Otel Collector added a feature gate to enable a bug fix for three metrics. These metrics have a current and a legacy name; both are listed as (current, legacy) pairs below.
- Affected Metrics
  - k8s.container.cpu_request, kubernetes.container_cpu_request
  - k8s.container.cpu_limit, kubernetes.container_cpu_limit
  - k8s.node.allocatable_cpu, kubernetes.node_allocatable_cpu
- Upgrade Steps
  - Check to see if any of your custom monitoring uses the affected metrics. Check for the current and legacy names of the affected metrics. If you don't use the affected metrics in your custom monitoring, you can stop here.
  - Read the documentation for the receiver.k8sclusterreceiver.reportCpuMetricsAsDouble feature gate and the bug fix it applies.
  - If the bug fix will break any of your custom monitoring for the affected metrics, update your monitoring to accommodate the bug fix.
- Feature Gate Stages and Versions
  - Alpha (versions 0.47.1-0.48.0): the feature gate is disabled by default. Use the
    --set clusterReceiver.featureGates=receiver.k8sclusterreceiver.reportCpuMetricsAsDouble
    argument with the helm install/upgrade command, or add the following lines to your custom values.yaml to enable the feature gate:
    clusterReceiver:
      featureGates: receiver.k8sclusterreceiver.reportCpuMetricsAsDouble
  - Beta (versions 0.49.0-0.54.0): the feature gate is enabled by default. Use the
    --set clusterReceiver.featureGates=-receiver.k8sclusterreceiver.reportCpuMetricsAsDouble
    argument with the helm install/upgrade command, or add the following lines to your custom values.yaml to disable the feature gate:
    clusterReceiver:
      featureGates: -receiver.k8sclusterreceiver.reportCpuMetricsAsDouble
  - Generally Available (versions 0.55.0+): the receiver.k8sclusterreceiver.reportCpuMetricsAsDouble feature gate functionality is permanently enabled and the feature gate is no longer available.
[receiver/k8sclusterreceiver] Use newer batch and autoscaling APIs
Kubernetes clusters with version 1.20 stopped having active support on 2021-12-28 and had an end of life date on 2022-02-28. The k8s_cluster receiver was refactored to use newer Kubernetes APIs that are available starting in Kubernetes version 1.21. The latest version of the k8s_cluster receiver will no longer be able to collect all the previously available metrics with Kubernetes clusters that have versions below 1.21.
If version 0.45.0 of the chart cannot collect metrics from a Kubernetes cluster running a version below 1.21, you will see error messages in your cluster receiver logs that look like this:
Failed to watch *v1.CronJob: failed to list *v1.CronJob: the server could not find the requested resource
To better support users, in a future release we are adding a feature that will allow users to use the last version of the k8s_cluster receiver that supported Kubernetes clusters below version 1.21.
If you still want to keep the previous behavior of the k8s_cluster receiver and upgrade to v0.45.0 of the chart, make sure your Kubernetes cluster uses one of the following versions.
- kubernetes, aks, eks, eks/fargate, gke, gke/autopilot: use version 1.21 or above
- openshift: use version 4.8 or above
#375 Resource detection processor is configured to override all host and cloud attributes
If you still want to keep the previous behavior, use the following custom values.yaml configuration:
agent:
  config:
    processors:
      resourcedetection:
        override: false
#357 Double expansion issue in splunk-otel-collector is fixed
If you use OTel native logs collection with any custom log processing operators in the filelog receiver, please replace any occurrences of $$$$ with $$.
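As a minimal, hypothetical illustration (the operator and attribute name are made up; only the number of dollar signs is the point), a custom operator supplied through logsCollection.containers.extraOperators would change like this:

logsCollection:
  containers:
    extraOperators:
      - type: add
        field: attributes.example   # hypothetical attribute
        # previously this value had to be written as "$$$$"
        value: "$$"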
#325 Logs collection is now disabled by default for Splunk Observability destination
If you send logs to the Splunk Observability destination, make sure to enable logs. Use the --set="splunkObservability.logsEnabled=true" argument with the helm install/upgrade command, or add the following lines to your custom values.yaml:
splunkObservability:
  logsEnabled: true
#297, #301 Several parameters in the values.yaml configuration were renamed according to the Splunk GDI Specification
If you use the following parameters in your custom values.yaml, please rename them accordingly:
- provider -> cloudProvider
- distro -> distribution
- otelAgent -> agent
- otelCollector -> gateway
- otelK8sClusterReceiver -> clusterReceiver
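For example, a custom values.yaml that previously set the cloud provider, distribution, and gateway explicitly (illustrative values) would change from:

provider: aws
distro: eks
otelCollector:
  enabled: true

to:

cloudProvider: aws
distribution: eks
gateway:
  enabled: true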
#306 Some parameters under the splunkPlatform group were renamed
If you use the following parameters under the splunkPlatform group, please make sure they are updated:
- metrics_index -> metricsIndex
- max_connections -> maxConnections
- disable_compression -> disableCompression
- insecure_skip_verify -> insecureSkipVerify
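For example (the index name and connection limit are illustrative), a configuration such as:

splunkPlatform:
  metrics_index: k8s_metrics
  max_connections: 200

becomes:

splunkPlatform:
  metricsIndex: k8s_metrics
  maxConnections: 200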
#295 Secret names are changed according to the GDI specification
If you provide the access token for Splunk Observability using a custom Kubernetes secret (secret.create=false), please update the secret key from splunk_o11y_access_token to splunk_observability_access_token.
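A sketch of such a custom secret using the new key (the secret name here is hypothetical and must match whatever your chart configuration references):

apiVersion: v1
kind: Secret
metadata:
  name: my-splunk-otel-secret
type: Opaque
stringData:
  splunk_observability_access_token: <SPLUNK_OBSERVABILITY_ACCESS_TOKEN>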
#273 Changed configuration to fetch attributes from labels and annotations of pods and namespaces
The podLabels parameter under the extraAttributes group is now deprecated in favor of fromLabels. Please update your custom values.yaml accordingly.
For example, the following config:
extraAttributes:
  podLabels:
    - app
    - git_sha
Should be changed to:
extraAttributes:
  fromLabels:
    - key: app
    - key: git_sha
#316 Busybox dependency is removed, the splunk/fluentd-hec image is used in the init container instead
image.fluentd.initContainer is no longer used. Please remove it from your custom values.yaml.
If you have any extra receivers that require access to files or directories on the node that are not mounted by default, you need to set up additional volume mounts.
For example, if you have the following smartagent/docker-container-stats
receiver added to your configuration:
agent:
  config:
    receivers:
      smartagent/docker-container-stats:
        type: docker-container-stats
        dockerURL: unix:///hostfs/var/run/docker.sock
You need to mount the docker socket to your container as follows:
extraVolumeMounts:
  - mountPath: /hostfs/var/run/docker.sock
    name: host-var-run-docker
    readOnly: true
extraVolumes:
  - name: host-var-run-docker
    hostPath:
      path: /var/run/docker.sock
#246 Simplify configuration for switching to native OTel logs collection
The config to enable native OTel logs collection was changed from
fluentd:
  enabled: false
logsCollection:
  enabled: true
to
logsEngine: otel
Enabling both engines is not supported anymore. If you need that, you can install fluentd separately.
The following parameters are now deprecated and moved under the splunkObservability group. They need to be updated in your custom values.yaml files before backward compatibility is discontinued.
Required parameters:
- splunkRealm changed to splunkObservability.realm
- splunkAccessToken changed to splunkObservability.accessToken
Optional parameters:
- ingestUrl changed to splunkObservability.ingestUrl
- apiUrl changed to splunkObservability.apiUrl
- metricsEnabled changed to splunkObservability.metricsEnabled
- tracesEnabled changed to splunkObservability.tracesEnabled
- logsEnabled changed to splunkObservability.logsEnabled
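For example, with illustrative values, a configuration would change from:

splunkRealm: us0
splunkAccessToken: xxxxxx

to:

splunkObservability:
  realm: us0
  accessToken: xxxxxx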
#163 Auto-detection of prometheus metrics is disabled by default: If you rely on automatic prometheus endpoint detection to scrape prometheus metrics from pods in your k8s cluster, make sure to add this configuration to your values.yaml:
autodetect:
  prometheus: true
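For reference, the autodetection relies on the standard prometheus.io pod annotations; a pod exposing metrics would typically carry annotations such as the following (the port value is illustrative):

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"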