
Metric event type handling support #282

Open

cosmo0920 opened this issue Aug 26, 2021 · 4 comments

Comments

@cosmo0920

cosmo0920 commented Aug 26, 2021

I'm currently investigating the metric event type with this connector, using the following config and event format sent via a custom forwarder.

connect-distributed.properties for connect

# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
bootstrap.servers=localhost:9092

group.id=kafka-connect-splunk-hec-sink

# The converters specify the format of data in Kafka and how to translate it into Connect data.
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

key.converter.schemas.enable=false
value.converter.schemas.enable=false

internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter

internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false

offset.storage.topic=connect-offsets
offset.storage.replication.factor=1

config.storage.topic=connect-configs
config.storage.replication.factor=1

status.storage.topic=connect-status
status.storage.replication.factor=1
#status.storage.partitions=5

# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000

plugin.path=connectors/

Record format

JSON string with the following format:

{"host":"development-box", "time":"1629859258.5508862","event":"metric", "fields":{"metric_name":"network_device_eth0_transmit_bytes_total", "_value":36589580.0}}

as mentioned in
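For reference, Splunk's single-metric HEC JSON format uses the literal string "metric" in the event field, an epoch time (Splunk accepts fractional seconds), and the measurement under fields._value. A minimal sketch, reusing the host and metric name from the example record above, that builds and serializes such a payload:

```python
import json

def build_metric_event(host, metric_name, value, time):
    """Build a Splunk HEC metric-format event dict (sketch of the
    single-metric HEC JSON layout)."""
    return {
        "host": host,
        "time": time,       # epoch seconds; fractional values are accepted
        "event": "metric",  # the literal string "metric" marks a metric event
        "fields": {
            "metric_name": metric_name,
            "_value": value,
        },
    }

payload = build_metric_event(
    "development-box",
    "network_device_eth0_transmit_bytes_total",
    36589580.0,
    1629859258.5508862,
)
print(json.dumps(payload))
```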

Created Connector Task

{
  "name": "kafka-connect-splunk",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "10",
    "topics": "myapp.test,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10",
    "splunk.hec.uri": "https://localhost:8088",
    "splunk.hec.token": "<Splunk HEC Token>",
    "splunk.hec.ack.enabled": "true",
    "splunk.hec.raw": "false",
    "splunk.hec.track.data": "true",
    "splunk.hec.ssl.trust.store.path": "/etc/ssl/certs/java/cacerts",
    "splunk.hec.ssl.trust.store.password": "changeit",
    "splunk.hec.ssl.validate.certs": "false"
  }
}

And created metric index on Splunk with this instruction: https://docs.splunk.com/Documentation/Splunk/8.2.1/Metrics/GetMetricsInOther#Get_metrics_in_from_clients_over_HTTP_or_HTTPS

But no luck. What am I missing about ingesting metric records via a custom HEC forwarder through this connector? Or does kafka-connect-splunk not support metric-type Splunk HEC events for now?

Additional context

With the above settings pointed at event-type indices (not metric-type indices), and an HEC token for normal events, I was able to ingest Splunk HEC events successfully.
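As a sanity check outside Kafka Connect, the HEC token and metrics index can be exercised directly against the standard /services/collector endpoint (the Authorization: Splunk <token> header is the usual HEC scheme; the URI and token below are placeholders). A sketch that builds the request, with the actual send left commented out since it needs a live HEC endpoint:

```python
import json
import urllib.request

def build_hec_request(hec_uri, token, event):
    """Build a POST request for the Splunk HEC collector endpoint.
    hec_uri and token are placeholders for your deployment."""
    return urllib.request.Request(
        url=hec_uri.rstrip("/") + "/services/collector",
        data=json.dumps(event).encode("utf-8"),
        headers={
            "Authorization": "Splunk " + token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_hec_request(
    "https://localhost:8088",
    "<Splunk HEC Token>",
    {"event": "metric",
     "fields": {"metric_name": "test.metric", "_value": 1.0}},
)
# To actually send (requires a reachable HEC endpoint):
# urllib.request.urlopen(req)
```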

@chaitanyaphalak
Contributor

Hey @cosmo0920, we don't support sending metrics with this integration yet.

@cosmo0920
Author

Thanks for the reply. Is there any ETA for this issue, or is there currently no plan to support it?

@cohuebn

cohuebn commented Sep 13, 2021

@cosmo0920: For what it's worth, I ran into the same issue and was able to get metrics indices populated via the Splunk Connector, but my events aren't shaped quite like yours. If I understand correctly, your events are already in HEC format. Is the "Record format" section in your original post an example of the records on the Kafka topic? If so, you might need to set splunk.hec.json.event.formatted: "true" in the connector config so that the Splunk Connector doesn't nest your entire record inside the event property when sending to Splunk.

If the above helps at all, feel free to ignore the details below. However, in case it's helpful, here are some details on how I was able to get metrics working:

  • Using version 2.0.2 of the Splunk Connector and version 8.0.10 of Splunk Enterprise
  • When events land on the Kafka topic the Splunk Connector is consuming from, the events look like this:
{
  "metric_name:cpu.cpuPercentUsage": 43.33333333333333,
  "metric_name:cpu.userCpuPercentUsage": 26.683333333333337,
  "metric_name:cpu.kernelCpuPercentUsage": 16.65,
  "hostname": "fluentbit-node-c56d647f5-ml4qz"
}
  • The Splunk Connector is configured like this:
{
  "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
  "tasks.max": "1",
  "topics": "canonical.metrics.cpu",
  "splunk.indexes": "cpu",
  "splunk.sourcetypes": "_json",
  "splunk.hec.uri":"http://splunk:8088",
  "splunk.hec.token": "default-token",
  "splunk.hec.ack.enabled" : "false",
  "splunk.hec.raw" : "false",
  "splunk.hec.track.data" : "true",
  "value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
  "value.converter.schema.registry.url": "http://kafka-platform-cp-schema-registry:8081"
}

I think these are the key portions of the above configuration:

  • I'm sending explicitly to my cpu index, which is a metrics index.
  • Also, I'm using the "splunk.sourcetypes": "_json" property to force Splunk to perform field extraction on my event to extract the metrics fields. In your case, I don't think that field extraction is necessary as you are using the fields property rather than embedding your metric fields within the event.

Outcome: as the connector sends this data to Splunk, it creates three metrics: cpu.cpuPercentUsage, cpu.kernelCpuPercentUsage, cpu.userCpuPercentUsage
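The multiple-metric shape above can be illustrated with a small sketch that splits an event dict into measurements and dimensions: keys prefixed with metric_name: carry the values, and everything else (here, hostname) becomes a dimension on each metric. The event values are taken from the example above; the helper name is just for illustration.

```python
def extract_metrics(event):
    """Split a multiple-metric HEC event dict into (metrics, dimensions).
    Keys prefixed with "metric_name:" carry the measurements; all other
    keys become dimensions shared by every metric in the event."""
    prefix = "metric_name:"
    metrics = {}
    dimensions = {}
    for key, value in event.items():
        if key.startswith(prefix):
            metrics[key[len(prefix):]] = value
        else:
            dimensions[key] = value
    return metrics, dimensions

event = {
    "metric_name:cpu.cpuPercentUsage": 43.33333333333333,
    "metric_name:cpu.userCpuPercentUsage": 26.683333333333337,
    "metric_name:cpu.kernelCpuPercentUsage": 16.65,
    "hostname": "fluentbit-node-c56d647f5-ml4qz",
}
metrics, dimensions = extract_metrics(event)
print(sorted(metrics))  # the three metric names from the outcome above
```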

@cosmo0920
Author

@cohuebn Thanks for the hints.

If so, you might need to set the splunk.hec.json.event.formatted: "true" in the connector config so that the Splunk Connector doesn't nest your entire record inside the event property when sending to Splunk.

That had no luck. But the metric_name:<name of metric>:<values> format together with the "splunk.sourcetypes": "_json" property is working well.
