Splunk Sink Connector for Confluent Platform¶
The Splunk Sink connector is used to move messages from Apache Kafka® to Splunk.
The Splunk Sink connector includes the following features:
- At least once delivery
- Dead Letter Queue
- Multiple tasks
- Data ingestion
- In-flight data transformation and enrichment
- Acknowledgement mode
At least once delivery¶
This connector guarantees that records are delivered at least once from the Kafka topic.
Dead Letter Queue¶
This connector supports the Dead Letter Queue (DLQ) functionality. For information about accessing and using the DLQ, see Confluent Platform Dead Letter Queue.
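DLQ behavior for sink connectors is controlled through the standard Kafka Connect error-handling properties. The following is a minimal sketch; the DLQ topic name is an assumption chosen for this example:

```
# route records that fail conversion or transformation to a DLQ topic
# (dlq-splunk-sink is a hypothetical topic name)
errors.tolerance=all
errors.deadletterqueue.topic.name=dlq-splunk-sink
errors.deadletterqueue.topic.replication.factor=1
errors.deadletterqueue.context.headers.enable=true
```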
Multiple tasks¶
The Splunk Sink connector supports running one or more tasks. You can specify the number of tasks with the tasks.max configuration parameter. Running multiple tasks can lead to significant performance gains when the connector consumes from a topic with many partitions.
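As a sketch, assuming the source topic has three partitions, the task count could be raised to match:

```
# hypothetical fragment of a sink connector configuration:
# up to three tasks, one per topic partition
name=SplunkSink
topics=splunk-qs
tasks.max=3
```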
Data ingestion¶
The Splunk HTTP Event Collector (HEC) receives data from Kafka topics over an HTTP or HTTPS connection, using an Event Collector token configured in Splunk.
In-flight data transformation and enrichment¶
This feature is used to enrich raw data with extra metadata fields. The configured enrichment metadata is indexed along with raw event data by the Splunk software. See Indexed Field Extractions for more information.
Data enrichment for the /event HEC endpoint is only available in Splunk Enterprise 6.5 and above.
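As a hedged sketch, enrichment is configured with the connector's splunk.hec.json.event.enrichment property, a comma-separated list of key=value pairs (verify the exact property name and the metadata values against the configuration reference; the values below are illustrative):

```
# index each event with extra metadata fields (illustrative values)
splunk.hec.json.event.enrichment=env=qa,datacenter=us-west-2
```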
Acknowledgement mode¶
This feature implements guaranteed delivery by polling Splunk for an acknowledgement before committing the Kafka offset.
Prerequisites¶
The following are required to run the Splunk Sink connector:
- Kafka Broker: Confluent Platform 3.3.0 or above, or Kafka 0.11.0 or above
- Connect: Confluent Platform 4.0 or above, or Kafka 1.0 or above
- Java 1.8
- Splunk 6.5 or above, configured with valid HTTP Event Collector (HEC) tokens
- Splunk Indexers and Heavy Forwarders that send information to this connector should have the same HEC token settings as this connector
- Task configuration parameters vary depending on acknowledgement setting (see the Configuration Properties for details)
HEC Acknowledgement prevents potential data loss but may slow down event ingestion.
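A minimal sketch of enabling acknowledgement mode, assuming the splunk.hec.ack.* property names and interval values shown below (check them against the configuration reference):

```
# poll Splunk for event acknowledgement before committing offsets
splunk.hec.ack.enabled=true
# polling interval and timeout in seconds (assumed values)
splunk.hec.ack.poll.interval=10
splunk.hec.ack.poll.timeout=300
```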
Install the Splunk Sink Connector¶
You can install this connector by using the Confluent Hub client installation instructions or by manually downloading the ZIP file.
You must install the connector on every machine where Connect will run.
- An install of the Confluent Hub Client. This is installed by default with Confluent Enterprise.
- An install of the latest (latest) connector version.
To install the latest connector version, navigate to your Confluent Platform installation directory and run the following command:
confluent-hub install splunk/kafka-connect-splunk:latest
You can install a specific version by replacing latest with a version number, as shown in the following example:
confluent-hub install splunk/kafka-connect-splunk:1.1.1
The Splunk Sink connector is an open source connector and does not require a Confluent Enterprise License.
For a complete list of configuration properties for this connector, see Splunk Sink Connector Configuration Properties.
Quick Start¶
The default port used by a Splunk HEC is 8088. However, the ksqlDB component of Confluent Platform also uses that port. For this quick start, since both Splunk and Confluent Platform will be running, we configure the HEC to use port 8889 instead. If that port is in use by another process, change 8889 to a different, available port.
Start a Splunk Enterprise instance by running the Splunk Docker container.
docker run -d -p 8000:8000 -p 8889:8889 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=password" --name splunk splunk/splunk:7.3.0
Open http://localhost:8000 to access Splunk Web. Log in with username admin and the password set in the Docker command above (password).
Configure a Splunk HEC using Splunk Web.
- Click Settings > Data Inputs.
- Click HTTP Event Collector.
- Click Global Settings.
- In the All Tokens toggle button, select Enabled.
- Ensure SSL is disabled (leave the Enable SSL checkbox unchecked).
- Change the HTTP Port Number to 8889.
- Click Save.
- Click New Token.
- In the Name field, enter a name for the token.
- Click Next.
- Click Review.
- Click Submit.
Note the token value on the Token has been created successfully page. This token value is needed for the connector configuration later.
Install the connector through the Confluent Hub Client.
# run from your Confluent Platform installation directory
confluent-hub install splunk/kafka-connect-splunk:latest
Start Confluent Platform.
The command syntax for the Confluent CLI development commands changed in 5.3.0. These commands have been moved to confluent local. For example, the syntax for confluent start is now confluent local services start. For more information, see confluent local.
confluent local services start
Produce test data to the splunk-qs topic in Kafka.
echo event 1 | confluent local services kafka produce splunk-qs
echo event 2 | confluent local services kafka produce splunk-qs
Create a splunk-sink.properties file with the properties below. Substitute <HEC_TOKEN> with the Splunk HEC token created earlier.
name=SplunkSink
topics=splunk-qs
tasks.max=1
connector.class=com.splunk.kafka.connect.SplunkSinkConnector
splunk.indexes=main
splunk.hec.uri=http://localhost:8889
splunk.hec.token=<HEC_TOKEN>
splunk.sourcetypes=my_sourcetype
confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1
value.converter=org.apache.kafka.connect.storage.StringConverter
Start the connector.
You must include a double dash (--) between the topic name and your flag. For more information, see this post.
confluent local services connect connector load splunk --config splunk-sink.properties
In the Splunk user interface, verify that data is flowing into your Splunk platform instance by searching for the test events.
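Based on the example configuration (splunk.indexes=main and splunk.sourcetypes=my_sourcetype), a search along the following lines should return the two test events:

```
index=main sourcetype=my_sourcetype
```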
Shut down Confluent Platform.
confluent local destroy
Shut down the Docker container.
docker stop splunk
docker rm splunk