Splunk S2S Source Connector for Confluent Platform

The Splunk S2S Source connector provides a way to integrate Splunk with Apache Kafka®. The connector receives data from the Splunk universal forwarder (UF).


The Splunk S2S Source connector listens on a network port. Running more than one connector task, or running in distributed mode, can produce undesirable results if another task already has the port open. Confluent recommends you run the Splunk S2S Source connector in Standalone Mode.


The Splunk S2S Source connector includes the following features:

Supports one task

The Splunk S2S Source connector supports running only one task.

Metadata Support

The Splunk S2S Source connector supports parsing of metadata fields (host, source, sourcetype, and index) along with the raw event. Here is an example of a message in a Kafka topic:

   {
      "event": "sample log event",
      "time": 1623175216,
      "host": "sample host",
      "source": "/opt/splunkforwarder/splunk-s2s-test.log",
      "index": "default",
      "sourcetype": "splunk-s2s-test-too_small"
   }
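For downstream consumers, the record value can be treated as JSON. The following is a minimal parsing sketch in Python; the payload is the sample message from this section, and the split between the raw event and its metadata fields is illustrative, not part of the connector's API:

```python
import json

# Sample record value as emitted by the connector (matches the example above).
value = """{
   "event": "sample log event",
   "time": 1623175216,
   "host": "sample host",
   "source": "/opt/splunkforwarder/splunk-s2s-test.log",
   "index": "default",
   "sourcetype": "splunk-s2s-test-too_small"
}"""

record = json.loads(value)

# Separate the raw event from its metadata fields.
raw_event = record["event"]
metadata = {k: v for k, v in record.items() if k != "event"}

print(raw_event)         # sample log event
print(metadata["host"])  # sample host
```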

The connector also supports parsing of custom metadata fields, which can be configured on the forwarder by using the _meta tag, as shown in the following example:

sourcetype = test
disabled = false
_meta = testField::testValue

The following example shows a message with the previous input configuration:

   {
      "event": "sample log event",
      "time": 1623175216,
      "host": "sample host",
      "source": "/opt/splunkforwarder/splunk-s2s-test.log",
      "index": "default",
      "sourcetype": "test",
      "testField": "testValue"
   }

Data Ingestion

The Splunk S2S Source connector supports data ingestion from the Splunk UF for several input types. For help with configuring these input types on UFs, see Configure Inputs on Splunk universal forwarder.

Multiline Event Parsing

The Splunk S2S Source connector also supports multiline event parsing by providing the following event break options for each sourcetype:

  1. EVERY_LINE: Create a new event on every new line.
  2. REGEX: Create new events at boundaries defined by a regular expression.

See SourceType Parsing Config Example for help with defining event break options for a sourcetype.


The following prerequisites are required to run the Splunk S2S Source connector:

  • Kafka Broker: Confluent Platform 6.0.0 or above
  • Connect: Confluent Platform 6.0.0 or above
  • Java 1.8
  • Splunk UF: 8.x

Splunk Universal Forwarder Configuration

For a complete list of configuration properties for the Splunk UF, see Splunk Universal Forwarder Configuration Properties.

Install the Splunk S2S Source Connector

You can install this connector by using the Confluent Hub client installation instructions or by manually downloading the ZIP file.



You must install the connector on every machine where Connect will run.

  • An install of the Confluent Hub Client.


    This is installed by default with Confluent Enterprise.

  • An install of the latest connector version.

    To install the latest connector version, navigate to your Confluent Platform installation directory and run the following command:

    confluent-hub install confluentinc/kafka-connect-splunk-s2s:latest

    You can install a specific version by replacing latest with a version number as shown in the following example:

    confluent-hub install confluentinc/kafka-connect-splunk-s2s:1.0.0

Install the connector manually

Download and extract the ZIP file for your connector and then follow the manual connector installation instructions.


You can use this connector for a 30-day trial period without a license key.

After 30 days, this connector is available under a Confluent enterprise license. Confluent issues Confluent enterprise license keys to subscribers, along with providing enterprise-level support for Confluent Platform and your connectors. If you are a subscriber, please contact Confluent Support at support@confluent.io for more information.

See Confluent Platform license for license properties and License topic configuration for information about the license topic.

Configuration Properties

For a complete list of configuration properties for this connector, see Splunk S2S Source Connector Configuration Properties.


For an example of how to get Kafka Connect connected to Confluent Cloud, see Distributed Cluster.

SourceType Parsing Config Example

The splunk.s2s.sourcetypes configuration property contains a list of sourcetypes; for each sourcetype, you can define an event break option and, for REGEX, the regular expression used to break events. The following example shows how to use this configuration:

splunk.s2s.sourcetypes = typeA,typeB
splunk.s2s.sourcetype.typeA.eventbreak = EVERY_LINE
splunk.s2s.sourcetype.typeB.eventbreak = REGEX
splunk.s2s.sourcetype.typeB.regex = ([\r\n]+)(?:\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d{3})


  • By default, the event break option for each sourcetype is configured to EVERY_LINE.
  • To add custom properties, such as splunk.s2s.sourcetype.typeA.eventbreak (which may not be visible initially in the user interface), click Add a property while defining the configuration.
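Conceptually, the REGEX event break splits the incoming stream at newlines that precede a match of the configured pattern. The following Python sketch illustrates how the example regex above breaks a multiline stream into events; it mimics the observable behavior and is not the connector's actual implementation:

```python
import re

# Break events at newlines that are followed by a timestamp such as
# "2021-06-08 12:00:01.456" (same timestamp shape as the example regex above).
event_break = re.compile(
    r"[\r\n]+(?=\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d{3})"
)

stream = (
    "2021-06-08 12:00:00.123 first event\n"
    "  stack trace line belonging to the first event\n"
    "2021-06-08 12:00:01.456 second event"
)

events = event_break.split(stream)
# The indented continuation line stays attached to the first event,
# because the newline before it is not followed by a timestamp.
```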

Quick Start

This Quick Start uses the Splunk S2S Source connector to receive data from the Splunk UF and ingest it into Kafka.

  1. Install the connector using the Confluent Hub Client.

    # run from your CP installation directory
    confluent-hub install confluentinc/kafka-connect-splunk-s2s:latest
  2. Start the Confluent Platform.


    The command syntax for the Confluent CLI development commands changed in 5.3.0. These commands have been moved to confluent local. For example, the syntax for confluent start is now confluent local services start. For more information, see confluent local.

    confluent local services start
  3. Create a splunk-s2s-source.properties file with the following contents:
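A minimal sketch of the file is shown below. The connector class name is an assumption based on the connector's naming convention, and the port (9997) and topic (splunk-s2s-events) match the values used in the later steps; adjust all values for your environment:

```properties
# Hypothetical minimal configuration; verify property names against the
# Splunk S2S Source Connector Configuration Properties reference.
name=splunk-s2s-source
connector.class=io.confluent.connect.splunk.s2s.SplunkS2SSourceConnector
tasks.max=1
splunk.s2s.port=9997
kafka.topic=splunk-s2s-events
confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1
```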

  4. Load the Splunk S2S Source connector.

    confluent local services connect connector load splunk-s2s-source --config splunk-s2s-source.properties


    Don’t use the Confluent CLI in production environments.

  5. Confirm the connector is in a RUNNING state.

    confluent local services connect connector status splunk-s2s-source
  6. Start a Splunk UF by running the Splunk UF Docker container.

    docker run -d -p 9998:9997 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=password" --name splunk-uf splunk/universalforwarder:8.1.2
  7. Create a splunk-s2s-test.log file with the following sample log events:

    log event 1
    log event 2
    log event 3
  8. Copy the splunk-s2s-test.log file to the Splunk UF Docker container using the following command:

    docker cp splunk-s2s-test.log splunk-uf:/opt/splunkforwarder/splunk-s2s-test.log
  9. Configure the UF to monitor the splunk-s2s-test.log file:

    docker exec -it splunk-uf sudo ./bin/splunk add monitor -source /opt/splunkforwarder/splunk-s2s-test.log -auth admin:password
  10. Configure the UF to connect to the Splunk S2S Source connector:

    • For Mac/Windows systems:

      docker exec -it splunk-uf sudo ./bin/splunk add forward-server host.docker.internal:9997
    • For Linux systems (replace <docker-host-ip> with the Docker host's IP address as reachable from the container):

      docker exec -it splunk-uf sudo ./bin/splunk add forward-server <docker-host-ip>:9997
  11. Verify the data was ingested into the Kafka topic.

    To look for events from a monitored file (splunk-s2s-test.log) in the Kafka topic, run the following command:

    kafka-console-consumer --bootstrap-server localhost:9092 --topic splunk-s2s-events --from-beginning | grep 'log event'


    When you run the previous command without grep, you will also see many Splunk internal events in the Kafka topic, because the Splunk UF sends its internal log events to the connector by default.

  12. Shut down Confluent Platform.

    confluent local destroy
  13. Shut down the Docker container.

    docker stop splunk-uf
    docker rm splunk-uf