Splunk S2S Source Connector for Confluent Platform
The Splunk S2S Source connector provides a way to integrate Splunk with Apache Kafka®. The connector receives data from a Splunk universal forwarder (UF) or a Splunk heavy forwarder (HF).
Important
The Splunk S2S Source connector listens on a network port. Running more than one connector task or running in distributed mode can produce undesirable results if another task already has the port open. Confluent recommends you run the Splunk S2S Source connector in Standalone Mode.
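The following is a minimal sketch of launching the connector with a standalone Connect worker; the worker properties path is an example from a default Confluent Platform layout, and the connector properties file is the one shown in the quick start below:
# Run a single standalone Connect worker that hosts the connector task
./bin/connect-standalone etc/kafka/connect-standalone.properties splunk-s2s-source.properties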
Features
The Splunk S2S Source connector includes the following features:
Note
At least once delivery is supported only when acknowledgements are enabled on both the forwarder and the connector. For more details, see At Least Once Delivery.
Supports one task
The Splunk S2S Source connector supports running only one task.
Metadata support
The Splunk S2S Source connector supports parsing metadata fields (host, source, sourcetype, and index) along with a raw event. The following is an example of a message in a Kafka topic:
{
  "event": "sample log event",
  "time": 1623175216,
  "host": "sample host",
  "source": "/opt/splunkforwarder/splunk-s2s-test.log",
  "index": "default",
  "sourcetype": "splunk-s2s-test-too_small"
}
The connector also supports parsing custom metadata fields, which can be configured on the forwarder by using the _meta tag, as shown in the following example:
[monitor://$SPLUNK_HOME/splunk-s2s-test.log]
sourcetype = test
disabled = false
_meta = testField::testValue
The following example shows a message with the previous input configuration:
{
  "event": "sample log event",
  "time": 1623175216,
  "host": "sample host",
  "source": "/opt/splunkforwarder/splunk-s2s-test.log",
  "index": "default",
  "sourcetype": "test",
  "testField": "testValue"
}
Data ingestion
The Splunk S2S Source connector supports data ingestion from the Splunk forwarder for multiple input types. For help with configuring these input types on UFs, see Configure Inputs on Splunk Forwarder.
Multiline event parsing
The Splunk S2S Source connector also supports multiline event parsing by providing the following event break options for each sourcetype:
EVERY_LINE: Create a new event on every new line.
REGEX: Create events as defined by the configured regex.
For help with defining event break options for a sourcetype, see SourceType parsing config example.
Compression support
The Splunk S2S Source connector supports compression for communication between the connector and Splunk forwarders. To enable compression, set the following configuration property:
"splunk.s2s.compression.enable": "true"
Note
The connector supports only native Splunk compression, that is, the compressed=true setting. It does not support the useClientSSLCompression setting provided by Splunk.
Be sure to set compressed to true on forwarders before setting "splunk.s2s.compression.enable": "true".
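On the forwarder side, the compression setting lives in outputs.conf. The following is a minimal sketch; the output group name kafka_connect and the server host and port are placeholders for your environment:
[tcpout]
defaultGroup = kafka_connect

[tcpout:kafka_connect]
# Host and port on which the Splunk S2S Source connector is listening
server = connect-host:9997
# Enable native Splunk compression to match the connector setting
compressed = true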
SSL communication support
The Splunk S2S Source connector supports SSL communication between the connector and Splunk forwarders. To enable SSL communication, set the following configuration properties:
"splunk.s2s.ssl.enable": "true"
"splunk.s2s.ssl.key.path":"Path to SSL Server Private Key File"
"splunk.s2s.ssl.key.password":"SSL Server Private Key Password"
"splunk.s2s.ssl.cert.chain.path":"Path to SSL Server Certificate Chain"
The Splunk S2S Source connector supports client authentication in SSL communication between the connector and Splunk forwarders. To enable client authentication, set the following configuration properties in addition to the properties above:
"splunk.s2s.ssl.client.auth.enable": "true"
"splunk.s2s.ssl.root.ca.cert.chain.path":"Path to Root CA Certificate Chain"
"splunk.s2s.ssl.cn.list":"List of authorized Common Names to validate the client certificate"
Note
Be sure to set useSSL to true and sslRootCAPath to the location of the certificate authority certificate on forwarders before setting "splunk.s2s.ssl.enable": "true".
Be sure to set clientCert and sslPassword on forwarders before setting "splunk.s2s.ssl.client.auth.enable": "true".
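On the forwarder side, these settings also go in outputs.conf. The following is a minimal sketch that reuses the placeholder output group from the compression example; all paths and the password are examples only:
[tcpout:kafka_connect]
server = connect-host:9997
# Enable SSL to the connector
useSSL = true
# CA certificate used to validate the connector's server certificate
sslRootCAPath = /opt/splunkforwarder/etc/auth/ca.pem
# Client certificate and key password, required only for client authentication
clientCert = /opt/splunkforwarder/etc/auth/client.pem
sslPassword = myClientKeyPassword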
Dynamic Metadata Support
The Splunk S2S Source connector supports parsing dynamic metadata fields that are generated by the INDEXED_EXTRACTIONS=JSON setting on a Splunk HF. For more details, see Indexed field extractions.
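As a sketch, this extraction is enabled in the HF's props.conf for a given sourcetype; the sourcetype name below is a placeholder:
[my_json_sourcetype]
# Index each JSON field so it reaches the connector as dynamic metadata
INDEXED_EXTRACTIONS = json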
HEC Record Format Support
The Splunk S2S Source connector supports producing records in a HEC-compliant format, where additional metadata is stored in the fields key. This format is compatible with the Splunk HEC Sink connector. The following is a sample record in this format:
{
  "event": "sample log event",
  "time": 1623175216,
  "host": "sample host",
  "source": "/opt/splunkforwarder/splunk-s2s-test.log",
  "index": "default",
  "sourcetype": "test",
  "fields": {
    "testField": "testValue",
    "sampleField": "sampleValue"
  }
}
To enable producing records in this format, set the following configuration properties:
"splunk.s2s.headers.metadata": "body",
"splunk.s2s.record.format": "hec"
For more details on this format, see Format Events for HTTP Event Collector.
At Least Once Delivery
In the event of a failure, the Splunk S2S Source connector ensures that no messages are lost, although the last few messages may be processed again. This is achieved through an acknowledgement mechanism between the connector and the forwarder. With acknowledgements, the forwarder will resend any data that the connector has not acknowledged as “received”.
To enable acknowledgement, set the following configuration property:
"splunk.s2s.enable.ack": "true"
Note
Be sure to set useACK to true on forwarders before setting "splunk.s2s.enable.ack": "true".
For more details on at least once delivery, see Protect against loss of in-flight data.
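On the forwarder side, acknowledgements are enabled in outputs.conf. A minimal sketch, again using a placeholder output group and server address:
[tcpout:kafka_connect]
server = connect-host:9997
# Wait for the connector to acknowledge data before discarding it
useACK = true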
CSFLE (Client-Side Field-Level Encryption)
This connector supports the CSFLE functionality. For more information, see Manage CSFLE.
Limitations
The Splunk S2S Source connector does not support the useClientSSLCompression setting that Splunk provides.
License
Confluent’s Splunk S2S Source connector is a Confluent Premium connector subject to the Confluent enterprise license and therefore requires an additional subscription.
You can use this connector for a 30-day trial period without a license key. After 30 days, you must purchase a connector subscription to Confluent's Splunk S2S Source connector, which includes Confluent enterprise license keys along with enterprise-level support for Confluent Platform and your connectors. If you are a subscriber, contact Confluent Support for more information.
For license properties, see Confluent Platform license. For information about the license topic, see License topic configuration.
Configuration properties
For a complete list of configuration properties for this connector, see Configuration Reference for Splunk S2S Source Connector for Confluent Platform.
For an example of how to get Kafka Connect connected to Confluent Cloud, see Connect Self-Managed Kafka Connect to Confluent Cloud.
Splunk Forwarder configuration
For a complete list of configuration properties for the Splunk forwarder, see Configuration Reference for Splunk Forwarder.
Install the Splunk S2S Source connector
You can install this connector by using the confluent connect plugin install command, or by manually downloading the ZIP file.
Prerequisites
You must install the connector on every machine where Connect will run.
Kafka Broker: Confluent Platform 6.0.0 or later.
Connect: Confluent Platform 6.0.0 or later.
Java 1.8.
Splunk UF version 8.x or 9.x.
Confluent CLI (requires separate installation)
An installation of the latest (latest) connector version.
Install the connector using Confluent Hub
To install the latest connector version, navigate to your Confluent Platform installation directory and run the following command:
confluent connect plugin install confluentinc/kafka-connect-splunk-s2s:latest
You can install a specific version by replacing latest with a version number, as shown in the following example:
confluent connect plugin install confluentinc/kafka-connect-splunk-s2s:2.2.0
Install the connector manually
Download and extract the ZIP file for your connector and then follow the manual connector installation instructions.
SourceType parsing config example
The splunk.s2s.sourcetypes configuration contains a list of sourcetypes for defining a regex to parse events. The following example shows how to use this configuration:
splunk.s2s.sourcetypes = typeA,typeB
splunk.s2s.sourcetype.typeA.eventbreak = EVERY_LINE
splunk.s2s.sourcetype.typeB.eventbreak = REGEX
splunk.s2s.sourcetype.typeB.regex = ([\r\n]+)(?:\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d{3})
Note
By default, the event break option for each sourcetype is set to EVERY_LINE.
To add custom properties, such as splunk.s2s.sourcetype.typeA.eventbreak (which may not be visible initially in the user interface), click Add a property while defining the configuration.
Error Handling
The Splunk S2S Source connector may encounter the following types of errors:
Deque Timeout Error: Occurs when the record queue becomes full and the connector times out while waiting for the queue to process the records.
Data Parsing Error: Happens when the connector encounters new or unrecognized data and fails to parse it properly.
By default, the connector logs these errors and continues processing. If unknown data is encountered, the connector skips recording the unprocessable data.
The splunk.s2s.behavior.on.error configuration property can be set to one of the following:
fail: Stops the connector immediately when an error is encountered, ceasing further data processing.
ignore: Silently skips over errors without logging them and continues processing subsequent data without interruption.
log: (Default) Logs the error message in the connector logs and continues processing subsequent data without interruption.
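For example, to stop the connector on the first error instead of logging and continuing, set:
"splunk.s2s.behavior.on.error": "fail"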
Quick start
This quick start uses the Splunk S2S Source connector to receive data from a Splunk UF and ingest it into Kafka.
Install the connector using the Confluent CLI.
# run from your CP installation directory
confluent connect plugin install confluentinc/kafka-connect-splunk-s2s:latest
Start the Confluent Platform.
confluent local start
Create a splunk-s2s-source.properties file with the following contents:
name=splunk-s2s-source
tasks.max=1
connector.class=io.confluent.connect.splunk.s2s.SplunkS2SSourceConnector
splunk.s2s.port=9997
kafka.topic=splunk-s2s-events
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1
Load the Splunk S2S Source connector.
confluent local load splunk-s2s-source --config splunk-s2s-source.properties
Don’t use the Confluent CLI in production environments.
Confirm the connector is in a RUNNING state.
confluent local status splunk-s2s-source
Start a Splunk UF by running the Splunk UF Docker container.
docker run -d -p 9998:9997 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=password" --name splunk-uf splunk/universalforwarder:9.0.0
Create a splunk-s2s-test.log file with the following sample log events:
log event 1
log event 2
log event 3
Copy the splunk-s2s-test.log file to the Splunk UF Docker container using the following command:
docker cp splunk-s2s-test.log splunk-uf:/opt/splunkforwarder/splunk-s2s-test.log
Configure the UF to monitor the splunk-s2s-test.log file:
docker exec -it splunk-uf sudo ./bin/splunk add monitor -source /opt/splunkforwarder/splunk-s2s-test.log -auth admin:password
Configure the UF to connect to the Splunk S2S Source connector:
For Mac/Windows systems:
docker exec -it splunk-uf sudo ./bin/splunk add forward-server host.docker.internal:9997
For Linux systems:
docker exec -it splunk-uf sudo ./bin/splunk add forward-server 172.17.0.1:9997
Verify the data was ingested into the Kafka topic. To look for events from the monitored file (splunk-s2s-test.log) in the Kafka topic, run the following command:
kafka-console-consumer --bootstrap-server localhost:9092 --topic splunk-s2s-events --from-beginning | grep 'log event'
Note
When you use the previous command without grep, you will see many Splunk internal events ingested into the Kafka topic, because the Splunk UF sends internal Splunk log events to the connector by default.
Shut down Confluent Platform.
confluent local destroy
Shut down the Docker container.
docker stop splunk-uf
docker rm splunk-uf