Redis Kafka Source Connector for Confluent Cloud

The fully-managed Redis Kafka Source connector for Confluent Cloud moves data from a Redis database into an Apache Kafka® cluster. The connector captures changes from Redis data structures (streams and keys) and publishes them to a Kafka topic, enabling real-time data streaming from Redis to Kafka.

For example, when you configure the connector to monitor a Redis stream, it reads stream entries and publishes messages to a Kafka topic. Similarly, when monitoring Redis keyspace notifications, the connector captures changes to keys in a Redis database, and publishes keys and values to a Kafka topic. The connector maps the data structure key to the record key, and the value to the record value.
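
For illustration, suppose an entry is added to a Redis stream with redis-cli (the stream name orders and its fields are hypothetical):

redis-cli XADD orders '*' id 123 status shipped

With source.type set to STREAM and the connector monitoring this stream, the connector publishes a record carrying the entry fields to the configured Kafka topic. The exact record layout depends on the configured output data format.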

Features

The connector offers the following features:

  • Delivery guarantees: The Stream Source connector can be configured to acknowledge stream messages either automatically (for at-most-once delivery) or explicitly (for at-least-once delivery). The default is at-least-once delivery.

    The Keys Source connector does not guarantee data consistency because it relies on Redis keyspace notifications, which have no delivery guarantees. It is possible to miss some notifications, for example, due to network failures. Keyspace notifications are also disabled by default and must be enabled on the Redis server, as shown in the example following this list.

  • Database authentication: The connector uses password authentication.

  • Supported data formats: The connector supports AVRO, JSON, JSON_SR (JSON Schema), PROTOBUF, STRING, or BYTES output formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro). For more information, see Schema Registry Enabled Environments.

  • Client-side field level encryption (CSFLE) support: The connector supports CSFLE for sensitive data. For more information about CSFLE setup, see the connector configuration.

  • Unified source connector experience: The connector offers a unified experience by combining the functionalities of a Keys Source Connector and a Stream Source Connector. This allows you to choose your desired source type during initial configuration.

    You select either KEYS or STREAM as the source.type when you set up the connector.

  • Supports multiple tasks: The Stream Source connector supports running one or more tasks. More tasks may improve performance.

    The Keys Source connector can be configured with only one task.

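To enable keyspace notifications, you can use redis-cli, as in the following minimal sketch. The KEA flags subscribe to all keyspace and key-event notification classes; managed Redis services may expose this setting through their own configuration interface instead.

redis-cli CONFIG SET notify-keyspace-events KEA
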
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Limitations

Be sure to review the following information.

Maximum message size

This connector creates topics automatically. When it creates topics, the internal connector configuration property max.message.bytes is set to the following:

  • Basic cluster: 8 MB
  • Standard cluster: 8 MB
  • Enterprise cluster: 8 MB
  • Dedicated cluster: 20 MB

For more information about Confluent Cloud clusters, see Kafka Cluster Types in Confluent Cloud.
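
To check the value on a topic the connector created, you can inspect the topic configuration with the Confluent CLI (the topic name is a placeholder; the exact output varies by CLI version):

confluent kafka topic describe <topic-name>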

Quick Start

Use this quick start to get up and running with the Confluent Cloud Redis Kafka Source connector. The quick start provides the basics of selecting the connector and configuring it to consume data from a Redis database and persist the data to Kafka.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.
    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
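
For example, to create an API key scoped to a specific Kafka cluster (the cluster ID is a placeholder):

confluent api-key create --resource <cluster-id>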

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Redis Kafka Source connector card.

Redis Kafka Source Connector Card

Step 4: Enter the connector details

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

At the Redis Kafka Source Connector screen, complete the following:

  1. Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:

    • My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
    • Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
    • Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.

    Note

    Freight clusters support only service accounts for Kafka authentication.

  2. Click Continue.

Step 5: Check the Kafka topic

After the connector is running, verify that Redis records are populating the Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list
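
If the list is long, you can filter it for this connector (assuming a Unix-like shell):

confluent connect plugin list | grep -i redis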

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.
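
For example, using the plugin name that matches the connector.class value shown in the next step:

confluent connect plugin describe RedisKafkaSource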

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
   "connector.class": "RedisKafkaSource",
   "name": "<my-connector-name>",
   "kafka.auth.mode": "KAFKA_API_KEY",
   "kafka.api.key": "<my-kafka-api-key>",
   "kafka.api.secret": "<my-kafka-api-secret>",
   "kafka.topic": "keys_source",
   "redis.host": "test-18211.c90.us-east-1.ec2.redis-cloud.com",
   "redis.port": "18211",
   "redis.database": "0",
   "redis.username": "default",
   "redis.password": "********************",
   "redis.tls": "false",
   "source.type": "KEYS",
   "keys.batch.size": "100",
   "redis.timeout": "60",
   "redis.pool": "8",
   "redis.server.mode": "Standalone",
   "redis.keys.pattern": "*",
   "mode": "LIVE",
   "redis.keys.timeout": "0",
   "output.data.format": "AVRO",
   "tasks.max": "1",
   "auto.restart.on.user.error": "true"
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "redis.host": The IP address or hostname of the Redis database server.

  • "redis.port": The port number used to connect to the Redis database server.

  • "redis.tls": Specify whether to use Transport Layer Security (TLS) to connect to the Redis database. Defaults to false.

  • "source.type": Defines the type of Redis Kafka source connector to use. Select KEYS to monitor Redis keyspace notifications, or STREAM to read from Redis Streams.

  • "kafka.topic": Specifies the name of the destination Kafka topic where the connector publishes Redis events.

  • "keys.batch.size": Controls the maximum size of the batch for writing into a topic. Defaults to 100. For the Stream Source connector, the property name is "stream.batch.size".

  • "output.data.format": Sets the output Kafka record value format (data coming from the connector). Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, STRING, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  • "tasks.max": Enter the maximum number of tasks for the connector to use. More tasks might improve performance.

    Note

    The Keys Source connector can be configured with only one task.

Note

(Optional) To enable CSFLE for data encryption, specify the following properties:

  • csfle.enabled: Flag to indicate whether the connector honors CSFLE rules.
  • sr.service.account.id: The Service Account used to access Schema Registry and the encryption rules or keys associated with the schema.

For more information on CSFLE setup, see Manage CSFLE for connectors.
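
For example, a sketch of the two properties as they would appear in the connector configuration file (the Schema Registry service account resource ID is a placeholder):

   "csfle.enabled": "true",
   "sr.service.account.id": "<sr-service-account-resource-ID>"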

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.

See Configuration Properties for all property values and definitions.
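
The example configuration above uses the Keys source (source.type set to KEYS). If you use the Stream source instead, the stream-specific properties apply. The following is a minimal sketch of a STREAM configuration; the host, credentials, and stream name are placeholders, and you should adjust values for your environment:

{
   "connector.class": "RedisKafkaSource",
   "name": "<my-connector-name>",
   "kafka.auth.mode": "KAFKA_API_KEY",
   "kafka.api.key": "<my-kafka-api-key>",
   "kafka.api.secret": "<my-kafka-api-secret>",
   "kafka.topic": "stream_source",
   "redis.host": "<redis-host>",
   "redis.port": "<redis-port>",
   "redis.username": "default",
   "redis.password": "<redis-password>",
   "redis.tls": "false",
   "source.type": "STREAM",
   "redis.stream.name": "<my-stream>",
   "redis.stream.offset": "0-0",
   "redis.stream.delivery": "at-least-once",
   "stream.batch.size": "100",
   "output.data.format": "AVRO",
   "tasks.max": "2"
}

Because the Stream source supports multiple tasks, tasks.max can be greater than 1 here.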

Step 4: Load the properties file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file redis-kafka-source.json

Example output:

Created connector confluent-redis-kafka-source lcc-ix4dl

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

     ID     |             Name             | Status  |  Type
+-----------+------------------------------+---------+--------+
  lcc-ix4dl | confluent-redis-kafka-source | RUNNING | source

Step 6: Check the Kafka topic

After the connector is running, verify that Redis records are populating the Kafka topic.
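
For example, you can consume messages from the topic with the Confluent CLI (this sketch assumes the Avro value format used in the example configuration; flags may vary by CLI version):

confluent kafka topic consume --value-format avro --from-beginning keys_source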

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Which topic do you want to send data to?

kafka.topic

Identifies the topic name to write the data to.

  • Type: string
  • Importance: high

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string
  • Default: default
  • Importance: medium

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high
kafka.service.account.id

The Service Account used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high

Redis connection

redis.host

The hostname of the Redis server to connect to.

  • Type: string
  • Importance: high
redis.port

The port number of the Redis server to connect to.

  • Type: string
  • Importance: high
redis.database

The database index to read from.

  • Type: int
  • Default: 0
  • Importance: medium
redis.username

The username of the Redis user connecting to the Redis database server.

  • Type: string
  • Importance: medium
redis.password

The password of the Redis user connecting to the Redis database server.

  • Type: password
  • Importance: medium

Redis security

redis.tls

Establish a secure TLS connection to Redis.

  • Type: boolean
  • Default: false
  • Importance: medium
redis.cacert

X.509 CA certificate file to verify with. Use this with or without client certificates.

  • Type: password
  • Importance: medium

Redis client certificate auth

redis.key.file

Private key file (PEM format) to authenticate with. Use this file along with the certificate file for client certificate authentication.

  • Type: password
  • Importance: medium
redis.key.cert

X.509 certificate chain file (PEM format) to authenticate with. Use this file along with the private key file for client certificate authentication.

  • Type: password
  • Importance: medium
redis.key.password

Password of the private key file. Leave empty if the key file is not password-protected.

  • Type: password
  • Importance: medium

Source configuration

source.type

Type of Redis source connector. Select KEYS to monitor Redis keyspace notifications, or STREAM to read from Redis Streams.

  • Type: string
  • Default: KEYS
  • Importance: high
keys.batch.size

Number of records to process in each batch for writing into a topic.

  • Type: int
  • Default: 100
  • Valid Values: [1,…,10000]
  • Importance: medium
stream.batch.size

Number of records to process in each batch for writing into a topic.

  • Type: int
  • Default: 100
  • Valid Values: [1,…,10000]
  • Importance: medium
redis.timeout

Redis command timeout in seconds.

  • Type: long
  • Default: 60
  • Valid Values: [1,…,3600]
  • Importance: medium
redis.pool

Maximum number of connections in the pool in the range of 1 to 100.

  • Type: int
  • Default: 8
  • Valid Values: [1,…,100]
  • Importance: medium
redis.server.mode

Whether the Redis server is running on one node or multiple nodes.

  • Type: string
  • Default: Standalone
  • Importance: medium

Output messages

output.data.format

Sets the output Kafka record value format. Valid entries are AVRO, JSON, JSON_SR, PROTOBUF, STRING, or BYTES. Note that you must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, AVRO, JSON_SR, or PROTOBUF).

  • Type: string
  • Default: JSON_SR
  • Importance: high

Keys source configuration

redis.keys.pattern

Keyspace glob-style pattern to subscribe to. Use * to subscribe to all keys.

  • Type: string
  • Default: *
  • Importance: medium
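
For example, to capture only keys with a given prefix (the user: prefix is hypothetical):

"redis.keys.pattern": "user:*"
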
mode

Use LIVE to capture an initial snapshot of existing keys followed by live updates, or LIVEONLY to capture only live updates.

  • Type: string
  • Default: LIVE
  • Importance: high
redis.keys.timeout

Idle timeout in milliseconds. Use 0 to disable.

  • Type: long
  • Default: 0
  • Valid Values: [0,…,3600000]
  • Importance: low

Stream source configuration

redis.stream.name

Name of the Redis stream to read from.

  • Type: string
  • Importance: high
redis.stream.offset

Stream offset to start reading from. Use 0-0 to start from the beginning, $ to read only new messages, or a specific offset like 1234567890-0.

  • Type: string
  • Default: 0-0
  • Importance: medium
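
For example, to skip existing entries and read only messages added after the connector starts:

"redis.stream.offset": "$"
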
redis.stream.block

The maximum amount of time in milliseconds to wait while polling for stream messages (XREAD [BLOCK milliseconds]).

  • Type: long
  • Default: 100
  • Valid Values: [0,…,60000]
  • Importance: low
redis.stream.delivery

Stream message delivery guarantee. The valid options are at-least-once or at-most-once.

  • Type: string
  • Default: at-least-once
  • Importance: medium
redis.stream.consumer.group

Stream consumer group name. This group will be created if it doesn’t exist.

  • Type: string
  • Default: kafka-consumer-group
  • Importance: high
redis.stream.consumer.name

A format string for the stream consumer name, which may contain '${task}' as a placeholder for the task ID. For example, 'consumer-${task}' with the task ID '123' maps to the consumer name 'consumer-123'.

  • Type: string
  • Default: consumer-${task}
  • Importance: medium

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int
  • Default: 1
  • Importance: high

Additional Configs

header.converter

The converter class for the headers. This is used to serialize and deserialize the headers of the messages.

  • Type: string
  • Importance: low
producer.override.compression.type

The compression type for all data generated by the producer. Valid values are none, gzip, snappy, lz4, and zstd.

  • Type: string
  • Importance: low
producer.override.linger.ms

The producer groups together any records that arrive in between request transmissions into a single batched request. More details can be found in the documentation: https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html#linger-ms.

  • Type: long
  • Valid Values: [100,…,1000]
  • Importance: low
value.converter.allow.optional.map.keys

Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.

  • Type: boolean
  • Importance: low
value.converter.auto.register.schemas

Specify if the Serializer should attempt to register the Schema.

  • Type: boolean
  • Importance: low
value.converter.connect.meta.data

Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Type: boolean
  • Importance: low
value.converter.enhanced.avro.schema.support

Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.

  • Type: boolean
  • Importance: low
value.converter.enhanced.protobuf.schema.support

Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.flatten.unions

Whether to flatten unions (oneofs). Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.generate.index.for.unions

Whether to generate an index suffix for unions. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.generate.struct.for.nulls

Whether to generate a struct variable for null values. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.int.for.enums

Whether to represent enums as integers. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.latest.compatibility.strict

Verify latest subject version is backward compatible when use.latest.version is true.

  • Type: boolean
  • Importance: low
value.converter.object.additional.properties

Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.

  • Type: boolean
  • Importance: low
value.converter.optional.for.nullables

Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.optional.for.proto2

Whether proto2 optionals are supported. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.scrub.invalid.names

Whether to scrub invalid names by replacing invalid characters with valid characters. Applicable for Avro and Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.use.latest.version

Use latest version of schema in subject for serialization when auto.register.schemas is false.

  • Type: boolean
  • Importance: low
value.converter.use.optional.for.nonrequired

Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.

  • Type: boolean
  • Importance: low
value.converter.wrapper.for.nullables

Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.wrapper.for.raw.primitives

Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
errors.tolerance

Use this property if you would like to configure the connector's error handling behavior. WARNING: Use this property with caution for source connectors, as it may lead to data loss. If you set this property to all, the connector does not fail on errant records, but instead logs them (and, for sink connectors, sends them to the DLQ) and continues processing. If you set this property to none, the connector task fails on errant records.

  • Type: string
  • Default: none
  • Importance: low
key.converter.key.subject.name.strategy

How to construct the subject name for key schema registration.

  • Type: string
  • Default: TopicNameStrategy
  • Importance: low
value.converter.decimal.format

Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

  • BASE64 to serialize DECIMAL logical types as base64-encoded binary data.
  • NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Type: string
  • Default: BASE64
  • Importance: low
value.converter.flatten.singleton.unions

Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.

  • Type: boolean
  • Default: false
  • Importance: low
value.converter.ignore.default.for.nullables

When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR Converters.

  • Type: boolean
  • Default: false
  • Importance: low
value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string
  • Default: DefaultReferenceSubjectNameStrategy
  • Importance: low
value.converter.replace.null.with.default

Whether to replace fields that are null with their default value, if a default is defined. When set to true, the default value is used; otherwise, null is used. Applicable for JSON Converter.

  • Type: boolean
  • Default: true
  • Importance: low
value.converter.schemas.enable

Include schemas within each of the serialized values. Input messages must contain schema and payload fields and may not contain additional fields. For plain JSON data, set this to false. Applicable for JSON Converter.

  • Type: boolean
  • Default: false
  • Importance: low
value.converter.value.subject.name.strategy

Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Type: string
  • Default: TopicNameStrategy
  • Importance: low

Auto-restart policy

auto.restart.on.user.error

Enable connector to automatically restart on user-actionable errors.

  • Type: boolean
  • Default: true
  • Importance: medium

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
