RabbitMQ Sink Connector for Confluent Cloud¶
The fully-managed RabbitMQ Sink connector for Confluent Cloud uses the AMQP protocol to communicate with RabbitMQ servers. The RabbitMQ Sink connector reads data from one or more Apache Kafka® topics and sends the data to a RabbitMQ exchange.
Note
- This Quick Start is for the fully-managed Confluent Cloud connector. If you are installing the connector locally for Confluent Platform, see RabbitMQ Sink Connector for Confluent Platform.
- If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.
Features¶
The RabbitMQ Sink connector provides the following features:
- At least once delivery: The connector guarantees that records are delivered at least once from the Kafka topic to the RabbitMQ exchange.
- Dead Letter Queue: This connector supports the Dead Letter Queue (DLQ) functionality. For information about accessing and using the DLQ, see the View Connector Dead Letter Queue Errors in Confluent Cloud docs.
- Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.
- Header forwarding: The connector supports forwarding Kafka headers and metadata to the RabbitMQ message as headers. The Kafka message key can also be forwarded as the correlationID on the RabbitMQ message.
- RabbitMQ Exchange delivery: The connector supports delivering to one configured RabbitMQ exchange. When multiple Kafka topics are specified to read from, the messages are produced to this one RabbitMQ exchange.
- Publishes bytes as payload: The RabbitMQ message supports publishing bytes as payload. The connector supports storing raw bytes in RabbitMQ by setting value.converter to org.apache.kafka.connect.converters.ByteArrayConverter. Using the ByteArrayConverter for the value, the connector stores the binary serialized form (for example, JSON, Avro, Strings, etc.) of the Kafka record values in RabbitMQ as byte arrays. Applications accessing these values can then read this information from RabbitMQ and deserialize the bytes into a usable form. If your data in Kafka is not in the format you want to persist in RabbitMQ, consider using Configure Single Message Transforms for Kafka Connectors in Confluent Cloud to change records before they are sent to RabbitMQ.
- Supports SSL/TLS security: The connector also supports SSL/TLS security to connect to the RabbitMQ server.
- Batches records: The connector batches the records from Kafka while publishing to RabbitMQ. This is controlled by the rabbitmq.publish.max.batch.size configuration property (see the example fragment after this list).
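For example, a minimal configuration fragment showing the batching-related properties with their documented default values (see Configuration Properties later on this page):

{
  "rabbitmq.publish.max.batch.size": "100",
  "rabbitmq.publish.ack.timeout": "10000",
  "rabbitmq.publish.max.retries": "1"
}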
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see RabbitMQ Sink Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Schema Registry Enabled Environments.
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud RabbitMQ Sink connector. The quick start shows how to select the connector and configure it to read data from Apache Kafka® topics and persist the data to a RabbitMQ exchange.
- Prerequisites
- Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud.
- Authorized access to a RabbitMQ host server, exchange, and host security details.
- The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
- For networking considerations, see Networking and DNS. To use a set of public egress IP addresses, see Public Egress IP Addresses for Confluent Cloud Connectors.
- Kafka cluster credentials. The following lists the different ways you can provide credentials.
- Enter an existing service account resource ID.
- Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
- Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create (see the example command after this list) or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
Refer to Cloud connector limitations for additional information.
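For example, a minimal sketch of creating an API key scoped to your Kafka cluster with the Confluent CLI (the cluster ID below is a placeholder):

confluent api-key create --resource <kafka-cluster-id>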
Note
There is no input.data.format configuration property for this sink connector, because the connector defaults to ByteArrayConverter for the value and StringConverter for the key. No other converter is useful for this connector.
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster¶
See the Quick Start for Confluent Cloud for installation instructions.
Step 2: Add a connector¶
In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Select your connector¶
Click the RabbitMQ Sink connector card.
Step 4: Enter the connector details¶
Note
- Ensure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
At the Add RabbitMQ Sink Connector screen, complete the following:
If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.
To create a new topic, click +Add new topic.
- Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:
- My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
- Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
- Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.
- Click Continue.
- Add the RabbitMQ Connection details:
- RabbitMQ host: The RabbitMQ host server address to connect to. For example, 192.168.1.99.
- RabbitMQ port: The server port the connector uses to connect to the server. For example, 5672.
- RabbitMQ username: The username to use when authenticating to RabbitMQ.
- RabbitMQ password: The password to use when authenticating to RabbitMQ.
- Security protocol: The security protocol to use when connecting to RabbitMQ. Values can be PLAINTEXT or SSL.
- RabbitMQ virtual host: The name of the virtual host created in RabbitMQ.
- SSL Key Password: The private key password in the keystore file, or the PEM key specified in ssl.keystore.key. This is required for clients only if two-way authentication is configured.
- Key Store button: Uploads the keystore file.
- Click Continue.
Note
Configuration properties that are not shown in the Cloud Console use the default values. See Configuration Properties for all property values and definitions.
Enter the following RabbitMQ destination details:
- RabbitMQ Destination Exchange: The RabbitMQ destination exchange where messages are delivered. The connector delivers messages to this RabbitMQ exchange only, even when the connector consumes from multiple Kafka topics.
- RabbitMQ Message Routing Key: The routing key that RabbitMQ uses to determine how to route the message.
- RabbitMQ Message Delivery Mode: An option that determines the message durability in RabbitMQ. Options are persistent or transient. For more information, see the RabbitMQ docs.
Show advanced configurations
- Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.
- Forward Kafka Record Key: If enabled, the Kafka record key is converted to a string and forwarded on the RabbitMQ message correlationID property. The connector does not send a correlationID if the Kafka record key is null and this property is set to true.
- Forward Kafka Record Metadata: If set to true, the connector forwards Kafka record metadata as RabbitMQ message headers. This includes the record’s topic, partition, and offset. The topic name is forwarded as a header named KAFKA_TOPIC, the partition value as a header named KAFKA_PARTITION, and the offset value as a header named KAFKA_OFFSET.
- Forward Kafka Record Headers: If set to true, the connector adds Kafka record headers to the RabbitMQ message as headers.
- Maximum batch size for publish acknowledgments: The maximum number of messages in a batch to block on for acknowledgments.
- Time to wait for message acknowledgments: The period of time, in milliseconds, to wait for message acknowledgment. The minimum allowed timeout is 1 millisecond.
- Message publish retries: The number of retries for unacked or nacked messages.
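For reference, the forwarding toggles above map to the rabbitmq.forward.kafka.key, rabbitmq.forward.kafka.metadata, and rabbitmq.forward.kafka.headers configuration properties, and the acknowledgment and retry fields map to rabbitmq.publish.max.batch.size, rabbitmq.publish.ack.timeout, and rabbitmq.publish.max.retries. An illustrative fragment:

{
  "rabbitmq.forward.kafka.key": "true",
  "rabbitmq.forward.kafka.metadata": "true",
  "rabbitmq.forward.kafka.headers": "true"
}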
Click Continue.
Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.
- To change the number of recommended tasks, enter the number of tasks for the connector to use in the Tasks field.
- Click Continue.
- Verify the connection details by previewing the running configuration.
- Once you’ve validated that the properties are configured to your satisfaction, click Launch.
Tip
For information about previewing your connector output, see Data Previews for Confluent Cloud Connectors.
The status for the connector should go from Provisioning to Running. It may take a few minutes.
Step 5: Check the RabbitMQ destination¶
After the connector is running, verify that messages from your Kafka topic are delivered to the configured RabbitMQ exchange.
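For example, if the RabbitMQ management plugin is enabled on your server, you can query the destination exchange over the management HTTP API and check its publish counters. The host, credentials, and exchange name below are placeholders, and %2F is the URL-encoded default / virtual host:

curl -u <rabbitmq-username>:<rabbitmq-password> \
  http://<rabbitmq-host>:15672/api/exchanges/%2F/<rabbitmq-exchange>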
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Using the Confluent CLI¶
Complete the following steps to set up and run the connector using the Confluent CLI.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: List the connector configuration properties¶
Enter the following command to show the connector configuration properties:
confluent connect plugin describe <connector-plugin-name>
The command output shows the required and optional configuration properties.
Step 3: Create the connector configuration file¶
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
"name" : "RabbitMQSinkConnector_0",
"connector.class": "RabbitMQSink",
"topics" : "pageviews",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "<my-kafka-api-key>",
"kafka.api.secret": "<my-kafka-api-secret>",
"rabbitmq.host" : "192.168.1.99",
"rabbitmq.exchange" : "exchange_1",
"rabbitmq.routing.key" : "routingkey_1",
"rabbitmq.delivery.mode" : "PERSISTENT",
"tasks.max" : "1"
}
Note the following property definitions:
"name"
: Sets a name for your new connector."connector.class"
: Identifies the connector plugin name."topics"
: Enter Kafka topic name or comma-separated list of topic names.
"kafka.auth.mode"
: Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNT
orKAFKA_API_KEY
(the default). To use an API key and secret, specify the configuration propertieskafka.api.key
andkafka.api.secret
, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>
. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list Id | Resource ID | Name | Description +---------+-------------+-------------------+------------------- 123456 | sa-l1r23m | sa-1 | Service account 1 789101 | sa-l4d56p | sa-2 | Service account 2
"rabbitmq.<...>"
: See the RabbitMQ Sink configuration properties for property values and definitions. Note that the connector configuration defaults to host port5672
(i.e.,"rabbitmq.port"
:"5672"
)."tasks.max"
: Enter the number of tasks that the connector uses. The connector supports running one or more tasks. More tasks may improve performance.
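For example, the same configuration using a service account instead of an API key might look like the following (the resource ID shown is illustrative):

{
  "name" : "RabbitMQSinkConnector_0",
  "connector.class": "RabbitMQSink",
  "topics" : "pageviews",
  "kafka.auth.mode": "SERVICE_ACCOUNT",
  "kafka.service.account.id": "sa-l1r23m",
  "rabbitmq.host" : "192.168.1.99",
  "rabbitmq.exchange" : "exchange_1",
  "rabbitmq.routing.key" : "routingkey_1",
  "rabbitmq.delivery.mode" : "PERSISTENT",
  "tasks.max" : "1"
}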
For TLS connections, you must supply the keystore and/or truststore file contents and the file passwords when creating the JSON connector configuration. The truststore and keystore files are binary files. For the rabbitmq.https.ssl.keystorefile and rabbitmq.https.ssl.truststorefile properties, you must do the following:
- Encode the truststore or keystore file in base64.
- Take the encoded string and add the data:text/plain;base64, prefix.
- Use the entire string as the property entry. For example:
"rabbitmq.https.ssl.keystorefile" : "data:text/plain;base64,/u3+7QAAAAIAAAACAAAAAQAGY2xpZ...omitted...=="
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI. See Unsupported transformations for a list of SMTs that are not supported with this sink connector.
See Configuration Properties for all property values and definitions.
Step 4: Load the properties file and create the connector¶
Enter the following command to load the configuration and start the connector:
confluent connect cluster create --config-file <file-name>.json
For example:
confluent connect cluster create --config-file rabbitmq-sink.json
Example output:
Created connector RabbitMQSinkConnector_0 lcc-ix4dl
Step 5: Check the connector status¶
Enter the following command to check the connector status:
confluent connect cluster list
Example output:
ID | Name | Status | Type
+-----------+---------------------------+---------+-------+
lcc-ix4dl | RabbitMQSinkConnector_0 | RUNNING | sink
Step 6: Check the RabbitMQ destination¶
After the connector is running, verify that messages from your Kafka topic are delivered to the configured RabbitMQ exchange.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Configuration Properties¶
Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.
Which topics do you want to get data from?¶
topics
Identifies the topic name or a comma-separated list of topic names.
- Type: list
- Importance: high
Schema Config¶
schema.context.name
Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.
- Type: string
- Default: default
- Importance: medium
How should we connect to your data?¶
name
Sets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Kafka Cluster credentials¶
kafka.auth.mode
Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.key
Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
kafka.service.account.id
The service account used to generate the API keys to communicate with the Kafka cluster.
- Type: string
- Importance: high
kafka.api.secret
Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
RabbitMQ Publishing¶
rabbitmq.publish.max.batch.size
Maximum number of messages in a batch to block on for acknowledgements. Maximum allowed size is 10000.
- Type: int
- Default: 100
- Valid Values: [1,…,10000]
- Importance: medium
rabbitmq.publish.ack.timeout
Period of time to wait for message acknowledgement in milliseconds. Minimum allowed timeout is 1 millisecond. Maximum allowed timeout is 60 seconds.
- Type: int
- Default: 10000
- Valid Values: [1,…,60000]
- Importance: medium
rabbitmq.publish.max.retries
Number of retries for unacked or nacked messages.
- Type: int
- Default: 1
- Valid Values: [0,…]
- Importance: medium
Security¶
rabbitmq.security.protocol
The security protocol to use when connecting to RabbitMQ. Values can be PLAINTEXT or SSL.
- Type: string
- Default: PLAINTEXT
- Importance: medium
rabbitmq.https.ssl.key.password
The password of the private key in the key store file or the PEM key specified in ssl.keystore.key. This is required for clients only if two-way authentication is configured.
- Type: password
- Importance: high
rabbitmq.https.ssl.keystorefile
The key store containing the server certificate. Only required if using HTTPS.
- Type: password
- Default: [hidden]
- Importance: high
rabbitmq.https.ssl.keystore.password
The store password for the key store file. This is optional for a client and only needed if ssl.keystore.location is configured. Key store password is not supported for PEM format.
- Type: password
- Importance: high
rabbitmq.https.ssl.truststorefile
The trust store containing the server CA certificate. Only required if using HTTPS.
- Type: password
- Default: [hidden]
- Importance: high
rabbitmq.https.ssl.truststore.password
The password for the trust store file. If a password is not set, the configured trust store file is still used, but integrity checking is disabled. Trust store password is not supported for PEM format.
- Type: password
- Importance: high
rabbitmq.https.ssl.keystore.type
The file format of the key store file. This is optional for client.
- Type: string
- Default: JKS
- Importance: medium
rabbitmq.https.ssl.truststore.type
The file format of the trust store file.
- Type: string
- Default: JKS
- Importance: medium
Connection¶
rabbitmq.host
RabbitMQ host to connect to.
- Type: string
- Importance: high
rabbitmq.port
RabbitMQ port to connect to.
- Type: int
- Default: 5672
- Valid Values: [0,…,65535]
- Importance: medium
rabbitmq.username
Username to authenticate to RabbitMQ with.
- Type: string
- Importance: high
rabbitmq.password
Password to authenticate to RabbitMQ with.
- Type: password
- Importance: high
rabbitmq.virtual.host
The virtual host to use when connecting to the broker.
- Type: string
- Default: /
- Importance: low
RabbitMQ¶
rabbitmq.routing.key
RabbitMQ routing key that dictates how the message travels once it reaches RabbitMQ.
- Type: string
- Importance: high
rabbitmq.delivery.mode
PERSISTENT or TRANSIENT; determines message durability in RabbitMQ.
- Type: string
- Importance: high
rabbitmq.forward.kafka.key
If enabled, the Kafka record key is converted to a string and forwarded on the correlationID property of the RabbitMQ message. If the Kafka record key is null and this value is true, no correlationID is sent.
- Type: boolean
- Importance: low
rabbitmq.forward.kafka.metadata
If enabled, metadata from the Kafka record is forwarded on the RabbitMQ Message as headers. This includes the record’s topic, partition, and offset. The topic name is applied as a header named KAFKA_TOPIC, the partition value is applied as a header named KAFKA_PARTITION, and the offset value is applied as a header named KAFKA_OFFSET.
- Type: boolean
- Importance: low
rabbitmq.forward.kafka.headers
If enabled, Kafka record headers are added to the RabbitMQ Message as headers.
- Type: boolean
- Importance: low
rabbitmq.exchange
The destination RabbitMQ exchange where messages are delivered. The connector delivers messages to this one RabbitMQ exchange even when it consumes from multiple Kafka topics.
- Type: string
- Importance: high
Consumer configuration¶
max.poll.interval.ms
The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).
- Type: long
- Default: 300000 (5 minutes)
- Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
- Importance: low
max.poll.records
The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.
- Type: long
- Default: 500
- Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
- Importance: low
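If the connector cannot send records to RabbitMQ fast enough, you can tune these consumer properties in the connector configuration. The values below are illustrative and must stay within the documented ranges:

{
  "max.poll.interval.ms": "900000",
  "max.poll.records": "200"
}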
Number of tasks for this connector¶
tasks.max
Maximum number of tasks for the connector.
- Type: int
- Valid Values: [1,…]
- Importance: high
Next Steps¶
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.