RabbitMQ Source Connector for Confluent Cloud¶
The fully-managed RabbitMQ Source connector for Confluent Cloud uses the AMQP protocol to communicate with RabbitMQ servers. The RabbitMQ Source connector reads data from a RabbitMQ queue (or queues) and persists the data in an Apache Kafka® topic.
Note
- This Quick Start is for the fully-managed Confluent Cloud connector. If you are installing the connector locally for Confluent Platform, see RabbitMQ Source Connector for Confluent Platform.
- If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.
Features¶
The RabbitMQ Source connector provides the following features:
- At least once delivery: The connector guarantees that records are delivered at least once to the Kafka topic.
- Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see RabbitMQ Source Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Schema Registry Enabled Environments.
Note
When paused, this connector continues to consume messages from RabbitMQ until the consumer times out. These messages remain in system memory while the connector is paused. There is no data loss when the connector resumes, since messages are acknowledged only after they are flushed from memory and sent to Kafka. However, if you plan to keep this connector paused for an extended time, consider removing the connector, since messages will continue to accumulate in system memory.
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud RabbitMQ Source connector. The quick start shows how to select the connector and configure it to read data from RabbitMQ queues and persist the data in an Apache Kafka® topic.
- Prerequisites
- Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud.
- Authorized access to a RabbitMQ host server and queue, along with the host security details.
- A topic must exist before launching the connector.
- The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
- Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
- For networking considerations, see Networking and DNS. To use a set of public egress IP addresses, see Public Egress IP Addresses for Confluent Cloud Connectors.
- Kafka cluster credentials. The following lists the different ways you can provide credentials.
- Enter an existing service account resource ID.
- Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
- Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
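For example, you can create an API key scoped to a specific Kafka cluster using the Confluent CLI (the cluster ID below is a placeholder):

confluent api-key create --resource <cluster-id>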
Refer to Cloud connector limitations for additional information.
Note
There is no output.data.format configuration used with this source connector because the connector defaults to ByteArrayConverter for the value and StringConverter for the key. No other converter is useful for this connector.
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster¶
To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.
Step 2: Add a connector¶
In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Select your connector¶
Click the RabbitMQ Source connector card.
Step 4: Enter the connector details¶
Note
- Make sure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
At the Add RabbitMQ Source Connector screen, complete the following:
Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:
- My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
- Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
- Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.
Note
Freight clusters support only service accounts for Kafka authentication.
Click Continue.
- Add the connection details:
  - RabbitMQ host: The RabbitMQ host server address to connect to. For example, 192.168.1.99.
  - RabbitMQ username: Username to authenticate to RabbitMQ.
  - RabbitMQ password: Password to authenticate to RabbitMQ.
  - RabbitMQ virtual host: The name of the virtual host created in RabbitMQ.
  - RabbitMQ port: RabbitMQ port to connect to. Set this in accordance with the chosen security protocol.
  - Security protocol: The security protocol to use when connecting to RabbitMQ.
- Click Continue.
Add the following RabbitMQ details:
- RabbitMQ queue(s): The RabbitMQ queue name. You can specify multiple RabbitMQ queues to read from.
Show advanced configurations
Auto-restart policy
Enable Connector Auto-restart: Control the auto-restart behavior of the connector and its tasks in the event of user-actionable errors. Defaults to true, enabling the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.
RabbitMQ batch size: The maximum number of records that the connector batches and returns to the Kafka topic. The default property value is 1024 records.
RabbitMQ backoff time (in milliseconds): The number of milliseconds (ms) to wait when no records are returned from the RabbitMQ queue. The default property value is 100 ms.
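If you configure the connector from a JSON file rather than the Console, these two settings map to the batch.size and backoff.time.ms properties (see Configuration Properties). A minimal sketch with illustrative values:

{
  "batch.size": "2048",
  "backoff.time.ms": "500"
}

Larger batches reduce per-poll overhead at the cost of memory, while a longer backoff reduces polling load on an idle queue at the cost of latency.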
Transforms
Single Message Transforms: To add a new SMT, see Add transforms. For more information about unsupported SMTs, see Unsupported transformations.
See Configuration Properties for all property values and definitions.
Click Continue.
Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.
- To change the number of tasks, use the Range Slider to select the desired number of tasks.
- Click Continue.
Verify the connection details by previewing the running configuration.
Tip
For information about previewing your connector output, see Data Previews for Confluent Cloud Connectors.
Once you’ve validated that the properties are configured to your satisfaction, click Launch.
The status for the connector should go from Provisioning to Running.
Step 5: Check the Kafka topic¶
After the connector is running, verify that messages are populating your Kafka topic.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Using the Confluent CLI¶
Complete the following steps to set up and run the connector using the Confluent CLI.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: List the connector configuration properties¶
Enter the following command to show the connector configuration properties:
confluent connect plugin describe <connector-plugin-name>
The command output shows the required and optional configuration properties.
Step 3: Create the connector configuration file¶
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
"connector.class": "RabbitMQSource",
"name": "RabbitMQSource_0",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "<my-kafka-api-key>",
"kafka.api.secret" : "<my-kafka-api-secret>",
"kafka.topic" : "topic_0"
"rabbitmq.host" : "192.168.1.99",
"rabbitmq.username" : "<username>",
"rabbitmq.password": "<password>",
"rabbitmq.queue": "<queue-name>",
"tasks.max" : "1"
}
Note the following property definitions:
"name": Sets a name for your new connector."connector.class": Identifies the connector plugin name.
"kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNTorKAFKA_API_KEY(the default). To use an API key and secret, specify the configuration propertieskafka.api.keyandkafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list Id | Resource ID | Name | Description +---------+-------------+-------------------+------------------- 123456 | sa-l1r23m | sa-1 | Service account 1 789101 | sa-l4d56p | sa-2 | Service account 2
"kafka.topic": Enter the topic name where you want data sent. A topic must exist before launching the connector."rabbitmq....": Enter your RabbitMQ authentication and queue details. You can specify multiplerabbitmq.queuenames to read from using a comma-separated list."tasks.max": Enter the number of tasks that the connector uses. The connector supports running one or more tasks. More tasks may improve performance.
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
See Configuration Properties for all property values and definitions.
Step 4: Load the properties file and create the connector¶
Enter the following command to load the configuration and start the connector:
confluent connect cluster create --config-file <file-name>.json
For example:
confluent connect cluster create --config-file rabbitmq-source.json
Example output:
Created connector RabbitMQSourceConnector_0 lcc-ix4dl
Step 5: Check the connector status¶
Enter the following command to check the connector status:
confluent connect cluster list
Example output:
     ID     |           Name            | Status  |  Type
+-----------+---------------------------+---------+--------+
  lcc-ix4dl | RabbitMQSourceConnector_0 | RUNNING | source
Step 6: Check the Kafka topic¶
After the connector is running, verify that messages are populating your Kafka topic.
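For example, you can consume records from the topic with the Confluent CLI (this assumes the CLI is configured for the same cluster and that topic_0 is the connector's target topic, as in the example configuration):

confluent kafka topic consume topic_0 --from-beginning

Because the connector writes raw RabbitMQ message bodies (ByteArrayConverter), the output is the unmodified message payload.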
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
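As a hedged sketch of the Connect API pattern, the same connector can be created with an HTTP request. The environment ID, cluster ID, and Cloud API credentials below are placeholders:

curl -s -X POST \
  -u "<cloud-api-key>:<cloud-api-secret>" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "RabbitMQSource_0",
        "config": {
          "connector.class": "RabbitMQSource",
          "name": "RabbitMQSource_0",
          "kafka.auth.mode": "KAFKA_API_KEY",
          "kafka.api.key": "<my-kafka-api-key>",
          "kafka.api.secret": "<my-kafka-api-secret>",
          "kafka.topic": "topic_0",
          "rabbitmq.host": "192.168.1.99",
          "rabbitmq.username": "<username>",
          "rabbitmq.password": "<password>",
          "rabbitmq.queue": "<queue-name>",
          "tasks.max": "1"
        }
      }' \
  "https://api.confluent.cloud/connect/v1/environments/<env-id>/clusters/<cluster-id>/connectors"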
Configuration Properties¶
Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.
How should we connect to your data?¶
nameSets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Kafka Cluster credentials¶
kafka.auth.modeKafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.keyKafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
kafka.service.account.idThe Service Account that will be used to generate the API keys to communicate with the Kafka cluster.
- Type: string
- Importance: high
kafka.api.secretSecret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
Which topic do you want to send data to?¶
kafka.topicIdentifies the topic name to write the data to.
- Type: string
- Importance: high
How should we connect to RabbitMQ?¶
rabbitmq.hostRabbitMQ host to connect to.
- Type: string
- Importance: high
rabbitmq.usernameUsername to authenticate with RabbitMQ.
- Type: string
- Importance: high
rabbitmq.passwordPassword to authenticate with RabbitMQ.
- Type: password
- Importance: high
rabbitmq.virtual.hostThe virtual host to use when connecting to the RabbitMQ broker.
- Type: string
- Default: /
- Importance: high
rabbitmq.portRabbitMQ port to connect to. Set this in accordance with the chosen security protocol.
- Type: int
- Default: 5672
- Valid Values: [0,…]
- Importance: high
RabbitMQ details¶
rabbitmq.queueRabbitMQ queue(s) to read from.
- Type: list
- Importance: high
batch.sizeThe maximum number of records to return to Connect for each poll, when more than this number of records are already available from RabbitMQ.
- Type: int
- Default: 1024
- Valid Values: [1,…]
- Importance: medium
backoff.time.msThe number of milliseconds to wait when no records are returned from the RabbitMQ queue.
- Type: int
- Default: 100
- Valid Values: [1,…]
- Importance: medium
Security¶
rabbitmq.security.protocolThe security protocol to use when connecting to RabbitMQ. Valid values are PLAINTEXT and SSL.
- Type: string
- Default: PLAINTEXT
- Importance: high
rabbitmq.https.ssl.key.passwordThe password of the private key in the key store file. This is optional for the client.
- Type: password
- Importance: high
rabbitmq.https.ssl.keystorefileThe key store containing the server certificate. Only required if using SSL.
- Type: password
- Default: [hidden]
- Importance: high
rabbitmq.https.ssl.keystore.passwordThe store password for the key store file. This is optional for a client and is only needed if a key store is configured.
- Type: password
- Importance: high
rabbitmq.https.ssl.keystore.typeThe file format of the key store file. This is optional for the client.
- Type: string
- Default: JKS
- Importance: medium
rabbitmq.https.ssl.truststorefileThe trust store containing the server CA certificate. Only required if using SSL.
- Type: password
- Default: [hidden]
- Importance: high
rabbitmq.https.ssl.truststore.passwordThe password for the trust store file. If a password is not set, the configured trust store file will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.
- Type: password
- Importance: high
rabbitmq.https.ssl.truststore.typeThe file format of the trust store file.
- Type: string
- Default: JKS
- Importance: medium
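For reference, the following is a hedged sketch of the SSL-related connection properties in a JSON configuration. Port 5671 is the conventional AMQPS (TLS) port; the keystore and truststore values are placeholders:

{
  "rabbitmq.security.protocol": "SSL",
  "rabbitmq.port": "5671",
  "rabbitmq.https.ssl.keystorefile": "<keystore-file>",
  "rabbitmq.https.ssl.keystore.password": "<keystore-password>",
  "rabbitmq.https.ssl.truststorefile": "<truststore-file>",
  "rabbitmq.https.ssl.truststore.password": "<truststore-password>"
}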
Number of tasks for this connector¶
tasks.maxMaximum number of tasks for the connector.
- Type: int
- Valid Values: [1,…]
- Importance: high
Auto-restart policy¶
auto.restart.on.user.errorEnable connector to automatically restart on user-actionable errors.
- Type: boolean
- Default: true
- Importance: medium
Additional Configs¶
header.converterThe converter class for the headers. This is used to serialize and deserialize the headers of the messages.
- Type: string
- Importance: low
producer.override.compression.typeThe compression type for all data generated by the producer. Valid values are none, gzip, snappy, lz4, and zstd.
- Type: string
- Importance: low
value.converter.allow.optional.map.keysAllow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.auto.register.schemasSpecify if the Serializer should attempt to register the Schema.
- Type: boolean
- Importance: low
value.converter.connect.meta.dataAllow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.enhanced.avro.schema.supportEnable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.enhanced.protobuf.schema.supportEnable enhanced schema support to preserve package information. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.flatten.unionsWhether to flatten unions (oneofs). Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.generate.index.for.unionsWhether to generate an index suffix for unions. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.generate.struct.for.nullsWhether to generate a struct variable for null values. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.int.for.enumsWhether to represent enums as integers. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.latest.compatibility.strictVerify latest subject version is backward compatible when use.latest.version is true.
- Type: boolean
- Importance: low
value.converter.object.additional.propertiesWhether to allow additional properties for object schemas. Applicable for JSON_SR Converters.
- Type: boolean
- Importance: low
value.converter.optional.for.nullablesWhether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.optional.for.proto2Whether proto2 optionals are supported. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.use.latest.versionUse latest version of schema in subject for serialization when auto.register.schemas is false.
- Type: boolean
- Importance: low
value.converter.use.optional.for.nonrequiredWhether to set non-required properties to be optional. Applicable for JSON_SR Converters.
- Type: boolean
- Importance: low
value.converter.wrapper.for.nullablesWhether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.wrapper.for.raw.primitivesWhether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
key.converter.key.subject.name.strategyHow to construct the subject name for key schema registration.
- Type: string
- Default: TopicNameStrategy
- Importance: low
value.converter.decimal.formatSpecify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals: BASE64 to serialize DECIMAL logical types as base64 encoded binary data, and NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.
- Type: string
- Default: BASE64
- Importance: low
value.converter.flatten.singleton.unionsWhether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.
- Type: boolean
- Default: false
- Importance: low
value.converter.reference.subject.name.strategySet the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.
- Type: string
- Default: DefaultReferenceSubjectNameStrategy
- Importance: low
value.converter.value.subject.name.strategyDetermines how to construct the subject name under which the value schema is registered with Schema Registry.
- Type: string
- Default: TopicNameStrategy
- Importance: low
Next Steps¶
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
