IBM MQ Sink Connector for Confluent Cloud
The fully-managed IBM MQ Sink connector for Confluent Cloud reads messages from Kafka and then writes them to an IBM MQ cluster. You can use the fully-managed IBM MQ Sink connector for Confluent Cloud to export AVRO, JSON_SR (JSON Schema), PROTOBUF, JSON (schemaless), BYTES or STRING data from Apache Kafka® topics to IBM MQ queues or topics in STRING, JSON, or BYTES format.
Note
This Quick Start is for the fully-managed Confluent Cloud connector. If you are installing the connector locally for Confluent Platform, see IBM MQ Sink Connector for Confluent Platform.
If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.
Features
At least once delivery: The connector guarantees that records from Kafka topics are delivered at least once to IBM MQ destinations.
Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.
JMS message types: The connector supports TextMessage and BytesMessage. It does not support ObjectMessage or StreamMessage.
TLS/SSL security: The connector supports TLS/SSL security for secure communication with IBM MQ servers, including full keystore and truststore configuration.
Header forwarding: The connector supports forwarding Kafka record headers and metadata to JMS message properties. The Kafka message key can also be forwarded as the JMSCorrelationID on the JMS message.
Dead Letter Queue: This connector supports the Dead Letter Queue (DLQ) functionality. For information about accessing and using the DLQ, see View Connector Dead Letter Queue Errors in Confluent Cloud.
Flexible message formatting: The connector supports multiple JMS message formats and character encodings, with configurable message time-to-live and delivery modes.
Client-side encryption (CSFLE) support: The connector supports CSFLE for sensitive data. For more information about CSFLE setup, see the connector configuration.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Limitations
Be sure to review the following information.
For connector limitations, see IBM MQ Sink Connector limitations.
If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
If you plan to use Confluent Cloud Schema Registry, see Schema Registry Enabled Environments.
Quick Start
Use this quick start to get up and running with the Confluent Cloud IBM MQ Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events from Apache Kafka® topics to IBM MQ queues or topics.
Prerequisites
Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud.
The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
Schema Registry must be enabled to use a Schema Registry-based format (for example, AVRO, JSON_SR (JSON Schema), or PROTOBUF). For more information, see Schema Registry Enabled Environments.
Access to an IBM MQ server with:
IBM MQ broker host and port
Queue manager name
Channel name
Username and password credentials
JMS destination (queue or topic) name
If using TLS/SSL, you’ll need the appropriate keystore and truststore files.
For networking considerations, see Networking and DNS. To use a set of public egress IP addresses, see Public Egress IP Addresses for Confluent Cloud Connectors.
Kafka cluster credentials. The following lists the different ways you can provide credentials.
Enter an existing service account resource ID.
Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
For additional information, see Cloud connector limitations.
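For reference, the confluent api-key create step from the prerequisites might look like the following sketch (the cluster ID lkc-123456 is a placeholder):

# List your Kafka clusters to find the cluster resource ID
confluent kafka cluster list

# Create an API key and secret scoped to that cluster
confluent api-key create --resource lkc-123456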
Using the Confluent Cloud Console
Step 1: Launch your Confluent Cloud cluster
To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.
Step 2: Add a connector
In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Select your connector
Click the IBM MQ Sink connector card.

Step 4: Enter the connector details
Note
Ensure you have all your prerequisites completed.
An asterisk ( * ) designates a required entry.
At the Add IBM MQ Sink Connector screen, complete the following:
If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.
To create a new topic, click +Add new topic.
Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:
My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.
Note
Freight clusters support only service accounts for Kafka authentication.
Click Continue.
Configure the authentication properties:
IBM MQ Connection
Username: The username to use when connecting to IBM MQ.
Password: The password to use when connecting to IBM MQ.
IBM MQ broker host: IBM MQ broker host.
IBM MQ broker port: IBM MQ broker port.
Queue Manager: The name of the queue manager.
Channel: The channel for client connections.
JMS Destination Name: The name of the JMS destination that messages are written to.
JMS Destination Type: The type of JMS destination.
IBM MQ Secure Connection
SSL Cipher Suite: The CipherSuite for SSL connections.
SSL FIPS Required: Whether SSL FIPS is required.
SSL Peer Name: Sets a distinguished name (DN) pattern. If sslCipherSuite is set, this pattern can ensure that the correct queue manager is used. The connection attempt fails if the distinguished name provided by the queue manager does not match this pattern.
TLS Protocol: The TLS protocol version for secure connections to IBM MQ.
TLS Keystore Type: The file format of the key store file. This is required only when using secure TLS communication with IBM MQ.
TLS Keystore file: The key store file. This is required only when using secure TLS communication with IBM MQ.
TLS Keystore Password: The store password for the key store file. This is optional for a client and only needed if TLS Keystore file is configured.
TLS Key Password: The password of the private key used for secure TLS communication with IBM MQ.
TLS Truststore Type: The file format of the trust store file. This is required when using TLS and secure communication with IBM MQ.
TLS Truststore file: The trust store file. This is required only when using secure TLS communication with IBM MQ.
TLS Truststore Password: The password for the trust store file.
TLS KeyManager Algorithm: The algorithm used by key manager factory for SSL connections. This is required only when using secure TLS communication with IBM MQ.
TLS TrustManager Algorithm: The algorithm used by trust manager factory for SSL connections. This is required only when using secure TLS communication with IBM MQ.
TLS Secure Random Implementation: The SecureRandom PRNG implementation to use for SSL cryptography operations.
Click Continue.
Note
Configuration properties that are not shown in the Cloud Console use the default values. For all property values and definitions, see Configuration Properties.
Input Kafka record value format: Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, BYTES, or STRING.
Note
You need to have Confluent Cloud Schema Registry configured if you use a schema-based message format like AVRO, JSON_SR, and PROTOBUF. For more information, see Schema Registry.
Data decryption
Enable Client-Side Field Level Encryption for data decryption. Specify a Service Account to access the Schema Registry and associated encryption rules or keys with that schema. Select the connector behavior (ERROR or NONE) on data decryption failure. If set to ERROR, the connector fails and writes the encrypted data to the DLQ. If set to NONE, the connector writes the encrypted data to the target system without decryption. For more information on CSFLE or CSPE setup, see Manage encryption for connectors.
Show advanced configurations
Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.
Forward Kafka Record Key: Convert the Kafka record key to a string and forward it on the JMS Message property JMSCorrelationID.
Forward Kafka Record Metadata: Forward the Kafka record metadata on the JMS Message properties. This includes the record topic, partition, and offset.
Forward Kafka Record Headers: Add the Kafka record headers to the JMS Message as string properties.
Connection Max Retries: Specify the maximum number of times a task attempts to connect to the JMS broker. Connecting to a JMS broker may fail for multiple reasons, so the task retries the connection up to the specified value.
Connection Backoff MS: The amount of time, in milliseconds, to wait before attempting to reconnect to the JMS broker following a connection failure.
JMS Message Format: The format of JMS message values.
Character Encoding: The character encoding to use while writing the message.
Include schema for JSON formatter: Include schemas within each of the serialized values and keys.
JMS Message Time to Live (ms): Time to live (TTL) in milliseconds for messages sent to the JMS broker.
JMS Producer Delivery Mode: The PERSISTENT delivery mode (the default) instructs the JMS provider to take extra care to ensure that a message is not lost in transit in case of a JMS provider failure. The NON_PERSISTENT delivery mode does not require the JMS provider to store the message or otherwise guarantee that it is not lost if the provider fails.
JMS Producer Disable Message Timestamp: Sets whether message timestamps are disabled in IBM MQ.
Additional Configs
Value Converter Replace Null With Default: Whether to replace fields that have a default value and that are null with the default value. When set to true, the default value is used, otherwise null is used. Applicable for JSON Converter.
Value Converter Reference Subject Name Strategy: Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.
Value Converter Schemas Enable: Include schemas within each of the serialized values. Input messages must contain schema and payload fields and may not contain additional fields. For plain JSON data, set this to false. Applicable for JSON Converter.
Errors Tolerance: Use this property if you would like to configure the connector’s error handling behavior. WARNING: This property should be used with CAUTION for SOURCE CONNECTORS as it may lead to data loss. If you set this property to ‘all’, the connector will not fail on errant records, but will instead log them (and send to DLQ for Sink Connectors) and continue processing. If you set this property to ‘none’, the connector task will fail on errant records.
Value Converter Ignore Default For Nullables: When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR Converters.
Value Converter Decimal Format: Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals: BASE64 to serialize DECIMAL logical types as base64 encoded binary data and NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.
Value Converter Connect Meta Data: Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.
Value Converter Value Subject Name Strategy: Determines how to construct the subject name under which the value schema is registered with Schema Registry.
Key Converter Key Subject Name Strategy: How to construct the subject name for key schema registration.
Auto-restart policy
Enable Connector Auto-restart: Control the auto-restart behavior of the connector and its task in the event of user-actionable errors. Defaults to true, enabling the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.
Consumer configuration
Max poll interval (ms): Set the maximum delay between subsequent consume requests to Kafka. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 300,000 milliseconds (5 minutes).
Max poll records: Set the maximum number of records to consume from Kafka in a single request. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 500 records.
Transforms
Single Message Transforms: To add a new SMT, see Add transforms. For more information about unsupported SMTs, see Unsupported transformations.
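For reference, the advanced options above correspond to the connector configuration properties documented in the Configuration Properties reference on this page, and they can also be set in the CLI workflow described later. A minimal sketch of such a configuration fragment, with illustrative values, might look like the following:

"jms.forward.kafka.key": "true",
"jms.forward.kafka.metadata": "true",
"jms.forward.kafka.headers": "true",
"jms.message.format": "json",
"character.encoding": "UTF-8",
"jms.producer.time.to.live.ms": "60000",
"jms.producer.delivery.mode": "persistent",
"max.poll.interval.ms": "300000",
"max.poll.records": "500"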
For all property values and definitions, see Configuration Properties.
Click Continue.
Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.
To change the number of recommended tasks, enter the number of tasks for the connector to use in the Tasks field.
Click Continue.
Verify the connection details.
Click Launch.
The status for the connector should go from Provisioning to Running. It may take a few minutes.
Step 5: Check the IBM MQ destination
After the connector is running, verify that messages from your Kafka topics are populating the configured IBM MQ destination (queue or topic).
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Using the Confluent CLI
Complete the following steps to set up and run the connector using the Confluent CLI.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: List the connector configuration properties
Enter the following command to show the connector configuration properties:
confluent connect plugin describe <connector-plugin-name>
The command output shows the required and optional configuration properties.
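For example, the plugin name for this connector is IbmMqSink (the same value used for the connector.class property in the next step):

confluent connect plugin describe IbmMqSink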
Step 3: Create the connector configuration file
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
"name": "IbmMqSinkConnector_0",
"config": {
"topics": "pageviews",
"connector.class": "IbmMqSink",
"name": "IbmMqSinkConnector_0",
"input.data.format": "AVRO",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "<my-kafka-api-key>",
"kafka.api.secret": "<my-kafka-api-secret>",
"ibm.mq.host": "ibm-mq-server.example.com",
"ibm.mq.port": "1414",
"ibm.mq.username": "<ibm-mq-username>",
"ibm.mq.password": "<ibm-mq-password>",
"ibm.mq.queue.manager": "<queue-manager-name>",
"ibm.mq.channel": "<channel-name>",
"jms.destination.name": "<destination-queue-name>",
"jms.destination.type": "queue",
"tasks.max": "1"
}
}
Note the following property definitions:
"name": Sets a name for your new connector."connector.class": Identifies the connector plugin name."topics": Identifies the topic name or a comma-separated list of topic names."input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, BYTES, or STRING. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.
"kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNTorKAFKA_API_KEY(the default). To use an API key and secret, specify the configuration propertieskafka.api.keyandkafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list

     Id   | Resource ID | Name |    Description
+---------+-------------+------+--------------------+
   123456 | sa-l1r23m   | sa-1 | Service account 1
   789101 | sa-l4d56p   | sa-2 | Service account 2
"ibm.mq.host": The IBM MQ broker hostname."ibm.mq.port": The IBM MQ broker port (default is typically 1414)."ibm.mq.username"and"ibm.mq.password": Credentials for connecting to IBM MQ."ibm.mq.queue.manager": The name of the IBM MQ queue manager."ibm.mq.channel": The channel for client connections."jms.destination.name": The name of the JMS destination (queue or topic) that messages are written to."jms.destination.type": The type of JMS destination (queueortopic)."tasks.max": Maximum tasks for the connector to use. More tasks may improve performance.
Note
To enable CSFLE or CSPE for data encryption, specify the following properties:
csfle.enabled: Flag to indicate whether the connector honors CSFLE or CSPE rules.
sr.service.account.id: A Service Account to access the Schema Registry and associated encryption rules or keys with that schema.
csfle.onFailure: Configures the connector behavior (ERROR or NONE) on data decryption failure. If set to ERROR, the connector fails and writes the encrypted data to the DLQ. If set to NONE, the connector writes the encrypted data to the target system without decryption.
When using CSFLE or CSPE with connectors that route failed messages to a Dead Letter Queue (DLQ), be aware that data sent to the DLQ is written in unencrypted plaintext. This poses a significant security risk as sensitive data that should be encrypted may be exposed in the DLQ.
Do not use DLQ with CSFLE or CSPE in the current version. If you need error handling for CSFLE- or CSPE-enabled data, use alternative approaches such as:
Setting the connector behavior to ERROR to throw exceptions instead of routing to the DLQ
Implementing custom error handling in your applications
Using NONE to pass encrypted data through without decryption
For more information on CSFLE or CSPE setup, see Manage encryption for connectors.
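For reference, a minimal sketch of these properties added to the connector configuration JSON might look like the following (the service account resource ID is a placeholder):

"csfle.enabled": "true",
"sr.service.account.id": "<service-account-resource-ID>",
"csfle.onFailure": "ERROR"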
TLS/SSL Configuration: For TLS connections, you must supply the keystore and/or truststore file contents and the file passwords when creating the JSON connector configuration. The truststore and keystore files are binary files. For the keystore and truststore properties, you must do the following:
Encode the truststore or keystore file in base64.
Take the encoded string and add the data:text/plain;base64, prefix.
Use the entire string as the property entry. For example:
"tls.keystore.file": "data:text/plain;base64,/u3+7QAAAAIAAAACAAAAAQAGY2xpZ...omitted...==", "tls.keystore.password": "<keystore-password>", "tls.truststore.file": "data:text/plain;base64,/u3+7QAAAAIAAAACAAAAAQAGY2xpZ...omitted...==", "tls.truststore.password": "<truststore-password>"
Single Message Transforms: For details about adding SMTs using the CLI, see Single Message Transforms (SMT).
For limitations on single message transforms with the IBM MQ Sink connector, see IBM MQ Sink Connector limitations.
For all property values and definitions, see Configuration Properties.
Step 4: Load the configuration file and create the connector
Enter the following command to load the configuration and start the connector:
confluent connect cluster create --config-file <file-name>.json
For example:
confluent connect cluster create --config-file ibm-mq-sink-config.json
Example output:
Created connector IbmMqSinkConnector_0 lcc-ix4dl
Step 5: Check the connector status
Enter the following command to check the connector status:
confluent connect cluster list
Example output:
ID | Name | Status | Type
+-----------+--------------------------+---------+------+
lcc-ix4dl | IbmMqSinkConnector_0 | RUNNING | sink
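To inspect the new connector in more detail, you can describe it by ID; a sketch using the ID from the example output above:

confluent connect cluster describe lcc-ix4dl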
Step 6: Check the IBM MQ destination
After the connector is running, verify that messages from your Kafka topics are populating the configured IBM MQ destination (queue or topic).
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Configuration Properties
Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.
How should we connect to your data?
name: Sets a name for your connector.
Type: string
Valid Values: A string at most 64 characters long
Importance: high
Which topics do you want to get data from?
topics.regex: A regular expression that matches the names of the topics to consume from. This is useful when you want to consume from multiple topics that match a certain pattern without having to list them all individually.
Type: string
Importance: low
topics: Identifies the topic name or a comma-separated list of topic names.
Type: list
Importance: high
errors.deadletterqueue.topic.name: The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. Defaults to ‘dlq-${connector}’ if not set. The DLQ topic will be created automatically if it does not exist. You can provide ${connector} in the value to use it as a placeholder for the logical cluster ID.
Type: string
Default: dlq-${connector}
Importance: low
Schema Config
schema.context.name: Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.
Type: string
Default: default
Importance: medium
Input messages
input.data.format: Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, BYTES, or STRING. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.
Type: string
Default: JSON
Importance: high
Kafka Cluster credentials
kafka.auth.mode: Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode, whenever possible.
Type: string
Valid Values: SERVICE_ACCOUNT, KAFKA_API_KEY
Importance: high
kafka.api.key: Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.
Type: password
Importance: high
kafka.service.account.id: The Service Account that will be used to generate the API keys to communicate with the Kafka Cluster.
Type: string
Importance: high
kafka.api.secret: Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.
Type: password
Importance: high
IBM MQ Connection
mq.username: The username to use when connecting to IBM MQ.
Type: string
Importance: high
mq.password: The password to use when connecting to IBM MQ.
Type: password
Importance: high
mq.hostname: IBM MQ broker host.
Type: string
Importance: high
mq.port: IBM MQ broker port.
Type: int
Default: 1414
Importance: high
mq.queue.manager: The name of the queue manager.
Type: string
Importance: high
mq.channel: The channel for client connections.
Type: string
Default: “”
Importance: high
jms.destination.name: The name of the JMS destination that messages are written to.
Type: string
Importance: high
jms.destination.type: The type of JMS destination.
Type: string
Default: queue
Valid Values: queue, topic
Importance: high
IBM MQ Secure Connection
mq.ssl.cipher.suite: The CipherSuite for SSL connections.
Type: string
Default: “”
Importance: high
mq.ssl.fips.required: Whether SSL FIPS is required.
Type: boolean
Importance: high
mq.ssl.peer.name: Sets a distinguished name (DN) pattern. If sslCipherSuite is set, this pattern can ensure that the correct queue manager is used. The connection attempt fails if the distinguished name provided by the queue manager does not match this pattern.
Type: string
Default: “”
Importance: high
mq.tls.protocol: The TLS protocol version for secure connections to IBM MQ.
Type: string
Default: TLSv1.2
Valid Values: TLSv1.2, TLSv1.3
Importance: medium
mq.tls.keystore.type: The file format of the key store file. This is required only when using secure TLS communication with IBM MQ.
Type: string
Default: JKS
Importance: medium
mq.tls.keystore.location: The key store file. This is required only when using secure TLS communication with IBM MQ.
Type: password
Importance: high
mq.tls.keystore.password: The store password for the key store file. This is optional for a client and only needed if TLS Keystore file is configured.
Type: password
Importance: high
mq.tls.key.password: The password of the private key used for secure TLS communication with IBM MQ.
Type: password
Importance: high
mq.tls.truststore.type: The file format of the trust store file. This is required when using TLS and secure communication with IBM MQ.
Type: string
Default: JKS
Importance: medium
mq.tls.truststore.location: The trust store file. This is required only when using secure TLS communication with IBM MQ.
Type: password
Importance: high
mq.tls.truststore.password: The password for the trust store file.
Type: password
Importance: high
mq.tls.keymanager.algorithm: The algorithm used by the key manager factory for SSL connections. This is required only when using secure TLS communication with IBM MQ.
Type: string
Default: SunX509
Valid Values: PKIX, SunX509
Importance: low
mq.tls.trustmanager.algorithm: The algorithm used by the trust manager factory for SSL connections. This is required only when using secure TLS communication with IBM MQ.
Type: string
Default: PKIX
Valid Values: PKIX, SunX509
Importance: low
mq.tls.secure.random.implementation: The SecureRandom PRNG implementation to use for SSL cryptography operations.
Type: string
Valid Values: NativePRNG, NativePRNGBlocking, NativePRNGNonBlocking, PKCS11, SHA1PRNG, Windows-PRNG
Importance: low
JMS details
jms.forward.kafka.key: Convert the Kafka record key to a string and forward it on the JMS Message property JMSCorrelationID.
Type: boolean
Default: false
Importance: low
jms.forward.kafka.metadata: Forward the Kafka record metadata on the JMS Message properties. This includes the record topic, partition, and offset.
Type: boolean
Default: false
Importance: low
jms.forward.kafka.headers: Add the Kafka record headers to the JMS Message as string properties.
Type: boolean
Default: false
Importance: low
jms.connection.max.retries: Connecting to a JMS broker may fail for multiple reasons. This determines the maximum number of times a task attempts to connect to the JMS broker.
Type: int
Default: 5
Valid Values: [0,…,25]
Importance: medium
jms.connection.backoff.ms: The amount of time, in milliseconds, to wait before attempting to reconnect to the JMS broker following a connection failure.
Type: long
Default: 2000 (2 seconds)
Valid Values: [0,…,120000]
Importance: medium
JMS formatter
jms.message.format: The format of JMS message values.
Type: string
Default: string
Valid Values: bytes, json, string
Importance: high
character.encoding: The character encoding to use while writing the message.
Type: string
Default: UTF-8
Valid Values: UTF-8, ISO-8859-1, US-ASCII, UTF-16, UTF-16BE, UTF-16LE
Importance: low
jms.message.format.schemas.enable: Include schemas within each of the serialized values and keys.
Type: boolean
Default: false
Importance: medium
JMS MessageProducer
jms.producer.time.to.live.ms: Time to live (TTL) in milliseconds for messages sent to the JMS broker.
Type: long
Default: 0
Importance: low
jms.producer.delivery.mode: The PERSISTENT delivery mode (the default) instructs the JMS provider to take extra care to ensure that a message is not lost in transit in case of a JMS provider failure. The NON_PERSISTENT delivery mode does not require the JMS provider to store the message or otherwise guarantee that it is not lost if the provider fails.
Type: string
Default: persistent
Valid Values: non_persistent, persistent
Importance: low
jms.producer.disable.message.timestamp: Sets whether message timestamps are disabled in IBM MQ.
Type: boolean
Default: false
Importance: low
Consumer configuration
max.poll.interval.ms: The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).
Type: long
Default: 300000 (5 minutes)
Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
Importance: low
max.poll.records: The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.
Type: long
Default: 500
Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
Importance: low
Number of tasks for this connector
tasks.max: Maximum number of tasks for the connector.
Type: int
Valid Values: [1,…]
Importance: high
Additional Configs
consumer.override.auto.offset.reset: Defines the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the “earliest” offset (the default) or the “latest” offset. You can also select “none” if you would rather set the initial offset yourself and you are willing to handle out of range errors manually. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#auto-offset-reset
Type: string
Importance: low
consumer.override.isolation.level: Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#isolation-level
Type: string
Importance: low
header.converter: The converter class for the headers. This is used to serialize and deserialize the headers of the messages.
Type: string
Importance: low
value.converter.allow.optional.map.keys: Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.
Type: boolean
Importance: low
value.converter.auto.register.schemas: Specify if the Serializer should attempt to register the Schema.
Type: boolean
Importance: low
value.converter.connect.meta.data: Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.
Type: boolean
Importance: low
value.converter.enhanced.avro.schema.support: Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.
Type: boolean
Importance: low
value.converter.enhanced.protobuf.schema.support: Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.flatten.unions: Whether to flatten unions (oneofs). Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.generate.index.for.unions: Whether to generate an index suffix for unions. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.generate.struct.for.nulls: Whether to generate a struct variable for null values. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.int.for.enums: Whether to represent enums as integers. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.latest.compatibility.strict: Verify latest subject version is backward compatible when use.latest.version is true.
Type: boolean
Importance: low
value.converter.object.additional.properties: Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.
Type: boolean
Importance: low
value.converter.optional.for.nullables: Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.optional.for.proto2: Whether proto2 optionals are supported. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.scrub.invalid.names: Whether to scrub invalid names by replacing invalid characters with valid characters. Applicable for Avro and Protobuf Converters.
Type: boolean
Importance: low
value.converter.use.latest.version: Use latest version of schema in subject for serialization when auto.register.schemas is false.
Type: boolean
Importance: low
value.converter.use.optional.for.nonrequired: Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.
Type: boolean
Importance: low
value.converter.wrapper.for.nullables: Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.wrapper.for.raw.primitives: Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.
Type: boolean
Importance: low
errors.tolerance: Use this property if you would like to configure the connector’s error handling behavior. WARNING: This property should be used with CAUTION for SOURCE CONNECTORS as it may lead to data loss. If you set this property to ‘all’, the connector will not fail on errant records, but will instead log them (and send to DLQ for Sink Connectors) and continue processing. If you set this property to ‘none’, the connector task will fail on errant records.
Type: string
Default: all
Importance: low
key.converter.key.subject.name.strategy: How to construct the subject name for key schema registration.
Type: string
Default: TopicNameStrategy
Importance: low
value.converter.decimal.format: Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:
BASE64 to serialize DECIMAL logical types as base64 encoded binary data and
NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.
Type: string
Default: BASE64
Importance: low
value.converter.flatten.singleton.unions: Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.
Type: boolean
Default: false
Importance: low
value.converter.ignore.default.for.nullables: When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR Converters.
Type: boolean
Default: false
Importance: low
value.converter.reference.subject.name.strategy: Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.
Type: string
Default: DefaultReferenceSubjectNameStrategy
Importance: low
value.converter.replace.null.with.default: Whether to replace fields that have a default value and that are null with the default value. When set to true, the default value is used, otherwise null is used. Applicable for JSON Converter.
Type: boolean
Default: true
Importance: low
value.converter.schemas.enable: Include schemas within each of the serialized values. Input messages must contain schema and payload fields and may not contain additional fields. For plain JSON data, set this to false. Applicable for JSON Converter.
Type: boolean
Default: false
Importance: low
value.converter.value.subject.name.strategy: Determines how to construct the subject name under which the value schema is registered with Schema Registry.
Type: string
Default: TopicNameStrategy
Importance: low
Auto-restart policy
auto.restart.on.user.error: Enable connector to automatically restart on user-actionable errors.
Type: boolean
Default: true
Importance: medium
Next Steps
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
