MQTT Sink Connector for Confluent Cloud

The fully-managed Kafka Connect MQTT Sink connector streams data from Apache Kafka® to an MQTT broker.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Features

The MQTT Sink connector provides the following features:

  • At least once delivery: The connector guarantees that records are delivered at least once to the MQTT topic.
  • Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.
  • Schemas: The connector supports Avro, JSON Schema, and Protobuf input data formats. Schema Registry must be enabled to use a Schema Registry-based format. Note that the connector supports only bytes and string schemas; it does not support structs. To send struct data, store it as bytes and select BYTES as the input message format.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Refer to Cloud connector limitations for additional information.

Quick Start

Use this quick start to get up and running with the Confluent Cloud MQTT sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to an MQTT broker.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.
    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
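
For reference, you can create an API key scoped to a specific cluster with the Confluent CLI. A minimal sketch, using a hypothetical cluster resource ID lkc-123456:

confluent api-key create --resource lkc-123456

The key and secret are displayed once when created, so store the secret safely; it cannot be retrieved again later.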

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the MQTT Sink connector icon.

Step 4: Set up the connection.

Complete the following and click Continue.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.
  1. Select an Input message format (data coming from the Kafka topic): AVRO, BYTES, JSON, JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format.

  2. Select the way you want to provide Kafka Cluster credentials. You can either select a service account resource ID or you can enter an API key and secret (or generate these in the Cloud Console).

  3. Add the MQTT broker connection details.

    • Add the list of server URIs: The MQTT broker URI. Must be in the format <PROTOCOL>://<URI>, for example tcp://192.0.0.1:1881. The supported protocols are TCP, SSL, WS, and WSS. For TLS connections, you must additionally provide credentials and upload Keystore and Truststore files.
    • If the MQTT broker does not support anonymous mode, you must enter a username and password that the connector uses to attach to the broker.

    Note

    The MQTT topic name where data lands is the same as the Kafka topic name.

  4. For secure connections, add the required Keystore and Truststore files and passwords. For Keystores, you must also add the password for the stored client certificate.

  5. Add the session connection details:

    • Clean Session?: Sets whether the client and server should remember their state after restarts and reconnects. For unreceived messages to be delivered when the client and server reconnect, the MQTT Quality of Service (QOS) property must be set to 1 or 2. For more information, see the Quality of Service section of the MQTT man page.
    • Connection Timeout: The amount of time to wait in seconds when connecting to the MQTT broker. The default is 30 seconds.
    • Connection Keepalive: Defines the maximum time interval between messages sent or received (in seconds). In the absence of a data-related message during the time period entered, the client sends a very small ping message for the broker to acknowledge. The default value is 60 seconds.
    • Max Retry Time: The maximum time in milliseconds (ms) the connector spends backing off and retrying a connection to the MQTT broker. The default value is 30000 ms (30 seconds).
    • Retain Messages: When set to true, the last message sent is retained on the MQTT broker for future subscribers. The default value is true.
    • MQTT QOS: The default value is 0, which means the message is delivered once, with no confirmation. The QOS property must be set to 1 or 2 for unreceived messages to be delivered when the client and server reconnect. For more information, see the Quality of Service section of the MQTT man page.
  6. Enter the number of tasks in use by the connector. The connector supports multiple tasks. More tasks may improve performance.

  7. Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.

See the MQTT Sink configuration properties for additional property values and definitions.
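
The session settings in step 5 map to connector configuration properties documented in the Configuration Properties section below. As a minimal sketch, those fields expressed in a JSON connector configuration (the values shown are the documented defaults) might look like this:

"mqtt.clean.session.enabled": "true",
"mqtt.connect.timeout.seconds": "30",
"mqtt.keepalive.interval.seconds": "60",
"max.retry.time.ms": "30000",
"mqtt.retain.enabled": "true",
"mqtt.qos": "0"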

Step 5: Launch the connector.

Verify the connection details and click Launch.

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running.

Step 7: Check the results on the broker.

Verify that new records are being added to the MQTT broker.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

  • Make sure you have all your prerequisites completed.
  • The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.

Step 1: List the available connectors.

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

confluent connect plugin describe <connector-catalog-name>

For example:

confluent connect plugin describe MqttSink

Example output:

Following are the required configs:
connector.class: MqttSink
name
input.data.format
kafka.auth.mode
kafka.api.key
kafka.api.secret
mqtt.server.uri
tasks.max
topics

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "MqttSink",
  "name": "MqttSink_0",
  "input.data.format": "AVRO",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "<my-kafka-api-key>",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "mqtt.server.uri" : ""tcp://192.0.0.1:1881",
  "topics" : "kafka_topic_0",
  "tasks.max" : "1"
}

Note the following property definitions:

  • "name": Sets a name for your new connector.
  • "connector.class": Identifies the connector plugin name.
  • "input.data.format": Supports AVRO, BYTES, JSON, JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY. To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "mqtt.server.uri": The MQTT broker URI. Must be in the format <PROTOCOL>//:URI. The supported protocols are TCP, SSL, WS, and WSS. For TLS connections you must additionally provide credentials and upload Keystore and Truststore files. See the MQTT Source configuration properties for these property values and definitions.

    Note

    If the MQTT broker does not support anonymous mode, you must add the following two additional properties:

    • "mqtt.username": "<mqtt_broker_username>"
    • "mqtt.password": "<user_password>"
  • "topics": The Kafka topic name (or comma-separated topic names) where the data for the MQTT broker is located.

  • "tasks.max": Enter the number of tasks in use by the connector. The connector supports multiple tasks. More tasks may improve performance.

Note

The MQTT topic name where data lands is the same as the Kafka topic name.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.

See the MQTT Sink configuration properties for additional property values and definitions.
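
Combining the options above, the following is a hedged configuration sketch that authenticates with a service account instead of an API key and connects over TLS to a broker that requires credentials. The resource ID, URI, and placeholder credentials are hypothetical:

{
  "connector.class": "MqttSink",
  "name": "MqttSink_1",
  "input.data.format": "JSON",
  "kafka.auth.mode": "SERVICE_ACCOUNT",
  "kafka.service.account.id": "sa-l1r23m",
  "mqtt.server.uri": "ssl://mqtt.example.com:8883",
  "mqtt.username": "<mqtt_broker_username>",
  "mqtt.password": "<user_password>",
  "topics": "kafka_topic_0",
  "tasks.max": "1"
}

For an SSL URI, you must also supply the mqtt.ssl.* Keystore and Truststore properties described under Configuration Properties.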

Step 4: Load the configuration file and create the connector.

Enter the following command to load the configuration and start the connector:

confluent connect create --config <file-name>.json

For example:

confluent connect create --config mqtt-server-sink-config.json

Example output:

Created connector MqttSink_0 lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

confluent connect list

Example output:

ID          |       Name   | Status  | Type
+-----------+--------------+---------+------+
lcc-ix4dl   | MqttSink_0   | RUNNING | sink
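
To inspect a single connector rather than the full list, you can describe it by ID, assuming your CLI version provides the connect describe subcommand. For example, using the connector ID shown above:

confluent connect describe lcc-ix4dl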

Step 6: Check the results on the broker.

Verify that new records are being added to the MQTT broker.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

Configuration Properties

The following connector configuration properties can be used with the MQTT Sink connector for Confluent Cloud.

mqtt.server.uri

The URI of the MQTT broker. This must be given in the format <PROTOCOL>://<URI>. The supported protocols are TCP, SSL, WS, and WSS. For a connection that uses TLS, you must provide the required key stores and trust stores.

  • Type: List
  • Importance: High
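
Because this property is a list, you can supply multiple broker URIs. A hedged example, assuming the usual comma-separated list format and hypothetical addresses:

"mqtt.server.uri": "tcp://192.0.0.1:1881,ssl://192.0.0.2:8883"
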
mqtt.username

Username to connect with, or blank to connect without a username.

  • Type: string
  • Default: “”
  • Importance: High
mqtt.password

Password to connect with, or blank to connect without a password.

  • Type: password
  • Default: [hidden]
  • Importance: High
mqtt.ssl.trust.store.file

The Java TrustStore file containing the certificates required to validate the SSL connection to the server. When using this property in the CLI, you must encode the binary truststore file in base64, take the encoded string, add the data:text/plain;base64 prefix, and then specify the entire string as the property entry. For example: "mqtt.ssl.trust.store.file" : "data:text/plain;base64,/u3+7QAAAAIAAAACAAAAAQAGY2xpZ...==".

  • Type: string
  • Default: “”
  • Importance: High
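
The following shell sketch shows one way to produce the encoded value described above. The truststore.jks file name is a hypothetical placeholder, and base64 -w 0 assumes GNU coreutils (the macOS base64 emits a single line by default, so omit the flag there):

# Base64-encode the binary truststore as a single line, then
# prepend the required data:text/plain;base64 prefix.
ENCODED=$(base64 -w 0 truststore.jks)
echo "\"mqtt.ssl.trust.store.file\": \"data:text/plain;base64,${ENCODED}\""
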
mqtt.ssl.trust.store.password

Password used to open the Java TrustStore file.

  • Type: password
  • Default: [hidden]
  • Importance: High
mqtt.ssl.key.store.file

The Java KeyStore file containing the private key to use for authenticating with the server. When using this property in the CLI, you must encode the binary keystore file in base64, take the encoded string, add the data:text/plain;base64 prefix, and then specify the entire string as the property entry. For example: "mqtt.ssl.key.store.file" : "data:text/plain;base64,/u3+7QAAAAIAAAACAAAAAQAGY2xpZ...==".

  • Type: string
  • Default: “”
  • Importance: High
mqtt.ssl.key.store.password

Password used to open the Java KeyStore file.

  • Type: password
  • Default: [hidden]
  • Importance: High
mqtt.ssl.key.password

Password for the client certificate contained in the Java KeyStore.

  • Type: password
  • Default: [hidden]
  • Importance: High
topics

The Apache Kafka® topic (or comma-separated topics) from which the connector gets data.

  • Type: String
  • Importance: Medium
mqtt.qos

The MQTT Quality of Service (QOS) level. The default value is 0, which means the Kafka message is delivered once, with no confirmation. The QOS property must be set to 1 or 2 for undelivered messages to be delivered when the client and server reconnect. For more information, see the Quality of Service section of the MQTT man page.

  • Type: Int
  • Importance: Medium
  • Default Value: 0
  • Valid Values: [0,…,2]
max.retry.time.ms

The maximum time in milliseconds (ms) the connector will spend backing off and retrying a connection to the MQTT broker.

  • Type: long
  • Importance: Medium
  • Default: 30000
  • Valid Values: [10000,…,9223372036854775807]
mqtt.retain.enabled

When set to true, the last message sent is retained on the MQTT broker for future subscribers.

  • Type: Boolean
  • Importance: Low
  • Default Value: true
mqtt.clean.session.enabled

Sets whether the client and server should remember state across restarts and reconnects. Note that for unreceived messages to be received after reconnecting, you should set the QOS to 1 or above.

  • Type: Boolean
  • Importance: Low
  • Default Value: true
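
The two preceding properties interact: per the notes above, a hedged pairing for messages to survive a client reconnect is to raise the QOS and keep session state:

"mqtt.qos": "1",
"mqtt.clean.session.enabled": "false"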
mqtt.connect.timeout.seconds

Sets the connection timeout value in seconds.

  • Type: Int
  • Importance: Low
  • Default Value: 30
  • Valid Values: [1,…]
mqtt.keepalive.interval.seconds

This value, measured in seconds, defines the maximum time interval between messages sent or received. In the absence of a data-related message during the time period, the client sends a very small ping message for the server to acknowledge.

  • Type: Int
  • Importance: Low
  • Default Value: 60
  • Valid Values: [1,…]

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.