Solace Source Connector for Confluent Cloud

The fully-managed Solace Source connector for Confluent Cloud moves messages from a Solace PubSub+ event broker to an Apache Kafka® topic.

This quick start is for the fully-managed Confluent Cloud connector. If you are installing the connector locally for Confluent Platform, see Solace Source Connector for Confluent Platform.

If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.

Features

The Solace Source connector includes the following features:

  • At least once delivery: The connector guarantees that records are delivered at least once to the Kafka topic, with potential duplicates if the connector restarts.

  • Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance when processing multiple messages.

  • JMS message type support: The connector handles TextMessage and BytesMessage JMS message types from the Solace broker.

  • Message metadata preservation: The connector preserves message metadata including MessageID, timestamp, delivery mode, and message properties using standardized io.confluent.connect.jms schemas.

  • Output data formats: The connector supports Avro, JSON Schema, Protobuf, and JSON (schemaless) output data formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
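A connector with this configuration can also be created programmatically through the Confluent Cloud Connect REST API. The following is a minimal sketch, assuming a Cloud API key with the appropriate permissions and placeholder environment (env-abc123) and Kafka cluster (lkc-abc123) IDs; the JSON file referenced here follows the same shape as the example configuration shown later on this page:

curl -s -X POST \
  -u "<cloud-api-key>:<cloud-api-secret>" \
  -H "Content-Type: application/json" \
  -d @solace-source-config.json \
  https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-abc123/connectors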

Limitations

Be sure to review the connector limitations before running the connector in production.

Quick Start

Use this quick start to get up and running with the Confluent Cloud Solace Source connector. The quick start provides the basics of selecting the connector and configuring it to stream events from Solace to Kafka.

Prerequisites

  • Kafka cluster credentials. The following lists the different ways you can provide credentials.

    • Enter an existing service account resource ID.

    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.

    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
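For example, a minimal sketch of creating an API key scoped to a Kafka cluster with the Confluent CLI (the resource ID lkc-abc123 and service account ID sa-abc123 are placeholders):

confluent api-key create --resource lkc-abc123 --service-account sa-abc123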

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Solace Source connector card.

Solace Source Connector Card

Step 4: Enter the connector details

At the Add Solace Source Connector screen, complete the following:

Note

  • Ensure you have all your prerequisites completed.

  • An asterisk ( * ) designates a required entry.

Select the topic you want to send data to from the Topics list.

To create a new topic, click +Add new topic.

  1. Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:

    • My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.

    • Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.

    • Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.

    Note

    Freight clusters support only service accounts for Kafka authentication.

  2. Click Continue.

  1. Configure the authentication properties:

    • Solace Host: Sets the IP address or hostname and optional port of the message broker to connect to. The default port is 55555. Specify multiple hosts as a comma-separated list.

    • Solace Username: Sets the username for connecting to Solace.

    • Solace Password: Sets the password for connecting to Solace.

    • Message VPN: Sets the Message VPN to use when connecting to the Solace message broker. The default is default.

    • SSL Keystore: Specifies the SSL keystore for a Solace SSL-enabled VPN.

    • SSL Keystore Password: Specifies the keystore password for a Solace SSL-enabled VPN.

    • SSL Truststore: Specifies the truststore containing the server CA certificate for a Solace SSL-enabled VPN.

    • SSL Truststore Password: Specifies the truststore password for a Solace SSL-enabled VPN.

    • Validate SSL Certificate: Specifies whether the connector validates SSL certificates.

  2. Click Continue.

  • Dynamic Durables: Enables creation of queues or topic endpoints on the message broker. Set to true if the queue or topic endpoint does not already exist.

  • JMS Destination Name: Sets the name of the JMS destination (queue or topic) to read messages from.

  • JMS Destination Type: Sets the type of JMS destination. Valid entries are queue or topic.
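Expressed as connector configuration properties, these settings correspond to a fragment such as the following (the destination name my-solace-queue is a placeholder):

    "solace.dynamic.durables": "true",
    "jms.destination.name": "my-solace-queue",
    "jms.destination.type": "queue"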

Output messages

  • Select output record value format: Sets the output Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON.

    Note

    You must have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.

Show advanced configurations
  • Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.

  • Client Description: Sets the application description on the message broker for the data connection.

  • Batch Size: Sets the maximum number of records a connector task can read from the Solace broker before writing to Kafka. The task holds these records until they are acknowledged in Kafka, so this setting may affect memory usage.

  • Unacknowledged Messages Limit: Sets the maximum number of messages per task that can be received from Solace brokers and produced to Kafka before the task acknowledges the Solace session and messages. If the task fails and restarts, this is the maximum number of Solace messages the task may duplicate in Kafka.

  • Maximum Poll Duration (ms): Sets the maximum amount of time each task can spend building a batch. The batch is closed and sent to Kafka if not enough messages are read during the allotted time.

  • Character Encoding: Sets the character encoding to use when receiving the message.

  • Durable Subscription: Specifies whether the connector tasks’ subscription to a Solace topic is durable. Durable subscriptions require a subscription name set via jms.subscription.name.

  • Subscription Name: Sets the name of the Solace subscription. Supported only in durable subscriptions (jms.subscription.durable = true) and applicable only to Solace topics.

  • Message Selector: Sets the message selector to apply to messages in the destination.
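As a sketch, several of the advanced settings above map to connector configuration properties as in the following fragment (all values are illustrative only, not tuning recommendations; the subscription name is a placeholder):

    "batch.size": "512",
    "max.poll.duration": "30000",
    "jms.subscription.durable": "true",
    "jms.subscription.name": "my-durable-subscription",
    "jms.message.selector": "JMSPriority > 5"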

Auto-restart policy

  • Enable Connector Auto-restart: Controls the auto-restart behavior of the connector and its tasks in the event of user-actionable errors. Defaults to true, which enables the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.

Additional Configs

  • Value Converter Decimal Format: Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals: BASE64 to serialize DECIMAL logical types as base64 encoded binary data and NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Key Converter Schema ID Serializer: The class name of the schema ID serializer for keys. This is used to serialize schema IDs in the message headers.

  • Value Converter Reference Subject Name Strategy: Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Value Converter Connect Meta Data: Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Value Converter Value Subject Name Strategy: Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Key Converter Key Subject Name Strategy: How to construct the subject name for key schema registration.

  • Value Converter Schema ID Serializer: The class name of the schema ID serializer for values. This is used to serialize schema IDs in the message headers.

Transforms

For all property values and definitions, see Configuration Properties.

  1. Click Continue.

Based on the number of topic partitions you select, you receive a recommended number of tasks.

  1. To change the number of recommended tasks, enter the number of tasks for the connector to use in the Tasks field.

  2. Click Continue.

  1. Verify the connection details.

  2. Click Launch.

    The status for the connector changes from Provisioning to Running.

Step 5: Check the Kafka topic

After the connector is running, verify that messages are populating your Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "name": "SolaceSource_0",
  "config": {
    "connector.class": "SolaceSource",
    "name": "SolaceSource_0",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "<my-kafka-api-key>",
    "kafka.api.secret": "<my-kafka-api-secret>",
    "kafka.topic": "<topic-name>",
    "output.data.format": "AVRO",
    "solace.host": "<host>",
    "solace.username": "<username>",
    "solace.password": "<password>",
    "jms.destination.name": "<destination-name>",
    "jms.destination.type": "queue",
    "tasks.max": "1"
  }
}

Note the following property definitions:

  • "name": Sets a name for your new connector.

  • "connector.class": Identifies the connector plugin name.

  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "kafka.topic": The Kafka topic name where you want data sent.

  • "output.data.format": Sets the output Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.

  • "solace.host": The IP address or hostname and port (optional) of the message broker to connect to. If no port is specified, the default port number is 55555. To specify a prioritized list of hosts, list each host separated by a comma.

  • "solace.username" and "solace.password": The credentials used to authenticate with the Solace message broker.

  • "jms.destination.name": The name of the JMS destination (queue or topic) to read messages from.

  • "jms.destination.type": The type of JMS destination. Valid entries are queue or topic.

  • "tasks.max": Maximum tasks for the connector to use. More tasks may improve performance.

For information about adding SMTs using the CLI, see Single Message Transforms (SMT).
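For example, a hypothetical transform that stamps every record value with a static origin field could be added to the connector configuration file as follows (the alias InsertSource and the field name origin are arbitrary):

    "transforms": "InsertSource",
    "transforms.InsertSource.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.InsertSource.static.field": "origin",
    "transforms.InsertSource.static.value": "solace"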

See Configuration Properties for all property values and descriptions.

Step 4: Load the configuration file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file solace-source-config.json

Example output:

Created connector SolaceSource_0 lcc-ix4dl

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID          |       Name         | Status  |  Type
+-----------+--------------------+---------+--------+
lcc-ix4dl   | SolaceSource_0     | RUNNING | source

Step 6: Check the Kafka topic

After the connector is running, verify that messages are populating your Kafka topic.
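For example, you can consume records from the topic with the Confluent CLI (replace <topic-name> with the topic you configured):

confluent kafka topic consume <topic-name> --from-beginning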

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string

  • Valid Values: A string at most 64 characters long

  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode whenever possible.

  • Type: string

  • Valid Values: SERVICE_ACCOUNT, KAFKA_API_KEY

  • Importance: high

kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password

  • Importance: high

kafka.service.account.id

The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string

  • Importance: high

kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password

  • Importance: high

Which topic do you want to send data to?

kafka.topic

Identifies the topic name to write the data to.

  • Type: string

  • Importance: high

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string

  • Default: default

  • Importance: medium

Output messages

output.data.format

Sets the output Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.

  • Type: string

  • Default: JSON_SR

  • Importance: high

Solace Connection

solace.host

IP or hostname and port (optional) of the message broker to connect to. If a port is not specified, the default port number is 55555. To specify a prioritized list of hosts, list each host separated by a comma.

  • Type: string

  • Importance: high
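For example, a prioritized two-host value might look like the following (hostnames are placeholders; the second host falls back to the default port):

    "solace.host": "solace1.example.com:55555,solace2.example.com"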

solace.username

The username used when connecting to Solace.

  • Type: string

  • Importance: high

solace.password

The password used when connecting to Solace.

  • Type: password

  • Importance: high

solace.vpn

Message VPN to use when connecting to the Solace message broker. Default: default.

  • Type: string

  • Default: default

  • Importance: medium

solace.dynamic.durables

Whether the connector creates queues or topic endpoints on the Solace message broker. Set to true only if the queue does not exist. If the queue already exists, set to false to avoid property mismatch errors. Default: false.

  • Type: boolean

  • Default: false

  • Importance: low

solace.client.description

The application description on the Solace message broker for the data connection. Default: Kafka Connect.

  • Type: string

  • Default: Kafka Connect

  • Importance: low

Solace Session

jms.destination.name

The name of the JMS destination (queue or topic) to read messages from.

  • Type: string

  • Importance: high

jms.destination.type

The type of JMS destination, which is either queue or topic.

  • Type: string

  • Default: queue

  • Importance: high

batch.size

The maximum number of records that a connector task might read from the Solace broker before writing to Kafka. The task holds these records until they are acknowledged in Kafka, so this might affect memory usage. The minimum value is 1 and the maximum is 2048.

  • Type: int

  • Default: 1024

  • Valid Values: [1,…,2048]

  • Importance: medium

max.pending.messages

The maximum number of messages per task that the task can receive from Solace brokers and produce to Kafka before it acknowledges the Solace session or messages. If the task fails and is restarted, this is the maximum number of Solace messages the task might duplicate in Kafka.

  • Type: int

  • Default: 4096

  • Importance: medium

max.poll.duration

The maximum amount of time in milliseconds each task can spend building a batch. The connector closes the batch and sends it to Kafka if the task does not read enough messages in the allotted time. Default: 60000.

  • Type: int

  • Default: 60000

  • Valid Values: [1,…,120000]

  • Importance: medium

character.encoding

The character encoding to use while receiving the message. Default: UTF-8.

  • Type: string

  • Default: UTF-8

  • Importance: medium

jms.subscription.durable

Whether the connector tasks' subscription to a Solace topic is durable. Durable subscriptions require you to set a subscription name using jms.subscription.name.

  • Type: boolean

  • Default: false

  • Importance: medium

jms.subscription.name

The name of the Solace subscription. This setting applies only to durable subscriptions (jms.subscription.durable = true) and Solace topics.

  • Type: string

  • Importance: medium

jms.message.selector

The message selector for filtering messages in the destination.

  • Type: string

  • Importance: medium

Solace Secure Connection

solace.ssl.keystore.file

The keystore file for the SSL-enabled VPN connection to Solace.

  • Type: password

  • Default: [hidden]

  • Importance: medium

solace.ssl.keystore.password

The keystore password for the SSL-enabled VPN connection to Solace.

  • Type: password

  • Importance: medium

solace.ssl.truststore.file

The truststore containing the server CA certificate for the SSL-enabled VPN connection to Solace.

  • Type: password

  • Default: [hidden]

  • Importance: medium

solace.ssl.truststore.password

The truststore password for the SSL-enabled VPN connection to Solace.

  • Type: password

  • Importance: medium

solace.ssl.validate.certificate

Set to true to validate SSL certificates. Default: false.

  • Type: boolean

  • Default: false

  • Importance: medium

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int

  • Valid Values: [1,…]

  • Importance: high

Auto-restart policy

auto.restart.on.user.error

Enable connector to automatically restart on user-actionable errors.

  • Type: boolean

  • Default: true

  • Importance: medium

Additional Configs

header.converter

The converter class for the headers. This is used to serialize and deserialize the headers of the messages.

  • Type: string

  • Importance: low

producer.override.compression.type

The compression type for all data generated by the producer. Valid values are none, gzip, snappy, lz4, and zstd.

  • Type: string

  • Importance: low

value.converter.allow.optional.map.keys

Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.auto.register.schemas

Specify if the Serializer should attempt to register the Schema.

  • Type: boolean

  • Importance: low

value.converter.connect.meta.data

Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.enhanced.avro.schema.support

Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.enhanced.protobuf.schema.support

Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.flatten.unions

Whether to flatten unions (oneofs). Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.generate.index.for.unions

Whether to generate an index suffix for unions. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.generate.struct.for.nulls

Whether to generate a struct variable for null values. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.int.for.enums

Whether to represent enums as integers. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.latest.compatibility.strict

Verify latest subject version is backward compatible when use.latest.version is true.

  • Type: boolean

  • Importance: low

value.converter.object.additional.properties

Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.

  • Type: boolean

  • Importance: low

value.converter.optional.for.nullables

Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.optional.for.proto2

Whether proto2 optionals are supported. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.use.latest.version

Use latest version of schema in subject for serialization when auto.register.schemas is false.

  • Type: boolean

  • Importance: low

value.converter.use.optional.for.nonrequired

Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.

  • Type: boolean

  • Importance: low

value.converter.wrapper.for.nullables

Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.wrapper.for.raw.primitives

Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

key.converter.key.schema.id.serializer

The class name of the schema ID serializer for keys. This is used to serialize schema IDs in the message headers.

  • Type: string

  • Default: io.confluent.kafka.serializers.schema.id.PrefixSchemaIdSerializer

  • Importance: low

key.converter.key.subject.name.strategy

How to construct the subject name for key schema registration.

  • Type: string

  • Default: TopicNameStrategy

  • Importance: low

value.converter.decimal.format

Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

BASE64 to serialize DECIMAL logical types as base64 encoded binary data and

NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Type: string

  • Default: BASE64

  • Importance: low

value.converter.flatten.singleton.unions

Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.

  • Type: boolean

  • Default: false

  • Importance: low

value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string

  • Default: DefaultReferenceSubjectNameStrategy

  • Importance: low

value.converter.value.schema.id.serializer

The class name of the schema ID serializer for values. This is used to serialize schema IDs in the message headers.

  • Type: string

  • Default: io.confluent.kafka.serializers.schema.id.PrefixSchemaIdSerializer

  • Importance: low

value.converter.value.subject.name.strategy

Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Type: string

  • Default: TopicNameStrategy

  • Importance: low

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
