Limitations

Refer to the following for usage limitations.

Environment Limitations

Schema Registry-enabled environments support only the default schema subject naming strategy, TopicNameStrategy. The RecordNameStrategy and TopicRecordNameStrategy subject naming strategies are not currently supported.
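
For illustration, TopicNameStrategy derives the subject name from the topic name: a topic named orders uses the subjects orders-key and orders-value. The following minimal sketch, which assumes the confluent-kafka Python client and placeholder Schema Registry credentials, registers a value schema under the subject that TopicNameStrategy would use:

  # Minimal sketch (assumes confluent-kafka is installed; replace the placeholder
  # Schema Registry URL, API key, and secret with real values).
  from confluent_kafka.schema_registry import Schema, SchemaRegistryClient

  TOPIC = "orders"  # hypothetical topic name

  client = SchemaRegistryClient({
      "url": "<SCHEMA_REGISTRY_URL>",                          # placeholder
      "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",  # placeholder
  })

  value_schema = Schema(
      '{"type": "record", "name": "Order",'
      ' "fields": [{"name": "id", "type": "string"}]}',
      schema_type="AVRO",
  )

  # TopicNameStrategy maps a topic's value schema to the subject "<topic>-value"
  # and its key schema to "<topic>-key".
  subject = f"{TOPIC}-value"
  schema_id = client.register_schema(subject, value_schema)
  print(f"Registered schema id {schema_id} under subject {subject}")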

Connector Limitations

Supported connector limitations

See the following limitations for supported connectors.

ActiveMQ Source Connector

There are no current limitations for the ActiveMQ Source Connector for Confluent Cloud.

Amazon CloudWatch Logs Source Connector

The following are limitations for the Amazon CloudWatch Logs Source Connector for Confluent Cloud.

  • The connector can only read from the top 50 log streams (ordered alphabetically).
  • The connector does not support Protobuf.

Amazon CloudWatch Metrics Sink Connector

The Amazon CloudWatch Metrics region must be the same region where your Confluent Cloud cluster is located and where you are running the Amazon CloudWatch Metrics Sink Connector for Confluent Cloud.

Amazon DynamoDB Sink Connector

The following are limitations for the Amazon DynamoDB Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The Amazon DynamoDB database and Kafka cluster should be in the same region.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Amazon Kinesis Source Connector

  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.HoistField$Value
    • org.apache.kafka.connect.transforms.ValueToKey

Amazon Redshift Sink Connector

The following are limitations for the Amazon Redshift Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The Confluent Cloud cluster and the target Redshift cluster must be in the same AWS region.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • This connector cannot consume data containing nested structs.

Amazon SQS Source Connector

There are no current limitations for the Amazon SQS Source Connector for Confluent Cloud.

Amazon S3 Sink Connector

The following are limitations for the Amazon S3 Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target S3 bucket must be in the same AWS region.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters. A configuration sketch showing how flush.size works with the rotation properties follows this list.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter
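
To make the flush and rotation behavior above concrete, the sketch below builds a partial connector configuration and prints it as JSON. It is a minimal sketch only: the topic name and values are illustrative, and you should confirm property names such as time.interval against the S3 Sink connector's configuration reference.

  # Minimal sketch: a partial Amazon S3 Sink configuration pairing flush.size
  # with a time-based rotation interval. Values are illustrative only.
  import json

  s3_sink_config = {
      "topics": "pageviews",                    # hypothetical topic
      "flush.size": "1000",                     # default; minimum 1000 on non-dedicated clusters
      "rotate.schedule.interval.ms": "600000",  # 10 minutes; whichever condition
                                                # (record count or time) is met first wins
      "time.interval": "HOURLY",                # hourly partitioning based on record time
  }

  # With these settings, a partition that receives only 500 records in a
  # 10-minute window still produces a file when the schedule interval trips,
  # because the time condition is met before flush.size is reached.
  print(json.dumps(s3_sink_config, indent=2))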

Amazon S3 Source Connector

The following are limitations for the Amazon S3 Source Connector for Confluent Cloud.

  • For a new bucket, you need to create a new connector with an unused name. If you reconfigure an existing connector to source from the new bucket, or create a connector with a name that is used for another connector, the connector will not source from the beginning of data stored in the bucket. This is because the connector will maintain offsets tied to the connector name.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.HoistField$Value
    • org.apache.kafka.connect.transforms.HoistField$Key
    • org.apache.kafka.connect.transforms.ValueToKey

AWS Lambda Sink Connector

The following are limitations for the AWS Lambda Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and your AWS Lambda project should be in the same AWS region.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

Azure Blob Storage Sink Connector

The following are limitations for the Azure Blob Storage Sink Connector for Confluent Cloud.

  • The Azure Blob Storage Container should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Blob storage in different regions.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Azure Cognitive Search Sink Connector

The following are limitations for the Azure Cognitive Search Sink Connector for Confluent Cloud.

  • Batching multiple records: The connector tries to batch records in a single payload. The maximum payload size is 16 megabytes for each API request. For additional details, refer to Size limits per API call.
  • The Azure Cognitive Search service must be in the same region as your Confluent Cloud cluster.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Azure Cosmos DB Sink Connector

The following are limitations for the Azure Cosmos DB Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The Azure Cosmos DB must be in the same region as your Confluent Cloud cluster.
  • The Kafka topic must not contain tombstone records. The connector does not handle tombstone or null values.

Azure Cosmos DB Source Connector

The following are limitations for the Azure Cosmos DB Source Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The Kafka record key is serialized by StringConverter.

Azure Data Lake Storage Gen2 Sink Connector

The following are limitations for the Azure Data Lake Storage Gen2 Sink Connector for Confluent Cloud.

  • Azure Data Lake storage should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Data Lake storage in different regions.

  • Public inbound traffic access (0.0.0.0/0) must be allowed for this connector. For more information about public Internet access to resources, see Networking and DNS Considerations.

  • Input format JSON to output format AVRO does not work for the preview connector.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Azure Event Hubs Source Connector

The following are limitations for the Azure Event Hubs Source Connector for Confluent Cloud.

  • max.events: The maximum number of events allowed is 499. The default is 50.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.HoistField$Value
    • org.apache.kafka.connect.transforms.ValueToKey

Azure Functions Sink Connector

There is one limitation for the Azure Functions Sink Connector for Confluent Cloud.

  • The target Azure Function should be in the same region as your Confluent Cloud cluster.

Azure Service Bus Source Connector

The following are limitations for the Azure Service Bus Source Connector for Confluent Cloud.

  • For JSON, JSON_SR, and PROTOBUF, the message body (messageBody) produced by the connector contains JSON or text in base64 encoded format.
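
Because the message body is base64 encoded for these formats, a consumer typically decodes it before use. The following minimal sketch assumes a record value that has already been consumed and deserialized as JSON; the surrounding field names are hypothetical, and only messageBody is taken from the limitation above.

  # Minimal sketch: decode the base64-encoded messageBody of a record value
  # produced by the Azure Service Bus Source connector (standard library only).
  import base64
  import json

  # Hypothetical deserialized record value; only "messageBody" is documented above.
  record_value = {
      "messageId": "abc-123",
      "messageBody": base64.b64encode(b'{"orderId": 42}').decode("ascii"),
  }

  raw_body = base64.b64decode(record_value["messageBody"])
  try:
      body = json.loads(raw_body)       # the body was JSON
  except json.JSONDecodeError:
      body = raw_body.decode("utf-8")   # the body was plain text

  print(body)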

Azure Synapse Analytics Sink Connector

The following are limitations for the Azure Synapse Analytics Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • This connector can only insert data into an Azure SQL data warehouse database. Azure Synapse Analytics does not support primary keys, and because updates, upserts, and deletes are all performed on primary keys, these operations are not supported for this connector.
  • When auto.evolve is enabled, if a new column with a default value is added, that default value is only used for new records. Existing records will have "null" as the value for the new column.

Databricks Delta Lake Sink

The following are limitations for the Databricks Delta Lake Sink Connector for Confluent Cloud.

  • The connector is available only on Amazon Web Services (AWS).
  • The Amazon S3 bucket (where data is staged), the Delta Lake instance, and the Kafka cluster must be in the same region.
  • Data is staged in an Amazon S3 bucket. If you delete any files in this bucket, you will lose exactly-once semantics (EOS).
  • The connector appends data only.
  • The connector uses the UTC timezone.
  • The connector supports running one task per connector instance.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Datadog Metrics Sink Connector

  • Batching multiple metrics: The connector tries to batch metrics in a single payload. The maximum payload size is 3.2 megabytes for each API request. For additional details, refer to Post timeseries points.
  • Metrics Rate Limiting: The API endpoints are rate limited. The rate limit for metrics retrieval is 100 per hour, per organization. These limits can be modified by contacting Datadog support.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Datagen Source Connector

There are no current limitations for the Datagen Source Connector for Confluent Cloud.

Elasticsearch Service Sink Connector

The following are limitations for the Elasticsearch Service Sink Connector for Confluent Cloud.

  • The connector only works with the Elasticsearch Service from Elastic Cloud.
  • The connector supports connecting to Elasticsearch version 7.1 (and later). The connector does not support Elasticsearch version 8.x.
  • The Confluent Cloud cluster and the target Elasticsearch deployment must be in the same region.

GitHub Source Connector

The following is a limitation for the GitHub Source Connector for Confluent Cloud.

  • Because of a GitHub API limitation, only one task per connector is supported.

Google BigQuery Sink Connector

The following are limitations for the Google Cloud BigQuery Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • Source topic names must comply with BigQuery naming conventions even if sanitizeTopics is set to true in the connector configuration.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Configuration properties that are not shown in the Confluent UI use default values. See Google BigQuery Sink Connector Configuration Properties for all connector properties.
  • Topic names are mapped to BigQuery table names. For example, if you have a topic named pageviews, a topic named visitors, and a dataset named website, the result is two tables in BigQuery; one named pageviews and one named visitors under the website dataset.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Google BigTable Sink Connector

The following are limitations for the Google Cloud BigTable Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The database and the Kafka cluster should be in the same region.

Google Functions Sink Connector

The following are limitations for the Google Cloud Functions Sink Connector for Confluent Cloud.

  • The target Google Function should be in the same region as your Confluent Cloud cluster.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Google Pub/Sub Source Connector

There are no current limitations for the Google Pub/Sub Source Connector for Confluent Cloud.

Google Cloud Spanner Sink Connector

The following are limitations for the Google Cloud Spanner Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The Confluent Cloud cluster and the target Google Spanner cluster must be in the same GCP region.
  • A valid schema must be available in Confluent Cloud Schema Registry to use Avro, JSON Schema, or Protobuf.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Google Cloud Storage Sink Connector

The following are limitations for the Google Cloud Storage Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target Google Cloud Storage (GCS) bucket must be in the same Google Cloud Platform region.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

HTTP Sink Connector

There are no current limitations for the HTTP Sink Connector for Confluent Cloud.

IBM MQ Source Connector

There are no current limitations for the IBM MQ Source Connector for Confluent Cloud.

Jira Source Connector

The following are limitations for the Jira Source Connector for Confluent Cloud.

  • To use a schema-based output format, you must set schema compatibility to NONE in Schema Registry.
  • Resources that do not support fetching records by datetime will produce duplicate records, because they are fetched repeatedly at the interval specified by the request.interval.ms configuration property.
  • The connector is not able to detect data deletion on Jira.
  • The connector does not guarantee accurate record order in the Apache Kafka® topic.
  • The timezone set for the user defined in the jira.username configuration property must match the Jira general setting timezone used by the connector.

InfluxDB 2 Sink Connector

  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

InfluxDB 2 Source Connector

There are no current limitations for the InfluxDB 2 Source Connector for Confluent Cloud.

Microsoft SQL Server Sink Connector

The following are limitations for the Microsoft SQL Server Sink (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The database and Kafka cluster should be in the same region. If you use a different region, you may incur additional data transfer charges.
  • For tombstone records, set delete.enabled to true (see the example below).
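
A tombstone is a Kafka record whose value is null; with delete.enabled set to true, the connector deletes the row matching the tombstone's key instead of failing. The following minimal sketch, which assumes the confluent-kafka Python client and placeholder cluster credentials, produces a tombstone for a hypothetical key:

  # Minimal sketch: produce a tombstone (null value) for an existing key.
  # Replace the placeholder bootstrap server and API credentials with real values.
  from confluent_kafka import Producer

  producer = Producer({
      "bootstrap.servers": "<BOOTSTRAP_SERVER>",  # placeholder
      "security.protocol": "SASL_SSL",
      "sasl.mechanisms": "PLAIN",
      "sasl.username": "<API_KEY>",               # placeholder
      "sasl.password": "<API_SECRET>",            # placeholder
  })

  # value=None marks this record as a tombstone; with delete.enabled=true the
  # sink connector deletes the row keyed by "42".
  producer.produce("orders", key="42", value=None)
  producer.flush()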

Microsoft SQL Server CDC Source Connector (Debezium)

The following are limitations for the Microsoft SQL Server CDC Source (Debezium) Connector for Confluent Cloud.

  • Change data capture (CDC) is only available in the Enterprise, Developer, Enterprise Evaluation, and Standard editions.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

Microsoft SQL Server Source Connector

The following are limitations for the Microsoft SQL Server Source (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • A timestamp column must not be nullable for the timestamp or timestamp+incrementing mode, and the column should be of type datetime2.

MongoDB Atlas Sink Connector

The following are limitations for the MongoDB Atlas Sink Connector for Confluent Cloud.

  • This connector supports MongoDB Atlas only. This connector will not work with a self-managed MongoDB database.

  • Document post processing configuration properties are not supported. These include:

    • post.processor.chain
    • key.projection.type
    • value.projection.type
    • field.renamer.mapping
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The MongoDB database and Kafka cluster should be in the same region.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.

  • You cannot use a dot in a field name (for example, Client.Email). The error shown below is displayed if a field name includes a dot. You should also not use $ in a field name. For additional information, see Field Names.

    Your record has an invalid BSON field name. Please check Mongo documentation for details.
    

MongoDB Atlas Source Connector

The following are limitations for the MongoDB Atlas Source Connector for Confluent Cloud.

  • This connector supports MongoDB Atlas only. This connector will not work with a self-managed MongoDB database.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.

MQTT Sink Connector

There are no current limitations for the MQTT Sink Connector for Confluent Cloud.

MQTT Source Connector

  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.HoistField$Value
    • org.apache.kafka.connect.transforms.ValueToKey

MySQL Sink Connector

The following are limitations for the MySQL Sink (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The database and Kafka cluster should be in the same region.
  • For tombstone records, set delete.enabled to true.

MySQL CDC Source Connector (Debezium)

The following are limitations for the MySQL CDC Source (Debezium) Connector for Confluent Cloud.

  • MariaDB is not currently supported. See the Debezium docs for more information.
  • Amazon Aurora doesn’t support binary logging using a multi-master cluster as the binlog master or worker. You can’t use binlog-based CDC tools with multi-master clusters.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

MySQL Source Connector

The following are limitations for the MySQL Source (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • A timestamp column must not be nullable for the timestamp or timestamp+incrementing mode.

Oracle CDC Source Connector

The following are limitations for the Oracle CDC Source Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.

Oracle Database Sink Connector

The following are limitations for the Oracle Database Sink Connector for Confluent Cloud.

  • The Oracle Database version must be 11.2.0.4 or later.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The Oracle database and Kafka cluster should be in the same region.
  • See Database considerations for additional information.

Oracle Database Source Connector

The following are limitations for the Oracle Database Source Connector for Confluent Cloud.

  • The Oracle Database version must be 11.2.0.4 or later.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • A timestamp column must not be nullable for the timestamp or timestamp+incrementing mode.
  • Configuration properties that are not shown in the Cloud Console use the default values. See JDBC Connector Source Connector Configuration Properties for property definitions and default values.

PostgreSQL Sink Connector

The following are limitations for the PostgreSQL Sink (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The database and Kafka cluster should be in the same region. If you use a different region, be aware that you may incur additional data transfer charges.
  • For tombstone records, set delete.enabled to true.

PagerDuty Sink Connector

There are no current limitations for the PagerDuty Sink Connector for Confluent Cloud.

PostgreSQL CDC Source (Debezium) Connector

The following are limitations for the PostgreSQL CDC Source Connector (Debezium) for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • For Azure, you must use a general purpose or memory-optimized PostgreSQL database. You cannot use a basic database.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Clients from Azure Virtual Networks are not allowed to access the server by default. Please make sure your Azure Virtual Network is correctly configured and that Allow access to Azure Services is enabled.
  • The following are the default partition and replication factor properties:
    • topic.creation.default.partitions=1
    • topic.creation.default.replication.factor=3
  • See the After-state only output limitation if you are planning to use the optional property After-state only.
  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

PostgreSQL Source Connector

The following are limitations for the PostgreSQL Source (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Clients from Azure Virtual Networks are not allowed to access the server by default. Please make sure your Azure Virtual Network is correctly configured and that Allow access to Azure Services is enabled.
  • A timestamp column must not be nullable for the timestamp or timestamp+incrementing mode.
  • The geometry column type isn’t supported.
  • Configuration properties that are not shown in the Cloud Console use the default values. See JDBC Connector Source Connector Configuration Properties for property definitions and default values.

RabbitMQ Sink Connector

The following are limitations for the RabbitMQ Sink Connector for Confluent Cloud.

  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Redis Sink Connector

The following are limitations for the Redis Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The Redis instance and Kafka cluster should be in the same region.

Salesforce Bulk API Source Connector

The following are limitations for the Salesforce Bulk API Source Connector for Confluent Cloud.

  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

  • Restarting: When the connector operates, it periodically records the last query time in the Connect offset topic. When the connector is restarted, it may fetch Salesforce objects with a LastModifiedDate that is later than the last queried time.

  • API limits: The Salesforce Bulk API Source connector is limited to non-compound fields. For example, Bulk Query does not support address or location compound fields, so the connector discards address and geolocation fields.

  • The following Salesforce object (SObject) error message may be displayed when you are using the Salesforce Bulk API Source connector:

    Entity 'Order' is not supported to use PKChunking.
    

    For these SObjects, set the configuration property Enable Batching to false (CLI property batch.enable=false).

  • Unsupported SObjects: See Supported and Unsupported SObjects for a list of supported and unsupported SObjects.

Salesforce CDC Source Connector

The following are limitations for the Salesforce CDC Source Connector for Confluent Cloud.

Salesforce Platform Event Sink Connector

The following are limitations for the Salesforce Platform Event Sink Connector for Confluent Cloud.

  • The connector is limited to one task only.
  • There are Salesforce streaming allocations and limits that apply to this connector. For example, the number of API calls that can occur within a 24-hour period is capped for free developer org accounts.
  • There are data and file storage limits that are based on the type of organization you use.

Salesforce Platform Event Source Connector

There is one limitation for the Salesforce Platform Event Source Connector for Confluent Cloud.

  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

Salesforce PushTopic Source Connector

The following are limitations for the Salesforce PushTopic Source Connector for Confluent Cloud.

  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
  • Note the following limitations for at least once delivery:
    • When the connector operates, it periodically records the replay ID of the last record written to Kafka. When the connector is stopped and then restarted within 24 hours, the connector continues consuming the PushTopic where it stopped, with no missed events. However, if the connector stops for more than 24 hours, some events are discarded in Salesforce before the connector can read them.
    • If the connector stops unexpectedly due to a failure, it may not record the replay ID of the last record successfully written to Kafka. When the connector restarts, it resumes from the last recorded replay ID. This means that some events may be duplicated in Kafka.

Salesforce SObject Sink Connector

The following are limitations for the Salesforce SObject Sink Connector for Confluent Cloud.

ServiceNow Sink Connector

There are no current limitations for the ServiceNow Sink Connector for Confluent Cloud.

ServiceNow Source Connector

There are no current limitations for the ServiceNow Source Connector for Confluent Cloud.

SFTP Sink Connector

The following are limitations for the SFTP Sink Connector for Confluent Cloud.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

SFTP Source Connector

The following are limitations for the SFTP Source Connector for Confluent Cloud.

  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.HoistField$Value
    • org.apache.kafka.connect.transforms.ValueToKey
  • Currently, the SFTP Source connector reads and moves a file from a given SFTP directory only once. If a file with the same name is later added to the same directory, the connector will not process it.

Snowflake Sink Connector

The following are limitations for the Snowflake Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking and DNS Considerations.
  • The Snowflake database and Kafka cluster must be in the same region.
  • The Snowflake Sink connector does not remove Snowflake pipes when a connector is deleted. For instructions to manually clean up Snowflake pipes, see Dropping Pipes.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Each task is limited to a number of topic partitions based on the buffer.size.bytes property value. For example, a 10 MB buffer size is limited to 50 topic partitions, a 20 MB buffer is limited to 25 topic partitions, a 50 MB buffer is limited to 10 topic partitions, and a 100 MB buffer is limited to 5 topic partitions. These pairs are summarized in the sketch after this list.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter
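
The buffer-size-to-partition pairs above can be kept as a simple lookup when sizing tasks. The sketch below records only the documented values; it is not a general formula, and the helper function is a hypothetical convenience.

  # Minimal sketch: the documented per-task topic-partition limits for the
  # Snowflake Sink connector, keyed by buffer.size.bytes expressed in MB.
  MAX_PARTITIONS_PER_TASK = {
      10: 50,    # 10 MB buffer  -> up to 50 topic partitions per task
      20: 25,    # 20 MB buffer  -> up to 25 topic partitions per task
      50: 10,    # 50 MB buffer  -> up to 10 topic partitions per task
      100: 5,    # 100 MB buffer -> up to 5 topic partitions per task
  }

  def tasks_needed(total_partitions: int, buffer_mb: int) -> int:
      """Rough task count for a topic, based on the documented limits."""
      per_task = MAX_PARTITIONS_PER_TASK[buffer_mb]
      return -(-total_partitions // per_task)  # ceiling division

  print(tasks_needed(total_partitions=120, buffer_mb=10))  # -> 3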

Solace Sink Connector

The following are limitations for the Solace Sink Connector for Confluent Cloud.

  • The connector can create queues, but not durable topic endpoints.
  • A valid schema must be available in Schema Registry to use a Schema Registry-based format, like Avro.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Splunk Sink Connector

The following are limitations for the Splunk Sink Connector for Confluent Cloud.

  • If an invalid index is specified, the connector posts the event to Splunk successfully, but it appears to be discarded in Splunk.

Zendesk Source Connector

The following is a limitation for the Zendesk Source Connector for Confluent Cloud.

  • There is a limit of one task per connector instance.

Preview connector limitations

See the following limitations for preview connectors.

Caution

Preview connectors are not currently supported and are not recommended for production use.

Google Cloud Dataproc Sink Connector

The following are limitations for the Google Cloud Dataproc Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target Dataproc cluster must be in a VPC peering configuration.

    Note

    For a non-VPC peered environment, public inbound traffic access (0.0.0.0/0) must be allowed to the VPC where the Dataproc cluster is located. You must also make configuration changes to allow public access to the Dataproc cluster while retaining the private IP addresses for the Dataproc master and worker nodes (HDFS NameNode and DataNodes). For configuration details, see Configuring a non-VPC peering environment. For more information about public Internet access to resources, see Networking and DNS Considerations.

  • The Dataproc image version must be 1.4 (or later). See Cloud Dataproc Image version list.

  • One task can handle up to 100 partitions.

  • Input format JSON to output format AVRO does not work for the preview connector.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector. Use of this connector is free for a limited time.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Note

After this connector becomes generally available, Confluent Cloud Enterprise customers should contact their Confluent account executive for more information about using it.

RabbitMQ Source Connector

  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.HoistField$Value
    • org.apache.kafka.connect.transforms.ValueToKey

Note

After this connector becomes generally available, Confluent Cloud Enterprise customers should contact their Confluent account executive for more information about using it.