Limits for Fully-Managed Connectors

Refer to the following for usage limitations.

Schema Registry Enabled Environments

Schema Registry enabled environments support only the default schema subject naming strategy TopicNameStrategy. Subject naming strategies RecordNameStrategy and TopicRecordNameStrategy are not currently supported.
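
As a minimal illustration, TopicNameStrategy derives schema subjects from the topic name only (topic-key and topic-value), whereas RecordNameStrategy and TopicRecordNameStrategy derive subjects from the record schema's fully qualified name. The helper function below is purely illustrative and assumes standard Schema Registry naming behavior.

    # Illustrative only: subject names produced by TopicNameStrategy, the only
    # subject naming strategy supported in Schema Registry enabled environments.
    def topic_name_strategy_subjects(topic: str) -> dict:
        """Return the key and value schema subjects for a topic."""
        return {"key": f"{topic}-key", "value": f"{topic}-value"}

    print(topic_name_strategy_subjects("pageviews"))
    # {'key': 'pageviews-key', 'value': 'pageviews-value'}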

Sink connectors

You cannot add a sink connector’s dead letter queue (DLQ) topic to the list of topics consumed by the same sink connector (to prevent an infinite loop).

Source connectors

An internal Apache Kafka® configuration property (max.request.size) controls the maximum producer request size. This size is set at 8 MB maximum for Basic, Standard, and Enterprise clusters, and 20 MB for Dedicated clusters. If you want to run a source connector making requests larger than 8 MB, you must run the connector in a Dedicated cluster.

Automatic topic creation

If the Kafka broker configuration is set to auto.create.topics.enable=false and you delete a topic that was automatically created, the connector does not automatically create a new topic, even if the Connect worker has the property topic.creation.enable=true. To work around this issue, make a minor modification to the connector configuration so the connector can restart. The connector creates the new topic when it restarts after the configuration change.

Connector-Specific Limitations

Supported connector limitations

See the following limitations for supported connectors.

ActiveMQ Source Connector

There are no current limitations for the ActiveMQ Source Connector for Confluent Cloud.

AlloyDB Sink Connector

The following are limitations for the AlloyDB Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • AlloyDB provides only a private IP address, so it cannot be accessed publicly. Set up the AlloyDB Auth Proxy and ensure that it is reachable by the connector.
  • The database and Kafka cluster should be in the same region.
  • For tombstone records, set delete.enabled to true.

Amazon CloudWatch Logs Source Connector

The following are limitations for the Amazon CloudWatch Logs Source Connector for Confluent Cloud.

  • The connector can only read from the top 50 log streams (ordered alphabetically).
  • The connector does not support Protobuf.

Amazon CloudWatch Metrics Sink Connector

The Amazon CloudWatch Metrics region must be the same region where your Confluent Cloud cluster is located and where you are running the Amazon CloudWatch Metrics Sink Connector for Confluent Cloud.

Amazon DynamoDB Sink Connector

The following are limitations for the Amazon DynamoDB Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The Amazon DynamoDB database and Kafka cluster should be in the same region.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Amazon Kinesis Source Connector

Amazon Redshift Sink Connector

The following are limitations for the Amazon Redshift Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The Confluent Cloud cluster and the target Redshift cluster must be in the same AWS region.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • The connector cannot consume data containing nested structs.
  • The connector does not support the Array data type. For supported data types, see Data types.
  • The connector does not support Avro schemas that contain decimal logical types. For a better understanding of numeric data types, see this blog post: Bytes, Decimals, Numerics and oh my.

Amazon SQS Source Connector

There are no current limitations for the Amazon SQS Source Connector for Confluent Cloud.

Amazon S3 Sink Connector

The following are limitations for the Amazon S3 Sink Connector for Confluent Cloud.

  • The data system the sink connector is connecting to should be in the same region as your Confluent Cloud cluster. If you use a different region or cloud platform, be aware that you may incur additional data transfer charges. Contact your Confluent account team or Confluent Support if you need to use Confluent Cloud and connect to a data system that is in a different region or on a different cloud platform.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters. (A configuration sketch illustrating flush.size and rotation follows this list.)

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

    Performing a compatible schema change may cause the connector to flush data prior to whatever is configured for flush.size.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The S3 Sink connector does not allow recursive schema types. Writing to Parquet output format with a recursive schema type results in a StackOverflowError.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter
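
The sketch below illustrates how flush.size and rotate.schedule.interval.ms interact, using the worked example from the note above. Only these two property names come from this page; the snippet is not a complete connector configuration, and the arithmetic simply restates the scenario.

    # A minimal sketch, not a complete connector configuration. Only flush.size
    # and rotate.schedule.interval.ms are taken from the documentation above.
    partial_sink_config = {
        "flush.size": "1000",                     # flush after 1000 records per partition
        "rotate.schedule.interval.ms": "600000",  # ...or every 10 minutes, whichever is met first
    }

    # Worked example from the note above: one topic partition, 500 records
    # arrive between 12:01 and 12:10 and another 500 between 12:11 and 12:20.
    # The 10 minute rotation interval expires before flush.size is reached,
    # so each interval produces one file of 500 records.
    records_per_interval = 500
    rotation_intervals = 2
    print(rotation_intervals, "files of", records_per_interval, "records each")  # 2 files of 500 records each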

Amazon S3 Source Connector

The following are limitations for the Amazon S3 Source Connector for Confluent Cloud.

  • For a new bucket, you need to create a new connector with an unused name. If you reconfigure an existing connector to source from the new bucket, or create a connector with a name that is already used by another connector, the connector will not source from the beginning of the data stored in the bucket. This is because the connector maintains offsets tied to the connector name.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.HoistField$Value
    • org.apache.kafka.connect.transforms.HoistField$Key
    • org.apache.kafka.connect.transforms.ValueToKey
    • org.apache.kafka.connect.transforms.Filter
    • io.confluent.connect.transforms.Filter$Key
    • io.confluent.connect.transforms.Filter$Value

AWS Lambda Sink Connector

The Confluent Cloud cluster and your AWS Lambda project must be in the same AWS region.

Azure Blob Storage Sink Connector

The following are limitations for the Azure Blob Storage Sink Connector for Confluent Cloud.

  • The Azure Blob Storage Container should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Blob storage in different regions.

  • You cannot use public egress IP addresses (IP address allowlisting) for the connector. Azure provides secure and direct private service endpoints to Azure services. For more information, see Service and gateway endpoints.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Azure Blob Storage Source Connector

The following are limitations for the Azure Blob Storage Source Connector for Confluent Cloud.

  • You cannot use public egress IP addresses (IP address allowlisting) for the connector. Azure provides secure and direct private service endpoints to Azure services. For more information, see Service and gateway endpoints.

  • The connector ignores any object with a name that does not start with the configured topics.dir directory. This name is topics/ by default.

  • The connector uses the connector name to store offsets that identify how much of the container it has processed. If you delete a connector and then use the same connector name for a new connector, the new connector will not reprocess data from the beginning of the container. The progress for the deleted connector is saved and the new connector starts from where the original connector’s processing ended. The connector can start processing earlier container data if the corresponding entry in the offset topic is cleared.

  • For a new container, you need to create a new connector. If you reconfigure an existing connector to source from the new container, the connector will not source from the beginning of data stored in the container.

  • The connector will not reload data during the following scenarios:

    • Renaming a file that the connector has already read.
    • Uploading a newer version of an existing file with a new record.
  • If a shared access signature (SAS) token is used, the connector requires an account-level SAS token. A service-level (container) SAS token will not work.

  • There are compatibility constraints for certain input data formats.

    Output data format    Supported input formats
    PROTOBUF, JSON_SR     BYTES, AVRO
    JSON, AVRO, STRING    AVRO, JSON, BYTES, STRING
    BYTES                 STRING, BYTES
  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.HoistField$Value
    • org.apache.kafka.connect.transforms.HoistField$Key
    • org.apache.kafka.connect.transforms.ValueToKey
    • org.apache.kafka.connect.transforms.Filter
    • io.confluent.connect.transforms.Filter$Key
    • io.confluent.connect.transforms.Filter$Value

Azure Cognitive Search Sink Connector

The following are limitations for the Azure Cognitive Search Sink Connector for Confluent Cloud.

  • Batching multiple records: The connector tries to batch records in a single payload. The maximum payload size is 16 megabytes for each API request. For additional details, refer to Size limits per API call.
  • The Azure Cognitive Search service must be in the same region as your Confluent Cloud cluster.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Azure Cosmos DB Sink Connector

The following are limitations for the Azure Cosmos DB Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The Azure Cosmos DB instance must be in the same region as your Confluent Cloud cluster.
  • You cannot use public egress IP addresses (IP address allowlisting) for the connector. Azure provides secure and direct private service endpoints to Azure services. For more information, see Service and gateway endpoints.
  • The Kafka topic must not contain tombstone records. The connector does not handle tombstone or null values.

Azure Cosmos DB Source Connector

The following are limitations for the Azure Cosmos DB Source Connector for Confluent Cloud.

Azure Data Lake Storage Gen2 Sink Connector

The following are limitations for the Azure Data Lake Storage Gen2 Sink Connector for Confluent Cloud.

  • Azure Data Lake storage should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Data Lake storage in different regions.

  • Public inbound traffic access (0.0.0.0/0) must be allowed for this connector. For more information about public Internet access to resources, see Networking, DNS, and service endpoints.

  • Input format JSON to output format AVRO does not work for the preview connector.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • Using a recursive schema type is not allowed and will result in a StackOverflowError.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Azure Event Hubs Source Connector

The following are limitations for the Azure Event Hubs Source Connector for Confluent Cloud.

  • max.events: The maximum number of events allowed is 499. The default is 50.
  • You cannot use public egress IP addresses (IP address allowlisting) for the connector. Azure provides secure and direct private service endpoints to Azure services. For more information, see Service and gateway endpoints.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.ValueToKey
    • org.apache.kafka.connect.transforms.HoistField$Value

Azure Functions Sink Connector

The following are limitations for the Azure Functions Sink Connector for Confluent Cloud.

Azure Log Analytics Sink Connector

The following are limitations for the Azure Log Analytics Sink Connector for Confluent Cloud.

  • There is a 30 MB per post size limit when posting to the Azure Monitor Data Collector API. This size limit is for a single post.
  • There is a 32 KB field value size limit. If a field value is greater than 32 KB, the data is truncated.
  • The recommended maximum number of fields for a given type is 50. This is a practical limit based on usability testing.
  • Tables in Azure Log Analytics workspaces can support 500 columns maximum.
  • Column names can have a maximum of 45 characters.
  • Table names can have a maximum of 100 characters. Table names must start with letters and can only contain letters, numbers, and the underscore character (_).

Azure Service Bus Source Connector

The following are limitations for the Azure Service Bus Source Connector for Confluent Cloud.

  • For JSON, JSON_SR, AVRO, and PROTOBUF, the message body (messageBody) produced by the connector contains JSON or text in base64 encoded format. (A decoding sketch follows this list.)
  • To configure the Azure Service Bus Source connector to use a private endpoint with a private DNS zone/server, see DNS Support in Manage Networking for Confluent Cloud Connectors.
  • You cannot use public egress IP addresses (IP address allowlisting) for the connector. Azure provides secure and direct private service endpoints to Azure services. For more information, see Service and gateway endpoints.
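
The following is a minimal decoding sketch, assuming a consumed record whose value contains the base64 encoded messageBody field described above; the record layout and payload value are illustrative.

    # Assumes the messageBody field described above; the record layout and
    # payload value are illustrative.
    import base64
    import json

    record_value = {"messageBody": "eyJvcmRlcklkIjogNDJ9"}  # illustrative payload

    decoded = base64.b64decode(record_value["messageBody"]).decode("utf-8")
    print(decoded)              # {"orderId": 42}
    print(json.loads(decoded))  # {'orderId': 42}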

Azure Synapse Analytics Sink Connector

The following are limitations for the Azure Synapse Analytics Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • This connector can only insert data into an Azure SQL data warehouse database. Azure Synapse Analytics does not support primary keys. Since updates, upserts, and deletes are all performed on the primary keys, these queries are not supported for this connector.
  • When auto.evolve is enabled, if a new column with a default value is added, that default value is only used for new records. Existing records will have "null" as the value for the new column.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Databricks Delta Lake Sink

The following are limitations for the Databricks Delta Lake Sink Connector for Confluent Cloud.

  • The connector is available only on Amazon Web Services (AWS).
  • The Amazon S3 bucket (where data is staged), the Delta Lake instance, and the Kafka cluster must be in the same region.
  • You cannot configure multiple connectors that consume from the same topic and that use the same Amazon S3 staging bucket.
  • Exactly-once semantics functionality is not currently supported.
  • To use multiple tasks, your Databricks workspace must be using Databricks Runtime version 10.5 or later. For earlier Databricks Runtime versions, the connector is limited to a single task per connector.
  • The connector does not support Array, Map, or Struct field schemas.
  • The connector appends data only.
  • The connector uses the UTC timezone.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Datadog Metrics Sink Connector

  • Batching multiple metrics: The connector tries to batch metrics in a single payload. The maximum payload size is 3.2 megabytes for each API request. For additional details, refer to Post timeseries points.
  • Metrics Rate Limiting: The API endpoints are rate limited. The rate limit for metrics retrieval is 100 per hour, per organization. These limits can be modified by contacting Datadog support.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Datagen Source Connector

There are no current limitations for the Datagen Source Connector for Confluent Cloud.

Elasticsearch Service Sink Connector

The following are limitations for the Elasticsearch Service Sink Connector for Confluent Cloud.

  • The connector is tested to work with Elastic Cloud and the official Elasticsearch distribution from Elastic. Open source versions and derivatives like Amazon OpenSearch are not supported.
  • The connector supports data stream types LOGS and METRICS only.
  • The connector has been tested with versions up to Elasticsearch 8.x.
  • The Confluent Cloud cluster and the target Elasticsearch deployment must be in the same region.
  • The batch.size property is limited to a maximum of 4000 records.

GitHub Source Connector

The following is a limitation for the GitHub Source Connector for Confluent Cloud.

  • Because of a GitHub API limitation, only one task per connector is supported.

Google BigTable Sink Connector

The following are limitations for the Google Cloud BigTable Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The database and the Kafka cluster should be in the same region.

Google BigQuery Sink (Legacy) Connector

The following are limitations for the Google Cloud BigQuery Sink (Legacy) Connector for Confluent Cloud.

  • The data system the sink connector is connecting to should be in the same region as your Confluent Cloud cluster. If you use a different region or cloud platform, be aware that you may incur additional data transfer charges. Contact your Confluent account team or Confluent Support if you need to use Confluent Cloud and connect to a data system that is in a different region or on a different cloud platform.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • Source topic names must comply with BigQuery naming conventions even if sanitizeTopics is set to true in the connector configuration.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • The connector does not support schemas with recursion.
  • DLQ routing does not work if Auto update schemas (auto.update.schemas) is enabled and the connector detects that the failure is due to schema mismatch.
  • Topic names are mapped to BigQuery table names. For example, if you have a topic named pageviews, a topic named visitors, and a dataset named website, the result is two tables in BigQuery; one named pageviews and one named visitors under the website dataset.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Google BigQuery Sink V2 Connector

The following are limitations for the Google BigQuery Sink V2 Connector for Confluent Cloud.

  • The data system the connector is connecting to should be in the same region as your Confluent Cloud cluster. If you use a different region or cloud platform, you may incur additional data transfer charges. Contact your Confluent account team or Confluent Support if you need to use Confluent Cloud and connect to a data system that is in a different region or on a different cloud platform.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • Source topic names must comply with BigQuery naming conventions even if sanitize.topics is set to true in the connector configuration.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • The connector does not support schemas with recursion.
  • DLQ routing does not work if Auto update schemas (auto.update.schemas) is enabled and the connector detects that the failure is due to schema mismatch.
  • Topic names are mapped to BigQuery table names. For example, if you have a topic named pageviews, a topic named visitors, and a dataset named website, the result is two tables in BigQuery; one named pageviews and one named visitors under the website dataset.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Google Functions Sink Connector

The following are limitations for the Google Cloud Functions Sink Connector for Confluent Cloud.

  • The target Google Function should be in the same region as your Confluent Cloud cluster.
  • The connector does not currently support Google Cloud Functions (2nd gen).
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Google Pub/Sub Source Connector

There are no current limitations for the Google Cloud Pub/Sub Source Connector for Confluent Cloud.

Google Cloud Spanner Sink Connector

The following are limitations for the Google Cloud Spanner Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The Confluent Cloud cluster and the target Google Spanner cluster must be in the same GCP region.
  • A valid schema must be available in Confluent Cloud Schema Registry to use Avro, JSON Schema, or Protobuf.
  • The connector does not support the PostgreSQL dialect.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Google Cloud Storage Sink Connector

The following are limitations for the Google Cloud Storage Sink Connector for Confluent Cloud.

  • The data system the sink connector is connecting to should be in the same region as your Confluent Cloud cluster. If you use a different region or cloud platform, be aware that you may incur additional data transfer charges. Contact your Confluent account team or Confluent Support if you need to use Confluent Cloud and connect to a data system that is in a different region or on a different cloud platform.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • If output format BYTES is selected, the input message format must also be BYTES.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • Using a recursive schema type is not allowed and will result in a StackOverflowError.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Google Cloud Storage Source Connector

The following are limitations for the Google Cloud Storage Source Connector for Confluent Cloud.

  • The connector ignores any GCS object with a name that does not start with the configured topics.dir directory. This name is topics/ by default.

  • The connector uses the connector name to store offsets that identify how much of the bucket it has processed. If you delete a connector and then use the same connector name for a new connector, the new connector will not reprocess data from the beginning of the bucket. The progress for the deleted connector is saved and the new connector starts from where the original connector’s processing ended. The connector can start processing earlier bucket data if the corresponding entry in the offset topic is cleared.

  • For a new bucket, you need to create a new connector. If you reconfigure an existing connector to source from the new bucket, the connector will not source from the beginning of data stored in the bucket.

  • The connector will not reload data during the following scenarios:

    • Renaming a file that the connector has already read.
    • Uploading a newer version of an existing file with a new record.
  • There are compatibility constraints for certain input data formats.

    Output data format    Supported input formats
    PROTOBUF, JSON_SR     BYTES, AVRO
    JSON, AVRO, STRING    AVRO, JSON, BYTES, STRING
    BYTES                 STRING, BYTES
  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.HoistField$Value
    • org.apache.kafka.connect.transforms.HoistField$Key
    • org.apache.kafka.connect.transforms.ValueToKey
    • org.apache.kafka.connect.transforms.Filter
    • io.confluent.connect.transforms.Filter$Key
    • io.confluent.connect.transforms.Filter$Value

HTTP Sink Connector

There is one limitation for the HTTP Sink Connector for Confluent Cloud.

  • The Confluent Cloud Kafka consumer configuration property max.poll.interval.ms is set to 300000 milliseconds (5 minutes). This is a hard-coded property. If the sink connector takes longer than five minutes to complete processing and overshoots the poll interval, the connector is kicked out of the consumer group. This results in a failure to commit offsets and duplicate messages.

HTTP Source Connector

The following are limitations for the HTTP Source Connector for Confluent Cloud.

  • The connector does not support APIs that rely on timestamp range-based queries.
  • The connector cannot parse responses in any format other than JSON.

IBM MQ Source Connector

There are no current limitations for the IBM MQ Source Connector for Confluent Cloud.

Jira Source Connector

The following are limitations for the Jira Source Connector for Confluent Cloud.

  • To use a schema-based output format, you must set schema compatibility to NONE in Schema Registry.
  • For Schema Registry-based output formats, the connector attempts to deduce the schema based on the source API response returned. The connector registers a new schema for every NULL and NOT NULL value of an optional field in the API response. For this reason, the connector may register schema versions at a much higher rate than expected.
  • Resources that do not support fetching records by datetime are fetched repeatedly at the interval specified by the request.interval.ms configuration property, which results in duplicate records.
  • The connector is not able to detect data deletion on Jira.
  • The connector does not guarantee accurate record order in the Apache Kafka® topic.
  • The timezone of the user specified in the jira.username configuration property must match the timezone configured in the Jira general settings used by the connector.

InfluxDB 2 Sink Connector

  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

InfluxDB 2 Source Connector

There are no current limitations for the InfluxDB 2 Source Connector for Confluent Cloud.

Microsoft SQL Server Sink Connector

The following are limitations for the Microsoft SQL Server Sink (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • You cannot use public egress IP addresses (IP address allowlisting) for the connector. Azure provides secure and direct private service endpoints to Azure services. For more information, see Service and gateway endpoints.
  • Active Directory authentication is not currently supported.
  • The database and Kafka cluster should be in the same region. If you use a different region, you may incur additional data transfer charges.
  • For tombstone records, set delete.enabled to true.

Microsoft SQL Server CDC Source Connector (Debezium) [Legacy]

The following are limitations for the Microsoft SQL Server CDC Source (Debezium) [Legacy] Connector for Confluent Cloud.

  • Change data capture (CDC) is only available in the Enterprise, Developer, Enterprise Evaluation, and Standard editions.
  • Active Directory authentication is not currently supported.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

Microsoft SQL Server CDC Source V2 (Debezium) Connector

The following are limitations for the Microsoft SQL Server CDC Source Connector V2 (Debezium) for Confluent Cloud.

  • Change data capture (CDC) is only available in the Enterprise, Developer, Enterprise Evaluation, and Standard editions.
  • Active Directory authentication is not currently supported.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

Microsoft SQL Server Source Connector

The following are limitations for the Microsoft SQL Server Source (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.

  • You cannot use public egress IP addresses (IP address allowlisting) for the connector. Azure provides secure and direct private service endpoints to Azure services. For more information, see Service and gateway endpoints.

  • Active Directory authentication is not currently supported.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • A timestamp column must not be nullable for the timestamp or timestamp+incrementing mode and should be of type datetime2.

  • If the connector is making numerous parallel insert operations in a large source table, insert transactions can commit out of order (this is typical). What this means is that a “greater” auto_increment ID (for example, 101) is committed earlier and a “smaller” ID (for example, 100) is committed later. The time difference here may only be a few milliseconds, but the commits are out of order nevertheless.

    Note that using incrementing mode to load data from such tables always results in some data loss. This happens because when the source connector worker reads (polls) the table, the connector gets the row with a greater offset value (with the smaller offset row remaining uncommitted). In the next iteration, although the uncommitted row is committed, the offset position has moved beyond that value, so the row is skipped. Using timestamp+incrementing mode is not a good choice either, because the tables may be very large (five to eight million rows added daily) and there is a high cost for any indexing approach, with the exception of PK indexing.
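
The small simulation below illustrates the incrementing-mode data loss described above. It models the polling logic conceptually and is not connector code.

    # Conceptual simulation of the out-of-order commit scenario above,
    # not connector code.
    committed = []   # rows visible to the connector, in commit order
    offset = 0       # highest incrementing ID the connector has recorded

    def poll():
        """Return committed rows with an ID greater than the stored offset."""
        global offset
        rows = sorted(r for r in committed if r > offset)
        if rows:
            offset = max(rows)
        return rows

    committed.append(101)   # the "greater" ID commits first
    print(poll())           # [101] -> offset advances to 101
    committed.append(100)   # the "smaller" ID commits a few milliseconds later
    print(poll())           # []    -> row 100 is skipped and never read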

MongoDB Atlas Sink Connector

The following are limitations for the MongoDB Atlas Sink Connector for Confluent Cloud.

  • This connector supports MongoDB Atlas only. This connector will not work with a self-managed MongoDB database or MongoDB Atlas Serverless.

  • If your MongoDB database username or password includes any of the following characters: $ : / ? # [ ] @, you must convert the character(s) using percent encoding. (An encoding sketch follows this list.)

  • Document post processing configuration properties are not supported. These include:

    • post.processor.chain
    • key.projection.type
    • value.projection.type
    • field.renamer.mapping
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The MongoDB database service endpoint and the Kafka cluster must be in the same region.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.

  • You cannot use a dot in a field name (for example, Client.Email). The error shown below is displayed if a field name includes a dot. You should also not use $ in a field name. For additional information, see Field Names.

    Your record has an invalid BSON field name. You can check the MongoDB documentation for details.
    
  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter
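
The following is a minimal percent-encoding sketch, assuming a password that contains some of the characters listed above; the password value itself is illustrative.

    # Percent-encode reserved characters before placing the credential in the
    # connector configuration. The password value is illustrative.
    from urllib.parse import quote

    raw_password = "p@ss/w:rd#1"
    encoded_password = quote(raw_password, safe="")
    print(encoded_password)   # p%40ss%2Fw%3Ard%231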

MongoDB Atlas Source Connector

The following are limitations for the MongoDB Atlas Source Connector for Confluent Cloud.

  • This connector supports MongoDB Atlas only. This connector will not work with a self-managed MongoDB database or MongoDB Atlas Serverless.
  • MongoDB Time Series Collections are not supported with this connector.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The connector supports running a single task.

MQTT Sink Connector

There are no current limitations for the MQTT Sink Connector for Confluent Cloud.

MQTT Source Connector

  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.ValueToKey
    • org.apache.kafka.connect.transforms.HoistField$Value

MySQL Sink Connector

The following are limitations for the MySQL Sink (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The database and Kafka cluster should be in the same region.
  • For tombstone records, set delete.enabled to true.

MySQL CDC Source Connector (Debezium) [Legacy]

The following are limitations for the MySQL CDC Source (Debezium) [Legacy] Connector for Confluent Cloud.

  • MariaDB is not currently supported. See the Debezium docs for more information.
  • Amazon Aurora does not support binary logging when using a multi-master cluster as the binlog master or worker, so you cannot use binlog-based CDC tools with multi-master clusters.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

MySQL CDC Source V2 (Debezium) Connector

The following are limitations for the MySQL CDC Source Connector V2 (Debezium) for Confluent Cloud.

  • MariaDB is not currently supported. See the Debezium docs for more information.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

MySQL Source Connector

The following are limitations for the MySQL Source (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.

  • A timestamp column must not be nullable for the timestamp or timestamp+incrementing mode.

  • A query produced by the connector may take a very long time to execute when using timestamp+incrementing, as opposed to the query speed when using incrementing.

  • If the connector is making numerous parallel insert operations in a large source table, insert transactions can commit out of order (this is typical). What this means is that a “greater” auto_increment ID (for example, 101) is committed earlier and a “smaller” ID (for example, 100) is committed later. The time difference here may only be a few milliseconds, but the commits are out of order nevertheless.

    Note that using incrementing mode to load data from such tables always results in some data loss. This happens because when the source connector worker reads (polls) the table, the connector gets the row with a greater offset value (with the smaller offset row remaining uncommitted). In the next iteration, although the uncommitted row is committed, the offset position has moved beyond that value, so the row is skipped. Using timestamp+incrementing mode is not a good choice either, because the tables may be very large (five to eight million rows added daily) and there is a high cost for any indexing approach, with the exception of PK indexing.

New Relic Metrics Sink Connector

The following are limitations for the New Relic Metrics Sink Connector for Confluent Cloud.

OpenSearch Sink Connector

The following are limitations for the OpenSearch Sink Connector for Confluent Cloud:

  • The connector only allows you to create and manage up to 5 indexes.
  • Batch inserts are not currently supported. The connector can only insert records one at a time.
  • The connector only supports HTTP POST requests.

Oracle CDC Source Connector

The following are limitations for the Oracle CDC Source Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • If you change the connector configuration property oracle.date.mapping from date to timestamp, the connector will not work, since this results in a breaking schema change. You must create a new connector if you want to change to the timestamp option.

Oracle Database Sink Connector

The following are limitations for the Oracle Database Sink (JDBC) Connector for Confluent Cloud.

  • The Oracle Database version must be 11.2.0.4 or later.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The Oracle database and Kafka cluster should be in the same region.
  • See Database considerations for additional information.

Oracle Database Source Connector

The following are limitations for the Oracle Database Source (JDBC) Connector for Confluent Cloud.

  • The Oracle Database version must be 11.2.0.4 or later.
  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • A timestamp column must not be nullable for the timestamp or timestamp+incrementing mode.
  • Configuration properties that are not shown in the Cloud Console use the default values. See JDBC Connector Source Connector Configuration Properties for property definitions and default values.

PostgreSQL Sink Connector

The following are limitations for the PostgreSQL Sink (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The database and Kafka cluster should be in the same region. If you use a different region, be aware that you may incur additional data transfer charges.
  • For tombstone records, set delete.enabled to true.

PagerDuty Sink Connector

There are no current limitations for the PagerDuty Sink Connector for Confluent Cloud.

PostgreSQL CDC Source (Debezium) [Legacy] Connector

The following are limitations for the PostgreSQL CDC Source Connector (Debezium) [Legacy] for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • For Azure, you must use a general purpose or memory-optimized PostgreSQL database. You cannot use a basic database.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • CockroachDB is not supported.
  • Clients from Azure Virtual Networks are not allowed to access the server by default. Make sure your Azure Virtual Network is correctly configured and that Allow access to Azure Services is enabled.
  • The following are the default partition and replication factor properties:
    • topic.creation.default.partitions=1
    • topic.creation.default.replication.factor=3
  • See the After-state only output limitation if you are planning to use the optional property After-state only.
  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

PostgreSQL CDC Source V2 (Debezium) Connector

The following are limitations for the PostgreSQL CDC Source Connector V2 (Debezium) for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • For Azure, you must use a general purpose or memory-optimized PostgreSQL database. You cannot use a basic database.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • CockroachDB is not supported.
  • Clients from Azure Virtual Networks are not allowed to access the server by default. Make sure your Azure Virtual Network is correctly configured and that Allow access to Azure Services is enabled.
  • The following are the default partition and replication factor properties:
    • topic.creation.default.partitions=1
    • topic.creation.default.replication.factor=3
  • See the After-state only output limitation if you are planning to use the optional property after.state.only.
  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

PostgreSQL Source Connector

The following are limitations for the PostgreSQL Source (JDBC) Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • Clients from Azure Virtual Networks are not allowed to access the server by default. Make sure your Azure Virtual Network is correctly configured and that Allow access to Azure Services is enabled.

  • CockroachDB is not supported.

  • A timestamp column must not be nullable for the timestamp or timestamp+incrementing mode.

  • If the connector is making numerous parallel insert operations in a large source table, insert transactions can commit out of order (this is typical). What this means is that a “greater” auto_increment ID (for example, 101) is committed earlier and a “smaller” ID (for example, 100) is committed later. The time difference here may only be a few milliseconds, but the commits are out of order nevertheless.

    Note that using incrementing mode to load data from such tables always results in some data loss. This happens because when the source connector worker reads (polls) the table, the connector gets the row with a greater offset value (with the smaller offset row remaining uncommitted). In the next iteration, although the uncommitted row is committed, the offset position has moved beyond that value, so the row is skipped. Using timestamp+incrementing mode is not a good choice either, because the tables may be very large (five to eight million rows added daily) and there is a high cost for any indexing approach, with the exception of PK indexing.

  • The connector does not support all data types. When it encounters an unknown data type, the data type is dropped from the source output. The following lists known unsupported data types.

RabbitMQ Sink Connector

The following are limitations for the RabbitMQ Sink Connector for Confluent Cloud.

  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Redis Sink Connector

The following are limitations for the Redis Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The Redis instance and Kafka cluster should be in the same region.
  • This connector does not support the ValueToKey (org.apache.kafka.connect.transforms.ValueToKey) Single Message Transform.

Salesforce Bulk API Source Connector

The following are limitations for the Salesforce Bulk API Source Connector for Confluent Cloud.

  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

  • Restarting: When the connector operates, it periodically records the last query time in the Connect offset topic. When the connector is restarted, it may fetch Salesforce objects with a LastModifiedDate that is later than the last recorded query time.

  • API limits: The Salesforce Bulk API Source connector is limited to non-compound fields. For example, Bulk Query does not support address and location compound fields, so the connector discards address and geolocation fields.

  • The following Salesforce object (SObject) error message may be displayed when you are using the Salesforce Bulk API Source connector:

    Entity 'Order' is not supported to use PKChunking.
    

    For these SObjects, set the configuration property Enable Batching to false (CLI property batch.enable=false), as shown in the sketch after this list.

  • Unsupported SObjects: See Supported and Unsupported SObjects for a list of supported and unsupported SObjects.
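
For the PKChunking error above, the fragment below is a minimal sketch of disabling batching. Only properties named in this list are shown; credentials, the SObject selection, and output-format settings are omitted.

    {
      "tasks.max": "1",
      "batch.enable": "false"
    }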

Salesforce Bulk API 2.0 Sink Connector

The following are limitations for the Salesforce Bulk API 2.0 Sink Connector for Confluent Cloud.

  • Salesforce imposes API limits over a 24-hour window. Exceeding Salesforce API limits will result in connector failure.
  • The connector is subject to daily limits on the number of records handled, the number of batches created (internal to Salesforce), and the total size of data. For detailed limitations, see Bulk API Limits.
  • There are Salesforce data and file storage limitations based on the type of organization used.

Salesforce Bulk API 2.0 Source Connector

The following are limitations for the Salesforce Bulk API 2.0 Source Connector for Confluent Cloud.

  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
  • Restarting: When the connector operates, it periodically records the last query time in the Connect offset topic. When the connector is restarted, it may fetch Salesforce objects with a LastModifiedDate that is later than the last recorded query time.
  • API limits: The Salesforce Bulk API 2.0 Source connector is limited to non-compound fields. For example, Bulk Query does not support address and location compound fields, so the connector discards address and geolocation fields.

Salesforce CDC Source Connector

The following are limitations for the Salesforce CDC Source Connector for Confluent Cloud.

  • When you pause a connector, the connector continues to fetch records from the Salesforce endpoint. These records are not sent to the Kafka topic until the connector resumes.

Salesforce Platform Event Sink Connector

The following are limitations for the Salesforce Platform Event Sink Connector for Confluent Cloud.

Salesforce Platform Event Source Connector

The following are limitations for the Salesforce Platform Event Source Connector for Confluent Cloud.

  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
  • When you pause a connector, the connector continues to fetch records from the Salesforce endpoint. These records are not sent to the Kafka topic until the connector resumes.

Salesforce PushTopic Source Connector

The following are limitations for the Salesforce PushTopic Source Connector for Confluent Cloud.

  • Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
  • Note the following limitations for at least once delivery:
    • When the connector operates, it periodically records the replay ID of the last record written to Kafka. When the connector is stopped and then restarted within 24 hours, it continues consuming the PushTopic from where it stopped, with no missed events. However, if the connector is stopped for more than 24 hours, Salesforce discards some events before the connector can read them.
    • If the connector stops unexpectedly due to a failure, it may not record the replay ID of the last record successfully written to Kafka. When the connector restarts, it resumes from the last recorded replay ID. This means that some events may be duplicated in Kafka.
  • When you pause a connector, the connector continues to fetch records from the Salesforce endpoint. These records are not sent to the Kafka topic until the connector resumes.

Salesforce SObject Sink Connector

The following are limitations for the Salesforce SObject Sink Connector for Confluent Cloud.

ServiceNow Sink Connector

There are no current limitations for the ServiceNow Sink Connector for Confluent Cloud.

ServiceNow Source Connector

The following are limitations for the ServiceNow Source Connector for Confluent Cloud.

  • The connector does not support the following table types:
    • Sys Audit (sys_audit)
    • Audit Relationship Change (sys_audit_relation)

SFTP Sink Connector

The following are limitations for the SFTP Sink Connector for Confluent Cloud.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met. A configuration sketch combining these properties follows this list.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter
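
As a sketch of the rotation example above: with one topic partition, the fragment below produces a file whenever 1000 records accumulate or 10 minutes elapse, whichever happens first. Only the properties discussed in this list are shown; connection, path, and format settings are omitted.

    {
      "flush.size": "1000",
      "rotate.schedule.interval.ms": "600000"
    }

With the 12:01 to 12:20 arrival pattern described above, this configuration yields two files of 500 records each.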

SFTP Source Connector

The following are limitations for the SFTP Source Connector for Confluent Cloud.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.ValueToKey
    • org.apache.kafka.connect.transforms.HoistField$Value
  • Currently, the SFTP source connector reads and moves a file only once while processing files from a specific SFTP directory. If a file with the same name is produced later in the same SFTP directory, the connector will not process it.

  • If the following error occurs, you must provide WRITE access to the SFTP server directory that the connector is accessing:

    There were some errors with your configuration:
    input.path: Could not write to the specified location configured in %s config.
    Check that the user has write permissions for the specified location
    

Snowflake Sink Connector

The following are limitations for the Snowflake Sink Connector for Confluent Cloud.

  • Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. For details, see Networking, DNS, and service endpoints.
  • The data system the sink connector is connecting to should be in the same region as your Confluent Cloud cluster. If you use a different region or cloud platform, be aware that you may incur additional data transfer charges. Contact your Confluent account team or Confluent Support if you need to use Confluent Cloud and connect to a data system that is in a different region or on a different cloud platform.
  • The connector does not support the word privatelink in the Snowflake URL. For example, the connector fails if the following URL is used: https://<account-name>.privatelink.snowflakecomputing.com.
  • The connector does not remove Snowflake pipes when a connector is deleted. For instructions to manually clean up Snowflake pipes, see Dropping Pipes.
  • Note that Snowpipe Streaming for Kafka supports insert-only operations. For additional information, see the Snowflake documentation.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Each task is limited to a number of topic partitions based on the buffer.size.bytes property value. For example, a 10 MB buffer size is limited to 50 topic partitions, a 20 MB buffer is limited to 25 topic partitions, a 50 MB buffer is limited to 10 topic partitions, and a 100 MB buffer is limited to 5 topic partitions (see the worked example after this list).
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter
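
As an illustrative reading of the buffer figures above: with a 20 MB buffer.size.bytes, a single task can serve at most 25 topic partitions, so sinking a topic with 100 partitions would require at least four tasks at that buffer size (or fewer tasks with a smaller buffer).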

Solace Sink Connector

The following are limitations for the Solace Sink Connector for Confluent Cloud.

  • The connector can create queues, but not durable topic endpoints.
  • A valid schema must be available in Schema Registry to use a Schema Registry-based format, like Avro.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

Splunk Sink Connector

The following are limitations for the Splunk Sink Connector for Confluent Cloud.

  • If an invalid index is specified, the connector posts the event to Splunk successfully, but it appears to be discarded in Splunk.

Zendesk Source Connector

The following are limitations for the Zendesk Source Connector for Confluent Cloud.

  • There is a limit of one task per connector instance.
  • For Schema Registry-based output formats, the connector attempts to deduce the schema based on the source API response returned. The connector registers a new schema for every NULL and NOT NULL value of an optional field in the API response. For this reason, the connector may register schema versions at a much higher rate than expected.

Preview connector limitations

See the following limitations for preview connectors.

Caution

Preview connectors are not currently supported and are not recommended for production use.

Google Cloud Dataproc Sink Connector

The following are limitations for the Google Cloud Dataproc Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target Dataproc cluster must be in a VPC peering configuration.

    Note

    For a non-VPC peered environment, public inbound traffic access (0.0.0.0/0) must be allowed to the VPC where the Dataproc cluster is located. You must also make configuration changes to allow public access to the Dataproc cluster while retaining the private IP addresses for the Dataproc master and worker nodes (HDFS NameNode and DataNodes). For configuration details, see Configuring a non-VPC peering environment. For more information about public Internet access to resources, see Networking, DNS, and service endpoints.

  • The Dataproc image version must be 1.4 (or later). See Cloud Dataproc Image version list.

  • One task can handle up to 100 partitions.

  • Input format JSON to output format AVRO does not work for the preview connector.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector. Use of this connector is free for a limited time.

  • The connector does not currently support the following Single Message Transformations (SMTs):

    • org.apache.kafka.connect.transforms.TimestampRouter
    • io.confluent.connect.transforms.MessageTimestampRouter
    • io.confluent.connect.transforms.ExtractTopic$Header
    • io.confluent.connect.transforms.ExtractTopic$Key
    • io.confluent.connect.transforms.ExtractTopic$Value
    • io.confluent.connect.cloud.transforms.TopicRegexRouter

RabbitMQ Source Connector

The following are limitations for the RabbitMQ Source Connector for Confluent Cloud.

  • When paused, this connector continues to consume messages from RabbitMQ until the consumer times out. These messages remain in system memory while the connector is paused. There is no data loss when the connector resumes, since messages are acknowledged only after they are flushed from memory and sent to Kafka. However, if you plan to keep this connector paused for an extended time, consider removing the connector, since messages will continue to accumulate in system memory.
  • The connector does not currently support the following Single Message Transformations (SMTs):
    • org.apache.kafka.connect.transforms.ValueToKey
    • org.apache.kafka.connect.transforms.HoistField$Value