Cloud Connector Limitations

Refer to the following for specific Confluent Cloud connector limitations.

Supported Connectors

Amazon S3 Sink Connector

The following are limitations for the Amazon S3 Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target S3 bucket must be in the same AWS region.
  • One task can handle up to 100 partitions.
  • Partitioning (hourly or daily) is based on Kafka record time.
  • flush.size defaults to 1000 and can be increased if needed. For example, with the default of 1000 and a topic that has six partitions, a file is created in the storage bucket for each partition once that partition accumulates more than 1000 records. (See the example configuration after this list.)
  • schema.compatibility is set to NONE.
  • A valid schema must be available in Confluent Cloud Schema Registry to use Avro.
  • For Confluent Cloud Enterprise, contact your Confluent account manager to use this connector.
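
The properties above map directly onto the configuration exposed by the self-managed S3 sink connector. The following is a minimal sketch using the self-managed property names (the Confluent Cloud UI exposes an equivalent subset); the bucket, topic, and region values are placeholders, not defaults:

    # Sketch of an S3 sink configuration (self-managed property names);
    # bucket, topic, and region values are illustrative placeholders.
    name=s3-sink-example
    connector.class=io.confluent.connect.s3.S3SinkConnector
    tasks.max=1
    topics=pageviews

    # The bucket must be in the same AWS region as the Confluent Cloud cluster.
    s3.bucket.name=my-example-bucket
    s3.region=us-west-2
    storage.class=io.confluent.connect.s3.storage.S3Storage
    format.class=io.confluent.connect.s3.format.avro.AvroFormat

    # A file is committed for a partition once it accumulates flush.size records.
    flush.size=1000

    # Hourly partitioning based on the Kafka record timestamp.
    partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
    partition.duration.ms=3600000
    path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
    locale=en-US
    timezone=UTC
    timestamp.extractor=Record

    # Fixed to NONE in Confluent Cloud.
    schema.compatibility=NONE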

Azure Blob Storage Sink Connector

The following are limitations for the Azure Blob Storage Sink Connector for Confluent Cloud.

  • The Azure Blob Storage container should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Blob Storage in different regions.
  • One task can handle up to 100 partitions.
  • Partitioning (hourly or daily) is based on Kafka record time.
  • flush.size defaults to 1000 and can be increased if needed. For example, with the default of 1000 and a topic that has six partitions, a file is created in the storage bucket for each partition once that partition accumulates more than 1000 records.
  • schema.compatibility is set to NONE.

Google Cloud Storage Sink Connector

The following are limitations for the Google Cloud Storage Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target Google Cloud Storage (GCS) bucket must be in the same Google Cloud Platform region.
  • One task can handle up to 100 partitions.
  • Partitioning (hourly or daily) is based on Kafka record time.
  • flush.size defaults to 1000 and can be increased if needed. For example, with the default of 1000 and a topic that has six partitions, a file is created in the storage bucket for each partition once that partition accumulates more than 1000 records.
  • schema.compatibility is set to NONE.
  • For Confluent Cloud Enterprise customers, contact your Confluent account manager to use this connector.

Preview Connectors

Caution

Preview connectors are not currently supported and are not recommended for production use.

Amazon Kinesis Source Connector

The following are limitations for the Amazon Kinesis Source Connector for Confluent Cloud.

  • Configuration properties that are not shown in the Confluent UI use default values. For default values and property definitions, see Kinesis Source Connector Configuration Properties.
  • The number of connector tasks must be the same as the number of shards (see the sketch after this list).
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one shard. If you require more shards for testing, contact ccloud-connect-preview@confluent.io.
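
To illustrate the task-to-shard pairing: with the Preview limit of one shard, the connector runs exactly one task. The snippet below is illustrative only; tasks.max is a standard Kafka Connect property, but the stream property name is a placeholder (see Kinesis Source Connector Configuration Properties for the actual names):

    # Illustrative sketch: one shard pairs with exactly one task.
    # "kinesis.stream" is a placeholder name; see the configuration
    # reference for the actual property names and defaults.
    name=kinesis-source-example
    tasks.max=1
    kinesis.stream=my-example-stream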

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

Azure Data Lake Storage Gen2 Sink Connector

The following are limitations for the Azure Data Lake Storage Gen2 Sink Connector for Confluent Cloud.

  • Azure Data Lake Storage should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Data Lake Storage in different regions.
  • One task can handle up to 100 partitions.
  • Partitioning (hourly or daily) is based on Kafka record time.
  • flush.size defaults to 1000 and can be increased if needed. For example, with the default of 1000 and a topic that has six partitions, a file is created in storage for each partition once that partition accumulates more than 1000 records.
  • schema.compatibility is set to NONE.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector. Use of this connector is free for a limited time.

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

Amazon Redshift Sink Connector

The following are limitations for the Amazon Redshift Sink Connector for Confluent Cloud.

  • Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located, unless the environment is configured for VPC peering.
  • The Confluent Cloud cluster and the target Redshift cluster must be in the same AWS region.
  • A valid schema must be available in Confluent Cloud Schema Registry to use Avro.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector. Use of this connector is free for a limited time.

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

Azure Event Hubs Source Connector

Note the following for the Azure Event Hubs Source Connector for Confluent Cloud.

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

Google BigQuery Sink Connector

The following are limitations for the Google BigQuery Sink Connector for Confluent Cloud.

  • One task can handle up to 100 partitions.

  • Configuration properties that are not shown in the Confluent UI use default values. The following are a few examples of these properties (written out in the sketch after this list):

    • autoUpdateSchemas=false
    • bigQueryMessageTimePartitioning=false
    • autoCreateTables=false
    • sanitizeTopics=false

    See Kafka Connect BigQuery Configuration Properties for additional connector properties.

  • Topic names are mapped to BigQuery table names. For example, if you have a topic named pageviews, a topic named visitors, and a dataset named website, the result is two tables in BigQuery: one named pageviews and one named visitors, both under the website dataset.

  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector. Use of this connector is free for a limited time.
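
A sketch of the defaults above written out explicitly, using property names from the Kafka Connect BigQuery connector. The project, dataset, and topic values are placeholders; the datasets entry follows the connector's <topic regex>=<dataset> mapping format:

    # The listed defaults written out explicitly; these are the values the
    # connector uses when the properties are not set in the Confluent UI.
    # Project, dataset, and topic names are placeholders.
    name=bigquery-sink-example
    topics=pageviews,visitors
    tasks.max=1
    project=my-gcp-project
    # Maps every topic into the "website" dataset, so the topics above
    # become the tables website.pageviews and website.visitors.
    datasets=.*=website
    autoUpdateSchemas=false
    bigQueryMessageTimePartitioning=false
    autoCreateTables=false
    sanitizeTopics=false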

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

Google Cloud Dataproc Sink Connector

The following are limitations for the Google Cloud Dataproc Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target Dataproc cluster must be in a VPC peering configuration.
  • The Dataproc image version must be 1.4 (or later). See Cloud Dataproc Image version list.
  • Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the Dataproc cluster is located.
  • One task can handle up to 100 partitions.
  • Partitioning (hourly or daily) is based on Kafka record time.
  • flush.size defaults to 1000 and can be increased if needed. For example, with the default of 1000 and a topic that has six partitions, a file is created in the storage bucket for each partition once that partition accumulates more than 1000 records.
  • schema.compatibility is set to NONE.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector. Use of this connector is free for a limited time.

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

Google Pub/Sub Source Connector

Note the following limitation for the Google Pub/Sub Source Connector for Confluent Cloud.

  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one connector and one task.

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

Google Cloud Spanner Sink Connector

The following are limitations for the Google Cloud Spanner Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target Google Spanner cluster must be in the same GCP region.
  • A valid schema must be available in Confluent Cloud Schema Registry to use Avro.
  • Primary key mode is limited to pk.mode=kafka and pk.fields=kafka (shown after this list).
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector. Use of this connector is free for a limited time.
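
Written out, the only supported primary-key settings are the following; no other pk.mode or pk.fields values are accepted:

    # The only supported primary-key configuration: keys are derived
    # from the Kafka coordinates of each record.
    pk.mode=kafka
    pk.fields=kafka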

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

Microsoft SQL Server Source Connector

The following are limitations for the Microsoft SQL Server Source Connector for Confluent Cloud.

  • Public access must be enabled for your database.
  • Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located, unless the environment is configured for VPC peering.
  • The topic (or topics) the connector writes to must exist before you create the connector.
  • SSL is not supported and should be turned off.
  • A timestamp column must not be nullable.
  • The bulk and incrementing modes are not supported; only timestamp mode is available (see the sketch after this list).
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Source Connector Configuration Properties for property definitions and default values.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.
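
Because bulk and incrementing modes are unavailable, a working configuration uses timestamp mode with a non-nullable timestamp column. A minimal sketch using self-managed JDBC source property names (the Cloud UI exposes an equivalent subset; all connection values, table, and column names are placeholders):

    # Sketch using self-managed JDBC source property names; connection
    # values, table, and column names are illustrative placeholders.
    name=sqlserver-source-example
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    tasks.max=1
    connection.url=jdbc:sqlserver://example-host:1433;databaseName=mydb
    connection.user=example-user
    connection.password=example-password

    # Only timestamp mode is usable; the timestamp column must be
    # non-nullable.
    mode=timestamp
    timestamp.column.name=last_modified
    table.whitelist=orders

    # Records go to an existing topic named <topic.prefix><table>,
    # here "sqlserver-orders"; the topic must exist beforehand.
    topic.prefix=sqlserver-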

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

MySQL Source Connector

The following are limitations for the MySQL Source Connector for Confluent Cloud.

  • Public access must be enabled for your database.
  • Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located, unless the environment is configured for VPC peering.
  • The topic (or topics) the connector writes to must exist before you create the connector.
  • SSL is not supported and should be turned off.
  • A timestamp column must not be nullable.
  • The bulk and incrementing modes are not supported; only timestamp mode is available.
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Source Connector Configuration Properties for property definitions and default values.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task.

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

Oracle Database Source Connector

The following are limitations for the Oracle Database Source Connector for Confluent Cloud.

  • Public access must be enabled for your database.
  • Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located, unless the environment is configured for VPC peering.
  • The topic (or topics) the connector writes to must exist before you create the connector.
  • SSL is not supported and should be turned off.
  • A timestamp column must not be nullable.
  • The bulk and incrementing modes are not supported; only timestamp mode is available.
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Source Connector Configuration Properties for property definitions and default values.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

PostgreSQL Source Connector

The following are limitations for the PostgreSQL Source Connector for Confluent Cloud.

  • Public access must be enabled for your database.
  • For Azure, you must use a general purpose or memory-optimized PostgreSQL database. You cannot use a basic database.
  • Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located, unless the environment is configured for VPC peering.
  • SSL is not supported and should be turned off.
  • The topic (or topics) the connector writes to must exist before you create the connector.
  • A timestamp column must not be nullable.
  • The bulk and incrementing modes are not supported; only timestamp mode is available.
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Source Connector Configuration Properties for property definitions and default values.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Important

Once this connector moves from Preview to General Availability (GA), it will require a Confluent Cloud commitment for Confluent Cloud Enterprise customers. Without a Confluent Cloud commitment, Confluent Cloud Enterprise customers will not have access to this connector in GA. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.