Cloud Connector Limitations

Refer to the following for specific Confluent Cloud connector limitations.

Supported Connectors

Amazon S3 Sink Connector

The following are limitations for the Amazon S3 Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target S3 bucket must be in the same AWS region.
  • One task can handle up to 100 partitions.
  • Partitioning (hourly or daily) is based on Kafka record time.
  • flush.size defaults to 1000 and can be increased if needed. For example, with the default setting of 1000 and a topic that has six partitions, a file is written to the storage bucket for each partition once more than 1000 records accumulate in that partition (see the configuration sketch after this list).
  • schema.compatibility is set to NONE.
  • A valid schema must be available in Confluent Cloud Schema Registry to use Avro.
  • For Confluent Cloud Enterprise, contact your Confluent account manager to use this connector.
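
To make the flush.size behavior concrete, here is a minimal sketch of an S3 sink configuration in Kafka Connect properties form. The connector class is that of the underlying S3 sink connector, all names and values are placeholders, and credentials and format settings are omitted; the Confluent Cloud UI may expose these properties differently.

  # Minimal sketch of an S3 sink configuration; all values are placeholders.
  name=my-s3-sink
  connector.class=io.confluent.connect.s3.S3SinkConnector
  topics=pageviews
  # The bucket must be in the same AWS region as the Confluent Cloud cluster.
  s3.bucket.name=my-bucket
  s3.region=us-west-2
  # Default is 1000; raising it yields fewer, larger files per partition.
  flush.size=10000
  # One task handles up to 100 partitions.
  tasks.max=1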

Google Cloud Storage Sink Connector

The following are limitations for the Google Cloud Storage Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target Google Cloud Storage (GCS) bucket must be in the same Google Cloud Platform region.
  • One task can handle up to 100 partitions.
  • Partitioning (hourly or daily) is based on Kafka record time, as shown in the sketch after this list.
  • flush.size defaults to 1000 and can be increased if needed. For example, with the default setting of 1000 and a topic that has six partitions, a file is written to the storage bucket for each partition once more than 1000 records accumulate in that partition.
  • schema.compatibility is set to NONE.
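
As an illustration of hourly partitioning driven by record time, here is a minimal sketch using the underlying GCS sink connector's time-based partitioner properties. All names and values are placeholders, and credentials and format settings are omitted; the Confluent Cloud UI may expose these properties differently.

  # Sketch of hourly time-based partitioning for the GCS sink.
  name=my-gcs-sink
  connector.class=io.confluent.connect.gcs.GcsSinkConnector
  topics=pageviews
  # The bucket must be in the same GCP region as the Confluent Cloud cluster.
  gcs.bucket.name=my-gcs-bucket
  flush.size=1000
  # Hourly partitions computed from the Kafka record timestamp.
  partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
  partition.duration.ms=3600000
  path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
  locale=en-US
  timezone=UTC
  timestamp.extractor=Record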

Preview Connectors

Important

Preview connectors are not currently supported and are not recommended for production use. For specific limitations, see the individual connector sections below.

Amazon Kinesis Source Connector

The following are limitations for the Amazon Kinesis Source Connector for Confluent Cloud.

  • Configuration properties that are not shown in the Confluent UI use default values. For default values and property definitions, see Kinesis Source Connector Configuration Properties.
  • The number of connector tasks must be the same as the number of shards, as shown in the sketch after this list.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one shard. If you require more shards for testing, contact ccloud-connect-preview@confluent.io.
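
The following minimal sketch shows the task/shard pairing for a one-shard stream. The property names follow the Kinesis Source Connector Configuration Properties referenced above; the stream, region, and topic values are placeholders, and credentials are omitted.

  # Sketch of a Kinesis source configuration; all values are placeholders.
  name=my-kinesis-source
  connector.class=io.confluent.connect.kinesis.KinesisSourceConnector
  kafka.topic=kinesis-events
  kinesis.stream=my-stream
  kinesis.region=us-west-2
  # Must equal the number of shards; the preview allows one task and one shard.
  tasks.max=1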

Azure Blob Storage Sink Connector

The following are limitations for the Azure Blob Storage Sink Connector for Confluent Cloud.

  • One task can handle up to 100 partitions.
  • Partitioning (hourly or daily) is based on Kafka record time.
  • flush.size defaults to 1000 and can be increased if needed. For example, with the default setting of 1000 and a topic that has six partitions, a file is written to the storage bucket for each partition once more than 1000 records accumulate in that partition.
  • schema.compatibility is set to NONE.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one connector. Use of this connector is free for a limited time.

Google BigQuery Sink Connector

The following are limitations for the Google BigQuery Sink Connector for Confluent Cloud.

  • One task can handle up to 100 partitions.

  • Configuration properties that are not shown in the Confluent UI use default values. The following lists a few examples of these properties:

    • autoUpdateSchemas=false
    • bigQueryMessageTimePartitioning=false
    • autoCreateTables=false
    • sanitizeTopics=false

    See Kafka Connect BigQuery Configuration Properties for additional connector properties.

  • Topic names are mapped to BigQuery table names. For example, if you have a topic named pageviews, a topic named visitors, and a dataset named website, the result is two tables under the website dataset in BigQuery: one named pageviews and one named visitors (see the sketch after this list).

  • For Confluent Cloud, organizations are limited to one connector. Use of this connector is free for a limited time.

  • For Confluent Cloud Enterprise, contact your Confluent account manager to participate in the Confluent Cloud connector preview.
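
The following minimal sketch sets the preview defaults listed above explicitly and reuses the topic and dataset names from the mapping example. The connector class and the datasets mapping syntax are those of the underlying Kafka Connect BigQuery connector; the project name is a placeholder, and credentials are omitted.

  # Sketch of a BigQuery sink configuration; all values are placeholders.
  name=my-bigquery-sink
  connector.class=com.wepay.kafka.connect.bigquery.BigQuerySinkConnector
  topics=pageviews,visitors
  project=my-gcp-project
  # Route every topic to the website dataset; tables take the topic names.
  datasets=.*=website
  # The preview defaults listed above, set explicitly for illustration.
  autoCreateTables=false
  autoUpdateSchemas=false
  sanitizeTopics=false
  bigQueryMessageTimePartitioning=false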

MySQL Source Connector

The following are limitations for the MySQL Source Connector for Confluent Cloud.

  • Public access must be enabled for your database.
  • Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located.
  • The topic or topics to which the connector writes records must exist before you create the connector.
  • A timestamp column must not be nullable.
  • The bulk and incrementing modes are not supported; a sketch of a timestamp-mode configuration follows this list.
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Source Connector Configuration Properties for property definitions and default values.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task.
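
Because bulk and incrementing modes are unavailable, a working configuration uses timestamp mode against a non-nullable timestamp column. The following minimal sketch uses the JDBC Source Connector property names referenced above; the connection details, table name, and column name are placeholders, and credentials are omitted.

  # Sketch of a timestamp-mode MySQL source configuration.
  name=my-mysql-source
  connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
  connection.url=jdbc:mysql://<host>:3306/<database>
  table.whitelist=orders
  # Timestamp mode requires a non-nullable timestamp column.
  mode=timestamp
  timestamp.column.name=updated_at
  # Records from the orders table go to topic mysql-orders,
  # which must exist before the connector is created.
  topic.prefix=mysql-
  # Preview limit: one task.
  tasks.max=1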

PostgreSQL Source Connector

The following are limitations for the PostgreSQL Source Connector for Confluent Cloud.

  • Public access must be enabled for your database.
  • For Azure, you must use a general-purpose or memory-optimized PostgreSQL database; a basic-tier database is not supported.
  • Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located.
  • The topic or topics to which the connector writes records must exist before you create the connector (see the sketch after this list).
  • A timestamp column must not be nullable.
  • The bulk and incrementing modes are not supported.
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Source Connector Configuration Properties for property definitions and default values.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task.
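
The destination topic name is derived from topic.prefix plus the table name, so that topic must be created before the connector starts. Here is a minimal sketch using the JDBC Source Connector property names referenced above, with a hypothetical inventory table and a pre-created postgres-inventory topic; connection details are placeholders, and credentials are omitted.

  # Sketch of a timestamp-mode PostgreSQL source configuration.
  name=my-postgres-source
  connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
  connection.url=jdbc:postgresql://<host>:5432/<database>
  table.whitelist=inventory
  mode=timestamp
  timestamp.column.name=updated_at
  # Topic postgres-inventory must exist before the connector is created.
  topic.prefix=postgres-
  # Preview limit: one task.
  tasks.max=1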