Custom Connectors for Confluent Cloud: Limitations and Support

Make sure to review all the information here before starting.


The following are limitations for custom connectors:

  • Custom connectors can only be created in AWS regions supported by Confluent Cloud.
  • The cluster must use public internet endpoints. Currently, there is no support for using a set of public egress IP addresses.
  • Custom Connect creates three or four role bindings when you provision a custom connector. These provide the connector with permissions required to operate. These role bindings count toward your Confluent Cloud organization role binding quota.
  • You can only run Java-based Apache Kafka® custom connectors in Confluent Cloud.
  • The custom connector framework currently supports JDK 11 only.
  • Organizations are limited to 30 custom connectors and 100 plugins.
  • Up to 5 tasks are supported per custom connector. Note that a custom connector is allocated 2 GB of memory for all tasks.
  • Custom connectors cannot write data to a local file system in Confluent Cloud. If you configure your custom connector to write to the local file system, it will fail.
  • Adhere to the connector naming conventions:
    • Do not exceed 64 characters.
    • A connector name can contain Unicode letters, numbers, marks, and the following special characters: . , & _ + | [ ] - . Note that you can use spaces, but using dashes (-) makes it easier to reference the connector programmatically.
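The naming rules above can be checked before provisioning. A minimal sketch in Python (the function name and the exact interpretation of the character policy are illustrative, not an official validator):

```python
import unicodedata

# Special characters permitted by the naming rules, plus the space character.
ALLOWED_SPECIALS = set(".,&_+|[]- ")

def is_valid_connector_name(name: str) -> bool:
    """Check a proposed connector name: at most 64 characters, and only
    Unicode letters, numbers, marks, or the listed special characters."""
    if not 1 <= len(name) <= 64:
        return False
    for ch in name:
        category = unicodedata.category(ch)  # e.g. 'Lu', 'Nd', 'Mn'
        if category[0] in ("L", "N", "M"):   # letter, number, mark
            continue
        if ch in ALLOWED_SPECIALS:
            continue
        return False
    return True
```

For example, `is_valid_connector_name("my-s3-sink")` passes, while a 65-character name or one containing a slash does not.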


Confluent regularly scans connectors uploaded through the Custom Connector product to detect if uploaded connectors are interacting with malicious endpoints or otherwise present a security risk to Confluent. If malicious activity is detected, Confluent may immediately delete the connector.

Shared responsibility

Confluent supports the Kafka Connect infrastructure in Confluent Cloud only. It is your responsibility to troubleshoot issues with custom connectors you build or that others provide to you. The following provides additional details about shared support responsibilities.

Shared responsibilities matrix


  • Customer Managed: The customer is responsible for self-managing these services. Confluent does not provide any support for customer-managed services and features within Custom Connectors clusters.
  • Confluent Managed: Confluent is responsible for managing these services and providing support.

Confluent and Partner support

You are responsible for the connectors you upload through the Custom Connector feature.

  • Customers that upload connectors to Confluent Cloud through the Custom Connector feature are responsible for management and support of the connector. Confluent does not provide support for custom connectors.
  • Partner-built connectors that are published on Confluent Hub may not be tested or certified for Custom Connector functionality on Confluent Cloud. Confluent does not provide Enterprise support for customers who choose to provision a custom connector using a Partner-built connector that is not certified by Confluent.
  • Confluent-built connectors are not tested or certified for Custom Connector functionality in Confluent Cloud. Confluent does not provide Enterprise support for customers who choose to provision a custom connector using a Confluent-built plugin.

The following table provides additional details about enterprise support.

Enterprise Support for Custom Connectors

| Connector Type | Confluent Cloud Infrastructure Support | Confluent Connector Support |
|----------------|----------------------------------------|-----------------------------|
| Customer-built plug-in/connector | Confluent | Not supported by Confluent. |
| Partner-built plug-in/connector | Confluent | See Certified Partner-built connectors. |
| Confluent-built plug-in/connector | Confluent | Not supported by Confluent when used for Custom Connectors (applies to both proprietary and community plugins). |

Certified Partner-built connectors

The Connect plugins below are tested and supported by the listed Confluent Partner.

Certified Partner Connectors

| Confluent Partner | Confluent Hub Link | Documentation and Support |
|-------------------|--------------------|---------------------------|
| Ably | Ably Kafka Sink connector | Ably docs and support |
| ClickHouse | ClickHouse Kafka Sink connector | ClickHouse docs and support |
| Neo4j | Neo4j Kafka Sink and Source connectors | Neo4j docs and support |

Supported AWS Regions

Custom connectors can be created in all AWS regions supported by Confluent Cloud. For a list of supported AWS regions, see Cloud Providers and Regions for Confluent Cloud.

Schema Registry integration

Schema Registry must be enabled for the environment for a custom connector to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). Schema Registry can be enabled as a fully-managed service in Confluent Cloud or run as a self-managed service.

Managed Schema Registry

When Schema Registry is enabled as a fully-managed service for your Confluent Cloud environment, you can automatically add the Schema Registry configuration properties to your custom connector using the UI. The following example shows the required configuration properties added to the custom connector configuration when using the Avro data format for both the key and value:

 "": "true",
 "key.converter": "io.confluent.connect.avro.AvroConverter",
 "value.converter": "io.confluent.connect.avro.AvroConverter"

Self-managed Schema Registry

If you plan to use a self-managed Schema Registry configuration, the self-managed Schema Registry instance must be accessible over the public internet. You also must add all Schema Registry configuration entries, including credentials, in the connector configuration. For example:

  "key.converter": "io.confluent.connect.protobuf.ProtobufConverter",
  "key.converter.basic.auth.credentials.source": "USER_INFO",
  "": "<username>:<password>",
  "key.converter.schema.registry.url": "",
  "value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
  "value.converter.basic.auth.credentials.source": "USER_INFO",
  "": "<username>:<password>",
  "value.converter.schema.registry.url": ""

App log topic

Note the following usage details for the app log topic:

  • Customers are responsible for all charges related to using the app log topic with a custom connector. For all other billing details, see Manage Billing in Confluent Cloud.
  • Kafka topics have a default retention period of seven days. You must consume the logs from the app log topic within seven days. If you need logs for a longer period, change the log retention period configuration property. In the UI, open the appropriate log topic and change the Retention time.
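If you set retention programmatically instead of through the UI, the topic-level retention setting (`retention.ms`) is expressed in milliseconds. A quick conversion sketch (the function is illustrative; the admin call that applies the new value is omitted):

```python
def retention_ms(days: int) -> int:
    """Convert a retention period in days to a retention.ms value."""
    return days * 24 * 60 * 60 * 1000

# The default seven-day retention corresponds to:
retention_ms(7)  # 604800000 ms
```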