Docker Image Configuration Reference for Confluent Platform
This topic describes how to configure the Docker images when starting Confluent Platform.
You can dynamically specify configuration values for the Confluent Platform Docker images with environment variables, passed with the Docker -e or --env flags.
Kafka configuration
This section helps you configure the following images that include Kafka:

- cp-kafka: includes Kafka.
- cp-server: includes role-based access control, self-balancing clusters, and more, in addition to Kafka.
- confluent-local: includes Kafka and Confluent REST Proxy. Defaults to running in KRaft mode.
Convert the properties file variables according to the following rules and use them as Docker environment variables:
- Prefix Kafka component properties with KAFKA_. For example, metric.reporters is a Kafka property, so it should be converted to KAFKA_METRIC_REPORTERS.
- Prefix Confluent component properties for cp-server with CONFLUENT_. For example, confluent.metrics.reporter.bootstrap.servers is a Confluent Enterprise feature property, so it should be converted to CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
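For example, under these rules two broker properties convert as follows (the property values are illustrative):

log.retention.hours=168
  becomes: -e KAFKA_LOG_RETENTION_HOURS=168

confluent.metrics.reporter.topic=_confluent-metrics
  becomes: -e CONFLUENT_METRICS_REPORTER_TOPIC=_confluent-metrics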
Required Kafka configurations for KRaft mode
As of Confluent Platform 8.0, ZooKeeper has been removed. You must use KRaft mode for metadata management. For steps to migrate from ZooKeeper to KRaft, see Migrate from ZooKeeper to KRaft on Confluent Platform. For links to ZooKeeper topics in older versions of the documentation, see ZooKeeper Topic Guide.
Important
For more information about configuring KRaft in production, see KRaft Configuration for Confluent Platform.
KAFKA_PROCESS_ROLES
The role of this server (controller, broker, or broker,controller).

KAFKA_NODE_ID
The unique identifier for this server.

KAFKA_CONTROLLER_QUORUM_VOTERS
A comma-separated list of quorum voters. Each controller is identified with its ID, host, and port information in the format {id}@{host}:{port}.

KAFKA_CONTROLLER_LISTENER_NAMES
A comma-separated list of the names of the listeners used by the controller. On a node with process.roles=broker, only the first listener in the list is used by the broker. For KRaft controllers in isolated or combined mode, the node listens as a KRaft controller on all listeners that are listed for this property, and each must appear in the listeners property.
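As a minimal sketch of these settings, the following starts an isolated controller node. The hostname, node ID, cluster ID, and the exact controller-only form accepted by the image's startup script are assumptions for illustration:

docker run -d \
--name=kraft-controller \
-h kraft-controller \
-e KAFKA_PROCESS_ROLES='controller' \
-e KAFKA_NODE_ID=1 \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@kraft-controller:29093' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e KAFKA_LISTENERS='CONTROLLER://kraft-controller:29093' \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT' \
-e CLUSTER_ID='MkU3OEVBNTcwNTJENDM2Qk' \
confluentinc/cp-kafka:8.0.2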
Optional cp-server setting
The license key is an optional setting for the Enterprise Kafka (cp-server) image; without it, the image runs as a 30-day trial. For a complete list of Confluent Server configuration settings, see Kafka Configuration Reference for Confluent Platform.
KAFKA_CONFLUENT_LICENSE
The Enterprise Kafka license key. Without the license key, Confluent Server can be used for a 30-day trial period. Note that confluent is part of the property name and not the component name. For example, use KAFKA_CONFLUENT_LICENSE to specify the Kafka license.
Example configurations
The following examples show how to set environment variables and run the Kafka images that Confluent provides. These examples:

- Set the KAFKA_ADVERTISED_LISTENERS variable to localhost:29092. This makes Kafka accessible from outside the container by advertising its location on the Docker host.
- Set KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR to 1. This is required when you are running with a single-node cluster. If you have three or more nodes, you can use the default.
- Assign a unique cluster ID to the CLUSTER_ID variable, which every KRaft cluster must have. To learn how to use the kafka-storage tool to generate a unique ID, see Generate and format IDs.
- Use KRaft combined mode, which is intended for local experimentation only and is not supported by Confluent. For production configuration, see Production Configuration Options.
cp-kafka example
For brevity, the following example shows how to start cp-kafka in KRaft combined mode.
Combined mode is designed for local experimentation and is not supported by Confluent.
For production configuration recommendations, see Production Configuration Options.
Generate a random-uuid using the kafka-storage tool:
/bin/kafka-storage random-uuid
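If you prefer to capture the generated ID directly in a shell variable, one option is to invoke the tool through the image itself; that kafka-storage is on the image's PATH and can be run as an entrypoint override is an assumption about the image layout:

CLUSTER_ID=$(docker run --rm --entrypoint kafka-storage confluentinc/cp-kafka:8.0.2 random-uuid)

You can then pass -e CLUSTER_ID="$CLUSTER_ID" to docker run instead of a hardcoded value.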
Assign the output to the CLUSTER_ID variable as shown in the following example:
docker run -d \
--name=kafka-kraft \
-h kafka-kraft \
-p 9101:9101 \
-p 9092:9092 \
-e KAFKA_NODE_ID=1 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT' \
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://kafka-kraft:29092,PLAINTEXT_HOST://localhost:9092' \
-e KAFKA_JMX_PORT=9101 \
-e KAFKA_JMX_HOSTNAME=localhost \
-e KAFKA_PROCESS_ROLES='broker,controller' \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@kafka-kraft:29093' \
-e KAFKA_LISTENERS='PLAINTEXT://kafka-kraft:29092,CONTROLLER://kafka-kraft:29093,PLAINTEXT_HOST://0.0.0.0:9092' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e CLUSTER_ID='MkU3OEVBNTcwNTJENDM2Qk' \
confluentinc/cp-kafka:8.0.2
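To confirm that the broker is up, you can list topics over the host listener; a quick sketch using the kafka-topics CLI bundled in the image:

docker exec kafka-kraft kafka-topics --bootstrap-server localhost:9092 --list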
cp-server example
The following example shows how to start cp-server in KRaft combined mode.
Note that KRaft combined mode is for local experimentation only and is not supported by Confluent.
For production configuration recommendations, see Production Configuration Options.
Generate a random-uuid using the kafka-storage tool:
/bin/kafka-storage random-uuid
Assign the output to the CLUSTER_ID variable:
docker run -d \
--name=kafka-kraft \
-h kafka-kraft \
-p 9101:9101 \
-p 9092:9092 \
-e KAFKA_NODE_ID=1 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT' \
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://kafka-kraft:29092,PLAINTEXT_HOST://localhost:9092' \
-e KAFKA_PROCESS_ROLES='broker,controller' \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@kafka-kraft:29093' \
-e KAFKA_LISTENERS='PLAINTEXT://kafka-kraft:29092,CONTROLLER://kafka-kraft:29093,PLAINTEXT_HOST://0.0.0.0:9092' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e CLUSTER_ID='q1Sh-9_ISia_zwGINzRvyQ' \
confluentinc/cp-server:8.0.2
confluent-local example
The confluent-local image is configured to run in KRaft combined mode by default and requires no configuration to start.
Note that KRaft combined mode is for local experimentation only and is not supported by Confluent. For production configuration recommendations, see Production Configuration Options.
The following example shows how to run confluent-local with the default configurations:
docker run -d -P --name=kafka-kraft confluentinc/confluent-local:8.0.2
For a list of the default configurations for this image, see the configureDefaults file on GitHub.
The following example shows how to generate a cluster ID and specify the ID when you run the image.
/bin/kafka-storage random-uuid
Assign the output to the CLUSTER_ID variable:
docker run -d -P \
--name=kafka-kraft \
-e CLUSTER_ID=q1Sh-9_ISia_zwGINzRvyQ \
confluentinc/confluent-local:8.0.2
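Because -P publishes the image's exposed ports to random host ports, you can inspect the resulting mappings with docker port (the assigned port numbers vary per run):

docker port kafka-kraft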
Confluent Schema Registry configuration
For the Schema Registry (cp-schema-registry) image, convert the property
variables as below and use them as environment variables:
- Prefix with SCHEMA_REGISTRY_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
For example, run the following to set kafkastore.bootstrap.servers,
host.name, listeners and debug:
docker run -d \
--net=host \
--name=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=SSL://hostname2:9092 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://localhost:8081 \
-e SCHEMA_REGISTRY_DEBUG=true \
confluentinc/cp-schema-registry:8.0.2
Required Schema Registry configurations
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
A list of Kafka brokers to connect to.

SCHEMA_REGISTRY_HOST_NAME
This is required if you are running Schema Registry with multiple nodes. The hostname is required because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment. The hostname must be resolvable because secondary nodes serve registration requests indirectly by forwarding them to the current primary and returning the response supplied by the primary. For more information, see the Schema Registry documentation on Single Primary Architecture.
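As a quick check that Schema Registry is serving requests, you can query its REST API on the listener from the example above (a minimal sketch):

curl http://localhost:8081/subjects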
Kafka Connect configuration
For the Kafka Connect (cp-kafka-connect) image, convert the property
variables as below and use them as environment variables:
- Prefix with CONNECT_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
For example, run this command to set the required properties, such as bootstrap.servers, the topic names for config, offsets, and status, as well as the key and value converters:
docker run -d \
--name=kafka-connect \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS=localhost:29092 \
-e CONNECT_REST_PORT=28082 \
-e CONNECT_GROUP_ID="quickstart" \
-e CONNECT_CONFIG_STORAGE_TOPIC="quickstart-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="quickstart-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="quickstart-status" \
-e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="localhost" \
-e CONNECT_PLUGIN_PATH=/usr/share/java \
confluentinc/cp-kafka-connect:8.0.2
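Once the worker starts, you can verify it through the Connect REST interface on the port configured above (a minimal sketch):

curl http://localhost:28082/connectors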
Required Kafka Connect configurations
The following configurations must be passed to run the Kafka Connect Docker image.
CONNECT_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

CONNECT_GROUP_ID
A unique string that identifies the Connect cluster group this worker belongs to.

CONNECT_CONFIG_STORAGE_TOPIC
The name of the topic in which to store connector and task configuration data. This must be the same for all workers with the same group.id.

CONNECT_OFFSET_STORAGE_TOPIC
The name of the topic in which to store offset data for connectors. This must be the same for all workers with the same group.id.

CONNECT_STATUS_STORAGE_TOPIC
The name of the topic in which to store state for connectors. This must be the same for all workers with the same group.id.

CONNECT_KEY_CONVERTER
Converter class for keys. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.

CONNECT_VALUE_CONVERTER
Converter class for values. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.

CONNECT_REST_ADVERTISED_HOST_NAME
The hostname that is given out to other workers to connect to. In a Docker environment, your clients must be able to connect to Connect and the other services. The advertised hostname is how Connect gives out a hostname that clients can reach.

CONNECT_PLUGIN_PATH
The location from which to load Connect plugins in class loading isolation. If you use the confluent-hub client, include /usr/share/confluent-hub-components, the default path that confluent-hub installs to.
Optional Kafka Connect configurations
The images marked as Commercially licensed include proprietary components and must be licensed from Confluent when deployed.
CONNECT_CONFLUENT_LICENSE
Required if you use the cp-server-connect or cp-server-connect-base Docker images. Without a license key, Kafka Connect can be used for a 30-day trial period. See Docker Image Reference for Confluent Platform for more information.
Note
To configure the JVM for Connect, you must use the KAFKA_HEAP_OPTS
setting instead of CONNECT_KAFKA_HEAP_OPTS.
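For example, to bound the Connect worker's JVM heap, you might add a flag like the following to the docker run command shown earlier (the heap sizes are illustrative):

-e KAFKA_HEAP_OPTS="-Xms512M -Xmx2G" \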
ksqlDB Server configuration
For a complete list of ksqlDB parameters, see ksqlDB Configuration Parameter Reference.
For the ksqlDB Server image (cp-ksqldb-server), convert the property variables as follows and use them as environment variables:

- Prefix with KSQL_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
Enterprise support
For enterprise support, you must provide a license key for the ksqlDB Server
and set the ksql.resource.extension.class property in the
ksql-server.properties file to the following value. For more information,
see Configure component licenses.
ksql.resource.extension.class=io.confluent.ksql.security.license.KsqlLicenseValidatorExtension
To set this property in a ksqlDB Server container, pass the following
environment variable with the docker run command by using the -e or
--env flags.
KSQL_KSQL_RESOURCE_EXTENSION_CLASS
The class that validates the license key.
docker run -d \
-p 127.0.0.1:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_LISTENERS=http://0.0.0.0:8088/ \
-e KSQL_KSQL_SERVICE_ID=confluent_test_2 \
-e KSQL_KSQL_RESOURCE_EXTENSION_CLASS=io.confluent.ksql.security.license.KsqlLicenseValidatorExtension \
confluentinc/cp-ksqldb-server:8.0.2
If you do not set this property, ksqlDB logs a warning saying that no license was found and that no enterprise features or support will be provided. For more information, see Configure component licenses.
ksqlDB Headless Server configurations
Run a standalone ksqlDB Server instance in a container.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_KSQL_SERVICE_ID
The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.

KSQL_KSQL_QUERIES_FILE
A file that specifies predefined SQL queries.
docker run -d \
-v /path/on/host:/path/in/container/ \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_KSQL_SERVICE_ID=confluent_standalone_2_ \
-e KSQL_KSQL_QUERIES_FILE=/path/in/container/queries.sql \
confluentinc/cp-ksqldb-server:8.0.2
Interactive Server configuration
Run a ksqlDB Server that enables manual interaction by using the ksqlDB CLI.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_KSQL_SERVICE_ID
The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.

KSQL_LISTENERS
A list of URIs, including the protocol, that the ksqlDB Server listens on.
docker run -d \
-p 127.0.0.1:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_LISTENERS=http://0.0.0.0:8088/ \
-e KSQL_KSQL_SERVICE_ID=confluent_test_2 \
confluentinc/cp-ksqldb-server:8.0.2
Connect to a secure Kafka cluster, like Confluent Cloud
Run a ksqlDB Server that uses a secure connection to a Kafka cluster. Learn about Configure Security for ksqlDB.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_KSQL_SERVICE_ID
The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.

KSQL_LISTENERS
A list of URIs, including the protocol, that the ksqlDB Server listens on.

KSQL_KSQL_SINK_REPLICAS
The default number of replicas for the topics created by ksqlDB. The default is one.

KSQL_KSQL_STREAMS_REPLICATION_FACTOR
The replication factor for internal topics, the command topic, and output topics.

KSQL_SECURITY_PROTOCOL
The protocol that your Kafka cluster uses for security.

KSQL_SASL_MECHANISM
The SASL mechanism that your Kafka cluster uses for security.

KSQL_SASL_JAAS_CONFIG
The Java Authentication and Authorization Service (JAAS) configuration.
docker run -d \
-p 127.0.0.1:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=REMOVED_SERVER1:9092,REMOVED_SERVER2:9093,REMOVED_SERVER3:9094 \
-e KSQL_LISTENERS=http://0.0.0.0:8088/ \
-e KSQL_KSQL_SERVICE_ID=default_ \
-e KSQL_KSQL_SINK_REPLICAS=3 \
-e KSQL_KSQL_STREAMS_REPLICATION_FACTOR=3 \
-e KSQL_SECURITY_PROTOCOL=SASL_SSL \
-e KSQL_SASL_MECHANISM=PLAIN \
-e KSQL_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<username>\" password=\"<strong-password>\";" \
confluentinc/cp-ksqldb-server:8.0.2
Configure a ksqlDB Server by using Java system properties
Run a ksqlDB Server with a configuration that’s defined by Java properties.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_OPTS
A space-separated list of Java options.
docker run -d \
-v /path/on/host:/path/in/container/ \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_OPTS="-Dksql.service.id=confluent_test_3_ -Dksql.queries.file=/path/in/container/queries.sql" \
confluentinc/cp-ksqldb-server:8.0.2
View logs
Use the docker logs command to view ksqlDB logs that are generated from
within the container.
docker logs -f <container-id>
[2018-05-24 23:43:05,591] INFO stream-thread [_confluent-ksql-default_transient_1507119262168861890_1527205385485-71c8a94c-abe9-45ba-91f5-69a762ec5c1d-StreamThread-17] Starting (org.apache.kafka.streams.processor.internals.StreamThread:713)
...
ksqlDB CLI configuration
For the ksqlDB CLI image (cp-ksqldb-cli), convert the property variables as follows and use them as environment variables:

- Prefix with KSQL_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
Connect to a Dockerized ksqlDB Server
Run a ksqlDB CLI instance in a container and connect to a ksqlDB Server that’s running in a container.
The Docker network created by ksqlDB Server enables you to connect to a Dockerized ksqlDB server.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_OPTS
A space-separated list of Java options.
# Run ksqlDB Server.
docker run -d -p 10.0.0.11:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_OPTS="-Dksql.service.id=confluent_test_3_ -Dlisteners=http://0.0.0.0:8088/" \
confluentinc/cp-ksqldb-server:8.0.2
# Connect the ksqlDB CLI to the server.
docker run -it confluentinc/cp-ksqldb-cli:8.0.2 http://10.0.0.11:8088
...
Copyright 2017 Confluent Inc.
CLI v8.0.2-SNAPSHOT, Server v8.0.2-SNAPSHOT located at http://10.0.0.11:8088
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
ksql>
Provide a configuration file
Set up a ksqlDB CLI instance by using a configuration file, and run it in a container.
# Assume ksqlDB Server is running.
# Ensure that the configuration file exists.
ls /path/on/host/ksql-cli.properties
docker run -it \
-v /path/on/host/:/path/in/container \
confluentinc/cp-ksqldb-cli:8.0.2 http://10.0.0.11:8088 \
--config-file /path/in/container/ksql-cli.properties
Connect to a ksqlDB Server running on another host, like AWS
Run a ksqlDB CLI instance in a container and connect to a remote ksqlDB Server host.
docker run -it confluentinc/cp-ksqldb-cli:8.0.2 \
http://ec2-etc.us-etc.compute.amazonaws.com:8080
...
Copyright 2017 Confluent Inc.
CLI v8.0.2-SNAPSHOT, Server v8.0.2-SNAPSHOT located at http://ec2-etc.us-etc.compute.amazonaws.com:8080
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
ksql>
Confluent REST Proxy configuration
The Confluent REST Proxy (cp-kafka-rest) image uses the REST Proxy configuration
setting names. Convert the REST Proxy configurations to environment variables as below:
- Prefix with KAFKA_REST_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
For example, use the
KAFKA_REST_SCHEMA_REGISTRY_URL environment variable to set
schema.registry.url.
See Standalone REST Proxy Configuration Options for the configuration settings that REST Proxy supports.
The following command sets the bootstrap.servers, listeners, and schema.registry.url configurations:
docker run -d \
--net=host \
--name=kafka-rest \
-e KAFKA_REST_BOOTSTRAP_SERVERS=localhost:29092 \
-e KAFKA_REST_LISTENERS=http://localhost:8082 \
-e KAFKA_REST_SCHEMA_REGISTRY_URL=http://localhost:8081 \
confluentinc/cp-kafka-rest:8.0.2
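As a quick smoke test against the REST Proxy listener from this example, you can list topics through the v2 API (a minimal sketch):

curl http://localhost:8082/topics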
Required Confluent REST Proxy configurations
The following configurations must be passed to run the REST Proxy Docker image.
KAFKA_REST_HOST_NAME
The hostname used to generate absolute URLs in responses. The hostname may be required because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment. For more information, see the Confluent Platform documentation on REST Proxy deployment.

KAFKA_REST_BOOTSTRAP_SERVERS
A list of Kafka brokers to connect to. To learn about the corresponding bootstrap.servers REST Proxy setting, see Standalone REST Proxy Configuration Options.
Confluent Control Center configuration
For the Confluent Control Center (cp-control-center) image, you must convert the property variables used by Confluent Control Center to environment variables. In general, you:

- Replace confluent.controlcenter. in the property variable with CONTROL_CENTER_.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).

Some properties have specific conversions where the general rules do not apply. Also, note that some properties are required and some are optional. For more information, see Control Center Configuration.
For example, the confluent.controlcenter.mail.from property variable should be converted to the CONTROL_CENTER_MAIL_FROM environment variable.
The following example command runs Control Center, passing in Kafka and Connect configuration parameters.
docker run -d \
--net=host \
--name=control-center \
--ulimit nofile=16384:16384 \
-e CONTROL_CENTER_BOOTSTRAP_SERVERS=localhost:29092 \
-e CONTROL_CENTER_REPLICATION_FACTOR=1 \
-e CONTROL_CENTER_CONNECT_connect-cluster-name_CLUSTER=http://localhost:28082 \
-v /mnt/control-center/data:/var/lib/confluent-control-center \
confluentinc/cp-enterprise-control-center:8.0.2
Control Center required configurations
The following configurations must be passed to run the Confluent Control Center image.
CONTROL_CENTER_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

CONTROL_CENTER_REPLICATION_FACTOR
The replication factor for Control Center topics. We recommend setting this to 3 in a production environment.
Control Center optional configurations
CONTROL_CENTER_CONNECT_<connect cluster name>_CLUSTER
To enable Control Center to interact with a Kafka Connect cluster, set this parameter to the REST endpoint URL for the Kafka Connect cluster.

CONTROL_CENTER_CONFLUENT_LICENSE
The Confluent Control Center license key. Without the license key, Confluent Control Center can be used for a 30-day trial period.

CONTROL_CENTER_KAFKA_<name>_BOOTSTRAP_SERVERS
To list bootstrap servers for any additional Kafka cluster being monitored, replace <name> with the name Control Center should use to identify this cluster. For example, with CONTROL_CENTER_KAFKA_production-nyc_BOOTSTRAP_SERVERS, Control Center shows the additional cluster with the name production-nyc in the cluster list.

CONTROL_CENTER_LICENSE
The Confluent Control Center license key. Without the license key, Confluent Control Center can be used for a 30-day trial period.

CONTROL_CENTER_REST_LISTENERS
Set this to the HTTP or HTTPS listener for the Control Center UI. If not set, you may see the following warning message:

WARN DEPRECATION warning: `listeners` configuration is not configured. Falling back to the deprecated `port` configuration. (io.confluent.rest.Application)
Control Center Docker options
- File descriptor limit: Control Center may require many open files, so we recommend setting the file descriptor limit to at least 16384.
- Data persistence: the Control Center image stores its data in the /var/lib/confluent-control-center directory. We recommend that you bind this to a volume on the host machine so that data is persisted across runs.
Confluent Replicator configuration
Confluent Replicator is a Kafka connector and runs on a Kafka Connect cluster.
For the Confluent Replicator image (cp-enterprise-replicator), convert the property variables as follows and use them as environment variables:

- Prefix with CONNECT_.
- Convert to upper-case.
- Separate each word with _.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
For example, run the following command to set properties such as bootstrap.servers, confluent.license, and the topic names for config, offsets, and status:
docker run -d \
--name=cp-enterprise-replicator \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS=localhost:29092 \
-e CONNECT_REST_PORT=28082 \
-e CONNECT_GROUP_ID="quickstart" \
-e CONNECT_CONFIG_STORAGE_TOPIC="quickstart-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="quickstart-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="quickstart-status" \
-e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="localhost" \
-e CONNECT_CONFLUENT_LICENSE="ABC123XYZ737BVT" \
confluentinc/cp-enterprise-replicator:8.0.2
The following example shows how to create a Confluent Replicator connector that replicates the topic "confluent" from a source Kafka cluster (src) to a destination Kafka cluster (dest).
curl -X POST \
-H "Content-Type: application/json" \
--data '{
"name": "confluent-src-to-dest",
"config": {
"connector.class":"io.confluent.connect.replicator.ReplicatorSourceConnector",
"key.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
"value.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
"src.kafka.bootstrap.servers": "kafka-src:9082",
"topic.whitelist": "confluent",
"topic.rename.format": "${topic}.replica"}}' \
http://localhost:28082/connectors
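After the connector is created, you can check its state through the same Connect REST API (a minimal sketch):

curl http://localhost:28082/connectors/confluent-src-to-dest/status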
Required Confluent Replicator configurations
The following configurations must be passed to run the Confluent Replicator Docker image:
CONNECT_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

CONNECT_GROUP_ID
A unique string that identifies the Connect cluster group this worker belongs to.

CONNECT_CONFIG_STORAGE_TOPIC
The name of the topic where connector and task configuration data is stored. This must be the same for all workers with the same group.id.

CONNECT_OFFSET_STORAGE_TOPIC
The name of the topic where offset data for connectors is stored. This must be the same for all workers with the same group.id.

CONNECT_STATUS_STORAGE_TOPIC
The name of the topic where state for connectors is stored. This must be the same for all workers with the same group.id.

CONNECT_KEY_CONVERTER
Converter class for keys. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.

CONNECT_VALUE_CONVERTER
Converter class for values. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.

CONNECT_INTERNAL_KEY_CONVERTER
Converter class for internal keys that implements the Converter interface.

CONNECT_INTERNAL_VALUE_CONVERTER
Converter class for internal values that implements the Converter interface.

CONNECT_REST_ADVERTISED_HOST_NAME
The hostname that will be given out to other workers to connect to. In a Docker environment, your clients must be able to connect to Connect and the other services. The advertised hostname is how Connect gives out a hostname that clients can reach.
Optional Confluent Replicator configurations
CONNECT_CONFLUENT_LICENSE
The Confluent license key. Without the license key, Replicator can be used for a 30-day trial period.
Confluent Replicator Executable configuration
Confluent Replicator Executable (cp-enterprise-replicator-executable)
provides another way to run Replicator by consolidating configuration properties
and abstracting Connect details.
The image depends on input files that can be passed by mounting a directory with the expected input files, or by mounting each file individually. The image also supports passing command-line parameters to the Replicator executable by using environment variables.
The following example starts Replicator, given that the local directory /mnt/replicator/config, which is mounted under /etc/replicator in the Docker container, contains the required files consumer.properties and producer.properties, plus the optional but often necessary file replication.properties.
docker run -d \
--name=ReplicatorX \
--net=host \
-e REPLICATOR_LOG4J_ROOT_LOGLEVEL=DEBUG \
-v /mnt/replicator/config:/etc/replicator \
confluentinc/cp-enterprise-replicator-executable:8.0.2
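Alternatively, you can mount each input file individually instead of the whole directory; a sketch, assuming the same host paths:

docker run -d \
--name=ReplicatorX \
--net=host \
-v /mnt/replicator/config/consumer.properties:/etc/replicator/consumer.properties \
-v /mnt/replicator/config/producer.properties:/etc/replicator/producer.properties \
-v /mnt/replicator/config/replication.properties:/etc/replicator/replication.properties \
confluentinc/cp-enterprise-replicator-executable:8.0.2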
The next example starts Replicator without a replication.properties file, specifying the replication properties with environment variables instead. For a complete list of the expected environment variables, see the configurations in the next sections.
docker run -d \
--name=ReplicatorX \
--net=host \
-e CLUSTER_ID=replicator-east-to-west \
-e WHITELIST=confluent \
-e TOPIC_RENAME_FORMAT='${topic}.replica' \
-e REPLICATOR_LOG4J_ROOT_LOGLEVEL=DEBUG \
-v /mnt/replicator/config:/etc/replicator \
confluentinc/cp-enterprise-replicator-executable:8.0.2
Required Confluent Replicator Executable configurations
The following files must be passed to run the Replicator Executable Docker image:
CONSUMER_CONFIG
A file that contains the configuration settings for the consumer reading from the origin cluster. The default location is /etc/replicator/consumer.properties in the Docker image.

PRODUCER_CONFIG
A file that contains the configuration settings for the producer writing to the destination cluster. The default location is /etc/replicator/producer.properties in the Docker image.

CLUSTER_ID
A string that specifies the unique identifier for the Replicator cluster. The default value is replicator.
Optional Confluent Replicator Executable configurations
Additional configurations that are optional and may be passed to Replicator Executable via environment variables instead of files are:
REPLICATION_CONFIG
A file that contains the configuration settings for the replication from the origin cluster. The default location is /etc/replicator/replication.properties in the Docker image.

CONSUMER_MONITORING_CONFIG
A file that contains the configuration settings of the producer writing monitoring information related to Replicator's consumer. The default location is /etc/replicator/consumer-monitoring.properties in the Docker image.

PRODUCER_MONITORING_CONFIG
A file that contains the configuration settings of the producer writing monitoring information related to Replicator's producer. The default location is /etc/replicator/producer-monitoring.properties in the Docker image.

BLACKLIST
A comma-separated list of topics that should not be replicated, even if they are included in the whitelist or matched by the regular expression.

WHITELIST
A comma-separated list of the names of topics that should be replicated. Any topic that is in this list and not in the blacklist will be replicated.

CLUSTER_THREADS
The total number of threads across all workers in the Replicator cluster.

CONFLUENT_LICENSE
The Confluent license key. Without the license key, Replicator can be used for a 30-day trial period.

TOPIC_AUTO_CREATE
Whether to automatically create topics in the destination cluster if required. If you disable automatic topic creation, Kafka Streams and ksqlDB applications continue to work, because they use the Admin Client, so topics are still created.

TOPIC_CONFIG_SYNC
Whether to periodically sync topic configuration to the destination cluster.

TOPIC_CONFIG_SYNC_INTERVAL_MS
Specifies how frequently to check for configuration changes when topic.config.sync is enabled.

TOPIC_CREATE_BACKOFF_MS
Time to wait before retrying auto topic creation or expansion.

TOPIC_POLL_INTERVAL_MS
Specifies how frequently to poll the source cluster for new topics matching the whitelist or regular expression.

TOPIC_PRESERVE_PARTITIONS
Whether to automatically increase the number of partitions in the destination cluster to match the source cluster, so that messages replicated from the source cluster use the same partition in the destination cluster.

TOPIC_REGEX
A regular expression that matches the names of the topics to be replicated. Any topic that matches this expression (or is listed in the whitelist) and is not in the blacklist will be replicated.

TOPIC_RENAME_FORMAT
A format string for the topic name in the destination cluster, which may contain ${topic} as a placeholder for the originating topic name.

TOPIC_TIMESTAMP_TYPE
The timestamp type for the topics in the destination cluster.
Kafka MQTT Proxy configuration
For the Confluent MQTT Proxy Docker image, convert the property variables as follows and use them as environment variables:

- Prefix with KAFKA_MQTT_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
Required Kafka MQTT Proxy configurations
The following configurations must be passed to run the Confluent MQTT Proxy Docker image.
KAFKA_MQTT_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KAFKA_MQTT_TOPIC_REGEX_LIST
A comma-separated list of pairs of type '<kafka topic>:<regex>' that is used to map MQTT topics to Kafka topics.
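For example, a minimal hedged sketch of running the MQTT Proxy image with the two required settings; the broker address and the topic mapping (MQTT topics ending in temperature routed to the Kafka topic temperature) are illustrative:

docker run -d \
--name=kafka-mqtt \
--net=host \
-e KAFKA_MQTT_BOOTSTRAP_SERVERS=localhost:29092 \
-e KAFKA_MQTT_TOPIC_REGEX_LIST='temperature:.*temperature' \
confluentinc/cp-kafka-mqtt:8.0.2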