Docker Image Configuration Reference for Confluent Platform
This topic describes how to configure the Docker images when starting Confluent Platform.
You can dynamically specify configuration values in the Confluent Platform Docker images with environment variables. You can use the
Docker -e or --env flags to specify various configurations.
Kafka configuration
This section helps you configure the following images that include Kafka:
- cp-kafka: includes Kafka.
- cp-server: includes role-based access control, self-balancing clusters, and more, in addition to Kafka.
- confluent-local: includes Kafka and Confluent REST Proxy. Defaults to running in KRaft mode.
Convert the properties file variables according to the following rules and use them as Docker environment variables:
- Prefix Kafka component properties with KAFKA_. For example, metric.reporters is a Kafka property, so it should be converted to KAFKA_METRIC_REPORTERS.
- Prefix Confluent component properties for cp-server with CONFLUENT_. For example, confluent.metrics.reporter.bootstrap.servers is a Confluent Enterprise feature property, so it should be converted to CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
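The conversion rules above can be sketched as a small shell helper; to_env is a hypothetical name, not part of the images. Note that the replacement order matters: existing underscores must be doubled before periods become single underscores, or the two rules collide.

```shell
# Sketch of the property-name conversion rules (illustrative only).
to_env() {  # usage: to_env PREFIX property.name
  printf '%s%s\n' "$1" "$(printf '%s' "$2" \
    | tr '[:lower:]' '[:upper:]' \
    | sed -e 's/_/__/g' -e 's/-/___/g' -e 's/\./_/g')"
}

to_env KAFKA_ metric.reporters
# KAFKA_METRIC_REPORTERS
to_env CONNECT_ offset.storage.topic
# CONNECT_OFFSET_STORAGE_TOPIC
```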
The required configurations vary depending on whether you are running Kafka in KRaft or ZooKeeper mode.
Required Kafka configurations for KRaft mode
Important
For more information about configuring KRaft in production, see KRaft Configuration Reference for Confluent Platform.
KAFKA_PROCESS_ROLES
The role of this server (controller, broker, or broker,controller).

KAFKA_NODE_ID
The unique identifier for this server.

KAFKA_CONTROLLER_QUORUM_VOTERS
A comma-separated list of quorum voters. Each controller is identified with its ID, host, and port information in the format {id}@{host}:{port}.

KAFKA_CONTROLLER_LISTENER_NAMES
A comma-separated list of the names of the listeners used by the controller. On a node with process.roles=broker, only the first listener in the list is used by the broker. ZooKeeper-mode brokers should not set this value. For KRaft controllers in isolated or combined mode, the node listens as a KRaft controller on all listeners that are listed for this property, and each must appear in the listeners property.
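As a concrete illustration of these settings, a KRaft server.properties fragment such as the following (the ID and host name are examples only):

```properties
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@kafka-1:29093
controller.listener.names=CONTROLLER
```

becomes -e KAFKA_PROCESS_ROLES=broker,controller, -e KAFKA_NODE_ID=1, -e KAFKA_CONTROLLER_QUORUM_VOTERS=1@kafka-1:29093, and -e KAFKA_CONTROLLER_LISTENER_NAMES=CONTROLLER on the docker run line.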
Required Kafka configurations for ZooKeeper mode
As of Confluent Platform 7.5, ZooKeeper is deprecated for new deployments. Confluent recommends KRaft mode for new deployments. For more information, see KRaft Overview for Confluent Platform.
KAFKA_ZOOKEEPER_CONNECT
Instructs Kafka how to connect to ZooKeeper.

KAFKA_ADVERTISED_LISTENERS
Required for ZooKeeper mode. Describes the host name that is advertised and can be reached by clients. The value is published to ZooKeeper for clients to use.
If using the TLS/SSL or SASL protocol, the endpoint value must specify the protocol in the following formats:

- SSL: SSL:// or SASL_SSL://
- SASL: SASL_PLAINTEXT:// or SASL_SSL://
Optional cp-server setting
The license key is an optional setting for the Enterprise Kafka (cp-server) image; without it, the image can be used for a 30-day trial period. For a complete list of Confluent Server configuration settings, see Kafka Configuration Reference for Confluent Platform.

KAFKA_CONFLUENT_LICENSE
The Enterprise Kafka license key. Without the license key, Confluent Server can be used for a 30-day trial period. Note that confluent is part of the property name and not the component name. For example, use KAFKA_CONFLUENT_LICENSE to specify the Kafka license.
Example configurations
The following examples show how to set environment variables and run the Kafka images that Confluent provides. These examples:

- Set the KAFKA_ADVERTISED_LISTENERS variable to localhost:29092. This makes Kafka accessible from outside of the container by advertising its location on the Docker host.
- Set KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR to 1. This is required when you are running with a single-node cluster. If you have three or more nodes, you can use the default.
- Each KRaft cluster must have a unique cluster ID assigned to its CLUSTER_ID variable. To learn how to use the kafka-storage tool to generate a unique ID, see Generate and format IDs.
- These examples show KRaft combined mode, which is not supported for production. For production configuration, see Production Configuration Options.
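For real clusters, use the kafka-storage random-uuid command shown in the examples that follow. As an illustration of what that ID looks like, here is a hypothetical stand-in that produces an ID of the same shape: 16 random bytes, base64-encoded, made URL-safe, and trimmed to 22 characters.

```shell
# Illustrative only -- prefer the kafka-storage tool shipped in the images.
gen_cluster_id() {
  head -c 16 /dev/urandom | base64 | tr '+/' '-_' | cut -c1-22
}

gen_cluster_id
```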
cp-kafka example
The following examples show how to start cp-kafka both in KRaft combined mode and in ZooKeeper mode.
Combined mode is not supported for production clusters. For production configuration recommendations, see Production Configuration Options.
Generate a random-uuid using the kafka-storage tool:
/bin/kafka-storage random-uuid
Assign the output to the CLUSTER_ID variable:
docker run -d \
--name=kafka-kraft \
-h kafka-kraft \
-p 9101:9101 \
-e KAFKA_NODE_ID=1 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT' \
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://kafka-kraft:29092,PLAINTEXT_HOST://localhost:9092' \
-e KAFKA_JMX_PORT=9101 \
-e KAFKA_JMX_HOSTNAME=localhost \
-e KAFKA_PROCESS_ROLES='broker,controller' \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@kafka-kraft:29093' \
-e KAFKA_LISTENERS='PLAINTEXT://kafka-kraft:29092,CONTROLLER://kafka-kraft:29093,PLAINTEXT_HOST://0.0.0.0:9092' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e CLUSTER_ID='MkU3OEVBNTcwNTJENDM2Qk' \
confluentinc/cp-kafka:7.6.8
First start ZooKeeper, then start the Kafka broker.
As of Confluent Platform 7.5, ZooKeeper is deprecated for new deployments. Confluent recommends KRaft mode for new deployments. For more information, see KRaft Overview for Confluent Platform.
docker run -d \
--net=host \
--name=kafka \
-e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
-e KAFKA_BROKER_ID=2 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:7.6.8
cp-server example
The following examples show how to start cp-server both in KRaft combined mode and in ZooKeeper mode.
Combined mode is not supported for production clusters. For production configuration recommendations, see Production Configuration Options.
Generate a random-uuid using the kafka-storage tool:
/bin/kafka-storage random-uuid
Assign the output to the CLUSTER_ID variable:
docker run -d \
--name=kafka-kraft \
-h kafka-kraft \
-p 9101:9101 \
-e KAFKA_NODE_ID=1 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT' \
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://kafka-kraft:29092,PLAINTEXT_HOST://localhost:9092' \
-e KAFKA_PROCESS_ROLES='broker,controller' \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@kafka-kraft:29093' \
-e KAFKA_LISTENERS='PLAINTEXT://kafka-kraft:29092,CONTROLLER://kafka-kraft:29093,PLAINTEXT_HOST://0.0.0.0:9092' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e CLUSTER_ID='q1Sh-9_ISia_zwGINzRvyQ' \
confluentinc/cp-server:7.6.8
First start ZooKeeper, then start the Kafka broker.
docker run -d \
--net=host \
--name=kafka \
-e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
-e KAFKA_BROKER_ID=2 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e CONFLUENT_SUPPORT_CUSTOMER_ID=c0 \
-e KAFKA_CONFLUENT_LICENSE="ABC123XYZ737BVT" \
confluentinc/cp-server:7.6.8
If you want to use Confluent Auto Data Balancing features, see Quick Start for Auto Data Balancing in Confluent Platform.
confluent-local example
The confluent-local image is configured to run in KRaft combined mode by default and requires no configuration to get started.
Combined mode is not supported for production. For production configuration recommendations, see Production Configuration Options.
The following example shows how to run confluent-local with the default configurations:
docker run -d -P --name=kafka-kraft confluentinc/confluent-local:7.6.8
For a list of the default configurations for this image, see the configureDefaults file on GitHub.
The following example shows how to generate a cluster ID and specify the ID when you run the image.
/bin/kafka-storage random-uuid
Assign the output to the CLUSTER_ID variable:
docker run -d -P \
--name=kafka-kraft \
-e CLUSTER_ID=q1Sh-9_ISia_zwGINzRvyQ \
confluentinc/confluent-local:7.6.8
Confluent Schema Registry configuration
For the Schema Registry (cp-schema-registry) image, convert the property
variables as below and use them as environment variables:
- Prefix with SCHEMA_REGISTRY_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
For example, run the following to set kafkastore.bootstrap.servers,
host.name, listeners and debug:
docker run -d \
--net=host \
--name=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=SSL://hostname2:9092 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://localhost:8081 \
-e SCHEMA_REGISTRY_DEBUG=true \
confluentinc/cp-schema-registry:7.6.8
Required Schema Registry configurations
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
A list of Kafka brokers to connect to.

SCHEMA_REGISTRY_HOST_NAME
Required if you are running Schema Registry with multiple nodes. The host name is required because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment. The host name must be resolvable because secondary nodes serve registration requests indirectly by forwarding them to the current primary and returning the response supplied by the primary. For more information, see the Schema Registry documentation on Single Primary Architecture.
Kafka Connect configuration
For the Kafka Connect (cp-kafka-connect) image, convert the property
variables as below and use them as environment variables:
- Prefix with CONNECT_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
For example, run this command to set the required properties like
bootstrap.servers, the topic names for config, offsets, and
status, as well as the key and value converters:
docker run -d \
--name=kafka-connect \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS=localhost:29092 \
-e CONNECT_REST_PORT=28082 \
-e CONNECT_GROUP_ID="quickstart" \
-e CONNECT_CONFIG_STORAGE_TOPIC="quickstart-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="quickstart-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="quickstart-status" \
-e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="localhost" \
-e CONNECT_PLUGIN_PATH=/usr/share/java \
confluentinc/cp-kafka-connect:7.6.8
Required Kafka Connect configurations
The following configurations must be passed to run the Kafka Connect Docker image.
CONNECT_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

CONNECT_GROUP_ID
A unique string that identifies the Connect cluster group this worker belongs to.

CONNECT_CONFIG_STORAGE_TOPIC
The name of the topic in which to store connector and task configuration data. This must be the same for all workers with the same group.id.

CONNECT_OFFSET_STORAGE_TOPIC
The name of the topic in which to store offset data for connectors. This must be the same for all workers with the same group.id.

CONNECT_STATUS_STORAGE_TOPIC
The name of the topic in which to store state for connectors. This must be the same for all workers with the same group.id.

CONNECT_KEY_CONVERTER
Converter class for keys. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.

CONNECT_VALUE_CONVERTER
Converter class for values. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.

CONNECT_REST_ADVERTISED_HOST_NAME
The hostname that is given out to other workers to connect to. In a Docker environment, your clients must be able to connect to Connect and the other services. The advertised hostname is how Connect gives out a hostname that can be reached by the client.

CONNECT_PLUGIN_PATH
The location from which to load Connect plugins in class loading isolation. If using the confluent-hub client, include /usr/share/confluent-hub-components, the default path that confluent-hub installs to.
Optional Kafka Connect configurations
The images marked as Commercially licensed include proprietary components and must be licensed from Confluent when deployed.
CONNECT_CONFLUENT_LICENSE
Required if using the cp-server-connect or cp-server-connect-base Docker images. Without a license key, Kafka Connect can be used for a 30-day trial period. See Docker Image Reference for Confluent Platform for more information.
Note
To configure the JVM for Connect, you must use the KAFKA_HEAP_OPTS
setting instead of CONNECT_KAFKA_HEAP_OPTS.
ksqlDB Server configuration
For a complete list of ksqlDB parameters, see ksqlDB Configuration Parameter Reference.
For the ksqlDB Server image (cp-ksqldb-server), convert the property variables as
following and use them as environment variables:
- Prefix with KSQL_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
ksqlDB Headless Server configurations
Run a standalone ksqlDB Server instance in a container.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_KSQL_SERVICE_ID
The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.

KSQL_KSQL_QUERIES_FILE
A file that specifies predefined SQL queries.
docker run -d \
-v /path/on/host:/path/in/container/ \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_KSQL_SERVICE_ID=confluent_standalone_2_ \
-e KSQL_KSQL_QUERIES_FILE=/path/in/container/queries.sql \
confluentinc/cp-ksqldb-server:7.6.8
ksqlDB Headless Server with Interceptors configurations
Run a standalone ksqlDB Server with specified interceptor classes in a container. For more info on interceptor classes, see Confluent Monitoring Interceptors.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_KSQL_SERVICE_ID
The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.

KSQL_KSQL_QUERIES_FILE
A file that specifies predefined SQL queries.

KSQL_PRODUCER_INTERCEPTOR_CLASSES
A list of fully qualified class names for producer interceptors.

KSQL_CONSUMER_INTERCEPTOR_CLASSES
A list of fully qualified class names for consumer interceptors.
docker run -d \
-v /path/on/host:/path/in/container/ \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_KSQL_SERVICE_ID=confluent_standalone_2_ \
-e KSQL_PRODUCER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor \
-e KSQL_CONSUMER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor \
-e KSQL_KSQL_QUERIES_FILE=/path/in/container/queries.sql \
confluentinc/cp-ksqldb-server:7.6.8
Interactive Server configuration
Run a ksqlDB Server that enables manual interaction by using the ksqlDB CLI.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_KSQL_SERVICE_ID
The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.

KSQL_LISTENERS
A list of URIs, including the protocol, that the ksqlDB server listens on.
docker run -d \
-p 127.0.0.1:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_LISTENERS=http://0.0.0.0:8088/ \
-e KSQL_KSQL_SERVICE_ID=confluent_test_2 \
confluentinc/cp-ksqldb-server:7.6.8
Interactive Server configuration with Interceptors
Run a ksqlDB Server with interceptors that enables manual interaction by using the ksqlDB CLI. For more info on interceptor classes, see Confluent Monitoring Interceptors.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_KSQL_SERVICE_ID
The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.

KSQL_LISTENERS
A list of URIs, including the protocol, that the ksqlDB server listens on.

KSQL_PRODUCER_INTERCEPTOR_CLASSES
A list of fully qualified class names for producer interceptors.

KSQL_CONSUMER_INTERCEPTOR_CLASSES
A list of fully qualified class names for consumer interceptors.
docker run -d \
-p 127.0.0.1:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_LISTENERS=http://0.0.0.0:8088/ \
-e KSQL_KSQL_SERVICE_ID=confluent_test_2_ \
-e KSQL_PRODUCER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor \
-e KSQL_CONSUMER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor \
confluentinc/cp-ksqldb-server:7.6.8
In interactive mode, the CLI instance running outside Docker can connect to the server running in Docker.
./bin/ksql
...
CLI v7.6.8, Server v7.6.8-SNAPSHOT located at http://localhost:8088
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
ksql>
Connect to a secure Kafka cluster, like Confluent Cloud
Run a ksqlDB Server that uses a secure connection to a Kafka cluster. For more information, see Configure Security for ksqlDB.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_KSQL_SERVICE_ID
The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.

KSQL_LISTENERS
A list of URIs, including the protocol, that the ksqlDB server listens on.

KSQL_KSQL_SINK_REPLICAS
The default number of replicas for the topics created by ksqlDB. The default is one.

KSQL_KSQL_STREAMS_REPLICATION_FACTOR
The replication factor for internal topics, the command topic, and output topics.

KSQL_SECURITY_PROTOCOL
The protocol that your Kafka cluster uses for security.

KSQL_SASL_MECHANISM
The SASL mechanism that your Kafka cluster uses for security.

KSQL_SASL_JAAS_CONFIG
The Java Authentication and Authorization Service (JAAS) configuration.
docker run -d \
-p 127.0.0.1:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=REMOVED_SERVER1:9092,REMOVED_SERVER2:9093,REMOVED_SERVER3:9094 \
-e KSQL_LISTENERS=http://0.0.0.0:8088/ \
-e KSQL_KSQL_SERVICE_ID=default_ \
-e KSQL_KSQL_SINK_REPLICAS=3 \
-e KSQL_KSQL_STREAMS_REPLICATION_FACTOR=3 \
-e KSQL_SECURITY_PROTOCOL=SASL_SSL \
-e KSQL_SASL_MECHANISM=PLAIN \
-e KSQL_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<username>\" password=\"<strong-password>\";" \
confluentinc/cp-ksqldb-server:7.6.8
Configure a ksqlDB Server by using Java system properties
Run a ksqlDB Server with a configuration that’s defined by Java properties.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_OPTS
A space-separated list of Java options.
docker run -d \
-v /path/on/host:/path/in/container/ \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_OPTS="-Dksql.service.id=confluent_test_3_ -Dksql.queries.file=/path/in/container/queries.sql" \
confluentinc/cp-ksqldb-server:7.6.8
View logs
Use the docker logs command to view ksqlDB logs that are generated from
within the container.
docker logs -f <container-id>
[2018-05-24 23:43:05,591] INFO stream-thread [_confluent-ksql-default_transient_1507119262168861890_1527205385485-71c8a94c-abe9-45ba-91f5-69a762ec5c1d-StreamThread-17] Starting (org.apache.kafka.streams.processor.internals.StreamThread:713)
...
ksqlDB CLI configuration
For the ksqlDB CLI image (cp-ksqldb-cli), convert the property variables as
following and use them as environment variables:
- Prefix with KSQL_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
Connect to a Dockerized ksqlDB Server
Run a ksqlDB CLI instance in a container and connect to a ksqlDB Server that’s running in a container.
The Docker network created by ksqlDB Server enables you to connect to a Dockerized ksqlDB server.
KSQL_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KSQL_OPTS
A space-separated list of Java options.
# Run ksqlDB Server.
docker run -d -p 10.0.0.11:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_OPTS="-Dksql.service.id=confluent_test_3_ -Dlisteners=http://0.0.0.0:8088/" \
confluentinc/cp-ksqldb-server:7.6.8
# Connect the ksqlDB CLI to the server.
docker run -it confluentinc/cp-ksqldb-cli:7.6.8 http://10.0.0.11:8088
...
Copyright 2017 Confluent Inc.
CLI v7.6.8-SNAPSHOT, Server v7.6.8-SNAPSHOT located at http://10.0.0.11:8088
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
ksql>
Provide a configuration file
Set up a ksqlDB CLI instance by using a configuration file, and run it in a container.
# Assume ksqlDB Server is running.
# Ensure that the configuration file exists.
ls /path/on/host/ksql-cli.properties
docker run -it \
-v /path/on/host/:/path/in/container \
confluentinc/cp-ksqldb-cli:7.6.8 http://10.0.0.11:8088 \
--config-file /path/in/container/ksql-cli.properties
Connect to a ksqlDB Server running on another host, like AWS
Run a ksqlDB CLI instance in a container and connect to a remote ksqlDB Server host.
docker run -it confluentinc/cp-ksqldb-cli:7.6.8 \
http://ec2-etc.us-etc.compute.amazonaws.com:8080
...
Copyright 2017 Confluent Inc.
CLI v7.6.8-SNAPSHOT, Server v7.6.8-SNAPSHOT located at http://ec2-blah.us-blah.compute.amazonaws.com:8080
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
ksql>
Confluent REST Proxy configuration
The Confluent REST Proxy (cp-kafka-rest) image uses the REST Proxy configuration
setting names. Convert the REST Proxy configurations to environment variables as below:
- Prefix with KAFKA_REST_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
For example, use the
KAFKA_REST_SCHEMA_REGISTRY_URL environment variable to set
schema.registry.url.
See REST Proxy Configuration Options for the configuration settings that REST Proxy supports.
The following command sets listeners, schema.registry.url,
bootstrap.servers, and zookeeper.connect (ZooKeeper mode only):
docker run -d \
--net=host \
--name=kafka-rest \
-e KAFKA_REST_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_REST_LISTENERS=http://localhost:8082 \
-e KAFKA_REST_SCHEMA_REGISTRY_URL=http://localhost:8081 \
-e KAFKA_REST_BOOTSTRAP_SERVERS=localhost:29092 \
confluentinc/cp-kafka-rest:7.6.8
Required Confluent REST Proxy configurations
The following configurations must be passed to run the REST Proxy Docker image.
KAFKA_REST_HOST_NAME
The hostname used to generate absolute URLs in responses. The hostname may be required because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment. For more information, see the Confluent Platform documentation on REST Proxy deployment.

KAFKA_REST_BOOTSTRAP_SERVERS
A list of Kafka brokers to connect to. To learn about the corresponding bootstrap.servers REST Proxy setting, see REST Proxy Configuration Options.

KAFKA_REST_ZOOKEEPER_CONNECT
This variable is deprecated in REST Proxy v2. Use the variable if you are using REST Proxy v1 and not using KAFKA_REST_BOOTSTRAP_SERVERS.

The ZooKeeper connection string, in the form hostname:port, where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down, you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3.

The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string, which puts its data under some path in the global ZooKeeper namespace. If so, the consumer should use the same chroot path in its connection string. For example, to give a chroot path of /chroot/path, you would use the connection string hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
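The host-list and chroot parts of a connection string can be teased apart mechanically; the helpers below are illustrative only and not part of REST Proxy.

```shell
# Split a ZooKeeper connection string into its host list and chroot path.
zk_hosts()  { printf '%s\n' "${1%%/*}"; }                       # text before the first '/'
zk_chroot() { case "$1" in */*) printf '/%s\n' "${1#*/}";; *) printf '/\n';; esac; }

zk_hosts  'zk1:2181,zk2:2181,zk3:2181/chroot/path'   # zk1:2181,zk2:2181,zk3:2181
zk_chroot 'zk1:2181,zk2:2181,zk3:2181/chroot/path'   # /chroot/path
zk_chroot 'zk1:2181'                                 # / (the namespace root)
```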
Confluent Control Center (Legacy) configuration
For the Confluent Control Center (Legacy) (cp-control-center) image, you must convert property variables used by
Confluent Control Center (Legacy) to environment variables using the following rules. In general, you:

- Replace confluent.controlcenter in the property variable with CONTROL_CENTER_.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).

Some of the properties have specific conversions where the general rules do not apply. Also, note that some properties are required and others are optional.
For example, the confluent.controlcenter.mail.from property variable should be
converted to the CONTROL_CENTER_MAIL_FROM environment variable.
The following example command runs Control Center (Legacy), passing in Kafka and Connect configuration parameters.
docker run -d \
--net=host \
--name=control-center \
--ulimit nofile=16384:16384 \
-e CONTROL_CENTER_BOOTSTRAP_SERVERS=localhost:29092 \
-e CONTROL_CENTER_REPLICATION_FACTOR=1 \
-e CONTROL_CENTER_CONNECT_connect-cluster-name_CLUSTER=http://localhost:28082 \
-v /mnt/control-center/data:/var/lib/confluent-control-center \
confluentinc/cp-enterprise-control-center:7.6.8
Control Center (Legacy) required configurations
The following configurations must be passed to run the Confluent Control Center (Legacy) image.
CONTROL_CENTER_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

CONTROL_CENTER_REPLICATION_FACTOR
Replication factor for Control Center (Legacy) topics. We recommend setting this to 3 in a production environment.
Control Center (Legacy) optional configurations
CONTROL_CENTER_CONNECT_<connect cluster name>_CLUSTER
To enable Control Center (Legacy) to interact with a Kafka Connect cluster, set this parameter to the REST endpoint URL for the Kafka Connect cluster.

CONTROL_CENTER_CONFLUENT_LICENSE
The Confluent Control Center (Legacy) license key. Without the license key, Confluent Control Center (Legacy) can be used for a 30-day trial period.

CONTROL_CENTER_KAFKA_<name>_BOOTSTRAP_SERVERS
To list bootstrap servers for any additional Kafka cluster being monitored, replace <name> with the name Control Center (Legacy) should use to identify this cluster. For example, using CONTROL_CENTER_KAFKA_production-nyc_BOOTSTRAP_SERVERS, Control Center (Legacy) shows the additional cluster with the name production-nyc in the cluster list.

CONTROL_CENTER_LICENSE
The Confluent Control Center (Legacy) license key. Without the license key, Confluent Control Center (Legacy) can be used for a 30-day trial period.

CONTROL_CENTER_REST_LISTENERS
Set this to the HTTP or HTTPS listener URL of the Control Center (Legacy) UI. If not set, you may see the following warning message:

WARN DEPRECATION warning: `listeners` configuration is not configured. Falling back to the deprecated `port` configuration. (io.confluent.rest.Application)
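For example, to serve the UI over plain HTTP and avoid the warning, you might pass the following on the docker run line (port 9021 is the conventional Control Center port, shown here as an illustration):

```
-e CONTROL_CENTER_REST_LISTENERS=http://0.0.0.0:9021 \
```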
Control Center (Legacy) Docker options
File descriptor limit: Control Center (Legacy) may require many open files, so we recommend setting the file descriptor limit to at least 16384.

Data persistence: the Control Center (Legacy) image stores its data in the /var/lib/confluent-control-center directory. We recommend that you bind this to a volume on the host machine so that data is persisted across runs.
Confluent Replicator configuration
Confluent Replicator is a Kafka connector and runs on a Kafka Connect cluster.
For the Confluent Replicator image (cp-enterprise-replicator), convert the property
variables as following and use them as environment variables:
- Prefix with CONNECT_.
- Convert to upper-case.
- Separate each word with _.
- Replace a period (.) with a single underscore (_).
- Replace an underscore (_) with double underscores (__).
- Replace a dash (-) with triple underscores (___).
For example, run the following command to set properties such as
bootstrap.servers, confluent.license, and the topic names for config,
offsets, and status:
docker run -d \
--name=cp-enterprise-replicator \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS=localhost:29092 \
-e CONNECT_REST_PORT=28082 \
-e CONNECT_GROUP_ID="quickstart" \
-e CONNECT_CONFIG_STORAGE_TOPIC="quickstart-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="quickstart-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="quickstart-status" \
-e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="localhost" \
-e CONNECT_CONFLUENT_LICENSE="ABC123XYZ737BVT" \
confluentinc/cp-enterprise-replicator:7.6.8
The following example shows how to create a Confluent Replicator connector that replicates the topic "confluent" from a source Kafka cluster (src) to a destination Kafka cluster (dest).
curl -X POST \
-H "Content-Type: application/json" \
--data '{
"name": "confluent-src-to-dest",
"config": {
"connector.class":"io.confluent.connect.replicator.ReplicatorSourceConnector",
"key.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
"value.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
"src.kafka.bootstrap.servers": "kafka-src:9082",
"topic.whitelist": "confluent",
"topic.rename.format": "${topic}.replica"}}' \
http://localhost:28082/connectors
Required Confluent Replicator configurations
The following configurations must be passed to run the Confluent Replicator Docker image:
CONNECT_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

CONNECT_GROUP_ID
A unique string that identifies the Connect cluster group this worker belongs to.

CONNECT_CONFIG_STORAGE_TOPIC
The name of the topic where connector and task configuration data is stored. This must be the same for all workers with the same group.id.

CONNECT_OFFSET_STORAGE_TOPIC
The name of the topic where offset data for connectors is stored. This must be the same for all workers with the same group.id.

CONNECT_STATUS_STORAGE_TOPIC
The name of the topic where state for connectors is stored. This must be the same for all workers with the same group.id.

CONNECT_KEY_CONVERTER
Converter class for keys. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.

CONNECT_VALUE_CONVERTER
Converter class for values. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.

CONNECT_INTERNAL_KEY_CONVERTER
Converter class for internal keys that implements the Converter interface.

CONNECT_INTERNAL_VALUE_CONVERTER
Converter class for internal values that implements the Converter interface.

CONNECT_REST_ADVERTISED_HOST_NAME
The hostname that will be given out to other workers to connect to. In a Docker environment, your clients must be able to connect to Connect and the other services. The advertised hostname is how Connect gives out a hostname that can be reached by the client.
Optional Confluent Replicator configurations
CONNECT_CONFLUENT_LICENSE
The Confluent license key. Without the license key, Replicator can be used for a 30-day trial period.
Confluent Replicator Executable configuration
Confluent Replicator Executable (cp-enterprise-replicator-executable)
provides another way to run Replicator by consolidating configuration properties
and abstracting Connect details.
The image depends on input files that can be passed either by mounting a directory that contains the expected input files or by mounting each file individually. The image also supports passing command-line parameters to the Replicator executable through environment variables.
The following example starts Replicator given that the local directory
/mnt/replicator/config, which is mounted under /etc/replicator in the
Docker container, contains the required files consumer.properties and
producer.properties, as well as the optional but often necessary file
replication.properties.
docker run -d \
--name=ReplicatorX \
--net=host \
-e REPLICATOR_LOG4J_ROOT_LOGLEVEL=DEBUG \
-v /mnt/replicator/config:/etc/replicator \
confluentinc/cp-enterprise-replicator-executable:7.6.8
The following similar example starts Replicator without a
replication.properties file, specifying the replication properties with
environment variables instead.
For a complete list of the expected environment variables, see the configurations in the next sections.
docker run -d \
--name=ReplicatorX \
--net=host \
-e CLUSTER_ID=replicator-east-to-west \
-e WHITELIST=confluent \
-e TOPIC_RENAME_FORMAT='${topic}.replica' \
-e REPLICATOR_LOG4J_ROOT_LOGLEVEL=DEBUG \
-v /mnt/replicator/config:/etc/replicator \
confluentinc/cp-enterprise-replicator-executable:7.6.8
Required Confluent Replicator Executable configurations
The following configurations must be passed to run the Replicator Executable Docker image:
CONSUMER_CONFIG
A file that contains the configuration settings for the consumer reading from the origin cluster. Default location is /etc/replicator/consumer.properties in the Docker image.

PRODUCER_CONFIG
A file that contains the configuration settings for the producer writing to the destination cluster. Default location is /etc/replicator/producer.properties in the Docker image.

CLUSTER_ID
A string that specifies the unique identifier for the Replicator cluster. Default value is replicator.
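As an alternative to mounting the whole configuration directory, the required files can be mounted individually. A minimal sketch, assuming the files live under /mnt/replicator/config on the host:

```shell
# Start Replicator Executable with the two required properties files
# mounted one by one at their default in-container locations.
docker run -d \
  --name=ReplicatorX \
  --net=host \
  -e CLUSTER_ID=replicator-east-to-west \
  -v /mnt/replicator/config/consumer.properties:/etc/replicator/consumer.properties \
  -v /mnt/replicator/config/producer.properties:/etc/replicator/producer.properties \
  confluentinc/cp-enterprise-replicator-executable:7.6.8
```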
Optional Confluent Replicator Executable configurations
The following optional configurations may be passed to Replicator Executable as environment variables instead of files:
REPLICATION_CONFIG
A file that contains the configuration settings for the replication from the origin cluster. Default location is /etc/replicator/replication.properties in the Docker image.

CONSUMER_MONITORING_CONFIG
A file that contains the configuration settings of the producer writing monitoring information related to Replicator’s consumer. Default location is /etc/replicator/consumer-monitoring.properties in the Docker image.

PRODUCER_MONITORING_CONFIG
A file that contains the configuration settings of the producer writing monitoring information related to Replicator’s producer. Default location is /etc/replicator/producer-monitoring.properties in the Docker image.

BLACKLIST
A comma-separated list of topics that should not be replicated, even if they are included in the whitelist or matched by the regular expression.

WHITELIST
A comma-separated list of the names of topics that should be replicated. Any topic that is in this list and not in the blacklist will be replicated.

CLUSTER_THREADS
The total number of threads across all workers in the Replicator cluster.

CONFLUENT_LICENSE
The Confluent license key. Without the license key, Replicator can be used for a 30-day trial period.

TOPIC_AUTO_CREATE
Whether to automatically create topics in the destination cluster if required. Even if you disable automatic topic creation, Kafka Streams and ksqlDB applications continue to work, because they create their topics through the Admin Client.

TOPIC_CONFIG_SYNC
Whether to periodically sync topic configuration to the destination cluster.

TOPIC_CONFIG_SYNC_INTERVAL_MS
Specifies how frequently to check for configuration changes when topic.config.sync is enabled.

TOPIC_CREATE_BACKOFF_MS
Time to wait before retrying auto topic creation or expansion.

TOPIC_POLL_INTERVAL_MS
Specifies how frequently to poll the source cluster for new topics matching the whitelist or regular expression.

TOPIC_PRESERVE_PARTITIONS
Whether to automatically increase the number of partitions in the destination cluster to match the source cluster, ensuring that messages replicated from the source cluster use the same partition in the destination cluster.

TOPIC_REGEX
A regular expression that matches the names of the topics to be replicated. Any topic that matches this expression (or is listed in the whitelist) and not in the blacklist will be replicated.

TOPIC_RENAME_FORMAT
A format string for the topic name in the destination cluster, which may contain ${topic} as a placeholder for the originating topic name.

TOPIC_TIMESTAMP_TYPE
The timestamp type for the topics in the destination cluster.
Kafka MQTT Proxy configuration
For the Confluent MQTT Proxy Docker image, convert the property variables as follows and use them as environment variables:
Prefix with KAFKA_MQTT_.
Convert to upper-case.
Replace a period (.) with a single underscore (_).
Replace an underscore (_) with double underscores (__).
Replace a dash (-) with triple underscores (___).
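The rules above can be sketched as a small shell helper (hypothetical, for illustration only). Note that original underscores and dashes must be escaped before periods become single underscores, or the substitutions collide:

```shell
# Convert a Kafka MQTT property name to its Docker environment-variable form.
# Substitution order matters: escape original underscores and dashes first,
# then turn periods into single underscores.
to_mqtt_env() {
  printf '%s\n' "$1" \
    | sed -e 's/_/__/g' -e 's/-/___/g' -e 's/\./_/g' \
    | tr '[:lower:]' '[:upper:]' \
    | sed -e 's/^/KAFKA_MQTT_/'
}

to_mqtt_env "bootstrap.servers"    # KAFKA_MQTT_BOOTSTRAP_SERVERS
to_mqtt_env "topic.regex.list"     # KAFKA_MQTT_TOPIC_REGEX_LIST
```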
Required Kafka MQTT Proxy configurations
The following configurations must be passed to run the Confluent MQTT Proxy Docker image.
KAFKA_MQTT_BOOTSTRAP_SERVERS
A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....

KAFKA_MQTT_TOPIC_REGEX_LIST
A comma-separated list of pairs of type ‘<kafka topic>:<regex>’ that is used to map MQTT topics to Kafka topics.
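Putting the two required settings together, a minimal sketch of starting the MQTT Proxy image; the container name, broker addresses, and topic mapping here are illustrative assumptions:

```shell
# Start MQTT Proxy, mapping MQTT topics ending in "temperature" and
# "brightness" to the Kafka topics "temperature" and "brightness".
docker run -d \
  --net=host \
  --name=mqtt-proxy \
  -e KAFKA_MQTT_BOOTSTRAP_SERVERS=kafka-1:9092,kafka-2:9092 \
  -e KAFKA_MQTT_TOPIC_REGEX_LIST='temperature:.*temperature,brightness:.*brightness' \
  confluentinc/cp-kafka-mqtt:7.6.8
```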
ZooKeeper configuration
For the ZooKeeper (cp-zookeeper) image, convert the zookeeper.properties file
variables as below and use them as environment variables:
Prefix with ZOOKEEPER_.
Convert to upper-case.
Separate each word with _.
Replace a period (.) with a single underscore (_).
Replace an underscore (_) with double underscores (__).
Replace a dash (-) with triple underscores (___).
For example, to set clientPort, tickTime, and syncLimit, run the
following command:
docker run -d \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
-e ZOOKEEPER_TICK_TIME=2000 \
-e ZOOKEEPER_SYNC_LIMIT=2 \
confluentinc/cp-zookeeper:7.6.8
Required ZooKeeper configurations
ZOOKEEPER_CLIENT_PORT
Instructs ZooKeeper where to listen for connections by clients such as Apache Kafka®.

ZOOKEEPER_SERVER_ID
Required when running an ensemble. An ID that uniquely identifies each ZooKeeper server in the ensemble. The ID must be unique within the ensemble and should have a value between 1 and 255. The ID is stored in the myid file, which consists of a single line containing only the text of that machine’s ID. For example, the myid of server 1 would contain only the text "1".

ZOOKEEPER_SERVERS
Required when running an ensemble. Used to specify the list of ZooKeeper servers in an ensemble that work together to provide a highly available and fault-tolerant service. This should be a semicolon-separated list of hostname:peerport:electionport entries. For example: zk1:2888:3888;zk2:2888:3888;zk3:2888:3888
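For example, the first member of a three-node ensemble could be started as follows. This is a sketch: the hostnames and the conventional ZooKeeper peer/election ports 2888/3888 are assumptions to adapt to your network.

```shell
# Start ensemble member 1 of 3; repeat on each host with a unique
# ZOOKEEPER_SERVER_ID while keeping ZOOKEEPER_SERVERS identical everywhere.
docker run -d \
  --net=host \
  --name=zookeeper-1 \
  -e ZOOKEEPER_SERVER_ID=1 \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  -e ZOOKEEPER_TICK_TIME=2000 \
  -e ZOOKEEPER_INIT_LIMIT=5 \
  -e ZOOKEEPER_SYNC_LIMIT=2 \
  -e ZOOKEEPER_SERVERS="zk1:2888:3888;zk2:2888:3888;zk3:2888:3888" \
  confluentinc/cp-zookeeper:7.6.8
```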
For more about running ZooKeeper as an ensemble, see Multi-node Setup. For an example of using Docker with Kafka and ZooKeeper in a multi-node setup, see Tutorial: Multi-Region Clusters on Confluent Platform.