Docker Configuration Parameters for Confluent Platform¶
This topic describes how to configure the Docker images when starting Confluent Platform.
You can dynamically specify configuration values in the Confluent Platform Docker images with environment variables, using the Docker -e or --env flags to set various settings.
See also
For an example that shows this in action, see the Confluent Platform demo. Refer to the demo’s docker-compose.yml file for a configuration reference.
ZooKeeper configuration¶
For the ZooKeeper (cp-zookeeper) image, convert the zookeeper.properties file variables as follows and use them as environment variables:
- Prefix with ZOOKEEPER_.
- Convert to upper-case.
- Separate each word with _.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
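The same conversion rules apply to the other images in this topic, with a different prefix for each. As an illustrative sketch only (the cp images apply this mapping internally; this helper is not part of Confluent Platform), the rules can be expressed in a few lines of Python:

```python
import re

def to_env_var(prop: str, prefix: str = "ZOOKEEPER_") -> str:
    """Map a properties-file key to its Docker environment variable name.

    Illustrative helper only; the Confluent images perform this
    mapping themselves when they translate environment variables
    back into properties.
    """
    name = prop.replace("_", "___")   # underscore -> triple underscore
    name = name.replace("-", "__")    # dash -> double underscore
    name = name.replace(".", "_")     # period -> single underscore
    # Separate camelCase words with an underscore before upper-casing.
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)
    return prefix + name.upper()

print(to_env_var("clientPort"))   # ZOOKEEPER_CLIENT_PORT
print(to_env_var("tickTime"))     # ZOOKEEPER_TICK_TIME
```

Note that the underscore replacement must run before the period and dash replacements, or the underscores it introduces would themselves be tripled.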
For example, to set clientPort, tickTime, and syncLimit, run the following command:
docker run -d \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
-e ZOOKEEPER_TICK_TIME=2000 \
-e ZOOKEEPER_SYNC_LIMIT=2 \
confluentinc/cp-zookeeper:6.2.15
Required ZooKeeper settings¶
ZOOKEEPER_CLIENT_PORT
- Instructs ZooKeeper where to listen for connections by clients such as Apache Kafka®.
ZOOKEEPER_SERVER_ID
- Required only when running in clustered mode. Sets the server ID in the myid file, which consists of a single line containing only the text of that machine's ID. For example, the myid of server 1 would contain only the text "1". The ID must be unique within the ensemble and should have a value between 1 and 255.
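The cp-zookeeper image writes the myid file for you from ZOOKEEPER_SERVER_ID, so you normally never create it by hand in Docker. As a sketch of what the file itself looks like (the data directory path here is a hypothetical stand-in):

```python
import os
import tempfile

# Sketch: the myid file for server 1 of an ensemble is a single line
# holding only that server's ID. The cp-zookeeper image derives this
# file from ZOOKEEPER_SERVER_ID; creating it manually is only needed
# outside Docker.
data_dir = tempfile.mkdtemp()   # stand-in for the ZooKeeper dataDir
server_id = 1                   # must be unique per server, 1-255

with open(os.path.join(data_dir, "myid"), "w") as f:
    f.write(str(server_id))

with open(os.path.join(data_dir, "myid")) as f:
    print(f.read())             # prints: 1
```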
Confluent Kafka configuration¶
For the Kafka (cp-kafka) image, convert the kafka.properties file variables as follows and use them as environment variables:
- Prefix with KAFKA_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
For example, run the following command to set broker.id, advertised.listeners, zookeeper.connect, and offsets.topic.replication.factor:
docker run -d \
--net=host \
--name=kafka \
-e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
-e KAFKA_BROKER_ID=2 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:6.2.15
Note
The KAFKA_ADVERTISED_LISTENERS variable is set to localhost:29092. This makes Kafka accessible from outside the container by advertising its location on the Docker host.
Also notice that KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR is set to 1. This is required when you are running with a single-node cluster. If you have three or more nodes, you can use the default.
Required Confluent Kafka settings¶
KAFKA_ZOOKEEPER_CONNECT
- Instructs Kafka how to get in touch with ZooKeeper.
KAFKA_ADVERTISED_LISTENERS
- Describes the host name that is advertised and can be reached by clients. The value is published to ZooKeeper for clients to use.
If using the SSL or SASL protocol, the endpoint value must specify the protocols in the following formats:
- SSL: SSL:// or SASL_SSL://
- SASL: SASL_PLAINTEXT:// or SASL_SSL://
Confluent Enterprise Kafka configuration¶
The Enterprise Kafka (cp-server) image includes the packages for Confluent Auto Data Balancer and Health+ in addition to Kafka.
For the Enterprise Kafka (cp-server) image, convert the kafka.properties file variables as follows and use them as environment variables:
- Prefix with KAFKA_ for Apache Kafka.
- Prefix with CONFLUENT_ for Confluent components.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
For example, run this command to set broker.id, advertised.listeners, zookeeper.connect, offsets.topic.replication.factor, and confluent.support.customer.id:
docker run -d \
--net=host \
--name=kafka \
-e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
-e KAFKA_BROKER_ID=2 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e CONFLUENT_SUPPORT_CUSTOMER_ID=c0 \
-e KAFKA_CONFLUENT_LICENSE="ABC123XYZ737BVT" \
confluentinc/cp-server:6.2.15
Note
The KAFKA_ADVERTISED_LISTENERS variable is set to localhost:29092. It makes Kafka accessible from outside of the container by advertising its location on the Docker host.
If you want to use Confluent Auto Data Balancing features, see Auto Data Balancing.
Also notice that KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR is set to 1. This is required when you are running with a single-node cluster. If you have three or more nodes, you can use the default.
Required Confluent Enterprise Kafka settings¶
KAFKA_ZOOKEEPER_CONNECT
- Tells Kafka how to get in touch with ZooKeeper.
KAFKA_ADVERTISED_LISTENERS
- Describes the host name that is advertised and can be reached by clients. The value is published to ZooKeeper for clients to use.
If using the SSL or SASL protocol, the endpoint value must specify the protocols in the following formats:
- SSL: SSL:// or SASL_SSL://
- SASL: SASL_PLAINTEXT:// or SASL_SSL://
Optional Confluent Enterprise Kafka settings¶
The following is an optional setting for the Enterprise Kafka (cp-server) image. For a complete list of Confluent Server configuration settings, see Kafka Configuration Reference for Confluent Platform.
KAFKA_CONFLUENT_LICENSE
- The Enterprise Kafka license key. Without the license key, Confluent Server can be used for a 30-day trial period.
Confluent Schema Registry configuration¶
For the Schema Registry (cp-schema-registry) image, convert the property variables as follows and use them as environment variables:
- Prefix with SCHEMA_REGISTRY_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
For example, run the following to set kafkastore.bootstrap.servers, host.name, listeners, and debug:
docker run -d \
--net=host \
--name=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=SSL://hostname2:9092 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://localhost:8081 \
-e SCHEMA_REGISTRY_DEBUG=true \
confluentinc/cp-schema-registry:6.2.15
Required Schema Registry settings¶
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
- A list of Kafka brokers to connect to.
SCHEMA_REGISTRY_HOST_NAME
- The hostname advertised in ZooKeeper. This is required if you are running Schema Registry with multiple nodes. Hostname is required because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment. Hostname must be resolvable because secondary nodes serve registration requests indirectly by forwarding them to the current primary and returning the response supplied by the primary. For more information, see the Schema Registry documentation on Single Primary Architecture.
Confluent REST Proxy configuration¶
The Confluent REST Proxy (cp-kafka-rest) image uses the REST Proxy configuration setting names. Convert the REST Proxy settings to environment variables as follows:
- Prefix with KAFKA_REST_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
For example, use the KAFKA_REST_SCHEMA_REGISTRY_URL environment variable to set schema.registry.url.
See REST Proxy Configuration Options for the configuration settings that REST Proxy supports.
The following command sets listeners, schema.registry.url, and zookeeper.connect:
docker run -d \
--net=host \
--name=kafka-rest \
-e KAFKA_REST_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_REST_LISTENERS=http://localhost:8082 \
-e KAFKA_REST_SCHEMA_REGISTRY_URL=http://localhost:8081 \
-e KAFKA_REST_BOOTSTRAP_SERVERS=localhost:29092 \
confluentinc/cp-kafka-rest:6.2.15
Required Confluent REST Proxy settings¶
The following settings must be passed to run the REST Proxy Docker image.
KAFKA_REST_HOST_NAME
- The hostname used to generate absolute URLs in responses. Hostname may be required because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment. For more information, see the Confluent Platform documentation on REST proxy deployment.
KAFKA_REST_BOOTSTRAP_SERVERS
- A list of Kafka brokers to connect to. To learn about the corresponding bootstrap.servers REST Proxy setting, see REST Proxy Configuration Options.
KAFKA_REST_ZOOKEEPER_CONNECT
- This variable is deprecated in REST Proxy v2. Use it only with REST Proxy v1, and only if you are not using KAFKA_REST_BOOTSTRAP_SERVERS.
The ZooKeeper connection string, in the form hostname:port, where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down, you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3.
The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string, which puts its data under some path in the global ZooKeeper namespace. If so, the consumer should use the same chroot path in its connection string. For example, to give a chroot path of /chroot/path, you would use the connection string hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
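To make the host-list and chroot layout of that connection string concrete, here is a small hypothetical helper (the host:port values are the placeholders from the text, not real servers):

```python
def zookeeper_connect(hosts, chroot=""):
    """Build a ZooKeeper connection string from host:port pairs plus an
    optional chroot path, following the format described above.
    Illustrative only; clients accept the string directly.
    """
    # Hosts are comma-separated; the chroot path, if any, is appended
    # once at the end, not per host.
    return ",".join(hosts) + chroot

conn = zookeeper_connect(
    ["hostname1:port1", "hostname2:port2", "hostname3:port3"],
    chroot="/chroot/path",
)
print(conn)  # hostname1:port1,hostname2:port2,hostname3:port3/chroot/path
```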
Kafka Connect configuration¶
For the Kafka Connect (cp-kafka-connect) image, convert the property variables as follows and use them as environment variables:
- Prefix with CONNECT_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
For example, run this command to set the required properties like bootstrap.servers, the topic names for config, offsets, and status, as well as the key and value converters:
docker run -d \
--name=kafka-connect \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS=localhost:29092 \
-e CONNECT_REST_PORT=28082 \
-e CONNECT_GROUP_ID="quickstart" \
-e CONNECT_CONFIG_STORAGE_TOPIC="quickstart-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="quickstart-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="quickstart-status" \
-e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="localhost" \
-e CONNECT_PLUGIN_PATH=/usr/share/java \
confluentinc/cp-kafka-connect:6.2.15
Required Kafka Connect settings¶
The following settings must be passed to run the Kafka Connect Docker image.
CONNECT_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
CONNECT_GROUP_ID
- A unique string that identifies the Connect cluster group this worker belongs to.
CONNECT_CONFIG_STORAGE_TOPIC
- The name of the topic in which to store connector and task configuration data. This must be the same for all workers with the same group.id.
CONNECT_OFFSET_STORAGE_TOPIC
- The name of the topic in which to store offset data for connectors. This must be the same for all workers with the same group.id.
CONNECT_STATUS_STORAGE_TOPIC
- The name of the topic in which to store state for connectors. This must be the same for all workers with the same group.id.
CONNECT_KEY_CONVERTER
- Converter class for keys. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.
CONNECT_VALUE_CONVERTER
- Converter class for values. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.
CONNECT_INTERNAL_KEY_CONVERTER
- Converter class for internal keys that implements the Converter interface.
CONNECT_INTERNAL_VALUE_CONVERTER
- Converter class for internal values that implements the Converter interface.
CONNECT_REST_ADVERTISED_HOST_NAME
- The hostname that is given out to other workers to connect to. In a Docker environment, your clients must be able to connect to Connect and other services. The advertised hostname is how Connect gives out a hostname that can be reached by the client.
CONNECT_PLUGIN_PATH
- The location from which to load Connect plugins in class loading isolation. If using the confluent-hub client, include /usr/share/confluent-hub-components, the default path that confluent-hub installs to.
Optional Kafka Connect settings¶
CONNECT_CONFLUENT_LICENSE
- The Confluent license key. Without the license key, Kafka Connect can be used for a 30-day trial period.
Confluent Control Center configuration¶
For the Confluent Control Center (cp-control-center) image, convert the property variables as follows and use them as environment variables:
- Replace confluent.controlcenter in the property variable with CONTROL_CENTER_.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
For example, the confluent.controlcenter.mail.from property variable should be converted to the CONTROL_CENTER_MAIL_FROM environment variable.
The following example command runs Control Center, passing in its Kafka and Connect configuration parameters.
docker run -d \
--net=host \
--name=control-center \
--ulimit nofile=16384:16384 \
-e CONTROL_CENTER_BOOTSTRAP_SERVERS=localhost:29092 \
-e CONTROL_CENTER_REPLICATION_FACTOR=1 \
-e CONTROL_CENTER_CONNECT_CLUSTER=http://localhost:28082 \
-v /mnt/control-center/data:/var/lib/confluent-control-center \
confluentinc/cp-enterprise-control-center:6.2.15
Control Center Docker options¶
- File descriptor limit: Control Center may require many open files, so we recommend setting the file descriptor limit to at least 16384.
- Data persistence: the Control Center image stores its data in the /var/lib/confluent-control-center directory. We recommend that you bind this to a volume on the host machine so that data is persisted across runs.
Control Center required settings¶
The following settings must be passed to run the Confluent Control Center image.
CONTROL_CENTER_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
CONTROL_CENTER_REPLICATION_FACTOR
- Replication factor for Control Center topics. We recommend setting this to 3 in a production environment.
Control Center optional settings¶
CONTROL_CENTER_CONNECT_CLUSTER
- To enable Control Center to interact with a Kafka Connect cluster, set this parameter to the REST endpoint URL for the Kafka Connect cluster.
CONTROL_CENTER_CONFLUENT_LICENSE
- The Confluent Control Center license key. Without the license key, Confluent Control Center can be used for a 30-day trial period.
CONTROL_CENTER_KAFKA_<name>_BOOTSTRAP_SERVERS
- To list bootstrap servers for any additional Kafka cluster being monitored, replace <name> with the name Control Center should use to identify this cluster. For example, using CONTROL_CENTER_KAFKA_production-nyc_BOOTSTRAP_SERVERS, Control Center will show the additional cluster with the name production-nyc in the cluster list.
CONTROL_CENTER_LICENSE
- The Confluent Control Center license key. Without the license key, Confluent Control Center can be used for a 30-day trial period.
CONTROL_CENTER_REST_LISTENERS
- Set this to the HTTP or HTTPS listener for the Control Center UI. If not set, you may see the following warning message:
WARN DEPRECATION warning: `listeners` configuration is not configured. Falling back to the deprecated `port` configuration. (io.confluent.rest.Application)
Confluent Replicator configuration¶
Confluent Replicator is a Kafka connector and runs on a Kafka Connect cluster.
For the Confluent Replicator image (cp-enterprise-replicator), convert the property variables as follows and use them as environment variables:
- Prefix with CONNECT_.
- Convert to upper-case.
- Separate each word with _.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
For example, run the following command to set properties such as bootstrap.servers, confluent.license, and the topic names for config, offsets, and status:
docker run -d \
--name=cp-enterprise-replicator \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS=localhost:29092 \
-e CONNECT_REST_PORT=28082 \
-e CONNECT_GROUP_ID="quickstart" \
-e CONNECT_CONFIG_STORAGE_TOPIC="quickstart-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="quickstart-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="quickstart-status" \
-e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="localhost" \
-e CONNECT_CONFLUENT_LICENSE="ABC123XYZ737BVT" \
confluentinc/cp-enterprise-replicator:6.2.15
The following example shows how to create a Confluent Replicator connector that replicates the topic "confluent" from a source Kafka cluster (src) to a destination Kafka cluster (dest).
curl -X POST \
-H "Content-Type: application/json" \
--data '{
"name": "confluent-src-to-dest",
"config": {
"connector.class":"io.confluent.connect.replicator.ReplicatorSourceConnector",
"key.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
"value.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
"src.kafka.bootstrap.servers": "kafka-src:9082",
"topic.whitelist": "confluent",
"topic.rename.format": "${topic}.replica"}}' \
http://localhost:28082/connectors
Required Confluent Replicator settings¶
The following settings must be passed to run the Confluent Replicator Docker image:
CONNECT_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
CONNECT_GROUP_ID
- A unique string that identifies the Connect cluster group this worker belongs to.
CONNECT_CONFIG_STORAGE_TOPIC
- The name of the topic where connector and task configuration data is stored. This must be the same for all workers with the same group.id.
CONNECT_OFFSET_STORAGE_TOPIC
- The name of the topic where offset data for connectors is stored. This must be the same for all workers with the same group.id.
CONNECT_STATUS_STORAGE_TOPIC
- The name of the topic where state for connectors is stored. This must be the same for all workers with the same group.id.
CONNECT_KEY_CONVERTER
- Converter class for keys. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.
CONNECT_VALUE_CONVERTER
- Converter class for values. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors.
CONNECT_INTERNAL_KEY_CONVERTER
- Converter class for internal keys that implements the Converter interface.
CONNECT_INTERNAL_VALUE_CONVERTER
- Converter class for internal values that implements the Converter interface.
CONNECT_REST_ADVERTISED_HOST_NAME
- The hostname that will be given out to other workers to connect to. In a Docker environment, your clients must be able to connect to Connect and other services. The advertised hostname is how Connect gives out a hostname that can be reached by the client.
Optional Confluent Replicator settings¶
CONNECT_CONFLUENT_LICENSE
- The Confluent license key. Without the license key, Replicator can be used for a 30-day trial period.
Confluent Replicator Executable configuration¶
Confluent Replicator Executable (cp-enterprise-replicator-executable) provides another way to run Replicator by consolidating configuration properties and abstracting Connect details.
The image depends on input files that can be passed by mounting a directory with the expected input files or by mounting each file individually. Additionally, the image supports passing command line parameters to the Replicator executable via environment variables as well.
The following example starts Replicator given that the local directory /mnt/replicator/config, which will be mounted under /etc/replicator in the Docker image, contains the required files consumer.properties and producer.properties, and the optional but often necessary file replication.properties.
docker run -d \
--name=ReplicatorX \
--net=host \
-e REPLICATOR_LOG4J_ROOT_LOGLEVEL=DEBUG \
-v /mnt/replicator/config:/etc/replicator \
confluentinc/cp-enterprise-replicator-executable:6.2.15
In a similar example, we start Replicator by omitting replication.properties and specifying the replication properties through environment variables.
For a complete list of the expected environment variables, see the settings in the following sections.
docker run -d \
--name=ReplicatorX \
--net=host \
-e CLUSTER_ID=replicator-east-to-west \
-e WHITELIST=confluent \
-e TOPIC_RENAME_FORMAT='${topic}.replica' \
-e REPLICATOR_LOG4J_ROOT_LOGLEVEL=DEBUG \
-v /mnt/replicator/config:/etc/replicator \
confluentinc/cp-enterprise-replicator-executable:6.2.15
Required Confluent Replicator Executable settings¶
The following files must be passed to run the Replicator Executable Docker image:
CONSUMER_CONFIG
- A file that contains the configuration settings for the consumer reading from the origin cluster. Default location is /etc/replicator/consumer.properties in the Docker image.
PRODUCER_CONFIG
- A file that contains the configuration settings for the producer writing to the destination cluster. Default location is /etc/replicator/producer.properties in the Docker image.
CLUSTER_ID
- A string that specifies the unique identifier for the Replicator cluster. Default value is replicator.
Optional Confluent Replicator Executable settings¶
The following optional settings may be passed to Replicator Executable via environment variables instead of files:
REPLICATION_CONFIG
- A file that contains the configuration settings for the replication from the origin cluster. Default location is /etc/replicator/replication.properties in the Docker image.
CONSUMER_MONITORING_CONFIG
- A file that contains the configuration settings of the producer writing monitoring information related to Replicator's consumer. Default location is /etc/replicator/consumer-monitoring.properties in the Docker image.
PRODUCER_MONITORING_CONFIG
- A file that contains the configuration settings of the producer writing monitoring information related to Replicator's producer. Default location is /etc/replicator/producer-monitoring.properties in the Docker image.
BLACKLIST
- A comma-separated list of topics that should not be replicated, even if they are included in the whitelist or matched by the regular expression.
WHITELIST
- A comma-separated list of the names of topics that should be replicated. Any topic that is in this list and not in the blacklist will be replicated.
CLUSTER_THREADS
- The total number of threads across all workers in the Replicator cluster.
CONFLUENT_LICENSE
- The Confluent license key. Without the license key, Replicator can be used for a 30-day trial period.
TOPIC_AUTO_CREATE
- Whether to automatically create topics in the destination cluster if required. If you disable automatic topic creation, Kafka Streams and ksqlDB applications continue to work, because they use the Admin Client, so topics are still created.
TOPIC_CONFIG_SYNC
- Whether to periodically sync topic configuration to the destination cluster.
TOPIC_CONFIG_SYNC_INTERVAL_MS
- Specifies how frequently to check for configuration changes when topic.config.sync is enabled.
TOPIC_CREATE_BACKOFF_MS
- Time to wait before retrying auto topic creation or expansion.
TOPIC_POLL_INTERVAL_MS
- Specifies how frequently to poll the source cluster for new topics matching the whitelist or regular expression.
TOPIC_PRESERVE_PARTITIONS
- Whether to automatically increase the number of partitions in the destination cluster to match the source cluster, and to ensure that messages replicated from the source cluster use the same partition in the destination cluster.
TOPIC_REGEX
- A regular expression that matches the names of the topics to be replicated. Any topic that matches this expression (or is listed in the whitelist) and is not in the blacklist will be replicated.
TOPIC_RENAME_FORMAT
- A format string for the topic name in the destination cluster, which may contain ${topic} as a placeholder for the originating topic name.
TOPIC_TIMESTAMP_TYPE
- The timestamp type for the topics in the destination cluster.
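The effect of the ${topic} placeholder in TOPIC_RENAME_FORMAT can be sketched with Python's string.Template (Replicator performs this substitution internally; the snippet only illustrates the expansion, using the format from the earlier connector example):

```python
from string import Template

def rename_topic(fmt: str, topic: str) -> str:
    """Expand the ${topic} placeholder the way TOPIC_RENAME_FORMAT
    describes. Illustrative only; Replicator does this itself."""
    return Template(fmt).substitute(topic=topic)

# With topic.rename.format set to "${topic}.replica", the source topic
# "confluent" is written to "confluent.replica" in the destination cluster.
print(rename_topic("${topic}.replica", "confluent"))  # confluent.replica
```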
Kafka MQTT Proxy configuration¶
For the Confluent MQTT Proxy Docker image, convert the property variables as follows and use them as environment variables:
- Prefix with KAFKA_MQTT_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
Required Kafka MQTT Proxy settings¶
The following settings must be passed to run the Confluent MQTT Proxy Docker image.
KAFKA_MQTT_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
KAFKA_MQTT_TOPIC_REGEX_LIST
- A comma-separated list of pairs of type '<kafka topic>:<regex>' that is used to map MQTT topics to Kafka topics.
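As a simplified, assumed model of how such a pair list routes messages (the proxy's exact matching semantics may differ; the topic names and regexes here are hypothetical):

```python
import re

def route_mqtt_topic(mqtt_topic: str, regex_list: str):
    """Simplified model of KAFKA_MQTT_TOPIC_REGEX_LIST routing: return
    the Kafka topic of the first '<kafka topic>:<regex>' pair whose
    regex matches the MQTT topic name. Illustrative only; the real
    proxy's matching rules may differ."""
    for pair in regex_list.split(","):
        kafka_topic, regex = pair.split(":", 1)
        if re.fullmatch(regex, mqtt_topic):
            return kafka_topic
    return None

# Hypothetical pairs routing sensor readings into two Kafka topics.
pairs = "temperature:.*temperature,pressure:.*pressure"
print(route_mqtt_topic("car/engine/temperature", pairs))  # temperature
```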
ksqlDB Server configuration¶
For a complete list of ksqlDB parameters, see ksqlDB Configuration Parameter Reference.
For the ksqlDB Server image (cp-ksqldb-server), convert the property variables as follows and use them as environment variables:
- Prefix with KSQL_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
ksqlDB Headless Server settings¶
Run a standalone ksqlDB Server instance in a container.
KSQL_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
KSQL_KSQL_SERVICE_ID
- The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.
KSQL_KSQL_QUERIES_FILE
- A file that specifies predefined SQL queries.
docker run -d \
-v /path/on/host:/path/in/container/ \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_KSQL_SERVICE_ID=confluent_standalone_2_ \
-e KSQL_KSQL_QUERIES_FILE=/path/in/container/queries.sql \
confluentinc/cp-ksqldb-server:6.2.15
ksqlDB Headless Server with Interceptors settings¶
Run a standalone ksqlDB Server with specified interceptor classes in a container. For more info on interceptor classes, see Confluent Monitoring Interceptors.
KSQL_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
KSQL_KSQL_SERVICE_ID
- The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.
KSQL_KSQL_QUERIES_FILE
- A file that specifies predefined SQL queries.
KSQL_PRODUCER_INTERCEPTOR_CLASSES
- A list of fully qualified class names for producer interceptors.
KSQL_CONSUMER_INTERCEPTOR_CLASSES
- A list of fully qualified class names for consumer interceptors.
docker run -d \
-v /path/on/host:/path/in/container/ \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_KSQL_SERVICE_ID=confluent_standalone_2_ \
-e KSQL_PRODUCER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor \
-e KSQL_CONSUMER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor \
-e KSQL_KSQL_QUERIES_FILE=/path/in/container/queries.sql \
confluentinc/cp-ksqldb-server:6.2.15
Interactive Server configuration¶
Run a ksqlDB Server that enables manual interaction by using the ksqlDB CLI.
KSQL_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
KSQL_KSQL_SERVICE_ID
- The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.
KSQL_LISTENERS
- A list of URIs, including the protocol, that the ksqlDB Server listens on.
docker run -d \
-p 127.0.0.1:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_LISTENERS=http://0.0.0.0:8088/ \
-e KSQL_KSQL_SERVICE_ID=confluent_test_2 \
confluentinc/cp-ksqldb-server:6.2.15
Interactive Server configuration with Interceptors¶
Run a ksqlDB Server with interceptors that enables manual interaction by using the ksqlDB CLI. For more info on interceptor classes, see Confluent Monitoring Interceptors.
KSQL_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
KSQL_KSQL_SERVICE_ID
- The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.
KSQL_LISTENERS
- A list of URIs, including the protocol, that the ksqlDB Server listens on.
KSQL_PRODUCER_INTERCEPTOR_CLASSES
- A list of fully qualified class names for producer interceptors.
KSQL_CONSUMER_INTERCEPTOR_CLASSES
- A list of fully qualified class names for consumer interceptors.
docker run -d \
-p 127.0.0.1:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_LISTENERS=http://0.0.0.0:8088/ \
-e KSQL_KSQL_SERVICE_ID=confluent_test_2_ \
-e KSQL_PRODUCER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor \
-e KSQL_CONSUMER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor \
confluentinc/cp-ksqldb-server:6.2.15
In interactive mode, the CLI instance running outside Docker can connect to the server running in Docker.
./bin/ksql
...
CLI v6.2.15, Server v6.2.15-SNAPSHOT located at http://localhost:8088
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
ksql>
Connect to a secure Kafka cluster, like Confluent Cloud¶
Run a ksqlDB Server that uses a secure connection to a Kafka cluster. For more information, see Configure Security for ksqlDB.
KSQL_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
KSQL_KSQL_SERVICE_ID
- The service ID of the ksqlDB server, which is used as the prefix for the internal topics created by ksqlDB.
KSQL_LISTENERS
- A list of URIs, including the protocol, that the ksqlDB Server listens on.
KSQL_KSQL_SINK_REPLICAS
- The default number of replicas for the topics created by ksqlDB. The default is one.
KSQL_KSQL_STREAMS_REPLICATION_FACTOR
- The replication factor for internal topics, the command topic, and output topics.
KSQL_SECURITY_PROTOCOL
- The protocol that your Kafka cluster uses for security.
KSQL_SASL_MECHANISM
- The SASL mechanism that your Kafka cluster uses for security.
KSQL_SASL_JAAS_CONFIG
- The Java Authentication and Authorization Service (JAAS) configuration.
docker run -d \
-p 127.0.0.1:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=REMOVED_SERVER1:9092,REMOVED_SERVER2:9093,REMOVED_SERVER3:9094 \
-e KSQL_LISTENERS=http://0.0.0.0:8088/ \
-e KSQL_KSQL_SERVICE_ID=default_ \
-e KSQL_KSQL_SINK_REPLICAS=3 \
-e KSQL_KSQL_STREAMS_REPLICATION_FACTOR=3 \
-e KSQL_SECURITY_PROTOCOL=SASL_SSL \
-e KSQL_SASL_MECHANISM=PLAIN \
-e KSQL_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<username>\" password=\"<strong-password>\";" \
confluentinc/cp-ksqldb-server:6.2.15
Configure a ksqlDB Server by using Java system properties¶
Run a ksqlDB Server with a configuration that’s defined by Java properties.
KSQL_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
KSQL_OPTS
- A space-separated list of Java options.
docker run -d \
-v /path/on/host:/path/in/container/ \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_OPTS="-Dksql.service.id=confluent_test_3_ -Dksql.queries.file=/path/in/container/queries.sql" \
confluentinc/cp-ksqldb-server:6.2.15
View logs¶
Use the docker logs command to view ksqlDB logs that are generated from within the container.
docker logs -f <container-id>
[2018-05-24 23:43:05,591] INFO stream-thread [_confluent-ksql-default_transient_1507119262168861890_1527205385485-71c8a94c-abe9-45ba-91f5-69a762ec5c1d-StreamThread-17] Starting (org.apache.kafka.streams.processor.internals.StreamThread:713)
...
ksqlDB CLI configuration¶
For the ksqlDB CLI image (cp-ksqldb-cli), convert the property variables as follows and use them as environment variables:
- Prefix with KSQL_.
- Convert to upper-case.
- Replace a period (.) with a single underscore (_).
- Replace a dash (-) with double underscores (__).
- Replace an underscore (_) with triple underscores (___).
Connect to a Dockerized ksqlDB Server¶
Run a ksqlDB CLI instance in a container and connect to a ksqlDB Server that’s running in a container.
The Docker network created by ksqlDB Server enables you to connect to a Dockerized ksqlDB server.
KSQL_BOOTSTRAP_SERVERS
- A host:port pair for establishing the initial connection to the Kafka cluster. Multiple bootstrap servers can be used in the form host1:port1,host2:port2,host3:port3....
KSQL_OPTS
- A space-separated list of Java options.
# Run ksqlDB Server.
docker run -d -p 10.0.0.11:8088:8088 \
-e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
-e KSQL_OPTS="-Dksql.service.id=confluent_test_3_ -Dlisteners=http://0.0.0.0:8088/" \
confluentinc/cp-ksqldb-server:6.2.15
# Connect the ksqlDB CLI to the server.
docker run -it confluentinc/cp-ksqldb-cli:6.2.15 http://10.0.0.11:8088
...
Copyright 2017 Confluent Inc.
CLI v6.2.15-SNAPSHOT, Server v6.2.15-SNAPSHOT located at http://10.0.0.11:8088
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
ksql>
Provide a configuration file¶
Set up a ksqlDB CLI instance by using a configuration file, and run it in a container.
# Assume ksqlDB Server is running.
# Ensure that the configuration file exists.
ls /path/on/host/ksql-cli.properties
docker run -it \
-v /path/on/host/:/path/in/container \
confluentinc/cp-ksqldb-cli:6.2.15 http://10.0.0.11:8088 \
--config-file /path/in/container/ksql-cli.properties
Connect to a ksqlDB Server running on another host, like AWS¶
Run a ksqlDB CLI instance in a container and connect to a remote ksqlDB Server host.
docker run -it confluentinc/cp-ksqldb-cli:6.2.15 \
http://ec2-etc.us-etc.compute.amazonaws.com:8080
...
Copyright 2017 Confluent Inc.
CLI v6.2.15-SNAPSHOT, Server v6.2.15-SNAPSHOT located at http://ec2-blah.us-blah.compute.amazonaws.com:8080
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
ksql>