.. _config_reference:

Docker Image Configuration Reference for |cp|
#############################################

This topic describes how to configure the Docker images when starting |cp|. You can dynamically specify configuration values in the |cp| Docker images with environment variables. You can use the Docker ``-e`` or ``--env`` flags to specify various configurations.

.. _kafka-config-docker:

|ak| configuration
******************

This section helps you configure the following images that include |ak|:

- ``cp-kafka`` - includes |ak|.
- ``cp-server`` - includes role-based access control, self-balancing clusters, and more, in addition to |ak|.
- ``confluent-local`` - includes |ak| and :ref:`Confluent REST Proxy `. Defaults to running in |kraft| mode.

Convert the properties file variables according to the following rules and use them as Docker environment variables:

* Prefix |ak| component properties with ``KAFKA_``.
* Prefix Confluent component properties for ``cp-server`` with ``CONFLUENT_``. For example, ``metrics.reporter.bootstrap.servers`` is a Confluent Enterprise feature property, so it should be converted to ``CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS``.
* Convert to upper-case.
* Replace a period (``.``) with a single underscore (``_``).
* Replace a dash (``-``) with double underscores (``__``).
* Replace an underscore (``_``) with triple underscores (``___``).

The required configurations vary depending on whether you are running |ak| in |kraft| or |zk| mode.

Required |ak| configurations for |kraft| mode
"""""""""""""""""""""""""""""""""""""""""""""

.. important::
   For more information about configuring |kraft| in production, see :ref:`configure-kraft`.

.. include:: includes/kraft-config-shared.rst

Required |ak| configurations for |zk| mode
""""""""""""""""""""""""""""""""""""""""""

.. include:: ../../includes/zk-deprecation.rst

``KAFKA_ZOOKEEPER_CONNECT``
  Instructs |ak| how to connect to |zk|.

.. include:: includes/config-shared.rst
   :start-line: 7
   :end-line: 15

Optional ``cp-server`` setting
""""""""""""""""""""""""""""""

The license key is an optional setting for the Enterprise |ak| (``cp-server``) image. Without it, the image can be used for a 30-day trial period. For a complete list of |cs| configuration settings, see :ref:`cp-config-reference`.

``KAFKA_CONFLUENT_LICENSE``
  The Enterprise |ak| license key. Without the license key, |cs| can be used for a 30-day trial period. Note that ``confluent`` is part of the property name and not the component name. For example, use ``KAFKA_CONFLUENT_LICENSE`` to specify the |ak| license.

Example configurations
""""""""""""""""""""""

The following examples show how to set environment variables and run the |ak| images that Confluent provides. These examples:

- Set the ``KAFKA_ADVERTISED_LISTENERS`` variable to ``localhost:29092``. This makes |ak| accessible from outside of the container by advertising its location on the Docker host.
- Set ``KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR`` to ``1``. This is required when you are running with a single-node cluster. If you have three or more nodes, you can use the default.
- Each |kraft| cluster must have a unique cluster ID assigned to its ``CLUSTER_ID`` variable. To learn how to use the ``kafka-storage`` tool to generate a unique ID, see :ref:`generate-format-ids`.
- These examples show |kraft| combined mode, which is not supported for production. For production configuration, see :ref:`cp-production-parameters`.

``cp-kafka`` example
--------------------

The following examples show how to start ``cp-kafka`` both in |kraft| combined mode and in |zk| mode. Combined mode is not supported for production clusters. For production configuration recommendations, see :ref:`cp-production-parameters`.

.. tabs::

   .. tab:: |kraft| mode

      Generate a ``random-uuid`` using the ``kafka-storage`` tool:

      .. code:: bash

         /bin/kafka-storage random-uuid

      Assign the output to the ``CLUSTER_ID`` variable:

      .. codewithvars:: bash

         docker run -d \
           --name=kafka-kraft \
           -h kafka-kraft \
           -p 9101:9101 \
           -e KAFKA_NODE_ID=1 \
           -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT' \
           -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://kafka-kraft:29092,PLAINTEXT_HOST://localhost:9092' \
           -e KAFKA_JMX_PORT=9101 \
           -e KAFKA_JMX_HOSTNAME=localhost \
           -e KAFKA_PROCESS_ROLES='broker,controller' \
           -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
           -e KAFKA_CONTROLLER_QUORUM_VOTERS='1@kafka-kraft:29093' \
           -e KAFKA_LISTENERS='PLAINTEXT://kafka-kraft:29092,CONTROLLER://kafka-kraft:29093,PLAINTEXT_HOST://0.0.0.0:9092' \
           -e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
           -e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
           -e CLUSTER_ID='MkU3OEVBNTcwNTJENDM2Qk' \
           confluentinc/cp-kafka:|release|

   .. tab:: |zk| mode

      First :ref:`start ZooKeeper `, then start the |ak| broker.

      .. include:: ../../includes/zk-deprecation.rst

      .. codewithvars:: bash

         docker run -d \
           --net=host \
           --name=kafka \
           -e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
           -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
           -e KAFKA_BROKER_ID=2 \
           -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
           confluentinc/cp-kafka:|release|

``cp-server`` example
---------------------

The following examples show how to start ``cp-server`` both in |kraft| combined mode and in |zk| mode. Combined mode is not supported for production clusters. For production configuration recommendations, see :ref:`cp-production-parameters`.

.. tabs::

   .. tab:: |kraft| mode

      Generate a ``random-uuid`` using the ``kafka-storage`` tool:

      .. code:: bash

         /bin/kafka-storage random-uuid

      Assign the output to the ``CLUSTER_ID`` variable:

      .. codewithvars:: bash

         docker run -d \
           --name=kafka-kraft \
           -h kafka-kraft \
           -p 9101:9101 \
           -e KAFKA_NODE_ID=1 \
           -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT' \
           -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://kafka-kraft:29092,PLAINTEXT_HOST://localhost:9092' \
           -e KAFKA_PROCESS_ROLES='broker,controller' \
           -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
           -e KAFKA_CONTROLLER_QUORUM_VOTERS='1@kafka-kraft:29093' \
           -e KAFKA_LISTENERS='PLAINTEXT://kafka-kraft:29092,CONTROLLER://kafka-kraft:29093,PLAINTEXT_HOST://0.0.0.0:9092' \
           -e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
           -e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
           -e CLUSTER_ID='q1Sh-9_ISia_zwGINzRvyQ' \
           confluentinc/cp-server:|release|

   .. tab:: |zk| mode

      First :ref:`start ZooKeeper `, then start the |ak| broker.

      .. codewithvars:: bash

         docker run -d \
           --net=host \
           --name=kafka \
           -e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
           -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
           -e KAFKA_BROKER_ID=2 \
           -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
           -e CONFLUENT_SUPPORT_CUSTOMER_ID=c0 \
           -e KAFKA_CONFLUENT_LICENSE="ABC123XYZ737BVT" \
           confluentinc/cp-server:|release|

If you want to use Confluent Auto Data Balancing features, see :ref:`rebalancer`.

.. _confluent-local-example:

``confluent-local`` example
---------------------------

The ``confluent-local`` image is configured to run in |kraft| combined mode by default and requires no configuration to start using it. Combined mode is not supported for production. For production configuration recommendations, see :ref:`cp-production-parameters`.

The following is an example of how to run ``confluent-local`` with the default configurations:

.. codewithvars:: bash

   docker run -d -P --name=kafka-kraft confluentinc/confluent-local:7.4.0

For a list of the default configurations for this image, see the `configureDefaults file on GitHub `__.
The following example shows how to generate a cluster ID and specify the ID when you run the image.

.. code:: bash

   /bin/kafka-storage random-uuid

Assign the output to the ``CLUSTER_ID`` variable:

.. codewithvars:: bash

   docker run -d -P \
     --name=kafka-kraft \
     -e CLUSTER_ID=q1Sh-9_ISia_zwGINzRvyQ \
     confluentinc/confluent-local:|release|

|sr-long| configuration
***********************

For the |sr| (``cp-schema-registry``) image, convert the property variables as follows and use them as environment variables:

* Prefix with ``SCHEMA_REGISTRY_``.
* Convert to upper-case.
* Replace a period (``.``) with a single underscore (``_``).
* Replace a dash (``-``) with double underscores (``__``).
* Replace an underscore (``_``) with triple underscores (``___``).

For example, run the following to set ``kafkastore.bootstrap.servers``, ``host.name``, ``listeners`` and ``debug``:

.. codewithvars:: bash

   docker run -d \
     --net=host \
     --name=schema-registry \
     -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=SSL://hostname2:9092 \
     -e SCHEMA_REGISTRY_HOST_NAME=localhost \
     -e SCHEMA_REGISTRY_LISTENERS=http://localhost:8081 \
     -e SCHEMA_REGISTRY_DEBUG=true \
     confluentinc/cp-schema-registry:|release|

Required |sr| configurations
""""""""""""""""""""""""""""

``SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS``
  A list of |ak| brokers to connect to.

``SCHEMA_REGISTRY_HOST_NAME``
  This is required if you are running Schema Registry with multiple nodes. The hostname is required because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment. The hostname must be resolvable because secondary nodes serve registration requests indirectly by forwarding them to the current primary and returning the response supplied by the primary. For more information, see the Schema Registry documentation on :ref:`Single Primary Architecture `.
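The same conversion rules recur for every image in this topic, varying only in the prefix. As a quick sanity check, they can be sketched as a small shell function; the function name and the second sample property are illustrative, not part of any image:

```shell
# Convert a property name to its environment variable form
# (illustrative helper, not shipped with the images).
# Underscores are expanded first so that the underscores introduced
# by the dash and period rules are not re-expanded.
to_env_var() {
  printf '%s' "$1" \
    | sed -e 's/_/___/g' -e 's/-/__/g' -e 's/\./_/g' \
    | tr '[:lower:]' '[:upper:]'
}

# Prepend the image-specific prefix yourself:
echo "SCHEMA_REGISTRY_$(to_env_var kafkastore.bootstrap.servers)"
# prints SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
```

The order of the `sed` expressions matters: converting periods before underscores would triple the underscores that the period rule just produced.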
|kconnect-long| configuration
*****************************

For the |kconnect-long| (``cp-kafka-connect``) image, convert the property variables as follows and use them as environment variables:

* Prefix with ``CONNECT_``.
* Convert to upper-case.
* Replace a period (``.``) with a single underscore (``_``).
* Replace a dash (``-``) with double underscores (``__``).
* Replace an underscore (``_``) with triple underscores (``___``).

For example, run the following command to set the required properties, like ``bootstrap.servers``, the topic names for ``config``, ``offsets``, and ``status``, as well as the ``key`` and ``value`` converters:

.. codewithvars:: bash

   docker run -d \
     --name=kafka-connect \
     --net=host \
     -e CONNECT_BOOTSTRAP_SERVERS=localhost:29092 \
     -e CONNECT_REST_PORT=28082 \
     -e CONNECT_GROUP_ID="quickstart" \
     -e CONNECT_CONFIG_STORAGE_TOPIC="quickstart-config" \
     -e CONNECT_OFFSET_STORAGE_TOPIC="quickstart-offsets" \
     -e CONNECT_STATUS_STORAGE_TOPIC="quickstart-status" \
     -e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
     -e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
     -e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
     -e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
     -e CONNECT_REST_ADVERTISED_HOST_NAME="localhost" \
     -e CONNECT_PLUGIN_PATH=/usr/share/java \
     confluentinc/cp-kafka-connect:|release|

Required |kconnect-long| configurations
"""""""""""""""""""""""""""""""""""""""

The following configurations must be passed to run the |kconnect-long| Docker image.

``CONNECT_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster. Multiple bootstrap servers can be used in the form ``host1:port1,host2:port2,host3:port3...``.

``CONNECT_GROUP_ID``
  A unique string that identifies the Connect cluster group this worker belongs to.
``CONNECT_CONFIG_STORAGE_TOPIC``
  The name of the topic in which to store connector and task configuration data. This must be the same for all workers with the same ``group.id``.

``CONNECT_OFFSET_STORAGE_TOPIC``
  The name of the topic in which to store offset data for connectors. This must be the same for all workers with the same ``group.id``.

``CONNECT_STATUS_STORAGE_TOPIC``
  The name of the topic in which to store state for connectors. This must be the same for all workers with the same ``group.id``.

``CONNECT_KEY_CONVERTER``
  Converter class for keys. This controls the format of the data that is written to |ak| for source connectors or read from |ak| for sink connectors.

``CONNECT_VALUE_CONVERTER``
  Converter class for values. This controls the format of the data that is written to |ak| for source connectors or read from |ak| for sink connectors.

``CONNECT_REST_ADVERTISED_HOST_NAME``
  The hostname that is given out to other workers to connect to. In a Docker environment, your clients must be able to connect to the Connect workers and other services. The advertised hostname is how Connect gives out a hostname that the client can reach.

``CONNECT_PLUGIN_PATH``
  The location from which to load Connect plugins in class loading isolation. If you use the ``confluent-hub`` client, include ``/usr/share/confluent-hub-components``, the default path that ``confluent-hub`` installs to.

Optional |kconnect-long| configurations
"""""""""""""""""""""""""""""""""""""""

The images marked as commercially licensed include proprietary components and must be licensed from Confluent when deployed.

``CONNECT_CONFLUENT_LICENSE``
  Required if you use the ``cp-server-connect`` or ``cp-server-connect-base`` Docker images. Without a license key, |kconnect-long| can be used for a 30-day trial period. See :ref:`image_reference` for more information.

.. note:: To configure the JVM for |kconnect|, you must use the ``KAFKA_HEAP_OPTS`` setting instead of ``CONNECT_KAFKA_HEAP_OPTS``.
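The JVM note above can be sketched as follows. The heap sizes are illustrative, and the flag list is abridged: the required ``CONNECT_`` variables from the earlier example still apply.

```shell
# KAFKA_HEAP_OPTS is honored by the container startup scripts;
# a CONNECT_-prefixed variant would be treated as an ordinary worker
# property and would NOT size the JVM.
# Heap values here are illustrative; remaining required CONNECT_
# variables are omitted for brevity.
docker run -d \
  --name=kafka-connect \
  --net=host \
  -e KAFKA_HEAP_OPTS="-Xms512M -Xmx2G" \
  -e CONNECT_BOOTSTRAP_SERVERS=localhost:29092 \
  confluentinc/cp-kafka-connect:latest
```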
|ksqldb| Server configuration
*****************************

For a complete list of |ksqldb| parameters, see :ksqldb-docs:`ksqlDB Configuration Parameter Reference|operate-and-deploy/installation/server-config/config-reference/`.

For the |ksqldb| Server image (``cp-ksqldb-server``), convert the property variables as follows and use them as environment variables:

* Prefix with ``KSQL_``.
* Convert to upper-case.
* Replace a period (``.``) with a single underscore (``_``).
* Replace a dash (``-``) with double underscores (``__``).
* Replace an underscore (``_``) with triple underscores (``___``).

|ksqldb| Headless Server configurations
"""""""""""""""""""""""""""""""""""""""

Run a standalone |ksqldb| Server instance in a container.

``KSQL_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster. Multiple bootstrap servers can be used in the form ``host1:port1,host2:port2,host3:port3...``.

``KSQL_KSQL_SERVICE_ID``
  The service ID of the |ksqldb| server, which is used as the prefix for the internal topics created by |ksqldb|.

``KSQL_KSQL_QUERIES_FILE``
  A file that specifies predefined SQL queries.

.. codewithvars:: bash

   docker run -d \
     -v /path/on/host:/path/in/container/ \
     -e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
     -e KSQL_KSQL_SERVICE_ID=confluent_standalone_2_ \
     -e KSQL_KSQL_QUERIES_FILE=/path/in/container/queries.sql \
     confluentinc/cp-ksqldb-server:|release|

|ksqldb| Headless Server with Interceptors configurations
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""

Run a standalone |ksqldb| Server with specified interceptor classes in a container. For more information on interceptor classes, see :ref:`Confluent Monitoring Interceptors `.

``KSQL_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster. Multiple bootstrap servers can be used in the form ``host1:port1,host2:port2,host3:port3...``.
``KSQL_KSQL_SERVICE_ID``
  The service ID of the |ksqldb| server, which is used as the prefix for the internal topics created by |ksqldb|.

``KSQL_KSQL_QUERIES_FILE``
  A file that specifies predefined SQL queries.

``KSQL_PRODUCER_INTERCEPTOR_CLASSES``
  A list of fully qualified class names for producer interceptors.

``KSQL_CONSUMER_INTERCEPTOR_CLASSES``
  A list of fully qualified class names for consumer interceptors.

.. codewithvars:: bash

   docker run -d \
     -v /path/on/host:/path/in/container/ \
     -e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
     -e KSQL_KSQL_SERVICE_ID=confluent_standalone_2_ \
     -e KSQL_PRODUCER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor \
     -e KSQL_CONSUMER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor \
     -e KSQL_KSQL_QUERIES_FILE=/path/in/container/queries.sql \
     confluentinc/cp-ksqldb-server:|release|

Interactive Server configuration
""""""""""""""""""""""""""""""""

Run a |ksqldb| Server that enables manual interaction by using the |ksqldb| CLI.

``KSQL_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster. Multiple bootstrap servers can be used in the form ``host1:port1,host2:port2,host3:port3...``.

``KSQL_KSQL_SERVICE_ID``
  The service ID of the |ksqldb| server, which is used as the prefix for the internal topics created by |ksqldb|.

``KSQL_LISTENERS``
  A list of URIs, including the protocol, that the server listens on.

.. codewithvars:: bash

   docker run -d \
     -p 127.0.0.1:8088:8088 \
     -e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
     -e KSQL_LISTENERS=http://0.0.0.0:8088/ \
     -e KSQL_KSQL_SERVICE_ID=confluent_test_2 \
     confluentinc/cp-ksqldb-server:|release|

Interactive Server configuration with Interceptors
""""""""""""""""""""""""""""""""""""""""""""""""""

Run a |ksqldb| Server with interceptors that enables manual interaction by using the |ksqldb| CLI. For more information on interceptor classes, see :ref:`Confluent Monitoring Interceptors `.
``KSQL_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster. Multiple bootstrap servers can be used in the form ``host1:port1,host2:port2,host3:port3...``.

``KSQL_KSQL_SERVICE_ID``
  The service ID of the |ksqldb| server, which is used as the prefix for the internal topics created by |ksqldb|.

``KSQL_LISTENERS``
  A list of URIs, including the protocol, that the server listens on.

``KSQL_PRODUCER_INTERCEPTOR_CLASSES``
  A list of fully qualified class names for producer interceptors.

``KSQL_CONSUMER_INTERCEPTOR_CLASSES``
  A list of fully qualified class names for consumer interceptors.

.. codewithvars:: bash

   docker run -d \
     -p 127.0.0.1:8088:8088 \
     -e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
     -e KSQL_LISTENERS=http://0.0.0.0:8088/ \
     -e KSQL_KSQL_SERVICE_ID=confluent_test_2_ \
     -e KSQL_PRODUCER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor \
     -e KSQL_CONSUMER_INTERCEPTOR_CLASSES=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor \
     confluentinc/cp-ksqldb-server:|release|

In interactive mode, a CLI instance running outside Docker can connect to the server running in Docker.

.. codewithvars:: bash

   ./bin/ksql

   ...
   CLI v|release|, Server v|release|-SNAPSHOT located at http://localhost:8088

   Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!

   ksql>

Connect to a secure |ak| cluster, like |ccloud|
"""""""""""""""""""""""""""""""""""""""""""""""

Run a |ksqldb| Server that uses a secure connection to a |ak| cluster. Learn more in :ksqldb-docs:`Configure Security for ksqlDB|operate-and-deploy/installation/server-config/security/`.

``KSQL_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster. Multiple bootstrap servers can be used in the form ``host1:port1,host2:port2,host3:port3...``.
``KSQL_KSQL_SERVICE_ID``
  The service ID of the |ksqldb| server, which is used as the prefix for the internal topics created by |ksqldb|.

``KSQL_LISTENERS``
  A list of URIs, including the protocol, that the server listens on.

``KSQL_KSQL_SINK_REPLICAS``
  The default number of replicas for the topics created by |ksqldb|. The default is one.

``KSQL_KSQL_STREAMS_REPLICATION_FACTOR``
  The replication factor for internal topics, the command topic, and output topics.

``KSQL_SECURITY_PROTOCOL``
  The protocol that your |ak| cluster uses for security.

``KSQL_SASL_MECHANISM``
  The SASL mechanism that your |ak| cluster uses for security.

``KSQL_SASL_JAAS_CONFIG``
  The Java Authentication and Authorization Service (JAAS) configuration.

.. codewithvars:: bash

   docker run -d \
     -p 127.0.0.1:8088:8088 \
     -e KSQL_BOOTSTRAP_SERVERS=REMOVED_SERVER1:9092,REMOVED_SERVER2:9093,REMOVED_SERVER3:9094 \
     -e KSQL_LISTENERS=http://0.0.0.0:8088/ \
     -e KSQL_KSQL_SERVICE_ID=default_ \
     -e KSQL_KSQL_SINK_REPLICAS=3 \
     -e KSQL_KSQL_STREAMS_REPLICATION_FACTOR=3 \
     -e KSQL_SECURITY_PROTOCOL=SASL_SSL \
     -e KSQL_SASL_MECHANISM=PLAIN \
     -e KSQL_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"\" password=\"\";" \
     confluentinc/cp-ksqldb-server:|release|

Configure a |ksqldb| Server by using Java system properties
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

Run a |ksqldb| Server with a configuration that's defined by Java properties.

``KSQL_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster. Multiple bootstrap servers can be used in the form ``host1:port1,host2:port2,host3:port3...``.

``KSQL_OPTS``
  A space-separated list of Java options.

.. codewithvars:: bash

   docker run -d \
     -v /path/on/host:/path/in/container/ \
     -e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
     -e KSQL_OPTS="-Dksql.service.id=confluent_test_3_ -Dksql.queries.file=/path/in/container/queries.sql" \
     confluentinc/cp-ksqldb-server:|release|

View logs
"""""""""

Use the ``docker logs`` command to view |ksqldb| logs that are generated from within the container.

.. codewithvars:: bash

   docker logs -f

   [2018-05-24 23:43:05,591] INFO stream-thread [_confluent-ksql-default_transient_1507119262168861890_1527205385485-71c8a94c-abe9-45ba-91f5-69a762ec5c1d-StreamThread-17] Starting (org.apache.kafka.streams.processor.internals.StreamThread:713)
   ...

|ksqldb| CLI configuration
**************************

For the |ksqldb| CLI image (``cp-ksqldb-cli``), convert the property variables as follows and use them as environment variables:

* Prefix with ``KSQL_``.
* Convert to upper-case.
* Replace a period (``.``) with a single underscore (``_``).
* Replace a dash (``-``) with double underscores (``__``).
* Replace an underscore (``_``) with triple underscores (``___``).

Connect to a Dockerized |ksqldb| Server
"""""""""""""""""""""""""""""""""""""""

Run a |ksqldb| CLI instance in a container and connect to a |ksqldb| Server that's running in a container. The Docker network created by |ksqldb| Server enables you to connect to a Dockerized |ksqldb| server.

``KSQL_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster. Multiple bootstrap servers can be used in the form ``host1:port1,host2:port2,host3:port3...``.

``KSQL_OPTS``
  A space-separated list of Java options.

.. codewithvars:: bash

   # Run ksqlDB Server.
   docker run -d -p 10.0.0.11:8088:8088 \
     -e KSQL_BOOTSTRAP_SERVERS=localhost:9092 \
     -e KSQL_OPTS="-Dksql.service.id=confluent_test_3_ -Dlisteners=http://0.0.0.0:8088/" \
     confluentinc/cp-ksqldb-server:|release|

   # Connect the ksqlDB CLI to the server.
   docker run -it confluentinc/cp-ksqldb-cli http://10.0.0.11:8088

   ...
   Copyright 2017 Confluent Inc.

   CLI v|release|-SNAPSHOT, Server v|release|-SNAPSHOT located at http://10.0.0.11:8088

   Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!

   ksql>

Provide a configuration file
""""""""""""""""""""""""""""

Set up a |ksqldb| CLI instance by using a configuration file, and run it in a container.

.. codewithvars:: bash

   # Assume ksqlDB Server is running.
   # Ensure that the configuration file exists.
   ls /path/on/host/ksql-cli.properties

   docker run -it \
     -v /path/on/host/:/path/in/container \
     confluentinc/cp-ksqldb-cli:|release| http://10.0.0.11:8088 \
     --config-file /path/in/container/ksql-cli.properties

Connect to a |ksqldb| Server running on another host, like AWS
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

Run a |ksqldb| CLI instance in a container and connect to a remote |ksqldb| Server host.

.. codewithvars:: bash

   docker run -it confluentinc/cp-ksqldb-cli:|release| \
     http://ec2-etc.us-etc.compute.amazonaws.com:8080

   ...
   Copyright 2017 Confluent Inc.

   CLI v|release|-SNAPSHOT, Server v|release|-SNAPSHOT located at http://ec2-blah.us-blah.compute.amazonaws.com:8080

   Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!

   ksql>

.. _cp-kafka-rest-config:

|crest-long| configuration
**************************

The |crest-long| (``cp-kafka-rest``) image uses the |crest| configuration setting names. Convert the |crest| configurations to environment variables as follows:

* Prefix with ``KAFKA_REST_``.
* Convert to upper-case.
* Replace a period (``.``) with a single underscore (``_``).
* Replace a dash (``-``) with double underscores (``__``).
* Replace an underscore (``_``) with triple underscores (``___``).

For example, use the ``KAFKA_REST_SCHEMA_REGISTRY_URL`` environment variable to set ``schema.registry.url``. See :ref:`kafkarest_config` for the configuration settings that |crest| supports.
The following command sets ``listeners``, ``schema.registry.url``, and ``zookeeper.connect`` (|zk| mode only):

.. codewithvars:: bash

   docker run -d \
     --net=host \
     --name=kafka-rest \
     -e KAFKA_REST_ZOOKEEPER_CONNECT=localhost:32181 \
     -e KAFKA_REST_LISTENERS=http://localhost:8082 \
     -e KAFKA_REST_SCHEMA_REGISTRY_URL=http://localhost:8081 \
     -e KAFKA_REST_BOOTSTRAP_SERVERS=localhost:29092 \
     confluentinc/cp-kafka-rest:|release|

Required |crest-long| configurations
""""""""""""""""""""""""""""""""""""

The following configurations must be passed to run the REST Proxy Docker image.

``KAFKA_REST_HOST_NAME``
  The hostname used to generate absolute URLs in responses. The hostname may be required because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment. For more information, see the |cp| documentation on :ref:`REST proxy deployment `.

``KAFKA_REST_BOOTSTRAP_SERVERS``
  A list of |ak| brokers to connect to. To learn about the corresponding ``bootstrap.servers`` |crest| setting, see :ref:`kafkarest_config`.

``KAFKA_REST_ZOOKEEPER_CONNECT``
  This variable is deprecated in |crest| v2. Use it only with |crest| v1, and only if you are not using ``KAFKA_REST_BOOTSTRAP_SERVERS``.

  The |zk| connection string, in the form ``hostname:port``, where the host and port are those of a |zk| server. To allow connecting through other |zk| nodes when that |zk| machine is down, you can also specify multiple hosts in the form ``hostname1:port1,hostname2:port2,hostname3:port3``.

  The server may also have a |zk| ``chroot`` path as part of its |zk| connection string, which puts its data under some path in the global |zk| namespace. If so, the consumer should use the same chroot path in its connection string. For example, to give a chroot path of ``/chroot/path``, you would use the connection string ``hostname1:port1,hostname2:port2,hostname3:port3/chroot/path``.
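The chroot rule above can be sketched concretely. The host names are the same placeholders as in the text:

```shell
# Build a ZooKeeper connection string with a chroot path:
# the chroot is appended once, after the final host:port pair,
# never after each host.
ZK_HOSTS="hostname1:port1,hostname2:port2,hostname3:port3"
ZK_CHROOT="/chroot/path"
ZK_CONNECT="${ZK_HOSTS}${ZK_CHROOT}"
echo "$ZK_CONNECT"
# prints hostname1:port1,hostname2:port2,hostname3:port3/chroot/path
```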
|c3| configuration
******************

For the |c3| (``cp-control-center``) image, you must convert the property variables used by |c3| to environment variables using the format described in the following table. In general, you:

* Replace ``confluent.controlcenter`` in the property variable with ``CONTROL_CENTER_``.
* Replace a period (``.``) with a single underscore (``_``).
* Replace a dash (``-``) with double underscores (``__``).
* Replace an underscore (``_``) with triple underscores (``___``).

Some of the properties have specific conversions where the more general rules do not apply. Also, note that some properties are :ref:`required ` and others are :ref:`optional `.

.. list-table::
   :widths: 50 50
   :header-rows: 1

   * - Property variable
     - Environment variable replacement or pattern
   * - ``bootstrap.servers``
     - ``CONTROL_CENTER_BOOTSTRAP_SERVERS``
   * - ``zookeeper.connect`` (|zk| mode only)
     - ``CONTROL_CENTER_ZOOKEEPER_CONNECT``
   * - ``confluent.monitoring.interceptor.topic.replication``
     - ``CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_REPLICATION``
   * - All other ``confluent.monitoring.interceptor.topic.*`` entries
     - ``CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_*``
   * - ``confluent.metrics.topic.replication``
     - ``CONTROL_CENTER_METRICS_TOPIC_REPLICATION``
   * - All other ``confluent.metrics.topic.*`` entries
     - ``CONTROL_CENTER_METRICS_TOPIC_*``
   * - ``confluent.license``
     - ``CONTROL_CENTER_LICENSE``
   * - ``public.key.path``
     - ``PUBLIC_KEY_PATH``
   * - All other ``confluent.controlcenter.*`` entries
     - ``CONTROL_CENTER_*``
   * - ``confluent.metadata.*``
     - ``CONFLUENT_METADATA_*``
   * - ``confluent.support.*``
     - ``CONFLUENT_SUPPORT_*``

For example, the ``confluent.controlcenter.mail.from`` property variable should be converted to the ``CONTROL_CENTER_MAIL_FROM`` environment variable.

The following example command runs |c3-short|, passing in |ak| and |kconnect| configuration parameters.

.. codewithvars:: bash

   docker run -d \
     --net=host \
     --name=control-center \
     --ulimit nofile=16384:16384 \
     -e CONTROL_CENTER_BOOTSTRAP_SERVERS=localhost:29092 \
     -e CONTROL_CENTER_REPLICATION_FACTOR=1 \
     -e CONTROL_CENTER_CONNECT_connect-cluster-name_CLUSTER=http://localhost:28082 \
     -v /mnt/control-center/data:/var/lib/confluent-control-center \
     confluentinc/cp-enterprise-control-center:|release|

.. _c3-required:

|c3-short| required configurations
""""""""""""""""""""""""""""""""""

The following configurations must be passed to run the |c3| image.

``CONTROL_CENTER_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster. Multiple bootstrap servers can be used in the form ``host1:port1,host2:port2,host3:port3...``.

``CONTROL_CENTER_REPLICATION_FACTOR``
  Replication factor for |c3-short| topics. We recommend setting this to ``3`` in a production environment.

.. _c3-optional:

|c3-short| optional configurations
""""""""""""""""""""""""""""""""""

``CONTROL_CENTER_CONNECT__CLUSTER``
  To enable |c3-short| to interact with a |kconnect-long| cluster, set this parameter to the REST endpoint URL for the |kconnect-long| cluster.

``CONTROL_CENTER_CONFLUENT_LICENSE``
  The |c3| license key. Without the license key, |c3| can be used for a 30-day trial period.

``CONTROL_CENTER_KAFKA__BOOTSTRAP_SERVERS``
  To list bootstrap servers for any additional |ak| cluster being monitored, replace ```` with the name |c3-short| should use to identify this cluster. For example, using ``CONTROL_CENTER_KAFKA_production-nyc_BOOTSTRAP_SERVERS``, |c3-short| shows the additional cluster with the name ``production-nyc`` in the cluster list.

``CONTROL_CENTER_LICENSE``
  The |c3| license key. Without the license key, |c3| can be used for a 30-day trial period.

``CONTROL_CENTER_REST_LISTENERS``
  Set this to the HTTP or HTTPS endpoint of the |c3-short| UI. If not set, you may see the following warning message:

  ::

     WARN DEPRECATION warning: `listeners` configuration is not configured.
     Falling back to the deprecated `port` configuration. (io.confluent.rest.Application)

|c3-short| Docker options
"""""""""""""""""""""""""

* File descriptor limit: |c3-short| may require many open files, so we recommend setting the file descriptor limit to at least 16384.
* Data persistence: the |c3-short| image stores its data in the ``/var/lib/confluent-control-center`` directory. We recommend that you bind this to a volume on the host machine so that data is persisted across runs.

|crep-full| configuration
*************************

|crep-full| is a |ak| connector and runs on a |kconnect-long| cluster. For the |crep-full| image (``cp-enterprise-replicator``), convert the property variables as follows and use them as environment variables:

* Prefix with ``CONNECT_``.
* Convert to upper-case.
* Separate each word with ``_``.
* Replace a period (``.``) with a single underscore (``_``).
* Replace a dash (``-``) with double underscores (``__``).
* Replace an underscore (``_``) with triple underscores (``___``).

For example, run the following commands to set properties such as ``bootstrap.servers``, ``confluent.license``, and the topic names for ``config``, ``offsets``, and ``status``:

.. codewithvars:: bash

   docker run -d \
     --name=cp-enterprise-replicator \
     --net=host \
     -e CONNECT_BOOTSTRAP_SERVERS=localhost:29092 \
     -e CONNECT_REST_PORT=28082 \
     -e CONNECT_GROUP_ID="quickstart" \
     -e CONNECT_CONFIG_STORAGE_TOPIC="quickstart-config" \
     -e CONNECT_OFFSET_STORAGE_TOPIC="quickstart-offsets" \
     -e CONNECT_STATUS_STORAGE_TOPIC="quickstart-status" \
     -e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
     -e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
     -e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
     -e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
     -e CONNECT_REST_ADVERTISED_HOST_NAME="localhost" \
     -e CONNECT_CONFLUENT_LICENSE="ABC123XYZ737BVT" \
     confluentinc/cp-enterprise-replicator:|release|

The following example shows how to create a |crep-full| connector that replicates the topic ``confluent`` from a source |ak| cluster (``src``) to a destination |ak| cluster (``dest``).

.. codewithvars:: bash

   curl -X POST \
     -H "Content-Type: application/json" \
     --data '{
       "name": "confluent-src-to-dest",
       "config": {
         "connector.class":"io.confluent.connect.replicator.ReplicatorSourceConnector",
         "key.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
         "value.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
         "src.kafka.bootstrap.servers": "kafka-src:9082",
         "topic.whitelist": "confluent",
         "topic.rename.format": "${topic}.replica"}}' \
     http://localhost:28082/connectors

Required |crep-full| configurations
"""""""""""""""""""""""""""""""""""

The following configurations must be passed to run the |crep-full| Docker image:

``CONNECT_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster. Multiple bootstrap servers can be used in the form ``host1:port1,host2:port2,host3:port3...``.

``CONNECT_GROUP_ID``
  A unique string that identifies the Connect cluster group this worker belongs to.
``CONNECT_CONFIG_STORAGE_TOPIC``
  The name of the topic where connector and task configuration data is stored.
  This must be the same for all workers with the same ``group.id``.

``CONNECT_OFFSET_STORAGE_TOPIC``
  The name of the topic where offset data for connectors is stored. This must
  be the same for all workers with the same ``group.id``.

``CONNECT_STATUS_STORAGE_TOPIC``
  The name of the topic where connector state is stored. This must be the same
  for all workers with the same ``group.id``.

``CONNECT_KEY_CONVERTER``
  Converter class for keys. This controls the format of the data that is
  written to |ak| for source connectors or read from |ak| for sink connectors.

``CONNECT_VALUE_CONVERTER``
  Converter class for values. This controls the format of the data that is
  written to |ak| for source connectors or read from |ak| for sink connectors.

``CONNECT_INTERNAL_KEY_CONVERTER``
  Converter class for internal keys that implements the ``Converter`` interface.

``CONNECT_INTERNAL_VALUE_CONVERTER``
  Converter class for internal values that implements the ``Converter``
  interface.

``CONNECT_REST_ADVERTISED_HOST_NAME``
  The hostname that is given out to other workers to connect to. In a Docker
  environment, your clients must be able to connect to |kconnect| and the other
  services, so the advertised hostname must be one that clients can reach.

Optional |crep-full| configurations
"""""""""""""""""""""""""""""""""""

``CONNECT_CONFLUENT_LICENSE``
  The Confluent license key. Without the license key, |crep| can be used for a
  30-day trial period.

|crep-full| Executable configuration
************************************

|crep-full| Executable (``cp-enterprise-replicator-executable``) provides
another way to run |crep| by consolidating configuration properties and
abstracting |kconnect| details. The image depends on input files that can be
passed by mounting a directory with the expected input files or by mounting
each file individually.
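The expected input files can be prepared ahead of time on the host. The following is a minimal sketch, in which the directory path and the broker addresses (``kafka-src:9082``, ``kafka-dest:9092``) are illustrative placeholders, not values the image requires:

```shell
# Prepare a host directory to mount at /etc/replicator in the container.
mkdir -p /tmp/replicator/config

# Consumer settings: the origin cluster that Replicator reads from.
cat > /tmp/replicator/config/consumer.properties <<'EOF'
bootstrap.servers=kafka-src:9082
EOF

# Producer settings: the destination cluster that Replicator writes to.
cat > /tmp/replicator/config/producer.properties <<'EOF'
bootstrap.servers=kafka-dest:9092
EOF
```

The resulting directory can then be mounted with ``-v /tmp/replicator/config:/etc/replicator`` as shown in the examples below.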
Additionally, the image supports passing command-line parameters to the |crep|
executable via environment variables.

The following example starts |crep| given that the local directory
``/mnt/replicator/config``, which is mounted under ``/etc/replicator`` on the
Docker image, contains the required files ``consumer.properties`` and
``producer.properties``, and the optional but often necessary file
``replication.properties``.

.. codewithvars:: bash

   docker run -d \
     --name=ReplicatorX \
     --net=host \
     -e REPLICATOR_LOG4J_ROOT_LOGLEVEL=DEBUG \
     -v /mnt/replicator/config:/etc/replicator \
     confluentinc/cp-enterprise-replicator-executable:|release|

The following similar example starts |crep| without a ``replication.properties``
file, and instead specifies the replication properties with environment
variables. For a complete list of the expected environment variables, see the
configurations in the next sections.

.. codewithvars:: bash

   docker run -d \
     --name=ReplicatorX \
     --net=host \
     -e CLUSTER_ID=replicator-east-to-west \
     -e WHITELIST=confluent \
     -e TOPIC_RENAME_FORMAT='${topic}.replica' \
     -e REPLICATOR_LOG4J_ROOT_LOGLEVEL=DEBUG \
     -v /mnt/replicator/config:/etc/replicator \
     confluentinc/cp-enterprise-replicator-executable:|release|

Required |crep-full| Executable configurations
""""""""""""""""""""""""""""""""""""""""""""""

The following files must be passed to run the |crep| Executable Docker image:

``CONSUMER_CONFIG``
  A file that contains the configuration settings for the consumer reading from
  the origin cluster. Default location is
  ``/etc/replicator/consumer.properties`` in the Docker image.

``PRODUCER_CONFIG``
  A file that contains the configuration settings for the producer writing to
  the destination cluster. Default location is
  ``/etc/replicator/producer.properties`` in the Docker image.

``CLUSTER_ID``
  A string that specifies the unique identifier for the |crep| cluster. Default
  value is ``replicator``.
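Because the image fails at startup if the required property files are missing, it can be convenient to verify the mounted directory before launching the container. The following is a sketch of such a pre-flight check; ``check_replicator_config`` is our own helper name, not part of the image:

```shell
# Pre-flight check (our own helper, not part of the image): verify the two
# required property files exist in the directory that will be mounted at
# /etc/replicator, and fail with a message naming the first missing file.
check_replicator_config() {
  local dir="$1"
  for f in consumer.properties producer.properties; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing required file: $dir/$f" >&2
      return 1
    fi
  done
  echo "ok"
}
```

Usage: ``check_replicator_config /mnt/replicator/config && docker run ...``.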
Optional |crep-full| Executable configurations
""""""""""""""""""""""""""""""""""""""""""""""

The following optional configurations may be passed to the |crep| Executable
via environment variables instead of files:

``REPLICATION_CONFIG``
  A file that contains the configuration settings for replication from the
  origin cluster. Default location is
  ``/etc/replicator/replication.properties`` in the Docker image.

``CONSUMER_MONITORING_CONFIG``
  A file that contains the configuration settings of the producer writing
  monitoring information related to the |crep| consumer. Default location is
  ``/etc/replicator/consumer-monitoring.properties`` in the Docker image.

``PRODUCER_MONITORING_CONFIG``
  A file that contains the configuration settings of the producer writing
  monitoring information related to the |crep| producer. Default location is
  ``/etc/replicator/producer-monitoring.properties`` in the Docker image.

``BLACKLIST``
  A comma-separated list of topics that should not be replicated, even if they
  are included in the whitelist or matched by the regular expression.

``WHITELIST``
  A comma-separated list of the names of topics that should be replicated. Any
  topic that is in this list and not in the blacklist is replicated.

``CLUSTER_THREADS``
  The total number of threads across all workers in the |crep| cluster.

``CONFLUENT_LICENSE``
  The Confluent license key. Without the license key, |crep| can be used for a
  30-day trial period.

``TOPIC_AUTO_CREATE``
  Whether to automatically create topics in the destination cluster if
  required. If you disable automatic topic creation, |kstreams| and |ksqldb|
  applications continue to work, because they use the Admin Client, so topics
  are still created.

``TOPIC_CONFIG_SYNC``
  Whether to periodically sync topic configuration to the destination cluster.

``TOPIC_CONFIG_SYNC_INTERVAL_MS``
  Specifies how frequently to check for configuration changes when
  ``topic.config.sync`` is enabled.
``TOPIC_CREATE_BACKOFF_MS``
  Time to wait before retrying auto topic creation or expansion.

``TOPIC_POLL_INTERVAL_MS``
  Specifies how frequently to poll the source cluster for new topics matching
  the whitelist or regular expression.

``TOPIC_PRESERVE_PARTITIONS``
  Whether to automatically increase the number of partitions in the destination
  cluster to match the source cluster, and ensure that messages replicated from
  the source cluster use the same partition in the destination cluster.

``TOPIC_REGEX``
  A regular expression that matches the names of the topics to be replicated.
  Any topic that matches this expression (or is listed in the whitelist) and is
  not in the blacklist is replicated.

``TOPIC_RENAME_FORMAT``
  A format string for the topic name in the destination cluster, which may
  contain ``${topic}`` as a placeholder for the originating topic name.

``TOPIC_TIMESTAMP_TYPE``
  The timestamp type for the topics in the destination cluster.

|ak| MQTT Proxy configuration
*****************************

For the |cmqtt-full| Docker image, convert the property variables as follows
and use them as environment variables:

* Prefix with ``KAFKA_MQTT_``.
* Convert to upper-case.
* Replace a period (``.``) with a single underscore (``_``).
* Replace a dash (``-``) with double underscores (``__``).
* Replace an underscore (``_``) with triple underscores (``___``).

Required |ak| MQTT Proxy configurations
"""""""""""""""""""""""""""""""""""""""

The following configurations must be passed to run the |cmqtt-full| Docker
image.

``KAFKA_MQTT_BOOTSTRAP_SERVERS``
  A host:port pair for establishing the initial connection to the |ak| cluster.
  Multiple bootstrap servers can be used in the form
  ``host1:port1,host2:port2,host3:port3...``.

``KAFKA_MQTT_TOPIC_REGEX_LIST``
  A comma-separated list of pairs of the form ``<kafka topic>:<regex>`` that is
  used to map MQTT topics to |ak| topics.

.. _zk-config:

|zk| configuration
******************

For the |zk| (``cp-zookeeper``) image, convert the ``zookeeper.properties``
file variables as follows and use them as environment variables:

* Prefix with ``ZOOKEEPER_``.
* Convert to upper-case.
* Separate each word with ``_``.
* Replace a period (``.``) with a single underscore (``_``).
* Replace a dash (``-``) with double underscores (``__``).
* Replace an underscore (``_``) with triple underscores (``___``).

For example, to set ``clientPort``, ``tickTime``, and ``syncLimit``, run the
following command:

.. codewithvars:: bash

   docker run -d \
     --net=host \
     --name=zookeeper \
     -e ZOOKEEPER_CLIENT_PORT=32181 \
     -e ZOOKEEPER_TICK_TIME=2000 \
     -e ZOOKEEPER_SYNC_LIMIT=2 \
     confluentinc/cp-zookeeper:|release|

Required |zk| configurations
""""""""""""""""""""""""""""

``ZOOKEEPER_CLIENT_PORT``
  Instructs |zk| where to listen for connections by clients such as |ak-tm|.

``ZOOKEEPER_SERVER_ID``
  Required when running an ensemble. An ID that uniquely identifies each |zk|
  server in the ensemble. The ID must be unique within the ensemble and should
  have a value between 1 and 255. The ID is stored in the ``myid`` file, which
  consists of a single line that contains only the text of that machine's ID.
  For example, the ``myid`` of server 1 would contain only the text ``"1"``.

``ZOOKEEPER_SERVERS``
  Required when running an ensemble. Specifies the list of |zk| servers in an
  ensemble that work together to provide a highly available and fault-tolerant
  service. This should be a semicolon-separated list of
  ``hostname:clientport:electionport`` entries. For example:

  .. code-block:: bash

     zk1:2181:2888;zk2:2181:2888;zk3:2181:2888

For more about running |zk| as an ensemble, see :ref:`ensemble-zk`. For an
example of using Docker with |ak| and |zk| in a multi-node setup, see
:ref:`mrc-tutorial`.
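The property-to-environment-variable conversion rules repeated throughout this topic (prefix, upper-case, ``.`` to ``_``, ``-`` to ``__``, ``_`` to ``___``) can be sketched as a small shell (bash) function. The function name ``to_env_var`` is ours, for illustration only; note that existing underscores must be expanded before periods are converted, or the replacements would collide:

```shell
# Convert a properties-file key to its Docker environment-variable name:
# expand "_" to "___" first, then "-" to "__", then "." to "_",
# upper-case the result, and prepend the component prefix.
to_env_var() {
  local prefix="$1" prop="$2"
  prop="${prop//_/___}"   # underscore -> triple underscore (must run first)
  prop="${prop//-/__}"    # dash -> double underscore
  prop="${prop//./_}"     # period -> single underscore
  echo "${prefix}$(echo "$prop" | tr '[:lower:]' '[:upper:]')"
}

to_env_var CONNECT_ bootstrap.servers     # -> CONNECT_BOOTSTRAP_SERVERS
to_env_var KAFKA_MQTT_ topic.regex.list   # -> KAFKA_MQTT_TOPIC_REGEX_LIST
```

Note that camelCase |zk| properties such as ``tickTime`` additionally need each word separated with ``_`` (``ZOOKEEPER_TICK_TIME``), which this mechanical sketch does not attempt.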
Related content
***************

- :ref:`cpdocker_intro`
- :ref:`image_reference`
- :ref:`docker_operations_logging`
- :ref:`use-jmx-monitor-docker-deployments`
- :ref:`external_volumes`
- :ref:`cp-multi-node`