.. _controlcenterquickstart:

Quick Start
===========

With this quick start you will set up and configure ZooKeeper, Kafka, Kafka Connect, and Control Center. You will then read and write data to and from Kafka.

.. contents::
   :depth: 3
   :local:

**Prerequisites:**

* :ref:`Confluent Platform Enterprise Edition`. You can get up and running with the full Confluent Platform quickly on a single server, or you can :ref:`Deploy using Docker`.

.. _controlcenter_metrics_reporter_config:

Configure Kafka
^^^^^^^^^^^^^^^

Before starting Control Center, you must configure Metrics Reporter, Kafka, and Kafka Connect.

#. Configure Confluent Metrics Reporter.

   #. Optional: Copy the default Kafka server configuration (``/etc/kafka/server.properties``) and save the original.

      .. sourcecode:: bash

         $ cp /etc/kafka/server.properties /tmp/server.properties

   #. Uncomment these Metrics Reporter values::

         ##################### Confluent Metrics Reporter #######################
         # Confluent Control Center and Confluent Auto Data Balancer integration
         #
         # Uncomment the following lines to publish monitoring data for
         # Confluent Control Center and Confluent Auto Data Balancer
         # If you are using a dedicated metrics cluster, also adjust the settings
         # to point to your metrics Kafka cluster.
         metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
         confluent.metrics.reporter.bootstrap.servers=localhost:9092
         confluent.metrics.reporter.zookeeper.connect=localhost:2181
         #
         # Uncomment the following line if the metrics cluster has a single broker
         confluent.metrics.reporter.topic.replicas=1

      **Tip:** You can do this using ``sed`` commands:

      .. sourcecode:: bash

         $ sed -i 's/#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter/metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter/g' /tmp/server.properties && \
           sed -i 's/#confluent.metrics.reporter.bootstrap.servers=localhost:9092/confluent.metrics.reporter.bootstrap.servers=localhost:9092/g' /tmp/server.properties && \
           sed -i 's/#confluent.metrics.reporter.zookeeper.connect=localhost:2181/confluent.metrics.reporter.zookeeper.connect=localhost:2181/g' /tmp/server.properties && \
           sed -i 's/#confluent.metrics.reporter.topic.replicas=1/confluent.metrics.reporter.topic.replicas=1/g' /tmp/server.properties

#. Optional: Copy the settings for Kafka Connect (``connect-distributed.properties``) and save the original.

   .. sourcecode:: bash

      $ cp /etc/kafka/connect-distributed.properties /tmp/connect-distributed.properties

#. Add support for the interceptors to the Connect properties (``connect-distributed.properties``) file.

   .. sourcecode:: bash

      $ cat <<EOF >> /tmp/connect-distributed.properties

      # Interceptor setup
      consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
      producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
      EOF

#. Start Confluent Platform.

   .. sourcecode:: bash

      $ confluent start
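Before configuring Control Center, it can help to confirm that every service came up. A minimal sketch, assuming the same legacy ``confluent`` CLI used by ``confluent start`` above:

.. sourcecode:: bash

   # List each Confluent Platform service and its state; all services
   # should report "UP" before you continue.
   $ confluent status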
Configure and start Control Center
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

#. Start Control Center in its own terminal (set to run with one replica):

   #. Optional: Copy the ``control-center.properties`` file and save the original.

      .. sourcecode:: bash

         $ cp ./etc/confluent-control-center/control-center.properties /tmp/control-center.properties

   #. Define these values in your properties file.

      .. sourcecode:: bash

         $ cat <<EOF >> /tmp/control-center.properties

         # Quick start partition and replication values
         confluent.controlcenter.internal.topics.partitions=1
         confluent.controlcenter.internal.topics.replication=1
         confluent.controlcenter.command.topic.replication=1
         confluent.monitoring.interceptor.topic.partitions=1
         confluent.monitoring.interceptor.topic.replication=1
         confluent.metrics.topic.partitions=1
         confluent.metrics.topic.replication=1
         EOF

   #. Start Control Center.

      .. sourcecode:: bash

         $ ./bin/control-center-start /tmp/control-center.properties

You can navigate to the Control Center web interface at http://localhost:9021/.

Set up stream monitoring
^^^^^^^^^^^^^^^^^^^^^^^^

Now that you have all of the services running, you can start building a data pipeline. As an example, you can create a small job that generates data.

#. Open an editor, enter the following text (our apologies to William Carlos Williams), and save this as ``totail.sh``.

   .. sourcecode:: bash

      cat <<EOF >> totail.sh
      #!/usr/bin/env bash

      file=/tmp/totail.txt

      while true; do
          echo This is just to say >> \${file}
          echo >> \${file}
          echo I have eaten >> \${file}
          echo the plums >> \${file}
          echo that were in >> \${file}
          echo the icebox >> \${file}
          echo >> \${file}
          echo and which >> \${file}
          echo you were probably >> \${file}
          echo saving >> \${file}
          echo for breakfast >> \${file}
          echo >> \${file}
          echo Forgive me >> \${file}
          echo they were delicious >> \${file}
          echo so sweet >> \${file}
          echo and so cold >> \${file}
          sleep 1
      done
      EOF

#. Start this script. It writes the poem to ``/tmp/totail.txt`` once per second; Kafka Connect is used to load that into a Kafka topic.

   #. Run ``chmod`` to grant user execution permissions.

      .. sourcecode:: bash

         $ sudo chmod u+x totail.sh

   #. Run the script.

      .. sourcecode:: bash

         $ ./totail.sh

#. Use the Kafka Topics tool to create a new topic:

   .. sourcecode:: bash

      $ ./bin/kafka-topics --zookeeper localhost:2181 --create --topic poem \
        --partitions 1 --replication-factor 1

#. From the Control Center web interface at http://localhost:9021/, click the **Kafka Connect** button on the left side. This page shows a list of the sources that have been configured; by default it is empty. Click the **New source** button.

   .. figure:: images/c3newsource.png

#. From the **Connector Class** drop-down menu, select ``FileStreamSourceConnector``. Specify the **Connection Name** as ``Poem File Source``. Once you have specified a name for the connection, a set of other configuration options appears.

   .. figure:: images/c3filestreamsourceconnector.png

#. In the **General** section, specify the **file** as ``/tmp/totail.txt`` and the **topic** as ``poem``.

   .. figure:: images/c3specifypoem.png

#. Click **Continue**, verify your settings, and then click **Save & Finish** to apply the new configuration.

   .. figure:: images/c3verifyfinal.png

#. Create a new sink.

   #. From the Kafka Connect tab, click the **Sinks** tab and then **New sink**.

      .. figure:: images/c3newsink.png

   #. From the **Topics** drop-down list, choose **poem** and click **Continue**.

      .. figure:: images/c3poemsink.png

   #. In the **Sinks** tab, set the **Connection Class** to ``FileStreamSinkConnector``, specify the **Connection Name** as ``Poem File Sink``, and in the **General** section specify the **file** as ``sunk.txt``.

      .. figure:: images/filestreamsinkconnector.png

   #. Click **Continue**, verify your settings, and then click **Save & Finish** to apply the new configuration.

      .. figure:: images/c3poemfilesink.png

Now that you have data flowing into and out of Kafka, let's monitor what's going on!
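Before opening the monitoring view, you can also verify from a terminal that records are flowing end to end. A minimal sketch, assuming the topic and file names used above; note that a relative sink path such as ``sunk.txt`` is resolved against the Connect worker's working directory:

.. sourcecode:: bash

   # Read the first few records from the `poem` topic to confirm the
   # source connector is producing.
   $ ./bin/kafka-console-consumer --bootstrap-server localhost:9092 \
     --topic poem --from-beginning --max-messages 5

   # Watch the sink connector append lines to the output file.
   $ tail -f sunk.txt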
#. Click the **Data Streams** tab. You will see a chart that shows the total number of messages produced and consumed on the cluster. If you scroll down, you will see more details on the consumer group for your sink. Depending on your machine, this chart may take a few minutes to populate.

   .. figure:: images/c3qsfinalview.png

This quick start described Kafka, Kafka Connect, and Control Center. For component-specific quick start guides, see the documentation:

* :ref:`Kafka Streams Quick Start`
* :ref:`Kafka Connect Quick Start`
* :ref:`Kafka REST Proxy Quick Start`
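When you are done exploring, you can shut the quick start environment down. A hedged sketch, assuming the same legacy ``confluent`` CLI used throughout; be aware that ``confluent destroy`` also deletes the data generated during this session:

.. sourcecode:: bash

   # Stop the totail.sh loop in its terminal with Ctrl-C, then stop all
   # services that `confluent start` launched.
   $ confluent stop

   # Optional: stop the services and delete their data and logs.
   $ confluent destroy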