Confluent Platform Quick Start

You can get up and running with the full Confluent Platform quickly on a single server. In this quick start you will run ZooKeeper, Kafka, and Schema Registry, and then write and read some Avro data to and from Kafka.


To start a data pipeline using Control Center, see Control Center Quick Start.

  1. Download and install the Confluent Platform using one of the installation options. This quick start uses the ZIP archive.

    Here is a high-level view of the contents of the package:

    <path-to-confluent>/bin/        # Driver scripts for starting/stopping services
    <path-to-confluent>/etc/        # Configuration files
    <path-to-confluent>/share/java/ # Jars

    If you installed from deb or rpm packages, the contents are installed globally and you’ll need to adjust the paths used below:

    /usr/bin/                  # Confluent CLI and individual driver scripts for starting/stopping services, prefixed with <package> names
    /etc/<package>/            # Configuration files
    /usr/share/java/<package>/ # Jars
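
    If you chose the ZIP archive, unpacking it produces a directory with the layout shown above. A minimal sketch, where <version> stands in for whichever release you downloaded:

    $ unzip confluent-<version>.zip
    $ cd confluent-<version>
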
  2. Start the ZooKeeper, Kafka, and Schema Registry services using the Confluent Command Line Interface (CLI).


    If not already in your PATH, add Confluent’s bin directory by running:

    $ export PATH=<path-to-confluent>/bin:$PATH

    To start just ZooKeeper, Kafka, and Schema Registry, run the following; the CLI starts each service’s dependencies in order:

    $ confluent start schema-registry

    Each service reads its configuration from its property files under etc. The Confluent Platform quick starts use default properties unless stated otherwise. After issuing the command above, the services start in order, printing a status message as follows:

    Starting zookeeper
    zookeeper is [UP]
    Starting kafka
    kafka is [UP]
    Starting schema-registry
    schema-registry is [UP]
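
    To check on the services later, the same CLI can report their status and show a service's log; for example (output will vary with your setup):

    $ confluent status
    $ confluent log kafka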


    Alternatively, to manually start each service in its own terminal, the equivalent commands are:

    $ ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties
    $ ./bin/kafka-server-start ./etc/kafka/server.properties
    $ ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties

    Now that you have all of the services running, you can send some Avro data to a Kafka topic. Although you would normally do this from within your applications, this quick start uses the Kafka Avro Console producer utility (kafka-avro-console-producer) to send the data without having to write any code.

  3. Start the Kafka Avro Console Producer utility. The command below points it at your local Kafka cluster, configures it to write to the topic test and to read each line of input as an Avro message, and supplies the schema that each message must match; the schema is registered with and validated by the local Schema Registry.

    $ <path-to-confluent>/bin/kafka-avro-console-producer \
             --broker-list localhost:9092 --topic test \
             --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
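
    The producer finds Schema Registry through the schema.registry.url property; in this quick start that is the local instance on its default port, http://localhost:8081. If your Schema Registry runs elsewhere, you can pass the URL explicitly, as in this sketch:

    $ <path-to-confluent>/bin/kafka-avro-console-producer \
             --broker-list localhost:9092 --topic test \
             --property schema.registry.url=http://localhost:8081 \
             --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'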

    Tip: After the producer is started, the process will wait for you to enter messages and your terminal may appear idle.

    Enter one message per line and press the Enter key to send each one immediately. Try entering a few messages:

    {"f1": "value1"}
    {"f1": "value2"}
    {"f1": "value3"}

    When you’re done, use Ctrl+C to shut down the process.


    If you press Enter with an empty line, it will be interpreted as a null value and cause an error. You can simply start the console producer again to continue sending messages.

  4. Now we can check that the data was produced, using Kafka’s console consumer to read it back from the topic. We point the consumer at the same test topic and our local ZooKeeper instance, tell it to decode each message as Avro, looking up schemas in the same Schema Registry, and finally tell it to start from the beginning of the topic (by default the consumer reads only messages published after it starts).

    $ ./bin/kafka-avro-console-consumer --topic test \
             --zookeeper localhost:2181 \
             --from-beginning

    You should see all the messages you created in the previous step written to the console in the same format.

    The consumer does not exit after reading all the messages so it can listen for and process new messages as they are published. Try keeping the consumer running and repeating step 3 – you will see messages delivered to the consumer immediately after you hit Enter for each message in the producer.

    When you’re done, shut down the consumer with Ctrl+C.
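
    You can also confirm that the schema from step 3 was registered. By default it is stored under the subject test-value (the topic name plus "-value"); a quick check against Schema Registry's REST API, assuming the default http://localhost:8081:

    $ curl http://localhost:8081/subjects
    $ curl http://localhost:8081/subjects/test-value/versions/latest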

  5. Now let’s try to produce data to the same topic using an incompatible schema. We’ll run the producer with nearly the same command, but change the schema to expect plain integers.

    $ ./bin/kafka-avro-console-producer \
             --broker-list localhost:9092 --topic test \
             --property value.schema='{"type":"int"}'

    Now if you enter an integer and press Enter, you should see the following (expected) exception:

    org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: "int"
    Caused by: Schema being registered is incompatible with the latest schema; error code: 409
           at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(
           at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(
           at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(
           at io.confluent.kafka.formatter.AvroMessageReader.readMessage(

    When the producer tried to send a message, it checked the schema with the Schema Registry, which returned an error: the new schema is invalid because it does not preserve backward compatibility with the previously registered schema (backward compatibility being the default Schema Registry setting). The console producer simply reports this error and exits, but your own applications could handle the problem more gracefully. Most importantly, we’ve guaranteed that no incompatible data was published to Kafka.
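
    If you want to run the same compatibility check yourself, Schema Registry exposes it over its REST API. A minimal sketch against the test-value subject at the default http://localhost:8081; it should report that the plain "int" schema is not compatible:

    $ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
           --data '{"schema": "\"int\""}' \
           http://localhost:8081/compatibility/subjects/test-value/versions/latest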

  6. When you’re done testing, you can use confluent stop to shut down each service in the right order. To completely delete any data produced during this test and start from a clean slate next time, run confluent destroy instead; it deletes all of the services’ data, which would otherwise be persisted across restarts.
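
    For example:

    $ confluent stop       # stop the services but keep their data
    $ confluent destroy    # stop the services and delete their data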

This simple guide covered only Kafka and Schema Registry to get you started with the core services. See the documentation for each component for a quick start guide specific to that component: