Kafka Command-Line Interface (CLI) Tools

Once you download Apache Kafka® and extract the files, you can access several CLI tools under the /bin directory.

These tools enable you to start and stop Kafka, create and update topics, manage partitions and many more common operations.

You can get details on how to run each tool by running it with no arguments, or by using the --help argument.

Confluent Tip

When you install Confluent Platform, all of the Kafka tools listed in this topic are also installed in the $CONFLUENT_HOME/bin folder, along with additional Confluent tools. Confluent has dropped the .sh extensions, so you do not need to use the extensions when calling the Confluent versions of these tools. For more information, see CLI Tools for Confluent Platform.

The following sections group the tools by function and provide basic usage information. In some cases, a tool is listed in more than one section.

Tools to start and stop Kafka and configure metadata

This section contains tools to start and stop Kafka, whether it runs with ZooKeeper or in KRaft mode, and to manage brokers and cluster metadata.

kafka-server-start.sh

Use this tool to start a Kafka server. You can optionally pass the path to the properties file you want to use. If you are using ZooKeeper for metadata management, you must start ZooKeeper first. In KRaft mode, you must first generate a cluster ID and format the storage directories with it. See the Kafka quickstart for an example of how to start Kafka.

kafka-server-stop.sh

Stops the running Kafka server. This command takes no arguments.

zookeeper-server-start.sh

Starts the ZooKeeper server. See the Kafka quickstart for an example of how to use this command.

zookeeper-server-stop.sh

Stops the ZooKeeper server. This command accepts no arguments.

kafka-storage.sh

A tool to generate a Cluster UUID and format storage with the ID when running in KRaft mode. See the Kafka quickstart for an example of how to use this command.
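As a sketch of the KRaft flow, you might first write a minimal KRaft properties file, then generate a cluster ID and format storage with it. All IDs, addresses, and paths below are hypothetical example values, not required settings:

```shell
# Write a minimal KRaft configuration for a single combined
# broker/controller node (all values here are hypothetical examples).
cat > kraft-server.properties <<'EOF'
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
log.dirs=/tmp/kraft-combined-logs
EOF

# With an extracted Kafka distribution, you would then generate a
# cluster ID and format the log directories before the first start
# (shown as comments; these require the Kafka binaries):
#   ./bin/kafka-storage.sh random-uuid
#   ./bin/kafka-storage.sh format -t <uuid> -c kraft-server.properties
```

After formatting, kafka-server-start.sh can be pointed at the same properties file.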

kafka-cluster.sh

Use this tool to get the ID of a cluster or to unregister a cluster. Example: ./bin/kafka-cluster.sh cluster-id --bootstrap-server <hostname:port>.

zookeeper-shell.sh

This tool opens an interactive ZooKeeper shell, which enables you to inspect ZooKeeper data directly.

kafka-features.sh

Manages feature flags that enable or disable Kafka functionality at runtime. Pass the describe argument to list the current active feature flags, upgrade to upgrade one or more feature flags, downgrade to downgrade one or more, and disable to disable one or more feature flags, which is the same as downgrading the version to zero.

kafka-broker-api-versions.sh

This tool retrieves and displays broker information. Example: ./bin/kafka-broker-api-versions.sh --bootstrap-server <hostname:port>.

kafka-metadata-quorum.sh

Describes the status of the KRaft metadata quorum. Pass the describe command to describe the current state of the metadata quorum. Example: ./bin/kafka-metadata-quorum.sh --bootstrap-server <hostname:port> describe --status.

kafka-metadata-shell.sh

A tool that enables you to interactively examine the metadata stored in a KRaft cluster. For more information, see the Kafka Wiki.

kafka-configs.sh

Use this tool to describe and change topic, client, user, broker, IP, or cluster link configuration settings. To change a property, set the entity-type option to the desired entity (topic, broker, user, and so on), and use the alter option. Example: ./bin/kafka-configs.sh --bootstrap-server <hostname:port> --entity-type topics --entity-name test-topic --alter --add-config retention.ms=86400000. See Topic Operations for examples of how to modify topics.

zookeeper-security-migration.sh

This tool restricts or grants access to ZooKeeper metadata by updating the ACLs of znodes.

Topic, partition, and replica tools

kafka-topics.sh

Use to create, delete, or change a topic. You can also use the tool to retrieve a list of topics associated with a Kafka cluster. See Topic Operations.

Example: kafka-topics.sh --create --bootstrap-server <hostname:port> --topic test-topic --partitions 3.

kafka-configs.sh

Use this tool to describe and change topic, client, user, broker, IP, or cluster link configuration settings. To change a property, set the entity-type option to the desired entity (topic, broker, user, and so on), and use the alter option. Example: ./bin/kafka-configs.sh --bootstrap-server <hostname:port> --entity-type topics --entity-name test-topic --alter --add-config retention.ms=86400000. See Topic Operations for examples of how to modify topics.

kafka-get-offsets.sh

A tool to retrieve topic-partition offsets. Example: ./bin/kafka-get-offsets.sh --bootstrap-server <hostname:port> --topic test-topic.

kafka-leader-election.sh

This tool attempts to elect a new leader for a set of topic partitions.

Run this tool manually to restore leadership if the auto.leader.rebalance.enable property is set to false.

kafka-transactions.sh

A tool to list and describe transactions. Use it to detect and abort hanging transactions. See Detect and Abort Hanging Transactions.

kafka-reassign-partitions.sh

A tool to move topic partitions between replicas. You pass a JSON-formatted file that specifies the new replicas. To learn more, see Changing the replication factor in the Confluent docs.
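As a sketch, the reassignment file lists each partition and the broker IDs that should hold its replicas. The topic name and broker IDs below are hypothetical example values:

```shell
# A hypothetical reassignment file moving partition 0 of "my-topic"
# to brokers 1 and 2 (topic name and broker IDs are example values).
cat > reassign.json <<'EOF'
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [1, 2]}
  ]
}
EOF

# Against a running cluster you would then execute the reassignment
# (shown as a comment; this requires live brokers):
#   ./bin/kafka-reassign-partitions.sh --bootstrap-server <hostname:port> \
#     --reassignment-json-file reassign.json --execute
```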

kafka-delete-records.sh

This tool deletes partition records. Use this if a topic receives bad data. Pass a JSON-formatted file that specifies the topic, partition, and offset for data deletion. Data will be deleted up to the offset specified. Example: ./kafka-delete-records.sh --bootstrap-server <hostname:port> --offset-json-file deleteme.json
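As a sketch, the offset file names each topic partition and the offset below which records should be deleted. The topic, partition, and offset below are hypothetical example values:

```shell
# A hypothetical offset file: delete records in partition 0 of
# "my-topic" up to offset 3 (all values are examples).
cat > deleteme.json <<'EOF'
{
  "partitions": [
    {"topic": "my-topic", "partition": 0, "offset": 3}
  ],
  "version": 1
}
EOF

# Against a live cluster you would then run (shown as a comment;
# this requires a running broker):
#   ./bin/kafka-delete-records.sh --bootstrap-server <hostname:port> \
#     --offset-json-file deleteme.json
```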

kafka-log-dirs.sh

This tool gets a list of replicas per log directory on a broker.

kafka-replica-verification.sh

Used to verify that all replicas of a topic contain the same data. Requires a broker-list parameter that contains a list of <hostname:port> entries specifying the server/port to connect to.

kafka-mirror-maker.sh

DEPRECATED: Use connect-mirror-maker.sh instead. Enables the creation of a replica of an existing Kafka cluster. Example (using the replacement tool): ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters secondary. To learn more, see Kafka mirroring.

connect-mirror-maker.sh

Replicates topics from one cluster to another using the Connect framework. You must pass an mm2.properties MM2 configuration file. For more information, see KIP-382: MirrorMaker 2.0.
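As a sketch, an mm2.properties file names the cluster aliases, their bootstrap servers, and which replication flows are enabled. The aliases and addresses below are hypothetical example values:

```shell
# A minimal hypothetical MM2 configuration replicating all topics
# from a "primary" cluster to a "secondary" cluster
# (aliases and host addresses are example values).
cat > mm2.properties <<'EOF'
clusters = primary, secondary
primary.bootstrap.servers = primary-host:9092
secondary.bootstrap.servers = secondary-host:9092
primary->secondary.enabled = true
primary->secondary.topics = .*
EOF

# You would then start replication with (shown as a comment;
# this requires both clusters to be reachable):
#   ./bin/connect-mirror-maker.sh mm2.properties
```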

Client, producer, and consumer tools

kafka-verifiable-consumer.sh

This tool consumes messages from a topic and emits consumer events, such as group rebalances, received messages, and committed offsets, as JSON objects to STDOUT. Intended for internal testing.

kafka-configs.sh

Use this tool to describe and change topic, client, user, broker, IP, or cluster link configuration settings. To change a property, set the entity-type option to the desired entity (topic, broker, user, and so on), and use the alter option. Example: ./bin/kafka-configs.sh --bootstrap-server <hostname:port> --entity-type topics --entity-name test-topic --alter --add-config retention.ms=86400000. See Topic Operations for examples of how to modify topics.

kafka-verifiable-producer.sh

This tool produces increasing integers to the specified topic and prints JSON metadata to STDOUT on each send request, showing which messages have been acked and which have not. Used for internal testing.

kafka-console-consumer.sh

A tool that enables you to consume records from a topic. Requires a bootstrap-server parameter that contains one or more <hostname:port> entries specifying the brokers to connect to. Example: kafka-console-consumer.sh --bootstrap-server <BOOTSTRAP_BROKER_LIST> --consumer.config config.properties --topic <TOPIC_NAME> --property "print.key=true"

kafka-console-producer.sh

Use to produce records to a topic. Requires a broker-list parameter that contains a list of <hostname:port> entries specifying the servers to connect to. Example: kafka-console-producer.sh --broker-list <BOOTSTRAP_BROKER_LIST> --producer.config config.properties --topic <TOPIC_NAME> --property "parse.key=true" --property "key.separator=:"

kafka-producer-perf-test.sh

Enables you to produce a large quantity of data to test producer performance for the Kafka cluster. Example: ./bin/kafka-producer-perf-test.sh --topic topic-a --num-records 200000 --record-size 1000 --throughput 10000000 --producer-props bootstrap.servers=<hostname:port>

kafka-consumer-groups.sh

This tool lists the active consumer groups in the cluster. Example: ./bin/kafka-consumer-groups.sh --bootstrap-server <hostname:port> --list. For more examples, see Manage Consumer Groups.

kafka-consumer-perf-test.sh

This tool tests consumer performance for the Kafka cluster. Example: ./bin/kafka-consumer-perf-test.sh --bootstrap-server <hostname:port> --topic test-topic --messages 100000.

Connect tools

connect-distributed.sh

Runs Connect workers in distributed mode, meaning across multiple machines. Distributed mode handles automatic balancing of work, lets you scale up or down dynamically, and offers fault tolerance both for the active tasks and for configuration and offset commit data.

connect-standalone.sh

Runs Connect in standalone mode, meaning all work is performed in a single process. This is good for getting started, but lacks fault tolerance. For more information, see Kafka Connect.

connect-mirror-maker.sh

Replicates topics from one cluster to another using the Connect framework. You must pass an mm2.properties MM2 configuration file. For more information, see KIP-382: MirrorMaker 2.0.

Kafka Streams tools

kafka-streams-application-reset.sh

For Kafka Streams applications, this tool resets the application and forces it to reprocess its data from the beginning. Useful for debugging and testing. For more information, see Kafka Streams Application Reset Tool in the Confluent documentation.

Security tools

kafka-acls.sh

A tool for adding, removing, and listing ACLs. Example: ./bin/kafka-acls.sh --bootstrap-server <hostname:port> --add --allow-principal User:alice --operation Read --topic test-topic. For more information, see ACL Command-line interface.

kafka-delegation-tokens.sh

This tool creates, renews, expires, and describes delegation tokens. For more information, see Authentication using Delegation Tokens in the Confluent documentation.

zookeeper-security-migration.sh

This tool restricts or grants access to ZooKeeper metadata by updating the ACLs of znodes.

Additional testing and miscellaneous tools

This section contains tools you can use for testing and troubleshooting your applications.

kafka-dump-log.sh

This tool parses a log file and dumps its contents to the console, which is useful for debugging a corrupt log segment. Requires a comma-separated list of data and index log files to dump. Example: ./bin/kafka-dump-log.sh --files file1,file2

trogdor.sh

Trogdor is a test framework for Kafka. It can run benchmarks and other workloads, and it can inject faults to stress test the system. For more information, see Trogdor in the Apache Kafka repository.

kafka-run-class.sh

This tool is a thin wrapper that runs Kafka's Java classes with the correct classpath. It is called by the other tools, and should not be run or modified directly.