kafkacat Utility
kafkacat is a command line utility that you can use to test and debug Apache Kafka® deployments. You can use kafkacat to produce, consume, and list topic and partition information for Kafka. Described as “netcat for Kafka”, it is a swiss-army knife of tools for inspecting and creating data in Kafka.
It is similar to the Kafka Console Producer (kafka-console-producer) and Kafka Console Consumer (kafka-console-consumer), but even more powerful.
Important
kafkacat is an open-source utility, available at https://github.com/edenhill/kafkacat. It is not supported by Confluent and is not included in Confluent Platform.
Consumer Mode
In consumer mode, kafkacat reads messages from a topic and partition and prints them to standard output (stdout). You must specify a Kafka broker (-b) and topic (-t). You can optionally specify a delimiter (-D); the default delimiter is newline.
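For example, here is a minimal sketch of overriding the default delimiter so that consumed messages are separated by a semicolon instead of a newline (assuming the mysql_users topic used in the examples below):
kafkacat -b localhost:9092 -t mysql_users -C -D ';'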
You can supply kafkacat with a broker (-b) and a topic (-t) and view its contents:
kafkacat -b localhost:9092 -t mysql_users
% Auto-selecting Consumer mode (use -P or -C to override)
{"uid":1,"name":"Cliff","locale":"en_US","address_city":"St Louis","elite":"P"}
{"uid":2,"name":"Nick","locale":"en_US","address_city":"Palo Alto","elite":"G"}
[...]
kafkacat automatically selects its mode depending on the terminal or pipe type, as illustrated below.
- If data is being piped to kafkacat, it will automatically select producer (-P) mode.
- If data is being piped from kafkacat (e.g. standard terminal output), it will automatically select consumer (-C) mode.
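For example, a sketch using a hypothetical topic named my_topic: piping data into kafkacat selects producer mode, while piping its output into another command selects consumer mode:
echo 'hello' | kafkacat -b localhost:9092 -t my_topic
kafkacat -b localhost:9092 -t my_topic | head -1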
You can explicitly specify the mode by using the consumer (-C) or producer (-P) flag. You can also specify how many messages to consume with the lowercase -c flag and a count (e.g. -c<num>). For example, to consume a single message:
kafkacat -b localhost:9092 -t mysql_users -C -c1
{"uid":1,"name":"Cliff","locale":"en_US","address_city":"St Louis","elite":"P"}
You can view the message key by using the -K argument with a delimiter. For example, to view the message key with a tab delimiter:
kafkacat -b localhost:9092 -t mysql_users -C -c1 -K\t
1 {"uid":1,"name":"Cliff","locale":"en_US","address_city":"St Louis","elite":"P"}
The -f flag takes arguments specifying both the format of the output and the fields to include. Here’s a simple example of pretty-printing the key and value pairs for each message:
kafkacat -b localhost:9092 -t mysql_users -C -c1 -f 'Key: %k\nValue: %s\n'
Key: 1
Value: {"uid":1,"name":"Cliff","locale":"en_US","address_city":"St Louis","elite":"P"}
Note that the -K argument is no longer needed because the key is now specified in the -f format string.
A more advanced use of -f would be to show even more metadata: offsets, timestamps, and even data lengths:
kafkacat -b localhost:9092 -t mysql_users -C -c2 -f '\nKey (%K bytes): %k\t\nValue (%S bytes): %s\nTimestamp: %T\tPartition: %p\tOffset: %o\n--\n'
Key (1 bytes): 1
Value (79 bytes): {"uid":1,"name":"Cliff","locale":"en_US","address_city":"St Louis","elite":"P"}
Timestamp: 1520618381093 Partition: 0 Offset: 0
--
Key (1 bytes): 2
Value (79 bytes): {"uid":2,"name":"Nick","locale":"en_US","address_city":"Palo Alto","elite":"G"}
Timestamp: 1520618381093 Partition: 0 Offset: 1
--
Producer Mode
In producer mode, kafkacat reads messages from standard input (stdin). You must specify a Kafka broker (-b) and topic (-t). You can optionally specify a delimiter (-D); the default delimiter is newline.
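As a sketch (assuming the new_topic topic created below), you can override the delimiter to split a single line of piped input into multiple messages:
printf 'msg1;msg2;msg3' | kafkacat -b localhost:9092 -t new_topic -P -D ';'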
You can easily send data to a topic using kafkacat. Run it with the -P flag, enter the data you want, and then press Ctrl-D to finish:
kafkacat -b localhost:9092 -t new_topic -P
test
Replay it (replace -P with -C) to verify:
kafkacat -b localhost:9092 -t new_topic -C
test
You can also send data to kafkacat from a file. The following example treats each line of the file /tmp/msgs as an individual message by using the -l flag. Without the -l flag, the entire file is treated as a single message, which is useful for sending binary data. This example also uses the -T flag to echo the input to stdout.
kafkacat -b localhost:9092 -t <my_topic> -T -P -l /tmp/msgs
These are
three messages
sent through kafkacat
You can specify the key for messages using the same -K parameter plus delimiter character that was used in the previous consumer example:
kafkacat -b localhost:9092 -t keyed_topic -P -K:
1:foo
2:bar
kafkacat -b localhost:9092 -t keyed_topic -C -f 'Key: %k\nValue: %s\n'
Key: 1
Value: foo
Key: 2
Value: bar
You can set the partition to produce to by using the -p flag:
kafkacat -b localhost:9092 -t partitioned_topic -P -K: -p 1
1:foo
kafkacat -b localhost:9092 -t partitioned_topic -P -K: -p 2
2:bar
kafkacat -b localhost:9092 -t partitioned_topic -P -K: -p 3
3:wibble
Replay it, using the same -f format string as above:
kafkacat -b localhost:9092 -t partitioned_topic -C -f '\nKey (%K bytes): %k\t\nValue (%S bytes): %s\nTimestamp: %T\tPartition: %p\tOffset: %o\n--\n'
% Reached end of topic partitioned_topic [0] at offset 0
Key (1 bytes): 1
Value (3 bytes): foo
Timestamp: 1520620113485 Partition: 1 Offset: 0
--
Key (1 bytes): 2
Value (3 bytes): bar
Timestamp: 1520620121165 Partition: 2 Offset: 0
--
Key (1 bytes): 3
Value (6 bytes): wibble
Timestamp: 1520620129112 Partition: 3 Offset: 0
--
Metadata Listing Mode
In metadata list mode (-L), kafkacat displays the current state of the Kafka cluster and its topics, partitions, replicas, and in-sync replicas (ISR).
kafkacat -b localhost:9092 -L
Add the JSON (-J) option to have the output emitted as JSON. This can be useful when passing this data to other applications for further processing.
kafkacat -b mybroker -L -J
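For example, a sketch of pulling out just the topic names by piping the JSON into jq (assuming jq is installed and that the metadata JSON contains a topics array, as current kafkacat versions emit):
kafkacat -b mybroker -L -J | jq '.topics[].topic'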
For more information and examples, see the kafkacat GitHub repository.