Node.js
In this tutorial, you will run a Node.js client application that produces messages to and consumes messages from an Apache Kafka® cluster.
After you run the tutorial, view the provided source code and use it as a reference to develop your own Kafka client application.
Prerequisites
Client
- Node.js version 8.6 or higher installed on your local machine.
- Users of macOS 10.13 (High Sierra) and later should read node-rdkafka's additional configuration instructions related to OpenSSL before running `npm install`.
- OpenSSL version 1.0.2.
Kafka Cluster
- You can use this tutorial with a Kafka cluster in any environment:
  - In Confluent Cloud
  - On your local host
  - Any remote Kafka cluster
- If you are running on Confluent Cloud, you must have access to a Confluent Cloud cluster.
- The first 20 users to sign up for Confluent Cloud and use promo code C50INTEG will receive an additional $50 of free usage (details).
Setup
Clone the confluentinc/examples GitHub repository and check out the 5.5.15-post branch.

```bash
git clone https://github.com/confluentinc/examples
cd examples
git checkout 5.5.15-post
```
Change directory to the example for Node.js.
```bash
cd clients/cloud/nodejs/
```
Create a local file (for example, at $HOME/.confluent/librdkafka.config) with configuration parameters to connect to your Kafka cluster. Starting with one of the templates below, customize the file with connection information to your cluster. Substitute your values for {{ BROKER_ENDPOINT }}, {{ CLUSTER_API_KEY }}, and {{ CLUSTER_API_SECRET }} (see Connecting Clients to Confluent Cloud for instructions on how to create or find those values).

Template configuration file for Confluent Cloud:

```
# Kafka
bootstrap.servers={{ BROKER_ENDPOINT }}
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
sasl.username={{ CLUSTER_API_KEY }}
sasl.password={{ CLUSTER_API_SECRET }}
```

Template configuration file for local host:

```
# Kafka
bootstrap.servers=localhost:9092
```
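The example scripts take this file via a command-line flag and pass the parsed key/value pairs through to node-rdkafka. A minimal sketch of that parsing step, assuming the simple key=value format shown above (the `readConfigFile` helper is illustrative, not the repository's actual code):

```javascript
// Illustrative sketch: parse a librdkafka.config-style file
// (key=value lines, '#' comments) into a plain object suitable for
// node-rdkafka's Producer and KafkaConsumer constructors.
const fs = require('fs');

function readConfigFile(path) {
  const config = {};
  for (const line of fs.readFileSync(path, 'utf8').split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks and comments
    const idx = trimmed.indexOf('=');
    if (idx === -1) continue; // ignore malformed lines
    config[trimmed.slice(0, idx).trim()] = trimmed.slice(idx + 1).trim();
  }
  return config;
}

// Usage: readConfigFile(`${process.env.HOME}/.confluent/librdkafka.config`)
// => { 'bootstrap.servers': '...', 'security.protocol': 'SASL_SSL', ... }
```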
Basic Producer and Consumer
In this example, the producer application writes Kafka data to a topic in your Kafka cluster.
If the topic does not already exist in your Kafka cluster, the producer application will use the Kafka Admin Client API to create the topic.
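With node-rdkafka, Admin Client topic creation can be sketched roughly as follows (the endpoint, topic name, and sizing here are illustrative, not the repository's exact code):

```javascript
// Illustrative sketch: create a topic with node-rdkafka's AdminClient.
const Kafka = require('node-rdkafka');

const admin = Kafka.AdminClient.create({
  'client.id': 'kafka-admin',
  'bootstrap.servers': 'localhost:9092', // or your cluster endpoint
});

admin.createTopic(
  { topic: 'test1', num_partitions: 1, replication_factor: 1 },
  (err) => {
    // A "topic already exists" error is harmless in this flow.
    if (err) console.warn(err.message);
    admin.disconnect();
  }
);
```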
Each record written to Kafka has a key representing a username (for example, alice) and a value of a count, formatted as JSON (for example, {"count": 0}).
The consumer application reads the same Kafka topic and keeps a rolling sum of the count as it processes each record.
Produce Records
Install npm dependencies.
```bash
npm install
```
Run the producer, passing in arguments for:
- the local file with configuration parameters to connect to your Kafka cluster
- the topic name
```bash
node producer.js -f $HOME/.confluent/librdkafka.config -t test1
```
Verify the producer sent all the messages. You should see:
```
Created topic test1
Producing record alice {"count":0}
Producing record alice {"count":1}
Producing record alice {"count":2}
Producing record alice {"count":3}
Producing record alice {"count":4}
Producing record alice {"count":5}
Producing record alice {"count":6}
Producing record alice {"count":7}
Producing record alice {"count":8}
Producing record alice {"count":9}
Successfully produced record to topic "test1" partition 0 {"count":0}
Successfully produced record to topic "test1" partition 0 {"count":1}
Successfully produced record to topic "test1" partition 0 {"count":2}
Successfully produced record to topic "test1" partition 0 {"count":3}
Successfully produced record to topic "test1" partition 0 {"count":4}
Successfully produced record to topic "test1" partition 0 {"count":5}
Successfully produced record to topic "test1" partition 0 {"count":6}
Successfully produced record to topic "test1" partition 0 {"count":7}
Successfully produced record to topic "test1" partition 0 {"count":8}
Successfully produced record to topic "test1" partition 0 {"count":9}
```
View the producer code.
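As a reference for what such a producer roughly does, here is a simplified sketch (not the repository's exact producer.js; the hard-coded connection settings stand in for values read from the configuration file):

```javascript
// Simplified sketch of a produce loop with node-rdkafka; the topic name
// and key mirror the tutorial values.
const Kafka = require('node-rdkafka');

const producer = new Kafka.Producer({
  'bootstrap.servers': 'localhost:9092', // substitute your cluster endpoint
  'dr_cb': true,                         // request per-message delivery reports
});

producer.connect();
producer.setPollInterval(100); // poll regularly so delivery reports fire

producer.on('ready', () => {
  for (let i = 0; i < 10; i++) {
    const value = Buffer.from(JSON.stringify({ count: i }));
    console.log(`Producing record alice ${value}`);
    // partition null lets librdkafka's default partitioner choose
    producer.produce('test1', null, value, 'alice');
  }
});

producer.on('delivery-report', (err, report) => {
  if (err) {
    console.error(err);
  } else {
    console.log(`Successfully produced record to topic "${report.topic}" partition ${report.partition}`);
  }
});
```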
Consume Records
Run the consumer, passing in arguments for:
- the local file with configuration parameters to connect to your Kafka cluster
- the topic name you used earlier
```bash
node consumer.js -f $HOME/.confluent/librdkafka.config -t test1
```
Verify the consumer received all the messages:
```
Consuming messages from test1
Consumed record with key alice and value {"count":0} of partition 0 @ offset 0. Updated total count to 1
Consumed record with key alice and value {"count":1} of partition 0 @ offset 1. Updated total count to 2
Consumed record with key alice and value {"count":2} of partition 0 @ offset 2. Updated total count to 3
Consumed record with key alice and value {"count":3} of partition 0 @ offset 3. Updated total count to 4
Consumed record with key alice and value {"count":4} of partition 0 @ offset 4. Updated total count to 5
Consumed record with key alice and value {"count":5} of partition 0 @ offset 5. Updated total count to 6
Consumed record with key alice and value {"count":6} of partition 0 @ offset 6. Updated total count to 7
Consumed record with key alice and value {"count":7} of partition 0 @ offset 7. Updated total count to 8
Consumed record with key alice and value {"count":8} of partition 0 @ offset 8. Updated total count to 9
Consumed record with key alice and value {"count":9} of partition 0 @ offset 9. Updated total count to 10
```
View the consumer code.
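A simplified sketch of the consume loop (not the repository's exact consumer.js; the group id and connection settings are illustrative, and the running total advances once per record to match the sample output above):

```javascript
// Simplified sketch: subscribe to the topic and keep a running total.
const Kafka = require('node-rdkafka');

const consumer = new Kafka.KafkaConsumer(
  {
    'bootstrap.servers': 'localhost:9092', // substitute your cluster endpoint
    'group.id': 'nodejs-example-group',    // illustrative consumer group
  },
  { 'auto.offset.reset': 'earliest' }      // read the topic from the beginning
);

let totalCount = 0;

consumer.connect();

consumer.on('ready', () => {
  console.log('Consuming messages from test1');
  consumer.subscribe(['test1']);
  consumer.consume(); // flowing mode: emits a 'data' event per record
});

consumer.on('data', (msg) => {
  totalCount += 1;
  console.log(
    `Consumed record with key ${msg.key} and value ${msg.value} ` +
    `of partition ${msg.partition} @ offset ${msg.offset}. ` +
    `Updated total count to ${totalCount}`
  );
});
```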