Java Spring Boot¶
In this tutorial, you will run a Java Spring Boot client application that produces messages to and consumes messages from an Apache Kafka® cluster.
After you run the tutorial, view the provided source code and use it as a reference to develop your own Kafka client application.
Prerequisites¶
Client¶
- Java 1.8 or higher to run the demo application.
Kafka Cluster¶
- You can use this tutorial with a Kafka cluster in any environment:
  - In Confluent Cloud
  - On your local host
  - Any remote Kafka cluster
- If you are running on Confluent Cloud, you must have access to a Confluent Cloud cluster.
- The first 20 users to sign up for Confluent Cloud and use promo code C50INTEG will receive an additional $50 free usage (details).
Setup¶
Clone the confluentinc/examples GitHub repository and check out the 5.5.15-post branch.

    git clone https://github.com/confluentinc/examples
    cd examples
    git checkout 5.5.15-post
Change directory to the example for Java Spring Boot.

    cd clients/cloud/java-springboot/
Create a local file (for example, at $HOME/.confluent/springboot.config) with configuration parameters to connect to your Kafka cluster. Starting with one of the templates below, customize the file with connection information to your cluster. Substitute your values for {{ BROKER_ENDPOINT }}, {{ CLUSTER_API_KEY }}, and {{ CLUSTER_API_SECRET }} (see Connecting Clients to Confluent Cloud for instructions on how to create or find those values).

Template configuration file for Confluent Cloud

    # Kafka
    spring.kafka.properties.ssl.endpoint.identification.algorithm=https
    spring.kafka.properties.sasl.mechanism=PLAIN
    spring.kafka.properties.bootstrap.servers={{ BROKER_ENDPOINT }}
    spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="{{ CLUSTER_API_KEY }}" password="{{ CLUSTER_API_SECRET }}";
    spring.kafka.properties.security.protocol=SASL_SSL

Template configuration file for local host

    # Kafka
    spring.kafka.properties.bootstrap.servers=localhost:9092
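Spring Boot's Kafka auto-configuration passes every spring.kafka.properties.* entry in this file straight through to the underlying Kafka clients, so an injected KafkaTemplate picks up these settings automatically. The following is a minimal sketch of a connectivity check built on that behavior, not part of the example project: the class name ConnectivityCheck and the topic name test are illustrative, and the topic is assumed to already exist.

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.CommandLineRunner;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.support.SendResult;

    @SpringBootApplication
    public class ConnectivityCheck implements CommandLineRunner {

        // Auto-configured from the spring.kafka.* properties in the config file above;
        // with no serializer overrides, Spring Boot defaults to String serializers.
        @Autowired
        private KafkaTemplate<String, String> template;

        public static void main(String[] args) {
            SpringApplication.run(ConnectivityCheck.class, args);
        }

        @Override
        public void run(String... args) throws Exception {
            // Send one probe record and block until it is acknowledged;
            // this fails fast if the connection settings are wrong.
            SendResult<String, String> result = template.send("test", "probe", "hello").get();
            System.out.println("Connected: " + result.getRecordMetadata());
        }
    }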
Avro and Confluent Cloud Schema Registry¶
In this example, the producer application writes Kafka data to a topic in your Kafka cluster.
If the topic does not already exist in your Kafka cluster, the producer application will use the Kafka Admin Client API to create the topic.
Each record written to Kafka has a key representing a username (for example, alice) and a value of a count, formatted as JSON (for example, {"count": 0}).
The consumer application reads the same Kafka topic and keeps a rolling sum of the count as it processes each record.
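The topic creation mentioned above boils down to the Kafka Admin Client API. Here is a minimal, hedged sketch of that mechanism; the class name CreateTopic and the partition and replication values are illustrative, not taken from the example:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // For Confluent Cloud, add the same SASL/SSL settings as in the config file above.
            props.put("bootstrap.servers", "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // 1 partition, replication factor 1 suit a local single-broker cluster;
                // Confluent Cloud requires replication factor 3.
                NewTopic topic = new NewTopic("test", 1, (short) 1);
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
        }
    }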
As described in Schema Registry and Confluent Cloud, use the Confluent Cloud GUI to enable Confluent Cloud Schema Registry and create an API key and secret to connect to it.
Verify that your VPC can connect to the Confluent Cloud Schema Registry public internet endpoint.
Update your local configuration file (for example, at $HOME/.confluent/springboot.config) with parameters to connect to Schema Registry.

Template configuration file for Confluent Cloud

    # Kafka
    spring.kafka.properties.ssl.endpoint.identification.algorithm=https
    spring.kafka.properties.sasl.mechanism=PLAIN
    spring.kafka.properties.bootstrap.servers={{ BROKER_ENDPOINT }}
    spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="{{ CLUSTER_API_KEY }}" password="{{ CLUSTER_API_SECRET }}";
    spring.kafka.properties.security.protocol=SASL_SSL

    # Schema Registry
    spring.kafka.properties.basic.auth.credentials.source=USER_INFO
    spring.kafka.properties.schema.registry.basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}
    spring.kafka.properties.schema.registry.url=https://{{ SR_ENDPOINT }}

    # producer configuration
    spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
    spring.kafka.producer.value-serializer=io.confluent.kafka.serializers.KafkaAvroSerializer

    # consumer configuration
    spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
    spring.kafka.consumer.value-deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer

Template configuration file for local host

    # Kafka
    spring.kafka.properties.bootstrap.servers=localhost:9092

    # Confluent Schema Registry
    spring.kafka.properties.schema.registry.url=http://localhost:8081

    # producer configuration
    spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
    spring.kafka.producer.value-serializer=io.confluent.kafka.serializers.KafkaAvroSerializer

    # consumer configuration
    spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
    spring.kafka.consumer.value-deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
Verify your Confluent Cloud Schema Registry credentials work from your host. In the following example, substitute your values for {{ SR_API_KEY }}, {{ SR_API_SECRET }}, and {{ SR_ENDPOINT }}.

    # View the list of registered subjects
    $ curl -u {{ SR_API_KEY }}:{{ SR_API_SECRET }} https://{{ SR_ENDPOINT }}/subjects

    # Same as above, as a single bash command to parse the values out of $HOME/.confluent/springboot.config
    $ curl -u $(grep "^spring.kafka.properties.schema.registry.basic.auth.user.info" $HOME/.confluent/springboot.config | cut -d'=' -f2) $(grep "^spring.kafka.properties.schema.registry.url" $HOME/.confluent/springboot.config | cut -d'=' -f2)/subjects
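If you prefer to check the credentials from Java instead of curl, the Confluent Schema Registry client library can issue the same subject listing. This is a hedged sketch: the class name SchemaRegistryCheck is hypothetical, and you substitute the same {{ SR_... }} placeholders as above.

    import java.util.HashMap;
    import java.util.Map;
    import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;

    public class SchemaRegistryCheck {
        public static void main(String[] args) throws Exception {
            // Same credentials as the curl example; substitute your values.
            Map<String, String> config = new HashMap<>();
            config.put("basic.auth.credentials.source", "USER_INFO");
            config.put("basic.auth.user.info", "{{ SR_API_KEY }}:{{ SR_API_SECRET }}");

            // 20 is the identity-map capacity (number of schemas cached per subject).
            CachedSchemaRegistryClient client =
                    new CachedSchemaRegistryClient("https://{{ SR_ENDPOINT }}", 20, config);

            // Equivalent to the GET /subjects request above.
            System.out.println("Registered subjects: " + client.getAllSubjects());
        }
    }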
Produce and Consume Records¶
This Spring Boot application has two components, a producer and a consumer, that are initialized during application startup.
The producer writes Kafka data to a topic in your Kafka cluster. Each record has a String key representing a username (for example, alice) and a value of a count, formatted with the Avro schema DataRecordAvro.avsc:

    {"namespace": "io.confluent.examples.clients.cloud",
     "type": "record",
     "name": "DataRecordAvro",
     "fields": [
       {"name": "count", "type": "long"}
     ]
    }
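As a hedged sketch of what producing these records can look like (the wiring below is illustrative, not the tutorial's exact ProducerExample), a producer can send ten counts keyed by alice using the class that Avro generates from DataRecordAvro.avsc; the KafkaAvroSerializer configured earlier registers the schema with Schema Registry on first use.

    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Component;
    import io.confluent.examples.clients.cloud.DataRecordAvro;

    @Component
    public class ProduceCounts {

        // Value type is the Avro-generated class for DataRecordAvro.avsc.
        private final KafkaTemplate<String, DataRecordAvro> template;

        public ProduceCounts(KafkaTemplate<String, DataRecordAvro> template) {
            this.template = template;
        }

        public void produce() {
            for (long i = 0; i < 10; i++) {
                // Avro-generated classes provide an all-args constructor.
                template.send("test", "alice", new DataRecordAvro(i));
            }
            template.flush(); // ensure everything is delivered before returning
        }
    }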
Run the producer and consumer with the following command. It builds the jar and executes the spring-kafka powered producer and consumer.

    ./startProducerConsumer.sh
Verify the producer sent all the messages. You should see:
    ...
    2020-02-13 14:41:57.924 INFO 44191 --- [ad | producer-1] i.c.e.c.c.springboot.ProducerExample : Produced record to topic test partition 3 @ offset 20
    2020-02-13 14:41:57.927 INFO 44191 --- [ad | producer-1] i.c.e.c.c.springboot.ProducerExample : Produced record to topic test partition 3 @ offset 21
    2020-02-13 14:41:57.927 INFO 44191 --- [ad | producer-1] i.c.e.c.c.springboot.ProducerExample : Produced record to topic test partition 3 @ offset 22
    2020-02-13 14:41:57.927 INFO 44191 --- [ad | producer-1] i.c.e.c.c.springboot.ProducerExample : Produced record to topic test partition 3 @ offset 23
    2020-02-13 14:41:57.928 INFO 44191 --- [ad | producer-1] i.c.e.c.c.springboot.ProducerExample : Produced record to topic test partition 3 @ offset 24
    2020-02-13 14:41:57.928 INFO 44191 --- [ad | producer-1] i.c.e.c.c.springboot.ProducerExample : Produced record to topic test partition 3 @ offset 25
    2020-02-13 14:41:57.928 INFO 44191 --- [ad | producer-1] i.c.e.c.c.springboot.ProducerExample : Produced record to topic test partition 3 @ offset 26
    2020-02-13 14:41:57.929 INFO 44191 --- [ad | producer-1] i.c.e.c.c.springboot.ProducerExample : Produced record to topic test partition 3 @ offset 27
    2020-02-13 14:41:57.929 INFO 44191 --- [ad | producer-1] i.c.e.c.c.springboot.ProducerExample : Produced record to topic test partition 3 @ offset 28
    2020-02-13 14:41:57.930 INFO 44191 --- [ad | producer-1] i.c.e.c.c.springboot.ProducerExample : Produced record to topic test partition 3 @ offset 29
    10 messages were produced to topic test
    ...
Verify the consumer received all the messages. You should see:
    ...
    2020-02-13 14:41:58.248 INFO 44191 --- [ntainer#0-0-C-1] i.c.e.c.c.springboot.ConsumerExample : received alice {"count": 0}
    2020-02-13 14:41:58.248 INFO 44191 --- [ntainer#0-0-C-1] i.c.e.c.c.springboot.ConsumerExample : received alice {"count": 1}
    2020-02-13 14:41:58.248 INFO 44191 --- [ntainer#0-0-C-1] i.c.e.c.c.springboot.ConsumerExample : received alice {"count": 2}
    2020-02-13 14:41:58.248 INFO 44191 --- [ntainer#0-0-C-1] i.c.e.c.c.springboot.ConsumerExample : received alice {"count": 3}
    2020-02-13 14:41:58.249 INFO 44191 --- [ntainer#0-0-C-1] i.c.e.c.c.springboot.ConsumerExample : received alice {"count": 4}
    2020-02-13 14:41:58.249 INFO 44191 --- [ntainer#0-0-C-1] i.c.e.c.c.springboot.ConsumerExample : received alice {"count": 5}
    2020-02-13 14:41:58.249 INFO 44191 --- [ntainer#0-0-C-1] i.c.e.c.c.springboot.ConsumerExample : received alice {"count": 6}
    2020-02-13 14:41:58.249 INFO 44191 --- [ntainer#0-0-C-1] i.c.e.c.c.springboot.ConsumerExample : received alice {"count": 7}
    2020-02-13 14:41:58.249 INFO 44191 --- [ntainer#0-0-C-1] i.c.e.c.c.springboot.ConsumerExample : received alice {"count": 8}
    2020-02-13 14:41:58.249 INFO 44191 --- [ntainer#0-0-C-1] i.c.e.c.c.springboot.ConsumerExample : received alice {"count": 9}
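As a hedged sketch of the consumer side (the names below are illustrative, not the tutorial's exact ConsumerExample), a spring-kafka listener can keep the rolling sum like this; without specific.avro.reader=true, the KafkaAvroDeserializer hands the listener GenericRecord values:

    import java.util.concurrent.atomic.AtomicLong;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class RollingSumListener {

        private final AtomicLong total = new AtomicLong();

        @KafkaListener(topics = "test", groupId = "springboot-demo")
        public void listen(ConsumerRecord<String, GenericRecord> record) {
            long count = (Long) record.value().get("count");
            // Matches the "received alice {"count": n}" lines above, plus the sum.
            System.out.printf("received %s %s (rolling sum %d)%n",
                    record.key(), record.value(), total.addAndGet(count));
        }
    }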
When you are done, press CTRL-C.

View the producer code and consumer code.
Kafka Streams¶
The Kafka Streams API reads from the same topic and maintains a rolling, stateful sum aggregation as it processes each record.
Run the Kafka Streams application:
    ./startStreams.sh
Verify that you see the output:
    ...
    [Consumed record]: alice, 0
    [Consumed record]: alice, 1
    [Consumed record]: alice, 2
    [Consumed record]: alice, 3
    [Consumed record]: alice, 4
    [Consumed record]: alice, 5
    [Consumed record]: alice, 6
    [Consumed record]: alice, 7
    [Consumed record]: alice, 8
    [Consumed record]: alice, 9
    ...
    [Running count]: alice, 0
    [Running count]: alice, 1
    [Running count]: alice, 3
    [Running count]: alice, 6
    [Running count]: alice, 10
    [Running count]: alice, 15
    [Running count]: alice, 21
    [Running count]: alice, 28
    [Running count]: alice, 36
    [Running count]: alice, 45
    ...
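The output above can be produced by a topology along these lines. This is a hedged sketch rather than the example's exact code: it assumes the counts have already been mapped to plain Long values, whereas the real application first deserializes the Avro records.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KStream;

    public class RunningSum {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "springboot-streams-demo");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // substitute your cluster

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, Long> counts =
                    builder.stream("test", Consumed.with(Serdes.String(), Serdes.Long()));

            counts.peek((k, v) -> System.out.println("[Consumed record]: " + k + ", " + v))
                  .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
                  .reduce(Long::sum)               // stateful running sum per key
                  .toStream()
                  .peek((k, v) -> System.out.println("[Running count]: " + k + ", " + v));

            new KafkaStreams(builder.build(), props).start();
        }
    }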
When you are done, press CTRL-C.

View the Kafka Streams code.