Configure Confluent Cloud Clients

You can write Kafka client applications that connect to Confluent Cloud in nearly any programming language. The clients only need to be configured with the Confluent Cloud cluster credentials.

Refer to the GitHub examples for client code in the following programming languages and tools. These “Hello, World!” examples produce to and consume from Confluent Cloud, and for the subset of languages that support it, there are additional examples using Confluent Cloud Schema Registry and Avro.

  • C
  • Clojure
  • Confluent CLI
  • C#
  • Go
  • Groovy
  • Java
  • Kafka commands
  • Kafka Connect Datagen
  • kafkacat
  • Kotlin
  • KSQL Datagen
  • Node.js
  • Python
  • Ruby
  • Rust
  • Scala

Note

Community clients that connect to Confluent Cloud must support SASL authentication.

Java Client

  1. Log in to your cluster using the ccloud login command with the cluster URL specified.

    ccloud login
    
    Enter your Confluent Cloud credentials:
    Email: susan@myemail.com
    Password:
    
  2. Set the Confluent Cloud environment.

    1. Get the environment ID.

      ccloud environment list
      

      Your output should resemble:

           Id    |      Name
      +----------+----------------+
        * a-542  | dev
          a-4985 | prod
          a-2345 | jdoe-gcp-env
          a-9012 | jdoe-aws-env
      
    2. Set the environment using the ID (<env-id>).

      ccloud environment use <env-id>
      

      Your output should resemble:

      Now using a-4985 as the default (active) environment.
      
  3. Set the cluster to use.

    1. Get the cluster ID.

      ccloud kafka cluster list
      

      Your output should resemble:

            Id      |       Name        | Provider |   Region    | Durability | Status
      +-------------+-------------------+----------+-------------+------------+--------+
          ekg-rr8v7 | dev-aws-oregon    | aws      | us-west-2   | LOW        | UP
          ekg-q2j96 | prod              | gcp      | us-central1 | LOW        | UP
      
    2. Set the cluster using the ID (<cluster-id>). Subsequent commands run against this cluster.

      ccloud kafka cluster use <cluster-id>
      
  4. Create the API key/secret with the resource ID (<resource-id>) specified and save the output. You can find the Kafka resource ID by using the ccloud kafka cluster list command. You can find the Schema Registry resource ID by using the ccloud schema-registry cluster describe command.

    ccloud api-key create --resource <resource-id>
    

    Your output should resemble:

    Save the API key and secret. The key/secret is not retrievable later.
    +---------+------------------------------------------------------------------+
    | API Key | KIELS5LZKXCBOT9L                                                 |
    | Secret  | XVLE434R43R532RFSASDeaatawefafeazzzeeeeeelllll4354t5345452432x   |
    +---------+------------------------------------------------------------------+
    

    Tip

    To use an existing API key/secret, run this command with the resource ID (<resource-id>), API key (<api-key>), and API secret (<api-secret>) specified. This command registers an API key/secret created by another process and stores it locally.

    ccloud api-key store <api-key> <api-secret> --resource <resource-id>
    
  5. Associate the Kafka API key/secret with this cluster; the API key (<api-key>) must be specified. This step is not necessary for Schema Registry resources.

    ccloud api-key use <api-key>
    
  6. In the Confluent Cloud UI, enable Confluent Cloud Schema Registry and get the Schema Registry endpoint URL, the API key, and the API secret. For more information, see Configure Confluent Cloud Schema Registry.

  7. In the Environment Overview page, click Clusters and select your cluster from the list.

  8. From the navigation menu, click Data In/Out -> Clients. Insert the following configuration settings into your client code.

    bootstrap.servers=<bootstrap-server-url>
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="<api-key>" password="<api-secret>";
    ssl.endpoint.identification.algorithm=https
    request.timeout.ms=20000
    retry.backoff.ms=500

    # Schema Registry-specific settings
    basic.auth.credentials.source=USER_INFO
    schema.registry.basic.auth.user.info=<sr-api-key>:<sr-api-secret>
    schema.registry.url=<schema-registry-url>

    # Enable the Avro serializer with Schema Registry (optional)
    key.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
    value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
    
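    Instead of a properties file, the same settings can be assembled in code. The following is a minimal sketch using only the JDK; the class and method names are illustrative, and the placeholder arguments stand in for your cluster's real endpoint and credentials.

    ```java
    import java.util.Properties;

    // Illustrative helper: builds the Confluent Cloud client configuration
    // shown above as a java.util.Properties object.
    public class CloudClientConfig {
        public static Properties baseProps(String bootstrapServers,
                                           String apiKey,
                                           String apiSecret) {
            Properties props = new Properties();
            props.put("bootstrap.servers", bootstrapServers);
            props.put("security.protocol", "SASL_SSL");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"" + apiKey + "\" password=\"" + apiSecret + "\";");
            props.put("ssl.endpoint.identification.algorithm", "https");
            props.put("request.timeout.ms", "20000");
            props.put("retry.backoff.ms", "500");
            // With kafka-clients on the classpath, this object can be passed
            // directly to new KafkaProducer<>(props) or new KafkaConsumer<>(props)
            // after adding the serializer/deserializer settings.
            return props;
        }

        public static void main(String[] args) {
            Properties props = baseProps("<bootstrap-server-url>",
                                         "<api-key>", "<api-secret>");
            System.out.println(props.getProperty("security.protocol"));
        }
    }
    ```

    Building the configuration in code rather than loading a file makes it easier to pull the API key and secret from an environment variable or secrets manager instead of committing them to disk.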

librdkafka-based C Clients

Confluent’s official Python, Go, and .NET clients for Apache Kafka® are all based on librdkafka, as are other community-supported clients such as node-rdkafka.

  1. Log in to your cluster using the ccloud login command with the cluster URL specified.

    ccloud login
    
    Enter your Confluent Cloud credentials:
    Email: susan@myemail.com
    Password:
    
  2. Set the Confluent Cloud environment.

    1. Get the environment ID.

      ccloud environment list
      

      Your output should resemble:

           Id    |      Name
      +----------+----------------+
        * a-542  | dev
          a-4985 | prod
          a-2345 | jdoe-gcp-env
          a-9012 | jdoe-aws-env
      
    2. Set the environment using the ID (<env-id>).

      ccloud environment use <env-id>
      

      Your output should resemble:

      Now using a-4985 as the default (active) environment.
      
  3. Set the cluster to use.

    1. Get the cluster ID.

      ccloud kafka cluster list
      

      Your output should resemble:

            Id      |       Name        | Provider |   Region    | Durability | Status
      +-------------+-------------------+----------+-------------+------------+--------+
          ekg-rr8v7 | dev-aws-oregon    | aws      | us-west-2   | LOW        | UP
          ekg-q2j96 | prod              | gcp      | us-central1 | LOW        | UP
      
    2. Set the cluster using the ID (<cluster-id>). Subsequent commands run against this cluster.

      ccloud kafka cluster use <cluster-id>
      
  4. Create the API key/secret with the resource ID (<resource-id>) specified and save the output. You can find the Kafka resource ID by using the ccloud kafka cluster list command. You can find the Schema Registry resource ID by using the ccloud schema-registry cluster describe command.

    ccloud api-key create --resource <resource-id>
    

    Your output should resemble:

    Save the API key and secret. The key/secret is not retrievable later.
    +---------+------------------------------------------------------------------+
    | API Key | KIELS5LZKXCBOT9L                                                 |
    | Secret  | XVLE434R43R532RFSASDeaatawefafeazzzeeeeeelllll4354t5345452432x   |
    +---------+------------------------------------------------------------------+
    

    Tip

    To use an existing API key/secret, run this command with the resource ID (<resource-id>), API key (<api-key>), and API secret (<api-secret>) specified. This command registers an API key/secret created by another process and stores it locally.

    ccloud api-key store <api-key> <api-secret> --resource <resource-id>
    
  5. Associate the Kafka API key/secret with this cluster; the API key (<api-key>) must be specified. This step is not necessary for Schema Registry resources.

    ccloud api-key use <api-key>
    
  6. In the Confluent Cloud UI, on the Environment Overview page, click Clusters and select your cluster from the list.

  7. From the navigation menu, click Data In/Out -> Clients. Click C/C++ and insert the following configuration settings into your client code.

    bootstrap.servers=<broker-list>
    api.version.request=true
    broker.version.fallback=0.10.0.0
    api.version.fallback.ms=0
    sasl.mechanisms=PLAIN
    security.protocol=SASL_SSL
    ssl.ca.location=/usr/local/etc/openssl/cert.pem
    sasl.username=<api-key>
    sasl.password=<api-secret>
    

    Tip

    The api.version.request, broker.version.fallback, and api.version.fallback.ms options instruct librdkafka to use the latest protocol version and not fall back to an older version.

    For more information about librdkafka and Kafka version compatibility, see the documentation. For a complete list of the librdkafka configuration options, see the configuration documentation.