Configure Confluent Cloud Clients

You can write Kafka client applications that connect to Confluent Cloud in virtually any programming language. The clients only need to be configured with the Confluent Cloud cluster credentials.

Refer to Code Examples for client examples written in the following programming languages and tools. These “Hello, World!” examples produce to and consume from any Kafka cluster, including Confluent Cloud, and for the subset of languages that support it, there are additional examples using Confluent Cloud Schema Registry and Avro.


Note

All clients that connect to Confluent Cloud must support SASL/PLAIN authentication and TLS 1.2 encryption.

Java Client

  1. Log in to Confluent Cloud using the ccloud login command.

    ccloud login
    
    Enter your Confluent Cloud credentials:
    Email: susan@myemail.com
    Password:
    
  2. Set the Confluent Cloud environment.

    1. Get the environment ID.

      ccloud environment list
      

      Your output should resemble:

           Id    |      Name
      +----------+----------------+
        * a-542  | dev
          a-4985 | prod
          a-2345 | jdoe-gcp-env
          a-9012 | jdoe-aws-env
      
    2. Set the environment using the ID (<env-id>).

      ccloud environment use <env-id>
      

      Your output should resemble:

      Now using a-4985 as the default (active) environment.
      
  3. Set the cluster to use.

    1. Get the cluster ID.

      ccloud kafka cluster list
      

      Your output should resemble:

            Id      |       Name        | Provider |   Region    | Durability | Status
      +-------------+-------------------+----------+-------------+------------+--------+
          ekg-rr8v7 | dev-aws-oregon    | aws      | us-west-2   | LOW        | UP
          ekg-q2j96 | prod              | gcp      | us-central1 | LOW        | UP
      
    2. Set the cluster using the ID (<cluster-id>). Subsequent commands run against this cluster.

      ccloud kafka cluster use <cluster-id>
      
  4. Create an API key and secret, and save them. The key and secret are required to produce to or consume from your topics.

    You can generate the API key from the Confluent Cloud web UI or on the Confluent Cloud CLI. Be sure to save the API key and secret.

    • On the web UI, click the Kafka API keys tab and click Create key. Save the key and secret, then click the checkbox next to I have saved my API key and secret and am ready to continue.

    • Or, from the Confluent Cloud CLI, type the following command:

      ccloud api-key create --resource <resource-id>
      

      Your output should resemble:

      Save the API key and secret. The secret is not retrievable later.
      +---------+------------------------------------------------------------------+
      | API Key | LD35EM2YJTCTRQRM                                                 |
      | Secret  | 67JImN+9vk+Hj3eaj2/UcwUlbDNlGGC3KAIOy5JNRVSnweumPBUpW31JWZSBeawz |
      +---------+------------------------------------------------------------------+
      
  5. Optional: Add the API secret with ccloud api-key store <key> <secret>. When you create an API key with the CLI, it is automatically stored locally. However, when you create an API key using the UI, API, or with the CLI on another machine, the secret is not available for CLI use until you store it. This is required because secrets cannot be retrieved after creation.

    ccloud api-key store <api-key> <api-secret> --resource <resource-id>
    
  6. Set the API key to use for Confluent Cloud CLI commands with the command ccloud api-key use <key> --resource <resource-id>.

    ccloud api-key use <api-key> --resource <resource-id>
    
  7. In the Confluent Cloud UI, enable Confluent Cloud Schema Registry and get the Schema Registry endpoint URL, the API key, and the API secret. For more information, see Quick Start for Schema Management on Confluent Cloud.

  8. In the Environment Overview page, click Clusters and select your cluster from the list.

  9. From the navigation menu, click Data In/Out -> Clients. Insert the following configuration settings into your client code.

    ssl.endpoint.identification.algorithm=https
    sasl.mechanism=PLAIN
    bootstrap.servers=<bootstrap-server-url>
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="<api-key>" password="<api-secret>";
    security.protocol=SASL_SSL
    client.dns.lookup=use_all_dns_ips
    
    # Producer specific settings
    acks=all
    linger.ms=5
    
    # Schema Registry specific settings
    basic.auth.credentials.source=USER_INFO
    schema.registry.basic.auth.user.info=<sr-api-key>:<sr-api-secret>
    schema.registry.url=<schema-registry-url>
    
    # Enable Avro serializer with Schema Registry (optional)
    key.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
    value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
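
    As a sketch, the same settings can also be assembled programmatically with java.util.Properties before constructing the producer. All argument values below are placeholders, and the KafkaProducer construction is shown only as a comment because it requires the kafka-clients and kafka-avro-serializer dependencies:

```java
import java.util.Properties;

public class CloudClientConfig {
    // Build the Confluent Cloud client configuration shown above.
    // All argument values are placeholders supplied by the caller.
    static Properties cloudProps(String bootstrap, String apiKey, String apiSecret,
                                 String srUrl, String srKey, String srSecret) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"" + apiKey + "\" password=\"" + apiSecret + "\";");
        props.put("ssl.endpoint.identification.algorithm", "https");
        props.put("client.dns.lookup", "use_all_dns_ips");
        // Schema Registry settings
        props.put("schema.registry.url", srUrl);
        props.put("basic.auth.credentials.source", "USER_INFO");
        props.put("schema.registry.basic.auth.user.info", srKey + ":" + srSecret);
        return props;
    }

    public static void main(String[] args) {
        Properties props = cloudProps(
                "<bootstrap-server-url>", "<api-key>", "<api-secret>",
                "<schema-registry-url>", "<sr-api-key>", "<sr-api-secret>");
        // With kafka-clients on the classpath you would then create the client:
        // Producer<String, GenericRecord> producer = new KafkaProducer<>(props);
        System.out.println(props.getProperty("security.protocol"));
    }
}
```

    Keeping the credentials as method arguments (rather than hard-coded strings) makes it easy to load them from environment variables or a secrets manager.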
    

librdkafka-based C Clients

Confluent’s official Python, Golang, and .NET clients for Apache Kafka® are all based on librdkafka, as are other community-supported clients such as node-rdkafka.

  1. Log in to Confluent Cloud using the ccloud login command.

    ccloud login
    
    Enter your Confluent Cloud credentials:
    Email: susan@myemail.com
    Password:
    
  2. Set the Confluent Cloud environment.

    1. Get the environment ID.

      ccloud environment list
      

      Your output should resemble:

           Id    |      Name
      +----------+----------------+
        * a-542  | dev
          a-4985 | prod
          a-2345 | jdoe-gcp-env
          a-9012 | jdoe-aws-env
      
    2. Set the environment using the ID (<env-id>).

      ccloud environment use <env-id>
      

      Your output should resemble:

      Now using a-4985 as the default (active) environment.
      
  3. Set the cluster to use.

    1. Get the cluster ID.

      ccloud kafka cluster list
      

      Your output should resemble:

            Id      |       Name        | Provider |   Region    | Durability | Status
      +-------------+-------------------+----------+-------------+------------+--------+
          ekg-rr8v7 | dev-aws-oregon    | aws      | us-west-2   | LOW        | UP
          ekg-q2j96 | prod              | gcp      | us-central1 | LOW        | UP
      
    2. Set the cluster using the ID (<cluster-id>). Subsequent commands run against this cluster.

      ccloud kafka cluster use <cluster-id>
      
  4. Create an API key and secret, and save them. The key and secret are required to produce to or consume from your topics.

    You can generate the API key from the Confluent Cloud web UI or on the Confluent Cloud CLI. Be sure to save the API key and secret.

    • On the web UI, click the Kafka API keys tab and click Create key. Save the key and secret, then click the checkbox next to I have saved my API key and secret and am ready to continue.

    • Or, from the Confluent Cloud CLI, type the following command:

      ccloud api-key create --resource <resource-id>
      

      Your output should resemble:

      Save the API key and secret. The secret is not retrievable later.
      +---------+------------------------------------------------------------------+
      | API Key | LD35EM2YJTCTRQRM                                                 |
      | Secret  | 67JImN+9vk+Hj3eaj2/UcwUlbDNlGGC3KAIOy5JNRVSnweumPBUpW31JWZSBeawz |
      +---------+------------------------------------------------------------------+
      
  5. Optional: Add the API secret with ccloud api-key store <key> <secret>. When you create an API key with the CLI, it is automatically stored locally. However, when you create an API key using the UI, API, or with the CLI on another machine, the secret is not available for CLI use until you store it. This is required because secrets cannot be retrieved after creation.

    ccloud api-key store <api-key> <api-secret> --resource <resource-id>
    
  6. Set the API key to use for Confluent Cloud CLI commands with the command ccloud api-key use <key> --resource <resource-id>.

    ccloud api-key use <api-key> --resource <resource-id>
    
  7. In the Confluent Cloud UI, on the Environment Overview page, click Clusters and select your cluster from the list.

  8. From the navigation menu, click Data In/Out -> Clients. Click C/C++ and insert the following configuration settings into your client code.

    bootstrap.servers=<broker-list>
    api.version.request=true
    broker.version.fallback=0.10.0.0
    api.version.fallback.ms=0
    sasl.mechanisms=PLAIN
    security.protocol=SASL_SSL
    ssl.ca.location=/usr/local/etc/openssl/cert.pem
    sasl.username=<api-key>
    sasl.password=<api-secret>
    

    Tip

    The api.version.request, broker.version.fallback, and api.version.fallback.ms options instruct librdkafka to use the latest protocol version and not fall back to an older version.

    For more information about librdkafka and Kafka version compatibility, see the documentation. For a complete list of the librdkafka configuration options, see the configuration documentation.

Configuring clients for cluster rolls

Confluent Cloud regularly rolls all clusters for upgrades and maintenance. Rolling a cluster means updating all the brokers that make up that cluster one at a time, so that the cluster remains fully available and performant throughout the update. The Kafka protocol and architecture are designed for exactly this type of highly available, fault-tolerant operation, so correctly configured clients gracefully handle the broker changes that happen during a roll.

During a cluster roll, clients may encounter the following retriable exceptions, which generate warnings on correctly configured clients:

  • UNKNOWN_TOPIC_OR_PARTITION: "This server does not host this topic-partition."
  • LEADER_NOT_AVAILABLE: "There is no leader for this topic-partition as we are in the middle of a leadership election."
  • NOT_LEADER_FOR_PARTITION: "This server is not the leader for that topic-partition."
  • NOT_ENOUGH_REPLICAS: "Messages are rejected since there are fewer in-sync replicas than required."
  • NOT_ENOUGH_REPLICAS_AFTER_APPEND: "Messages are written to the log, but to fewer in-sync replicas than required."

By default, Kafka producer clients will retry for 2 minutes, print these warnings to logs, and recover without any intervention. Consumer and admin clients default to retrying for 1 minute.

If clients are configured with insufficient retries or retry-time, the exceptions above will be logged as errors.

If a client exhausts its memory buffer space while retrying and then runs out of time while blocked waiting for buffer space to free up, it will raise a timeout exception.

Recommendations

Make sure all Confluent Cloud producer client applications are configured to retry for at least 2 minutes, make the maximum number of retry attempts, and require acknowledgments from all in-sync replicas. Consumer and admin clients should retry requests for at least 5 minutes.

# Producer settings
delivery.timeout.ms=120000
retries=2147483647
acks=all

# Consumer and admin settings
default.api.timeout.ms=300000
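
In Java, these producer recommendations can be layered onto existing connection properties; the helper below is a sketch whose values mirror the settings above (consumer and admin configs would instead set default.api.timeout.ms=300000):

```java
import java.util.Properties;

public class RollSafeSettings {
    // Apply the cluster-roll recommendations to an existing producer config.
    static Properties withRollSafeDefaults(Properties producerProps) {
        Properties p = new Properties();
        p.putAll(producerProps);
        p.put("delivery.timeout.ms", "120000");              // retry for at least 2 minutes
        p.put("retries", String.valueOf(Integer.MAX_VALUE)); // 2147483647
        p.put("acks", "all");                                // require all in-sync replicas
        return p;
    }

    public static void main(String[] args) {
        Properties base = new Properties();
        base.put("bootstrap.servers", "<bootstrap-server-url>");
        Properties p = withRollSafeDefaults(base);
        System.out.println(p.getProperty("retries")); // prints "2147483647"
    }
}
```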

We do not recommend triggering internal alerts on the retriable warnings listed above, because they occur regularly as part of normal operations and are gracefully handled by correctly configured clients without disruption to your streaming applications. Instead, limit alerts to client errors that cannot be automatically retried.

For additional recommendations on how to architect, monitor, and optimize your Kafka applications on Confluent Cloud, refer to Developing Client Applications on Confluent Cloud.