Tutorial: Access Management in Confluent Cloud

This tutorial provides an end-to-end workflow for Confluent Cloud user and service account management. The steps are:

Step 1: Invite User

Refer to Add a user account.

Step 2: Configure the CLI, Cluster, and Access to Kafka

  1. Accept the invitation email and log in using a web browser.

  2. Install the Confluent CLI.

  3. Log in to the Confluent CLI using the confluent login command.

    confluent login
    

    Specify your credentials.

    Enter your Confluent Cloud credentials:
    Email: jane.smith@big-data.com
    Password: ************
    

    The output should resemble:

    Logged in as "jane.smith@big-data.com".
    Using environment "a-42619" ("default").
    

    Note

    You are logged in to the default environment for your organization. If you want to work in a different environment than the default, set the environment using the ID (<env-id>):

    confluent environment use <env-id>
    

    Your output should resemble:

    Now using "a-4985" as the default (active) environment.
    
  4. List the available clusters using the confluent kafka cluster list command.

    confluent kafka cluster list
    

    The output should resemble:

          Id      |     Name      |   Type   | Provider |   Region    | Availability | Status
    +-------------+---------------+----------+----------+-------------+--------------+---------+
      lkc-j5zrrm  | dev-test-01   | STANDARD | gcp      | us-central1 | single-zone  | UP
      lkc-382g7m  | dev-test-02   | BASIC    | aws      | us-west-2   | single-zone  | UP
      lkc-43npm   | prod-test-01  | BASIC    | aws      | us-west-2   | single-zone  | UP
      lkc-lq8dd   | stage-test-01 | BASIC    | aws      | us-west-2   | single-zone  | DELETED
    
  5. Connect to cluster dev-test-01 (lkc-j5zrrm) using the confluent kafka cluster use command. This is the cluster where the commands are run. Be sure to replace the cluster ID shown in the example with your own.

    confluent kafka cluster use lkc-j5zrrm
    

    The output should look like:

    Set Kafka cluster "lkc-j5zrrm" as the active cluster for environment "a-42619".
    
  6. Create an API key and secret, and save them. You must complete this step to produce to or consume from your topics. In this step, you create an API key for your user account, which has full permissions. See Steps 5 and 6 for guidance on creating API keys for service accounts and granting them permissions with ACLs.

    You can generate the API key using either the Confluent Cloud Console or the Confluent CLI. Be sure to save the API key and secret.

    • If using the web UI, click the API access tab and click + Add key. Save the key and secret, then click the checkbox next to I have saved my API key and secret and am ready to continue.

    • If using the Confluent CLI, type the following command (be sure to replace the cluster ID shown in the example with your own):

      confluent api-key create --resource lkc-j5zrrm
      

      The output should look like:

      It may take a couple of minutes for the API key to be ready.
      Save the API key and secret. The secret is not retrievable later.
      +---------+------------------------------------------------------------------+
      | API Key | ABCD1EFGHIJK2LMN                                                 |
      | Secret  | abC1dEf23G4H567IJKLmn8O1PqrST1UvW0XyZAbcdefGHIjK23LMNOpQRSTUv4WX |
      +---------+------------------------------------------------------------------+
      
  7. Optional: Store the API secret locally using confluent api-key store <key> <secret>. When you create an API key with the CLI, it is stored locally automatically. However, if you created the API key in the web UI or with the CLI on another machine, the secret is not available for CLI use until you store it; this is necessary because secrets cannot be retrieved after creation.

    confluent api-key store <api-key> <api-secret> --resource <resource-id>
    
  8. Set the API key to use for Confluent CLI commands:

    confluent api-key use <api-key> --resource <resource-id>
    
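The CLI setup above can be sketched as one shell session; the IDs in angle brackets are placeholders to replace with your own values:

```shell
# Log in to Confluent Cloud (prompts for email and password).
confluent login

# Optional: switch from the default environment.
confluent environment use <env-id>

# Pick the cluster to work with and make it the active cluster.
confluent kafka cluster list
confluent kafka cluster use <cluster-id>

# Create an API key for the cluster, save the secret (it cannot be
# retrieved later), and make the key the active one for CLI commands.
confluent api-key create --resource <cluster-id>
confluent api-key use <api-key> --resource <cluster-id>
```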

Step 3: Create and Manage Topics

  1. Create topics with all the default values using the confluent kafka topic create command.

    confluent kafka topic create myTopic1
    

    The output should look like:

    Created topic "myTopic1".
    

    Add another topic:

    confluent kafka topic create myTopic2
    
  2. Create a topic with six partitions.

    confluent kafka topic create myTopic3 --partitions 6
    

    The output should look like:

    Created topic "myTopic3".
    
  3. List topics using the confluent kafka topic list command.

    confluent kafka topic list
    

    The output should resemble:

        Name
    +----------+
      myTopic1
      myTopic2
      myTopic3
    
  4. Delete a topic named myTopic1 using the confluent kafka topic delete command.

    confluent kafka topic delete myTopic1
    

    The output should look like:

    Deleted topic "myTopic1".
    
  5. Describe a topic using the confluent kafka topic describe command.

    confluent kafka topic describe myTopic2
    

    The output should resemble:

    Topic: myTopic2 PartitionCount: 6 ReplicationFactor: 3
       Topic   | Partition | Leader | Replicas |   ISR
    +----------+-----------+--------+----------+---------+
      myTopic2 |         0 |      0 | [0 3 2]  | [0 3 2]
      myTopic2 |         1 |      2 | [2 1 0]  | [2 1 0]
      myTopic2 |         2 |      1 | [1 0 2]  | [1 2]
      myTopic2 |         3 |      3 | [3 2 1]  | [3 2 1]
      myTopic2 |         4 |      0 | [0 1 3]  | [0 1 3]
      myTopic2 |         5 |      2 | [2 3 0]  | [2 3 0]
    
  6. Modify the myTopic2 configuration to set cleanup.policy using the confluent kafka topic update command.

    confluent kafka topic update myTopic2 --config cleanup.policy=compact
    

    The output should resemble:

    Updated the following configs for topic "myTopic2":
           Name      |  Value
    +----------------+---------+
      cleanup.policy | compact
    

Step 4: Produce and consume

  1. Produce messages to a topic using the confluent kafka topic produce command.

    confluent kafka topic produce myTopic3
    

    Type your messages at the prompt, and press Return after each one.

    Your command window should resemble the following:

    confluent kafka topic produce myTopic3
    Starting Kafka Producer. ^C or ^D to exit
    hello
    cool topic
    did you get this message?
    first
    second
    third
    yes! I love this cool topic
    

    If you want to stay on a single screen, type ^C to exit the producer, and then run the consumer with -b (or --from-beginning) as shown in the next step.

  2. Consume messages from a topic using the confluent kafka topic consume command.

    confluent kafka topic consume myTopic3 --from-beginning
    

    Your output should resemble the following:

    confluent kafka topic consume myTopic3 --from-beginning
    Starting Kafka Consumer. ^C or ^D to exit
    second
    did you get this message?
    first
    third
    cool topic
    hello
    yes! I love this cool topic
    
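For scripted use, messages can also be piped to the producer on standard input instead of typed at the prompt. This is a sketch that assumes the producer reads standard input the same way it reads the interactive prompt above:

```shell
# Produce three messages non-interactively, then read the topic back
# from the beginning (press ^C to stop the consumer).
printf 'first\nsecond\nthird\n' | confluent kafka topic produce myTopic3
confluent kafka topic consume myTopic3 --from-beginning
```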

Step 5: Create Service Accounts and API Key/Secret Pairs

In Step 2 you added an API key for your user account, which has super.user privileges. This step describes how to create a service account so that you can grant applications access with limited permissions.

  1. Create a service account named dev-apps using the confluent iam service-account create command.

    confluent iam service-account create "dev-apps" \
    --description "Service account for dev apps"
    

    The output should resemble:

    +----------------+----------------------------------+
    | ID             |                        sa-1a2b3c |
    | Name           | dev-apps                         |
    | Description    | Service account for dev apps     |
    +----------------+----------------------------------+
    

    Note the ID associated with this service account, in this case sa-1a2b3c.

  2. Create an API key/secret pair for this service account using the confluent api-key create command. The command also requires the cluster ID, which is available in the output of confluent kafka cluster list.

    confluent api-key create --service-account sa-1a2b3c --resource lkc-4xrp1
    
  3. Take note of the API key and secret; this is the only time you will be able to view the secret.

  4. Client applications that connect to this cluster must configure at least these three identifying parameters:

    • API key: available when you create the API key/secret pair the first time
    • API secret: available when you create the API key/secret pair the first time
    • bootstrap.servers: set to the Endpoint in the output of confluent kafka cluster describe
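
These three parameters map onto the standard Kafka client settings for Confluent Cloud: SASL/PLAIN over TLS, with the API key as the username and the secret as the password. The following is a minimal sketch of a helper that assembles such a configuration; the endpoint and credentials shown are placeholders, and the exact property names assume a librdkafka-based client such as confluent-kafka-python:

```python
def cloud_client_config(bootstrap_servers: str, api_key: str, api_secret: str) -> dict:
    """Build a Kafka client configuration for Confluent Cloud.

    Confluent Cloud clusters authenticate clients with SASL/PLAIN over
    TLS, using the API key as the SASL username and the API secret as
    the SASL password.
    """
    return {
        "bootstrap.servers": bootstrap_servers,
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": api_key,
        "sasl.password": api_secret,
    }

# Example with placeholder values; a real application would pass the
# Endpoint from `confluent kafka cluster describe` and its saved key pair.
config = cloud_client_config(
    "pkc-00000.us-west-2.aws.confluent.cloud:9092",
    "ABCD1EFGHIJK2LMN",
    "<api-secret>",
)
```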

Step 6: Manage Access with ACLs

  1. Grant the dev-apps service account the ability to produce to topics using the confluent kafka acl create command.

      confluent kafka acl create --allow --service-account sa-1a2b3c --operation WRITE --topic myTopic2
    
      Principal          | Permission | Operation | ResourceType | ResourceName      | PatternType
    +--------------------+------------+-----------+--------------+-------------------+------------+
      User:sa-1a2b3c     | ALLOW      | WRITE     | TOPIC        | myTopic2          | LITERAL
    
  2. If the service also needs to create topics, grant the dev-apps service account the ability to create new topics.

      confluent kafka acl create --allow --service-account sa-1a2b3c --operation CREATE --topic "*"
    
      Principal          | Permission | Operation | ResourceType | ResourceName      | PatternType
    +--------------------+------------+-----------+--------------+-------------------+------------+
      User:sa-1a2b3c     | ALLOW      | CREATE    | TOPIC        | *                 | LITERAL
    
  3. Grant the dev-apps service account the ability to consume from a particular topic using the confluent kafka acl create command. Note that it requires two commands: one to specify the topic and one to specify the consumer group.

    confluent kafka acl create --allow --service-account sa-1a2b3c --operation READ --topic myTopic2
    
      Principal          | Permission | Operation | ResourceType | ResourceName      | PatternType
    +--------------------+------------+-----------+--------------+-------------------+------------+
      User:sa-1a2b3c     | ALLOW      | READ      | TOPIC        | myTopic2          | LITERAL
    
    confluent kafka acl create --allow --service-account sa-1a2b3c --operation READ --consumer-group java_example_group_1
    
      Principal        | Permission | Operation | ResourceType | ResourceName         | PatternType
    +------------------+------------+-----------+--------------+----------------------+------------+
      User:sa-1a2b3c   | ALLOW      | READ      | GROUP        | java_example_group_1 | LITERAL
    
  4. List all ACLs for the dev-apps service account using the confluent kafka acl list command.

    confluent kafka acl list --service-account sa-1a2b3c
    

    The output should resemble:

      Principal         | Permission | Operation | ResourceType | ResourceName         | PatternType
    +-------------------+------------+-----------+--------------+----------------------+------------+
      User:sa-1a2b3c    | ALLOW      | CREATE    | TOPIC        | *                    | LITERAL
      User:sa-1a2b3c    | ALLOW      | READ      | TOPIC        | myTopic2             | LITERAL
      User:sa-1a2b3c    | ALLOW      | WRITE     | TOPIC        | myTopic2             | LITERAL
      User:sa-1a2b3c    | ALLOW      | READ      | GROUP        | java_example_group_1 | LITERAL
    
  5. You can add ACLs on prefixed resource patterns. For example, you can add an ACL for any topic whose name starts with demo.

    confluent kafka acl create --allow --service-account sa-1a2b3c --operation WRITE --topic demo --prefix
    
      Principal          | Permission | Operation | ResourceType | ResourceName   | PatternType
    +--------------------+------------+-----------+--------------+----------------+------------+
      User:sa-1a2b3c     | ALLOW      | WRITE     | TOPIC        | demo           | PREFIXED
    
  6. You can add ACLs using a wildcard, which matches any name for that resource type. For example, you can add an ACL that allows writes to a topic of any name.

    confluent kafka acl create --allow --service-account sa-1a2b3c --operation WRITE --topic "*"
    
      Principal          | Permission | Operation | ResourceType | ResourceName    | PatternType
    +--------------------+------------+-----------+--------------+-----------------+------------+
      User:sa-1a2b3c     | ALLOW      | WRITE     | TOPIC        | *               | LITERAL
    

    Note

    Windows users must use double quotation marks around the wildcard character. Linux and macOS users can use either double or single quotation marks.

  7. Remove an ACL from the dev-apps service account using the confluent kafka acl delete command.

    confluent kafka acl delete --allow --service-account sa-1a2b3c --operation WRITE --topic myTopic2
    
    Deleted ACLs.
    
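Taken together, an application that both produces and consumes needs the write grant plus the read grants for both its topic and its consumer group. A sketch bundling the grants from this step into one script, assuming the same service account ID:

```shell
SA=sa-1a2b3c   # service account ID; replace with your own

# Producer: write to the topic.
confluent kafka acl create --allow --service-account "$SA" \
  --operation WRITE --topic myTopic2

# Consumer: read from the topic and use its consumer group.
confluent kafka acl create --allow --service-account "$SA" \
  --operation READ --topic myTopic2
confluent kafka acl create --allow --service-account "$SA" \
  --operation READ --consumer-group java_example_group_1
```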

Step 7: Log out

Log out using the confluent logout command.

confluent logout