Access Control Lists (ACLs) for Confluent Cloud

Access Control Lists (ACLs) let you control access to the data in your Confluent Cloud Kafka clusters.

Important

Anyone with access to the Confluent Cloud web interface has full access to all resources (equivalent to super user access).

Also, Confluent Cloud ACLs are similar to Kafka ACLs, except for the CLI commands used to manage them. Before you create and use ACLs, familiarize yourself with ACL concepts; this helps you avoid common pitfalls when using ACLs to manage access to components and cluster data.

The operations available to a user depend on the resources that the user can access. When defining an ACL, consider which resources your users or groups need access to and which operations apply when managing those resources. Depending on the resources a user requires, you might need to define more than one ACL (see the CLI sketch after the table below).

Note that the Confluent Cloud ACL resources and operations listed here are a subset of the Kafka ACL resources and operations.

Resource and operations
Cluster
  • Create (allows creating topics)
  • Describe (number of brokers and other metadata)
  • IdempotentWrite (for producers running in idempotent mode; allows InitProducerId to initialize the producer)
  • Alter (CreateAcls, DeleteAcls, DescribeConfigs)
Consumer Groups
  • Delete
  • Describe
  • Read
Topic
  • Alter
  • AlterConfigs
  • Create
  • Delete
  • Describe (for example, number of partitions)
  • DescribeConfigs
  • Read
  • Write
TransactionalID
  • Describe
  • Write
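
For example, a consumer application typically needs Read on the topics it consumes and Read on its consumer group, so it requires at least two ACLs. The following is a minimal sketch using the Confluent Cloud CLI (ccloud); the service account ID (12345), topic name, and consumer group name are placeholders, and flag names can vary between CLI versions (check ccloud kafka acl create --help):

    # Allow the service account to read from the topic (12345 is a placeholder ID)
    ccloud kafka acl create --allow --service-account 12345 \
      --operation READ --topic orders

    # Allow the same service account to read as a member of its consumer group
    ccloud kafka acl create --allow --service-account 12345 \
      --operation READ --consumer-group orders-app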

Confluent Cloud does not support IP or Google Cloud Platform (GCP) whitelisting, where all entities are denied access except those included in the whitelist.

The use of wildcards and prefix matching makes Kafka ACLs much easier to use than fully specifying every topic or resource. For more details, refer to Prefixed ACLs.
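
As a sketch of prefix matching (again assuming the ccloud CLI and placeholder names; confirm the exact flag with ccloud kafka acl create --help), a single prefixed ACL can cover every topic whose name starts with a common prefix:

    # Grant write access to all topics whose names start with "billing." (placeholder prefix)
    ccloud kafka acl create --allow --service-account 12345 \
      --operation WRITE --topic "billing." --prefix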

ACLs are managed using the Confluent Cloud CLI. For a complete list of Kafka ACLs, see Authorization using ACLs.
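
For instance, you might list the ACLs bound to a service account and delete one that is no longer needed. A hedged sketch (the service account ID and resource names are placeholders, and deleting an ACL requires re-specifying the same binding that was created):

    # Show the ACLs that apply to service account 12345
    ccloud kafka acl list --service-account 12345

    # Remove a previously created ACL by specifying the same binding
    ccloud kafka acl delete --allow --service-account 12345 \
      --operation READ --topic orders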

Restrict Access to Confluent Cloud

All access to organization-, environment-, and cluster-level management actions is restricted using RBAC. However, user accounts in Confluent Cloud have superuser privileges on all other resources, such as the data within your Kafka clusters. For these resources, you can restrict access by using service accounts and by distributing API keys for use with the Kafka command-line tools.
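
For example, you might create a service account for an application and issue it an API key scoped to a specific cluster, then grant that account only the ACLs it needs. A minimal sketch with the ccloud CLI; the account name, description, service account ID (12345), and cluster ID (lkc-abc123) are placeholders, and flag names may differ between CLI versions:

    # Create a service account for the application (name and description are placeholders)
    ccloud service-account create "orders-app" --description "Orders consumer application"

    # Create an API key/secret pair owned by that service account, scoped to the cluster
    ccloud api-key create --service-account 12345 --resource lkc-abc123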

Prerequisite
  • Confluent Platform is installed on the same local machine as the Confluent Cloud CLI.
  1. Create a properties file with the following contents, including an API key (<api-key>) and secret (<api-secret>) pair and the cluster bootstrap server endpoint (<broker-endpoint>), and save it as cloud-access.properties. An OrganizationAdmin, EnvironmentAdmin, or CloudClusterAdmin can provision the API key/secret pair using the Confluent Cloud CLI.

    bootstrap.servers=<broker-endpoint>
    request.timeout.ms=20000
    retry.backoff.ms=500
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="<api-key>" \
      password="<api-secret>";
    sasl.mechanism=PLAIN
    security.protocol=SASL_SSL
    ssl.endpoint.identification.algorithm=https
    
  2. Run the kafka-* command-line tools with cloud-access.properties specified. For example:

    • kafka-topics

      kafka-topics --create --bootstrap-server <broker-endpoint> --replication-factor 3 \
      --partitions 1 --topic my-topic --command-config cloud-access.properties
      
    • kafka-console-producer

      kafka-console-producer --topic my-topic --producer.config cloud-access.properties \
      --broker-list <broker-endpoint>
      
    • kafka-console-consumer

      kafka-console-consumer --topic my-topic --consumer.config cloud-access.properties \
      --bootstrap-server <broker-endpoint> --from-beginning
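
Other kafka-* tools accept the same properties file through their --command-config option. For example, a quick way to confirm that the credentials and broker endpoint work (assuming at least one consumer group exists) is to list consumer groups:

    kafka-consumer-groups --list --bootstrap-server <broker-endpoint> \
    --command-config cloud-access.properties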