Cluster Identifiers in Confluent Platform

When you assign user roles using the Confluent CLI, you need the identifiers for the clusters in your Confluent Platform deployment.

For example, the following command assigns the DeveloperRead role on a topic in the Kafka cluster identified by <kafka-id>.

# Grant read-only access for a user to a topic.
confluent iam rbac role-binding create \
  --principal User:<user-name> \
  --role DeveloperRead \
  --resource Topic:<topic-name> \
  --kafka-cluster-id <kafka-id>

When creating role bindings for Schema Registry, ksqlDB, and Connect, you must provide two identifiers: the Kafka cluster identifier and an additional component cluster identifier. For example, the following command assigns the DeveloperWrite role on a subject in a Schema Registry cluster:

# Grant write access for a user to a subject in Schema Registry.
confluent iam rbac role-binding create \
  --principal User:<user-name> \
  --role DeveloperWrite \
  --resource Subject:<subject-name> \
  --kafka-cluster-id <kafka-id> \
  --schema-registry-cluster <schema-registry-cluster-id>

View a cluster ID

Before searching for a component’s cluster ID, you must know the URL (for example, http://127.0.0.1:8080/) for all of your Confluent Platform components. Contact your IT admin to get the HTTP address (which depends on your setup) for each component.

To view the cluster ID for a Confluent Platform component:

confluent cluster describe --url <service url>

Kafka example

For Kafka, your output should resemble:

confluent cluster describe --url http://localhost:8090
Scope:
       Type       |           ID
+-----------------+------------------------+
  kafka-cluster   | LRx92c9yQ+ws786HYosuBn

In this example, the Kafka cluster ID is LRx92c9yQ+ws786HYosuBn.

Use the Kafka cluster ID with the kafka-cluster-id option when you assign a role or an ACL to a user. The following Confluent CLI command shows how to grant the DeveloperRead role on this cluster.

# Grant read-only access for a user to a topic.
confluent iam rbac role-binding create \
  --principal User:<user-name> \
  --role DeveloperRead \
  --resource Topic:<topic-name> \
  --kafka-cluster-id LRx92c9yQ+ws786HYosuBn

ksqlDB example

For ksqlDB, your output should resemble:

confluent cluster describe --url http://localhost:8088
Scope:
       Type       |           ID
+-----------------+------------------------+
  ksql-cluster    | ksql-cluster
  kafka-cluster   | JFb61d2pD6fe224FbsjoZl

In this example, the ksqlDB service ID is ksql-cluster.

Use the ksqlDB service ID with the ksql-cluster-id option when you assign a role to a user. The following Confluent CLI command shows how to grant the ResourceOwner role on this cluster.

confluent iam rbac role-binding create \
  --principal User:<user-name> \
  --role ResourceOwner \
  --kafka-cluster-id JFb61d2pD6fe224FbsjoZl \
  --ksql-cluster-id ksql-cluster \
  --resource KsqlCluster:ksql-cluster
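The ksqlDB service ID shown above typically comes from the ksql.service.id setting in the ksqlDB server configuration. The following excerpt is an illustrative sketch; the bootstrap server and listener values are placeholders.

# ksql-server.properties (illustrative excerpt)
# Service ID reported as ksql-cluster and passed with the ksql-cluster-id option.
ksql.service.id=ksql-cluster
# Kafka cluster that ksqlDB uses; its ID is passed with the kafka-cluster-id option.
bootstrap.servers=kafka-broker-1:9092
listeners=http://0.0.0.0:8088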

Schema Registry example

For Schema Registry, your output should resemble:

confluent cluster describe --url http://localhost:8081
Scope:
          Type           |           ID
+------------------------+--------------------------+
 schema-registry-cluster |  schema-registry
 kafka-cluster           |  DCs16f7dN-pu781RtumkJd

In this example, the Schema Registry cluster ID is schema-registry.

The following Confluent CLI command shows how to grant the DeveloperRead role on a Schema Registry cluster that has the default cluster ID.

confluent iam rbac role-binding create \
  --principal User:<user-name> \
  --role DeveloperRead \
  --schema-registry-cluster schema-registry \
  --kafka-cluster-id DCs16f7dN-pu781RtumkJd

The Schema Registry cluster ID is the value of the schema.registry.group.id setting in the schema-registry.properties file. The default value is schema-registry. Specify the ID by using the schema-registry-cluster option in the confluent iam rbac role-binding create command.

Use the cluster ID of the Kafka cluster that stores schemas. This cluster is configured with the kafkastore.bootstrap.servers property.
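For reference, a minimal schema-registry.properties excerpt might look like the following sketch; the bootstrap server and listener values are placeholders.

# schema-registry.properties (illustrative excerpt)
# Cluster ID reported as schema-registry and passed with the schema-registry-cluster option.
schema.registry.group.id=schema-registry
# Kafka cluster that stores schemas; its ID is passed with the kafka-cluster-id option.
kafkastore.bootstrap.servers=PLAINTEXT://kafka-broker-1:9092
listeners=http://0.0.0.0:8081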

Connect example

For Connect, your output should resemble:

confluent cluster describe --url http://localhost:8083
Scope:
       Type       |           ID
+-----------------+------------------------+
  connect-cluster | connect-cluster
  kafka-cluster   | DEk20b9rR-at315LMtcuUw

In this example, the Connect cluster ID is connect-cluster.

The following Confluent CLI command shows how to grant the DeveloperRead role on the connect-cluster Connect cluster.

confluent iam rbac role-binding create \
  --principal User:<user-name> \
  --role DeveloperRead \
  --connect-cluster-id connect-cluster \
  --kafka-cluster-id DEk20b9rR-at315LMtcuUw

The Connect cluster ID is the value of the group.id setting in your worker configuration file. Specify the ID by using the connect-cluster-id option in the confluent iam rbac role-binding create command.

Use the cluster ID of the Kafka cluster that stores connector configuration, status, and offset information. This cluster is configured in the Connect worker file that has the bootstrap.servers property. For more information, see Distributed Worker Configuration.
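For reference, a minimal distributed worker configuration excerpt might look like the following sketch; the bootstrap server and storage topic names are placeholders.

# connect-distributed.properties (illustrative excerpt)
# Cluster ID reported as connect-cluster and passed with the connect-cluster-id option.
group.id=connect-cluster
# Kafka cluster that stores connector configuration, status, and offset information;
# its ID is passed with the kafka-cluster-id option.
bootstrap.servers=kafka-broker-1:9092
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-statuses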

Note

If Connect is running in standalone mode, the connect-cluster-id is STANDALONE, in all capital letters.
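For example, assuming the same Kafka cluster ID as the distributed example above, a role binding for a standalone worker might look like the following.

confluent iam rbac role-binding create \
  --principal User:<user-name> \
  --role DeveloperRead \
  --connect-cluster-id STANDALONE \
  --kafka-cluster-id DEk20b9rR-at315LMtcuUw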

Confluent Manager for Apache Flink and cluster IDs

Confluent Manager for Apache Flink does not use a formal cluster ID. It does not expose or require one because it has its own JobManager and TaskManager architecture. As a result, Flink does not register with other Confluent Platform components.

For example, Confluent Control Center does not show a cluster ID for Flink when monitoring jobs or integrating with Kafka topics. Flink connects to Kafka as a client and not as a managed cluster component that needs to be uniquely identified within Confluent Platform.

Instead, you configure Flink to connect to Kafka, Schema Registry, and other components using their respective connection properties. When managing CMF applications deployed in Kubernetes application mode, you can distinguish them by assigning clear, descriptive application identifiers (for example, fraud-detection-app or log-processing-app). This approach helps avoid conflicts and makes applications easier to identify and manage in multi-cluster setups.