Use OAuth/OIDC to Authenticate to Confluent Cloud¶
Confluent OAuth supports the OAuth 2.0 protocol for authentication and authorization. OAuth is an open-standard protocol that grants access to supported clients using a temporary access token. Clients use delegated authorization to access and use Confluent Cloud resources and data on behalf of a user or application.
OAuth 2.0 is the authorization framework, while OpenID Connect (OIDC) is an identity layer built on top of OAuth 2.0. Confluent Cloud supports both OAuth 2.0 and OIDC protocols, providing flexible authentication options for your applications.
When to use OAuth/OIDC:
- You want to manage application identities through your own identity provider
- You need short-lived, secure credentials for application authentication
- You want to integrate with existing enterprise identity systems
- You need fine-grained access control based on user attributes and groups
For information about other authentication methods, see authentication overview.
Summary of key features provided by OAuth 2.0 support in Confluent Cloud:
- Manage application identities and credentials through your own identity provider.
- Authenticate with Confluent Cloud resources using short-lived credentials (JSON Web Tokens).
- Confluent Cloud’s OAuth 2.0 service provides OIDC-based tokens for authentication and authorization. The service is based on the OAuth 2.0 Authorization Framework [RFC 6749] and is compliant with OpenID Connect (OIDC).
- Use identity pools to map group and other attributes to policies (RBAC or ACLs). For details, see Use OAuth Identity Pools with Your OAuth/OIDC Identity Provider on Confluent Cloud.
- You can configure OAuth using the Confluent Cloud Console, Confluent CLI, and REST API.
- You can automate configuration end to end using the OAuth REST API.
- OAuth auto pool mapping automatically maps clients to multiple identity pools based on matching filters, removing the need to explicitly specify identity pool IDs in client configurations. For details, see Use auto pool mapping with OAuth identity pools.
Supported identity providers:
- Microsoft Entra ID (Azure AD)
- Okta
- Auth0
- Google Identity Platform
- Other OAuth/OIDC-compliant providers
For step-by-step instructions to add an identity provider, see Add an identity provider using Confluent Cloud Console.
Core OAuth concepts¶
Understanding these fundamental concepts is essential for working with OAuth in Confluent Cloud:
JWT claims¶
JWT tokens contain claims (key-value pairs) that provide identity and authorization information. Common claims include:
- sub (subject): The unique identifier for the user or application
- aud (audience): The intended recipient of the token
- iss (issuer): The identity provider that issued the token
- scp (scope): The permissions granted to the token
- groups: User group memberships for authorization
Access token format¶
Confluent Cloud accepts only JSON Web Token (JWT) access tokens. JWT is an open industry standard for representing claims to be transferred securely between two parties: a JWT is a string that represents a set of claims as a JSON object in a JSON Web Signature (JWS) or JSON Web Encryption (JWE) structure, enabling the claims to be signed or encrypted.
Each JWT includes a header, body, and signature, formatted like this:
header.body.signature
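As an illustration of that structure, the following minimal Java sketch (not part of the Kafka client; the token string is assumed to come from your identity provider) splits a JWT on the dots and base64url-decodes the header and body:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtParts {
    public static void main(String[] args) {
        // A JWT is three base64url-encoded segments separated by dots.
        String jwt = args[0];
        String[] parts = jwt.split("\\.");
        Base64.Decoder decoder = Base64.getUrlDecoder();
        System.out.println("Header: " + new String(decoder.decode(parts[0]), StandardCharsets.UTF_8));
        System.out.println("Body:   " + new String(decoder.decode(parts[1]), StandardCharsets.UTF_8));
        // parts[2] is the signature; it is verified with the issuer's public key, not decoded as JSON.
    }
}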
For details about JWT credentials, see the following resources:
- JWT (JSON Web Tokens) website, provided by Auth0
- Introduction to JSON Web Tokens
- JWT Debugger
- JWT Handbook: a free ebook
- JSON Web Token (JWT) [RFC 7519]
- JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens [RFC 9068]
Identity pools¶
Identity pools group external identities and assign access based on claims-based policies. They act as a bridge between your identity provider and Confluent Cloud resources, mapping external identities to specific permissions.
For detailed information about creating and managing identity pools, see Use OAuth Identity Pools with Your OAuth/OIDC Identity Provider on Confluent Cloud.
Pool filters¶
Pool filters use Common Expression Language (CEL) to evaluate JWT claims and determine access. They allow you to create dynamic, claims-based access control policies that automatically map users to the appropriate identity pool based on their token claims.
For examples and configuration details, see Set OAuth identity pool filters.
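For example, a pool filter might look like the following CEL expression (the issuer URL and group name are illustrative placeholders), which accepts only tokens issued by your tenant whose groups claim contains a particular group:

claims.iss == "https://myidp.example.com/oauth2/default" && "ProjectA" in claims.groups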
Security model¶
The OAuth security model in Confluent Cloud works through the following flow:
- Your identity provider issues JWT tokens with claims.
- Confluent Cloud validates tokens using trusted JSON Web Key Sets (JWKS).
- Pool filters evaluate claims to determine which identity pool to use.
- Identity pools provide access based on configured policies (RBAC or ACLs).
For information about managing JWKS URIs, see Manage the JWKS URI on Confluent Cloud.
OAuth 2.0 flow¶
At a high level, the following diagram shows a sample OAuth flow for an organization.

Here is a summary of the steps in the OAuth 2.0 flow:
Establish trust between Confluent Cloud and your identity provider.
To establish trust, you need to add the identity provider.
- Define the type of identity provider.
- Create a trust relationship between Confluent Cloud and your identity provider.
- Add the claims to be used for authentication and authorization.
Configure your identity pool and access policy.
An identity pool is a group of external identities that are assigned a certain level of access based on policy.
For details, see Use OAuth Identity Pools with Your OAuth/OIDC Identity Provider on Confluent Cloud.
Configure clients.
To configure your clients:
- Configure the client ID and client secret in the Kafka client.
The identity provider generates a client ID and client secret and gives them to the client to use for all future OAuth exchanges.
- The client requests a JSON Web Token (JWT) from the identity provider using the client credentials grant.
The client credentials grant is an OAuth 2.0 flow where the client authenticates directly with the identity provider using its client credentials to obtain an access token.
Use the access token.
The Kafka client (SASL/OAUTHBEARER) sends the token to Confluent Cloud. If you are using auto pool mapping, Confluent Cloud automatically matches the token to the appropriate identity pool based on the token claims. For details, see Use auto pool mapping with OAuth identity pools.
For detailed client configuration instructions, see OAuth Client Configuration Overview.
Producer/consumer configuration with explicit identity pool ID
Replace the placeholder values with your actual values.
bootstrap.servers=<bootstrap-URL>
security.protocol=SASL_SSL
sasl.oauthbearer.token.endpoint.url=https://myidp.example.com/oauth2/default/v1/token
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler
sasl.mechanism=OAUTHBEARER
sasl.jaas.config= \
  org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  clientId='<client-id>' scope='<requested-scope>' clientSecret='<client-secret>' extension_logicalCluster='<cluster-id>' extension_identityPoolId='<pool-id>';
Here is an example of the Kafka client configuration:
bootstrap.servers=pkc-e8mp9.us-east-1.aws.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.oauthbearer.token.endpoint.url=https://auth.example.com/oauth2/v1/token
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler
sasl.mechanism=OAUTHBEARER
sasl.jaas.config= \
  org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  clientId='kafka-client-123' scope='kafka' clientSecret='client-secret-abc123' extension_logicalCluster='lkc-ab123' extension_identityPoolId='pool-1234abc';
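These are standard Kafka client properties, so a producer can simply load them from a file. The following minimal Java sketch assumes the settings above are saved in a local file (the file name and topic are placeholders):

import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OAuthProducerExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Load the OAUTHBEARER settings shown above from a local properties file.
        try (FileInputStream in = new FileInputStream("client-oauth.properties")) {
            props.load(in);
        }
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The client fetches a JWT from the token endpoint before producing.
            producer.send(new ProducerRecord<>("test-topic", "key", "value")).get();
        }
    }
}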
Producer/consumer configuration with auto pool mapping
Replace the placeholder values with your actual values.
bootstrap.servers=<bootstrap-URL>
security.protocol=SASL_SSL
sasl.oauthbearer.token.endpoint.url=https://myidp.example.com/oauth2/default/v1/token
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler
sasl.mechanism=OAUTHBEARER
sasl.jaas.config= \
  org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  clientId='<client-id>' scope='<requested-scope>' clientSecret='<client-secret>' extension_logicalCluster='<cluster-id>';
Note the absence of the extension_identityPoolId parameter in the auto pool mapping configuration. When it is omitted, the auto pool mapping feature automatically matches the client to the appropriate identity pool based on the token claims. For details, see Use auto pool mapping with OAuth identity pools.
Here is an example of the Kafka client configuration with the extension_identityPoolId parameter omitted:
bootstrap.servers=pkc-e8mp9.us-east-1.aws.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.oauthbearer.token.endpoint.url=https://auth.example.com/oauth2/v1/token
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler
sasl.mechanism=OAUTHBEARER
sasl.jaas.config= \
  org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  clientId='kafka-client-123' scope='kafka' clientSecret='client-secret-abc123' extension_logicalCluster='lkc-ab123';
Confluent Cloud validates the token received based on the trusted JSON Web Key Set (JWKS), extracts the authenticated ID (sub) or other configured claim, extracts the authorization ID (pool ID), and maps it to the authorization policy.
JSON Web Token (JWT) example:
{
  "ver": 1,
  "jti": "AT.-u7tKPqYmJm2t2wZgHnzKVOCY6Hy51y2ohXdRX0Z1gQ",
  "iss": "https://mycompany/oauth2/default",
  "aud": "mycompany-okta",
  "iat": 1617050423,
  "exp": 1617054023,
  "sub": "0oa1xn4ddcJb2GyFN4x7",
  "groups": [
    "Marketing",
    "ProjectA"
  ]
}
For information about accessing Kafka REST APIs with OAuth, see Access Kafka REST APIs with an OAuth-OIDC identity provider on Confluent Cloud.
Token exchange flows¶
OAuth 2.0 supports multiple token exchange flows for different authentication scenarios.
For Kafka clients, the most relevant flows are client_credentials and jwt_bearer, which are machine-to-machine authentication flows that don’t require human interaction.
Client credentials flow¶
The client_credentials flow is the most common token exchange method for Kafka clients. This flow follows RFC 6749 Section 4.4 and is the currently supported exchange flow for Java and non-Java clients in Confluent Cloud.
Flow overview¶
The client credentials flow follows these steps:
- Client authentication: The client authenticates with the authorization server using its client ID and client secret with HTTP Basic authentication.
- Token request: The client sends a POST request to the token endpoint with the grant type set to client_credentials.
- Token validation: The authorization server validates the client credentials and issues an access token.
- Token response: The authorization server returns the access token to the client.
- Resource access: The client uses the access token to access protected resources (Kafka brokers).
Request format¶
The client sends a request to the authorization server with HTTP Basic authentication:
POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic czZCaGRSa3F0Mzo3RmpmcDBaQnIxS3REUmJuZlZkbUl3
Content-Type: application/x-www-form-urlencoded
grant_type=client_credentials
The Authorization header contains the base64-encoded client_id:client_secret pair, and the request body specifies the grant type as client_credentials.
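The Kafka client performs this exchange for you, but the following standalone Java sketch (the token endpoint and credentials are placeholders) shows an equivalent request using java.net.http:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ClientCredentialsTokenRequest {
    public static void main(String[] args) throws Exception {
        String tokenEndpoint = "https://myidp.example.com/oauth2/default/v1/token"; // placeholder
        String clientId = "<client-id>";         // per RFC 6749, form-urlencode the credentials
        String clientSecret = "<client-secret>"; // if they contain reserved characters

        // HTTP Basic authentication header: base64(client_id:client_secret)
        String basicAuth = Base64.getEncoder().encodeToString(
                (clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(tokenEndpoint))
                .header("Authorization", "Basic " + basicAuth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("grant_type=client_credentials"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON response contains the access_token (a JWT) and its expiry.
        System.out.println(response.body());
    }
}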
Implementation details¶
The Java Kafka client uses the HttpAccessTokenRetriever class to handle client credentials requests. The implementation:
- Formats the authorization header using the client ID and secret
- URL-encodes the credentials according to RFC 6749
- Constructs the request body with grant_type=client_credentials
- Sends the request to the configured token endpoint
- Processes the response to extract the access token
JWT bearer flow¶
The jwt_bearer flow is defined in RFC 7523 Section 8.1 and provides an alternative to client credentials using signed JWT assertions. This flow is particularly useful for integrations with providers like Google OIDC that don’t support the client_credentials grant.
Flow overview¶
The JWT bearer flow follows these steps:
- JWT creation: The client creates a signed JWT assertion containing claims (issuer, subject, audience, expiration).
- Token request: The client sends a POST request to the token endpoint with the grant type set to urn:ietf:params:oauth:grant-type:jwt-bearer and includes the signed JWT assertion.
- JWT validation: The authorization server validates the JWT signature and claims.
- Token response: The authorization server returns an access token to the client.
- Resource access: The client uses the access token to access protected resources (Kafka brokers).
Request format¶
The client sends a request with a signed JWT assertion in the request body:
POST /token.oauth2 HTTP/1.1
Host: authz.example.net
Content-Type: application/x-www-form-urlencoded
grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer
&assertion=eyJhbGciOiJFUzI1NiIsImtpZCI6IjE2In0.
eyJpc3Mi[...omitted for brevity...].
J9l-ZhwP[...omitted for brevity...]
The JWT assertion contains:
- A kid (key ID) header identifying the private key
- A signed payload with claims like iss, sub, aud, and exp
- The signature created using the corresponding private key
Implementation details¶
The JWT bearer flow requires:
- A private key for signing the assertion
- Configuration of JWT claims (issuer, subject, audience, expiration)
- Support for different signing algorithms (RS256, ES256)
- Proper JWT construction and signing
Note
The JWT bearer flow is not currently supported in the standard Kafka client configuration but can be implemented using custom token providers. For details, see Custom OAuth implementations for the Java client.
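As an illustration only, the following sketch builds and signs such an assertion with the Nimbus JOSE+JWT library (an arbitrary library choice; the claim values, key ID, and key loading are placeholders, and wiring the result into a custom token provider is not shown):

import java.security.interfaces.RSAPrivateKey;
import java.time.Instant;
import java.util.Date;
import java.util.UUID;
import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;
import com.nimbusds.jose.crypto.RSASSASigner;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

public class JwtBearerAssertionExample {
    static String buildAssertion(RSAPrivateKey privateKey) throws Exception {
        // Claims required for a JWT bearer assertion: issuer, subject, audience, expiration.
        JWTClaimsSet claims = new JWTClaimsSet.Builder()
                .issuer("<client-id>")
                .subject("<client-id>")
                .audience("https://authz.example.net/token.oauth2") // token endpoint (placeholder)
                .issueTime(Date.from(Instant.now()))
                .expirationTime(Date.from(Instant.now().plusSeconds(300)))
                .jwtID(UUID.randomUUID().toString())
                .build();

        // The kid header identifies which private key signed the assertion.
        SignedJWT jwt = new SignedJWT(
                new JWSHeader.Builder(JWSAlgorithm.RS256).keyID("<key-id>").build(),
                claims);
        jwt.sign(new RSASSASigner(privateKey));

        // Serialized form: header.body.signature, sent as the assertion parameter.
        return jwt.serialize();
    }
}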
Token request implementation¶
The Kafka client’s token request implementation follows this high-level flow:
- Authentication trigger: The Kafka client initiates authentication when connecting to brokers.
- Callback handler: The OAuthBearerLoginCallbackHandler processes the authentication request.
- Token retrieval: The HttpAccessTokenRetriever sends HTTP requests to the configured token endpoint.
- Token validation: The AccessTokenValidator validates the received token.
- Retry logic: If the request fails, the retry mechanism implements exponential backoff.
- Token caching: Successful tokens are cached and reused for subsequent connections.
Key components¶
- OAuthBearerLoginCallbackHandler: Main callback handler that processes authentication requests.
- HttpAccessTokenRetriever: Handles HTTP requests to the token endpoint.
- AccessTokenValidator: Validates received tokens.
- Retry Mechanism: Implements exponential backoff for failed requests.
Retry logic¶
The client implements an exponential backoff retry mechanism:
- Immediate attempt to connect to the HTTP endpoint.
- If the first attempt fails, a second attempt is made after sasl.login.retry.backoff.ms.
- If the second attempt fails, the duration is doubled before a third attempt.
- This pattern repeats until sasl.login.retry.backoff.max.ms is reached.
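For example, the following client properties (the values are illustrative) set the initial and maximum backoff between token retrieval attempts:

sasl.login.retry.backoff.ms=500
sasl.login.retry.backoff.max.ms=30000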
Token caching¶
After successful authentication, the returned access token can be reused by other connections from the same client. While additional connections don’t issue new token retrieval HTTP calls, the broker validates the token each time it’s sent by a client connection.
Based on KIP-368, the OAuth token reauthentication logic is automatically inherited by this implementation, so no additional work is needed to support that feature.
Limitations¶
OAuth 2.0 for Confluent Cloud includes the following limitations:
- Authentication is supported for Standard, Enterprise, Dedicated, and Freight Kafka clusters only.
- ACLs for identity pools can be managed only by using Confluent CLI and the REST API.
- Supported clients include:
- Apache Kafka client: 3.2.1 or later
- Confluent Platform: 7.2.1 or later; 7.1.3 or later
- librdkafka: 1.9.2 or later
For default OAuth service limits, see:
What’s next¶
Now that you understand the core OAuth concepts and flow, you can:
- Review identity pool filters examples to see how claims are evaluated.
- Study best practices for secure implementation.
- Practice with your identity provider’s test environment.
- Implement a simple OAuth integration following the step-by-step guides.
For additional learning resources, see:
- OAuth 2.0 specification: RFC 6749
- OpenID Connect specification: OpenID Connect Core
- JWT specification: RFC 7519
- Your identity provider’s documentation:
For information about managing OAuth configurations, see Manage OAuth-OIDC identity provider configurations on Confluent Cloud.