Frequently Asked Questions for Schema Registry¶
FAQ Quick List¶
- What are the minimum required RBAC for Schema Registry on Confluent Cloud?
- What RBAC roles are needed to change Confluent Cloud Schema Registry mode (IMPORT, ReadOnly, and so on)?
- What RBAC roles are available for Stream Catalog?
- How does RBAC work with Schema Registry on Confluent Platform?
- How do you find and delete unused schemas?
- How do you find schema IDs?
- Are there limits on the number of schemas you can maintain?
- How do you delete schemas?
- Can you recover deleted schemas?
- What are schema contexts and when should you use them?
- What is the advantage of using qualified schemas over schemas under the default context?
- Which clients can consume against the schema context?
- Does Schema Linking support mTLS?
- Can the schema exporter use any set of valid certificates to authenticate with source and destination schema registries, or only default certificates?
- How do you monitor a schema link? Does it generate JMX metrics?
- How do you troubleshoot if a schema link fails?
- How do you avoid any impact to schema links during maintenance and change windows?
- How will Schema Linking be maintained across Confluent Platform version updates?
- How do you implement bi-directional Schema Linking?
Q&As¶
What are the minimum required RBAC for Schema Registry on Confluent Cloud?¶
- OrganizationAdmin, EnvironmentAdmin, and DataSteward roles have full access to Schema Registry operations.
- Schema Registry also supports “resource level” role-based access control (RBAC). You can grant a resource-level role, such as ResourceOwner, access to schema subjects within the Schema Registry.
To learn more, see Access control (RBAC) for Confluent Cloud Schema Registry.
What RBAC roles are needed to change Confluent Cloud Schema Registry mode (IMPORT, ReadOnly, and so on)?¶
- Mode can be set at the Schema Registry level or at the subject level.
- OrganizationAdmin, EnvironmentAdmin, and DataSteward roles can set mode at the Schema Registry level.
- For individual subjects, mode follows the compatibility mode RBAC.
To learn more, see Access control (RBAC) for Confluent Cloud Schema Registry.
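As a sketch, mode changes can also be made through the Schema Registry REST API. The endpoints below follow the standard API; the URL variable, credentials, and subject name are placeholders:

```shell
# Set the global mode to IMPORT (requires an admin-level role):
curl -s -u "$API_KEY:$API_SECRET" \
  -X PUT -H "Content-Type: application/json" \
  --data '{"mode": "IMPORT"}' \
  "$SR_URL/mode"

# Set the mode for a single subject (follows compatibility-mode RBAC):
curl -s -u "$API_KEY:$API_SECRET" \
  -X PUT -H "Content-Type: application/json" \
  --data '{"mode": "READONLY"}' \
  "$SR_URL/mode/my-subject"
```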
What RBAC roles are available for Stream Catalog?¶
To learn about RBAC and Stream Catalog on Confluent Cloud, see Access control (RBAC) for Stream Catalog.
How does RBAC work with Schema Registry on Confluent Platform?¶
To learn about RBAC and Confluent Platform, see Configuring Role-Based Access Control for Schema Registry.
How do you find and delete unused schemas?¶
To learn about managing storage, deleting schemas, and schema limits, see the following sections:
- On Confluent Cloud: Delete Schemas and Manage Storage Space on Confluent Cloud
- On Confluent Platform: Schema Deletion Guidelines
How do you find schema IDs?¶
There are several ways to get schema IDs, including:
- View schema IDs on Confluent Cloud Console or Confluent Control Center in Confluent Platform:
- For Confluent Cloud, see Manage Schemas on Confluent Cloud
- For Confluent Platform, see Manage Schemas on Control Center on Confluent Platform.
- Use the Confluent CLI:
- Run the command confluent schema-registry schema describe, the output for which includes the ID of the specified schema.
- Use the local Kafka scripts to print schema IDs with the consumer:
- Confluent Cloud documentation: Print schema IDs with command line consumer utilities
- Confluent Platform documentation: Print schema IDs with command line consumer utilities
- Use API calls to show schema IDs:
- In Confluent Cloud API usage examples, see List schemas with subject and ID for each and Get the latest version of a schema with its schema ID
- See Confluent Platform API usage examples for several calls that show schema IDs.
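For example, the API calls above can be issued with curl. The endpoints are standard Schema Registry REST API; the URL variable, credentials, and subject name are placeholders:

```shell
# List all schemas; each entry includes subject, version, and schema ID:
curl -s -u "$API_KEY:$API_SECRET" "$SR_URL/schemas"

# Get the latest version of a subject; the "id" field in the
# response is the schema ID:
curl -s -u "$API_KEY:$API_SECRET" \
  "$SR_URL/subjects/my-subject/versions/latest"
```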
Are there limits on the number of schemas you can maintain?¶
Confluent Cloud Schema Registry imposes limits on the number of schema versions supported in the registry, depending on the cluster type. When these limits are reached, you can identify unused schemas and free up storage space by deleting them. To learn more, see Delete Schemas and Manage Storage Space on Confluent Cloud.
There are no limits on schemas on self-managed Confluent Platform. To learn more about managing schemas on Confluent Platform, including soft and hard deletes, and schema versioning, see Schema Deletion Guidelines in the Confluent Platform documentation.
How do you delete schemas?¶
To learn about deleting schemas on Confluent Cloud, see Delete Schemas and Manage Storage Space on Confluent Cloud.
To learn how to delete schemas on Confluent Platform, see Schema Deletion Guidelines in the Confluent Platform documentation.
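As a sketch of the two-step deletion flow (soft delete, then optional hard delete) using the standard REST API, with placeholder URL, credentials, and subject name:

```shell
# Soft-delete version 1 of a subject (recoverable):
curl -s -u "$API_KEY:$API_SECRET" -X DELETE \
  "$SR_URL/subjects/my-subject/versions/1"

# Hard-delete the same version (permanent; the version must be
# soft-deleted first):
curl -s -u "$API_KEY:$API_SECRET" -X DELETE \
  "$SR_URL/subjects/my-subject/versions/1?permanent=true"
```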
Can you recover deleted schemas?¶
You can recover soft-deleted schemas on both Confluent Cloud and Confluent Platform, as described in:
- Recover a soft-deleted schema (Confluent Cloud)
- Recovering a soft-deleted schema (Confluent Platform)
If you still have the schema definition for a hard-deleted schema that you want to recover, you can recover the schema using subject-level schema migration as a workaround. To learn how to do this, see Migrate an Individual Schema to an Already Populated Schema Registry (subject-level migration).
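A rough sketch of that workaround with the REST API follows. All names, the ID, and the schema payload are placeholders; IMPORT mode is required so the original ID and version can be specified on registration:

```shell
# 1. Put the subject into IMPORT mode so IDs and versions can be specified:
curl -s -u "$API_KEY:$API_SECRET" -X PUT -H "Content-Type: application/json" \
  --data '{"mode": "IMPORT"}' "$SR_URL/mode/my-subject"

# 2. Re-register the saved schema definition with its original ID and version
#    (replace the id, version, and schema string with your saved values):
curl -s -u "$API_KEY:$API_SECRET" -X POST -H "Content-Type: application/json" \
  --data '{"version": 1, "id": 100125, "schema": "{\"type\": \"string\"}"}' \
  "$SR_URL/subjects/my-subject/versions"

# 3. Restore the subject to its normal mode:
curl -s -u "$API_KEY:$API_SECRET" -X PUT -H "Content-Type: application/json" \
  --data '{"mode": "READWRITE"}' "$SR_URL/mode/my-subject"
```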
What are schema contexts and when should you use them?¶
A schema context is an ad-hoc grouping of subject names and schema IDs. You can use a context name strategy to organize your schemas, grouping logically related schemas together by name into what can be thought of as a sub-registry.
Schema IDs and subject names without explicit contexts are maintained in the default context. Subject names and IDs are unique per context, so you could have an unqualified subject :.:my-football-teams in the default context (indicated by the . representing the default context) and a qualified subject :.my-cool-teams:my-football-teams in the context :.my-cool-teams:, and they can function as independent and unique subjects. The qualified and unqualified subjects could even have the same schema IDs, and still be unique by virtue of being in different contexts.
There are a few use cases for contexts beyond simple organization, and more concepts and strategies for using them. You can leverage multi-context APIs and set up a context name strategy for Schema Registry clients to use.
Schema contexts are useful for Schema Linking, where they are used in concert with exporters, but you can also use them outside of Schema Linking if desired. To learn more about schema contexts and how they work, see the Schema Linking documentation.
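For illustration, a qualified subject can be addressed directly in REST calls. This hypothetical example registers a schema in the context .my-cool-teams; the URL variable, credentials, and names are placeholders:

```shell
# Register a schema under a qualified subject in context ".my-cool-teams":
curl -s -u "$API_KEY:$API_SECRET" -X POST -H "Content-Type: application/json" \
  --data '{"schema": "{\"type\": \"string\"}"}' \
  "$SR_URL/subjects/:.my-cool-teams:my-football-teams/versions"

# The unqualified subject of the same name lives in the default context
# and is independent of the qualified one:
curl -s -u "$API_KEY:$API_SECRET" \
  "$SR_URL/subjects/my-football-teams/versions"
```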
What is the advantage of using qualified schemas over schemas under the default context?¶
Schema Linking preserves schema IDs; therefore, if you export schemas to another cluster, you can copy them into non-default contexts to avoid ID collisions with schemas under existing contexts. Also, contexts provide a way to separate different environments for schemas. For example, you could develop with schemas in a “developer” context, and promote them to a “production” context when development is done.
To learn more, see the Schema Linking documentation.
Which clients can consume against the schema context?¶
All clients (Java, .NET, Spring Boot, and so on) can specify an explicit context as part of the Schema Registry URL; for example, http://mysr:8081/contexts/mycontext. Currently, only the Java client also passes the subject name when it looks up an ID. With the subject name, Schema Registry can find the correct context for the ID if it is not in the default context. This may be supported by .NET and Python clients in future releases.
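For instance, scoping requests to a context is just a matter of extending the base URL; a client's schema.registry.url setting would use the same form. The host, context name, and credentials here are placeholders:

```shell
# Every request made through this base URL is scoped to "mycontext":
curl -s -u "$API_KEY:$API_SECRET" \
  "http://mysr:8081/contexts/mycontext/subjects"
```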
Does Schema Linking support mTLS?¶
Source and destination schema registries provide support for mTLS. Does Schema Linking also provide this support? If so, how do you provide certificates to connect with the source Schema Registry?
On Confluent Platform 7.1 and later, Schema Registry clients can accept certificates for mTLS authentication in PEM format.
Can the schema exporter use any set of valid certificates to authenticate with source and destination schema registries, or only default certificates?¶
Yes, any certificates can be passed.
How do you monitor a schema link? Does it generate JMX metrics?¶
Currently, only a count of exporters is available via JMX.
How do you troubleshoot if a schema link fails?¶
Query the status endpoint /exporters/{name}/status to get the exception stack trace of a failure.
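For example (the exporter name, URL variable, and credentials are placeholders):

```shell
# Check the state of a schema exporter; a failed exporter's status
# includes the error trace:
curl -s -u "$API_KEY:$API_SECRET" "$SR_URL/exporters/my-exporter/status"
```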
How do you avoid any impact to schema links during maintenance and change windows?¶
Exporters can be paused at any time. They are also resilient to failure: when resumed, they simply pick up where they left off.
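A sketch of pausing and resuming an exporter around a maintenance window, using the standard exporter endpoints (the exporter name, URL variable, and credentials are placeholders):

```shell
# Pause the exporter before the maintenance window:
curl -s -u "$API_KEY:$API_SECRET" -X PUT "$SR_URL/exporters/my-exporter/pause"

# Resume it afterward; it continues from where it left off:
curl -s -u "$API_KEY:$API_SECRET" -X PUT "$SR_URL/exporters/my-exporter/resume"
```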
How will Schema Linking be maintained across Confluent Platform version updates?¶
Any future changes to Schema Linking will be done in a backward-compatible manner.
How do you implement bi-directional Schema Linking?¶
Schema Linking is implemented in “push” mode; therefore, to achieve bi-directional Schema Linking, each side must initiate a schema exporter to the other side.
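A rough sketch: each cluster creates its own exporter targeting the other. All names, URLs, subject patterns, and credentials below are placeholders:

```shell
# On cluster A: create an exporter that pushes matching subjects to cluster B:
curl -s -u "$A_KEY:$A_SECRET" -X POST -H "Content-Type: application/json" \
  --data '{
    "name": "a-to-b",
    "subjects": ["orders.*"],
    "config": {
      "schema.registry.url": "https://sr-b.example.com",
      "basic.auth.credentials.source": "USER_INFO",
      "basic.auth.user.info": "B_KEY:B_SECRET"
    }
  }' "https://sr-a.example.com/exporters"

# On cluster B: create the mirror-image exporter pointing back at cluster A.
```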