Manage Schemas in Confluent Cloud

Schema Registry is fully supported on Confluent Cloud with the per-environment, hosted Schema Registry, and is a key element of Stream Governance on Confluent Cloud.

Tip

Try out the embedded Confluent Cloud interactive tutorials. Want to jump right in? Sign up for or sign in to Confluent Cloud and try the guided workflows directly in Confluent Cloud.

View a schema

View the schema details for a specific topic.

Search for schemas

You can also find and view schemas by searching for them. Searches are global; that is, they span across environments and clusters.

  1. Start typing the name of a schema subject, data record, or data field name into the search bar at the top. You will get results as you type, including for other entities like topics.

    ../_images/cloud-02a-search-schema.png
  2. Press Enter to select an entity, such as a schema.

    ../_images/cloud-02b-search-schema.png

To learn more, see Search entities and tags.

List all schemas from the Environment view

  1. Navigate to an environment you want to work with, and click to select it as the current environment.

    ../_images/cloud-02c-view-manage-schemas.png
  2. Either click the Schema Registry tab, or click Schemas on the right panel to get a list of all schemas in the environment.

    Screenshot of sample schema in Confluent Cloud
  3. Browse the list to find the schema you want to view.

    ../_images/cloud-02d-view-schemas-list.png
  4. Click a schema in the list to view it.

    By default, the schema is shown in tree view. (To learn more, see Tree view and code view.)

    ../_images/cloud-02e-view-schemas-list.png

Tree view and code view

Two different types of views are available for schemas:

  • tree view
  • editable code view

To switch between the views, click the buttons to the left of the schema level search box:

../_images/cloud-sr-schema-tree-toggle-icon-captions.png

By default, schemas are displayed in a tree view, which allows you to understand the structure of the schema and navigate the hierarchy of elements and sub-elements.

../_images/cloud-sr-schema-tree-view.png

In the tree view you can:

  • Use the arrows to the left of an element to expand it and view sub-elements.
  • Apply and manage available tags as described in Tag entities, data, and schemas.

In edit mode (the “code view”), you can create and edit schemas as described in the sections below.

../_images/cloud-sr-schema-code-view.png

Create a topic schema

Create key and value schemas. Value schemas are typically created more frequently than key schemas.

Best practices:

  • Provide default values for fields, where pertinent to your schema, to facilitate backward compatibility.
  • Document at least the more obscure fields to keep the schema human-readable.

Tip

You can also create schemas from the Confluent CLI, as described in the Create a Schema section in the Quick Start. A handy commands reference is here.

Create a topic value schema

  1. From the navigation menu, click Topics, then click a topic to select it (or create a new one).

  2. Click the Schema tab.

    ../_images/cloud-03-set-msg-value-schema.png
  3. Click Set a schema. The Schema editor appears.

    ../_images/cloud-04-schema-value-editor.png
  4. Select a schema type: JSON, Avro, or Protobuf. (The default is Avro.)

  5. The basic structure of a schema appears prepopulated in the editor as a starting point. Enter the schema in the editor:

    • name: Enter a name for the schema if you do not want to accept the default, which is determined by the subject name strategy. The default is schema_type_topic_name. Required.

    • type: Either record, enum, union, array, map, or fixed. (The type record is specified at the schema’s top level and can include multiple fields of different data types.) Required.

    • namespace: Fully-qualified name to prevent schema naming conflicts. String that qualifies the schema name. Optional but recommended.

    • fields: JSON array listing one or more fields for a record. Required.

      Each field can have the following attributes:

      • name: Name of the field. Required.
      • type: Data type for the field. Required.
      • doc: Field metadata. Optional but recommended.
      • default: Default value for a field. Optional but recommended.
      • order: Sorting order for a field. Valid values are ascending, descending, or ignore. Default: ascending. Optional.
      • aliases: Alternative names for a field. Optional.

    For example, you could add the following simple schema.

    {
      "type": "record",
      "name": "value_my_new_widget",
      "fields": [
        {
          "name": "name",
          "type": "string"
        }
      ]
    }
    

    This will display in Confluent Cloud as shown below.

    ../_images/cloud-05-entered-schema.png

    In edit mode, you have options to:

    • Validate the schema for syntax and structure before you create it.
    • Add schema references with a guided wizard.
  6. Click Create.

    • If the entered schema is valid, it is saved and a Schema updated message is briefly displayed in the banner area. The saved schema is shown in tree view.

      ../_images/cloud-06-schema-updated.png
    • If the entered schema is invalid, parse errors are highlighted in the editor (as in this example where a curly bracket was left off). If parse errors aren’t auto-highlighted, click the See error messages link on the warning banner to enable them.

      ../_images/cloud-schema-invalid-avro-warning-banner.png
      ../_images/cloud-07-schema-invalid-avro.png

If applicable, repeat the procedure for the topic key schema.
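
You can also register the same value schema programmatically against the Schema Registry endpoint. The following is a minimal sketch using the confluent-kafka Python client; the endpoint URL, API key and secret, and the my_new_widget topic are placeholder assumptions, and the subject name my_new_widget-value assumes the default TopicNameStrategy.

    from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

    # Placeholders: substitute your Schema Registry endpoint and API credentials.
    sr = SchemaRegistryClient({
        "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",
        "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",
    })

    value_schema = Schema(
        """
        {
          "type": "record",
          "name": "value_my_new_widget",
          "fields": [
            {"name": "name", "type": "string"}
          ]
        }
        """,
        schema_type="AVRO",
    )

    # With the default TopicNameStrategy, the value subject is "<topic name>-value".
    schema_id = sr.register_schema("my_new_widget-value", value_schema)
    print(f"Registered schema with ID {schema_id}")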

Work with schema references

You can add a reference to another schema, using the wizard to help locate available schemas and versions.

../_images/cloud-05a-schema-references.png

The Reference name you provide must match the target schema, based on guidelines for the schema format you are using:

  • In JSON Schema, the name is the value of the $ref field that points to the referenced schema.
  • In Avro, the name is the full name of the referenced schema; this is the value in the name field of the referenced schema.
  • In Protobuf, the name is the value used in the import statement that references the schema.

First, locate the schema you want to reference, and get its reference name. For an Avro schema, the reference name is made up of the namespace and the record name. For example, given the following schema you might want to reference, the reference name would be Example.Employee.

../_images/cloud-05aa-schema-ref-name.png

Add a schema reference to the current schema in the editor

  1. Click Evolve schema.
  2. Click Add reference.
  3. Provide a Reference name per the rules described above.
  4. Select the schema from the Subject list.
  5. Select the Version of the schema you want to use.
  6. Click Validate to check if the reference will pass.
  7. Click Save to save the reference.

For example, to create a reference from the widget schema to the Avro value schema for the employees topic (subject name employees-value), you can configure a reference to Example.Employee as shown.

../_images/cloud-05b-schema-references.png

If you look at the referenced schema, employees-value, you can see that this reference was created by using:

  • The fully qualified name of the referenced schema as the Reference name (in this case Example.Employee)
  • Subject name shown in the referenced schema: employees-value
  • Version of the referenced schema that you want to use
../_images/cloud-05c-schema-references.png

To learn more, see Schema references. An example of an API call to list schemas referencing a given schema is shown in List schemas referencing a schema.
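
As a rough programmatic sketch of the same configuration, the confluent-kafka Python client lets you attach references when registering a schema. The endpoint, credentials, the widget-value subject, and the assignee field are placeholder assumptions; the reference points at version 1 of the employees-value subject from the example above.

    from confluent_kafka.schema_registry import (
        Schema,
        SchemaReference,
        SchemaRegistryClient,
    )

    sr = SchemaRegistryClient({
        "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",  # placeholder endpoint
        "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",     # placeholder credentials
    })

    # The referencing schema uses the fully qualified name Example.Employee as a field type.
    widget_schema = Schema(
        """
        {
          "type": "record",
          "name": "value_widget",
          "fields": [
            {"name": "name", "type": "string"},
            {"name": "assignee", "type": "Example.Employee"}
          ]
        }
        """,
        schema_type="AVRO",
        # Reference name, subject that holds the referenced schema, and version to pin.
        references=[SchemaReference("Example.Employee", "employees-value", 1)],
    )

    sr.register_schema("widget-value", widget_schema)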

View, edit, or delete schema references for a topic

Existing schema references show up on editable versions of the schema where they are configured.

  1. Navigate to a topic with a schema that uses references; for example, the widget topic, whose widget-value schema was configured with a reference in the previous example.

  2. Click into the editor as if to edit the schema.

    If there are references to other schemas configured in this schema, they are displayed in the Schema references list below the editor.

    From this view, you can also add more references to the schema, modify existing references, or delete them.

Create a topic key schema

  1. Click the Key option. You are prompted to set a message key schema.

    ../_images/cloud-08-set-msg-key-schema.png
  2. Click Set a schema.

  3. Choose the Avro format and delete the sample schema; for a key, a simple string type to hold a UUID is often sufficient.

  4. Enter the schema into the editor and click Save.

    Here is an example of a schema appropriate for a message key.

    {
      "namespace": "io.confluent.examples.clients.basicavro",
      "name": "key_widget",
      "type": "string"
    }
    

Best practices and pitfalls for key values

Kafka messages are key-value pairs. Message keys and message values can be serialized independently. For example, the value may be using an Avro record, while the key may be a primitive (string, integer, and so forth). Typically message keys, if used, are primitives. How you set the key is up to you and the requirements of your implementation.

As a best practice, keep key schema complexity to a minimum. Use either a simple primitive data type, such as a string UUID or a long ID, or an Avro record that does not use maps or arrays as fields, as shown in the example below. Do not use Protobuf messages or JSON objects for key values. Avro does not guarantee deterministic serialization for maps or arrays, and the Protobuf and JSON Schema formats do not guarantee deterministic serialization for any object. Using these formats for key values will break topic partitioning. To learn more, see Partitioning gotchas in the Confluent Community Forum.
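
For illustration only, here is a sketch of such a key schema: an Avro record whose fields are all primitives, registered with the confluent-kafka Python client. The endpoint, credentials, field names, and the widget-key subject are placeholder assumptions.

    from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

    sr = SchemaRegistryClient({
        "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",  # placeholder endpoint
        "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",     # placeholder credentials
    })

    # A record key that uses only primitive fields -- no maps or arrays.
    key_schema = Schema(
        """
        {
          "namespace": "io.confluent.examples.clients.basicavro",
          "type": "record",
          "name": "key_widget",
          "fields": [
            {"name": "id", "type": "long"},
            {"name": "region_code", "type": "string"}
          ]
        }
        """,
        schema_type="AVRO",
    )

    sr.register_schema("widget-key", key_schema)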

For detailed examples of key and value schemas, see the discussion under Formats, Serializers, and Deserializers in the Schema Registry documentation.

Derive a schema from messages

As an alternative to manually creating a schema, you can generate a new schema from a given set of messages. To learn more, see the setup and examples provided in schema-registry:derive-schema in the Schema Registry Maven Plugin documentation.

Edit schemas

Edit an existing schema for a topic.

  1. From the navigation menu, click Topics, then click a topic to select it.

  2. Click the Schema tab.

  3. Select the Key or Value option for the schema.

  4. The tree view is shown by default.

    ../_images/cloud-schema-tree-view.png
  5. Click Evolve Schema.

    ../_images/cloud-schema-code-view.png
  6. Make the changes in the schema editor.

    For example, you could edit the previous schema by adding a new field called region.

    {
      "fields": [
        {
          "name": "name",
          "type": "string"
        },
        {
          "name": "region",
          "type": "string",
          "default": ""
        }
      ],
      "name": "value_widgets",
      "type": "record"
    }
    

    In edit mode, you have options to:

    • Validate the schema for syntax and structure before you save it.
    • Add schema references with a guided wizard.

    Tip

    When the compatibility mode is set to Backward Compatibility, you must provide a default for the new field. This ensures that consumer applications can read both older messages written to the Version 1 schema (with only a name field) and new messages constructed per the Version 2 schema (with name and region fields). For messages that match the Version 1 schema and only have values for name, region is left empty. To learn more, see Passing compatibility checks in the Confluent Cloud Schema Registry Tutorial.

  7. Click Save.

    • If the schema update is valid and compatible with its prior versions (assuming a backward-compatible mode), the schema is updated and the version count is incremented. You can compare the different versions of a schema.

      ../_images/cloud-09-schema-version-updated.png
    • If the schema update is invalid or incompatible with an earlier schema version, parse errors are highlighted in the editor. If parse errors aren’t auto-highlighted, click the See error messages link on the warning banner to enable them.

      For example, if you add a new field but do not include a default value as described in the previous step, you will get an incompatibility error. You can fix this by adding a default value for “region”.

      ../_images/cloud-schema-invalid-avro-warning-banner.png
      ../_images/cloud-10-schema-incompatible.png
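
The same evolution can be performed programmatically. Below is a minimal sketch using the confluent-kafka Python client; the endpoint, credentials, and the widgets-value subject are placeholder assumptions. Registering under an existing subject creates a new version, and Schema Registry rejects the request if the new schema violates the subject's compatibility mode.

    from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

    sr = SchemaRegistryClient({
        "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",  # placeholder endpoint
        "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",     # placeholder credentials
    })

    # Version 2 adds the "region" field with a default so older messages stay readable.
    evolved_schema = Schema(
        """
        {
          "type": "record",
          "name": "value_widgets",
          "fields": [
            {"name": "name", "type": "string"},
            {"name": "region", "type": "string", "default": ""}
          ]
        }
        """,
        schema_type="AVRO",
    )

    # Registering under the same subject creates a new version; an incompatible
    # change is rejected by Schema Registry with an error.
    sr.register_schema("widgets-value", evolved_schema)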

Compare schema versions

Compare versions of a schema to view its evolutionary differences.

  1. From the navigation menu, click Topics, then click a topic to select it.

  2. Click the Schema tab.

  3. Select the Key or Value option for the schema. (The schema Value is displayed by default.)

    ../_images/cloud-11a-schema-version-newest.png
  4. Click the ellipses (3 dots) to get the popup menu, and select Compare versions.

    The current version number of the schema is indicated on the version menu.

    ../_images/cloud-11b-schema-version-history-choose.png
  5. Select the Turn on version diff check box.

  6. Select the versions to compare from each version menu. The differences are highlighted for comparison.

    ../_images/cloud-12-schema-compare.png
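
If you prefer to compare versions outside the Cloud Console, a rough sketch using the confluent-kafka Python client and the standard-library difflib module is shown below; the endpoint, credentials, and the widgets-value subject are placeholder assumptions.

    import difflib
    import json

    from confluent_kafka.schema_registry import SchemaRegistryClient

    sr = SchemaRegistryClient({
        "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",  # placeholder endpoint
        "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",     # placeholder credentials
    })

    def pretty(subject, version):
        # Fetch one version of the subject and pretty-print its schema for diffing.
        registered = sr.get_version(subject, version)
        return json.dumps(json.loads(registered.schema.schema_str), indent=2).splitlines()

    # Show the differences between version 1 and version 2 of the value subject.
    for line in difflib.unified_diff(pretty("widgets-value", 1),
                                     pretty("widgets-value", 2),
                                     fromfile="v1", tofile="v2", lineterm=""):
        print(line)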

Change subject level (per topic) compatibility mode of a schema

The default compatibility mode is Backward. The mode can be changed for the schema of any topic if necessary.

Caution

If you change the compatibility mode of an existing schema already in production use, be aware of any possible breaking changes to your applications.

This section describes how to change the compatibility mode at the subject level. You can also set compatibility globally for all schemas in an environment. However, the subject-level compatibility settings described below override those global settings.

  1. Select an environment.

  2. Select a cluster.

  3. From the navigation menu, click Topics, then click a topic to select it.

  4. Click the Schema tab for the topic.

  5. Select the Key or Value option for the schema.

  6. Click the edit icon next to the Compatibility mode indicator.

    ../_images/cloud-13a-schema-compat-mode-menu.png

    The Compatibility settings are displayed.

    ../_images/cloud-13a-schema-compat-update.png
  7. Select a mode option:

    Descriptions indicate the compatibility behavior for each option. For more information, including the changes allowed for each option, see Schema Evolution and Compatibility.

  8. Click Save.
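
The subject-level setting can also be changed programmatically. A minimal sketch with the confluent-kafka Python client follows; the endpoint, credentials, and the widgets-value subject are placeholder assumptions.

    from confluent_kafka.schema_registry import SchemaRegistryClient

    sr = SchemaRegistryClient({
        "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",  # placeholder endpoint
        "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",     # placeholder credentials
    })

    # Subject-level compatibility overrides the environment-wide default.
    sr.set_compatibility(subject_name="widgets-value", level="FULL")
    print(sr.get_compatibility(subject_name="widgets-value"))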

Search for schemas and fields

Confluent Cloud offers global search across environments and clusters for various entity types now including schemas and related metadata. To learn more, see Search entities and tags in Stream Catalog.

Tag schemas and fields

Confluent Cloud provides the ability to tag schema versions and fields within schemas as a means of organizing and cataloging data based on both custom and commonly used tag names. To learn about tagging, see Tag entities, data, and schemas in Data Discovery.

Work with schema contexts

A schema context is a grouping of subject names and schema IDs. Contexts provide more flexibility with regard to subject naming, schema IDs, and how clients can reference schemas.

Specify schema contexts

Schema Registry provides the option to logically group schemas by specifying schema contexts. By default, schemas live in the default context. By providing qualified names for schemas, you group them into what are essentially sub-registries with context-specific paths. This gives you the ability to have multiple schemas with the same subject names and IDs existing as unique entities within their different contexts. There are several advantages to this, including the ability to provide specific contexts for different clients. Schema contexts are used extensively for Schema Linking, but can also be used independently of that feature as needed.

To learn more about schema contexts, see Schema contexts within Schema Linking. Also, the Schema Linking Quick Start includes examples of working with both subjects in the default context (unqualified subjects) and named contexts (qualified subjects).
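
As a rough sketch, clients can address a qualified subject by prefixing the subject name with its context. The example below uses the confluent-kafka Python client and assumes the :.<context>:<subject> qualification syntax; the endpoint, credentials, context name, and subject are placeholders.

    from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

    sr = SchemaRegistryClient({
        "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",  # placeholder endpoint
        "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",     # placeholder credentials
    })

    schema = Schema('{"type": "string"}', schema_type="AVRO")

    # Unqualified subject: registered in the default context.
    sr.register_schema("orders-value", schema)

    # Qualified subject: the same subject name can exist independently in ".staging".
    sr.register_schema(":.staging:orders-value", schema)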

Filter schema subjects by context

When using contexts, you can:

  • Filter subjects by context on the list page.

    ../_images/cloud-schema-contexts-list-by.png
  • Create a new subject under an existing or new context.

    ../_images/cloud-schema-create-under-context.png
  • Browse messages for a subject under a specific context.

    ../_images/cloud-schema-browse-messages-under-a-context.png

Download a schema from Confluent Cloud

  1. From the navigation menu, click Topics, then click a topic to select it.

  2. Click the Schema tab.

  3. Select the Key or Value option for the schema.

  4. Click the ellipses (3 dots) on the upper right to get the menu, then select Download.

    ../_images/cloud-15-schema-download-menu.png

    A schema file for the topic is downloaded to your Downloads directory.

    For example, if you download the version 1 schema for the employees topic from the Quick Start, you get a file called schema-employees-value-v1.avsc with the following contents.

    {
      "fields": [
        {
          "name": "Name",
          "type": "string"
        },
        {
          "name": "Age",
          "type": "int"
        }
      ],
      "name": "Employee",
      "namespace": "Example",
      "type": "record"
    }
    

Tip

The file extension indicates the schema format. For Avro schema the file extension is .avsc; for Protobuf schema, .proto; and for JSON Schema, .json.
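
Schemas can also be retrieved programmatically. The sketch below fetches the latest version of a subject with the confluent-kafka Python client and writes it to a file named like the Cloud Console download; the endpoint, credentials, and the employees-value subject are placeholder assumptions.

    from confluent_kafka.schema_registry import SchemaRegistryClient

    sr = SchemaRegistryClient({
        "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",  # placeholder endpoint
        "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",     # placeholder credentials
    })

    # Map the schema type to the file extension used by the Cloud Console download.
    extensions = {"AVRO": "avsc", "PROTOBUF": "proto", "JSON": "json"}

    latest = sr.get_latest_version("employees-value")
    filename = "schema-employees-value-v{}.{}".format(
        latest.version, extensions[latest.schema.schema_type])

    with open(filename, "w") as f:
        f.write(latest.schema.schema_str)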

Delete a schema from Confluent Cloud

  1. From the navigation menu, click Topics, then click a topic to select it.

  2. Click the Schema tab.

  3. Select the Key or Value option for the schema.

  4. Click the ellipses (3 dots) on the upper right to get the menu, then select Delete.

    ../_images/cloud-14-schema-delete-menu.png
  5. On the dialog, select whether to delete only a particular version of the schema or the entire subject (all versions).

    ../_images/cloud-14-schema-delete-dialog.png
  6. Select Delete to carry out the action.

To learn more about hard and soft deletes of schemas, schema limits, and how to free up space for more schemas, see Delete Schemas and Manage Storage Space on Confluent Cloud.
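
Deletes can also be issued programmatically. The following sketch uses the confluent-kafka Python client to soft-delete a version and then soft- and hard-delete a whole subject; the endpoint, credentials, and the widgets-value subject are placeholder assumptions.

    from confluent_kafka.schema_registry import SchemaRegistryClient

    sr = SchemaRegistryClient({
        "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",  # placeholder endpoint
        "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",     # placeholder credentials
    })

    # Soft-delete a single version of the subject.
    sr.delete_version("widgets-value", 1)

    # Soft-delete the whole subject (all versions), then hard-delete it to
    # permanently remove it and free up schema capacity.
    sr.delete_subject("widgets-value")
    sr.delete_subject("widgets-value", permanent=True)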

Manage schemas for a Confluent Cloud environment

Schema Registry itself sits at the environment level and serves all clusters in an environment; therefore, several tasks related to schemas are managed through the registry at this level.

To view and manage Schema Registry for a Confluent Cloud environment:

  1. Select an environment from the Home page. (An environment list is available from the top right menu.)

  2. Click the Schema Registry tab.

    Screenshot of Schema Registry settings

See Add a cloud environment and Manage Stream Governance Packages in Confluent Cloud to learn about Stream Governance package options.

See Configure and Manage Schemas for an Environment to learn how to perform environment-level schema management tasks.

Access control (RBAC) for Confluent Cloud Schema Registry

Role-Based Access Control (RBAC) enables administrators to set up and manage user access to Schema Registry subjects and topics. This allows multiple users to collaborate, each with different access levels to various resources.

The following table describes how RBAC roles map to Schema Registry resources. For details on how to manage RBAC for these resources, see List the role bindings for a principal and Predefined RBAC Roles on Confluent Cloud. For more schema-related RBAC information, see also Access control (RBAC) for Stream Lineage and Access control (RBAC) for Schema Linking.

Role Scope Read subject Write subject Delete subject Read subject compatibility Write subject compatibility Grant permissions
OrganizationAdmin Organization
EnvironmentAdmin Environment
CloudClusterAdmin Cluster            
Operator Organization, Environment, Cluster            
MetricsViewer Organization, Environment, Cluster            
ResourceOwner Schema Subject
DeveloperManage Schema Subject        
DeveloperRead Schema Subject        
DeveloperWrite Schema Subject      
DataDiscovery Environment        
DataSteward Environment  

Table Legend:

  • ✔ = Yes
  • Blank space = No

Tip

  • “Global compatibility” does not apply to roles. To grant permission to a user to manage global compatibility, grant the DeveloperManage role on a subject resource named __GLOBAL.
  • When RBAC was first made available for Confluent Cloud Schema Registry (December 2022), ResourceOwner privileges on Schema Registry were automatically granted to all user and service accounts with existing API keys for Schema Registry clusters or existing CloudClusterAdmin privileges on any cluster in the same environment as Schema Registry. This auto-grant of privileges to existing accounts was exclusive to the feature rollout. For all new user and service accounts, you must explicitly configure access, per the details in the table above.

Supported features and limits for Confluent Cloud Schema Registry

  • A single Schema Registry is available per Environment.
  • Access Control to Schema Registry is based on API key and secret.
  • Your VPC must be able to communicate with the Confluent Cloud Schema Registry public internet endpoint. For more information, see Use Confluent Cloud Schema Registry to connect to a Public Endpoint in a Private Networking Environment.
  • Available on Amazon Web Services (AWS), Azure (Microsoft Azure), and Google Cloud for cloud provider geographies located in the US, Europe, and APAC. For each cloud provider, geographies are mapped under the hood to specific regions, as described in Add a cloud environment.
  • High availability (HA) is achieved by having multiple nodes within a cluster always in running state, with each node running in a different availability zone (AZ).
  • A size limit of 1 MB is imposed on individual schemas in Confluent Cloud; schemas larger than 1 MB are not supported. To work around this limit, you can use schema references to distribute fields and data specifications across multiple schemas that would otherwise exceed it.
  • Confluent Cloud Schema Registry limits the number of schema versions supported in the registry for Basic, Standard, and Dedicated cluster types, as described in Kafka Cluster Types in Confluent Cloud. You can view per-package limits on schemas as described in Manage Stream Governance Packages in Confluent Cloud. You can free up space by identifying and deleting unused schemas.
  • The rate limit on API requests is 25 write requests per second and 75 read requests per second across all API keys that point to a particular Schema Registry logical cluster (LSRC). Requests are identified by an API key, which points to a tenant (LSRC), so requests from different API keys belonging to the same tenant count against the same limit. In other words, no matter how many API keys you create for an LSRC, they share the same combined limit of 25 write and 75 read requests per second. To learn more about multi-tenancy on Confluent Cloud, see Multi-Tenancy and Client Quotas on Confluent Cloud. To view the Confluent Cloud metrics for Schema Registry, see the Metrics API documentation.