
Link Schemas¶

Schema Linking is a Confluent feature for keeping schemas in sync between two Schema Registry clusters.

You can use Schema Linking in conjunction with Cluster Linking to keep both schemas and topic data in sync across two Schema Registry and Kafka clusters.

Schema Linking can also be used independently of Cluster Linking to replicate schemas between clusters for aggregation, backup, staging, and migration purposes.

Schema Linking is supported using schema exporters, which reside in Schema Registry and continuously export schemas from one context to another, either within the same Schema Registry cluster or to a different Schema Registry cluster.

A schema exporter can sync schemas in groups, referred to as schema contexts. Each schema context is an independent grouping of schema IDs and subject names. If schemas are exported without any context (contextType: NONE), they are exported as-is and go into the default context.
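
For example, a subject can be qualified with a context using the standard Schema Registry context syntax, where the context name is prefixed with a colon and a dot (the subject and context names below are illustrative):

:.staging:orders-value    (subject orders-value in the staging context)
orders-value              (subject in the default context)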

See Schema Linking for complete details of the Schema Linking feature.

The high-level workflow to run Schema Linking is:

  1. Deploy the source and the destination Schema Registry clusters.

  2. Enable the schema exporter.

  3. Define schemas in the source Schema Registry cluster.

    When you register schemas in the source cluster, you can specify a custom context by including the context in the schema name. If no context is given, the default context is used.

  4. Create a schema exporter in the source Schema Registry cluster.

    Exported schemas are placed in IMPORT mode in the destination Schema Registry and cannot be modified while in that mode.

    As needed:

    • Update configurations of the schema exporter.
    • Update the state of the schema exporter.
  5. Delete the schema exporter.

Confluent for Kubernetes (CFK) provides a declarative API, the SchemaExporter custom resource definition (CRD), to support the entire workflow of creating and managing schema exporters.

Enable schema exporter in Schema Registry¶

Update the source Schema Registry CR to enable the schema exporter, and apply the changes with the kubectl apply -f <Schema Registry CR> command:

spec:
  passwordEncoder:       --- [1]
  enableSchemaExporter:  --- [2]
  • [1] Optional. Specify the password encoder for the source Schema Registry. See Manage Password Encoder Secret for details.
  • [2] Set to true to enable schema exporter in the Schema Registry.
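
For reference, a minimal sketch of a source Schema Registry CR with the exporter enabled; the names, image tags, and Kafka bootstrap endpoint below are illustrative, not prescribed values:

apiVersion: platform.confluent.io/v1beta1
kind: SchemaRegistry
metadata:
  name: schemaregistry
  namespace: confluent
spec:
  replicas: 1
  image:
    application: confluentinc/cp-schema-registry:7.4.0   # illustrative tag
    init: confluentinc/confluent-init-container:2.6.0    # illustrative tag
  enableSchemaExporter: true
  dependencies:
    kafka:
      bootstrapEndpoint: kafka.confluent.svc.cluster.local:9071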

Create schema exporter¶

A schema exporter is created and managed in the source Schema Registry cluster.

Note

When RBAC is enabled in this Confluent Platform environment, the super user you configured for Kafka (kafka.spec.authorization.superUsers) does not have access to resources in the Schema Registry cluster. If you want the super user to be able to create schema exporters, grant the super user the required permissions on the Schema Registry cluster.

In the source Schema Registry cluster, create a SchemaExporter CR and apply the configuration with the kubectl apply -f <Schema Exporter CR> command:

apiVersion: platform.confluent.io/v1beta1
kind: SchemaExporter
metadata:
  name:                   --- [1]
  namespace:              --- [2]
spec:
  sourceCluster:          --- [3]
  destinationCluster:     --- [4]
  subjects:               --- [5]
  subjectRenameFormat:    --- [6]
  contextType:            --- [7]
  contextName:            --- [8]
  configs:                --- [9]
  • [1] Required. The name of the schema exporter. The name must be unique within the source Schema Registry cluster.

  • [2] The namespace for the schema exporter.

  • [3] The source Schema Registry cluster. You can specify either the cluster name or the endpoint. If not given, CFK auto-discovers the source Schema Registry in the namespace of this schema exporter. The discovery process errors out if more than one Schema Registry cluster is found in the namespace.

    See Specify the source and destination Schema Registry clusters for configuration details.

  • [4] The destination Schema Registry cluster where the schemas will be exported. If not defined, the source cluster is used as the destination, and the schema exporter exports schemas across contexts within the source cluster.

    See Specify the source and destination Schema Registry clusters for configuration details.

  • [5] The subjects to export to the destination. Default value is ["*"], which denotes all subjects in the default context.

  • [6] The rename format that defines how to rename the subject at the destination.

    For example, if the value is my-${subject}, a subject at the destination becomes my-XXX, where XXX is the original subject name.

  • [7] Specifies how the context is created for the exported subjects at the destination.

    The default value is AUTO, in which case the exporter uses an auto-generated context in the destination cluster. The auto-generated context name is reported in the status.

    If set to NONE, the exporter copies the source schemas as-is.

  • [8] The name of the schema context on the destination to export the subjects to. If this is defined, spec.contextType is ignored.

  • [9] Additional configuration properties that are not exposed as SchemaExporter CRD properties.

An example SchemaExporter CR:

apiVersion: platform.confluent.io/v1beta1
kind: SchemaExporter
metadata:
  name: schemaexporter
  namespace: confluent
spec:
  sourceCluster:
    schemaRegistryClusterRef:
      name: sr
      namespace: operator
  destinationCluster:
    schemaRegistryRest:
      endpoint: https://schemaregistry.operator-dest.svc.cluster.local:8081
      authentication:
        type: basic
        secretRef: sr-basic
  subjects:
  - subject1
  - subject2
  contextType: CUSTOM
  contextName: link-source
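
To create the exporter, apply the CR and confirm the resource exists (assuming the CR above is saved as schemaexporter.yaml):

kubectl apply -f schemaexporter.yaml
kubectl get schemaexporter schemaexporter --namespace confluent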

Specify the source and destination Schema Registry clusters¶

A schema exporter can specify the source and destination Schema Registry clusters using one of the following methods:

  • Specify the Schema Registry cluster name and namespace
  • Specify the Schema Registry endpoint URL

Specify Schema Registry using Schema Registry cluster name¶

To specify the source or destination Schema Registry for the schema exporter, set the following in the SchemaExporter CR under spec.sourceCluster or spec.destinationCluster:

schemaRegistryClusterRef:
  name:                   --- [1]
  namespace:              --- [2]
  • [1] Required. The name of the Schema Registry cluster.
  • [2] Optional. The namespace where the Schema Registry cluster is running if different from the namespace of the schema exporter.

Specify Schema Registry using Schema Registry endpoint¶

To connect to Schema Registry through its endpoint, specify the connection information in the SchemaExporter CR, in the spec.sourceCluster or spec.destinationCluster section:

Schema Registry endpoint

schemaRegistryRest:
  endpoint:               --- [1]
  authentication:
    type:                 --- [2]
  • [1] The endpoint where Schema Registry is running.
  • [2] Authentication method to use for the Schema Registry cluster. Supported types are basic, mtls, and bearer. You can use bearer when RBAC is enabled for Schema Registry.

Basic authentication to Schema Registry

schemaRegistryRest:
  authentication:
    type: basic                  --- [1]
    basic:
      secretRef:                 --- [2]
      directoryPathInContainer:  --- [3]
  • [1] Required for the basic authentication type.

  • [2] Required. The name of the secret that contains the credentials. See Basic authentication for the required format.

  • [3] Set to the directory path in the container where required authentication credentials are injected by Vault.

    See Basic authentication for the required format.

    See Provide secrets for Confluent Platform application CR for providing the credential and required annotations when using Vault.
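
As a sketch, assuming the basic.txt credentials format described in Basic authentication, the secret referenced by secretRef could be created like this (the file contents, secret name, and namespace are illustrative):

# Contents of ./basic.txt (example credentials):
#   username=sr-user
#   password=sr-secret
kubectl create secret generic sr-basic \
  --from-file=basic.txt=./basic.txt \
  --namespace confluent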

mTLS authentication to Schema Registry

schemaRegistryRest:
  authentication:
    type: mtls                 --- [1]
  tls:
    secretRef:                 --- [2]
    directoryPathInContainer:  --- [3]
  • [1] Required for the mTLS authentication type.

  • [2] The name of the secret that contains the TLS certificates.

    See Provide TLS keys and certificates in PEM format for the expected keys in the TLS secret. Only the PEM format is supported for SchemaExporter CRs.

  • [3] Set to the directory path in the container where the TLS certificates are injected by Vault.

    See Provide TLS keys and certificates in PEM format for the expected keys in the TLS secret. Only the PEM format is supported for SchemaExporter CRs.

    See Provide secrets for Confluent Platform application CR for providing the credential and required annotations when using Vault.
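
As a sketch, assuming the PEM key names described in Provide TLS keys and certificates in PEM format, the TLS secret could be created like this (the secret name, file names, and namespace are illustrative):

kubectl create secret generic sr-tls \
  --from-file=fullchain.pem=./server.pem \
  --from-file=privkey.pem=./server-key.pem \
  --from-file=cacerts.pem=./ca.pem \
  --namespace confluent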

Bearer authentication to Schema Registry (for RBAC)

When RBAC is enabled for Schema Registry, you can configure bearer authentication as shown below:

schemaRegistryRest:
  authentication:
    type: bearer                 --- [1]
    bearer:
      secretRef:                 --- [2]
      directoryPathInContainer:  --- [3]
  • [1] Required for the bearer authentication type.

  • Either [2] or [3] is required.

  • [2] The name of the secret that contains the bearer credentials. See Bearer authentication for the required format.

  • [3] Set to the directory path in the container where required authentication credentials are injected by Vault.

    See Bearer authentication for the required format.

    See Provide secrets for Confluent Platform application CR for providing the credential and required annotations when using Vault.
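
As a sketch, assuming the bearer.txt credentials format described in Bearer authentication, the secret could be created like this (the credentials, secret name, and namespace are illustrative):

# Contents of ./bearer.txt (example RBAC credentials):
#   username=sr-rbac-user
#   password=sr-rbac-secret
kubectl create secret generic sr-bearer \
  --from-file=bearer.txt=./bearer.txt \
  --namespace confluent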

TLS encryption for Schema Registry cluster

tls:
  enabled: true              --- [1]
  secretRef:                 --- [2]
  directoryPathInContainer:  --- [3]
  • [1] Required.

  • Either [2] or [3] is required.

  • [2] The name of the secret that contains the certificates.

    See Provide TLS keys and certificates in PEM format for the expected keys in the TLS secret. Only the PEM format is supported for SchemaExporter CRs.

  • [3] Set to the directory path in the container where the TLS certificates are injected by Vault.

    See Provide TLS keys and certificates in PEM format for the expected keys in the TLS secret. Only the PEM format is supported for SchemaExporter CRs.

    See Provide secrets for Confluent Platform application CR for providing the credential and required annotations when using Vault.

Edit schema exporter configuration¶

When you update the configuration of an existing exporter, CFK pauses the exporter, applies the update, and resumes the exporter.

The following configuration properties cannot be changed on an existing exporter. To change them, delete and re-create the exporter:

  • Source Schema Registry
  • Destination Schema Registry
  • Name of the schema exporter

Edit the SchemaExporter CR with the desired configuration and apply it with the kubectl apply -f <Schema Exporter CR> command.

The context type (contextType) defaults to AUTO only during creation. If you created a schema exporter with a custom context and want to edit it to use an auto-generated context, explicitly set contextType to AUTO.

If the context name (contextName) is edited, only new subjects and schemas are exported to the new context. Schemas synced before the update remain in the earlier context. To migrate all the old schemas to the new context, reset the exporter.

Similarly, if the rename format (subjectRenameFormat) is edited, only new schemas are migrated with the new name format. Reset the exporter to re-migrate the already synced schemas with the new name format.

Reset schema exporter¶

A schema exporter is in one of three states: STARTING, RUNNING, or PAUSED.

Reset a schema exporter to clear its saved offset.

To reset a schema exporter, add the reset exporter annotation to the schema exporter CR with the command:

kubectl annotate schemaexporter schemaexporter platform.confluent.io/reset-schema-exporter="true"
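
For example, to re-migrate all schemas after pointing an existing exporter at a new context (see Edit schema exporter configuration), you could patch the CR and then reset the exporter; the exporter name, namespace, and context name below are illustrative:

kubectl patch schemaexporter schemaexporter --namespace confluent \
  --type merge --patch '{"spec":{"contextName":"link-source-v2"}}'
kubectl annotate schemaexporter schemaexporter --namespace confluent \
  platform.confluent.io/reset-schema-exporter="true"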

Pause schema exporter¶

To pause a schema exporter, add the pause exporter annotation to the schema exporter CR with the command:

kubectl annotate schemaexporter schemaexporter platform.confluent.io/pause-schema-exporter="true"

Resume schema exporter¶

To resume a schema exporter, add the resume exporter annotation to the schema exporter CR with the command:

kubectl annotate schemaexporter schemaexporter platform.confluent.io/resume-schema-exporter="true"
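
To verify that a pause or resume took effect, you can inspect the exporter CR and check its status section, where the exporter state is reported:

kubectl get schemaexporter schemaexporter --namespace confluent -oyaml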

Delete schema exporter¶

Deleting a schema exporter does not delete the schemas already exported to the destination; they remain in their last synced state.

Once the schema link is broken, exported schemas can be moved out of IMPORT mode using migration, as explained in Migrate Schemas.

After the schemas are moved out of IMPORT mode, create Schema CRs for those schemas on the destination cluster to manage them on the destination Schema Registry.

To delete a schema exporter:

kubectl delete schemaexporter schemaexporter
