Confluent Cloud Schema Registry Tutorial

Overview

This tutorial provides a step-by-step workflow for using Confluent Cloud Schema Registry. You will learn how to enable client applications to read and write Avro data and check compatibility as schemas evolve.

This tutorial is intended to run on Confluent Cloud Schema Registry. If you have a local Confluent Platform installation, see the On-Premises Schema Registry Tutorial instead.

Setup

Prerequisites

Before proceeding with this tutorial, review a summary of Schema Registry concepts in Schema Registry Tutorials.

Verify that you have the following:

  • An initialized Confluent Cloud cluster
    • The first 20 users to sign up for Confluent Cloud and use promo code C50INTEG will receive an additional $50 free usage (details).
  • Local install of Confluent Cloud CLI v1.7.0 or later
  • Java 1.8 or 1.11 to run the Java client
  • Maven to compile the client Java code
  • jq tool to nicely format the results from querying the Confluent Cloud Schema Registry REST endpoint

Environment Setup

  1. Run this tutorial in a new Confluent Cloud environment so that it does not interfere with your other work. You can use the ccloud-stack Utility for Confluent Cloud, which provisions a new environment, a new service account, a new Kafka cluster with associated credentials, Confluent Cloud Schema Registry with associated credentials, and wildcard ACLs for the service account. Follow the Usage instructions for ccloud-stack to log on to Confluent Cloud and create a ccloud-stack.

    ../_images/ccloud-stack-resources.png
  2. Running the ccloud-stack utility also generates a configuration file which has all the Confluent Cloud and Confluent Cloud Schema Registry connection information. Verify that the auto-generated file examples/ccloud/ccloud-stack/stack-configs/java-service-account-<account>.config resembles:

    # ------------------------------
    # ENVIRONMENT ID: <ENVIRONMENT ID>
    # SERVICE ACCOUNT ID: <SERVICE ACCOUNT ID>
    # KAFKA CLUSTER ID: <KAFKA CLUSTER ID>
    # SCHEMA REGISTRY CLUSTER ID: <SCHEMA REGISTRY CLUSTER ID>
    # ------------------------------
    ssl.endpoint.identification.algorithm=https
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    bootstrap.servers=<BROKER ENDPOINT>
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username\="<API KEY>" password\="<API SECRET>";
    basic.auth.credentials.source=USER_INFO
    schema.registry.basic.auth.user.info=<SR API KEY>:<SR API SECRET>
    schema.registry.url=https://<SR ENDPOINT>
    
  3. Copy the configuration file generated by ccloud-stack to $HOME/.confluent/java.config.

  4. Export the following variables to your shell, substituting the values of <SR API KEY>, <SR API SECRET>, and <SR ENDPOINT> from the configuration file. This enables you to copy and paste the commands in the rest of the tutorial.

    export SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO=<SR API KEY>:<SR API SECRET>
    export SCHEMA_REGISTRY_URL=<SR ENDPOINT>
    
  5. Clone the Confluent examples repo from GitHub and work in the clients/avro/ subdirectory, which provides the sample code you will compile and run in this tutorial.

    git clone https://github.com/confluentinc/examples.git
    
    cd examples/clients/avro
    
    git checkout 5.5.15-post
    

Create the transactions topic

For the exercises in this tutorial, you will be producing to and consuming from a topic called transactions. Create this topic in the Confluent Cloud UI.

  1. Navigate to the Confluent Cloud UI at https://confluent.cloud, and click on your environment and Kafka cluster.

  2. Select Topics and click Create topic.

    ../_images/ccloud-create-topic-sr.png
  3. Name the topic transactions and click Create with defaults.

    ../_images/ccloud-create-topic-name-sr.png

    The new topic is displayed.

    ../_images/ccloud-create-topic-new-sr.png

Schema Definition

The first thing developers need to do is agree on a basic schema for data. Client applications form a contract:

  • producers will write data in a schema
  • consumers will be able to read that data

Consider the original Payment schema Payment.avsc. To view the schema, run this command:

cat src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment.avsc

Observe the schema definition:

{
 "namespace": "io.confluent.examples.clients.basicavro",
 "type": "record",
 "name": "Payment",
 "fields": [
     {"name": "id", "type": "string"},
     {"name": "amount", "type": "double"}
 ]
}

Here is a break-down of what this schema defines:

  • namespace: a fully qualified name that avoids schema naming conflicts
  • type: Avro data type, for example, record, enum, union, array, map, or fixed
  • name: unique schema name in this namespace
  • fields: one or more simple or complex data types for a record. The first field in this record is called id, and it is of type string. The second field in this record is called amount, and it is of type double.

Client Applications Writing Avro

Maven

This tutorial uses Maven to configure the project and dependencies. Java applications that have Kafka producers or consumers using Avro require pom.xml files to include, among other things:

  • Confluent Maven repository
  • Confluent Maven plugin repository
  • Dependencies org.apache.avro:avro and io.confluent:kafka-avro-serializer to serialize data as Avro
  • Plugin avro-maven-plugin to generate Java class files from the source schema

The pom.xml file may also include:

  • Plugin kafka-schema-registry-maven-plugin to check compatibility of evolving schemas

For a full pom.xml example, refer to this pom.xml.

Configuring Avro

Kafka applications using Avro data and Schema Registry need to specify at least two configuration parameters:

  • Avro serializer or deserializer
  • Properties to connect to Schema Registry

There are two basic types of Avro records that your application can use:

  • a specific code-generated class, or
  • a generic record

The examples in this tutorial demonstrate how to use the specific Payment class. Using a specific code-generated class requires you to define and compile a Java class for your schema, but it is easier to work with in your code.

However, in other scenarios where you need to work dynamically with data of any type and do not have Java classes for your record types, use GenericRecord.
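
For illustration only (this snippet is not part of the example project), here is a minimal sketch of how the same Payment value could be built as a GenericRecord, with the schema parsed at runtime instead of compiled into a class:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
...
// Parse the Payment schema at runtime; no code-generated class is required
final Schema schema = new Schema.Parser().parse(
    "{\"namespace\": \"io.confluent.examples.clients.basicavro\","
    + " \"type\": \"record\", \"name\": \"Payment\","
    + " \"fields\": ["
    + "   {\"name\": \"id\", \"type\": \"string\"},"
    + "   {\"name\": \"amount\", \"type\": \"double\"}]}");

// Populate the record field by field; KafkaAvroSerializer accepts GenericRecord values as well
final GenericRecord payment = new GenericData.Record(schema);
payment.put("id", "id0");
payment.put("amount", 1000.00d);
...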

Confluent Platform also provides a serializer and deserializer for writing and reading data in “reflection Avro” format. To learn more, see Reflection Based Avro Serializer and Deserializer.

Java Producers

Within the client application, Java producers need to configure the Avro serializer for the Kafka value (or Kafka key) and the URL to Schema Registry. Then the producer can write records whose Kafka value is of the Payment class.

Example Producer Code

When constructing the producer, configure the message value class to use the application’s code-generated Payment class. For example:

...
import io.confluent.kafka.serializers.KafkaAvroSerializer;
...
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
...
KafkaProducer<String, Payment> producer = new KafkaProducer<String, Payment>(props);
final Payment payment = new Payment(orderId, 1000.00d);
final ProducerRecord<String, Payment> record = new ProducerRecord<String, Payment>(TOPIC, payment.getId().toString(), payment);
producer.send(record);
...

Because the pom.xml includes avro-maven-plugin, the Payment class is automatically generated during compile.

In this example, the connection information to the Kafka brokers and Schema Registry is provided by the configuration file that is passed into the code, but if you want to specify the connection information directly in the client application, see this java template.
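
As a rough sketch of what loading that file can look like (the example project may use its own helper for this), the generated java.config is a standard Java properties file, so it can be read with standard Java I/O and then extended with the serializer settings:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;
...
// configFile is the path to the configuration file, for example passed in as args[0]
final Properties props = new Properties();
try (InputStream inputStream = new FileInputStream(configFile)) {
  // Load the Confluent Cloud and Schema Registry connection settings
  props.load(inputStream);
}
// Layer the Avro serializer settings on top of the loaded connection settings
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
...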

For a full Java producer example, refer to the producer example.

Run the Producer

Run the following commands in a shell from examples/clients/avro.

  1. To run this producer, first compile the project:

    mvn clean compile package
    
  2. From the Confluent Cloud UI, make sure the cluster is selected, and click Topics.

    Next, click the transactions topic and go to the Messages tab.

    You should see no messages because no messages have been produced to this topic yet.

  3. Run ProducerExample, which produces Avro-formatted messages to the transactions topic. Pass in the path to the file you created earlier, $HOME/.confluent/java.config.

    mvn exec:java -Dexec.mainClass=io.confluent.examples.clients.basicavro.ProducerExample \
      -Dexec.args="$HOME/.confluent/java.config"
    

    The command takes a moment to run. When it completes, you should see:

    ...
    Successfully produced 10 messages to a topic called transactions
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    ...
    
  4. Now you can see the messages in the Confluent Cloud UI by inspecting the transactions topic, which dynamically shows newly arriving data.

    From the Confluent Cloud UI, click into the cluster on the left, then go to Topics -> transactions -> Messages.

    Tip

    If you do not see any data, rerun the producer, verify that it completed successfully, and check the Confluent Cloud UI again. Messages are not persisted in the UI view, so view them soon after running the producer.

    ../_images/ccloud-inspect-transactions.png

Java Consumers

Within the client application, Java consumers need to configure the Avro deserializer for the Kafka value (or Kafka key) and the URL to Schema Registry. Then the consumer can read records whose Kafka value is of the Payment class.

Example Consumer Code

By default, each record is deserialized into an Avro GenericRecord, but in this tutorial the record should be deserialized using the application’s code-generated Payment class. Therefore, configure the deserializer to use Avro SpecificRecord, i.e., SPECIFIC_AVRO_READER_CONFIG should be set to true. For example:

...
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
...
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
...
KafkaConsumer<String, Payment> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList(TOPIC));
while (true) {
  ConsumerRecords<String, Payment> records = consumer.poll(100);
  for (ConsumerRecord<String, Payment> record : records) {
    String key = record.key();
    Payment value = record.value();
  }
}
...

Because the pom.xml includes avro-maven-plugin, the Payment class is automatically generated during compile.

In this example, the connection information to the Kafka brokers and Schema Registry is provided by the configuration file that is passed into the code, but if you want to specify the connection information directly in the client application, see this java template.

For a full Java consumer example, refer to the consumer example.

Run the Consumer

  1. To run this consumer, first compile the project.

    mvn clean compile package
    

    The BUILD SUCCESS message indicates the project built, and the command prompt becomes available again.

  2. Then run ConsumerExample (assuming you already ran the ProducerExample above). Pass in the path to the file you created earlier, $HOME/.confluent/java.config.

    mvn exec:java -Dexec.mainClass=io.confluent.examples.clients.basicavro.ConsumerExample \
      -Dexec.args="$HOME/.confluent/java.config"
    

    You should see:

    ...
    key = id0, value = {"id": "id0", "amount": 1000.0}
    key = id1, value = {"id": "id1", "amount": 1000.0}
    key = id2, value = {"id": "id2", "amount": 1000.0}
    key = id3, value = {"id": "id3", "amount": 1000.0}
    key = id4, value = {"id": "id4", "amount": 1000.0}
    key = id5, value = {"id": "id5", "amount": 1000.0}
    key = id6, value = {"id": "id6", "amount": 1000.0}
    key = id7, value = {"id": "id7", "amount": 1000.0}
    key = id8, value = {"id": "id8", "amount": 1000.0}
    key = id9, value = {"id": "id9", "amount": 1000.0}
    ...
    
  3. Press Ctrl+C to stop.

Other Kafka Clients

The objective of this tutorial is to learn about Avro and Schema Registry centralized schema management and compatibility checks. To keep the examples simple, this tutorial focuses on Java producers and consumers, but other Kafka clients that interoperate with Avro and Schema Registry work in similar ways.

Centralized Schema Management

Viewing Schemas in Schema Registry

At this point, you have producers serializing Avro data and consumers deserializing Avro data. The producers are registering schemas to Confluent Cloud Schema Registry and consumers are retrieving schemas from Confluent Cloud Schema Registry.

  1. From the Confluent Cloud UI, make sure the cluster is selected on the left, and click Topics.

  2. Click the transactions topic and go to the Schema tab to retrieve the latest schema from Confluent Cloud Schema Registry for this topic:

    ../_images/ccloud-schema-transactions.png

    The schema is identical to the schema file defined for Java client applications.

Using curl to Interact with Schema Registry

You can also use curl commands to connect directly to the REST endpoint in Confluent Cloud Schema Registry to view subjects and associated schemas.

  1. To view all the subjects registered in Confluent Cloud Schema Registry, use the following command.

    curl --silent -X GET -u $SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO https://$SCHEMA_REGISTRY_URL/subjects | jq .
    

    Here is the expected output of the above command:

    [
      "transactions-value"
    ]
    

    In this example, the Kafka topic transactions has messages whose value (that is, payload) is Avro, and by default the Confluent Cloud Schema Registry subject name is transactions-value.

  2. To view the latest schema for this subject in more detail:

    curl --silent -X GET -u $SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO https://$SCHEMA_REGISTRY_URL/subjects/transactions-value/versions/latest | jq .
    

    Here is the expected output of the above command:

    {
      "subject": "transactions-value",
      "version": 1,
      "id": 100001,
      "schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}"
    }
    

    Here is a break-down of what this version of the schema defines:

    • subject: the scope in which schemas for the messages in the topic transactions can evolve
    • version: the schema version for this subject, which starts at 1 for each subject
    • id: the globally unique schema version id, unique across all schemas in all subjects
    • schema: the structure that defines the schema format

    Notice that in the output to the curl command above, the schema is escaped JSON; the double quotes are preceded by backslashes.

  3. Based on the schema id, you can also retrieve the associated schema by querying Confluent Cloud Schema Registry REST endpoint as follows:

    curl --silent -X GET -u $SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO https://$SCHEMA_REGISTRY_URL/schemas/ids/100001 | jq .
    

    Here is the expected output:

    {
      "schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}"
    }
    

Schema IDs in Messages

Integration with Schema Registry means that Kafka messages do not need to be written with the entire Avro schema. Instead, Kafka messages are written with the schema id. The producers writing the messages and the consumers reading the messages must be using the same Schema Registry to get the same mapping between a schema and schema id.

In this example, a producer sends the new Payment schema to Confluent Cloud Schema Registry. Confluent Cloud Schema Registry registers this schema under the subject transactions-value and returns the schema id of 100001 to the producer. The producer caches this mapping between the schema and schema id for subsequent message writes, so it only contacts Confluent Cloud Schema Registry on the first schema write.

When a consumer reads this data, it sees the Avro schema id of 100001 and sends a schema request to Confluent Cloud Schema Registry. Confluent Cloud Schema Registry retrieves the schema associated to schema id 100001, and returns the schema to the consumer. The consumer caches this mapping between the schema and schema id for subsequent message reads, so it only contacts Confluent Cloud Schema Registry on the first schema id read.
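
For illustration, the Confluent Avro wire format places this schema id directly in the message bytes: a single magic byte (0) followed by the 4-byte schema id, then the Avro-encoded payload. Here is a minimal sketch (assuming a value consumed as raw bytes, for example with a ByteArrayDeserializer) for extracting the id:

import java.nio.ByteBuffer;
...
// Confluent wire format: [magic byte 0x0][4-byte schema id][Avro binary data]
static int schemaIdOf(final byte[] rawValue) {
  final ByteBuffer buffer = ByteBuffer.wrap(rawValue);
  final byte magicByte = buffer.get();
  if (magicByte != 0) {
    throw new IllegalArgumentException("Unknown magic byte: " + magicByte);
  }
  return buffer.getInt();   // for this tutorial, 100001
}
...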

Auto Schema Registration

By default, client applications automatically register new schemas. If they produce new messages to a new topic, then they will automatically try to register new schemas. This is very convenient in development environments, but in production environments we recommend that client applications do not automatically register new schemas. Best practice is to register schemas outside of the client application to control when schemas are registered with Schema Registry and how they evolve.

Within the application, you can disable automatic schema registration by setting the configuration parameter auto.register.schemas=false, as shown in the example below.

props.put(AbstractKafkaAvroSerDeConfig.AUTO_REGISTER_SCHEMAS, false);

To manually register the schema outside of the application, you can use the Confluent Cloud UI.

First, create a new topic called test in the same way that you created a new topic called transactions earlier in the tutorial. Then from the Schema tab, click Set a schema to define the new schema. Specify values for:

  • namespace: a fully qualified name that avoids schema naming conflicts
  • type: Avro data type, one of record, enum, union, array, map, fixed
  • name: unique schema name in this namespace
  • fields: one or more simple or complex data types for a record. The first field in this record is called id, and it is of type string. The second field in this record is called amount, and it is of type double.

If you were to define the same schema as used earlier, you would enter the following in the schema editor:

{
  "type": "record",
  "name": "Payment",
  "namespace": "io.confluent.examples.clients.basicavro",
  "fields": [
    {
      "name": "id",
      "type": "string"
    },
    {
      "name": "amount",
      "type": "double"
    }
  ]
}

If you prefer to connect directly to the REST endpoint in Schema Registry, then to define a schema for a new subject for the topic test, run the command below.

curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}"}' \
  -u $SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO \
  https://$SCHEMA_REGISTRY_URL/subjects/test-value/versions

In this sample output, the command registers the schema and returns its id of 100001:

{"id":100001}

Schema Evolution and Compatibility

Evolving Schemas

So far in this tutorial, you have seen the benefit of Schema Registry as centralized schema management that enables client applications to register and retrieve globally unique schema ids. The main value of Schema Registry, however, is in enabling schema evolution. Similar to how APIs evolve and need to remain compatible for all applications that rely on old and new versions of the API, schemas also evolve and likewise need to remain compatible for all applications that rely on old and new versions of a schema. This schema evolution is a natural consequence of how applications and data develop over time.

Schema Registry allows for schema evolution and provides compatibility checks to ensure that the contract between producers and consumers is not broken. This allows producers and consumers to update independently and evolve their schemas independently, with assurances that they can read new and legacy data. This is especially important in Kafka because producers and consumers are decoupled applications that are sometimes developed by different teams.

Transitive compatibility checking is important once you have more than two versions of a schema for a given subject. If compatibility is configured as transitive, then it checks compatibility of a new schema against all previously registered schemas; otherwise, it checks compatibility of a new schema only against the latest schema.

For example, if there are three schemas for a subject that change in order X-2, X-1, and X then:

  • transitive: ensures compatibility between X-2 <==> X-1 and X-1 <==> X and X-2 <==> X
  • non-transitive: ensures compatibility between X-2 <==> X-1 and X-1 <==> X, but not necessarily X-2 <==> X

Refer to an example of schema changes which are incrementally compatible, but not transitively so.

The Confluent Schema Registry default compatibility type BACKWARD is non-transitive, which means that it’s not BACKWARD_TRANSITIVE. As a result, new schemas are checked for compatibility only against the latest schema.

These are the compatibility types:

  • BACKWARD: (default) consumers using the new schema can read data written by producers using the latest registered schema
  • BACKWARD_TRANSITIVE: consumers using the new schema can read data written by producers using all previously registered schemas
  • FORWARD: consumers using the latest registered schema can read data written by producers using the new schema
  • FORWARD_TRANSITIVE: consumers using all previously registered schemas can read data written by producers using the new schema
  • FULL: the new schema is forward and backward compatible with the latest registered schema
  • FULL_TRANSITIVE: the new schema is forward and backward compatible with all previously registered schemas
  • NONE: schema compatibility checks are disabled

Refer to Schema Evolution and Compatibility for a more in-depth explanation on the compatibility types.

Failing Compatibility Checks

Schema Registry checks compatibility as schemas evolve to uphold the producer-consumer contract. Without Schema Registry checking compatibility, your applications could potentially break on schema changes.

In the Payment schema example, let’s say the business now tracks additional information for each payment, for example, a field region that represents the place of sale. Consider the Payment2a schema which includes this extra field region:

cat src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2a.avsc
{
 "namespace": "io.confluent.examples.clients.basicavro",
 "type": "record",
 "name": "Payment",
 "fields": [
     {"name": "id", "type": "string"},
     {"name": "amount", "type": "double"},
     {"name": "region", "type": "string"}
 ]
}

Before proceeding, because the default Schema Registry compatibility is backward, think about whether this new schema is backward compatible. Specifically, ask yourself whether a consumer can use this new schema to read data written by producers using the older schema without the region field. The answer is no. Consumers will fail reading data with the older schema because the older data does not have the region field, therefore this schema is not backward compatible.

Confluent provides a Schema Registry Maven Plugin, which you can use to check compatibility in development or integrate into your CI/CD pipeline.

Our sample pom.xml includes this plugin to enable compatibility checks.

...
<properties>
  <schemaRegistryUrl>http://localhost:8081</schemaRegistryUrl>
  <schemaRegistryBasicAuthUserInfo></schemaRegistryBasicAuthUserInfo>
</properties>
...
<build>
  <plugins>
  ...
    <plugin>
        <groupId>io.confluent</groupId>
        <artifactId>kafka-schema-registry-maven-plugin</artifactId>
        <version>${confluent.version}</version>
        <configuration>
            <schemaRegistryUrls>
                <param>${schemaRegistryUrl}</param>
            </schemaRegistryUrls>
            <userInfoConfig>${schemaRegistryBasicAuthUserInfo}</userInfoConfig>
            <subjects>
                <transactions-value>src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2a.avsc</transactions-value>
            </subjects>
        </configuration>
        <goals>
            <goal>test-compatibility</goal>
        </goals>
    </plugin>
...
  </plugins>
</build>

It is currently configured to check compatibility of the new Payment2a schema for the transactions-value subject in Schema Registry.

  1. Run the compatibility check.

    mvn io.confluent:kafka-schema-registry-maven-plugin:test-compatibility \
        "-DschemaRegistryUrl=https://$SCHEMA_REGISTRY_URL" \
        "-DschemaRegistryBasicAuthUserInfo=$SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO" \
        "-DschemaLocal=src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2a.avsc"
    
  2. Verify that the compatibility check fails. Here is the error message you will get:

    ...
    [ERROR] Schema examples/clients/avro/src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2a.avsc is not compatible with subject(transactions-value)
    ...
    
  3. Try to register the new schema Payment2a manually to Schema Registry, which is a useful way for non-Java clients to check compatibility from the command line:

    curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
      --data '{"schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},{\"name\":\"region\",\"type\":\"string\"}]}"}' \
      -u $SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO \
      https://$SCHEMA_REGISTRY_URL/subjects/transactions-value/versions
    
  4. Verify that Confluent Cloud Schema Registry rejects the schema with an error message that it is incompatible:

    {"error_code":409,"message":"Schema being registered is incompatible with an earlier schema"}
    

Passing Compatibility Checks

To maintain backward compatibility, a new schema must provide a default value for the new field, to be used when reading data that was written without it.

  1. Consider an updated Payment2b schema that has a default value for region. To view the schema, run this command:

    cat src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2b.avsc
    

    You should see the following output.

    {
     "namespace": "io.confluent.examples.clients.basicavro",
     "type": "record",
     "name": "Payment",
     "fields": [
         {"name": "id", "type": "string"},
         {"name": "amount", "type": "double"},
         {"name": "region", "type": "string", "default": ""}
     ]
    }
    
  2. From the UI, click the transactions topic and go to the Schema tab to retrieve the transactions topic’s latest schema from Schema Registry.

  3. Click Edit Schema.

    ../_images/tutorial-c3-edit-schema.png
  4. Add the new field region again, this time including the default value as shown below, then click Save.

    {
     "name": "region",
     "type": "string",
     "default": ""
    }
    
  5. Verify that the new schema is accepted.

    ../_images/tutorial-c3-edit-schema-pass.png

    Note

    If you get error messages about invalid Avro, check the syntax; for example, quotes, colons, enclosing brackets, and commas separating fields.

  6. Think about the registered schema versions. The Schema Registry subject for the topic transactions, called transactions-value, now has two schemas:

    • version 1 is Payment.avsc
    • version 2 is Payment2b.avsc, which adds the region field with a default empty value.
  7. In the UI, still on the Schema tab for the topic transactions, click Version history and select Turn on version diff to compare the two versions:

    ../_images/tutorial-c3-schema-compare.png
  8. At the command line, go back to the Schema Registry Maven Plugin and update the pom.xml to refer to Payment2b.avsc instead of Payment2a.avsc.

  9. Re-run the compatibility check and verify that it passes:

    mvn io.confluent:kafka-schema-registry-maven-plugin:test-compatibility
    
  10. Verify that you get this message showing that the schema passed the compatibility check:

    ...
    [INFO] Schema examples/clients/avro/src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2b.avsc is compatible with subject(transactions-value)
    ...
    
  11. If you prefer to connect directly to the REST endpoint in Schema Registry, then to register the new schema Payment2b, run the command below. It should succeed.

    curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
      --data '{"schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},{\"name\":\"region\",\"type\":\"string\",\"default\":\"\"}]}"}' \
      -u $SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO \
      https://$SCHEMA_REGISTRY_URL/subjects/transactions-value/versions
    

    The above curl command, if successful, returns the globally unique id of the new schema:

    {"id":100002}
    
  12. View the latest subject for transactions-value in Confluent Cloud Schema Registry:

    curl --silent -X GET -u $SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO https://$SCHEMA_REGISTRY_URL/subjects/transactions-value/versions/latest | jq .
    

    This command returns the latest version of the Confluent Cloud Schema Registry subject transactions-value (for the topic transactions), including its version number, id, and the schema in JSON:

    {
      "subject": "transactions-value",
      "version": 100002,
      "id": 100002,
      "schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},{\"name\":\"region\",\"type\":\"string\",\"default\":\"\"}]}"
    }
    

    Notice the changes:

    • version: changed from 1 to 2
    • id: changed from 100001 to 100002
    • schema: updated with the new field region that has a default value

Changing Compatibility Type

The default compatibility type is backward, but you may change it globally or per subject.

To change the compatibility type per subject from the UI, click the transactions topic and go to the Schema tab to retrieve the transactions topic’s latest schema from Schema Registry. Click Edit Schema and then click Compatibility Mode.

../_images/c3-edit-compatibility.png

Notice that the compatibility for this topic is set to the default backward, but you may change this as needed.

If you prefer to connect directly to the REST endpoint in Confluent Cloud Schema Registry, then to change the compatibility type for the topic transactions, i.e., for the subject transactions-value, run the example command below.

curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
       --data '{"compatibility": "BACKWARD_TRANSITIVE"}' \
       -u $SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO https://$SCHEMA_REGISTRY_URL/config/transactions-value
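
The Schema Registry client shown in the earlier compatibility-check sketch can change this setting programmatically as well; again, this is a hedged sketch rather than part of the example project:

...
// Change the compatibility level for the transactions-value subject
final String newLevel = client.updateCompatibility("transactions-value", "BACKWARD_TRANSITIVE");
System.out.println("Compatibility for transactions-value is now " + newLevel);
...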

Destroy the ccloud-stack

When you are finished with the tutorial, destroy the resources you created in Confluent Cloud.

  1. To destroy the ccloud-stack created at the beginning of the tutorial, call the bash script ccloud_stack_destroy.sh and pass in the properties file that was auto-generated when you created the ccloud-stack.

    # Change directory if needed
    cd <path to examples>/ccloud/ccloud-stack/
    
    ./ccloud_stack_destroy.sh stack-configs/java-service-account-<account>.config
    
  2. Always verify that resources in Confluent Cloud have been destroyed.

Next Steps