.. _schema_registry_tutorial:

|sr| Tutorial
=============

Overview
~~~~~~~~

This tutorial provides a step-by-step workflow for using |sr-long|. You will learn how to enable client applications to read and write Avro data, check compatibility as schemas evolve, and use |c3|, which has integrated capabilities with |sr|.

Benefits
^^^^^^^^

|ak-tm| producers write data to |ak| topics and |ak| consumers read data from |ak| topics. There is an implicit "contract" that producers write data with a schema that can be read by consumers, even as producers and consumers evolve their schemas. |sr| helps ensure that this contract is met with compatibility checks.

It is useful to think about schemas as APIs. Applications depend on APIs and expect that any changes made to APIs remain compatible so that applications can still run. Similarly, streaming applications depend on schemas and expect that any changes made to schemas remain compatible so that they can still run. Schema evolution requires compatibility checks to ensure that the producer-consumer contract is not broken. This is where |sr| helps: it provides centralized schema management and compatibility checks as schemas evolve.

Target Audience
^^^^^^^^^^^^^^^

The target audience is a developer writing |ak| streaming applications who wants to build a robust application leveraging Avro data and |sr|. The principles in this tutorial apply to any |ak| client that interacts with |sr|.

This tutorial is not meant to cover the operational aspects of running the |sr| service. For production deployments of |sr|, refer to :ref:`schema-registry-prod`.

.. include:: ../includes/cp-demo-tip.rst

.. _schema_registry_terminology:

Terminology Review
^^^^^^^^^^^^^^^^^^

First, let us level set on terminology, and answer the question: What is a **topic** versus a **schema** versus a **subject**?

.. include:: includes/terms-schemas-topics.rst

As a practical example, let's say a retail business is streaming transactions in a |ak| topic called ``transactions``. A producer is writing data with a schema ``Payment`` to that |ak| topic ``transactions``. If the producer is serializing the message value as Avro, then |sr| has a subject called ``transactions-value``. If the producer is also serializing the message key as Avro, |sr| would have a subject called ``transactions-key``, but for simplicity, this tutorial considers only the message value. That |sr| subject ``transactions-value`` has at least one schema called ``Payment``. The subject ``transactions-value`` defines the scope in which schemas for that subject can evolve, and |sr| does compatibility checking within this scope. In this scenario, if developers evolve the schema ``Payment`` and produce new messages to the topic ``transactions``, |sr| checks that those newly evolved schemas are compatible with older schemas in the subject ``transactions-value`` and adds those new schemas to the subject.

Setup
~~~~~

.. _sr-tutorial-prereqs:

Prerequisites
^^^^^^^^^^^^^

Before proceeding with this tutorial, verify that you have installed the following on your local machine:

* `Confluent Platform 5.2 or later `__
* :ref:`cli-install`
* Java 1.8 or 1.11 to run |cp|
* Maven to compile the client Java code
* ``jq`` tool to nicely format the results from querying the |sr| REST endpoint

.. note::

   This tutorial is intended to run on a local install of |cp|. If you have a |ccloud| cluster, you may also use the tutorial with that cluster, in which case enable |sr-ccloud| for your environment and set the `appropriate properties `__ in your client applications.
Environment Setup
^^^^^^^^^^^^^^^^^

#. Clone the Confluent `examples `_ repo from GitHub and work in the ``clients/avro/`` subdirectory, which provides the sample code you will compile and run in this tutorial.

   .. code:: bash

      $ git clone https://github.com/confluentinc/examples.git

   .. code:: bash

      $ cd examples/clients/avro

   .. codewithvars:: bash

      $ git checkout |release|-post

#. Use the :ref:`quickstart` to bring up a single-node |cp| development environment. With a single-line :ref:`confluent_local` command, you can have a basic |ak| cluster with |sr|, |c3-short|, and other services running on your local machine.

   .. code:: bash

      confluent local start

      Starting zookeeper
      zookeeper is [UP]
      Starting kafka
      kafka is [UP]
      Starting schema-registry
      schema-registry is [UP]
      Starting kafka-rest
      kafka-rest is [UP]
      Starting connect
      connect is [UP]
      Starting ksql-server
      ksql-server is [UP]
      Starting control-center
      control-center is [UP]

Create the transactions topic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For the exercises in this tutorial, you will be producing to and consuming from a topic called ``transactions``. Create this topic in |c3-short|.

#. Navigate to the |c3-short| web interface at `http://localhost:9021/ `_.

   .. important:: It may take a minute or two for |c3-short| to come online.

   .. image:: ../images/c3-landing-page.png
      :width: 600px

#. Click into the cluster, select **Topics** and click **Add a topic**.

   .. image:: ../images/c3-create-topic-sr.png
      :width: 600px

#. Name the topic ``transactions`` and click **Create with defaults**.

   .. image:: ../images/c3-create-topic-name-sr.png
      :width: 600px

   The new topic is displayed.

   .. image:: ../images/c3-create-topic-new-sr.png
      :width: 600px

.. _schema_registry_tutorial_definition:

Schema Definition
~~~~~~~~~~~~~~~~~

The first thing developers need to do is agree on a basic schema for data. Client applications form a contract:

* producers will write data in a schema
* consumers will be able to read that data

Consider the :devx-examples:`original Payment schema|clients/avro/src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment.avsc`. Run this command:

.. code:: bash

   cat src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment.avsc

to view the schema:

.. code:: java

   {"namespace": "io.confluent.examples.clients.basicavro",
    "type": "record",
    "name": "Payment",
    "fields": [
        {"name": "id", "type": "string"},
        {"name": "amount", "type": "double"}
    ]
   }

Here is a break-down of what this schema defines:

* ``namespace``: a fully qualified name that avoids schema naming conflicts
* ``type``: `Avro data type `_, for example, ``record``, ``enum``, ``union``, ``array``, ``map``, or ``fixed``
* ``name``: unique schema name in this namespace
* ``fields``: one or more simple or complex data types for a ``record``. The first field in this record is called ``id``, and it is of type ``string``. The second field in this record is called ``amount``, and it is of type ``double``.
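If you want to sanity-check a schema file before wiring it into an application, you can parse it with the Avro library directly. Here is a minimal sketch, assuming the ``org.apache.avro:avro`` dependency is on the classpath and the code is run from ``examples/clients/avro``; the ``ParseSchema`` class name is only illustrative:

.. code:: java

   import java.io.File;

   import org.apache.avro.Schema;

   public class ParseSchema {
       public static void main(String[] args) throws Exception {
           // Parse the schema file; an invalid schema raises SchemaParseException.
           Schema schema = new Schema.Parser().parse(
               new File("src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment.avsc"));

           // The fully qualified name combines the namespace and the name.
           System.out.println(schema.getFullName()); // io.confluent.examples.clients.basicavro.Payment
           System.out.println(schema.getFields());   // the id and amount fields
       }
   }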
.. _sr-tutorial-clients-avro-maven:

Client Applications Writing Avro
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Maven
^^^^^

This tutorial uses Maven to configure the project and dependencies. Java applications that have |ak| producers or consumers using Avro require ``pom.xml`` files to include, among other things:

* Confluent Maven repository
* Confluent Maven plugin repository
* Dependencies ``org.apache.avro.avro`` and ``io.confluent.kafka-avro-serializer`` to serialize data as Avro
* Plugin ``avro-maven-plugin`` to generate Java class files from the source schema

The ``pom.xml`` file may also include:

* Plugin ``kafka-schema-registry-maven-plugin`` to check compatibility of evolving schemas

For a full ``pom.xml`` example, refer to this :devx-examples:`pom.xml|clients/avro/pom.xml`.

Configuring Avro
^^^^^^^^^^^^^^^^

|ak| applications using Avro data and |sr| need to specify at least two configuration parameters:

* Avro serializer or deserializer
* Properties to connect to |sr|

There are two basic types of Avro records that your application can use:

* a specific code-generated class, or
* a generic record

The examples in this tutorial demonstrate how to use the specific ``Payment`` class. Using a specific code-generated class requires you to define and compile a Java class for your schema, but it is easier to work with in your code. However, in other scenarios where you need to work dynamically with data of any type and do not have Java classes for your record types, use `GenericRecord `_ (a sketch of this alternative follows the producer example below).

.. tip:: Starting with version 5.4.0, |cp| also provides a serializer and deserializer for writing and reading data in "reflection Avro" format. To learn more, see :ref:`messages-avro-reflection`.

.. _sr-tutorial-java-producers:

Java Producers
^^^^^^^^^^^^^^

Within the application, Java producers need to configure the Avro serializer for the |ak-tm| value (or |ak| key) and the URL to |sr|. Then the producer can write records where the |ak| value is of the ``Payment`` class.

---------------------
Example Producer Code
---------------------

When constructing the producer, configure the message value class to use the application's code-generated ``Payment`` class. For example:

.. sourcecode:: java

   import io.confluent.kafka.serializers.KafkaAvroSerializer;
   import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
   ...
   props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
   props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl);
   ...
   KafkaProducer<String, Payment> producer = new KafkaProducer<>(props);
   final Payment payment = new Payment(orderId, 1000.00d);
   final ProducerRecord<String, Payment> record = new ProducerRecord<>(TOPIC, payment.getId().toString(), payment);
   producer.send(record);
   ...

For a full Java producer example, refer to :devx-examples:`the producer example|clients/avro/src/main/java/io/confluent/examples/clients/basicavro/ProducerExample.java`. Because the ``pom.xml`` includes ``avro-maven-plugin``, the ``Payment`` class is automatically generated during compile.
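For comparison, here is a minimal sketch of the ``GenericRecord`` alternative mentioned earlier, assuming the same ``props``, ``TOPIC``, and |sr| configuration as above; it is not part of the examples repo. ``KafkaAvroSerializer`` accepts generic records as well, so no code-generated class is needed, only the schema itself:

.. sourcecode:: java

   import java.io.File;

   import org.apache.avro.Schema;
   import org.apache.avro.generic.GenericData;
   import org.apache.avro.generic.GenericRecord;
   import org.apache.kafka.clients.producer.KafkaProducer;
   import org.apache.kafka.clients.producer.ProducerRecord;
   ...
   // Load the schema at runtime instead of compiling a Payment class.
   Schema schema = new Schema.Parser().parse(
       new File("src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment.avsc"));

   // Build the record field by field; field names must match the schema.
   GenericRecord payment = new GenericData.Record(schema);
   payment.put("id", "id0");
   payment.put("amount", 1000.00d);

   KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props);
   producer.send(new ProducerRecord<>(TOPIC, "id0", payment));
   ...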
----------------
Run the Producer
----------------

Run the following build commands in a shell in ``examples/clients/avro``.

#. To run this producer, first compile the project:

   .. code:: bash

      mvn clean compile package

#. From the |c3-short| navigation menu at `http://localhost:9021/ `_, make sure the cluster is selected, and click **Topics**. Next, click the ``transactions`` topic and go to the **Messages** tab. You should see no messages because no messages have been produced to this topic yet.

#. Run ``ProducerExample``, which produces Avro-formatted messages to the ``transactions`` topic.

   .. code:: bash

      mvn exec:java -Dexec.mainClass=io.confluent.examples.clients.basicavro.ProducerExample

   The command takes a moment to run. When it completes, you should see:

   .. code:: bash

      ...
      Successfully produced 10 messages to a topic called transactions
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD SUCCESS
      [INFO] ------------------------------------------------------------------------
      ...

#. Now you should be able to see messages in |c3-short| by inspecting the ``transactions`` topic as it dynamically deserializes the newly arriving data that was serialized as Avro. At `http://localhost:9021/ `_, click into the cluster on the left, then go to **Topics** -> ``transactions`` -> **Messages**.

   .. tip:: If you do not see any data, rerun the producer, verify it completed successfully, and look at |c3-short| again. The messages do not persist in the console, so you need to view them soon after you run the producer.

   .. figure:: ../images/c3-inspect-transactions.png
      :width: 600px

.. _sr-tutorial-java-consumers:

Java Consumers
^^^^^^^^^^^^^^

Within the client application, Java consumers need to configure the Avro deserializer for the |ak| value (or |ak| key) and the URL to |sr|. Then the consumer can read records where the |ak| value is of the ``Payment`` class.

---------------------
Example Consumer Code
---------------------

By default, each record is deserialized into an Avro ``GenericRecord``, but in this tutorial the record should be deserialized using the application's code-generated ``Payment`` class. Therefore, configure the deserializer to use Avro ``SpecificRecord``, i.e., ``SPECIFIC_AVRO_READER_CONFIG`` should be set to ``true``. For example:

.. sourcecode:: java

   import io.confluent.kafka.serializers.KafkaAvroDeserializer;
   import io.confluent.kafka.serializers.KafkaAvroDeserializerConfig;
   import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
   ...
   props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
   props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
   props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl);
   ...
   KafkaConsumer<String, Payment> consumer = new KafkaConsumer<>(props);
   consumer.subscribe(Collections.singletonList(TOPIC));
   while (true) {
     ConsumerRecords<String, Payment> records = consumer.poll(100);
     for (ConsumerRecord<String, Payment> record : records) {
       String key = record.key();
       Payment value = record.value();
     }
   }
   ...

For a full Java consumer example, refer to :devx-examples:`the consumer example|clients/avro/src/main/java/io/confluent/examples/clients/basicavro/ConsumerExample.java`. Because the ``pom.xml`` includes ``avro-maven-plugin``, the ``Payment`` class is automatically generated during compile.

----------------
Run the Consumer
----------------

#. To run this consumer, first compile the project.

   .. code:: bash

      mvn clean compile package

#. Then run ``ConsumerExample`` (assuming you already ran the ``ProducerExample`` above).

   .. code:: bash

      mvn exec:java -Dexec.mainClass=io.confluent.examples.clients.basicavro.ConsumerExample

   You should see:

   .. code:: bash

      ...
      offset = 0, key = id0, value = {"id": "id0", "amount": 1000.0}
      offset = 1, key = id1, value = {"id": "id1", "amount": 1000.0}
      offset = 2, key = id2, value = {"id": "id2", "amount": 1000.0}
      offset = 3, key = id3, value = {"id": "id3", "amount": 1000.0}
      offset = 4, key = id4, value = {"id": "id4", "amount": 1000.0}
      offset = 5, key = id5, value = {"id": "id5", "amount": 1000.0}
      offset = 6, key = id6, value = {"id": "id6", "amount": 1000.0}
      offset = 7, key = id7, value = {"id": "id7", "amount": 1000.0}
      offset = 8, key = id8, value = {"id": "id8", "amount": 1000.0}
      offset = 9, key = id9, value = {"id": "id9", "amount": 1000.0}
      ...

#. Press ``Ctrl+C`` to stop.
Other |ak| Clients
^^^^^^^^^^^^^^^^^^^

The objective of this tutorial is to learn about Avro and |sr| centralized schema management and compatibility checks. To keep examples simple, this tutorial focuses on Java producers and consumers, but other |ak| clients work in similar ways. For examples of other |ak| clients interoperating with Avro and |sr|:

* :ref:`KSQL `
* :ref:`Kafka Streams `
* :ref:`Kafka Connect `
* :ref:`Confluent REST Proxy `
* :ref:`Non-Java clients based on librdkafka `, including Confluent Python, Confluent Go, and Confluent DotNet

Centralized Schema Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Viewing Schemas in Schema Registry
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

At this point, you have producers serializing Avro data and consumers deserializing Avro data. The producers are registering schemas to |sr| and consumers are retrieving schemas from |sr|.

#. From the |c3-short| navigation menu at `http://localhost:9021/ `__, make sure the cluster is selected on the left, and click **Topics**.

#. Click the ``transactions`` topic and go to the **Schema** tab to retrieve the latest schema from |sr| for this topic:

   .. figure:: ../images/c3-schema-transactions.png
      :width: 600px

   The schema is identical to the :ref:`schema file defined for Java client applications`.

.. _tutorial-use-curl-with-schema-registry:

Using curl to Interact with Schema Registry
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also use `curl `__ commands to connect directly to the REST endpoint in |sr| to view subjects and associated schemas.

#. To view all the subjects registered in |sr| (assuming |sr| is running on the local machine listening on port 8081):

   .. code:: bash

      curl --silent -X GET http://localhost:8081/subjects/ | jq .

   Here is the expected output of the above command:

   .. code:: bash

      [
        "transactions-value"
      ]

   In this example, the |ak| topic ``transactions`` has messages whose value (that is, the `payload`) is Avro, and by default the |sr| subject name is ``transactions-value``.

#. To view the latest schema for this subject in more detail:

   .. code:: bash

      curl --silent -X GET http://localhost:8081/subjects/transactions-value/versions/latest | jq .

   Here is the expected output of the above command:

   .. code:: bash

      {
        "subject": "transactions-value",
        "version": 1,
        "id": 1,
        "schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}"
      }

   Here is a break-down of what this version of the schema defines:

   * ``subject``: the scope in which schemas for the messages in the topic ``transactions`` can evolve
   * ``version``: the schema version for this subject, which starts at 1 for each subject
   * ``id``: the globally unique schema id, unique across all schemas in all subjects
   * ``schema``: the structure that defines the schema format

   Notice that in the output of the ``curl`` command above, the schema is escaped JSON; the double quotes are preceded by backslashes.

#. Based on the schema id, you can also retrieve the associated schema by querying the |sr| REST endpoint as follows:

   .. code:: bash

      curl --silent -X GET http://localhost:8081/schemas/ids/1 | jq .

   Here is the expected output:

   .. code:: bash

      {
        "schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}"
      }
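The same lookups can be done from Java with the |sr| client library. Here is a minimal sketch, assuming the ``io.confluent:kafka-schema-registry-client`` dependency and a locally running |sr|; the exact method signatures may differ slightly between client versions:

.. code:: java

   import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
   import io.confluent.kafka.schemaregistry.client.SchemaMetadata;

   public class ListSubjects {
       public static void main(String[] args) throws Exception {
           // The second argument caps how many schemas the client caches locally.
           CachedSchemaRegistryClient client =
               new CachedSchemaRegistryClient("http://localhost:8081", 100);

           // Equivalent to GET /subjects/
           System.out.println(client.getAllSubjects()); // [transactions-value]

           // Equivalent to GET /subjects/transactions-value/versions/latest
           SchemaMetadata latest = client.getLatestSchemaMetadata("transactions-value");
           System.out.println(latest.getVersion() + " " + latest.getId());
       }
   }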
Schema IDs in Messages
^^^^^^^^^^^^^^^^^^^^^^

Integration with |sr| means that |ak| messages do not need to be written with the entire Avro schema. Instead, |ak| messages are written with the schema id. The producers writing the messages and the consumers reading the messages must use the same |sr| to get the same mapping between a schema and schema id.

In this example, a producer sends the new schema for ``Payment`` to |sr|. |sr| registers this schema ``Payment`` to the subject ``transactions-value``, and returns the schema id of ``1`` to the producer. The producer caches this mapping between the schema and schema id for subsequent message writes, so it only contacts |sr| on the first schema write. When a consumer reads this data, it sees the Avro schema id of ``1`` and sends a schema request to |sr|. |sr| retrieves the schema associated with schema id ``1``, and returns the schema to the consumer. The consumer caches this mapping between the schema and schema id for subsequent message reads, so it only contacts |sr| on the first schema id read.
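Concretely, each Avro-serialized message value starts with a small header: a magic byte (``0``), followed by the 4-byte schema id, followed by the Avro-encoded payload. As a minimal sketch (an illustrative helper, not part of the examples repo), you can extract the schema id from a raw message value like this:

.. code:: java

   import java.nio.ByteBuffer;

   public class WireFormat {
       // Returns the schema id embedded in a value serialized by KafkaAvroSerializer.
       public static int schemaIdOf(byte[] serializedValue) {
           ByteBuffer buffer = ByteBuffer.wrap(serializedValue);
           byte magic = buffer.get();       // byte 0: magic byte, expected to be 0
           if (magic != 0) {
               throw new IllegalArgumentException("Unknown magic byte: " + magic);
           }
           return buffer.getInt();          // bytes 1-4: schema id as a big-endian int
       }
   }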
Auto Schema Registration
^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: includes/auto-schema-registration.rst

To manually register the schema outside of the application, you can use |c3-short|. First, create a new topic called ``test`` in the same way that you created a new topic called ``transactions`` earlier in the tutorial. Then from the **Schema** tab, click **Set a schema** to define the new schema. Specify values for:

* ``namespace``: a fully qualified name that avoids schema naming conflicts
* ``type``: `Avro data type `_, one of ``record``, ``enum``, ``union``, ``array``, ``map``, ``fixed``
* ``name``: unique schema name in this namespace
* ``fields``: one or more simple or complex data types for a ``record``. The first field in this record is called ``id``, and it is of type ``string``. The second field in this record is called ``amount``, and it is of type ``double``.

If you were to define the same schema as used earlier, you would enter the following in the |c3-short| schema editor:

.. code:: java

   {
    "type": "record",
    "name": "Payment",
    "namespace": "io.confluent.examples.clients.basicavro",
    "fields": [
        {
         "name": "id",
         "type": "string"
        },
        {
         "name": "amount",
         "type": "double"
        }
    ]
   }

If you prefer to connect directly to the REST endpoint in |sr|, then to define a schema for a new subject for the topic ``test``, run the command below.

.. code:: bash

   curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}"}' http://localhost:8081/subjects/test-value/versions

If successful, this command returns the id of the registered schema; because this is the same schema registered earlier, |sr| returns the existing id of ``1``:

.. code:: bash

   {"id":1}

Schema Evolution and Compatibility
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Evolving Schemas
^^^^^^^^^^^^^^^^

So far in this tutorial, you have seen the benefit of |sr| as centralized schema management that enables client applications to register and retrieve globally unique schema ids. The main value of |sr|, however, is in enabling schema evolution. Similar to how APIs evolve and need to be compatible for all applications that rely on old and new versions of the API, schemas also evolve and likewise need to be compatible for all applications that rely on old and new versions of a schema. This schema evolution is a natural behavior of how applications and data develop over time.

|sr| allows for schema evolution and provides compatibility checks to ensure that the contract between producers and consumers is not broken. This allows producers and consumers to update independently and evolve their schemas independently, with assurances that they can read new and legacy data. This is especially important in |ak| because producers and consumers are decoupled applications that are sometimes developed by different teams.

.. include:: includes/transitive.rst

These are the compatibility types:

.. include:: includes/compatibility_list.rst

Refer to :ref:`schema_evolution_and_compatibility` for a more in-depth explanation of the compatibility types.

Failing Compatibility Checks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

|sr| checks compatibility as schemas evolve to uphold the producer-consumer contract. Without |sr| checking compatibility, your applications could potentially break on schema changes.

In the Payment schema example, let's say the business now tracks additional information for each payment, for example, a field ``region`` that represents the place of sale. Consider the :devx-examples:`Payment2a schema|clients/avro/src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2a.avsc`, which includes this extra field ``region``:

.. code:: bash

   cat src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2a.avsc

.. code:: java

   {"namespace": "io.confluent.examples.clients.basicavro",
    "type": "record",
    "name": "Payment",
    "fields": [
        {"name": "id", "type": "string"},
        {"name": "amount", "type": "double"},
        {"name": "region", "type": "string"}
    ]
   }

Before proceeding, because the default |sr| compatibility is :ref:`backward`, think about whether this new schema is backward compatible. Specifically, ask yourself whether a consumer can use this new schema to read data written by producers using the older schema without the ``region`` field. The answer is no. A consumer using the new schema would fail to read data written with the older schema, because those older records do not have the ``region`` field and the new schema provides no default for it; therefore this schema is not backward compatible.
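You can also ask |sr| programmatically whether a candidate schema is compatible before trying to register it. Here is a minimal sketch with the Java client, assuming the same dependency as the earlier client example (note that newer client versions take a ``ParsedSchema`` instead of an Avro ``Schema``):

.. code:: java

   import java.io.File;

   import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
   import org.apache.avro.Schema;

   public class CheckCompatibility {
       public static void main(String[] args) throws Exception {
           CachedSchemaRegistryClient client =
               new CachedSchemaRegistryClient("http://localhost:8081", 100);

           Schema payment2a = new Schema.Parser().parse(
               new File("src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2a.avsc"));

           // Equivalent to POST /compatibility/subjects/transactions-value/versions/latest
           boolean compatible = client.testCompatibility("transactions-value", payment2a);
           System.out.println(compatible); // false: region has no default value
       }
   }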
Confluent provides a :ref:`Schema Registry Maven Plugin `, which you can use to check compatibility in development or integrate into your CI/CD pipeline. Our sample :devx-examples:`pom.xml|clients/avro/pom.xml#L84-L99` includes this plugin to enable compatibility checks.

.. codewithvars:: xml

   <plugin>
       <groupId>io.confluent</groupId>
       <artifactId>kafka-schema-registry-maven-plugin</artifactId>
       <version>|release|</version>
       <configuration>
           <schemaRegistryUrls>
               <param>http://localhost:8081</param>
           </schemaRegistryUrls>
           <subjects>
               <transactions-value>src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2a.avsc</transactions-value>
           </subjects>
       </configuration>
       <goals>
           <goal>test-compatibility</goal>
       </goals>
   </plugin>

It is currently configured to check compatibility of the new ``Payment2a`` schema for the ``transactions-value`` subject in |sr|.

#. Run the compatibility check and verify that it fails:

   .. codewithvars:: bash

      mvn io.confluent:kafka-schema-registry-maven-plugin:|release|:test-compatibility

   Here is the error message you will get:

   .. code:: bash

      ...
      [ERROR] Schema examples/clients/avro/src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2a.avsc is not compatible with subject(transactions-value)
      ...

#. Try to register the new schema ``Payment2a`` manually to |sr|, which is a useful way for non-Java clients to check compatibility if you are not using |c3-short|:

   .. code:: bash

      curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},{\"name\":\"region\",\"type\":\"string\"}]}"}' http://localhost:8081/subjects/transactions-value/versions

   As expected, |sr| rejects the schema with an error message that it is incompatible:

   .. code:: bash

      {"error_code":409,"message":"Schema being registered is incompatible with an earlier schema"}

Passing Compatibility Checks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To maintain :ref:`backward` compatibility, the new schema must provide a default value for the new field, so that a consumer using the new schema can fill in the default when reading data written without that field.

#. Consider an updated :devx-examples:`Payment2b schema|clients/avro/src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2b.avsc` that has a default value for ``region``. Run the ``cat`` command as shown to view the updated schema.

   .. code:: bash

      cat src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2b.avsc

   You should see the following output.

   .. code:: java

      {"namespace": "io.confluent.examples.clients.basicavro",
       "type": "record",
       "name": "Payment",
       "fields": [
           {"name": "id", "type": "string"},
           {"name": "amount", "type": "double"},
           {"name": "region", "type": "string", "default": ""}
       ]
      }

#. From |c3-short|, click the ``transactions`` topic and go to the **Schema** tab to retrieve the ``transactions`` topic's latest schema from |sr|.

#. Click **Edit Schema**.

   .. image:: ../images/tutorial-c3-edit-schema.png
      :width: 600px

#. Add the new field ``region`` again, this time including the default value as shown below, then click **Save**.

   .. code:: java

      {
       "name": "region",
       "type": "string",
       "default": ""
      }

   You should see it accepted.

   .. image:: ../images/tutorial-c3-edit-schema-pass.png
      :width: 600px

   (If you get error messages about invalid Avro, check the syntax; for example, quotes, colons, enclosing brackets, and the comma separating the new field from the previous one.)

   Now this |sr| subject for the topic ``transactions`` has two schemas:

   * version 1 is ``Payment.avsc``
   * version 2 is ``Payment2b.avsc``, which has the additional field ``region`` with a default empty value
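   The default value is what makes the new schema backward compatible: Avro's schema resolution fills in ``region`` when a reader using version 2 decodes a record written with version 1. Here is a minimal, self-contained sketch of that mechanism (plain Avro, no |ak| involved; the inline schema strings mirror ``Payment.avsc`` and ``Payment2b.avsc``):

   .. code:: java

      import java.io.ByteArrayOutputStream;

      import org.apache.avro.Schema;
      import org.apache.avro.generic.GenericData;
      import org.apache.avro.generic.GenericDatumReader;
      import org.apache.avro.generic.GenericDatumWriter;
      import org.apache.avro.generic.GenericRecord;
      import org.apache.avro.io.BinaryDecoder;
      import org.apache.avro.io.BinaryEncoder;
      import org.apache.avro.io.DecoderFactory;
      import org.apache.avro.io.EncoderFactory;

      public class ResolutionDemo {
          public static void main(String[] args) throws Exception {
              Schema v1 = new Schema.Parser().parse(
                  "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\","
                  + "\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}");
              Schema v2 = new Schema.Parser().parse(
                  "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\","
                  + "\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},"
                  + "{\"name\":\"region\",\"type\":\"string\",\"default\":\"\"}]}");

              // Encode a record with the old writer schema (v1) ...
              GenericRecord oldRecord = new GenericData.Record(v1);
              oldRecord.put("id", "id0");
              oldRecord.put("amount", 1000.0d);
              ByteArrayOutputStream out = new ByteArrayOutputStream();
              BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
              new GenericDatumWriter<GenericRecord>(v1).write(oldRecord, encoder);
              encoder.flush();

              // ... and decode it with the new reader schema (v2): region gets its default.
              BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
              GenericRecord newRecord = new GenericDatumReader<GenericRecord>(v1, v2).read(null, decoder);
              System.out.println(newRecord); // {"id": "id0", "amount": 1000.0, "region": ""}
          }
      }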
#. In |c3-short|, still on the **Schema** tab for the topic ``transactions``, click **Version history** and select **Turn on version diff** to compare the two versions:

   .. image:: ../images/tutorial-c3-schema-compare.png
      :width: 600px

#. At the command line, go back to the `Schema Registry Maven Plugin `_ and update the :devx-examples:`pom.xml|clients/avro/pom.xml` to refer to ``Payment2b.avsc`` instead of ``Payment2a.avsc``.

#. Re-run the compatibility check and verify that it passes:

   .. codewithvars:: bash

      mvn io.confluent:kafka-schema-registry-maven-plugin:|release|:test-compatibility

   You will get this message showing that the schema passed the compatibility check.

   .. code:: bash

      ...
      [INFO] Schema examples/clients/avro/src/main/resources/avro/io/confluent/examples/clients/basicavro/Payment2b.avsc is compatible with subject(transactions-value)
      ...

#. If you prefer to connect directly to the REST endpoint in |sr|, then to register the new schema ``Payment2b``, run the command below. It should succeed.

   .. code:: bash

      curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},{\"name\":\"region\",\"type\":\"string\",\"default\":\"\"}]}"}' http://localhost:8081/subjects/transactions-value/versions

   The above ``curl`` command, if successful, returns the ``id`` of the new schema:

   .. code:: bash

      {"id":2}

#. View the latest schema for the subject ``transactions-value`` in |sr|:

   .. code:: bash

      curl --silent -X GET http://localhost:8081/subjects/transactions-value/versions/latest | jq .

   This command returns the latest version of the subject ``transactions-value``, including its version number, schema id, and the schema itself in JSON:

   .. code:: bash

      {
        "subject": "transactions-value",
        "version": 2,
        "id": 2,
        "schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},{\"name\":\"region\",\"type\":\"string\",\"default\":\"\"}]}"
      }

   Notice the changes:

   * ``version``: changed from ``1`` to ``2``
   * ``id``: changed from ``1`` to ``2``
   * ``schema``: updated with the new field ``region`` that has a default value

Changing Compatibility Type
^^^^^^^^^^^^^^^^^^^^^^^^^^^

The default compatibility type is `backward`, but you may change it globally or per subject.

To change the compatibility type per subject from |c3-short|, click the ``transactions`` topic and go to the **Schema** tab to retrieve the ``transactions`` topic's latest schema from |sr|. Click **Edit Schema** and then click **Compatibility Mode**.

.. image:: ../images/c3-edit-compatibility.png
   :width: 600px

Notice that the compatibility for this topic is set to the default `backward`, but you may change this as needed.
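From Java, the equivalent per-subject change can be made with the |sr| client. Here is a minimal sketch, with the same dependency and version caveats as the earlier client examples:

.. code:: java

   import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;

   public class SetCompatibility {
       public static void main(String[] args) throws Exception {
           CachedSchemaRegistryClient client =
               new CachedSchemaRegistryClient("http://localhost:8081", 100);

           // Equivalent to PUT /config/transactions-value
           client.updateCompatibility("transactions-value", "BACKWARD_TRANSITIVE");
           System.out.println(client.getCompatibility("transactions-value"));
       }
   }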
If you prefer to connect directly to the REST endpoint in |sr|, then to change the compatibility type for the topic ``transactions``, i.e., for the subject ``transactions-value``, run the example command below.

.. code:: bash

   curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
     --data '{"compatibility": "BACKWARD_TRANSITIVE"}' \
     http://localhost:8081/config/transactions-value

Next Steps
~~~~~~~~~~

* Adapt your applications to use Avro data
* Change compatibility modes to suit your application needs
* Test new schemas so that they pass compatibility checks
* Try out :ref:`schemaregistry_using`, which shows more curl commands over HTTP and HTTPS
* For a more in-depth understanding of the benefits of Avro, read `Why Avro For Kafka Data `_
* For a more in-depth understanding of the benefits of |sr|, read `Yes, Virginia, You Really Do Need a Schema Registry `_
* Read the user guide on managing schemas in |c3|: :ref:`topicschema`
* Read the Quick Start on configuring and using |sr| for |ccloud|: `Schema Registry and Confluent Cloud `__
* To learn more about getting started with |avro-tm| and Java clients, see ``__
* For a deep dive on all supported schema formats, and how to configure clients to use Avro, Protobuf, or JSON Schema, see :ref:`serializer_and_formatter`
* Work through the :ref:`cp-demo` to understand |sr| in the context of a full |cp| deployment, including various types of security enabled