Schema Registry Maven Plugin for Confluent Platform

A Maven plugin for Confluent Schema Registry is available to help throughout the development process, with configuration options as listed below.

Tip

There is no official out-of-the-box Gradle plugin available for Schema Registry. However, you can use any of the goals described below by referencing the Maven plugin in your pom.xml (Project Object Model) file.

configs for all goals

Starting with Confluent Platform 7.0.0, the configs option is available for the Schema Registry Maven plugin for all goals. You can use configs to add any valid configuration to the CachedSchemaRegistryClient.

The syntax details are:

configs

  • Type: Map<String, String>
  • Required: false

For example, to set up SSL for the Maven plugin, specify the keystore/truststore location by adding the following configuration to the plugin:

<configuration>
   <configs>
     <schema.registry.ssl.keystore.location>path-to-keystore.jks</schema.registry.ssl.keystore.location>
     <schema.registry.ssl.keystore.password>password</schema.registry.ssl.keystore.password>
     <schema.registry.ssl.truststore.location>path-to-truststore.jks</schema.registry.ssl.truststore.location>
     <schema.registry.ssl.truststore.password>password</schema.registry.ssl.truststore.password>
    </configs>
</configuration>

Note that the schema.registry. prefix is required, just as with other Schema Registry client configurations for HTTPS.
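
Similarly, to connect to an endpoint that requires basic authentication, such as Confluent Cloud, you can pass the standard Schema Registry client credential settings through configs. The following is a minimal sketch, assuming the USER_INFO credential source; the key and secret are placeholders:

<configuration>
   <configs>
     <basic.auth.credentials.source>USER_INFO</basic.auth.credentials.source>
     <!-- placeholder credentials of the form key:secret -->
     <schema.registry.basic.auth.user.info>api-key:api-secret</schema.registry.basic.auth.user.info>
   </configs>
</configuration>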

schema-registry:download

The download goal is used to pull down schemas from a Schema Registry server. It downloads the schemas for the requested subjects and writes them to a folder on the local file system. If the versions array is empty, the latest schema for each matching subject is downloaded. If the array is populated, its length must match the length of the subjectPatterns array. This allows you to compare a new schema against a specific schema version and to retrieve older versions.

Tip

In Confluent Platform version 7.2.0-0 and later, you can specify a version to download, which better supports schema-registry:test-local-compatibility. Prior to Confluent Platform 7.2.0-0, you could only download the latest version.

schemaRegistryUrls

Schema Registry URLs to connect to.

  • Type: String[]
  • Required: true
userInfoConfig

User credentials for connecting to Schema Registry, of the form user:password. This is required if connecting to Confluent Cloud Schema Registry.

  • Type: String[]
  • Required: false
  • Default: null
outputDirectory

Output directory to write the schemas to.

  • Type: File
  • Required: true
schemaExtension

The file extension to use for the output file name. This must begin with a . character.

  • Type: String
  • Required: false
  • Default: .avsc
subjectPatterns

The subject patterns to download. This is a list of regular expressions. Patterns must match the entire subject name.

  • Type: String[]
  • Required: true

Example 1: Download the latest version of a schema

The following code uses the plugin to download the latest version of schemas with the subject pattern ^TestSubject000-(key|value)$:

<plugin>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-schema-registry-maven-plugin</artifactId>
    <version>7.6.0</version>
    <configuration>
        <schemaRegistryUrls>
            <param>http://192.168.99.100:8081</param>
        </schemaRegistryUrls>
        <outputDirectory>src/main/avro</outputDirectory>
        <subjectPatterns>
            <param>^TestSubject000-(key|value)$</param>
        </subjectPatterns>
    </configuration>
</plugin>

Example 2: Specify a schema version to download

The following code uses the plugin to download version 1 of schemas with the subject pattern ^topic1-(key|value)$, and the latest version of schemas with the subject pattern ^topic2-(key|value)$:

<plugin>
  <groupId>io.confluent</groupId>
  <artifactId>kafka-schema-registry-maven-plugin</artifactId>
  <version>7.6.0</version>
  <configuration>
      <schemaRegistryUrls>
          <param>http://127.0.0.1:8081</param>
      </schemaRegistryUrls>
      <outputDirectory>outputDir/target/</outputDirectory>
      <subjectPatterns>
          <param>^topic1-(key|value)$</param>
          <param>^topic2-(key|value)$</param>
      </subjectPatterns>
      <versions>
          <param>1</param>
          <param>latest</param>
      </versions>
  </configuration>
</plugin>
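
To download from Confluent Cloud Schema Registry, the same configuration also needs credentials via userInfoConfig, as described above. The following is a minimal sketch; the endpoint and API key and secret are placeholders:

<plugin>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-schema-registry-maven-plugin</artifactId>
    <version>7.6.0</version>
    <configuration>
        <schemaRegistryUrls>
            <!-- placeholder Confluent Cloud Schema Registry endpoint -->
            <param>https://psrc-xxxxx.us-east-2.aws.confluent.cloud</param>
        </schemaRegistryUrls>
        <!-- placeholder credentials of the form key:secret -->
        <userInfoConfig>api-key:api-secret</userInfoConfig>
        <outputDirectory>src/main/avro</outputDirectory>
        <subjectPatterns>
            <param>^TestSubject000-(key|value)$</param>
        </subjectPatterns>
    </configuration>
</plugin>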

schema-registry:set-compatibility

The schema-registry:set-compatibility goal is available in Confluent Platform version 7.2.0-0 and later.

This goal is used to update the compatibility configuration of a subject, or the global configuration, directly from the plugin. This enables you to change compatibility levels as your schemas evolve, and to centralize your subject and schema management.

schemaRegistryUrls

Schema Registry URLs to connect to.

  • Type: String[]
  • Required: true
compatibilityLevels

Map of subjects to the compatibility levels to set for each.

  • Type: Map<String,String> (Subject,CompatibilityLevel)
  • Required: true

If the subject is NULL or __GLOBAL, then the global configuration is updated. If the CompatibilityLevel is NULL, then the configuration is deleted.

The following example uses the plugin to set the compatibility of the subject order to BACKWARD and product to FORWARD_TRANSITIVE, delete the compatibility of customer, and change the global compatibility to BACKWARD_TRANSITIVE:

<plugin>
  <groupId>io.confluent</groupId>
  <artifactId>kafka-schema-registry-maven-plugin</artifactId>
  <version>7.6.0</version>
  <configuration>
        <schemaRegistryUrls>
            <param>http://192.168.99.100:8081</param>
        </schemaRegistryUrls>
        <compatibilityLevels>
            <order>BACKWARD</order>
            <product>FORWARD_TRANSITIVE</product>
            <customer>null</customer>
            <__GLOBAL>BACKWARD_TRANSITIVE</__GLOBAL>
        </compatibilityLevels>
    </configuration>
</plugin>

schema-registry:test-local-compatibility

This goal, available in Confluent Platform version 7.2.0-0 and later, tests compatibility of a local schema with other existing local schemas during development and testing phases.

Before the addition of schema-registry:test-local-compatibility, checking the compatibility of a new schema required connecting to Schema Registry and registering every schema you wanted to check against, which counted against the number of schemas available to you. This goal removes that requirement and supports quick, efficient compatibility testing of local schemas during development and testing.

Tip

For examples of testing schema compatibility using GitHub Actions, see the Workflows and examples provided below, and the kafka-github-actions demo repo.

schemas

Map of schema names to the file locations of the schemas for which the compatibility test is performed.

  • Type: Map<String, File>
  • Required: true
previousSchemaPaths

Map of schema names to the locations of previous schemas. The compatibility test is performed for each schema against the schemas in previousSchemaPaths. The location can be a directory or a file name. If it is a directory, all files inside the folder are added; subdirectories are ignored.

  • Type: Map<String, File>
  • Required: true
compatibilityLevels

Map of schema names to the compatibility type to check for each. CompatibilityLevel is an enum (one of NONE, BACKWARD, BACKWARD_TRANSITIVE, FORWARD, FORWARD_TRANSITIVE, FULL, or FULL_TRANSITIVE). For compatibility level BACKWARD, FORWARD, or FULL, exactly one previous schema is expected per schema.

  • Type: Map<String, CompatibilityLevel>
  • Required: true
schemaTypes

Map of schema names to the schema type of each.

  • Type: Map<String, String> (one of AVRO (default), JSON, PROTOBUF)
  • Required: false
  • Default: AVRO

The following example uses the plugin to configure three schemas (order, product, and customer) with schema type AVRO:

<configuration>
        <schemas>
            <order>src/main/avro/order.avsc</order>
            <product>src/main/avro/product.avsc</product>
            <customer>src/main/avro/customer.avsc</customer>
        </schemas>
        <schemaTypes>
            <order>AVRO</order>
            <product>AVRO</product>
            <customer>AVRO</customer>
        </schemaTypes>
        <compatibilityLevels>
            <order>BACKWARD</order>
            <product>FORWARD</product>
            <customer>NONE</customer>
        </compatibilityLevels>
        <previousSchemaPaths>
            <order>src/main/avro/order.avsc</order>
            <product>src/main/avro/products/</product>
            <customer>src/main/avro/customer.avsc</customer>
        </previousSchemaPaths>
</configuration>
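
As with the other goals, this configuration block sits inside the plugin declaration. The following is a minimal sketch; the version and paths are placeholders mirroring the examples above, and no schemaRegistryUrls are needed because the check is purely local:

<plugin>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-schema-registry-maven-plugin</artifactId>
    <version>7.6.0</version>
    <configuration>
        <schemas>
            <order>src/main/avro/order.avsc</order>
        </schemas>
        <compatibilityLevels>
            <order>BACKWARD</order>
        </compatibilityLevels>
        <previousSchemaPaths>
            <!-- placeholder path to the single previously registered order schema,
                 since BACKWARD expects exactly one previous schema -->
            <order>src/main/avro/previous/order.avsc</order>
        </previousSchemaPaths>
    </configuration>
</plugin>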

schema-registry:test-compatibility

This goal is used to read schemas from the local file system and test them for compatibility against the Schema Registry server(s). This goal can be used in a continuous integration pipeline to ensure that schemas in the project are compatible with the schemas in another environment.

Tip

For examples of testing schema compatibility using GitHub Actions, see the Workflows and examples provided below, and the kafka-github-actions demo repo.

schemaRegistryUrls

Schema Registry URLs to connect to.

  • Type: String[]
  • Required: true
userInfoConfig

User credentials for connecting to Schema Registry, of the form user:password. This is required if connecting to Confluent Cloud Schema Registry.

  • Type: String[]
  • Required: false
  • Default: null
subjects

Map of subject names to the local schema paths of the subjects to be tested.

  • Type: Map<String, File>
  • Required: true

Tip

Starting with Confluent Platform 5.5.5, you can specify a slash /, and other special characters, in a subject name. The Maven plugin requires subject names to be specified as XML element names, but some characters, such as slashes, are not valid in an XML name. To work around this, first URL-encode the subject name, and then replace any characters in the output that are still not valid in XML. For example, a subject name such as /path/to/my.proto URL-encodes to %2Fpath%2Fto%2Fmy.proto, which you then revise by replacing % with _x (because % is not valid in an XML name): _x2Fpath_x2Fto_x2Fmy.proto. You might want to use slashes in a subject name, for example, to register a Protobuf schema that is referenced by another schema under the subject /path/to/my.proto; this workaround enables you to do that.
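
For example (a sketch; the schema file path is a placeholder), the encoded name becomes the element name in the subjects map:

<subjects>
    <!-- encoded form of the subject /path/to/my.proto -->
    <_x2Fpath_x2Fto_x2Fmy.proto>src/main/proto/my.proto</_x2Fpath_x2Fto_x2Fmy.proto>
</subjects>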

schemaTypes

Map of subject names to schema types.

  • Type: Map<String, String> (one of AVRO (default), JSON, PROTOBUF)
  • Required: false
  • Default: AVRO
references

Map containing a reference name and a subject.

  • Type: Map<String, Reference[]>
  • Required: false
metadata

Map containing a subject and a Metadata object.

  • Type: Map<String, Metadata>
  • Required: false
ruleSet

Map containing a subject and a RuleSet object.

  • Type: Map<String, Ruleset>
  • Required: false
verbose

Include in the output the reason the schema fails the compatibility test, in cases where it fails.

  • Type: Boolean
  • Required: false
  • Default: true

The following example uses the plugin to configure three subjects (order, product, and customer) with schema type AVRO:

<plugin>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-schema-registry-maven-plugin</artifactId>
    <version>7.6.0</version>
    <configuration>
        <schemaRegistryUrls>
            <param>http://192.168.99.100:8081</param>
        </schemaRegistryUrls>
        <subjects>
            <order>src/main/avro/order.avsc</order>
            <product>src/main/avro/product.avsc</product>
            <customer>src/main/avro/customer.avsc</customer>
        </subjects>
        <schemaTypes>
            <order>AVRO</order>
            <product>AVRO</product>
            <customer>AVRO</customer>
        </schemaTypes>
        <references>
            <order>
              <reference>
                  <name>com.acme.Product</name>
                  <subject>product</subject>
              </reference>
              <reference>
                  <name>com.acme.Customer</name>
                  <subject>customer</subject>
              </reference>
            </order>
        </references>
    </configuration>
    <goals>
        <goal>test-compatibility</goal>
    </goals>
</plugin>

Example Usage

Example usage of schema-registry:test-compatibility:

  • to a local Schema Registry, see the Schema Registry tutorial.
  • to Confluent Cloud Schema Registry, see GitHub example.

schema-registry:derive-schema

This goal is used to automatically generate a schema (Avro, JSON, or Protobuf) from a given file containing messages in JSON format. The generated schema, provided as output of the derive-schema command, can be used as is or as a starting point for developers to build on.

The derive-schema goal takes the following three parameters (inputs).

messagePath

Location of the file containing messages. Each message must be in JSON format, one message per line.

  • Type: File
  • Required: true
outputPath

Location of file to which result is written.

  • Type: File
  • Required: true
schemaType

String that specifies the schema type.

  • Type: String (one of AVRO (default), JSON, PROTOBUF)
  • Required: false
  • Default: AVRO

The output of the derive-schema command is a JSON document with one or more generated schemas. The output includes each schema itself, along with the messages matched, represented by their line numbers in the input file, starting from 0.

{
  "schemas": [
    {
      "schema": "SCHEMA",
      "messagesMatched": [0,3,5]
    },
    {
      "schema": "SCHEMA",
      "messagesMatched": [1,2,4]
    }
  ]
}

Depending on the schema format (type), the output varies in the number of schemas returned, whether the format allows optional fields, whether arrays may contain multiple data types, and so on. These output rules, along with examples of input messages and resulting schemas for each format, are shown below.

Avro rules and examples

Avro output rules are as follows.

  • Multiple schemas can be returned.
  • Arrays are expected to have a single data type. Any message not following this will throw an error.
  • Avro does not support optional fields, so records cannot be combined together.
  • Unions are supported.
  • Arrays containing multiple data types using unions are supported.

Example Avro messages are shown below:

{"name": "Foo", "Age": {"int": 12}}
{"name": "Bar", "Age": {"string": "12"}}
{"sport": "Football"}

Here is an example of a generated Avro schema based on the Avro message inputs. Message 0 and Message 1 both have the field Age of type union and can be merged into one schema. Message 2 has a different field, sport, which cannot be merged with the other messages, so a separate schema is generated for Message 2.

{
  "schemas" : [ {
    "schema" : {
      "type" : "record",
      "name" : "Schema",
      "fields" : [ {
        "name" : "Age",
        "type" : [ "int", "string" ]
      }, {
        "name" : "name",
        "type" : "string"
      } ]
    },
    "messagesMatched" : [ 0, 1 ]
  }, {
    "schema" : {
      "type" : "record",
      "name" : "Schema",
      "fields" : [ {
        "name" : "sport",
        "type" : "string"
      } ]
    },
    "messagesMatched" : [ 2 ]
  } ]
}

JSON Schema rules and examples

JSON Schema output rules are as follows.

  • Exactly one schema is returned for all inputs.
  • The data type for an array is chosen by combining the data types of all its elements.
  • If there are multiple data types for the same name, they are combined using oneOf. For records with the same name, all their fields are combined into one record. For arrays with the same name, their data types are combined.

Example JSON Schema messages are shown below:

{"name": "Foo", "Age": 12}
{"name": "Bar", "Age": "12"}
{"sport": "Football"}

Here is an example of a generated JSON Schema based on JSON Schema message inputs. In JSON Schema, all fields are optional. The field sport is merged with other messages. For the field Age, its types string and number are combined using oneOf.

{
  "schemas" : [ {
    "schema" : {
      "type" : "object",
      "properties" : {
        "Age" : {
          "oneOf" : [ {
            "type" : "number"
          }, {
            "type" : "string"
          } ]
        },
        "name" : {
          "type" : "string"
        },
        "sport" : {
          "type" : "string"
        }
      }
    },
    "messagesMatched" : [ 0, 1, 2 ]
  } ]
}

Protobuf rules and examples

Protobuf output rules are as follows.

  • Multiple schemas can be returned.
  • Arrays are expected to have a single data type. Any message not following this will throw an error.
  • For records with the same name, all their fields are combined into one record.

Example Protobuf messages are shown below:

{"name": "Foo", "Age": 12}
{"name": "Bar", "Age": "12"}
{"sport": "Football"}

Here is an example of generated Protobuf schemas based on the Protobuf message inputs. In proto3, all fields are optional. Message 0 and Message 2 can be merged, assuming the field sport is optional. The same applies to Message 1 and Message 2. Message 0 and Message 1 have conflicting data types for the field Age, so two different schemas are generated.

{
  "schemas" : [ {
    "schema" : "syntax = \"proto3\";\n\nmessage Schema {\n  int32 Age = 1;\n  string name = 2;\n  string sport = 3;\n}\n",
    "messagesMatched" : [ 0, 2 ]
  }, {
    "schema" : "syntax = \"proto3\";\n\nmessage Schema {\n  string Age = 1;\n  string name = 2;\n  string sport = 3;\n}\n",
    "messagesMatched" : [ 1, 2 ]
  } ]
}

Primitive Data Types Mapping

The following table shows how data types are interpreted for each schema type during schema creation.

Actual Class            Avro Datatype Chosen   JSON Datatype Chosen   Protobuf Datatype Chosen
Long                    long                   number                 int64
Short/Integer           int                    number                 int32
Null                    null                   null                   google.protobuf.Any
BigInteger/BigDecimal   double                 number                 double
Float/Double            double                 number                 double
Boolean                 boolean                boolean                boolean
String                  string                 string                 string

Workflow and pom.xml example

  1. Configure the pom.xml for a schema, including all required configuration parameters. (This example specifies Avro.)

    <plugins>
        <plugin>
            <groupId>io.confluent</groupId>
            <artifactId>kafka-schema-registry-maven-plugin</artifactId>
            <version>7.6.0</version>
            <configuration>
                <messagePath>messages/my-message-01.txt</messagePath>
                <schemaType>avro</schemaType>
                <outputPath>new-schema-01.json</outputPath>
            </configuration>
        </plugin>
    </plugins>
    
  2. Run the derive-schema command.

    mvn io.confluent:kafka-schema-registry-maven-plugin:derive-schema
    
  3. View the generated schema.

schema-registry:validate

This goal is used to read schemas from the local file system and validate them locally, before registering them. If you find syntax errors, you can examine and correct them before submitting schemas to Schema Registry with schema-registry:register.

schemaRegistryUrls

Schema Registry URLs to connect to.

  • Type: String[]
  • Required: true
userInfoConfig

User credentials for connecting to Schema Registry, of the form user:password. This is required if connecting to Confluent Cloud Schema Registry.

  • Type: String[]
  • Required: false
  • Default: null
subjects

Map of subject names to the local schema paths of the subjects to be validated.

  • Type: Map<String, File>
  • Required: true
schemaTypes

Map of subject names to schema types.

  • Type: Map<String, String> (one of AVRO (default), JSON, PROTOBUF)
  • Required: false
  • Default: AVRO
references

Map containing a reference name and a subject. (The referenced schema must be registered.)

  • Type: Map<String, Reference[]>
  • Required: false
metadata

Map containing a subject and a Metadata object.

  • Type: Map<String, Metadata>
  • Required: false
ruleSet

Map containing a subject and a RuleSet object.

  • Type: Map<String, Ruleset>
  • Required: false
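
The following sketch (mirroring the register example below; the URL and path are placeholders) validates a local Avro schema for the subject order before it is registered:

<plugin>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-schema-registry-maven-plugin</artifactId>
    <version>7.6.0</version>
    <configuration>
        <schemaRegistryUrls>
            <param>http://192.168.99.100:8081</param>
        </schemaRegistryUrls>
        <subjects>
            <order>src/main/avro/order.avsc</order>
        </subjects>
        <schemaTypes>
            <order>AVRO</order>
        </schemaTypes>
    </configuration>
    <goals>
        <goal>validate</goal>
    </goals>
</plugin>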

schema-registry:register

This goal is used to read schemas from the local file system and register them on the target Schema Registry server(s). This goal can be used in a continuous deployment pipeline to push schemas to a new environment.

schemaRegistryUrls

Schema Registry URLs to connect to.

  • Type: String[]
  • Required: true
userInfoConfig

User credentials for connecting to Schema Registry, of the form user:password. This is required if connecting to Confluent Cloud Schema Registry.

  • Type: String[]
  • Required: false
  • Default: null
subjects

Map of subject names to the local schema paths of the subjects to be registered.

  • Type: Map<String, File>
  • Required: true
schemaTypes

Map of subject names to schema types.

  • Type: Map<String, String> (one of AVRO (default), JSON, PROTOBUF)
  • Required: false
  • Default: AVRO
normalizeSchemas

Normalizes schemas based on semantic equivalence during registration or lookup, as shown in the fragment after this parameter list. To learn more, see Schema normalization in Formats, Serializers, and Deserializers.

  • Type: Boolean
  • Required: false
  • Default: false
references

Map containing a reference name and a subject.

  • Type: Map<String, Reference[]>
  • Required: false
metadata

Map containing a subject and a Metadata object.

  • Type: Map<String, Metadata>
  • Required: false
ruleSet

Map containing a subject and a RuleSet object.

  • Type: Map<String, Ruleset>
  • Required: false
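
For example, enabling normalization during registration is a single flag in the plugin configuration (a minimal fragment; the remaining parameters are unchanged from the examples below):

<configuration>
    <normalizeSchemas>true</normalizeSchemas>
</configuration>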

The following example uses the plugin to configure three subjects (order, product, and customer) with schema type AVRO:

<plugin>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-schema-registry-maven-plugin</artifactId>
    <version>7.6.0</version>
    <configuration>
        <schemaRegistryUrls>
            <param>http://192.168.99.100:8081</param>
        </schemaRegistryUrls>
        <subjects>
            <order>src/main/avro/order.avsc</order>
            <product>src/main/avro/product.avsc</product>
            <customer>src/main/avro/customer.avsc</customer>
        </subjects>
        <schemaTypes>
            <order>AVRO</order>
            <product>AVRO</product>
            <customer>AVRO</customer>
        </schemaTypes>
        <references>
            <order>
              <reference>
                  <name>com.acme.Product</name>
                  <subject>product</subject>
              </reference>
              <reference>
                  <name>com.acme.Customer</name>
                  <subject>customer</subject>
              </reference>
            </order>
        </references>
    </configuration>
    <goals>
        <goal>register</goal>
    </goals>
</plugin>

The following plugin example uses the metadata and ruleSet parameters to add metadata and perform checks on the data as part of registering the schema.

<plugin>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-schema-registry-maven-plugin</artifactId>
    <version>7.4.0</version>
    <configuration>
        <schemaRegistryUrls>
            <param>http://192.168.99.100:8081</param>
        </schemaRegistryUrls>
        <subjects>
            <customer>src/main/avro/customer.avsc</customer>
        </subjects>
        <schemaTypes>
            <customer>AVRO</customer>
        </schemaTypes>
        <metadata>
            <customer>
                <tags>
                    <ssn>PII</ssn>
                    <age>PHI</age>
                </tags>
                <properties>
                    <owner>Bob Jones</owner>
                    <email>bob@acme.com</email>
                </properties>
            </customer>
        </metadata>
        <ruleSet>
            <customer>
                <domainRules>
                    <rule>
                        <name>checkSsnLen</name>
                        <doc>Check the SSN length.</doc>
                        <kind>CONDITION</kind>
                        <mode>WRITE</mode>
                        <type>CEL</type>
                        <tags>
                            <tag>PII</tag>
                            <tag>PHI</tag>
                        </tags>
                        <params>
                            <key1>value1</key1>
                            <key2>value2</key2>
                        </params>
                        <expr>size(message.ssn) == 9</expr>
                        <onSuccess>NONE</onSuccess>
                        <onFailure>DLQ</onFailure>
                        <disabled>false</disabled>
                    </rule>
                </domainRules>
            </customer>
        </ruleSet>
    </configuration>
    <goals>
        <goal>register</goal>
    </goals>
</plugin>

Workflows and examples

You can integrate Maven plugin goals with GitHub Actions into a continuous integration/continuous deployment (CI/CD) pipeline to manage schemas on Schema Registry. A general example for developing and validating an Apache Kafka® client application with a Python producer and consumer is provided in the kafka-github-actions demo repo.

Here is an alternative sample pom.xml with project configurations for more detailed validate and register steps.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>io.confluent</groupId>
    <artifactId>GitHub-Actions-Demo</artifactId>
    <version>1.0</version>

    <pluginRepositories>
        <pluginRepository>
            <id>confluent</id>
            <url>https://packages.confluent.io/maven/</url>
        </pluginRepository>
    </pluginRepositories>

    <properties>
        <schemaRegistryUrl><$CONFLUENT_SCHEMA_REGISTRY_URL></schemaRegistryUrl>
        <schemaRegistryBasicAuthUserInfo>
            <$CONFLUENT_BASIC_AUTH_USER_INFO>
        </schemaRegistryBasicAuthUserInfo>
        <confluent.version>7.6.0</confluent.version>
    </properties>

    <build>
        <plugins>
            <plugin>
                <groupId>io.confluent</groupId>
                <artifactId>kafka-schema-registry-maven-plugin</artifactId>
                <version>${confluent.version}</version>
                <configuration>
                    <schemaRegistryUrls>
                        <param>${schemaRegistryUrl}</param>
                    </schemaRegistryUrls>
                    <userInfoConfig>${schemaRegistryBasicAuthUserInfo}</userInfoConfig>
                </configuration>
                <executions>

                    <execution>
                        <id>validate</id>
                        <phase>validate</phase>
                        <goals>
                            <goal>validate</goal>
                        </goals>
                        <configuration>
                            <subjects>
                                <Orders-value>src/main/resources/order.avsc</Orders-value>
                                <Flights-value>src/main/resources/flight.proto</Flights-value>
                            </subjects>
                            <schemaTypes>
                                <Flights-value>PROTOBUF</Flights-value>
                            </schemaTypes>

                        </configuration>
                    </execution>

                    <execution>
                        <id>set-compatibility</id>
                        <phase>validate</phase>
                        <goals>
                            <goal>set-compatibility</goal>
                        </goals>
                        <configuration>
                            <compatibilityLevels>
                                <Orders-value>FORWARD_TRANSITIVE</Orders-value>
                                <Flights-value>FORWARD_TRANSITIVE</Flights-value>
                            </compatibilityLevels>
                        </configuration>
                    </execution>


                    <execution>
                        <id>test-local</id>
                        <phase>validate</phase>
                        <goals>
                            <goal>test-local-compatibility</goal>
                        </goals>
                        <configuration>
                            <schemas>
                                <order>src/main/resources/order.avsc</order>
                                <flight>src/main/resources/flight.proto</flight>
                            </schemas>
                            <schemaTypes>
                                <flight>ProtoBuf</flight>
                            </schemaTypes>
                            <previousSchemaPaths>
                                <flight>src/main/resources/flightSchemas</flight>
                                <order>src/main/resources/orderSchemas</order>
                            </previousSchemaPaths>
                            <compatibilityLevels>
                                <order>FORWARD_TRANSITIVE</order>
                                <flight>FORWARD_TRANSITIVE</flight>
                            </compatibilityLevels>
                        </configuration>
                    </execution>

                    <execution>
                        <id>test-compatibility</id>
                        <phase>validate</phase>
                        <goals>
                            <goal>test-compatibility</goal>
                        </goals>
                        <configuration>
                            <subjects>
                                <Orders-value>src/main/resources/order.avsc</Orders-value>
                                <Flights-value>src/main/resources/flight.proto</Flights-value>
                            </subjects>
                            <schemaTypes>
                                <Flights-value>PROTOBUF</Flights-value>
                            </schemaTypes>
                        </configuration>
                    </execution>

                    <execution>
                        <id>register</id>
                        <goals>
                            <goal>register</goal>
                        </goals>
                        <configuration>
                            <subjects>
                                <Orders-value>src/main/resources/order.avsc</Orders-value>
                                <Flights-value>src/main/resources/flight.proto</Flights-value>
                            </subjects>
                            <schemaTypes>
                                <Flights-value>PROTOBUF</Flights-value>
                            </schemaTypes>
                        </configuration>
                    </execution>

                </executions>
            </plugin>

        </plugins>
    </build>


</project>

The following workflows can be coded as GitHub Actions to accomplish CI/CD for schema management.

  1. When a pull request is created to merge a new schema to master, validate the schema, check local schema compatibility, set the compatibility of the subject, and test schema compatibility with the subject.

    run: mvn validate
    

    The validate step would include:

    mvn schema-registry:validate@validate
    mvn schema-registry:test-local-compatibility@test-local
    mvn schema-registry:set-compatibility@set-compatibility
    mvn schema-registry:test-compatibility@test-compatibility
    

    Integrated with GitHub Actions, the pull-request.yaml for this step might look like this:

    name: Testing branch for compatibility before merging
    on:
      pull_request:
        branches: [ master ]
        paths: [src/main/resources/*]
    jobs:
    
      validate:
        runs-on: ubuntu-latest
        steps:
           - uses: actions/checkout@v2
           - uses: actions/setup-java@v2
             with:
               java-version: '11'
               distribution: 'temurin'
               cache: maven
           - name: Validate if schema is valid
             run: mvn schema-registry:validate@validate
    
      test-local-compatibility:
        needs: validate
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-java@v2
            with:
              java-version: '11'
              distribution: 'temurin'
              cache: maven
          - name: Test schema with locally present schema
            run: mvn schema-registry:test-local-compatibility@test-local
    
      set-compatibility:
        needs: test-local-compatibility
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-java@v2
            with:
              java-version: '11'
              distribution: 'temurin'
              cache: maven
          - name: Set compatibility of subject
            run: mvn schema-registry:set-compatibility@set-compatibility
    
      test-compatibility:
        needs: set-compatibility
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-java@v2
            with:
              java-version: '11'
              distribution: 'temurin'
              cache: maven
          - name: Test schema with subject
            run: mvn schema-registry:test-compatibility@test-compatibility
    

If compatibility checking passes, a new pull request is created for approval.

  2. Register the schema when a pull request is approved and merged to master.

    Run the action to register the new schema on the Schema Registry:

    run: mvn schema-registry:register@register
    

    The push.yaml for this step would look like this:

    name: Registering Schema on merge of pull request
    on:
      push:
        branches: [ master ]
        paths: [src/main/resources/*]
    jobs:
      register-schema:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-java@v2
            with:
              java-version: '11'
              distribution: 'temurin'
              cache: maven
          - name: Register Schema
            run: mvn io.confluent:kafka-schema-registry-maven-plugin:register@register