Schema Registry Configuration Reference for Confluent Platform¶
This section contains Schema Registry configuration parameters organized by level of importance.
- High: These parameters can have a significant impact on performance. Take care when deciding the values of these parameters.
- Medium: These parameters can have some impact on performance. Your specific environment will determine how much tuning effort should be focused on these parameters.
- Low: These parameters have a less general or less significant impact on performance.
These parameters are defined in the Schema Registry configuration file, schema-registry.properties, which is located at CONFLUENT_HOME/etc/schema-registry/schema-registry.properties on a local install.
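As a starting point, a minimal schema-registry.properties might look like the following sketch. The hostnames are placeholders, not values from this reference; only the listener port and topic name match documented defaults.

```properties
# Listener for REST API requests (documented default port)
listeners=http://0.0.0.0:8081

# Kafka brokers backing the schema store -- hostnames are hypothetical
kafkastore.bootstrap.servers=PLAINTEXT://kafka1:9092,PLAINTEXT://kafka2:9092

# Topic that stores schema data (documented default shown)
kafkastore.topic=_schemas
```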
avro.compatibility.level¶
DEPRECATED: The Avro compatibility type.
Use schema.compatibility.level instead.
- Type: string
- Default: “backward”
- Importance: high
access.control.allow.methods¶
Set the value of the Jetty Access-Control-Allow-Methods header, which lists the HTTP methods permitted for cross-origin requests (for example, GET,POST,OPTIONS).
- Type: string
- Default: “”
- Importance: low
access.control.allow.origin¶
Set the value for the Jetty Access-Control-Allow-Origin header.
- Type: string
- Default: “”
- Importance: low
confluent.schema.registry.auth.ssl.principal.mapping.rules¶
A list of rules for mapping distinguished name (DN) from the client certificate to short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, DN of the X.500 certificate is the principal.
- Type: list
- Default: DEFAULT
- Importance: low
debug¶
Boolean indicating whether extra debugging information is generated in some error response entities.
- Type: boolean
- Default: false
- Importance: low
enable.fips¶
Enable FIPS mode on the server. If FIPS mode is enabled, broker listener security protocols, TLS versions, and cipher suites are validated against FIPS compliance requirements.
Important
If set to true, the init.resource.extension.class configuration must be present, and it must contain io.confluent.kafka.schemaregistry.security.SchemaRegistryFipsResourceExtension.
- Type: boolean
- Default: false
- Importance: low
exporter.config.topic¶
The topic used to persist exporter configs. This option is specific to schema exporters.
- Type: string
- Default: “_exporter_configs”
- Importance: high
exporter.state.topic¶
The topic used to persist exporter states. This option is specific to schema exporters.
- Type: string
- Default: “_exporter_states”
- Importance: high
exporter.max.exporters¶
Maximum number of exporters per tenant. This option is specific to schema exporters.
- Type: int
- Default: 10
- Importance: low
exporter.num.threads¶
The number of threads for performing exports. This option is specific to schema exporters.
- Type: int
- Default: 10
- Importance: low
exporter.max.retries¶
Maximum number of times to retry exporter operations. This option is specific to schema exporters.
- Type: int
- Default: 3
- Importance: low
exporter.retries.wait.ms¶
Specifies the time in milliseconds to wait before each retry. This option is specific to schema exporters.
- Type: int
- Default: 1000
- Importance: low
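Taken together, the exporter settings above might be tuned as in this sketch. The values shown are the documented defaults, restated for illustration rather than as recommendations.

```properties
# Schema exporter persistence topics (documented defaults)
exporter.config.topic=_exporter_configs
exporter.state.topic=_exporter_states

# Exporter concurrency and retry behavior (documented defaults)
exporter.max.exporters=10
exporter.num.threads=10
exporter.max.retries=3
exporter.retries.wait.ms=1000
```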
host.name¶
The advertised host name. Make sure to set this if running Schema Registry with multiple nodes. Starting with Confluent Platform 7.4.0, this name is always used in the endpoint for communication between Schema Registry instances, even if inter.instance.listener.name is specified or does not match any listener. (The scheme, port, and TLS configurations are still picked based on the inter.instance.listener.name config, if it is set.)
- Type: string
- Default: “192.168.50.1”
- Importance: high
Important
If host.name is set in a multi-node Schema Registry environment, it must resolve to a valid location or URL from the other Schema Registry instance(s) to ensure communication between the nodes.
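In a multi-node deployment, each instance would advertise a name that its peers can resolve; a sketch with a hypothetical hostname:

```properties
# On node 1: sr1.example.com must resolve from the other Schema Registry nodes
host.name=sr1.example.com
listeners=http://0.0.0.0:8081
```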
host.http.connect.timeout.ms¶
The HTTP connection timeout in milliseconds for the Schema Registry client. Specifies the maximum amount of time that a client waits for a response from Schema Registry when attempting to connect before timing out (giving up). The default is 60000 milliseconds (60 seconds). This is configurable on Confluent Platform 6.0 and later.
- Type: int
- Default: 60000
- Valid Values: [0,…]
- Importance: low
http.read.timeout.ms¶
The HTTP read timeout in milliseconds for the Schema Registry client. Specifies the maximum amount of time that a client waits for a response from Schema Registry on a READ attempt (for example, to retrieve a schema) before timing out (giving up). The default is 60000 milliseconds (60 seconds). This is configurable on Confluent Platform 6.0 and later.
- Type: int
- Default: 60000
- Valid Values: [0,…]
- Importance: low
init.resource.extension.class¶
A list of classes to use as SchemaRegistryResourceExtension. Implementing the SchemaRegistryResourceExtension interface allows you to inject user-defined resources into Schema Registry. These resources are injected before Schema Registry is initialized.
- Type: list
- Default: []
- Importance: low
inter.instance.protocol¶
The protocol used while making calls between the instances of Schema Registry. The secondary-to-primary node calls for writes and deletes use the specified protocol. The default value is http. When https is set, the ssl.keystore and ssl.truststore configurations are used while making the call. (Use instead of the deprecated schema.registry.inter.instance.protocol.)
If inter.instance.protocol and inter.instance.listener.name are both set, inter.instance.listener.name takes precedence.
- Type: string
- Default: “http”
- Importance: low
inter.instance.listener.name¶
Name of the listener used for communication between Schema Registry instances. If this value is unset, the listener used is defined by inter.instance.protocol. If both properties are set at the same time, inter.instance.listener.name takes precedence.
- Type: string
- Default: “”
- Importance: low
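The precedence rule can be sketched as follows; the listener name here is hypothetical and must match one of your configured listeners:

```properties
# If both are set, inter.instance.listener.name wins
inter.instance.protocol=https
inter.instance.listener.name=internal
```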
kafkastore.bootstrap.servers¶
A list of Kafka brokers to connect to. For example: PLAINTEXT://hostname:9092,SSL://hostname2:9092
The Kafka cluster containing the bootstrap servers specified in kafkastore.bootstrap.servers is used to coordinate Schema Registry instances (leader election) and to store schema data.
When Kafka security is enabled, kafkastore.bootstrap.servers is also used to specify the security protocols that Schema Registry uses to connect to Kafka.
- Type: list
- Default: []
- Importance: medium
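When Kafka security is enabled, the security protocol appears both in the bootstrap server URLs and in kafkastore.security.protocol; a sketch with hypothetical hostnames:

```properties
# Secured Kafka connection (hostnames and port are placeholders)
kafkastore.bootstrap.servers=SASL_SSL://kafka1:9093,SASL_SSL://kafka2:9093
kafkastore.security.protocol=SASL_SSL
```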
kafkastore.connection.url¶
REMOVED AS A METHOD OF CONFIGURING LEADER ELECTION: For leader election, use kafkastore.bootstrap.servers instead of kafkastore.connection.url.
Important
- ZooKeeper leader election was removed in Confluent Platform 7.0.0. Kafka leader election should be used instead.
- See Migration from ZooKeeper primary election to Kafka primary election for details on upgrading leader election.
- Prior to Confluent Platform 5.5.0 (5.4.x and earlier), if the Schema Registry Security Plugin for Confluent Platform was installed and configured to use ACLs, it had to connect to ZooKeeper and used kafkastore.connection.url to do so. This is no longer the case with the addition of the Schema Registry ACL Authorizer for Confluent Platform. If you do not have the ACL Authorizer, upgrade to a Confluent Platform version that has it.
ZooKeeper URL for the Apache Kafka® cluster.
- Type: string
- Default: “”
- Importance: high
kafkastore.ssl.cipher.suites¶
A list of cipher suites used for SSL.
- Type: string
- Default: “”
- Importance: low
kafkastore.ssl.endpoint.identification.algorithm¶
The endpoint identification algorithm to validate the server hostname using the server certificate.
- Type: string
- Default: https
- Importance: low
kafkastore.ssl.keymanager.algorithm¶
The algorithm used by key manager factory for TLS connections.
- Type: string
- Default: “SunX509”
- Importance: low
kafkastore.ssl.trustmanager.algorithm¶
The algorithm used by the trust manager factory for TLS connections.
- Type: string
- Default: “PKIX”
- Importance: low
kafkastore.zk.session.timeout.ms¶
ZooKeeper session timeout in milliseconds.
- Type: int
- Default: 30000
- Importance: low
kafkastore.ssl.key.password¶
The password of the key contained in the keystore.
- Type: string
- Default: “”
- Importance: high
kafkastore.ssl.keystore.location¶
The location of the TLS keystore file.
- Type: string
- Default: “”
- Importance: high
kafkastore.ssl.keystore.password¶
The password to access the keystore.
- Type: string
- Default: “”
- Importance: high
kafkastore.ssl.truststore.location¶
The location of the TLS trust store file.
- Type: string
- Default: “”
- Importance: high
kafkastore.ssl.truststore.password¶
The password to access the trust store.
- Type: string
- Default: “”
- Importance: high
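The kafkastore TLS settings above are typically set together; a sketch with placeholder paths and passwords:

```properties
# TLS connection to the Kafka store (all paths and passwords are placeholders)
kafkastore.security.protocol=SSL
kafkastore.ssl.keystore.location=/var/private/ssl/schema-registry.keystore.jks
kafkastore.ssl.keystore.password=changeit
kafkastore.ssl.key.password=changeit
kafkastore.ssl.truststore.location=/var/private/ssl/schema-registry.truststore.jks
kafkastore.ssl.truststore.password=changeit
```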
kafkastore.topic¶
The durable single partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to retention policy.
- Type: string
- Default: “_schemas”
- Importance: high
kafkastore.topic.replication.factor¶
The desired replication factor of the schema topic. The actual replication factor will be the smaller of this value and the number of live Kafka brokers.
- Type: int
- Default: 3
- Importance: high
kafkastore.init.timeout.ms¶
The timeout for initialization of the Kafka store, including creation of the Kafka topic that stores schema data.
- Type: int
- Default: 60000
- Importance: medium
kafkastore.security.protocol¶
The security protocol to use when connecting with Kafka, the underlying persistent storage. Values can be PLAINTEXT, SASL_PLAINTEXT, SSL, or SASL_SSL.
- Type: string
- Default: “PLAINTEXT”
- Importance: medium
kafkastore.ssl.enabled.protocols¶
The comma-separated list of protocols enabled for TLS connections. The default value is TLSv1.2,TLSv1.3 when running with Java 11 or later, and TLSv1.2 otherwise. With the Java 11 default (TLSv1.2,TLSv1.3), Kafka clients and brokers prefer TLSv1.3 if both support it, and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2).
- Type: string
- Default: “TLSv1.2, TLSv1.3”
- Importance: medium
kafkastore.ssl.keystore.type¶
The file format of the keystore.
- Type: string
- Default: “JKS”
- Importance: medium
kafkastore.ssl.protocol¶
The TLS protocol used to generate the SSLContext. The default is TLSv1.3 when running with Java 11 or newer, and TLSv1.2 otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are TLSv1.2 and TLSv1.3. TLS, TLSv1.1, SSL, SSLv2, and SSLv3 might be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default values for this configuration and ssl.enabled.protocols, clients downgrade to TLSv1.2 if the server does not support TLSv1.3. If this configuration is set to TLSv1.2, clients do not use TLSv1.3, even if it is one of the values in ssl.enabled.protocols and the server only supports TLSv1.3.
- Type: string
- Default: TLSv1.3
- Importance: medium
kafkastore.ssl.provider¶
The name of the security provider used for SSL.
- Type: string
- Default: “”
- Importance: medium
kafkastore.ssl.truststore.type¶
The file format of the trust store.
- Type: string
- Default: “JKS”
- Importance: medium
kafkastore.timeout.ms¶
The timeout in milliseconds for an operation on the Kafka store.
- Type: int
- Default: 500
- Importance: medium
kafkastore.sasl.kerberos.service.name¶
The Kerberos principal name that the Kafka client runs as. This can be defined either in the JAAS config file or here.
- Type: string
- Default: “”
- Importance: medium
kafkastore.sasl.mechanism¶
The SASL mechanism used for Kafka connections. GSSAPI is the default.
- Type: string
- Default: “GSSAPI”
- Importance: medium
kafkastore.sasl.kerberos.kinit.cmd¶
The Kerberos kinit command path.
- Type: string
- Default: “/usr/bin/kinit”
- Importance: low
kafkastore.sasl.kerberos.min.time.before.relogin¶
The login thread sleep time between refresh attempts.
- Type: long
- Default: 60000
- Importance: low
kafkastore.sasl.kerberos.ticket.renew.jitter¶
The percentage of random jitter added to the renewal time.
- Type: double
- Default: 0.05
- Importance: low
kafkastore.sasl.kerberos.ticket.renew.window.factor¶
Login thread will sleep until the specified window factor of time from last refresh to ticket’s expiry has been reached, at which time it will try to renew the ticket.
- Type: double
- Default: 0.8
- Importance: low
kafkastore.group.id¶
Use this setting to override group.id for the KafkaStore consumer. This setting can become important when security is enabled, to ensure stability of the Schema Registry consumer's group.id.
Without this configuration, group.id is schema-registry-<host>-<port>.
- Type: string
- Default: “”
- Importance: low
leader.eligibility¶
If true, this node can participate in primary election. In a multi-datacenter setup, turn this off for clusters in the secondary data center.
- Type: boolean
- Default: true
- Importance: medium
listeners¶
Comma-separated list of listeners that listen for API requests over either HTTP or HTTPS. If a listener uses HTTPS, the appropriate TLS configuration parameters need to be set as well.
Schema Registry identities are stored in ZooKeeper and are made up of a hostname and port. If multiple listeners are configured, the first listener’s port is used for its identity.
- Type: list
- Type: list
- Default: “http://0.0.0.0:8081”
- Importance: high
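An HTTPS listener requires the corresponding ssl.* parameters described later in this reference; a sketch with placeholder paths:

```properties
# HTTPS listener (port, paths, and passwords are placeholders)
listeners=https://0.0.0.0:8443
ssl.keystore.location=/var/private/ssl/schema-registry.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```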
metadata.encoder.topic¶
The durable single partition topic that acts as the durable log for the encoder keyset.
- Type: string
- Default: “_schema_encoders”
- Importance: high
metric.reporters¶
A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
- Type: list
- Default: []
- Importance: low
metrics.jmx.prefix¶
Prefix to apply to metric names for the default JMX reporter.
- Type: string
- Default: “kafka.schema.registry”
- Importance: low
metrics.num.samples¶
The number of samples maintained to compute metrics.
- Type: int
- Default: 2
- Importance: low
metrics.sample.window.ms¶
The metrics system maintains a configurable number of samples over a fixed window size. This configuration controls the size of the window. For example, you might maintain two samples, each measured over a 30-second period. When a window expires, the oldest window is erased and overwritten.
- Type: long
- Default: 30000
- Importance: low
password.encoder.secret¶
The secret used for encoding dynamically configured passwords for this server. This option is specific to schema exporters.
- Type: secret
- Default: null
- Importance: medium
password.encoder.old.secret¶
The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when the server starts up.
This option is specific to schema exporters.
- Type: password
- Default: null
- Importance: medium
port¶
DEPRECATED: The port to listen on for new connections.
Use listeners instead.
- Type: int
- Default: 8081
- Importance: low
proxy.host¶
Hostname or IP address of the proxy server that will be used to connect to the Schema Registry instances.
- Type: string
- Default: “”
- Importance: low
proxy.port¶
Port number of the proxy server that will be used to connect to the Schema Registry instances.
- Type: string
- Default: “”
- Importance: low
response.mediatype.default¶
The default response media type that should be used if no specific types are requested in an Accept header.
- Type: string
- Default: “application/vnd.schemaregistry.v1+json”
- Importance: high
response.mediatype.preferred¶
An ordered list of the server’s preferred media types used for responses, from most preferred to least.
- Type: list
- Default: [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
- Importance: high
response.http.headers.config¶
Use to select which HTTP headers are returned in the HTTP response for Confluent Platform components. Specify multiple values in a comma-separated string using the format [action] [header name]:[header value], where [action] is one of the following: set, add, setDate, or addDate. You must use quotation marks around the header value when the header value contains commas. For example:
response.http.headers.config="add Cache-Control: no-cache, no-store, must-revalidate", add X-XSS-Protection: 1; mode=block, add Strict-Transport-Security: max-age=31536000; includeSubDomains, add X-Content-Type-Options: nosniff
- Type: string
- Default: “”
- Importance: low
request.logger.name¶
Name of the SLF4J logger to write the NCSA Common Log Format request log.
- Type: string
- Default: “io.confluent.rest-utils.requests”
- Importance: low
resource.extension.class¶
Fully qualified class name of a valid implementation of the SchemaRegistryResourceExtension interface. This can be used to inject user-defined resources such as filters, typically to add custom capabilities like logging and security. (Use resource.extension.class instead of the deprecated schema.registry.resource.extension.class.)
- Type: list
- Default: []
- Importance: low
schema.canonicalize.on.consume¶
A list of schema formats (AVRO, JSON, or PROTOBUF) to canonicalize on consume. Use this parameter if canonicalization changes. This option is specific to the formats described in Formats, Serializers, and Deserializers.
- Type: string
- Default: “”
- Importance: high
schema.compatibility.level¶
The schema compatibility type.
Valid values are:
- none: New schema can be any valid schema.
- backward: New schema can read data produced by the latest registered schema.
- backward_transitive: New schema can read data produced by all previously registered schemas.
- forward: Latest registered schema can read data produced by the new schema.
- forward_transitive: All previously registered schemas can read data produced by the new schema.
- full: New schema is backward and forward compatible with the latest registered schema.
- full_transitive: New schema is backward and forward compatible with all previously registered schemas.
In Confluent Platform versions 5.5.0 and later, use schema.compatibility.level instead of the deprecated avro.compatibility.level.
- Type: string
- Default: “backward”
- Importance: high
See also
- Schema Evolution and Compatibility for Schema Registry on Confluent Platform
- The new property, schema.compatibility.level, is designed to support the multiple schema formats introduced in Confluent Platform 5.5.0, as described in Formats, Serializers, and Deserializers.
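For example, to make every new schema backward and forward compatible with all previously registered schemas:

```properties
# Strictest documented compatibility mode
schema.compatibility.level=full_transitive
```

Compatibility can also be set per subject through the Schema Registry REST API, which overrides this global default for that subject.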
schema.linking.rbac.enable¶
Whether or not to enable and enforce role-based access control (RBAC) for Link Schemas on Confluent Platform. To learn more, see Access Control (RBAC) for Schema Linking Exporters.
- Type: boolean
- Default: false
- Importance: medium
schema.registry.group.id¶
Schema Registry cluster ID takes its name from the Schema Registry group ID.
- Type: string
- Default: “schema-registry”
- Importance: low
schema.registry.inter.instance.protocol¶
DEPRECATED: The protocol used while making calls between the instances of Schema Registry. The secondary-to-primary node calls for writes and deletes use the specified protocol. The default value is http. When https is set, the ssl.keystore and ssl.truststore configs are used while making the call.
If schema.registry.inter.instance.protocol and inter.instance.listener.name are both set, inter.instance.listener.name takes precedence.
Use inter.instance.protocol instead.
- Type: string
- Default: “”
- Importance: low
schema.registry.resource.extension.class¶
DEPRECATED: Fully qualified class name of a valid implementation of the SchemaRegistryResourceExtension interface. This can be used to inject user-defined resources such as filters, typically to add custom capabilities like logging and security.
Use resource.extension.class instead.
- Type: string
- Default: “”
- Importance: low
schema.registry.zk.namespace¶
DEPRECATED: Configure schema.registry.group.id if you originally had schema.registry.zk.namespace for multiple Schema Registry clusters.
Important
- ZooKeeper leader election was removed in Confluent Platform 7.0.0. Kafka leader election should be used instead.
- See Migration from ZooKeeper primary election to Kafka primary election for full details.
The string that is used as the ZooKeeper namespace for storing Schema Registry metadata. Schema Registry instances which are part of the same Schema Registry service should have the same ZooKeeper namespace.
- Type: string
- Default: “schema_registry”
- Importance: low
schema.search.default.limit¶
Default limit on number of schemas returned per schema search.
- Type: int
- Default: 1000
- Importance: low
schema.search.max.limit¶
Maximum number of schemas returned per schema search.
- Type: int
- Default: 1000
- Importance: low
shutdown.graceful.ms¶
Amount of time to wait after a shutdown request for outstanding requests to complete.
- Type: int
- Default: 1000
- Importance: low
ssl.keystore.location¶
Used for HTTPS. Location of the keystore file to use for SSL.
Important
Jetty requires that the key's CN, stored in the keystore, match the FQDN.
- Type: string
- Default: “”
- Importance: high
ssl.keystore.password¶
Used for HTTPS. The store password for the keystore file.
- Type: password
- Default: “”
- Importance: high
ssl.key.password¶
Used for HTTPS. The password of the private key in the keystore file.
- Type: password
- Default: “”
- Importance: high
ssl.truststore.location¶
Used for HTTPS. Location of the trust store. Required only to authenticate HTTPS clients.
- Type: string
- Default: “”
- Importance: high
ssl.truststore.password¶
Used for HTTPS. The store password for the trust store file.
- Type: password
- Default: “”
- Importance: high
ssl.keystore.type¶
Used for HTTPS. The type of keystore file.
- Type: string
- Default: “JKS”
- Importance: medium
ssl.truststore.type¶
Used for HTTPS. The type of trust store file.
- Type: string
- Default: “JKS”
- Importance: medium
ssl.protocol¶
The TLS protocol used to generate the SSLContext. The default is TLSv1.3 when running with Java 11 or newer, and TLSv1.2 otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are TLSv1.2 and TLSv1.3. TLS, TLSv1.1, SSL, SSLv2, and SSLv3 might be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default values for this configuration and ssl.enabled.protocols, clients downgrade to TLSv1.2 if the server does not support TLSv1.3. If this configuration is set to TLSv1.2, clients do not use TLSv1.3, even if it is one of the values in ssl.enabled.protocols and the server only supports TLSv1.3.
- Type: string
- Default: TLSv1.3
- Importance: medium
ssl.provider¶
Used for HTTPS. The TLS security provider name. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: medium
ssl.client.auth¶
DEPRECATED: Used for HTTPS. Whether or not to require the HTTPS client to authenticate via the server’s trust store.
Use ssl.client.authentication
instead.
- Type: boolean
- Default: false
- Importance: medium
ssl.client.authentication¶
Used for HTTPS. Whether to require the HTTPS client to authenticate using the server’s trust store.
Valid values are NONE, REQUESTED, or REQUIRED. NONE disables TLS client authentication, REQUESTED requests but does not require TLS client authentication, and REQUIRED requires HTTPS clients to authenticate using the server's trust store. This configuration overrides the deprecated ssl.client.auth.
- Type: string
- Default: NONE
- Importance: medium
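Requiring mutual TLS on the REST API combines this setting with an HTTPS listener and a trust store; a sketch with placeholder values:

```properties
# Mutual TLS for the REST API (port, path, and password are placeholders)
listeners=https://0.0.0.0:8443
ssl.client.authentication=REQUIRED
ssl.truststore.location=/var/private/ssl/schema-registry.truststore.jks
ssl.truststore.password=changeit
```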
ssl.enabled.protocols¶
Used for HTTPS. Leave blank ("") to use Jetty's defaults.
The comma-separated list of protocols enabled for TLS connections. The default value is TLSv1.2,TLSv1.3 when running with Java 11 or later, and TLSv1.2 otherwise. With the Java 11 default (TLSv1.2,TLSv1.3), Kafka clients and brokers prefer TLSv1.3 if both support it, and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2).
- Type: list
- Default: “” (Jetty’s default)
- Importance: medium
ssl.principal.mapping.rules¶
Used for HTTPS. A list of rules for mapping distinguished name (DN) from the client certificate to short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, DN of the X.500 certificate is the principal. For details see mTLS to SASL Authentication.
For Schema Registry, use confluent.schema.registry.auth.ssl.principal.mapping.rules.
- Type: list
- Default: “DEFAULT”
- Importance: low
ssl.keymanager.algorithm¶
Used for HTTPS. The algorithm used by the key manager factory for TLS connections. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
ssl.trustmanager.algorithm¶
Used for HTTPS. The algorithm used by the trust manager factory for TLS connections. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
ssl.cipher.suites¶
Used for HTTPS. A list of TLS cipher suites. Comma-separated list. Leave blank to use Jetty’s defaults.
- Type: list
- Default: “” (Jetty’s default)
- Importance: low
ssl.endpoint.identification.algorithm¶
Used for HTTPS. The endpoint identification algorithm to validate the server hostname using the server certificate. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
zookeeper.set.acl¶
Whether or not to set an ACL in ZooKeeper when znodes are created and ZooKeeper SASL authentication is configured.
Important
If set to true, the ZooKeeper SASL principal must be the same as that of the Kafka brokers.
- Type: boolean
- Default: false
- Importance: high
License for Schema Registry Security Plugin¶
A Confluent Platform enterprise license is required for the Schema Registry Security Plugin for Confluent Platform.
For details on how to configure the plugin, including confluent.license, see the configuration options in Install and Configure the Schema Registry Security Plugin for Confluent Platform.