Control Center Parameter Reference
Base Settings
You can configure Confluent Control Center through a configuration file that is passed to Control Center on start. A sample configuration is included at etc/confluent-control-center/control-center.properties.
Parameters are provided in the form of key/value pairs. Lines beginning with # are ignored.
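For example, a minimal control-center.properties might contain just the connection settings described below; the host names here are placeholders for your own brokers and ZooKeeper ensemble:

    # Lines beginning with # are comments and are ignored
    bootstrap.servers=kafka1:9092,kafka2:9092,kafka3:9092
    zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181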
bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Apache Kafka® cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
- Type: list
- Default: “localhost:9092”
- Importance: high
zookeeper.connect
Specifies the ZooKeeper connection string in the form hostname:port, where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down, you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3,....
The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string, which puts its data under some path in the global ZooKeeper namespace. If so, the consumer should use the same chroot path in its connection string. For example, for a chroot path of /chroot/path, you would specify the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
- Type: list
- Default: “localhost:2181”
- Importance: high
confluent.license
Confluent will issue a license key to each subscriber, allowing the subscriber to unlock the full functionality of Control Center. The license key is a short snippet of text that you can copy and paste into this setting. If you do not provide a license key, Control Center will stop working after 30 days. If you are a subscriber, please contact Confluent Support for more information. confluent.controlcenter.license is a deprecated synonym for this configuration key.
- Type: string
- Default: None
- Importance: high
Production Settings
In production, you should run Control Center in a cluster that is separate from the Kafka clusters being monitored. Set the following Control Center configuration parameters for the Kafka clusters being monitored.
confluent.controlcenter.streams.cache.max.bytes.buffering
Maximum number of memory bytes used for record caches across all threads.
Tip
Consider setting this config value proportional to the total number of topic partitions being monitored. Here is an example computation (a worked sketch follows this entry):
- Each per-partition metric is stored as a key-value pair whose key consists of “topic-string, partition-id, cluster-name”. Control Center compacts the values into four long variables, recording the min, max, count, and total.
- X is the total number of topic partitions; Control Center collects seven per-partition metrics.
- Control Center uses eight stream threads by default.
The total cache size should be:
X (topic partitions) * 7 (per-partition metrics) * 8 (number of threads) * 150 (average bytes per metric).
For example, with 100000 topic partitions, the cache size should be 100000 * 7 * 8 * 150 = 840000000 bytes.
- Type: long
- Default: 10485760 bytes
- Importance: high
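A worked sketch of the sizing rule above, using the 100000-partition example (the partition count is illustrative; substitute your own totals):

    # 100000 partitions * 7 per-partition metrics * 8 threads * 150 bytes/metric = 840000000 bytes
    confluent.controlcenter.streams.cache.max.bytes.buffering=840000000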
confluent.controlcenter.kafka.<name>.bootstrap.servers
Bootstrap servers for any additional Kafka cluster being monitored. Replace <name> with the name Control Center should use to identify this cluster. For example, using confluent.controlcenter.kafka.production-nyc.bootstrap.servers, Control Center will show the additional cluster with the name production-nyc in the cluster list.
- Type: list
- Default: []
- Importance: high
confluent.controlcenter.kafka.<name>.<connection config>
Any additional connection configuration required to connect to the Kafka cluster identified by <name> can be specified using the confluent.controlcenter.kafka.<name>. prefix. For example, to specify the security.protocol=SASL_SSL configuration for the cluster named production-nyc, add confluent.controlcenter.kafka.production-nyc.security.protocol=SASL_SSL to the configuration, as shown below.
- Importance: medium
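For example, to monitor an additional cluster named production-nyc (the broker host names are placeholders), you might add:

    confluent.controlcenter.kafka.production-nyc.bootstrap.servers=nyc-kafka1:9092,nyc-kafka2:9092
    confluent.controlcenter.kafka.production-nyc.security.protocol=SASL_SSL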
confluent.controlcenter.connect.bootstrap.servers
Bootstrap servers for the Kafka cluster backing the Connect cluster. If left unspecified, falls back to the bootstrap.servers setting.
- Type: list
- Default: []
- Importance: medium
confluent.controlcenter.connect.zookeeper.connect
ZooKeeper connection string for the Kafka cluster backing the Connect cluster. If left unspecified, falls back to the zookeeper.connect setting.
- Type: string
- Default: “”
- Importance: medium
Logging
By default, Control Center writes its logs to stdout. Logging configuration is defined in etc/confluent-control-center/log4j.properties.
We also supply etc/confluent-control-center/log4j-rolling.properties as an example of setting up Control Center with rolling log files, which may be easier to manage. You can select your desired log4j configuration by setting the CONTROL_CENTER_LOG4J_OPTS environment variable when starting Control Center, as shown below.
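For example, to start Control Center with the rolling log configuration, you could export the variable before launching; this sketch assumes the standard log4j.configuration system property, and the path is a placeholder for your installation directory:

    export CONTROL_CENTER_LOG4J_OPTS="-Dlog4j.configuration=file:/path/to/etc/confluent-control-center/log4j-rolling.properties"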
Optional Settings
You can change other parameters that control how Control Center behaves, such as internal topic names, data file locations, and replication settings. The default values for most of these settings are suitable for production use, but you can change them if needed.
General
confluent.controlcenter.connect.cluster
Comma-separated list of Connect worker URLs within a single cluster. Control Center will connect to a single worker, and if that worker fails it will retry the request against a different worker. This must be set if you wish to manage a Connect cluster.
- Type: string
- Default: “localhost:8083”
- Importance: high
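For example, to manage a Connect cluster with two workers (hypothetical host names):

    confluent.controlcenter.connect.cluster=connect-worker-1:8083,connect-worker-2:8083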
confluent.controlcenter.data.dir
Location for Control Center specific data. Although the data stored in this directory can be recomputed, doing so is expensive and can affect the availability of Control Center’s stream monitoring functionality. For production, you should set this to a durable location.
- Type: path
- Default: “/var/lib/confluent-control-center”
- Importance: high
confluent.controlcenter.rest.listeners
Comma-separated list of listeners that listen for API requests over either http or https. If a listener uses https, the appropriate SSL configuration parameters need to be set as well. The first value will be used as a Control Center link in the body of eligible alert emails sent from Control Center. For details, see Alerts History.
- Type: list
- Default: “http://0.0.0.0:9021”
- Importance: high
confluent.controlcenter.schema.registry.url
Schema Registry URL. For more information, see the Schema Registry documentation.
- Type: string
- Default: “http://localhost:8081”
- Importance: high
confluent.controlcenter.id
Identifier used as a prefix so that multiple instances of Control Center can co-exist.
- Type: string
- Default: “1”
- Importance: low
confluent.controlcenter.name
Control Center Name
- Type: string
- Default: “_confluent-controlcenter-5.1.4”
- Importance: low
confluent.controlcenter.internal.topics.partitions
Number of partitions used internally by Control Center.
- Type: integer
- Default: 4
- Importance: low
confluent.controlcenter.internal.topics.replication
Replication factor used internally by Control Center. It is not recommended to reduce this value except in a development environment.
- Type: integer
- Default: 3
- Importance: low
confluent.controlcenter.internal.topics.retention.ms
Maximum time in milliseconds that internal data is stored in Kafka.
- Type: long
- Default: 86400000
- Importance: low
confluent.controlcenter.internal.topics.changelog.segment.bytes
Segment size in bytes for internal changelog topics in Kafka. This must be at most log.cleaner.dedupe.buffer.size / log.cleaner.threads (broker settings) to guarantee enough space in the broker's dedupe buffer for compaction to work; see the sketch after this entry.
- Type: long
- Default: 134217728
- Importance: low
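A sketch of the constraint above, assuming Kafka's default broker settings of log.cleaner.dedupe.buffer.size=134217728 and log.cleaner.threads=1:

    # 134217728 / 1 = 134217728, so the default segment size fits the dedupe buffer exactly
    confluent.controlcenter.internal.topics.changelog.segment.bytes=134217728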
confluent.controlcenter.connect.timeout.ms
Timeout in milliseconds for calls to the Connect cluster.
- Type: long
- Default: 15000
- Importance: low
confluent.metrics.topic
Topic from which metrics data will be read.
- Type: string
- Default: “_confluent-metrics”
- Importance: low
confluent.metrics.topic.replication
Replication factor for metrics topic. It is not recommended to reduce this value except in a development environment.
- Type: int
- Default: 3
- Importance: low
confluent.metrics.topic.partitions
Partition count for metrics topic
- Type: int
- Default: 12
- Importance: low
confluent.metrics.topic.skip.backlog.minutes
Skip backlog older than this many minutes for broker metrics data. Set this to 0 if you want to process from the latest offsets. This config overrides confluent.controlcenter.streams.consumer.auto.offset.reset (deprecated) for the metrics input topic.
- Type: long
- Default: 15
confluent.controlcenter.disk.skew.warning.min.bytes
Threshold for the maximum difference in disk usage across all brokers before a disk skew warning is published.
- Type: long
- Default: 1073741824
- Importance: low
confluent.support.metrics.enable
Enable support metrics collection.
- Type: boolean
- Default: true
confluent.controlcenter.alert.cluster.down.autocreate
Auto create a trigger and an email action for Control Center’s cluster down alerts
- Type: boolean
- Default: false
- Importance: low
confluent.controlcenter.alert.cluster.down.to.email
Email to send alerts to when Control Center’s cluster is down
- Type: string
- Default: “”
- Importance: low
confluent.controlcenter.alert.cluster.down.send.rate
Send rate per hour for auto created cluster down email alerts (defaults to every 5 minutes)
- Type: long
- Default: 12
- Importance: low
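For example, to auto-create a cluster down trigger that emails an operations address at most once every 5 minutes (the address is a placeholder):

    confluent.controlcenter.alert.cluster.down.autocreate=true
    confluent.controlcenter.alert.cluster.down.to.email=ops@example.com
    # 12 emails per hour, i.e. at most one every 5 minutes (the default rate)
    confluent.controlcenter.alert.cluster.down.send.rate=12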
Monitoring Settings
These optional settings are for the Stream Monitoring functionality. The default settings work for the majority of use cases and scales.
confluent.monitoring.interceptor.topic
The Kafka topic that stores monitoring interceptor data. This setting must match the confluent.monitoring.interceptor.topic configuration used by the interceptors in your application (see the example after this entry). Usually you should not change this setting unless you are running multiple instances of Control Center with client monitoring interceptor data being reported to the same Kafka cluster.
- Type: string
- Default: “_confluent-monitoring”
- Importance: high
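For example, if your application's interceptors report to a non-default topic, Control Center must be pointed at the same topic; the topic name here is illustrative:

    # The client application's interceptor configuration would set:
    #   confluent.monitoring.interceptor.topic=_confluent-monitoring-east
    # control-center.properties must then use the matching value:
    confluent.monitoring.interceptor.topic=_confluent-monitoring-east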
confluent.monitoring.interceptor.topic.partitions
Number of partitions for the monitoring interceptor data topic
- Type: integer
- Default: 12
- Importance: low
confluent.monitoring.interceptor.topic.replication
Replication factor for monitoring topic. It is not recommended to reduce this value except in a development environment.
- Type: int
- Default: 3
- Importance: low
confluent.monitoring.interceptor.topic.retention.ms
Maximum time in milliseconds that interceptor data is stored in Kafka.
- Type: long
- Default: None
- Importance: low
confluent.monitoring.interceptor.topic.skip.backlog.minutes
Skip backlog older than this many minutes for monitoring interceptor data. Set this to 0 if you want to process from the latest offsets. This config overrides confluent.controlcenter.streams.consumer.auto.offset.reset (deprecated) for the monitoring input topic.
- Type: long
- Default: 15
- Importance: low
UI Authentication Settings
These optional settings allow you to enable and configure authentication for accessing the Control Center web interface. See the UI Authentication guide for more detail on configuring authentication.
confluent.controlcenter.rest.authentication.method
Authentication method to use. One of [NONE, BASIC].
- Type: string
- Default: NONE
- Importance: low
confluent.controlcenter.rest.authentication.realm
Realm to be used by Control Center when authenticating.
- Type: string
- Default: “”
- Importance: low
confluent.controlcenter.rest.authentication.roles
Roles that are authenticated to access Control Center.
- Type: string
- Default: “*”
- Importance: low
confluent.controlcenter.auth.restricted.roles
List of roles with limited access: these roles cannot edit or create anything via the UI. Any role here must also be added to confluent.controlcenter.rest.authentication.roles.
- Type: list
- Default: “”
- Importance: low
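For example, to enable BASIC authentication with an administrator role and a read-only role (the realm and role names are placeholders; the realm must correspond to your login configuration as described in the UI Authentication guide):

    confluent.controlcenter.rest.authentication.method=BASIC
    confluent.controlcenter.rest.authentication.realm=c3
    confluent.controlcenter.rest.authentication.roles=Administrators,Restricted
    # Members of Restricted can view but not edit or create via the UI
    confluent.controlcenter.auth.restricted.roles=Restricted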
Email Settings
These optional settings control the SMTP server and account used when an alert triggers the email action.
Important
The body of the email alert is populated with the first hostname specified in the confluent.controlcenter.rest.listeners property. The default value is localhost:9021.
confluent.controlcenter.mail.enabled
Enable email alerts. If this setting is false you will not be able to add email alert actions in the web user interface.
- Type: boolean
- Default: false
- Importance: low
confluent.controlcenter.mail.host.name
Hostname of outgoing SMTP server.
- Type: string
- Default: localhost
- Importance: low
confluent.controlcenter.mail.port
SMTP port open on confluent.controlcenter.mail.host.name.
- Type: integer
- Default: 587
- Importance: low
confluent.controlcenter.mail.from
The originating address for emails sent from Control Center.
- Type: string
- Default: c3@confluent.io
- Importance: low
confluent.controlcenter.mail.bounce.address
Override for the confluent.controlcenter.mail.from config to send message bounce notifications.
- Type: string
- Importance: low
confluent.controlcenter.mail.ssl.checkserveridentity
Forces validation of the server's certificate when using STARTTLS or SSL.
- Type: boolean
- Default: false
- Importance: low
confluent.controlcenter.mail.starttls.required
Forces using STARTTLS.
- Type: boolean
- Default: false
- Importance: low
confluent.controlcenter.mail.username
Username for username/password authentication. Authentication with your SMTP server will only be performed if this value is set.
- Type: string
- Importance: low
confluent.controlcenter.mail.password
Password for username/password authentication.
- Type: string
- Importance: low
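For example, a minimal SMTP setup with STARTTLS and authentication might look like the following; the host, addresses, and credentials are placeholders:

    confluent.controlcenter.mail.enabled=true
    confluent.controlcenter.mail.host.name=smtp.example.com
    confluent.controlcenter.mail.port=587
    confluent.controlcenter.mail.from=c3-alerts@example.com
    confluent.controlcenter.mail.starttls.required=true
    confluent.controlcenter.mail.ssl.checkserveridentity=true
    confluent.controlcenter.mail.username=c3-alerts
    confluent.controlcenter.mail.password=<smtp-password>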
Kafka Encryption, Authentication, Authorization Settings
These settings control the authentication and authorization between Control Center and the Kafka cluster containing its data, including the Stream Monitoring and System Health metrics. You will need to configure these settings if you have configured your Kafka cluster with any security features.
Note that these are the standard Kafka authentication and authorization settings with the confluent.controlcenter.streams. prefix.
confluent.controlcenter.streams.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: PLAINTEXT
- Importance: low
confluent.controlcenter.streams.ssl.keystore.location
The location of the key store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.keystore.password
The store password for the key store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.key.password
The password of the private key in the key store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.truststore.location
The location of the trust store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.ssl.truststore.password
The password for the trust store file.
- Type: string
- Default: None
- Importance: low
confluent.controlcenter.streams.sasl.mechanism
SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
- Type: string
- Default: GSSAPI
- Importance: low
confluent.controlcenter.streams.sasl.kerberos.service.name
The Kerberos principal name that Kafka runs as. This can be defined either in Kafka’s JAAS config or in Kafka’s config.
- Type: string
- Default: None
- Importance: low
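For example, to connect Control Center's internal clients to a cluster secured with SASL_SSL (the truststore path and password are placeholders; depending on your SASL mechanism you may also need additional prefixed client settings such as a JAAS configuration):

    confluent.controlcenter.streams.security.protocol=SASL_SSL
    confluent.controlcenter.streams.sasl.mechanism=PLAIN
    confluent.controlcenter.streams.ssl.truststore.location=/var/private/ssl/c3.truststore.jks
    confluent.controlcenter.streams.ssl.truststore.password=<truststore-password>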
Access Control Settings
These settings control whether Control Center users have access to message inspection, KSQL, and the Schema Registry. They apply to all clusters managed by the current Control Center installation. By default, all features are enabled.
confluent.controlcenter.topic.inspection.enable
Enable users to inspect topics.
- Type: boolean
- Default: true
- Importance: low
confluent.controlcenter.ksql.enable
Enable user access to the KSQL UI.
- Type: boolean
- Default: true
- Importance: low
confluent.controlcenter.schema.registry.enable
Enable user access to Schema Registry.
- Type: boolean
- Default: true
- Importance: low
HTTPS Settings
If you secure web access to Control Center with SSL, you may also need to configure the following parameters.
confluent.controlcenter.rest.listeners
Comma-separated list of listeners that listen for API requests over either http or https. If a listener uses https, the appropriate SSL configuration parameters need to be set as well. The first value will be used as a Control Center link in the body of eligible alert emails sent from Control Center. For details, see Alerts History.
- Type: list
- Default: “http://0.0.0.0:9021”
- Importance: high
confluent.controlcenter.rest.ssl.keystore.location
Used for https. Location of the keystore file to use for SSL. IMPORTANT: Jetty requires that the key’s CN, stored in the keystore, must match the FQDN.
- Type: string
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.keystore.password
Used for https. The store password for the keystore file.
- Type: password
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.key.password
Used for https. The password of the private key in the keystore file.
- Type: password
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.truststore.location
Used for https. Location of the trust store. Required only to authenticate https clients.
- Type: string
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.truststore.password
Used for https. The store password for the trust store file.
- Type: password
- Default: “”
- Importance: high
confluent.controlcenter.rest.ssl.keystore.type
Used for https. The type of keystore file.
- Type: string
- Default: “JKS”
- Importance: medium
confluent.controlcenter.rest.ssl.truststore.type
Used for https. The type of trust store file.
- Type: string
- Default: “JKS”
- Importance: medium
confluent.controlcenter.rest.ssl.protocol
Used for https. The SSL protocol used to generate the SslContextFactory.
- Type: string
- Default: “TLS”
- Importance: medium
confluent.controlcenter.rest.ssl.provider
Used for https. The SSL security provider name. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: medium
confluent.controlcenter.rest.ssl.client.auth
Used for https. Whether or not to require the https client to authenticate via the server’s trust store.
- Type: boolean
- Default: false
- Importance: medium
confluent.controlcenter.rest.ssl.enabled.protocols
Used for https. The list of protocols enabled for SSL connections. Comma-separated list. Leave blank to use Jetty’s defaults.
- Type: list
- Default: “” (Jetty’s default)
- Importance: medium
confluent.controlcenter.rest.ssl.keymanager.algorithm
Used for https. The algorithm used by the key manager factory for SSL connections. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
confluent.controlcenter.rest.ssl.trustmanager.algorithm
Used for https. The algorithm used by the trust manager factory for SSL connections. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
confluent.controlcenter.rest.ssl.cipher.suites
Used for https. A list of SSL cipher suites. Comma-separated list. Leave blank to use Jetty’s defaults.
- Type: list
- Default: “” (Jetty’s default)
- Importance: low
confluent.controlcenter.rest.ssl.endpoint.identification.algorithm
Used for https. The endpoint identification algorithm to validate the server hostname using the server certificate. Leave blank to use Jetty’s default.
- Type: string
- Default: “” (Jetty’s default)
- Importance: low
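For example, to serve the Control Center UI over https only (the listener port, keystore path, and passwords are placeholders):

    confluent.controlcenter.rest.listeners=https://0.0.0.0:9022
    confluent.controlcenter.rest.ssl.keystore.location=/var/private/ssl/control-center.keystore.jks
    confluent.controlcenter.rest.ssl.keystore.password=<keystore-password>
    confluent.controlcenter.rest.ssl.key.password=<key-password>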
KSQL Settings
You can use Control Center to interact with KSQL Server, which runs separately from your Kafka clusters. For access control configuration related to KSQL, see Access Control Settings.
confluent.controlcenter.ksql.advertised.url
The advertised URL for accessing the KSQL server from Control Center. By default this is set to the value specified in confluent.controlcenter.ksql.url. This hostname must be reachable from any browser that will use the KSQL web interface in Control Center.
For example, if KSQL is communicating over an internal DNS name that is not externally resolvable or routable (e.g., if running in Docker for Mac), then the advertised URL must be set so the browser can resolve the externally available DNS name at which KSQL is reachable. For more information, see Integrate KSQL with Confluent Control Center.
- Type: string
- Default: “”
- Importance: high
confluent.controlcenter.ksql.url
The KSQL server hostname and listener port. If left empty, this defaults to http://localhost:8088. This hostname must be reachable from the machine on which Control Center is installed. For more information, see Integrate KSQL with Confluent Control Center.
- Type: string
- Default: “http://localhost:8088”
- Importance: high
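For example, if Control Center reaches KSQL over an internal host name while browsers must use an external one (both host names are illustrative):

    # Reachable from the Control Center host
    confluent.controlcenter.ksql.url=http://ksql-internal:8088
    # Reachable from users' browsers
    confluent.controlcenter.ksql.advertised.url=http://ksql.example.com:8088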
Internal Kafka Streams Settings
Because Control Center reads and writes data to Kafka, we also allow you to change some producer and consumer configurations. We do not recommend changing these values unless advised by Confluent Support. Some examples of values used internally are given. These settings map 1:1 to the producer/consumer configs used internally by Confluent Control Center and all use the confluent.controlcenter.streams.{producer,consumer}. prefix.
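For example, the prefixed settings below, both documented later in this section and shown here with their default values, map directly to the internal producer's compression.type and retry.backoff.ms:

    # Maps to the internal producer's compression.type
    confluent.controlcenter.streams.producer.compression.type=lz4
    # Maps to the internal producer's retry.backoff.ms
    confluent.controlcenter.streams.producer.retry.backoff.ms=100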
confluent.controlcenter.streams.num.stream.threads
The number of threads to execute stream processing
- Type: integer
- Default: 8
- Importance: low
confluent.controlcenter.streams.retries
Number of times to retry client requests failing with transient errors. Does not apply to producer retries, which are defined using the setting below.
- Type: integer
- Default: maximum integer (effectively infinite)
- Importance: low
confluent.controlcenter.streams.producer.retries
Number of retries in case of production failure
- Type: integer
- Default: maximum integer (effectively infinite)
- Importance: low
confluent.controlcenter.streams.producer.retry.backoff.ms
Time to wait before retrying in case of production failure
- Type: long
- Default: 100
- Importance: low
confluent.controlcenter.streams.producer.compression.type
Compression type to use on internal topic production
- Type: string
- Default: lz4
- Importance: low
Internal Command Settings
The command topic is used to store Control Center's internal configuration data. It reuses the defaults/overrides for Kafka Streams, but allows the following overrides.
confluent.controlcenter.command.topic
Topic used to store Control Center configuration
- Type: string
- Default: “_confluent-command”
- Importance: low
confluent.controlcenter.command.topic.replication
Replication factor for command topic. It is not recommended to reduce this value except in a development environment.
- Type: int
- Default: 3
- Importance: low