Confluent Control Center Installation
This topic provides instructions for installing Control Center. For Confluent Platform installations, see Install Confluent Platform On-Premises.
For Control Center system requirements and compatibility with Confluent Platform, see System Requirements and Compatibility.
Single-node manual installation
Use these steps for single-node manual installation of Control Center with Confluent Platform.
Docker
The following steps install Confluent Platform 8.0 and Control Center 2.2.
To install Control Center with Docker
Clone the Control Center public repository:
git clone --branch control-center https://github.com/confluentinc/cp-all-in-one.git
Change directories into the cp-all-in-one directory:
cd cp-all-in-one/cp-all-in-one
Check out the 8.0.0-post branch:
git checkout 8.0.0-post
Run the docker compose command.
docker compose up -d
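To confirm the containers came up, you can list their status. This is a minimal check; the service names come from the compose file and may vary:
docker compose ps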
Archive
Install Control Center and Confluent Platform using archives on a single node.
- Considerations:
Control Center introduces a new directory structure that differs from the directory structure used with Control Center (Legacy). In earlier versions of Confluent Platform, there was a single main directory, commonly referenced as CONFLUENT_HOME, and all components, including Control Center (Legacy), were inside this main directory (for example, CONFLUENT_HOME/control-center). Control Center now has its own top-level directory, CONTROL_CENTER_HOME. CONTROL_CENTER_HOME is placed at the same hierarchical level as CONFLUENT_HOME, not inside it.
The steps below present the optimal order in which to install Confluent Platform with Control Center.
- Prerequisites:
Provision a new virtual machine (VM) for Control Center on the same network as the Confluent Platform clusters that you want to monitor.
For VM sizing recommendations, see System requirements.
Install the same version of OpenJDK that is used by your existing Control Center (Legacy) installation (openjdk-8-jdk, openjdk-11-jdk, or openjdk-17-jdk).
On the Control Center VM, open ports 9090 (Control Center) and 9021 (Control Center user interface).
On every broker or KRaft controller, ensure that you can send outgoing HTTP traffic to port 9090 on the Control Center VM.
- Considerations:
With local installations, the default port settings are as follows: Alertmanager uses port 9098 and controllers in KRaft mode use port 9093.
Download the Confluent Platform 8.1 archive and run these commands:
wget https://packages.confluent.io/archive/8.1/confluent-8.1.0.tar.gz
tar -xvf confluent-8.1.0.tar.gz
cd confluent-8.1.0
export CONFLUENT_HOME=`pwd`
Download the Control Center archive and run these commands:
wget https://packages.confluent.io/confluent-control-center-next-gen/archive/confluent-control-center-next-gen-2.3.0.tar.gz
tar -xvf confluent-control-center-next-gen-2.3.0.tar.gz
cd confluent-control-center-next-gen-2.3.0
export CONTROL_CENTER_HOME=`pwd`
Add the $CONFLUENT_HOME/bin directory to your PATH:
export PATH=$PATH:$CONFLUENT_HOME/bin
Use the Confluent CLI to run the following command:
confluent local services start
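To verify that the services started, you can check their status with the Confluent CLI:
confluent local services status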
- Considerations:
You must use a special command to start Prometheus on macOS.
By default, Alertmanager and KRaft mode controllers both use port 9093. To run Prometheus, Alertmanager, and KRaft mode controllers on the same host, you must manually edit the provided Control Center scripts.
Download the Confluent Platform archive (7.7 to 8.0 supported) and run these commands:
wget https://packages.confluent.io/archive/8.0/confluent-8.0.0.tar.gz
tar -xvf confluent-8.0.0.tar.gz
cd confluent-8.0.0
export CONFLUENT_HOME=`pwd`
Update the broker and controller configurations to emit metrics to Prometheus by appending the following configurations to etc/kafka/controller.properties and etc/kafka/broker.properties.
The confluent.telemetry.exporter._c3.metrics.include=<value> line is very long. Simply copy the code block as provided and append it to the end of the properties files. Pasting that line results in a single line, even though it shows as wrapped in the documentation.
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter
confluent.telemetry.exporter._c3.type=http
confluent.telemetry.exporter._c3.enabled=true
confluent.telemetry.exporter._c3.metrics.include=io.confluent.kafka.server.request.(?!.*delta).*|io.confluent.kafka.server.server.broker.state|io.confluent.kafka.server.replica.manager.leader.count|io.confluent.kafka.server.request.queue.size|io.confluent.kafka.server.broker.topic.failed.produce.requests.rate.1.min|io.confluent.kafka.server.tier.archiver.total.lag|io.confluent.kafka.server.request.total.time.ms.p99|io.confluent.kafka.server.broker.topic.failed.fetch.requests.rate.1.min|io.confluent.kafka.server.broker.topic.total.fetch.requests.rate.1.min|io.confluent.kafka.server.partition.caught.up.replicas.count|io.confluent.kafka.server.partition.observer.replicas.count|io.confluent.kafka.server.tier.tasks.num.partitions.in.error|io.confluent.kafka.server.broker.topic.bytes.out.rate.1.min|io.confluent.kafka.server.request.total.time.ms.p95|io.confluent.kafka.server.controller.active.controller.count|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.total|io.confluent.kafka.server.request.total.time.ms.p999|io.confluent.kafka.server.controller.active.broker.count|io.confluent.kafka.server.request.handler.pool.request.handler.avg.idle.percent.rate.1.min|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.rate.1.min|io.confluent.kafka.server.controller.unclean.leader.elections.rate.1.min|io.confluent.kafka.server.replica.manager.partition.count|io.confluent.kafka.server.controller.unclean.leader.elections.total|io.confluent.kafka.server.partition.replicas.count|io.confluent.kafka.server.broker.topic.total.produce.requests.rate.1.min|io.confluent.kafka.server.controller.offline.partitions.count|io.confluent.kafka.server.socket.server.network.processor.avg.idle.percent|io.confluent.kafka.server.partition.under.replicated|io.confluent.kafka.server.log.log.start.offset|io.confluent.kafka.server.log.tier.size|io.confluent.kafka.server.log.size|io.confluent.kafka.server.tier.fetcher.bytes.fetched.total|io.confluent.kafka.server.request.total.time.ms.p50|io.confluent.kafka.server.tenant.consumer.lag.offsets|io.confluent.kafka.server.session.expire.listener.zookeeper.expires.rate.1.min|io.confluent.kafka.server.log.log.end.offset|io.confluent.kafka.server.broker.topic.bytes.in.rate.1.min|io.confluent.kafka.server.partition.under.min.isr|io.confluent.kafka.server.partition.in.sync.replicas.count|io.confluent.telemetry.http.exporter.batches.dropped|io.confluent.telemetry.http.exporter.items.total|io.confluent.telemetry.http.exporter.items.succeeded|io.confluent.telemetry.http.exporter.send.time.total.millis|io.confluent.kafka.server.controller.leader.election.rate.(?!.*delta).*|io.confluent.telemetry.http.exporter.batches.failed
confluent.telemetry.exporter._c3.client.base.url=http://localhost:9090/api/v1/otlp
confluent.telemetry.exporter._c3.client.compression=gzip
confluent.telemetry.exporter._c3.api.key=dummy
confluent.telemetry.exporter._c3.api.secret=dummy
confluent.telemetry.exporter._c3.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3.buffer.inflight.submissions.max=10
confluent.telemetry.metrics.collector.interval.ms=60000
confluent.telemetry.remoteconfig._confluent.enabled=false
confluent.consumer.lag.emitter.enabled=true
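Because the same block goes into both files, one way to avoid copy/paste errors is to save it to a scratch file and append it to each properties file. A minimal sketch, assuming you saved the block as telemetry-c3.properties (a hypothetical file name):
for f in etc/kafka/controller.properties etc/kafka/broker.properties; do
  # Append the shared telemetry block to each properties file.
  cat telemetry-c3.properties >> "$f"
done
# Sanity check: the long metrics.include property must appear exactly once per file.
grep -c 'confluent.telemetry.exporter._c3.metrics.include' etc/kafka/controller.properties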
Download the Control Center archive and run these commands:
wget https://packages.confluent.io/confluent-control-center-next-gen/archive/confluent-control-center-next-gen-2.3.0.tar.gz
tar -xvf confluent-control-center-next-gen-2.3.0.tar.gz
cd confluent-control-center-next-gen-2.3.0
export C3_HOME=`pwd`
Start Prometheus and Alertmanager
To start Control Center, you must have three dedicated command windows: one for Prometheus, another for the Control Center process, and a third for Alertmanager. Run the following commands from $C3_HOME in all command windows.
Open etc/confluent-control-center/prometheus-generated.yml and change localhost:9093 to localhost:9098:
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9098
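If you prefer to script this edit, a single substitution does it. A minimal sketch using GNU sed (on macOS, BSD sed requires -i '' instead of -i):
sed -i 's/localhost:9093/localhost:9098/' etc/confluent-control-center/prometheus-generated.yml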
Start Prometheus.
All operating systems except macOS:
bin/prometheus-start
macOS:
bash bin/prometheus-start
Note
Prometheus runs but does not output any information to the screen.
Start Alertmanager.
Run this command:
export ALERTMANAGER_PORT=9098
All operating systems except macOS:
bin/alertmanager-start
macOS:
bash bin/alertmanager-start
Start Control Center.
Open etc/confluent-control-center/control-center-dev.properties and update port 9093 to 9098:
confluent.controlcenter.alertmanager.url=http://localhost:9098
Run this command:
bin/control-center-start etc/confluent-control-center/control-center-dev.properties
Start Confluent Platform.
To start Confluent Platform, you must have two dedicated command windows: one for the controller and another for the broker process. Run all of the following commands from CONFLUENT_HOME in both command windows. The Confluent Platform start sequence requires you to generate a single random ID and use that same ID for both the controller and the broker process.
In the command window dedicated to running the controller, change directories into CONFLUENT_HOME:
cd $CONFLUENT_HOME
Generate a random value for KAFKA_CLUSTER_ID:
KAFKA_CLUSTER_ID="$(bin/kafka-storage random-uuid)"
Use the following command to get the random ID and save the output. You need this value to start the controller and the broker.
echo $KAFKA_CLUSTER_ID
Format the log directories for the controller:
bin/kafka-storage format --cluster-id $KAFKA_CLUSTER_ID -c etc/kafka/kraft/controller.properties --standalone
Start the controller:
bin/kafka-server-start etc/kafka/kraft/controller.properties
Open a command window for the broker and navigate to CONFLUENT_HOME:
cd $CONFLUENT_HOME
Set the KAFKA_CLUSTER_ID variable to the random ID you generated earlier with kafka-storage random-uuid:
export KAFKA_CLUSTER_ID=<KAFKA-CLUSTER-ID>
Format the log directories for this broker:
bin/kafka-storage format --cluster-id $KAFKA_CLUSTER_ID -c etc/kafka/kraft/broker.properties
Start the broker:
bin/kafka-server-start etc/kafka/kraft/broker.properties
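As a quick sanity check that the broker is accepting connections, you can list topics against it. This assumes the sample broker.properties listener on localhost:9092:
bin/kafka-topics --bootstrap-server localhost:9092 --list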
- Considerations:
You must use a special command to start Prometheus on macOS.
Download the Confluent Platform archive (7.7 to 7.9 supported) and run these commands:
wget https://packages.confluent.io/archive/7.9/confluent-7.9.0.tar.gz
tar -xvf confluent-7.9.0.tar.gz
cd confluent-7.9.0
export CONFLUENT_HOME=`pwd`
Update the broker configuration to emit metrics to Prometheus by appending the following configurations to etc/kafka/server.properties:
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter
confluent.telemetry.exporter._c3.type=http
confluent.telemetry.exporter._c3.enabled=true
confluent.telemetry.exporter._c3.metrics.include=io.confluent.kafka.server.request.(?!.*delta).*|io.confluent.kafka.server.server.broker.state|io.confluent.kafka.server.replica.manager.leader.count|io.confluent.kafka.server.request.queue.size|io.confluent.kafka.server.broker.topic.failed.produce.requests.rate.1.min|io.confluent.kafka.server.tier.archiver.total.lag|io.confluent.kafka.server.request.total.time.ms.p99|io.confluent.kafka.server.broker.topic.failed.fetch.requests.rate.1.min|io.confluent.kafka.server.broker.topic.total.fetch.requests.rate.1.min|io.confluent.kafka.server.partition.caught.up.replicas.count|io.confluent.kafka.server.partition.observer.replicas.count|io.confluent.kafka.server.tier.tasks.num.partitions.in.error|io.confluent.kafka.server.broker.topic.bytes.out.rate.1.min|io.confluent.kafka.server.request.total.time.ms.p95|io.confluent.kafka.server.controller.active.controller.count|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.total|io.confluent.kafka.server.request.total.time.ms.p999|io.confluent.kafka.server.controller.active.broker.count|io.confluent.kafka.server.request.handler.pool.request.handler.avg.idle.percent.rate.1.min|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.rate.1.min|io.confluent.kafka.server.controller.unclean.leader.elections.rate.1.min|io.confluent.kafka.server.replica.manager.partition.count|io.confluent.kafka.server.controller.unclean.leader.elections.total|io.confluent.kafka.server.partition.replicas.count|io.confluent.kafka.server.broker.topic.total.produce.requests.rate.1.min|io.confluent.kafka.server.controller.offline.partitions.count|io.confluent.kafka.server.socket.server.network.processor.avg.idle.percent|io.confluent.kafka.server.partition.under.replicated|io.confluent.kafka.server.log.log.start.offset|io.confluent.kafka.server.log.tier.size|io.confluent.kafka.server.log.size|io.confluent.kafka.server.tier.fetcher.bytes.fetched.total|io.confluent.kafka.server.request.total.time.ms.p50|io.confluent.kafka.server.tenant.consumer.lag.offsets|io.confluent.kafka.server.session.expire.listener.zookeeper.expires.rate.1.min|io.confluent.kafka.server.log.log.end.offset|io.confluent.kafka.server.broker.topic.bytes.in.rate.1.min|io.confluent.kafka.server.partition.under.min.isr|io.confluent.kafka.server.partition.in.sync.replicas.count|io.confluent.telemetry.http.exporter.batches.dropped|io.confluent.telemetry.http.exporter.items.total|io.confluent.telemetry.http.exporter.items.succeeded|io.confluent.telemetry.http.exporter.send.time.total.millis|io.confluent.kafka.server.controller.leader.election.rate.(?!.*delta).*|io.confluent.telemetry.http.exporter.batches.failed
confluent.telemetry.exporter._c3.client.base.url=http://localhost:9090/api/v1/otlp
confluent.telemetry.exporter._c3.client.compression=gzip
confluent.telemetry.exporter._c3.api.key=dummy
confluent.telemetry.exporter._c3.api.secret=dummy
confluent.telemetry.exporter._c3.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3.buffer.inflight.submissions.max=10
confluent.telemetry.metrics.collector.interval.ms=60000
confluent.telemetry.remoteconfig._confluent.enabled=false
confluent.consumer.lag.emitter.enabled=true
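After appending the block, a quick way to confirm the properties landed and the long metrics.include value stayed on a single line (it should match exactly once):
grep -c 'confluent.telemetry.exporter._c3.metrics.include' etc/kafka/server.properties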
Download the Control Center archive and run these commands:
wget https://packages.confluent.io/confluent-control-center-next-gen/archive/confluent-control-center-next-gen-2.3.0.tar.gz
tar -xvf confluent-control-center-next-gen-2.3.0.tar.gz
cd confluent-control-center-next-gen-2.3.0
Start Control Center.
To start Control Center, you must have three dedicated command windows: one for Prometheus, another for the Control Center process, and a third for Alertmanager. Run the following commands from CONTROL_CENTER_HOME in all command windows.
Start Prometheus.
bin/prometheus-start
Start Alertmanager.
bin/alertmanager-start
Start Control Center.
bin/control-center-start etc/confluent-control-center/control-center-dev.properties
Start Confluent Platform.
Start ZooKeeper.
bin/zookeeper-server-start etc/kafka/zookeeper.properties
Start Kafka.
bin/kafka-server-start etc/kafka/server.properties
Multi-node manual installation
Use these steps for multi-node manual installation of Control Center and Confluent Platform.
Provision a new node using any of the Confluent Platform supported operating systems. For more information, see Supported operating systems. Log in to the VM on which you will install Confluent Platform.
Install Control Center on a new node or VM. To ensure a smooth transition, allow Control Center (Legacy) users to continue using Control Center (Legacy) until Control Center has gathered 7 to 15 days of historical metrics. For more information, see Migration.
Log in to the VM and install Control Center. For more information, see Compatibility with Confluent Platform.
Use the instructions for installing Confluent Platform, but make sure to use the base URL and properties from these instructions to install Control Center.
For more information, see Confluent Platform System Requirements, Install Confluent Platform using Systemd on Ubuntu and Debian, and Install Confluent Platform using Systemd on RHEL, CentOS, and Fedora-based Linux.
Ubuntu and Debian
export BASE_URL=https://packages.confluent.io/confluent-control-center-next-gen/deb/
sudo apt-get update
wget ${BASE_URL}archive.key
sudo apt-key add archive.key
sudo add-apt-repository -y "deb ${BASE_URL} stable main"
sudo apt update
sudo apt install -y confluent-control-center-next-gen
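To confirm the package installed, you can query dpkg; a minimal check:
dpkg -s confluent-control-center-next-gen | grep -E '^(Status|Version)'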
RHEL, CentOS, and Fedora-based Linux
export base_url=https://packages.confluent.io/confluent-control-center-next-gen/rpm/
cat <<EOF | sudo tee /etc/yum.repos.d/Confluent.repo > /dev/null
[Confluent]
name=Confluent repository
baseurl=${base_url}
gpgcheck=1
gpgkey=${base_url}archive.key
enabled=1
EOF
sudo yum install -y confluent-control-center-next-gen cyrus-sasl openssl-devel
Install Java for your operating system (if not installed).
sudo yum install java-17-openjdk -y    # RHEL/CentOS/Fedora
sudo apt install openjdk-17-jdk -y     # Ubuntu/Debian
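You can verify the Java installation before continuing:
java -version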
Copy /etc/confluent-control-center/control-center-production.properties from your current Control Center (Legacy) into the Control Center node on the VM and add these properties:
confluent.controlcenter.id=10
confluent.controlcenter.prometheus.enable=true
confluent.controlcenter.prometheus.url=http://localhost:9090
confluent.controlcenter.prometheus.rules.file=/etc/confluent-control-center/trigger_rules-generated.yml
confluent.controlcenter.alertmanager.config.file=/etc/confluent-control-center/alertmanager-generated.yml
If you are using SSL, copy the certs at /var/ssl/private from your current Control Center (Legacy) into the Control Center node on the VM. If you are not using SSL, skip this step.
Change ownership of the configuration files. Give the Control Center process write permissions for the Alertmanager files so that the process can properly manage alert triggers. Use the chown command to set the Control Center process as the owner of the trigger_rules-generated.yml and alertmanager-generated.yml files:
chown -c cp-control-center /etc/confluent-control-center/trigger_rules-generated.yml
chown -c cp-control-center /etc/confluent-control-center/alertmanager-generated.yml
Start the following services on the Control Center node:
systemctl enable prometheus
systemctl start prometheus
systemctl enable alertmanager
systemctl start alertmanager
systemctl enable confluent-control-center
systemctl start confluent-control-center
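To confirm all three services are running, you can query systemd; a minimal check:
systemctl is-active prometheus alertmanager confluent-control-center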
Log in to each broker you intend to monitor and verify that the broker can reach the Control Center node on port 9090:
curl http://<c3-internal-dns-url>:9090/-/healthy
All brokers must have access to the Control Center node on port 9090, but port 9090 does not require public access. Restrict access as you prefer.
Update the following properties for every Kafka broker and KRaft controller. Pay attention to the notes on the highlighted lines that follow the code example.
KRaft controller properties are located here: /etc/controller/server.properties
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter,io.confluent.metrics.reporter.ConfluentMetricsReporter --- [1]
confluent.telemetry.exporter._c3.type=http
confluent.telemetry.exporter._c3.enabled=true
confluent.telemetry.exporter._c3.metrics.include=io.confluent.kafka.server.request.(?!.*delta).*|io.confluent.kafka.server.server.broker.state|io.confluent.kafka.server.replica.manager.leader.count|io.confluent.kafka.server.request.queue.size|io.confluent.kafka.server.broker.topic.failed.produce.requests.rate.1.min|io.confluent.kafka.server.tier.archiver.total.lag|io.confluent.kafka.server.request.total.time.ms.p99|io.confluent.kafka.server.broker.topic.failed.fetch.requests.rate.1.min|io.confluent.kafka.server.broker.topic.total.fetch.requests.rate.1.min|io.confluent.kafka.server.partition.caught.up.replicas.count|io.confluent.kafka.server.partition.observer.replicas.count|io.confluent.kafka.server.tier.tasks.num.partitions.in.error|io.confluent.kafka.server.broker.topic.bytes.out.rate.1.min|io.confluent.kafka.server.request.total.time.ms.p95|io.confluent.kafka.server.controller.active.controller.count|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.total|io.confluent.kafka.server.request.total.time.ms.p999|io.confluent.kafka.server.controller.active.broker.count|io.confluent.kafka.server.request.handler.pool.request.handler.avg.idle.percent.rate.1.min|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.rate.1.min|io.confluent.kafka.server.controller.unclean.leader.elections.rate.1.min|io.confluent.kafka.server.replica.manager.partition.count|io.confluent.kafka.server.controller.unclean.leader.elections.total|io.confluent.kafka.server.partition.replicas.count|io.confluent.kafka.server.broker.topic.total.produce.requests.rate.1.min|io.confluent.kafka.server.controller.offline.partitions.count|io.confluent.kafka.server.socket.server.network.processor.avg.idle.percent|io.confluent.kafka.server.partition.under.replicated|io.confluent.kafka.server.log.log.start.offset|io.confluent.kafka.server.log.tier.size|io.confluent.kafka.server.log.size|io.confluent.kafka.server.tier.fetcher.bytes.fetched.total|io.confluent.kafka.server.request.total.time.ms.p50|io.confluent.kafka.server.tenant.consumer.lag.offsets|io.confluent.kafka.server.session.expire.listener.zookeeper.expires.rate.1.min|io.confluent.kafka.server.log.log.end.offset|io.confluent.kafka.server.broker.topic.bytes.in.rate.1.min|io.confluent.kafka.server.partition.under.min.isr|io.confluent.kafka.server.partition.in.sync.replicas.count|io.confluent.telemetry.http.exporter.batches.dropped|io.confluent.telemetry.http.exporter.items.total|io.confluent.telemetry.http.exporter.items.succeeded|io.confluent.telemetry.http.exporter.send.time.total.millis|io.confluent.kafka.server.controller.leader.election.rate.(?!.*delta).*|io.confluent.telemetry.http.exporter.batches.failed
confluent.telemetry.exporter._c3.client.base.url=http://c3-internal-dns-hostname:9090/api/v1/otlp --- [2]
confluent.telemetry.exporter._c3.client.compression=gzip
confluent.telemetry.exporter._c3.api.key=dummy
confluent.telemetry.exporter._c3.api.secret=dummy
confluent.telemetry.exporter._c3.buffer.pending.batches.max=80 --- [3]
confluent.telemetry.exporter._c3.buffer.batch.items.max=4000 --- [4]
confluent.telemetry.exporter._c3.buffer.inflight.submissions.max=10 --- [5]
confluent.telemetry.metrics.collector.interval.ms=60000 --- [6]
confluent.telemetry.remoteconfig._confluent.enabled=false
confluent.consumer.lag.emitter.enabled=true
[1] To enable metrics for both Control Center (Legacy) and Control Center, update your existing Control Center (Legacy) metric.reporters property to use the following values:
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter,io.confluent.metrics.reporter.ConfluentMetricsReporter
If you decommission Control Center (Legacy), enable only the TelemetryReporter plugin with the following value:
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter
[2] Ensure the URL in confluent.telemetry.exporter._c3.client.base.url is the actual Control Center URL, reachable from the broker host:
confluent.telemetry.exporter._c3.client.base.url=http://c3-internal-dns-hostname:9090/api/v1/otlp
[3] [4] [5] [6] Use the following configurations for clusters with 100,000 or fewer replicas. To get an accurate count of replicas, use the sum of all replicas across all clusters monitored in Control Center (Legacy), including the Control Center (Legacy) bootstrap cluster.
confluent.telemetry.exporter._c3.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3.buffer.inflight.submissions.max=10
confluent.telemetry.metrics.collector.interval.ms=60000
Configurations for clusters with 100,000 to 400,000 replicas
Clusters with a replica count of 100,000 - 200,000:
confluent.telemetry.exporter._c3.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3.buffer.inflight.submissions.max=20
confluent.telemetry.metrics.collector.interval.ms=60000
Clusters with a replica count of 200,000 - 400,000:
confluent.telemetry.exporter._c3.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3.buffer.inflight.submissions.max=20
confluent.telemetry.metrics.collector.interval.ms=120000
For clusters with a replica count of 200,000 - 400,000, also update the following Control Center (Legacy) configuration:
confluent.controlcenter.prometheus.trigger.threshold.time=2m
Perform a rolling restart of the brokers (zero downtime). For more information, see Rolling restart.
systemctl restart confluent-server
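The restart must happen one broker at a time to avoid downtime. A minimal sketch of that loop, assuming three broker hosts named broker-1 through broker-3 (hypothetical names; adjust to your environment):
for host in broker-1 broker-2 broker-3; do
  ssh "$host" 'sudo systemctl restart confluent-server'
  # Crude pause; in practice, wait until under-replicated partitions return to
  # zero before restarting the next broker.
  sleep 60
done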
(Optional) Set up log rotation for Prometheus and Alertmanager.
Create a new configuration file at /etc/logrotate.d/prometheus with the following content:
/var/log/confluent/control-center/prometheus.log {
    size 10M
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
Create a script at /usr/local/bin/logrotate-prometheus.sh:
#!/bin/bash
/usr/sbin/logrotate -s /var/lib/logrotate/status-prometheus /etc/logrotate.d/prometheus
Make the script executable:
chmod +x /usr/local/bin/logrotate-prometheus.sh
To schedule with Cron, add the following line to your crontab (crontab -e):
*/10 * * * * /usr/local/bin/logrotate-prometheus.sh >> /tmp/prometheus-rotate.log 2>&1
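You can dry-run the logrotate configuration before relying on the cron schedule; the -d flag prints what would happen without rotating anything:
sudo /usr/sbin/logrotate -d /etc/logrotate.d/prometheus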
Restart Prometheus:
systemctl restart prometheus
Perform similar steps for Alertmanager logs.
Create a new configuration file at /etc/logrotate.d/alertmanager with the following content:
/var/log/confluent/control-center/alertmanager.log {
    size 10M
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
Create a script at /usr/local/bin/logrotate-alertmanager.sh:
#!/bin/bash
/usr/sbin/logrotate -s /var/lib/logrotate/status-alertmanager /etc/logrotate.d/alertmanager
Make the script executable:
chmod +x /usr/local/bin/logrotate-alertmanager.sh
To schedule with Cron, add the following line to your crontab (crontab -e):
*/10 * * * * /usr/local/bin/logrotate-alertmanager.sh >> /tmp/alertmanager-rotate.log 2>&1
Restart Alertmanager:
systemctl restart alertmanager
Verify Control Center is running
After the installation is complete, visit http(s)://<c3-url>:9021 and wait for the metrics to start showing up in Control Center. It may take a couple of minutes. Control Center looks exactly like Control Center (Legacy).
To confirm Control Center is running, use the following steps:
Open the network tab in your browser's developer tools while viewing Control Center.
Reload Control Center.
Locate the following API call:
/2.0/feature/flags
Verify the following key is present in the response:
confluent.controlcenter.prometheus.enable: true
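You can also check the flag from the command line; a minimal sketch using curl (replace <c3-url>, and adjust for HTTPS or authentication if enabled):
curl -s http://<c3-url>:9021/2.0/feature/flags | grep -o '"confluent.controlcenter.prometheus.enable":[^,}]*'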
Confluent Ansible installation steps
For Confluent Ansible installation of Control Center, see Configure Ansible Playbooks for Confluent Platform.
Confluent for Kubernetes installation steps
For Confluent for Kubernetes (CFK) installation of Control Center, see Monitor Confluent Platform with Confluent for Kubernetes.
High-availability setup
Use these steps to configure Control Center for Active/Active high-availability deployment.
- Considerations:
You must manually duplicate alerts in one of your Control Center instances.
For a Confluent Ansible example of Control Center Active/Active high-availability setup, see: GitHub repo
For a CFK example of Control Center Active/Active high-availability setup, see: GitHub repo
To configure Control Center Active/Active high-availability, use the following steps:
Configure two instances of Control Center for your Kafka cluster.
For every Kafka broker and KRaft controller, you must add and configure two HttpExporters.
Consider the following example HttpExporter configurations:
confluent.telemetry.exporter._c3-1.client.base.url=http://{C3-1-internal-dns-hostname}:9090/api/v1/otlp
confluent.telemetry.exporter._c3-2.client.base.url=http://{C3-2-internal-dns-hostname}:9090/api/v1/otlp
Replace {C3-1-internal-dns-hostname} with the base URL for the corresponding Prometheus instance in your cluster.
For every Kafka broker and KRaft controller, add the following configurations:
# common configs
confluent.telemetry.metrics.collector.interval.ms=60000
confluent.telemetry.remoteconfig._confluent.enabled=false
confluent.consumer.lag.emitter.enabled=true
metric.reporters=io.confluent.telemetry.reporter.TelemetryReporter
# instance 1 configs
confluent.telemetry.exporter._c3.type=http
confluent.telemetry.exporter._c3.enabled=true
confluent.telemetry.exporter._c3.client.base.url=http://{C3-1-internal-dns-hostname}:9090/api/v1/otlp
confluent.telemetry.exporter._c3.client.compression=gzip
confluent.telemetry.exporter._c3.api.key=dummy
confluent.telemetry.exporter._c3.api.secret=dummy
confluent.telemetry.exporter._c3.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3.buffer.inflight.submissions.max=10
confluent.telemetry.exporter._c3.metrics.include=io.confluent.kafka.server.request.(?!.*delta).*|io.confluent.kafka.server.server.broker.state|io.confluent.kafka.server.replica.manager.leader.count|io.confluent.kafka.server.request.queue.size|io.confluent.kafka.server.broker.topic.failed.produce.requests.rate.1.min|io.confluent.kafka.server.tier.archiver.total.lag|io.confluent.kafka.server.request.total.time.ms.p99|io.confluent.kafka.server.broker.topic.failed.fetch.requests.rate.1.min|io.confluent.kafka.server.broker.topic.total.fetch.requests.rate.1.min|io.confluent.kafka.server.partition.caught.up.replicas.count|io.confluent.kafka.server.partition.observer.replicas.count|io.confluent.kafka.server.tier.tasks.num.partitions.in.error|io.confluent.kafka.server.broker.topic.bytes.out.rate.1.min|io.confluent.kafka.server.request.total.time.ms.p95|io.confluent.kafka.server.controller.active.controller.count|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.total|io.confluent.kafka.server.request.total.time.ms.p999|io.confluent.kafka.server.controller.active.broker.count|io.confluent.kafka.server.request.handler.pool.request.handler.avg.idle.percent.rate.1.min|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.rate.1.min|io.confluent.kafka.server.controller.unclean.leader.elections.rate.1.min|io.confluent.kafka.server.replica.manager.partition.count|io.confluent.kafka.server.controller.unclean.leader.elections.total|io.confluent.kafka.server.partition.replicas.count|io.confluent.kafka.server.broker.topic.total.produce.requests.rate.1.min|io.confluent.kafka.server.controller.offline.partitions.count|io.confluent.kafka.server.socket.server.network.processor.avg.idle.percent|io.confluent.kafka.server.partition.under.replicated|io.confluent.kafka.server.log.log.start.offset|io.confluent.kafka.server.log.tier.size|io.confluent.kafka.server.log.size|io.confluent.kafka.server.tier.fetcher.bytes.fetched.total|io.confluent.kafka.server.request.total.time.ms.p50|io.confluent.kafka.server.tenant.consumer.lag.offsets|io.confluent.kafka.server.session.expire.listener.zookeeper.expires.rate.1.min|io.confluent.kafka.server.log.log.end.offset|io.confluent.kafka.server.broker.topic.bytes.in.rate.1.min|io.confluent.kafka.server.partition.under.min.isr|io.confluent.kafka.server.partition.in.sync.replicas.count|io.confluent.telemetry.http.exporter.batches.dropped|io.confluent.telemetry.http.exporter.items.total|io.confluent.telemetry.http.exporter.items.succeeded|io.confluent.telemetry.http.exporter.send.time.total.millis|io.confluent.kafka.server.controller.leader.election.rate.(?!.*delta).*|io.confluent.telemetry.http.exporter.batches.failed
# instance 2 configs
confluent.telemetry.exporter._c3-2.type=http
confluent.telemetry.exporter._c3-2.enabled=true
confluent.telemetry.exporter._c3-2.client.compression=gzip
confluent.telemetry.exporter._c3-2.api.key=dummy
confluent.telemetry.exporter._c3-2.api.secret=dummy
confluent.telemetry.exporter._c3-2.buffer.pending.batches.max=80
confluent.telemetry.exporter._c3-2.buffer.batch.items.max=4000
confluent.telemetry.exporter._c3-2.buffer.inflight.submissions.max=10
confluent.telemetry.exporter._c3-2.client.base.url=http://{C3-2-internal-dns-hostname}:9090/api/v1/otlp
confluent.telemetry.exporter._c3-2.metrics.include=io.confluent.kafka.server.request.(?!.*delta).*|io.confluent.kafka.server.server.broker.state|io.confluent.kafka.server.replica.manager.leader.count|io.confluent.kafka.server.request.queue.size|io.confluent.kafka.server.broker.topic.failed.produce.requests.rate.1.min|io.confluent.kafka.server.tier.archiver.total.lag|io.confluent.kafka.server.request.total.time.ms.p99|io.confluent.kafka.server.broker.topic.failed.fetch.requests.rate.1.min|io.confluent.kafka.server.broker.topic.total.fetch.requests.rate.1.min|io.confluent.kafka.server.partition.caught.up.replicas.count|io.confluent.kafka.server.partition.observer.replicas.count|io.confluent.kafka.server.tier.tasks.num.partitions.in.error|io.confluent.kafka.server.broker.topic.bytes.out.rate.1.min|io.confluent.kafka.server.request.total.time.ms.p95|io.confluent.kafka.server.controller.active.controller.count|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.total|io.confluent.kafka.server.request.total.time.ms.p999|io.confluent.kafka.server.controller.active.broker.count|io.confluent.kafka.server.request.handler.pool.request.handler.avg.idle.percent.rate.1.min|io.confluent.kafka.server.session.expire.listener.zookeeper.disconnects.rate.1.min|io.confluent.kafka.server.controller.unclean.leader.elections.rate.1.min|io.confluent.kafka.server.replica.manager.partition.count|io.confluent.kafka.server.controller.unclean.leader.elections.total|io.confluent.kafka.server.partition.replicas.count|io.confluent.kafka.server.broker.topic.total.produce.requests.rate.1.min|io.confluent.kafka.server.controller.offline.partitions.count|io.confluent.kafka.server.socket.server.network.processor.avg.idle.percent|io.confluent.kafka.server.partition.under.replicated|io.confluent.kafka.server.log.log.start.offset|io.confluent.kafka.server.log.tier.size|io.confluent.kafka.server.log.size|io.confluent.kafka.server.tier.fetcher.bytes.fetched.total|io.confluent.kafka.server.request.total.time.ms.p50|io.confluent.kafka.server.tenant.consumer.lag.offsets|io.confluent.kafka.server.session.expire.listener.zookeeper.expires.rate.1.min|io.confluent.kafka.server.log.log.end.offset|io.confluent.kafka.server.broker.topic.bytes.in.rate.1.min|io.confluent.kafka.server.partition.under.min.isr|io.confluent.kafka.server.partition.in.sync.replicas.count|io.confluent.telemetry.http.exporter.batches.dropped|io.confluent.telemetry.http.exporter.items.total|io.confluent.telemetry.http.exporter.items.succeeded|io.confluent.telemetry.http.exporter.send.time.total.millis|io.confluent.kafka.server.controller.leader.election.rate.(?!.*delta).*|io.confluent.telemetry.http.exporter.batches.failed
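Before restarting brokers, it can help to confirm that both Control Center instances are reachable from each broker host. A minimal sketch using the health endpoint shown earlier (replace the hostnames with your own; each request should return 200):
for c3 in C3-1-internal-dns-hostname C3-2-internal-dns-hostname; do
  curl -s -o /dev/null -w "%{http_code}\n" "http://$c3:9090/-/healthy"
done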
Security configuration
Control Center introduces components like Prometheus and Alertmanager. The security configuration you use to secure communication for Control Center depends on the version of Confluent Platform you use.
Considerations:
Control Center supports TLS + Basic Auth for Confluent Platform versions 7.5.x and higher.
Control Center supports mTLS for Confluent Platform versions 7.9.1 and higher.
For more information, see Control Center Security on Confluent Platform.
Migration
Migration of metrics from Control Center (Legacy) to Control Center is not supported. For migration of alerts, see Control Center (Legacy) to Confluent Control Center Alert Migration.
Considerations:
For clusters where historical metrics are of no value, you can shut down Control Center (Legacy) as soon as Control Center is up and running.
For clusters where historical metrics are needed (say, for a period of N days), consider the following recommendations:
Run both Control Center (Legacy) and Control Center simultaneously for N days.
Control Center (Legacy) users should continue using Control Center (Legacy) until the N days of history are populated in Control Center.
Once historical metrics are available in Control Center, you can shut down Control Center (Legacy) and move users to Control Center.