Configure Load Balancers for Confluent Platform in Confluent for Kubernetes¶
External access to Kafka using load balancers¶
When a client accesses a Kafka cluster, it first connects to the bootstrap server to get the metadata list of all the brokers in the cluster. The client then finds the address of the broker it is interested in and connects directly to that broker to produce or consume data. When configured to use load balancers, CFK creates a load balancer for each broker in addition to a load balancer for the bootstrap server. For N Kafka brokers, CFK creates N+1 load balancer services:
- One as the bootstrap service for the initial connection and for receiving the metadata about the Kafka cluster.
- Another N services, one for each broker, to address the brokers directly.
To allow external access to Kafka using load balancers:
Set the following in the Kafka custom resource (CR) and apply the configuration:
spec:
  listeners:
    external:
      externalAccess:
        type: loadBalancer
        loadBalancer:
          domain: --- [1]
          bootstrapPrefix: --- [2]
          brokerPrefix: --- [3]
          advertisedPort: --- [4]
          externalTrafficPolicy: --- [5]
[1] Required. Set it to the domain where your Kubernetes cluster is running.
If you change this value on a running cluster, you must roll the cluster.
[2] Optional. Use bootstrapPrefix to change the default Kafka bootstrap prefix. The default bootstrap prefix is the Kafka component name (kafka).
The value is used for the DNS entry; the bootstrap DNS name becomes <bootstrapPrefix>.<domain>. If not set, the default bootstrap DNS name is kafka.<domain>.
You may want to change the default prefixes for each component to avoid DNS conflicts when running multiple Kafka clusters.
[3] Optional. Use brokerPrefix to change the default Kafka broker prefixes. The default Kafka broker prefix is b.
These are used for DNS entries; the broker DNS names become <brokerPrefix>0.<domain>, <brokerPrefix>1.<domain>, and so on. If not set, the default broker DNS names are b0.<domain>, b1.<domain>, and so on.
You may want to change the default prefixes to avoid DNS conflicts when running multiple Kafka clusters.
If you change this value on a running cluster, you must roll the cluster.
[4] Optional. Use advertisedPort to override the default port (9092) for Kafka external access.
[5] Optional. Specifies the external traffic policy for the service. Valid options are Local and Cluster. The default is Local.
Cluster routes external traffic to all endpoints on the cluster. Local preserves the source IP of the external traffic by routing traffic only to endpoints on the node that received the traffic.
For details, see Kubernetes documentation.
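Putting the fields above together, a Kafka CR fragment might look like the following sketch. The domain and prefix values are illustrative placeholders, not required values:

```yaml
# Illustrative fragment only; field names follow the reference above,
# and example.com / the prefixes are placeholder values.
spec:
  listeners:
    external:
      externalAccess:
        type: loadBalancer
        loadBalancer:
          domain: example.com           # [1] required
          bootstrapPrefix: kafka-dev    # [2] bootstrap DNS: kafka-dev.example.com
          brokerPrefix: b-dev           # [3] broker DNS: b-dev0.example.com, ...
          advertisedPort: 9092          # [4] the default
          externalTrafficPolicy: Local  # [5] the default
```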
Add DNS entries for Kafka bootstrap server and brokers.
Once the external load balancers are created, you add DNS entries for the Kafka brokers and the Kafka bootstrap service to your DNS table (or whatever method you use to get DNS entries recognized by your provider environment).
You need the following to derive Kafka broker DNS entries:
- Domain name of your Kubernetes cluster (domain in Step #1)
- The external IP of the Kafka load balancers. You can retrieve the external IPs using the following command:
  kubectl get services -n <namespace> | grep LoadBalancer
- Kafka bootstrap prefix and broker prefix
The example below shows the DNS table entries, using:
- Domain: example.com
- Three broker replicas with the default prefix: b
- The Kafka bootstrap prefix: kafka

DNS name            External IP
b0.example.com      192.50.14.35
b1.example.com      192.50.28.28
b2.example.com      192.50.64.18
kafka.example.com   192.50.34.20
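The DNS names in the table can be derived mechanically from the prefixes, the domain, and the broker replica count. As a sketch, using the same example values:

```shell
# Illustrative only: derive the expected DNS names for a 3-broker cluster
# with the default prefixes, matching the table above.
domain="example.com"
brokerPrefix="b"
bootstrapPrefix="kafka"
replicas=3

# One DNS name per broker: b0.example.com, b1.example.com, b2.example.com
for i in $(seq 0 $((replicas - 1))); do
  echo "${brokerPrefix}${i}.${domain}"
done

# One DNS name for the bootstrap service: kafka.example.com
echo "${bootstrapPrefix}.${domain}"
```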
External access to MDS using load balancer¶
When you set up external access to MDS with role-based access control (RBAC) enabled, additional networking configuration steps are required.
If you specify external access of the load balancer type, an additional load balancer service for MDS is created in Kubernetes.
To allow external access to Kafka Metadata Service (MDS) using load balancers:
Set the following in the Kafka CR and apply the configuration:
spec:
  services:
    mds:
      externalAccess:
        type: loadBalancer
        loadBalancer:
          domain: --- [1]
          port: --- [2]
          prefix: --- [3]
          advertisedURL: --- [4]
[1] Required. The domain name of the MDS.
[2] The port to externally access MDS. If not set, the endpoint to externally access MDS uses the following default ports:
- The endpoint to externally access MDS over HTTPS using a load balancer is https://<prefix>.<domain>:443.
- The endpoint to externally access MDS over HTTP using a load balancer is http://<prefix>.<domain>:80.
[3] If set, the MDS endpoint is https://<prefix>.<domain>:<port>. If omitted, the MDS endpoint is https://<domain>:<port> by default.
[4] If set, instead of using the internal endpoint, the MDS advertised listener for each broker is set to <httpSchema>://<advertisedUrl.prefix><podId>.<domain>, where <podId> ranges from 0 to replicaCount - 1.
Use this property only if you cannot add internal SANs to the TLS certificates for MDS and the external DNS must be resolved inside the Kubernetes cluster.
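As a sketch, an MDS external access fragment in the Kafka CR might look like the following. The domain and prefix are illustrative placeholders:

```yaml
# Illustrative fragment only; example.com and the mds prefix are placeholders.
spec:
  services:
    mds:
      externalAccess:
        type: loadBalancer
        loadBalancer:
          domain: example.com  # [1] required
          port: 443            # [2] endpoint becomes https://mds.example.com:443
          prefix: mds          # [3] placeholder prefix
```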
External access to other Confluent Platform components using load balancers¶
External clients can connect to other Confluent Platform components using load balancers.
The access endpoint of each Confluent Platform component is: <component CR name>.<Kubernetes domain>
For example, in the example.com domain with TLS enabled, you access the Confluent Platform components at the following endpoints:
https://connect.example.com
https://replicator.example.com
https://schemaregistry.example.com
https://ksql.example.com
https://controlcenter.example.com
https://kafkarestproxy.example.com
To allow external access to other Confluent Platform components using load balancers:
Set the following in the component CR and apply the configuration:
spec:
  externalAccess:
    type: loadBalancer
    loadBalancer:
      domain: --- [1]
      prefix: --- [2]
      sessionAffinity: --- [3]
      sessionAffinityConfig: --- [4]
        clientIP:
          timeoutSeconds: --- [5]
[1] Required. Set domain to the domain name of your Kubernetes cluster.
If you change this value on a running cluster, you must roll the cluster.
[2] Optional. Set prefix to change the default load balancer prefixes. The default is the component name, such as controlcenter, connect, replicator, schemaregistry, or ksql.
The value is used for the DNS entry. The component DNS name becomes <prefix>.<domain>. If not set, the default DNS name is <component name>.<domain>, for example, controlcenter.example.com.
You may want to change the default prefixes for each component to avoid DNS conflicts when running multiple Kafka clusters.
If you change this value on a running cluster, you must roll the cluster.
[3] Required for consumer REST Proxy to enable client IP-based session affinity. For REST Proxy to be used for Kafka consumers, set to ClientIP. See Kubernetes Service for more information about session affinity.
[4] Contains the session affinity configuration when sessionAffinity: ClientIP is set in [3].
[5] Specifies the ClientIP session sticky time in seconds. The value must be greater than 0 and less than or equal to 86400 (1 day). The default value is 10800 (3 hours).
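As a sketch, a REST Proxy CR fragment with session affinity enabled might look like the following. The domain and prefix are illustrative placeholders:

```yaml
# Illustrative KafkaRestProxy CR fragment only; example.com and the
# krp prefix are placeholder values.
spec:
  externalAccess:
    type: loadBalancer
    loadBalancer:
      domain: example.com        # [1] required
      prefix: krp                # [2] DNS name becomes krp.example.com
      sessionAffinity: ClientIP  # [3] required for consumer REST Proxy
      sessionAffinityConfig:     # [4]
        clientIP:
          timeoutSeconds: 10800  # [5] the default, 3 hours
```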
Add a DNS entry for each Confluent Platform component that you added a load balancer to.
Once the external load balancers are created, you add a DNS entry associated with component load balancers to your DNS table (or whatever method you use to get DNS entries recognized by your provider environment).
You need the following to derive Confluent Platform component DNS entries:
- Domain name of your Kubernetes cluster as set in Step #1
- The external IP of the component load balancers. You can retrieve the external IPs using the following command:
  kubectl get services -n <namespace> -ojson
- The component prefix if set in Step #1 above; otherwise, the default component name.
A DNS name is made up of the prefix and the domain name, for example, controlcenter.example.com.
For a tutorial scenario on configuring external access using load balancers, see the quickstart tutorial for using load balancer.
Internal access to Confluent components using load balancers¶
If Kubernetes is running in private subnets or within VPC peered clusters, you can configure an internal load balancer for VPC-to-VPC peering connections among Kafka and other Confluent Platform components.
To set internal load balancers, use the external listener with the internal annotation.
In the Kafka CR:
spec:
  listeners:
    external:
      externalAccess:
        loadBalancer:
          annotations: --- [*]
In the other Confluent component CRs:
spec:
  externalAccess:
    type: loadBalancer
    loadBalancer:
      annotations: --- [*]
[*] Add the following provider-specific annotation to create an internal load balancer that works for the provider.
- Azure:
  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
- AWS:
  service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
  You can specify more restrictive CIDRs if you don’t want to create a security group that allows all traffic from the internet.
- GCP:
  cloud.google.com/load-balancer-type: "Internal"
For the other load balancer settings, see External access to Kafka using load balancers for accessing Kafka and External access to other Confluent Platform components using load balancers for accessing other Confluent Platform components.
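Combining the annotation with the Kafka CR fragment, an internal load balancer configuration might look like the following sketch for AWS. The domain is an illustrative placeholder, and the annotation value is the one listed above:

```yaml
# Illustrative fragment only: an internal load balancer for Kafka on AWS.
# internal.example.com is a placeholder domain.
spec:
  listeners:
    external:
      externalAccess:
        type: loadBalancer
        loadBalancer:
          domain: internal.example.com
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
```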