Configuring the network¶
Refer to the following information to understand and configure endpoints, ports, and load balancers for your Confluent Platform components and cluster.
Note
The terms external, internal, and route are used in this document to differentiate between the types of load balancers you can enable for a Confluent Operator and Confluent Platform cluster. Your platform provider may use different terms.
Ports configuration in Operator¶
Confluent Platform components deployed by Confluent Operator listen on several ports. Components that expose a RESTful endpoint (that is, components other than ZooKeeper and Kafka) listen on an external container port and an internal container port, in addition to other ports such as the JMX and Jolokia ports. Clients of these components can run within Kubernetes or outside Kubernetes.
Ports for network traffic internal to Kubernetes¶
Clients running within other Kubernetes containers typically connect to these Confluent Platform components through a ClusterIP Service that the Operator creates for each component, and can use either the external or the internal port. The default port values are listed below and can also be found in each component's template file. For example, the template file for KSQL is <OperatorHome>/helm/confluent-operator/charts/ksql/templates/ksql-psc.yaml. These ports cannot be configured by the user.
- ksqlDB ports
- external: 8088
- internal: 9088
- Schema Registry ports
- external: 8081
- internal: 9081
- Connect ports
- external: 8083
- internal: 9083
- Replicator ports
- external: 8083
- internal: 9083
- Confluent Control Center ports
- external: 9021
- internal: 8021
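As a sketch of in-cluster access, a client pod can reach a component on either port of its ClusterIP Service. The service names below ("ksql", "schemaregistry") match the kubectl output shown later in this document, and the curl calls assume you are running inside a pod with network access to those services.

```shell
# Build in-cluster endpoint URLs from the default ports listed above.
KSQL_EXTERNAL="http://ksql:8088/info"             # ksqlDB external container port
SR_INTERNAL="http://schemaregistry:9081/subjects" # Schema Registry internal container port

# These calls only succeed from inside the cluster; elsewhere they are a no-op.
command -v curl >/dev/null && curl -s "${KSQL_EXTERNAL}" || true
command -v curl >/dev/null && curl -s "${SR_INTERNAL}" || true
```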
Ports for network traffic originating external to Kubernetes¶
Clients running outside Kubernetes connect to Confluent Platform components via a LoadBalancer Service which has its own ports. Traffic to these ports will be routed to the external ports of the container, but the actual ports on the load balancer don’t necessarily match the container’s external ports.
Furthermore, the Operator configures the outward-facing port of the load balancer to use standard HTTP and HTTPS port values by default. The following is how the load balancers for ksqlDB, Connect, Replicator, Schema Registry, and Control Center will be configured by default:
- When TLS is enabled, the outward-facing load balancer port is 443.
- When TLS is disabled, the outward-facing load balancer port is 80.
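The default rule above can be sketched as a small shell helper (illustrative only, not part of the Operator):

```shell
# Pick the outward-facing load balancer port based on whether TLS is enabled,
# mirroring the Operator defaults described above.
tls_enabled=true

if [ "${tls_enabled}" = "true" ]; then
  lb_port=443   # HTTPS default when TLS is enabled
else
  lb_port=80    # HTTP default when TLS is disabled
fi

echo "${lb_port}"
```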
Note
In Operator 5.3 and earlier releases, when TLS is enabled, the HTTPS endpoint of the component is hosted on port 8443 at the pod level. Starting in the 5.4 release, the endpoint of the component is hosted on the standard HTTPS (443) or HTTP (80) port.
The load balancer port defaults can be overridden by setting <component>.tls.port in your Helm values file.
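For example, a hypothetical Helm values fragment that overrides the ksqlDB load balancer port might look like the following (the value 8443 is only an illustration):

```yaml
ksql:
  tls:
    port: 8443   # overrides the default outward-facing load balancer port (443/80)
```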
Load balancers¶
YAML files have a section that contains load balancer configuration parameters. The component-specific load balancer configuration parameters, options, and usage comments are provided in the component YAML files.
Note
Kafka brokers and ZooKeeper maintain a constant connection that can be broken if a load balancer is used for ZooKeeper. There are no configuration parameters provided for setting up a ZooKeeper load balancer. Do not configure a load balancer for ZooKeeper.
Kafka load balancer configuration¶
The following snippet shows the default (unmodified) load balancer parameters from a Kafka values.yaml file. All parameter modifications (or additions) must be made in the Kafka section of the <provider>.yaml file.
loadBalancer:
  ## Set `enabled` to true if the Kafka cluster must be accessible from outside the Kubernetes cluster.
  enabled: false
  ## External will create public-facing endpoints; setting this to internal will
  ## create a private-facing ELB with VPC peering
  type: external
  ## Add other annotations on the ELB here
  annotations: {}
  ## Domain name to configure in Kafka's external listener
  ##
  domain: ""
  ## If configured, the bootstrap FQDN will be <bootstrapPrefix>.<domain> (dots are not supported in the prefix).
  ## If not configured, the bootstrap FQDN will be <name>.<domain>.
  bootstrapPrefix: ""
  ## If not configured, the default broker prefix is 'b' (dots are not supported in the prefix).
  ##
  brokerPrefix: ""
- All load balancer component configurations default to enabled: false.
- There are three load balancer types (external, internal, and route). External load balancers are external-facing and enable access to components from outside the cluster; this is the default load balancer created if the type parameter is not specified. Internal load balancers are private-facing and support VPC-to-VPC peering.
- Annotations add one or more key/value pairs (metadata) to the default component configuration. Typically, annotations are used to add application-specific or provider-specific settings. For an example of how annotations are used in this environment, see Configuring internal load balancers. For additional information, refer to your provider documentation and Kubernetes Concepts: Annotations.
- Domain is the provider domain where your cluster is running.
- The bootstrapPrefix and brokerPrefix parameters are used to change the default load balancer prefixes. The default bootstrap prefix is the component name (for example, kafka.<mydomain>) and the default Kafka broker prefix is b (for example, b0.<mydomain>, b1.<mydomain>, and so on). These are used for DNS entries in your domain. As part of your network plan, you may want to change the default prefixes for each component to avoid DNS conflicts when running multiple Kafka clusters.
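For example, to run a second Kafka cluster in the same domain without DNS conflicts, you could override both prefixes; the prefix values in this fragment are illustrative:

```yaml
loadBalancer:
  enabled: true
  domain: "mydevplatform.gcp.cloud"
  bootstrapPrefix: "kafka-dev"   # bootstrap DNS entry becomes kafka-dev.mydevplatform.gcp.cloud
  brokerPrefix: "kb"             # broker DNS entries become kb0.mydevplatform.gcp.cloud, kb1..., and so on
```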
The following shows the default parameters provided for load balancers in the <provider>.yaml file.
loadBalancer:
  enabled: false
  domain: ""
To enable an external load balancer (using the example domain mydevplatform.gcp.cloud), change the default parameters to the following:
loadBalancer:
  enabled: true
  domain: "mydevplatform.gcp.cloud"
Note that because you did not provide a load balancer type, the load balancer defaults to an external load balancer. To enable an internal load balancer, add the type: internal parameter.
loadBalancer:
  enabled: true
  type: internal
  domain: "mydevplatform.gcp.cloud"
You add additional parameters and annotations as needed. For example, if you are creating a load balancer for Amazon Web Services (AWS), you could add the following annotation to create an internal Network Load Balancer (NLB).
loadBalancer:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
For more information about Kubernetes annotations, see Annotations.
For information about Kubernetes load balancer annotations for AWS, see Load Balancers.
Component load balancer configuration¶
The following snippet shows the default (unmodified) load balancer YAML parameters for setting up access to other Confluent Platform components. All configuration parameters are identical to the Kafka load balancer configuration except the prefix parameters: instead of brokerPrefix and bootstrapPrefix, there is a single prefix parameter. This allows you to set or update the default component DNS name, which defaults to <component-name>.<provider-domain>. If you add a prefix parameter, the component DNS name is configured as <myprefix>.<provider-domain>.
## External Access
##
loadBalancer:
  ## Create a LoadBalancer for external networking
  enabled: false
  ## External will create public-facing endpoints; setting this to internal will
  ## create a private-facing ELB with VPC peering
  type: external
  ## Add other annotations here that you want on the ELB
  annotations: {}
  ## If external access is enabled, the FQDN must be provided
  ##
  domain: ""
  ## If prefix is configured, the external DNS name is configured as <prefix>.<domain>;
  ## otherwise the external DNS name is configured as <name>.<domain>, where name is the cluster name.
  prefix: ""
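For example, a hypothetical Schema Registry configuration that overrides the default DNS name (the prefix value is illustrative):

```yaml
loadBalancer:
  enabled: true
  domain: "mydevplatform.gcp.cloud"
  prefix: "sr-dev"   # DNS name becomes sr-dev.mydevplatform.gcp.cloud instead of schemaregistry.mydevplatform.gcp.cloud
```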
Configuring internal load balancers¶
Internal load balancers add component support for VPC peering. When you select type: internal, the installation automatically creates an internal load balancer that works for the provider used. This is possible because the installer automatically configures the following provider-specific annotations at installation:
Azure:
annotations:
  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
AWS:
annotations:
  service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
GCP:
annotations:
  cloud.google.com/load-balancer-type: "Internal"
To create an internal load balancer, add the type: internal entry to the component section in the <provider>.yaml file.
loadBalancer:
  enabled: true
  type: internal
  domain: "<mydomain>"
Once the internal load balancer is available, you can use it as an internal-only access point for VPC-to-VPC component peering connections.
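The provider-to-annotation mapping above can be sketched as follows; this is illustrative only, since the installer applies the annotation for you:

```shell
# Select the internal load balancer annotation the installer applies,
# keyed by provider, as listed above.
provider=gcp

case "${provider}" in
  azure) annotation='service.beta.kubernetes.io/azure-load-balancer-internal: "true"' ;;
  aws)   annotation='service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"' ;;
  gcp)   annotation='cloud.google.com/load-balancer-type: "Internal"' ;;
esac

echo "${annotation}"
```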
Configuring external load balancers¶
External-facing load balancers are used for access to cluster components from outside the provider environment. External load balancers are configured in the <provider>.yaml file. For example:
loadBalancer:
  enabled: true
  domain: "<mydomain>"
If you set enabled: true and add your provider domain, the installer automatically creates an external load balancer for the component at installation. See the following section for an example of how an external load balancer is configured for your Kafka brokers.
Important
When you configure an external load balancer, it is good practice to protect the endpoints using TLS.
Kafka access¶
The following example shows the parameters that enable external load balancers for the Kafka brokers (using the example domain mydevplatform.gcp.cloud). Note that the examples provided can be adapted to configure access for other Confluent Platform components.
## Kafka Cluster
##
kafka:
  name: kafka
  replicas: 3
  resources:
    requests:
      cpu: 200m
      memory: 1Gi
  loadBalancer:
    enabled: true
    domain: "mydevplatform.gcp.cloud"
Once the external load balancers are created, you add a DNS entry associated with the public IP for each Kafka broker load balancer and the bootstrap load balancer in your provider DNS table (or whatever method you use to get DNS entries recognized by your provider environment).
Note
See metricReporter for information about DNS entries that could affect Kafka brokers.
To get the information you need to add DNS entries, first list the services for components in the namespace.
kubectl get services -n <namespace>
The following example uses operator as the namespace:
kubectl get services -n operator
The list of Kafka services resembles the following:
kafka ClusterIP None <none> 9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP 21h
kafka-0-internal ClusterIP 10.47.247.181 <none> 9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP 21h
kafka-0-lb LoadBalancer 10.47.245.192 192.50.14.35 9092:32183/TCP 21h
kafka-1-internal ClusterIP 10.47.251.31 <none> 9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP 21h
kafka-1-lb LoadBalancer 10.47.251.8 192.50.28.28 9092:31028/TCP 21h
kafka-2-internal ClusterIP 10.47.242.124 <none> 9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP 21h
kafka-2-lb LoadBalancer 10.47.250.236 192.50.64.18 9092:32636/TCP 21h
kafka-bootstrap-lb LoadBalancer 10.47.250.151 192.50.34.20 9092:30840/TCP 21h
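To pull just the public IP for a DNS entry, you can query the Service status directly. The following is a sketch using the namespace and service names from the listing above; it requires kubectl and cluster access, and is a no-op otherwise.

```shell
# JSONPath for the first external IP assigned to a LoadBalancer Service.
jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Print the public IP of the Kafka bootstrap load balancer (namespace "operator").
command -v kubectl >/dev/null && \
  kubectl get service kafka-bootstrap-lb -n operator -o jsonpath="${jsonpath}" || true
```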
Kafka broker load balancer DNS entries are derived from the component prefix, the replica number, and the provider domain. The Kafka bootstrap load balancer DNS entry is derived from the component name (for example, kafka) and the provider domain. The default Kafka broker prefix is b.
For example, the brokers in a default three-replica configuration have the
prefix/replica numbers b0, b1, and b2. The Kafka bootstrap load
balancer takes the name of the component, which defaults to kafka. The
following shows the DNS table entries you add (using the example domain
mydevplatform.gcp.cloud
):
DNS name Internal IP External IP
b0.mydevplatform.gcp.cloud 10.47.245.57 192.50.14.35
b1.mydevplatform.gcp.cloud 10.47.240.85 192.50.28.28
b2.mydevplatform.gcp.cloud 10.35.186.46 192.50.64.18
kafka.mydevplatform.gcp.cloud 10.47.250.36 192.50.34.20
Important
As part of your network plan, change the default prefixes for each component so that you avoid DNS conflicts when adding multiple clusters to one domain.
You access Kafka using the bootstrap load balancer DNS/port as shown in the example below:
http://kafka.mydevplatform.gcp.cloud:9092
The following sections provide additional information about networking and connecting to other components. Note that the default load balancer configuration for all components follows the pattern below.
loadBalancer:
  enabled: false
  domain: ""
Connect access¶
Enable access to Connect by updating the loadBalancer
parameters as shown
by the example in Kafka access. Note that you set a prefix
using the parameter described in Component load balancer configuration.
The following shows an example of the default services deployed for Connect:
connectors ClusterIP None <none> 8083/TCP,9083/TCP,7203/TCP,7777/TCP 21h
connectors-0-internal ClusterIP 10.47.255.235 <none> 8083/TCP,9083/TCP,7203/TCP,7777/TCP 21h
connectors-1-internal ClusterIP 10.47.244.122 <none> 8083/TCP,9083/TCP,7203/TCP,7777/TCP 21h
To add external access to Connect after installation, you need to update the Connect configuration. The example below shows the upgrade command to add an external load balancer:
helm upgrade -f ./providers/gcp.yaml \
--set connectors.enabled=true \
--set connectors.loadBalancer.enabled=true \
--set connectors.loadBalancer.domain=<provider-domain> connectors \
./confluent-operator
The following example shows the DNS table entry you add:
DNS name Internal IP External IP
connectors.mydevplatform.gcp.cloud 10.47.250.36 192.50.98.3
You access Connect using one of the endpoints below:
- TLS is enabled:
https://connectors.mydevplatform.gcp.cloud
- TLS is disabled:
http://connectors.mydevplatform.gcp.cloud
Replicator access¶
Enable access to Replicator by updating the loadBalancer
parameters as shown
by the example in Kafka access. Note that you set a prefix
using the parameter described in Component load balancer configuration.
The following shows an example of the default services deployed for Replicator:
replicator ClusterIP None <none> 8083/TCP,9083/TCP,7203/TCP,7777/TCP 21h
replicator-0-internal ClusterIP 10.47.251.204 <none> 8083/TCP,9083/TCP,7203/TCP,7777/TCP 21h
replicator-1-internal ClusterIP 10.47.250.214 <none> 8083/TCP,9083/TCP,7203/TCP,7777/TCP 21h
To add external access to Replicator after installation, you need to update the Replicator configuration. The example below shows the upgrade command to add an external load balancer:
helm upgrade -f ./providers/gcp.yaml \
--set replicator.enabled=true \
--set replicator.loadBalancer.enabled=true \
--set replicator.loadBalancer.domain=<provider-domain> replicator \
./confluent-operator
The following example shows the DNS table entry you add:
DNS name Internal IP External IP
replicator.mydevplatform.gcp.cloud 10.47.250.36 192.50.43.9
You access Replicator using one of the endpoints below:
- TLS is enabled:
https://replicator.mydevplatform.gcp.cloud
- TLS is disabled:
http://replicator.mydevplatform.gcp.cloud
Schema Registry access¶
Enable access to Schema Registry by updating the loadBalancer
parameters as shown
by the example in Kafka access. Note that you set a prefix
using the parameter described in Component load balancer configuration.
The following shows an example of the default services deployed for Schema Registry:
schemaregistry ClusterIP None <none> 9081/TCP,8081/TCP,7203/TCP,7777/TCP 21h
schemaregistry-0-internal ClusterIP 10.47.242.195 <none> 9081/TCP,8081/TCP,7203/TCP,7777/TCP 21h
schemaregistry-1-internal ClusterIP 10.47.241.243 <none> 9081/TCP,8081/TCP,7203/TCP,7777/TCP 21h
To add external access to Schema Registry after installation, you need to update the Schema Registry configuration. The example below shows the upgrade command to add an external load balancer:
helm upgrade -f ./providers/gcp.yaml \
--set schemaregistry.enabled=true \
--set schemaregistry.loadBalancer.enabled=true \
--set schemaregistry.loadBalancer.domain=<provider-domain> schemaregistry \
./confluent-operator
The following example shows the DNS table entry you add:
DNS name Internal IP External IP
schemaregistry.mydevplatform.gcp.cloud 10.47.250.36 192.50.75.32
You access Schema Registry using one of the endpoints below:
- TLS is enabled:
https://schemaregistry.mydevplatform.gcp.cloud
- TLS is disabled:
http://schemaregistry.mydevplatform.gcp.cloud
KSQL access¶
Enable access to KSQL by updating the loadBalancer
parameters as shown
by the example in Kafka access. Note that you set a prefix
using the parameter described in Component load balancer configuration.
The following shows an example of the default services deployed for KSQL:
ksql ClusterIP None <none> 8088/TCP,9088/TCP,7203/TCP,7777/TCP 21h
ksql-0-internal ClusterIP 10.3.254.15 <none> 8088/TCP,9088/TCP,7203/TCP,7777/TCP 21h
ksql-1-internal ClusterIP 10.3.255.33 <none> 8088/TCP,9088/TCP,7203/TCP,7777/TCP 21h
To add external access to KSQL after installation, you need to update the KSQL configuration. The example below shows the upgrade command to add an external load balancer:
helm upgrade -f ./providers/gcp.yaml \
--set ksql.enabled=true \
--set ksql.loadBalancer.enabled=true \
--set ksql.loadBalancer.domain=<provider-domain> ksql \
./confluent-operator
The following example shows the DNS table entry you add:
DNS name Internal IP External IP
ksql.mydevplatform.gcp.cloud 10.3.250.36 192.50.82.13
You access KSQL using one of the endpoints below:
- TLS is enabled:
https://ksql.mydevplatform.gcp.cloud
- TLS is disabled:
http://ksql.mydevplatform.gcp.cloud
Control Center access¶
Enable access to Confluent Control Center by updating the loadBalancer
parameters as shown
by the example in Kafka access. Note that you set a prefix
using the parameter described in Component load balancer configuration.
The following shows an example of the default services deployed for Confluent Control Center:
controlcenter ClusterIP None <none> 9021/TCP
controlcenter-0-internal ClusterIP 10.47.247.213 <none> 9021/TCP
To add external access to Confluent Control Center after installation, you need to update the Confluent Control Center configuration. The example below shows the upgrade command to add an external load balancer:
helm upgrade -f ./providers/gcp.yaml \
--set controlcenter.enabled=true \
--set controlcenter.loadBalancer.enabled=true \
--set controlcenter.loadBalancer.domain=<provider-domain> controlcenter \
./confluent-operator
The following example shows the DNS table entry you add:
DNS name Internal IP External IP
controlcenter.mydevplatform.gcp.cloud 10.47.250.36 192.50.34.22
You access Confluent Control Center using one of the endpoints below:
- TLS is enabled:
https://controlcenter.mydevplatform.gcp.cloud
- TLS is disabled:
http://controlcenter.mydevplatform.gcp.cloud
No-DNS development access¶
If you need immediate external access without going through your organization's DNS process, you can edit the hosts file on your local workstation. On Linux distributions, this file is typically located at /etc/hosts. In the example deployment, the Kafka load balancer hosts file entries resemble the following:
192.50.14.35 b0.mydevplatform.gcp.cloud b0
192.50.28.28 b1.mydevplatform.gcp.cloud b1
192.50.64.18 b2.mydevplatform.gcp.cloud b2
192.50.34.20 kafka.mydevplatform.gcp.cloud kafka
Once the hosts file is configured, you should be able to get to the Kafka bootstrap load balancer from your local workstation.
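You can confirm that the entries resolve locally before connecting. The following sketch uses getent, which is available on most Linux distributions:

```shell
# Check that the hosts-file entry for the bootstrap load balancer resolves.
host_to_check="kafka.mydevplatform.gcp.cloud"
command -v getent >/dev/null && getent hosts "${host_to_check}" || true
```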
Important
No-DNS access is for development and testing purposes only and should not be used in a production environment.