
Configure Networking with Confluent Operator

Refer to the following information to understand and configure endpoints, ports, and load balancers for your Confluent Platform components and cluster.

Note

The terms external and internal are used in this document to differentiate between the types of load balancers you can enable for a Confluent Operator and Confluent Platform cluster. Your platform provider may use different terms.

The examples in this guide use the following assumptions:

  • $VALUES_FILE refers to the configuration file you set up in Create the global configuration file.

  • To present simple and clear examples in the Operator documentation, all the configuration parameters are specified in the config file ($VALUES_FILE). However, in your production deployments, use the --set or --set-file option when applying sensitive data with Helm (a --set-file sketch follows this list). For example:

    helm upgrade --install kafka \
     --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds,dc=test,dc=com" \
     --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
     --set kafka.enabled=true
    
  • operator is the namespace that Confluent Platform is deployed in.

  • All commands are executed in the helm directory under the directory Confluent Operator was downloaded to.
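For values that live in files, such as certificates, Helm's --set-file option reads the value from a file instead of the command line. The following is a minimal sketch; the kafka.tls.* parameter paths are shown only as an illustration and should be checked against your values file:

helm upgrade --install kafka \
  --values $VALUES_FILE \
  --set-file kafka.tls.fullchain=./fullchain.pem \
  --set-file kafka.tls.privkey=./privkey.pem \
  --set kafka.enabled=true \
  ./confluent-operator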

Kafka endpoints

For Confluent Platform components to communicate with Kafka, set the Kafka bootstrap endpoint with bootstrapEndpoint: in the component's dependencies section of the configuration file ($VALUES_FILE), as shown below. For Confluent Control Center, the Kafka dependencies section is c3KafkaCluster; for the other components, it is kafka.

<component>
  dependencies:
    kafka:
      bootstrapEndpoint:
  • For the component to communicate with Operator-deployed Kafka over Kafka’s internal listener:

    • If the Kafka cluster is deployed in the same namespace as this component: <kafka-cluster-name>:9071

    • If the Kafka cluster is deployed in a different namespace from this component: <kafka-cluster-name>.<kafka-namespace>.svc.cluster.local:9071

      The <kafka-cluster-name> is the value set in name: under the kafka section in your config file ($VALUES_FILE).

  • For the component to communicate with Operator-deployed Kafka over Kafka’s external listener: <bootstrap-prefix>.<load-balancer-domain>:9092

    See Configure external load balancers for details on the <bootstrap-prefix> and <load-balancer-domain> values.
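For example, a minimal sketch of a Schema Registry dependencies block that reaches an Operator-deployed Kafka cluster named kafka in the operator namespace over the internal listener (the cluster and namespace names are placeholders):

schemaregistry:
  dependencies:
    kafka:
      bootstrapEndpoint: kafka.operator.svc.cluster.local:9071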

Load balancers

The following types of load balancers are supported in Confluent Platform:

  • External load balancers are external-facing and are used to enable access to components from outside the cluster. Outside access to the Kafka brokers is only available through an external load balancer. This is the default load balancer created if the type parameter is not specified.
  • Internal load balancers are private-facing and are used to support VPC-to-VPC peering.

By default, load balancers are disabled for all Confluent Platform components.

Follow the steps in the next sections to enable and configure load balancers in Confluent Platform.

Note

Kafka brokers and ZooKeeper maintain a constant connection that can be broken if a load balancer is used for ZooKeeper. There are no configuration parameters provided for setting up a ZooKeeper load balancer. Do not configure a load balancer for ZooKeeper.

Configure internal load balancers

You use an internal load balancer as an internal-only access point for VPC-to-VPC peering connections among Confluent Platform components.

To create an internal load balancer, add the type: internal entry to the component section in your configuration file ($VALUES_FILE) as below:

loadBalancer:
   enabled: true
   type: internal
   domain: "<provider-domain>"
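For example, a minimal sketch that puts the Kafka brokers behind an internal load balancer on AWS (the domain is a placeholder):

kafka:
  loadBalancer:
    enabled: true
    type: internal
    domain: "mydevplatform.aws.cloud"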

Internal load balancers add component support for VPC peering. At installation, the installer creates an internal load balancer that works with your provider because it applies the following provider-specific annotations:

Azure:

 annotations:
   service.beta.kubernetes.io/azure-load-balancer-internal: "true"

AWS:

 annotations:
   service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"

GCP:

 annotations:
   cloud.google.com/load-balancer-type: "Internal"

Configure external load balancers

You use external load balancers to access cluster components from outside the provider environment.

When you set loadBalancer.enabled: true and add your provider domain, the installer automatically creates an external load balancer for the component at installation.

Important

When you configure an external load balancer, it is good practice to protect the endpoints using TLS.

External load balancers are configured in the global configuration file ($VALUES_FILE). In the loadBalancer: section of a Confluent Platform component, use the following parameters to configure an external load balancer for the component:

enabled
Set it to true to enable a load balancer.
type
Set it to external or omit it for an external load balancer; external is the default type.
domain
Set it to the provider domain where your cluster is running.
bootstrapPrefix and brokerPrefix

Use to change the default Kafka load balancer prefixes. The default bootstrap prefix is the Kafka component name (kafka), and the default Kafka broker prefix is b.

These are used for DNS entries in your domain. The bootstrap DNS name becomes <bootstrapPrefix>.<provider-domain>, and the broker DNS names become <brokerPrefix>0.<provider-domain>, <brokerPrefix>1.<provider-domain>, etc.

If these are not set, the default Kafka bootstrap DNS name is kafka.<provider-domain>, and the broker DNS names are b0.<provider-domain>, b1.<provider-domain>, etc.

As part of your network plan, you may want to change the default prefixes for each component to avoid DNS conflicts when running multiple Kafka clusters.

prefix
Use to set or update the default component (other than Kafka) DNS name. The default is the component name. If you have prefix: <myprefix>, the component DNS name is configured as <myprefix>.<provider-domain>. If prefix is not set, the component DNS name is configured to <component-name>.<provider-domain>.
annotations

Use to add application-specific or provider-specific settings. For an example of how annotations are used in this environment, see Configure internal load balancers.

For information about Kubernetes load balancer annotations for AWS, see Load Balancers.

You add additional parameters and annotations as needed. For example, on Amazon Web Services (AWS), add the following annotation to provision the load balancer as a Network Load Balancer (NLB).

loadBalancer:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb

Following is an example section of the configuration file to set an external load balancer for Kafka:

kafka:
  loadBalancer:
    enabled: true
    domain: mydevplatform.gcp.cloud
    bootstrapPrefix: kafka-lb
    brokerPrefix: kafka-broker
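A non-Kafka component uses prefix instead of bootstrapPrefix and brokerPrefix. For example, the following sketch gives Schema Registry the DNS name sr.mydevplatform.gcp.cloud (the prefix value is only an illustration):

schemaregistry:
  loadBalancer:
    enabled: true
    domain: mydevplatform.gcp.cloud
    prefix: sr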

Enable load balancer after installation

To add external access to Confluent Platform components after installation, you can run the upgrade command to add an external load balancer:

helm upgrade --install <component-release-name> \
  --values $VALUES_FILE \
  --namespace <namespace> \
  --set <component-release-name>.enabled=true \
  --set <component-release-name>.loadBalancer.enabled=true \
  --set <component-release-name>.loadBalancer.domain=<provider-domain> \
  ./confluent-operator

For example, to enable a load balancer for the connectors component in the operator namespace with the mydevplatform.gcp.cloud domain:

helm upgrade --install connectors \
  --values $VALUES_FILE \
  --namespace operator \
  --set connectors.enabled=true \
  --set connectors.loadBalancer.enabled=true \
  --set connectors.loadBalancer.domain=mydevplatform.gcp.cloud \
  ./confluent-operator

Add DNS entries

Once the external load balancers are created, add a DNS entry for the public IP of each Kafka broker load balancer, the Kafka bootstrap load balancer, and every component load balancer to your provider DNS table (or use whatever method your environment requires to get DNS entries recognized).

Note

See metricReporter for information about DNS entries that could affect Kafka brokers.

Important

To avoid DNS conflicts when you add multiple clusters to one domain, change the default prefixes for each component.

To get the information you need to add DNS entries, first list the services for components in the namespace.

kubectl get services -n <namespace>
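If you only want the service names and their external IPs, you can narrow the output with custom columns; this is an optional convenience, not required by Operator:

kubectl get services -n <namespace> \
  -o custom-columns=NAME:.metadata.name,EXTERNAL-IP:.status.loadBalancer.ingress[0].ip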
Kafka DNS entries

The list of Kafka services resembles the following:

NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                        AGE
kafka                         ClusterIP      None            <none>           9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP   21h
kafka-0-internal              ClusterIP      10.47.247.181   <none>           9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP   21h
kafka-0-lb                    LoadBalancer   10.47.245.192   192.50.14.35     9092:32183/TCP                                 21h
kafka-1-internal              ClusterIP      10.47.251.31    <none>           9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP   21h
kafka-1-lb                    LoadBalancer   10.47.251.8     192.50.28.28     9092:31028/TCP                                 21h
kafka-2-internal              ClusterIP      10.47.242.124   <none>           9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP   21h
kafka-2-lb                    LoadBalancer   10.47.250.236   192.50.64.18     9092:32636/TCP                                 21h
kafka-bootstrap-lb            LoadBalancer   10.47.250.151   192.50.34.20     9092:30840/TCP                                 21h

Kafka broker load balancer DNS entries are derived from the broker prefix, the replica number, and the provider domain. The Kafka bootstrap load balancer DNS entry is derived from the bootstrap prefix (by default, the component name kafka) and the provider domain. The default broker prefix is b (for broker).

For example, the brokers in a default three-replica configuration have the prefix/replica numbers b0, b1, and b2. The Kafka bootstrap load balancer takes the name of the component, which defaults to kafka. The following shows the DNS table entries you add (using the example domain mydevplatform.gcp.cloud):

DNS name                           Internal IP             External IP
b0.mydevplatform.gcp.cloud         10.47.245.57            192.50.14.35
b1.mydevplatform.gcp.cloud         10.47.240.85            192.50.28.28
b2.mydevplatform.gcp.cloud         10.35.186.46            192.50.64.18
kafka.mydevplatform.gcp.cloud      10.47.250.36            192.50.34.20
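How you create the records depends on your DNS provider. As one illustration, recent gcloud releases can add an A record for a broker load balancer as follows (the managed zone name and TTL are placeholders; your DNS tooling may differ):

gcloud dns record-sets create b0.mydevplatform.gcp.cloud. \
  --zone=<managed-zone> --type=A --ttl=300 --rrdatas=192.50.14.35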
Confluent Platform component DNS entries

Add the DNS entry for each Confluent Platform component that you added a load balancer to. The format is:

DNS name
The component name (or its prefix, if set) under the provider domain: for example, controlcenter, connectors, replicator, schemaregistry, or ksql.
External IP
The component bootstrap load balancer External IP from the kubectl get services -n <namespace> output.
Internal IP
The component bootstrap load balancer Cluster IP from the kubectl get services -n <namespace> output.

Access Confluent Platform components via load balancer

You access Kafka using the bootstrap load balancer DNS/port:

kafka.<provider-domain>:9092

You access Confluent Platform components using one of the following endpoints:

  • TLS is enabled: https://<component>.<provider-domain>
  • TLS is disabled: http://<component>.<provider-domain>

For example, in the mydevplatform.gcp.cloud provider domain with TLS enabled, you access the Confluent Platform components at the following endpoints:

https://connectors.mydevplatform.gcp.cloud
https://replicator.mydevplatform.gcp.cloud
https://schemaregistry.mydevplatform.gcp.cloud
https://ksql.mydevplatform.gcp.cloud
https://controlcenter.mydevplatform.gcp.cloud
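For example, once DNS resolves, an external client can list topics through the Kafka bootstrap load balancer. The following sketch assumes the external listener has no TLS or SASL configured; if security is enabled, pass a client properties file with the matching settings:

kafka-topics --bootstrap-server kafka.mydevplatform.gcp.cloud:9092 --list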

Ports configuration in Operator

Kafka advertised listener ports

Advertised listeners for Kafka brokers use the following ports:

  • Port 9071: Internal clients running inside the Kubernetes cluster.
  • Port 9072: Inter-broker communication.
  • Port 9092: External clients running outside the Kubernetes cluster.

Ports for network traffic internal to Kubernetes

Clients running within other Kubernetes containers typically connect to the Confluent Platform components via a ClusterIP Service created by the Operator for each component, and can use either the external or internal port. The default port values are listed below, and can be found in the component’s template file. For example, the template file for ksqlDB is <OperatorHome>/helm/confluent-operator/charts/ksql/templates/ksql-psc.yaml. These ports cannot be configured by the user.

  • ksqlDB ports
    • external: 8088
    • internal: 9088
  • Schema Registry ports
    • external: 8081
    • internal: 9081
  • Connect ports
    • external: 8083
    • internal: 9083
  • Replicator ports
    • external: 8083
    • internal: 9083
  • Confluent Control Center ports
    • external: 9021
    • internal: 8021
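For example, using the defaults listed above, a throwaway pod in the cluster can reach Schema Registry on its external container port through the component's ClusterIP Service. This sketch assumes the service is named schemaregistry (matching the component name) and that no TLS or authentication is enabled on the listener:

kubectl run curl-test -n operator -i --rm --restart=Never \
  --image=curlimages/curl --command -- curl -s http://schemaregistry:8081/subjects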

Ports for network traffic originating external to Kubernetes

Clients running outside Kubernetes connect to Confluent Platform components via a LoadBalancer Service which has its own ports. Traffic to these ports will be routed to the external ports of the container, but the actual ports on the load balancer don’t necessarily match the container’s external ports.

Furthermore, the Operator configures the outward-facing port of the load balancer to use standard HTTP and HTTPS port values by default. The following is how the load balancers for ksqlDB, Connect, Replicator, Schema Registry, and Control Center will be configured by default:

  • When TLS is enabled, the outward facing load balancer port is 443.
  • When TLS is disabled, the outward facing load balancer port is 80.

Note

In Operator 5.3 and earlier releases, when TLS is enabled, the HTTPS endpoint of the component is hosted on port 8443 at the pod level. However, starting in the 5.4 release, the component endpoint is hosted on the standard HTTPS (443) or HTTP (80) port.

The load balancer port defaults can be overridden by setting <component>.tls.port in your configuration file ($VALUES_FILE).
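For example, a minimal sketch that overrides the Schema Registry load balancer port (the value 8443 is only an illustration; verify the layout of the tls block against your values file):

schemaregistry:
  tls:
    enabled: true
    port: 8443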

No-DNS development access

If you need immediate external access without going through your organization’s DNS process, you can edit the hosts file on your local workstation. For Linux distributions, this file is typically located at /etc/hosts.

  1. Run the following command to get the external IPs of the load balancers:

    kubectl get services -n <namespace>
    

    An example output is:

    NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)
    controlcenter                ClusterIP      None           <none>           9021/TCP,7203/TCP,7777/TCP
    controlcenter-0-internal     ClusterIP      10.77.12.166   <none>           9021/TCP,7203/TCP,7777/TCP
    controlcenter-bootstrap-lb   LoadBalancer   10.77.11.31    35.224.122.181   80:31358/TCP
    kafka                        ClusterIP      None           <none>           9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP
    kafka-0-internal             ClusterIP      10.77.8.20     <none>           9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP
    kafka-0-lb                   LoadBalancer   10.77.3.227    34.72.187.157    9092:31977/TCP
    kafka-1-internal             ClusterIP      10.77.4.190    <none>           9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP
    kafka-1-lb                   LoadBalancer   10.77.15.119   34.72.76.66      9092:30244/TCP
    kafka-2-internal             ClusterIP      10.77.4.234    <none>           9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP
    kafka-2-lb                   LoadBalancer   10.77.14.14    35.239.218.11    9092:30695/TCP
    kafka-bootstrap-lb           LoadBalancer   10.77.0.47     35.232.216.94    9092:30601/TCP
    
  2. Add the load balancer IPs to the /etc/hosts file:

    Using the above example output, the Kafka load balancer /etc/hosts file entries are:

    34.72.187.157  b0.mydevplatform.gcp.cloud b0
    34.72.76.66    b1.mydevplatform.gcp.cloud b1
    35.239.218.11  b2.mydevplatform.gcp.cloud b2
    35.232.216.94  kafka.mydevplatform.gcp.cloud kafka
    35.224.122.181 controlcenter.mydevplatform.gcp.cloud controlcenter
    

Once the hosts file is configured, you should be able to get to the Kafka bootstrap load balancer from your local workstation.
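In addition to the Kafka bootstrap endpoint, a quick check against the Control Center load balancer confirms that the hosts file entries resolve (plain HTTP here because TLS is disabled in this example output):

curl -v http://controlcenter.mydevplatform.gcp.cloud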

Important

No-DNS access is for development and testing purposes only and should not be used in a production environment.