Configure OpenShift Routes to Access Confluent Platform Components Using Confluent for Kubernetes¶
Confluent for Kubernetes (CFK) supports OpenShift routes for exposing Confluent Platform component services to the outside of the OpenShift platform.
When you configure Confluent components with routes, CFK creates a route resource for the Confluent component service, and external clients access the service at the HTTPS port, 443.
When using a route, you must configure the Confluent component with TLS.
NOTE: The examples in this section assume the default namespace you set in Create a namespace for CFK.
Configure external access to Kafka using routes¶
When configured to use routes, CFK creates a service for each broker, in addition to a service for the bootstrap server. For N Kafka brokers, CFK creates N+1 route services:
- One bootstrap service for the initial connection and for receiving the metadata about the Kafka cluster.
- N additional services, one for each broker, to address the brokers directly.
When a client accesses a Kafka cluster, it first connects to the bootstrap server route to get the metadata list of all the brokers in the cluster. Then the client finds the address of the broker it is interested in and connects directly to the broker to produce or consume data.
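For example, a Kafka client only needs the bootstrap route in its configuration; the per-broker routes are discovered from the metadata response. A minimal client configuration sketch, assuming the kafka.example.com bootstrap route and the example.com domain used later in this section (truststore path is illustrative):

```properties
# Bootstrap route created by CFK; broker routes (b0.example.com, ...)
# are discovered automatically from the cluster metadata.
bootstrap.servers=kafka.example.com:443
# Routes terminate on HTTPS port 443, so the client must use TLS.
security.protocol=SSL
# Truststore containing the CA that signed the route certificates.
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=<password>
```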
To allow external access to Kafka using a route:
Set the route property in the Kafka custom resource (CR). The following is a snippet of a Kafka CR:
spec:
  listeners:
    external:
      tls:
        enabled: true        --- [1]
      externalAccess:
        type: route
        route:
          domain:            --- [2]
          wildcardPolicy:    --- [3]
          bootstrapPrefix:   --- [4]
          brokerPrefix:      --- [5]
          annotations:       --- [6]
[1] Required.
[2] Required. Set domain to the OpenShift domain. If you change this value on a running cluster, you must roll the cluster.
[3] Optional. Defaults to None if not configured. Allowed values are Subdomain and None.
[4] Optional. The prefix for the bootstrap advertised endpoint, added as bootstrapPrefix.domain. If not set, the cluster name of the component is used, kafka in this example.
[5] Optional. The prefix for the broker advertised endpoints, added as brokerPrefix.domain. If not configured, b is added as the prefix, for example, b1.domain, b2.domain. If you change this value on a running cluster, you must roll the cluster.
[6] Optional.
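As a concrete illustration, a listener configured for the example.com domain with the defaults described above (all values are illustrative) might look like:

```yaml
spec:
  listeners:
    external:
      tls:
        enabled: true
      externalAccess:
        type: route
        route:
          domain: example.com     # routes are created under this domain
          wildcardPolicy: None
          bootstrapPrefix: kafka  # bootstrap route: kafka.example.com
          brokerPrefix: b         # broker routes: b0.example.com, b1.example.com, ...
```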
(Optional) If you want to set up more than one listener with routes, add one or more custom listeners in the Kafka CR.
Note
Multiple listeners with routes are supported for Confluent Platform 6.1 and later versions.
spec:
  listeners:
    custom:
    - name:                  --- [1]
      port:                  --- [2]
      tls:
        enabled: true        --- [3]
      externalAccess:
        type: route          --- [4]
        route:
          domain:            --- [5]
          bootstrapPrefix:   --- [6]
          brokerPrefix:      --- [7]
- [1] Required. The name of the custom listener. Do not use the reserved words internal, external, or token.
- [2] Required. The port to bind to the custom listener. Do not use the reserved values, which are those less than 9093.
- [3] Required.
- [4] Required.
- [5] Required. The name of the OpenShift domain.
- [6] Required. The prefix for the bootstrap advertised endpoint. Must be unique among all listeners.
- [7] Required. The prefix for the broker advertised endpoint. Must be unique among all listeners.
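For example, a second route-backed listener (listener name, port, and prefixes are illustrative) could be declared as:

```yaml
spec:
  listeners:
    custom:
    - name: external2          # must not be internal, external, or token
      port: 9094               # reserved values less than 9093 are not allowed
      tls:
        enabled: true
      externalAccess:
        type: route
        route:
          domain: example.com
          bootstrapPrefix: kafka2   # must be unique among all listeners
          brokerPrefix: b2          # must be unique among all listeners
```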
Apply the configuration:
oc apply -f <Kafka CR>
Add DNS entries for Kafka bootstrap server and brokers.
Once the routes are created, you add DNS entries for Kafka brokers and the Kafka bootstrap service to your DNS table (or the method you use to get DNS entries recognized by your provider environment).
You need the following to derive Kafka broker DNS entries:
- Domain name of your OpenShift cluster (domain in Step #1)
- External IP of the OpenShift router load balancer
- Kafka bootstrap prefix and broker prefix
To add DNS entries for Kafka:
Get the IP address of the OpenShift router load balancer.
The HAProxy load balancer serves as the router for route services, and generally, HAProxy runs in the openshift-ingress namespace.

oc get svc --namespace openshift-ingress
An example output:
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
router-default            LoadBalancer   172.30.84.52     20.189.181.8   80:31294/TCP,443:32145/TCP   42h
router-internal-default   ClusterIP      172.30.184.233   <none>         80/TCP,443/TCP,1936/TCP      42h
From the above example, the external IP of the router is 20.189.181.8.
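If you want to capture that address in a script, you can parse the service listing. The snippet below simulates the oc output shown above (against a real cluster, you could instead run oc get svc router-default -n openshift-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):

```shell
# Simulated 'oc get svc' output; awk picks the EXTERNAL-IP column
# of the router-default LoadBalancer service.
printf '%s\n' \
  'NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE' \
  'router-default LoadBalancer 172.30.84.52 20.189.181.8 80:31294/TCP,443:32145/TCP 42h' \
  'router-internal-default ClusterIP 172.30.184.233 <none> 80/TCP,443/TCP,1936/TCP 42h' \
  | awk '$1 == "router-default" { print $4 }'
# → 20.189.181.8
```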
Get the Kafka DNS names:
oc get routes | awk '{print $2}'
Use the external IP to point to all the Kafka DNS names in your DNS service provider:
The example below shows the DNS table entry, using:
- Domain: example.com
- Three broker replicas with the default prefix: b
- The Kafka bootstrap prefix: kafka
- Load balancer router external IP: 20.189.181.8

20.189.181.8 b0.example.com b1.example.com b2.example.com kafka.example.com
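The host names follow directly from the prefixes and the domain, so the DNS entry can also be derived with a small script (values assume the defaults above):

```shell
# Build the list of Kafka route host names for a 3-broker cluster.
DOMAIN=example.com
BOOTSTRAP_PREFIX=kafka
BROKER_PREFIX=b
REPLICAS=3

hosts="${BOOTSTRAP_PREFIX}.${DOMAIN}"
i=0
while [ "$i" -lt "$REPLICAS" ]; do
  hosts="${hosts} ${BROKER_PREFIX}${i}.${DOMAIN}"
  i=$((i + 1))
done
echo "$hosts"
# → kafka.example.com b0.example.com b1.example.com b2.example.com
```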
Configure external access to MDS using routes¶
When you set up external access to MDS with role-based access control (RBAC) enabled, additional networking configuration steps are required.
To support external access to Kafka Metadata Service (MDS), configure the following in the Kafka custom resource (CR):
spec:
  services:
    mds:
      externalAccess:
        type: route
        route:
          domain: --- [1]
          prefix: --- [2]
- [1] Required. Domain name of the MDS.
- [2] Optional. The prefix for the MDS. If set, the MDS endpoint is <prefix>.<domain>. If omitted, the MDS endpoint is <domain> by default.
The endpoints to externally access MDS using a route are:
- Over HTTP: http://<domain>:80
- Over HTTPS: https://<domain>:443
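With the route in place, clients reach MDS at the HTTPS endpoint. As a sketch, assuming an illustrative mds.example.com host for the MDS route, the login URL for the Confluent CLI can be assembled as:

```shell
# Assemble the external MDS URL (mds.example.com is an illustrative host).
MDS_HOST=mds.example.com
MDS_URL="https://${MDS_HOST}:443"
echo "$MDS_URL"
# → https://mds.example.com:443
# You would then authenticate against MDS with:
#   confluent login --url "$MDS_URL"
```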
Configure external access to other Confluent Platform components using routes¶
External clients can connect to other Confluent Platform components using routes.
The access endpoint of each Confluent Platform component is:
<component CR name>.<Kubernetes domain>:443
For example, in the example.com domain with TLS enabled, you access the Confluent Platform components at the following endpoints:
https://connect.example.com:443
https://replicator.example.com:443
https://schemaregistry.example.com:443
https://ksqldb.example.com:443
https://controlcenter.example.com:443
To allow external access to Confluent components using routes:
Enable TLS for the component as described in Configure Network Encryption for Confluent Platform Using Confluent for Kubernetes.
Set the following in the component custom resource (CR) and apply the configuration:
spec:
  externalAccess:
    type: route
    route:
      domain:          --- [1]
      prefix:          --- [2]
      wildcardPolicy:  --- [3]
      annotations:     --- [4]
[1] Required. Set domain to the domain name of your Kubernetes cluster. If you change this value on a running cluster, you must roll the cluster.
[2] Optional. Set prefix to change the default route prefixes. The default is the component name, such as controlcenter, connect, replicator, schemaregistry, ksql. The value is used for the DNS entry; the component DNS name becomes <prefix>.<domain>. If not set, the default DNS name is <component name>.<domain>, for example, controlcenter.example.com. You may want to change the default prefixes for each component to avoid DNS conflicts when running multiple Kafka clusters. If you change this value on a running cluster, you must roll the cluster.
[3] Optional. Defaults to None if not configured. Allowed values are Subdomain and None.
[4] Required when REST Proxy is used by consumers. Otherwise, optional.
OpenShift routes support cookie-based sticky sessions by default. To use the client IP-based session affinity that REST Proxy requires:
- Disable cookies by setting the annotation haproxy.router.openshift.io/disable_cookies: true
- Enable source IP-based load balancing by setting the annotation haproxy.router.openshift.io/balance: source
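For example, a KafkaRestProxy CR with both annotations set (the domain is illustrative) would include:

```yaml
spec:
  externalAccess:
    type: route
    route:
      domain: example.com
      annotations:
        haproxy.router.openshift.io/disable_cookies: "true"  # turn off cookie affinity
        haproxy.router.openshift.io/balance: source          # source IP-based load balancing
```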
Apply the configuration:
oc apply -f <component CR>
Add a DNS entry for each Confluent Platform component that you added a route to.
Once the routes are created, you add a DNS entry associated with component routes to your DNS table (or whatever method you use to get DNS entries recognized by your provider environment).
You need the following to derive Confluent Platform component DNS entries:
- The domain name of your OpenShift cluster as set in Step #1
- External IP of the OpenShift router load balancer
- The component prefix if set in Step #1 above; otherwise, the default component name

A DNS name is made up of the prefix and the domain name. For example, controlcenter.example.com.
To add DNS entries for Confluent components:
Get the IP address of the OpenShift router load balancer.
The HAProxy load balancer serves as the router for route services, and generally, HAProxy runs in the openshift-ingress namespace.

oc get svc --namespace openshift-ingress
An example output:
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
router-default            LoadBalancer   172.30.84.52     20.189.181.8   80:31294/TCP,443:32145/TCP   42h
router-internal-default   ClusterIP      172.30.184.233   <none>         80/TCP,443/TCP,1936/TCP      42h
Get the component DNS names:
oc get routes | awk '{print $2}'
Use the external IP to point to all the DNS names for the Confluent components in your DNS service provider.
The example below shows the DNS table entry, using:
- Domain: example.com
- Default component prefixes
- Load balancer router external IP: 20.189.181.8

20.189.181.8 connect.example.com controlcenter.example.com ksqldb.example.com schemaregistry.example.com
Validate connections¶
The following are example steps to validate external access to Confluent Platform components, using the example.com domain and default component prefixes.
- Control Center UI
- In your browser, navigate to https://controlcenter.example.com:443.
- Kafka
Get the external endpoints of Kafka.
To get the broker endpoints:
oc get kafka kafka -ojsonpath='{.status.listeners.external.advertisedExternalEndpoints}'
To get the Kafka bootstrap server endpoint:
oc get kafka kafka -ojsonpath='{.status.listeners.external.externalEndpoint}'
Create a topic.
For this step, you need the Confluent CLI tool. If you do not have it, install the Confluent CLI on your local system.
For example:
confluent kafka topic create mytest \
  --partitions 3 --replication-factor 2 \
  --url kafka.example.com:443
In Control Center, validate that the mytest topic was created.
- Connect
Get the external endpoint of the component:
oc get connect connect -ojsonpath='{.status.restConfig.externalEndpoint}'
Verify that you can reach the component endpoint. For example:
curl https://connect.example.com:443 -ik -s -H "Content-Type: application/json"
- ksqlDB
Get the external endpoint of the component:
oc get ksqldb ksqldb -ojsonpath='{.status.restConfig.externalEndpoint}'
Verify that you can reach the component endpoint. For example:
curl https://ksqldb.example.com:443/ksql -ik -s -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" -X POST --data '{"ksql": "LIST ALL TOPICS;", "streamsProperties": {}}'
- Schema Registry
Get the external endpoint of the component:
oc get schemaregistry schemaregistry -ojsonpath='{.status.restConfig.externalEndpoint}'
Verify that you can reach the component endpoint. For example:
curl -ik https://schemaregistry.example.com:443/subjects
- Control Center
Get the external endpoint of the component:
oc get controlcenter controlcenter -ojsonpath='{.status.restConfig.externalEndpoint}'
Verify that you can reach the component endpoint. For example:
curl https://controlcenter.example.com:443/2.0/health/status -ik -s -H "Content-Type: application/json"
- REST Proxy
Get the external endpoint of the component:
oc get kafkarestproxy kafkarestproxy -ojsonpath='{.status.restConfig.externalEndpoint}'
Verify that you can reach the component endpoint. For example:
curl -ik https://kafkarestproxy.example.com:443/v3/clusters