Configure OpenShift Routes to Access Confluent Components
Confluent for Kubernetes (CFK) supports OpenShift routes for exposing Confluent Platform component services outside of the OpenShift platform.
When you configure Confluent components with routes, CFK creates a route resource for the Confluent component service, and external clients access the service at the HTTPS port, 443.
When using a route, you must configure the Confluent component with TLS.
NOTE: The examples in this section assume the default namespace you set in Create a namespace for CFK.
Configure external access to Kafka using routes
When configured to use routes, CFK creates a service for each broker in addition to a service for the bootstrap server. For N Kafka brokers, CFK creates N+1 route services:
- One as the bootstrap service for the initial connection and for receiving metadata about the Kafka cluster.
- N additional services, one for each broker, to address the brokers directly.
When a client accesses a Kafka cluster, it first connects to the bootstrap server route to get the metadata list of all the brokers in the cluster. Then the client finds the address of the broker it is interested in and connects directly to the broker to produce or consume data.
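For example, you can observe this two-step flow with a generic metadata listing tool such as kcat. The following is a minimal sketch, assuming the `example.com` domain and the default `kafka` bootstrap prefix used later in this section:

# Illustrative: list cluster metadata through the bootstrap route over TLS.
# kcat contacts the bootstrap address, then prints each broker's advertised
# route address, for example b0.example.com:443, b1.example.com:443, and so on.
# Assumes the route's CA is trusted; otherwise add -X ssl.ca.location=<ca.pem>.
kcat -L -b kafka.example.com:443 -X security.protocol=ssl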
For the additional configuration steps required to allow external access to Metadata Service (MDS), see Configure Networking for RBAC.
To use a route for external access to Kafka:
Set the `route` property in the Kafka custom resource (CR). The following is a snippet of a Kafka CR:

listeners:
  external:
    tls:
      enabled: true        --- [1]
    externalAccess:
      type: route
      route:
        domain:            --- [2]
        wildcardPolicy:    --- [3]
        bootstrapPrefix:   --- [4]
        brokerPrefix:      --- [5]
        annotations:       --- [6]
[1] Required.

[2] Required. Set `domain` to the OpenShift domain. If you change this value on a running cluster, you must roll the cluster.

[3] Optional. It defaults to `None` if not configured. Allowed values are `Subdomain` and `None`.

[4] Optional. `bootstrapPrefix` defines the prefix for the bootstrap advertised endpoint. The value is used to derive the bootstrap host as `bootstrapPrefix.domain`. If not set, it defaults to the cluster name of the component, `kafka` in this example. If you change this value on a running cluster, you must roll the cluster.

[5] Optional. `brokerPrefix` defines the prefix for the broker advertised endpoint and is used to derive the broker host as `brokerPrefix.domain`. If not configured, `b` is added as a prefix, for example, `b1.domain`, `b2.domain`. If you change this value on a running cluster, you must roll the cluster.

[6] Optional.
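For reference, a filled-in snippet might look like the following. The values shown are illustrative assumptions based on the `example.com` domain used later in this section, not requirements:

apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  listeners:
    external:
      tls:
        enabled: true
      externalAccess:
        type: route
        route:
          domain: example.com      # OpenShift domain (illustrative)
          wildcardPolicy: None
          bootstrapPrefix: kafka   # bootstrap host becomes kafka.example.com
          brokerPrefix: b          # broker hosts become b0.example.com, b1.example.com, ...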
Apply the configuration:
oc apply -f <Kafka CR>
Add DNS entries for Kafka bootstrap server and brokers.
Once the routes are created, you add DNS entries for Kafka brokers and the Kafka bootstrap service to your DNS table (or the method you use to get DNS entries recognized by your provider environment).
You need the following to derive Kafka broker DNS entries:
- Domain name of your OpenShift cluster (`domain` in Step #1)
- External IP of the OpenShift router load balancer
- Kafka bootstrap prefix and broker prefix
To add DNS entries for Kafka:
Get the IP address of the OpenShift router load balancer.
The HAProxy load balancer serves as the router for route services; HAProxy generally runs in the `openshift-ingress` namespace.

oc get svc --namespace openshift-ingress
An example output:
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
router-default            LoadBalancer   172.30.84.52     20.189.181.8   80:31294/TCP,443:32145/TCP   42h
router-internal-default   ClusterIP      172.30.184.233   <none>         80/TCP,443/TCP,1936/TCP      42h
In the above example, the external IP of the router is 20.189.181.8.
Get the Kafka DNS names:
oc get routes | awk '{print $2}'
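With the defaults in this example and the `example.com` domain, the printed host names would look something like the following (illustrative output; the first line is the HOST/PORT column header, and actual names depend on your cluster):

HOST/PORT
b0.example.com
b1.example.com
b2.example.com
kafka.example.com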
In your DNS service provider, point all the Kafka DNS names to the external IP.
The example below shows the DNS table entry, using:

- Domain: `example.com`
- Three broker replicas with the default prefix: `b`
- The Kafka bootstrap prefix: `kafka`
- Load balancer router external IP: `20.189.181.8`

20.189.181.8 b0.example.com b1.example.com b2.example.com kafka.example.com
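For a quick test from a single client machine, the same line can be added to the client's /etc/hosts file instead of a DNS provider; this is a local shortcut, not a production setup.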
Configure external access to other Confluent Platform components using routes
External clients can connect to other Confluent Platform components using routes.
The access endpoint of each Confluent Platform component is:
<component CR name>.<Kubernetes domain>:443
For example, in the `example.com` domain with TLS enabled, you access the Confluent Platform components at the following endpoints:
- https://connect.example.com:443
- https://replicator.example.com:443
- https://schemaregistry.example.com:443
- https://ksqldb.example.com:443
- https://controlcenter.example.com:443
To allow external access to Confluent components using routes:
Enable TLS for the component as described in Configure Network Encryption with Confluent for Kubernetes.
Set the following in the component custom resource (CR) and apply the configuration:
spec:
  externalAccess:
    type: route
    route:
      domain:            --- [1]
      prefix:            --- [2]
      wildcardPolicy:    --- [3]
      annotations:       --- [4]
[1] Required. Set `domain` to the domain name of your Kubernetes cluster. If you change this value on a running cluster, you must roll the cluster.

[2] Optional. Set `prefix` to change the default route prefixes. The default is the component name, such as `controlcenter`, `connect`, `replicator`, `schemaregistry`, `ksql`. The value is used for the DNS entry: the component DNS name becomes `<prefix>.<domain>`. If not set, the default DNS name is `<component name>.<domain>`, for example, `controlcenter.example.com`. You may want to change the default prefixes for each component to avoid DNS conflicts when running multiple Kafka clusters. If you change this value on a running cluster, you must roll the cluster.

[3] Optional. It defaults to `None` if not configured. Allowed values are `Subdomain` and `None`.

[4] Optional.
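For reference, a filled-in component CR might look like the following Schema Registry example. The domain and the TLS block are illustrative assumptions; configure TLS as described in the encryption documentation linked above:

apiVersion: platform.confluent.io/v1beta1
kind: SchemaRegistry
metadata:
  name: schemaregistry
  namespace: confluent
spec:
  tls:
    autoGeneratedCerts: true   # one possible TLS setup; see the linked encryption doc
  externalAccess:
    type: route
    route:
      domain: example.com      # DNS name becomes schemaregistry.example.com
      wildcardPolicy: None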
Apply the configuration:
oc apply -f <component CR>
Add a DNS entry for each Confluent Platform component that you added a route to.
Once the routes are created, you add a DNS entry associated with component routes to your DNS table (or whatever method you use to get DNS entries recognized by your provider environment).
You need the following to derive Confluent Platform component DNS entries:
- Domain name of your OpenShift cluster as set in Step #1
- External IP of the OpenShift router load balancer
- The component `prefix` if set in Step #1 above; otherwise, the default component name

A DNS name is made up of the `prefix` and the `domain` name. For example, `controlcenter.example.com`.
To add DNS entries for Confluent components:
Get the IP address of the OpenShift router load balancer.
The HAProxy load balancer serves as the router for route services; HAProxy generally runs in the `openshift-ingress` namespace.

oc get svc --namespace openshift-ingress
An example output:
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
router-default            LoadBalancer   172.30.84.52     20.189.181.8   80:31294/TCP,443:32145/TCP   42h
router-internal-default   ClusterIP      172.30.184.233   <none>         80/TCP,443/TCP,1936/TCP      42h
Get the component DNS names:
oc get routes | awk '{print $2}'
In your DNS service provider, point all the DNS names for the Confluent components to the external IP.
The example below shows the DNS table entry, using:

- Domain: `example.com`
- Default component prefixes
- Load balancer router external IP: `20.189.181.8`

20.189.181.8 connect.example.com controlcenter.example.com ksqldb.example.com schemaregistry.example.com
Validate connections
The following are example steps to validate external access to Confluent Platform components, using the `example.com` domain and default component prefixes.
- Control Center UI
- In your browser, navigate to https://controlcenter.example.com:443.
- Kafka
Get the external endpoints of Kafka.
To get the broker endpoints:
oc get kafka kafka -ojsonpath='{.status.listeners.external.advertisedExternalEndpoints}'
To get the Kafka bootstrap server endpoint:
oc get kafka kafka -ojsonpath='{.status.listeners.external.externalEndpoint}'
Create a topic.
For this step, you need the `kafka-topics` tool on your local system. Install Confluent Platform on your local system to get access to the tool.

For example:
kafka-topics --create --topic mytest --partitions 3 --replication-factor 2 --bootstrap-server kafka.example.com:443
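Because route listeners are TLS-encrypted, the client typically also needs SSL settings, which `kafka-topics` accepts through `--command-config`. A minimal sketch, assuming a hypothetical truststore that contains the CA certificate of the listener:

# client.properties (hypothetical values; adjust paths and passwords)
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=<truststore-password>

kafka-topics --create --topic mytest --partitions 3 --replication-factor 2 \
  --bootstrap-server kafka.example.com:443 --command-config client.properties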
In Control Center, validate that the `mytest` topic was created.
- Connect
Get the external endpoint of the component:
oc get connect connect -ojsonpath='{.status.restConfig.externalEndpoint}'
Verify that you can reach the component endpoint. For example:
curl https://connect.example.com:443 -ik -s -H "Content-Type: application/json"
- ksqlDB
Get the external endpoint of the component:
oc get ksqldb ksqldb -ojsonpath='{.status.restConfig.externalEndpoint}'
Verify that you can reach the component endpoint. For example:
curl https://ksqldb.example.com:443/ksql -ik -s -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" -X POST --data '{"ksql": "LIST ALL TOPICS;", "streamsProperties": {}}'
- Schema Registry
Get the external endpoint of the component:
oc get schemaregistry schemaregistry -ojsonpath='{.status.restConfig.externalEndpoint}'
Verify that you can reach the component endpoint. For example:
curl -ik https://schemaregistry.example.com:443/subjects
- Control Center
Get the external endpoint of the component:
oc get controlcenter controlcenter -ojsonpath='{.status.restConfig.externalEndpoint}'
Verify that you can reach the component endpoint. For example:
curl https://controlcenter.example.com:443/2.0/health/status -ik -s -H "Content-Type: application/json"