Configure OpenShift Routes to Access Confluent Components

Confluent for Kubernetes (CFK) supports OpenShift routes for exposing Confluent Platform component services to the outside of the OpenShift platform.

When you configure Confluent components with routes, CFK creates a route resource for the Confluent component service, and external clients access the service at the HTTPS port, 443.

When using a route, you must configure the Confluent component with TLS.

NOTE: The examples in this section assume that you use the default namespace set in Create a namespace for CFK.

Configure external access to Kafka using routes

When configured to use routes, CFK creates a service for each broker in addition to a service for the bootstrap server. For N Kafka brokers, CFK creates N+1 route services:

  • One bootstrap service for the initial connection and for receiving metadata about the Kafka cluster.
  • N additional services, one for each broker, so that clients can address the brokers directly.

When a client accesses a Kafka cluster, it first connects to the bootstrap server route to get the metadata list of all the brokers in the cluster. Then the client finds the address of the broker it is interested in and connects directly to the broker to produce or consume data.

For the additional configuration steps required to allow external access to Metadata Service (MDS), see Configure Networking for RBAC.

To allow external access to Kafka using a route:

  1. Set the route property in the Kafka custom resource (CR).

    The following is a snippet of a Kafka CR:

    spec:
      listeners:
        external:
          tls:
            enabled: true       --- [1]
          externalAccess:
            type: route
            route:
              domain:           --- [2]
              wildcardPolicy:   --- [3]
              bootstrapPrefix:  --- [4]
              brokerPrefix:     --- [5]
              annotations:      --- [6]
    
    • [1] Required.

    • [2] Required. Set domain to the OpenShift domain.

      If you change this value on a running cluster, you must roll the cluster.

    • [3] Optional. It defaults to None if not configured. Allowed values are Subdomain and None.

    • [4] Optional. The prefix for the bootstrap advertised endpoint, used as bootstrapPrefix.domain. If not set, it defaults to the Kafka cluster name, kafka in this example.

    • [5] Optional. The prefix for the broker advertised endpoints, used as brokerPrefix.domain. If not configured, the prefix b is added, for example, b0.domain, b1.domain.

      If you change this value on a running cluster, you must roll the cluster.

    • [6] Optional.
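
    The following is a hypothetical filled-in version of the snippet above, assuming the example.com OpenShift domain and the default prefixes; substitute the values for your environment:

```yaml
spec:
  listeners:
    external:
      tls:
        enabled: true
      externalAccess:
        type: route
        route:
          domain: example.com      # your OpenShift domain
          wildcardPolicy: None
          bootstrapPrefix: kafka   # bootstrap endpoint becomes kafka.example.com
          brokerPrefix: b          # broker endpoints become b0.example.com, b1.example.com, ...
```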

  2. (Optional) If you want to set up more than one listener with routes, add one or more custom listeners in the Kafka CR.

    Note

    Multiple listeners with routes are supported for Confluent Platform 6.1 and later versions.

    spec:
      listeners:
        custom:
        - name:                 --- [1]
          port:                 --- [2]
          tls:
            enabled: true       --- [3]
          externalAccess:
            type: route         --- [4]
            route:
              domain:           --- [5]
              bootstrapPrefix:  --- [6]
              brokerPrefix:     --- [7]
    
    • [1] Required. The name of the custom listener. Do not use the reserved names internal, external, or token.
    • [2] Required. The port to bind to the custom listener. Do not use ports less than 9093; they are reserved.
    • [3] Required.
    • [4] Required.
    • [5] Required. The OpenShift domain name.
    • [6] Required. The prefix for the bootstrap advertised endpoint. Must be unique among all listeners.
    • [7] Required. The prefix for the broker advertised endpoints. Must be unique among all listeners.
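
    For example, a hypothetical custom listener for the example.com domain (the listener name customer, the port, and the prefixes are placeholders):

```yaml
spec:
  listeners:
    custom:
    - name: customer                      # placeholder listener name
      port: 9094
      tls:
        enabled: true
      externalAccess:
        type: route
        route:
          domain: example.com
          bootstrapPrefix: customer-kafka # bootstrap endpoint becomes customer-kafka.example.com
          brokerPrefix: customer-b        # broker endpoints become customer-b0.example.com, ...
```
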
  3. Apply the configuration:

    oc apply -f <Kafka CR>
    
  4. Add DNS entries for Kafka bootstrap server and brokers.

    Once the routes are created, you add DNS entries for Kafka brokers and the Kafka bootstrap service to your DNS table (or the method you use to get DNS entries recognized by your provider environment).

    You need the following to derive Kafka broker DNS entries:

    • Domain name of your OpenShift cluster (domain in Step #1)
    • External IP of the OpenShift router load balancer
    • Kafka bootstrap prefix and broker prefix

    To add DNS entries for Kafka:

    1. Get the IP address of the OpenShift router load balancer.

      The HAProxy load balancer serves as the router for route services, and generally, HAProxy runs in the openshift-ingress namespace.

      oc get svc --namespace openshift-ingress
      

      An example output:

      NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
      router-default            LoadBalancer   172.30.84.52     20.189.181.8   80:31294/TCP,443:32145/TCP   42h
      router-internal-default   ClusterIP      172.30.184.233   <none>         80/TCP,443/TCP,1936/TCP      42h
      

      From the example output above, the external IP of the router is 20.189.181.8.

    2. Get the Kafka DNS names:

      oc get routes | awk '{print $2}'
      
    3. Point all the Kafka DNS names to the external IP in your DNS service provider:

      The example below shows the DNS table entry, using:

      • Domain: example.com
      • Three broker replicas with the default prefix: b
      • The Kafka bootstrap prefix: kafka
      • Load balancer router external IP: 20.189.181.8
      20.189.181.8 b0.example.com b1.example.com b2.example.com kafka.example.com
      
  5. Validate the connections.

Configure external access to other Confluent Platform components using routes

External clients can connect to other Confluent Platform components using routes.

The access endpoint of each Confluent Platform component is: <component CR name>.<OpenShift domain>:443

For example, in the example.com domain with TLS enabled, you access the Confluent Platform components at endpoints such as https://controlcenter.example.com:443 and https://schemaregistry.example.com:443.

To allow external access to Confluent components using routes:

  1. Enable TLS for the component as described in Configure Network Encryption with Confluent for Kubernetes.

  2. Set the following in the component custom resource (CR):

    spec:
      externalAccess:
        type: route
        route:
          domain:          --- [1]
          prefix:          --- [2]
          wildcardPolicy:  --- [3]
          annotations:     --- [4]
    
    • [1] Required. Set domain to the domain name of your OpenShift cluster.

      If you change this value on a running cluster, you must roll the cluster.

    • [2] Optional. Set prefix to change the default route prefixes. The default is the component name, such as controlcenter, connect, replicator, schemaregistry, ksqldb.

      The value is used for the DNS entry. The component DNS name becomes <prefix>.<domain>.

      If not set, the default DNS name is <component name>.<domain>, for example, controlcenter.example.com.

      You may want to change the default prefixes for each component to avoid DNS conflicts when running multiple Kafka clusters.

      If you change this value on a running cluster, you must roll the cluster.

    • [3] Optional. It defaults to None if not configured. Allowed values are Subdomain and None.

    • [4] Required when REST Proxy is used for consuming messages; otherwise, optional.

      OpenShift routes support cookie-based sticky sessions by default. To use the client IP-based session affinity that REST Proxy requires:

      • Disable cookies by setting the annotation haproxy.router.openshift.io/disable_cookies: "true"
      • Enable source IP-based load balancing by setting the annotation haproxy.router.openshift.io/balance: source
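
      For example, a component CR fragment carrying these annotations might look like the following (the example.com domain is assumed):

```yaml
spec:
  externalAccess:
    type: route
    route:
      domain: example.com
      annotations:
        haproxy.router.openshift.io/disable_cookies: "true"  # disable cookie-based sticky sessions
        haproxy.router.openshift.io/balance: source          # use source IP-based load balancing
```
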
  3. Apply the configuration:

    oc apply -f <component CR>
    
  4. Add a DNS entry for each Confluent Platform component that you added a route to.

    Once the routes are created, you add a DNS entry associated with component routes to your DNS table (or whatever method you use to get DNS entries recognized by your provider environment).

    You need the following to derive Confluent Platform component DNS entries:

    • The domain name of your OpenShift cluster as set in Step #1.

    • External IP of the OpenShift router load balancer

    • The component prefix if set in Step #1 above. Otherwise, the default component name.

      A DNS name is made up of the prefix and the domain name. For example, controlcenter.example.com.


    To add DNS entries for Confluent components:

    1. Get the IP address of the OpenShift router load balancer.

      The HAProxy load balancer serves as the router for route services, and generally, HAProxy runs in the openshift-ingress namespace.

      oc get svc --namespace openshift-ingress
      

      An example output:

      NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
      router-default            LoadBalancer   172.30.84.52     20.189.181.8   80:31294/TCP,443:32145/TCP   42h
      router-internal-default   ClusterIP      172.30.184.233   <none>         80/TCP,443/TCP,1936/TCP      42h
      
    2. Get the component DNS names:

      oc get routes | awk '{print $2}'
      
    3. Point all the DNS names for the Confluent components to the external IP in your DNS service provider.

      The example below shows the DNS table entry, using:

      • Domain: example.com
      • Default component prefixes
      • Load balancer router external IP: 20.189.181.8
      20.189.181.8 connect.example.com controlcenter.example.com ksqldb.example.com schemaregistry.example.com
      
  5. Validate the connections.

Validate connections

The following are example steps to validate external access to Confluent Platform components, using the example.com domain and default component prefixes.

Control Center UI
In your browser, navigate to https://controlcenter.example.com:443.
Kafka
  1. Get the external endpoints of Kafka.

    • To get the broker endpoints:

      oc get kafka kafka -ojsonpath='{.status.listeners.external.advertisedExternalEndpoints}'
      
    • To get the Kafka bootstrap server endpoint:

      oc get kafka kafka -ojsonpath='{.status.listeners.external.externalEndpoint}'
      
  2. Create a topic.

    For this step, you need the kafka-topics tool on your local system. Install Confluent Platform on your local system to get access to the tool.

    For example:

    kafka-topics --create --topic mytest --partitions 3 --replication-factor 2 --bootstrap-server kafka.example.com:443
    
  3. In Control Center, validate that the mytest topic was created.
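
  Note that because the external listener uses TLS, the kafka-topics command typically needs a client configuration file passed with --command-config. A minimal sketch, assuming a truststore that contains the CA certificate for the listener (the path and password are placeholders):

```properties
# client.properties (placeholder values)
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=<truststore-password>
```

  For example: kafka-topics --create --topic mytest --partitions 3 --replication-factor 2 --bootstrap-server kafka.example.com:443 --command-config client.properties
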

Connect
  1. Get the external endpoint of the component:

    oc get connect connect -ojsonpath='{.status.restConfig.externalEndpoint}'
    
  2. Verify that you can reach the component endpoint. For example:

    curl https://connect.example.com:443 -ik -s -H "Content-Type: application/json"
    
ksqlDB
  1. Get the external endpoint of the component:

    oc get ksqldb ksqldb -ojsonpath='{.status.restConfig.externalEndpoint}'
    
  2. Verify that you can reach the component endpoint. For example:

    curl https://ksqldb.example.com:443/ksql -ik -s -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" -X POST --data '{"ksql": "LIST ALL TOPICS;", "streamsProperties": {}}'
    
Schema Registry
  1. Get the external endpoint of the component:

    oc get schemaregistry schemaregistry -ojsonpath='{.status.restConfig.externalEndpoint}'
    
  2. Verify that you can reach the component endpoint. For example:

    curl -ik https://schemaregistry.example.com:443/subjects
    
Control Center
  1. Get the external endpoint of the component:

    oc get controlcenter controlcenter -ojsonpath='{.status.restConfig.externalEndpoint}'
    
  2. Verify that you can reach the component endpoint. For example:

    curl https://controlcenter.example.com:443/2.0/health/status -ik -s -H "Content-Type: application/json"
    
REST Proxy
  1. Get the external endpoint of the component:

    oc get kafkarestproxy kafkarestproxy -ojsonpath='{.status.restConfig.externalEndpoint}'
    
  2. Verify that you can reach the component endpoint. For example:

    curl -ik https://kafkarestproxy.example.com:443/v3/clusters