.. _co-deployment-examples:

===============================
Deploying other configurations
===============================

:ref:`co-deployment` provides a basic |co-long| and |cp| deployment example.
The following sections provide information about other deployment
configurations.

.. _co-multiple-cp-clusters:

Deploy Multiple |cp| clusters
------------------------------

To deploy multiple clusters, you deploy each additional |cp| cluster to a
different namespace. Make sure to give each cluster a name that differs from
every other cluster in the same Kubernetes cluster. Note that when running
multiple clusters in a single Kubernetes cluster, you **do not** install
additional |co-long| instances.

For example, if you have one |co| instance and you want to deploy two |ak-tm|
clusters, you could name the first |ak| cluster **kafka1** and the second |ak|
cluster **kafka2**, and then deploy each one in a different namespace.

Additionally, because you are not installing a second |co| instance, you need
to make sure the Docker registry secret is installed in the new namespace. To
do this with the Helm install command, add ``global.injectPullSecret=true``
when you enable the component.

.. note:: The parameter ``global.injectPullSecret=true`` is only required if
          the Docker secret does not exist in the new namespace *or* if a
          Docker secret is actually required to pull images. If you run an
          install with ``global.injectPullSecret=true`` in a namespace where
          the secret already exists, Helm returns an error stating that the
          resources already exist.

Using ``kafka2`` in namespace ``operator2`` as an example, the command would
resemble the following:

::

   helm install -f gcp.yaml \
     --name kafka2 \
     --namespace operator2 \
     --set kafka.enabled=true,global.injectPullSecret=true \
     ./confluent-operator/
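After the install completes, you can confirm that the injected registry secret exists in the new namespace. This is a quick sketch that assumes the ``operator2`` namespace from this example and the ``confluent-docker-registry`` secret name used in the service account patch:

```shell
# Namespace used for the second cluster in this example (an assumption
# matching the helm install command for the kafka2 release).
NS=operator2

# The broker pods for the new release should appear in its namespace.
kubectl -n "$NS" get pods

# global.injectPullSecret=true should have created the registry secret here.
kubectl -n "$NS" get secret confluent-docker-registry
```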
Once the Docker secret is created in the new namespace, patch the default
service account in that namespace:

.. sourcecode:: properties

   kubectl -n operator2 patch serviceaccount default -p '{"imagePullSecrets": [{"name": "confluent-docker-registry" }]}'

If you are using a private or local registry with basic authentication, use
the following command instead:

.. sourcecode:: properties

   kubectl -n operator2 patch serviceaccount default -p '{"imagePullSecrets": [{"name": "" }]}'

.. _co-multiple-availability-zones:

Use Multiple availability zones
--------------------------------

To use multiple availability zones (AZs), you first need to configure the
**zones** values block in the ``.yaml`` file. The example below shows three
zones (us-central1-a, -b, and -c):

::

   provider:
     name: gcp
     region: us-central1
     kubernetes:
       deployment:
         zones:
           - us-central1-a
           - us-central1-b
           - us-central1-c

.. important:: If your Kubernetes cluster spans zones a, b, and c, but you
               configure only zones a and b in the yaml block shown above,
               |co-long| still schedules pods across zones a, b, *and* c.
               The storage disks for all pods, however, are created in zones
               a and b only.

.. note:: Kubernetes nodes in public clouds are tagged with their AZs, and
          Kubernetes automatically attempts to spread pods across these
          zones. For more information, see `Running in multiple zones
          <https://kubernetes.io/docs/setup/best-practices/multiple-zones/>`_.

Add components to an existing |ak| cluster
-------------------------------------------

To add components to an existing cluster, you modify the ``.yaml`` file and
then run the ``helm install`` command for the added component. For example,
if you want to add KSQL to your cluster, you would add the KSQL values block
to your ``.yaml`` file and run the install command for KSQL only:
.. sourcecode:: bash

   helm install \
     -f ./providers/gcp.yaml \
     --name ksql \
     --namespace operator \
     --set ksql.enabled=true \
     ./confluent-operator

Use Node Affinity and Pod Anti-Affinity
----------------------------------------

The following provides information about using Node Affinity and Pod
Anti-Affinity.

.. note:: To use these features, you need to add labels to resources in your
          Kubernetes environment. See `Assigning Pods to Nodes
          <https://kubernetes.io/docs/concepts/configuration/assign-pod-node/>`_
          for details.

* **Node Affinity:** If you have special hardware where you want one or more
  pods to run, use Node Affinity. This pins those pods to the special
  hardware nodes. The ``values.yaml`` file for each component has the
  following section that you can use for this purpose. See `Affinity and
  anti-affinity
  <https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity>`_
  for more information.

  ::

     ## The node Affinity configuration uses preferredDuringSchedulingIgnoredDuringExecution
     ## https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
     nodeAffinity: {}
     #nodeAffinity:
     #  key: components
     #  values:
     #    - connect
     #    - app

* **Pod Anti-Affinity:** |zk| and |ak| should run on their own nodes. You can
  force |zk| and |ak| to never run on the same node by using the
  ``values.yaml`` section below. Note that your Kubernetes cluster must have
  a rack topology domain label for this to work. See `anti-affinity
  <https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity>`_
  for more information.

  ::

     ## Pod Anti-Affinity
     ## It uses preferredDuringSchedulingIgnoredDuringExecution
     ## https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#interlude-built-in-node-labels
     ## Use this capability if the cluster has some kind of concept of racks
     rack: {}
     #rack:
     #  topology: kubernetes.io/hostname

Use |crep| with multiple |ak| clusters
----------------------------------------

The following steps guide you through deploying |crep| on multiple |ak|
clusters. This example is useful for testing and development purposes only.

Prerequisites:

* Make sure external DNS is configured and running on your development
  platform.
* Make sure you are allowed to create DNS names.

The following steps:

* Are based on a |gcp| environment.
* Use the example ``mydevplatform.gcp.cloud`` DNS name.
* Use two-way TLS security.

----------------
Deploy clusters
----------------

Complete the following steps to deploy the clusters. Use the :ref:`example
deployment instructions <co-deployment>`.

#. Deploy Operator in namespace ``operator``.

#. Deploy the destination |ak| and |zk| clusters in namespace ``kafka-dest``.
   Use the following names:

   * |zk| cluster: ``zookeeper-dest``
   * |ak| cluster: ``kafka-dest``

#. Deploy the source |ak| and |zk| clusters in namespace ``kafka-src``. Use
   the following names:

   * |zk| cluster: ``zookeeper-src``
   * |ak| cluster: ``kafka-src``

#. Deploy |crep| in namespace ``kafka-dest`` using the default name
   ``replicator``. Set ``replicator.dependencies.kafka.bootstrapEndpoint`` to
   ``kafka-dest:9071``. Configure the endpoint for TLS one-way security.

-----------------
Configure |crep|
-----------------

Complete the following steps to configure |crep|.

#. On your local machine, use ``kubectl exec`` to start a bash session on one
   of the pods in the cluster. The example uses the default pod name
   ``replicator-0`` on a |ak| cluster using the name ``kafka-dest``.

   ::

      kubectl -n kafka-dest exec -it replicator-0 bash

#. On the |crep| pod, create and populate a file named ``test.json``. There
   is no text editor installed in the containers, so use the ``cat`` command
   as shown below to create this file. Use ``CTRL+D`` to save the file.
   ::

      cat << EOF > test.json
      {
        "name": "test-replicator",
        "config": {
          "connector.class": "io.confluent.connect.replicator.ReplicatorSourceConnector",
          "tasks.max": "4",
          "topic.whitelist": "test-topic",
          "key.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
          "value.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
          "src.kafka.bootstrap.servers": "kafka-src.mydevplatform.gcp.cloud:9092",
          "src.kafka.security.protocol": "SSL",
          "src.kafka.ssl.keystore.location": "/tmp/keystore.jks",
          "src.kafka.ssl.keystore.password": "mystorepassword",
          "src.kafka.ssl.key.password": "mykeypassword",
          "src.kafka.ssl.truststore.location": "/tmp/truststore.jks",
          "src.kafka.ssl.truststore.password": "mystorepassword",
          "dest.kafka.bootstrap.servers": "kafka-dest:9071",
          "dest.kafka.security.protocol": "PLAINTEXT",
          "confluent.license": "",
          "confluent.topic.replication.factor": "3"
        }
      }
      EOF

#. From one of the |crep| pods, enter the command below to POST the |crep|
   configuration:

   ::

      curl -XPOST -H "Content-Type: application/json" --data @test.json https://localhost:8083/connectors -kv

#. Verify that the |crep| connector is created. The command below should
   return ``["test-replicator"]``.

   ::

      curl -XGET -H "Content-Type: application/json" https://localhost:8083/connectors -kv

----------------
Test Replicator
----------------

Complete the following steps to test |crep|.

#. On your local machine, use ``kubectl exec`` to start a bash session on one
   of the pods in the cluster. The example uses the default pod name
   ``kafka-src-0`` on a |ak| cluster using the name ``kafka-src``.

   ::

      kubectl -n kafka-src exec -it kafka-src-0 bash

#. On the pod, create and populate a file named ``kafka.properties``. There
   is no text editor installed in the containers, so use the ``cat`` command
   as shown below to create this file. Use ``CTRL+D`` to save the file.
   ::

      cat << EOF > kafka.properties
      bootstrap.servers=kafka-src.mydevplatform.gcp.cloud:9092
      security.protocol=SSL
      ssl.keystore.location=/tmp/keystore.jks
      ssl.keystore.password=mystorepassword
      ssl.key.password=mystorepassword
      ssl.truststore.location=/tmp/truststore.jks
      ssl.truststore.password=mystorepassword
      EOF

#. Validate the ``kafka.properties`` file (make sure your DNS is configured
   correctly):

   ::

      kafka-broker-api-versions \
        --command-config kafka.properties \
        --bootstrap-server kafka-src.mydevplatform.gcp.cloud:9092

#. Create a topic on the source |ak| cluster. Enter the following command on
   the |ak| pod:

   ::

      kafka-topics --create --topic test-topic \
        --replication-factor 1 --partitions 4 \
        --command-config kafka.properties \
        --bootstrap-server kafka-src.mydevplatform.gcp.cloud:9092

#. Produce to the source |ak| cluster ``kafka-src``:

   ::

      seq 10000 | kafka-console-producer --topic test-topic \
        --broker-list kafka-src.mydevplatform.gcp.cloud:9092 \
        --producer.config kafka.properties

#. From a new terminal, start a bash session on ``kafka-dest-0``:

   ::

      kubectl -n kafka-dest exec -it kafka-dest-0 bash

#. On the pod, create the following ``kafka.properties`` file:

   ::

      cat << EOF > kafka.properties
      bootstrap.servers=kafka-dest:9071
      security.protocol=PLAINTEXT
      EOF

#. Validate the ``kafka.properties`` file:

   ::

      kafka-broker-api-versions --command-config kafka.properties \
        --bootstrap-server kafka-dest:9071

#. Validate that ``test-topic`` is created in ``kafka-dest``:

   ::

      kafka-topics --describe --topic test-topic \
        --bootstrap-server kafka-dest:9071

#. Confirm delivery in the destination |ak| cluster ``kafka-dest``:

   ::

      kafka-console-consumer --from-beginning --topic test-topic \
        --bootstrap-server kafka-dest:9071 \
        --consumer.config kafka.properties
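As an optional end-to-end check, you can compare the number of records produced with the number that arrived on the destination. This is a sketch run on the ``kafka-dest-0`` pod, assuming the ``test-topic`` name and the destination ``kafka.properties`` file from the steps above; ``--max-messages`` and ``--timeout-ms`` are standard ``kafka-console-consumer`` options:

```shell
# seq printed one record per line on the source side, so the expected
# count is 10000.
EXPECTED=$(seq 10000 | wc -l)

# Read a bounded number of records from the destination and count them.
# --max-messages exits once the count is reached; --timeout-ms guards
# against hanging if replication is still in progress.
ACTUAL=$(kafka-console-consumer --from-beginning --topic test-topic \
  --bootstrap-server kafka-dest:9071 \
  --consumer.config kafka.properties \
  --max-messages "$EXPECTED" --timeout-ms 60000 | wc -l)

echo "expected=$EXPECTED actual=$ACTUAL"
```

If the two counts match, every produced record was replicated to the destination cluster.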