.. _connect_gcs_source:

Google Cloud Storage (GCS) Source Connector for |cp|
====================================================

The |kconnect-long| GCS Source Connector provides the capability to read data exported to GCS by the |ak| :ref:`Connect GCS Sink connector ` and publish it back to a |ak| topic. Depending on the format and partitioner used to write the data to GCS, this connector can write to the destination topic using the same partitions as the original messages exported to GCS and maintain the same message order. The connector selects folders based on the partitioner configuration and reads each folder's GCS objects in alphabetical order. Each record is read based on the format selected. The configuration is designed to mirror the |ak| :ref:`Connect GCS Sink connector `, so it should be possible to create source connector configurations with only minor changes to the original sink configuration.

.. important:: The recommended practice is to create topics manually in the destination |ak| cluster with the correct number of partitions before running the source connector. If the topics do not exist, |kconnect| relies on auto-topic creation, and the number of partitions is based upon the |ak| broker defaults. If there are more partitions in the destination cluster, the extra partitions are not used. If there are fewer partitions in the destination cluster, the connector task throws an exception and stops the moment it tries to write to a |ak| partition that does not exist.
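For example, the following is a minimal sketch of pre-creating a destination topic before starting the source connector. The topic name, partition count, and replication factor are placeholders, and the ``--bootstrap-server`` flag assumes |ak| 2.2 or later:

.. code-block:: bash

   # Pre-create the destination topic so its partition count matches the
   # topic originally exported to GCS (names and counts are examples).
   kafka-topics --bootstrap-server localhost:9092 --create \
     --topic copy_of_gcs_topic --partitions 3 --replication-factor 3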
Be aware of the following connector actions:

* The connector ignores any GCS object with a name that does not start with the configured topics directory. This name is ``/topics/`` by default.
* The connector ignores any GCS object that is below the topics directory but has an extension that does not match the configured format. For example, a JSON file is ignored when ``format.class`` is set for Avro files.
* The connector stops and fails if the GCS object's name does not match the expected format or is in an unexpected location.
* The connector ignores any GCS object that is below the topics directory but has a partitioner structure that does not match the configured class. For example, a ``DefaultPartitioner`` structure is ignored when ``partitioner.class`` is set for ``TimeBasedPartitioner``.

Avoid the following configuration issues:

* A file with the correct extension and a valid name format, for example ``<topic>+<kafkaPartition>+<startOffset>.<extension>``, placed in a folder of a different topic is read normally and written to the topic defined by its filename.
* If a Field Partitioner is incorrectly configured to match the expected folder, it can break the ordering guarantees of data written by a GCS sink that used a deterministic sink partitioner.

.. _gcs_source_connector_features:

Features
--------

The GCS Source Connector offers a variety of features:

* **Pluggable Data Format with or without Schema**: Out of the box, the connector supports reading data from GCS in Avro and JSON format. Besides records with schema, the connector supports importing plain JSON records without schema in text files, one record per line. In general, the connector may accept any format that provides an implementation of the Format interface.
* **At Least Once Delivery**: In the event of a task failure, the connector guarantees no messages are lost, although the last few messages may be processed again.
* **Matching Source Partitioning**: Messages are put back onto the same |ak| partition of the topic to which they were originally written.
* **Source Partition Ordering**: The connector reads records back in time order in each topic-source partition if the ``DefaultPartitioner`` or a ``TimeBasedPartitioner`` is used. If a ``FieldPartitioner`` is used, it isn't possible to guarantee the order of these messages.
* **Pluggable Partitioner**: The connector comes out of the box with partitioners that support default partitioning based on |ak| partitions, field partitioning, and time-based partitioning in days or hours. You may implement your own partitioners by extending the ``Partitioner`` class. Additionally, you can customize time-based partitioning by extending the ``TimeBasedPartitioner`` class.

.. important:: All partitioners notice new topic folders with the inbuilt task reconfiguration thread. The ``DefaultPartitioner`` detects new partition folders. The ``FieldPartitioner`` notices new folders for the fields specified. However, the ``TimeBasedPartitioner`` does not currently detect new files for a new time period.

.. important:: Be careful when both the :ref:`Connect GCS sink connector ` and the GCS Source Connector use the same |ak| cluster, since this results in the source connector writing to the same topic being consumed by the sink connector. This causes a continuous feedback loop that creates an ever-increasing number of duplicate |ak| records and GCS objects. You can avoid this feedback loop by writing to a different topic than the one being consumed by the sink connector. Use the :ref:`RegexRouter ` with the source connector to change the names of the topics where the records are written. Or, use the :ref:`Extract Topic SMT` with the source connector to change the topic name based upon a field in each message, as in the sketch following this note.
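The following is a minimal sketch of the second approach, assuming the Confluent ``ExtractTopic`` transformation is available on the plugin path and that each record value contains a field naming the destination topic (the ``topic_name`` field here is illustrative):

.. code-block:: properties

   # Route each record to the topic named by a field in its value.
   transforms=ExtractTopic
   transforms.ExtractTopic.type=io.confluent.connect.transforms.ExtractTopic$Value
   transforms.ExtractTopic.field=topic_name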
Prerequisites
-------------

The following are required to run the |kconnect-long| GCS Source Connector:

* |ak| Broker: |cp| 3.3.0 or above, or |ak| 0.11.0 or above
* |kconnect|: |cp| 4.0.0 or above, or |ak| 1.0.0 or above
* Java 1.8

.. _gcs_source_connector_install:

Install the GCS Source Connector
--------------------------------

.. include:: ../../includes/connector-install.rst

.. include:: ../../includes/connector-install-hub.rst

.. codewithvars:: bash

   confluent-hub install confluentinc/kafka-connect-gcs-source:latest

.. include:: ../../includes/connector-install-version.rst

.. codewithvars:: bash

   confluent-hub install confluentinc/kafka-connect-gcs-source:1.0.0-preview

--------------------------
Install Connector Manually
--------------------------

`Download and extract the ZIP file `_ for your connector and then follow the manual connector installation :ref:`instructions `.

License
-------

.. include:: ../../includes/enterprise-license.rst

See :ref:`gcs_source_connector_license_config` for license properties and :ref:`gcs-src-license-topic-configuration` for information about the license topic.

.. _gcs_source_connector_quickstart:

Quick Start
-----------

The following uses the ``GcsSinkConnector`` to write a file from the |ak| topic named ``gcs_topic`` to GCS. Then, the ``GcsSourceConnector`` loads that Avro file from GCS to the |ak| topic named ``copy_of_gcs_topic``.

#. Follow the instructions from :ref:`the GCS Sink Connector quick start ` to set up the data to use below.

#. Install the connector through the `Confluent Hub Client `__.

   .. codewithvars:: bash

      # run from your Confluent Platform installation directory
      confluent-hub install confluentinc/kafka-connect-gcs-source:latest

   .. tip:: By default, the plugin is installed into ``share/confluent-hub-components`` and the directory is added to the plugin path. If this is the first connector you have installed, you may need to restart the |kconnect| server for the plugin path change to take effect.

----------------------
Property-based example
----------------------

#. Create a ``gcs-source-connector.properties`` file with the following contents. This file is included with the connector in ``etc/kafka-connect-gcs/gcs-source-connector.properties``. This configuration is typically used with :ref:`standalone workers `:

   .. codewithvars:: properties

      name=gcs-source
      tasks.max=1
      connector.class=io.confluent.connect.gcs.GcsSourceConnector

      # enter the bucket name and GCS credentials here
      gcs.bucket.name=
      gcs.credentials.path=
      format.class=io.confluent.connect.gcs.format.avro.AvroFormat

      confluent.topic.bootstrap.servers=localhost:9092
      confluent.topic.replication.factor=1

      # for production environments, enter the Confluent license here
      # confluent.license=

   .. tip:: The ``confluent.topic.*`` properties define where the Confluent license is stored in |ak|, so the |ak| bootstrap addresses are required. ``confluent.topic.replication.factor`` may not be larger than the number of |ak| brokers in the destination cluster, so it is set to ``1`` here for demonstration purposes. Always use at least ``3`` in production configurations.

#. Edit the ``gcs-source-connector.properties`` file to add the following properties:

   .. codewithvars:: properties

      transforms=AddPrefix
      transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter
      transforms.AddPrefix.regex=.*
      transforms.AddPrefix.replacement=copy_of_$0

   .. important:: Adding these properties renames the output topic of the messages to ``copy_of_gcs_topic``. This prevents a continuous feedback loop of messages.

#. Load the GCS Source Connector.

   .. include:: ../../../includes/confluent-local-connector-limit.rst

   .. codewithvars:: bash

      |confluent_load| gcs-source|dash| -d gcs-source-connector.properties

   .. important:: Don't use the :ref:`cli` in production environments.

#. Confirm that the connector is in a ``RUNNING`` state.

   .. codewithvars:: bash

      |confluent_status| gcs-source

#. Confirm that the messages are being sent to |ak|.

   .. codewithvars:: bash

      kafka-avro-console-consumer \
          --bootstrap-server localhost:9092 \
          --property schema.registry.url=http://localhost:8081 \
          --topic copy_of_gcs_topic \
          --from-beginning | jq '.'

#. The response should be 9 records as follows.

   .. sourcecode:: bash

      {"f1": "value1"}
      {"f1": "value2"}
      {"f1": "value3"}
      {"f1": "value4"}
      {"f1": "value5"}
      {"f1": "value6"}
      {"f1": "value7"}
      {"f1": "value8"}
      {"f1": "value9"}
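In production environments, where the :ref:`cli` should not be used, you can check connector and task status through the |kconnect| REST API instead. The following is a minimal sketch, assuming the worker listens on the default port ``8083`` and the connector is named ``gcs-source``:

.. code-block:: bash

   # Query connector and task status; a failed task's "trace" field
   # contains the stack trace.
   curl -s http://localhost:8083/connectors/gcs-source/status | jq '.'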
------------------
REST-based example
------------------

#. Use this setting with :ref:`distributed workers `. Write the following JSON to ``config.json``, configure all of the required values, and use the following command to post the configuration to one of the distributed connect workers. For more information, see the |kconnect-long| :ref:`REST API `.

   .. code-block:: json

      {
        "name" : "GCSSourceConnector",
        "config" : {
          "format.class": "io.confluent.connect.gcs.format.avro.AvroFormat",
          "connector.class" : "io.confluent.connect.gcs.GcsSourceConnector",
          "gcs.bucket.name" : "confluent-kafka-connect-gcs-testing",
          "gcs.credentials.path" : "#/path/to/credentials/keys.json",
          "tasks.max" : "1",
          "confluent.topic.bootstrap.servers" : "localhost:9092",
          "confluent.topic.replication.factor" : "1",
          "confluent.license" : " Omit to enable trial mode ",
          "transforms" : "AddPrefix",
          "transforms.AddPrefix.type" : "org.apache.kafka.connect.transforms.RegexRouter",
          "transforms.AddPrefix.regex" : ".*",
          "transforms.AddPrefix.replacement" : "copy_of_$0"
        }
      }

   .. note:: Change the ``confluent.topic.bootstrap.servers`` property to include your broker address(es), and change the ``confluent.topic.replication.factor`` to ``3`` for staging or production use.

#. Use curl to post the configuration to one of the |kconnect-long| workers. Change ``http://localhost:8083/`` to the endpoint of one of your |kconnect-long| workers.

   .. code-block:: bash

      curl -s -X POST -H 'Content-Type: application/json' --data @config.json http://localhost:8083/connectors

#. Use the following command to update the configuration of the existing connector.

   .. code-block:: bash

      curl -s -X PUT -H 'Content-Type: application/json' --data @config.json http://localhost:8083/connectors/GCSSourceConnector/config

#. To consume records written by the connector to the configured |ak| topic, run the following command:

   .. codewithvars:: bash

      kafka-avro-console-consumer --bootstrap-server localhost:9092 --property schema.registry.url=http://localhost:8081 --topic copy_of_gcs_topic --from-beginning

.. _gcs_source_connector_partitioners:

Google Cloud Storage Source Connector Partitions
------------------------------------------------

The connector supports a variety of partitioners. The GCS Source connector's partitioner determines how records read from GCS objects are partitioned into |ak| topics. Messages are put back onto the same |ak| partition of the topic to which they were originally written. The partitioner is specified in the connector configuration with the ``partitioner.class`` configuration property.

The GCS Source connector comes with the following partitioners:

* **Default Partitioner**: To use the Default Partitioner, set ``partitioner.class`` to ``io.confluent.connect.gcs.source.partitioner.DefaultPartitioner``.
* **Field Partitioner**: A partitioner that uses record values of the configured ``partition.field.name`` to determine partitions. To use the Field Partitioner, set ``partitioner.class`` to ``io.confluent.connect.gcs.source.partitioner.FieldPartitioner``.
* **Time Based Partitioner**: To use the Time Based Partitioner, set ``partitioner.class`` to ``io.confluent.connect.gcs.source.partitioner.TimeBasedPartitioner`` (see the sketch after this list).
* **Daily Partitioner**: A subclass of the ``TimeBasedPartitioner``. To use the Daily Partitioner, set ``partitioner.class`` to ``io.confluent.connect.gcs.source.partitioner.DailyPartitioner``.
* **Hourly Partitioner**: A subclass of the ``TimeBasedPartitioner``. To use the Hourly Partitioner, set ``partitioner.class`` to ``io.confluent.connect.gcs.source.partitioner.HourlyPartitioner``.
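The following is a minimal sketch of a Time Based Partitioner configuration. Because the source configuration is designed to mirror the sink configuration, the ``path.format``, ``partition.duration.ms``, ``locale``, and ``timezone`` property names are assumed to carry over from the GCS sink connector; verify them against the connector's configuration options:

.. code-block:: properties

   # Assumed to mirror the sink connector's time-based partitioner settings;
   # path.format must match the folder layout the sink wrote to GCS.
   partitioner.class=io.confluent.connect.gcs.source.partitioner.TimeBasedPartitioner
   path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
   partition.duration.ms=3600000
   locale=en-US
   timezone=UTC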
.. _gcs_source_connector_data_formats:

Google Cloud Storage Source Connector Data Formats
--------------------------------------------------

The GCS Source connector supports several data formats:

* **Avro Format**: To read Avro data, set ``format.class`` to ``io.confluent.connect.gcs.format.avro.AvroFormat``.
* **JSON Format**: To read JSON data, set ``format.class`` to ``io.confluent.connect.gcs.format.json.JsonFormat``.
* **Raw Bytes Format**: To read raw bytes, set ``format.class`` to ``io.confluent.connect.gcs.format.bytearray.ByteArrayFormat``.

Troubleshooting Connector and Task Failures
-------------------------------------------

-----------
Stack Trace
-----------

You can use the |kconnect| :ref:`REST API ` to check the status of the connectors and tasks. If a task or connector has failed, the ``trace`` field includes a reason and a stack trace.

-----------------------------------------
Fewer Partitions than in the Source Topic
-----------------------------------------

If there are fewer partitions in the destination cluster than in the source topic, the connector task throws an exception and stops as soon as it tries to write to a |ak| partition that does not exist. You will see the following error messages in the |kconnect| worker log. The recommended practice is to create topics manually in the destination |ak| cluster with the correct number of partitions before running the source connector.

::

   INFO WorkerSourceTask{id=gcs-source-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:409)
   INFO WorkerSourceTask{id=gcs-source-0} flushing 1 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:426)
   ERROR WorkerSourceTask{id=gcs-source-0} Failed to flush, timed out while waiting for producer to flush outstanding 1 messages (org.apache.kafka.connect.runtime.WorkerSourceTask:431)
   ERROR WorkerSourceTask{id=gcs-source-0} Failed to commit offsets (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:114)

.. include:: ../../kafka-connect-s3-source/includes/error-handling.rst

.. note:: The connector always ignores a file which is not in ``<topic>+<kafkaPartition>+<startOffset>.<extension>`` format.

Additional Documentation
------------------------

.. toctree::
   :maxdepth: 1

   configuration_options
   changelog