Configuration Reference for Google Cloud Storage Source Connector for Confluent Platform¶
To use this connector, specify the name of the connector class in the connector.class
configuration property.
connector.class=io.confluent.connect.gcs.GcsSourceConnector
Connector-specific configuration properties are described below.
Note
These are properties for the self-managed connector. If you are using Confluent Cloud, see Google Cloud Storage Source connector for Confluent Cloud.
GCS Parameters¶
gcs.bucket.name
The name of the GCS bucket.
- Type: string
- Importance: high
gcs.part.retries
Number of download retries of a single GCS part. Zero means no retries.
- Type: int
- Default: 3
- Valid Values: [0,…]
- Importance: medium
gcs.retry.backoff.ms
How long to wait in milliseconds before attempting the first retry of a failed GCS request. Upon a failure, this connector may wait up to twice as long as the previous wait, up to the maximum number of retries. This avoids retrying in a tight loop under failure scenarios.
- Type: long
- Default: 200
- Valid Values: [0,…]
- Importance: low
gcs.credentials.path
Path to credentials file. If empty, credentials are read from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
- Type: string
- Importance: high
gcs.credentials.provider.class
Set to a class that implements the
com.google.api.gax.core.CredentialsProvider
interface to use a custom credentials provider. Configure the class to the fully qualified name of your custom credentials provider class.
- Type: class
- Importance: high
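As a minimal sketch, these GCS properties might appear together in a connector configuration as follows; the bucket name and credentials path are placeholders:
connector.class=io.confluent.connect.gcs.GcsSourceConnector
gcs.bucket.name=my-gcs-bucket
gcs.credentials.path=/etc/secrets/gcp-credentials.json
gcs.part.retries=3
gcs.retry.backoff.ms=200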
Connector Parameters¶
gcs.poll.interval.ms
Frequency, in milliseconds, to poll for new or removed folders. This may result in updated task configurations that start polling for data in added folders or stop polling for data in removed folders.
- Type: long
- Default: 60000
- Importance: medium
format.class
Class responsible for converting storage objects to source records.
- Type: class
- Valid Values: Any class that implements one of the following three classes:
io.confluent.connect.cloud.storage.source.format.CloudStorageAvroFormat
io.confluent.connect.cloud.storage.source.format.CloudStorageJsonFormat
io.confluent.connect.cloud.storage.source.format.CloudStorageByteArrayFormat
- Importance: high
Note
Confluent provides the following three classes for you to use with the GCS Source connector. You can use the values below, or create your own implementation of the classes listed in Valid Values above.
io.confluent.connect.gcs.format.avro.AvroFormat
io.confluent.connect.gcs.format.json.JsonFormat
io.confluent.connect.gcs.format.bytearray.ByteArrayFormat
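For example, to re-ingest Avro data, you could set:
format.class=io.confluent.connect.gcs.format.avro.AvroFormat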
schema.cache.size
The size of the schema cache used in the Avro converter.
- Type: int
- Default: 50
- Valid Values: [1,…]
- Importance: low
record.batch.max.size
The maximum number of records to return each time storage is polled.
- Type: int
- Default: 200
- Valid Values: [1,…]
- Importance: medium
Generalized Connector Parameters¶
The following properties are applicable to the Generalized GCS Source connector only.
mode
The connector’s operation mode.
- Type: string
- Default: RESTORE_BACKUP
- Valid Values: [GENERIC, RESTORE_BACKUP]
- Importance: high
- Dependents: task.batch.size, file.discovery.starting.timestamp, bucket.listing.max.objects.threshold, topic.regex.list, partitioner.class, partition.field.name, path.format
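For example, to source arbitrary files in generic mode instead of restoring a GCS Sink backup:
mode=GENERIC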
topic.regex.list
A comma-separated list of pairs of type <kafka topic>:<regex> that is used to map file paths to Kafka topics. For example, "topic.regex.list": "kafka-topic-for-json:.*" sends all files to "kafka-topic-for-json". The expression "special-topic:.*\.json$" sends only files ending with ".json" to "special-topic". The connector ignores (doesn’t source) files that do not match any pattern. The connector sends files that match multiple mappings to the first topic in the list whose pattern matches the file.
- Type: list
- Default: “”
- Valid Values: A list of pairs in the form <kafka topic1>:<regex1>, <kafka topic2>:<regex2>, ...
- Importance: high
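As an illustrative sketch (the topic names are placeholders), the following maps JSON files and Avro files to separate topics:
topic.regex.list=json-topic:.*\.json$, avro-topic:.*\.avro$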
task.batch.size
The number of files assigned to each task at a time.
- Type: int
- Default: 10
- Valid Values: [1,…,2000]
- Importance: medium
file.discovery.starting.timestamp
A Unix timestamp that denotes where to start processing files. Any file encountered with a creation time earlier than this will be ignored.
- Type: long
- Default: 0
- Valid Values: [0,…]
- Importance: medium
bucket.listing.max.objects.threshold
An integer that represents the maximum number of objects the connector will index from the bucket at a time before failing. A value of -1 indicates no limit.
- Type: int
- Default: -1
- Importance: medium
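For example, to assign larger file batches to each task while capping indexing at an illustrative limit of 100000 objects:
task.batch.size=50
bucket.listing.max.objects.threshold=100000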
Auto topic creation¶
For more information about Auto topic creation, see Configuring Auto Topic Creation for Source Connectors.
Configuration properties accept regular expressions (regex) that are defined as Java regex.
topic.creation.groups
A list of group aliases that are used to define per-group topic configurations for matching topics. A default group always exists and matches all topics.
- Type: List of String types
- Default: empty
- Possible Values: The values of this property refer to any additional groups. A default group is always defined for topic configurations.
topic.creation.$alias.replication.factor
The replication factor for new topics created by the connector. This value must not be larger than the number of brokers in the Kafka cluster. If this value is larger than the number of Kafka brokers, an error occurs when the connector attempts to create a topic. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.
- Type: int
- Default: n/a
- Possible Values: >= 1 for a specific valid value or -1 to use the Kafka broker’s default value.
topic.creation.$alias.partitions
The number of topic partitions created by this connector. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.
- Type: int
- Default: n/a
- Possible Values: >= 1 for a specific valid value or -1 to use the Kafka broker’s default value.
topic.creation.$alias.include
A list of strings that represent regular expressions that match topic names. This list is used to include topics with matching values, and apply this group’s specific configuration to the matching topics. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group.
- Type: List of String types
- Default: empty
- Possible Values: Comma-separated list of exact topic names or regular expressions.
topic.creation.$alias.exclude
A list of strings representing regular expressions that match topic names. This list is used to exclude topics with matching values from getting the group’s specific configuration. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group. Note that exclusion rules override any inclusion rules for topics.
- Type: List of String types
- Default: empty
- Possible Values: Comma-separated list of exact topic names or regular expressions.
topic.creation.$alias.${kafkaTopicSpecificConfigName}
Any of the Changing Broker Configurations Dynamically for the version of the Kafka broker where the records will be written. The broker’s topic-level configuration value is used if the configuration is not specified for the rule. $alias applies to the default group as well as any group defined in topic.creation.groups.
- Type: property values
- Default: Kafka broker value
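Putting these together, a sketch of a topic creation setup might look like the following; the group name compacted and the include pattern are illustrative assumptions:
topic.creation.default.replication.factor=3
topic.creation.default.partitions=5
topic.creation.groups=compacted
topic.creation.compacted.include=gcs\..*
topic.creation.compacted.replication.factor=3
topic.creation.compacted.partitions=1
topic.creation.compacted.cleanup.policy=compact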
Storage Parameters¶
format.bytearray.extension
Output file extension for Byte Array Format. Defaults to .bin.
- Type: string
- Default: .bin
- Importance: low
format.bytearray.separator
String inserted between records for Byte Array Format. Defaults to System.lineSeparator() and may contain escape sequences like \n. An input record that contains the line separator looks like multiple records in the storage object output.
- Type: string
- Default: null
- Importance: low
topics.dir
Top-level directory where the data to be re-ingested into Kafka was stored.
- Type: string
- Default: topics
- Importance: high
directory.delim
Directory delimiter pattern.
- Type: string
- Default: /
- Importance: medium
behavior.on.error
Sets how the connector handles errors that occur when processing records.
- Type: string
- Default: fail
- Valid Values:
fail
,ignore
,log
- Importance: medium
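For example, to read data stored under a non-default top-level directory and log processing errors instead of failing (the directory name is a placeholder):
topics.dir=backup/topics
behavior.on.error=log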
Partitioner Parameters (Backup and Restore Connector Only)¶
partitioner.class
The partitioner to use when reading data from the store.
- Type: class
- Default: io.confluent.connect.gcs.source.partitioner.DefaultPartitioner
- Importance: high
partition.field.name
The name of the partitioning field when FieldPartitioner is used.
- Type: list
- Default: “”
- Importance: medium
path.format
The configuration that was used to set the format of the data directories when partitioning with a TimeBasedPartitioner. For example, if you set path.format to 'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH, then a valid data directory would be: /year=2015/month=12/day=07/hour=15/.
- Type: string
- Default: “”
- Importance: medium
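As a sketch, a Backup and Restore configuration for data written with hourly time-based partitioning might include the following; the TimeBasedPartitioner class name is assumed from the default partitioner’s package and may differ in your installation:
partitioner.class=io.confluent.connect.gcs.source.partitioner.TimeBasedPartitioner
path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH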
Confluent Platform license¶
confluent.topic.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form host1:port1,host2:port2,.... Because these servers are only used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
- Type: list
- Importance: high
confluent.topic
Name of the Kafka topic used for Confluent Platform configuration, including licensing information.
- Type: string
- Default: _confluent-command
- Importance: low
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with less than 3 brokers, you must set this to the number of brokers (often 1).
- Type: int
- Default: 3
- Importance: low
Confluent license properties¶
You can put license-related properties in the connector configuration, or starting with Confluent Platform version 6.0, you can put license-related properties in the Connect worker configuration instead of in each connector configuration.
This connector is proprietary and requires a license. The license information is stored in the _confluent-command
topic. If the broker requires SSL for connections, you must include the security-related confluent.topic.*
properties
as described below.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments. If you are a subscriber, contact Confluent Support for more information.
- Type: string
- Default: “”
- Valid Values: Confluent Platform license
- Importance: high
confluent.topic.ssl.truststore.location
The location of the trust store file.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for the client and can be used for two-way authentication of the client.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for the client.
- Type: password
- Default: null
- Importance: high
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: “PLAINTEXT”
- Importance: medium
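For brokers that require SSL, a sketch of the license-topic client settings might look like the following; the paths and passwords are placeholders:
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>
confluent.topic.ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
confluent.topic.ssl.keystore.password=<keystore-password>
confluent.topic.ssl.key.password=<key-password>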
License topic configuration¶
A Confluent enterprise license is stored in the _confluent-command
topic.
This topic is created by default and contains the license that corresponds to
the license key supplied through the confluent.license
property. No public
keys are stored in Kafka topics.
The following describes how the default _confluent-command
topic is
generated under different scenarios:
- A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
- Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing.
You can change the name of the _confluent-command
topic using the
confluent.topic
property (for instance, if your environment has strict
naming conventions). The example below shows this change and the configured
Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that
you can use for development and testing. For a production environment, you add
the normal producer, consumer, and topic configuration properties to the
connector properties, prefixed with confluent.topic.
.
License topic ACLs¶
The _confluent-command
topic contains the license that corresponds to the
license key supplied through the confluent.license
property. It is created
by default. Connectors that access this topic require the following ACLs
configured:
CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.
DESCRIBE, READ, and WRITE on the
_confluent-command
topic.
Important
You can also use DESCRIBE and READ without WRITE to restrict access to read-only for license topic ACLs. If a topic exists, the LicenseManager will not try to create the topic.
You can provide access either individually for each principal that will
use the license or use a wildcard entry to
allow all clients. The following examples show commands that you can use to
configure ACLs for the resource cluster and _confluent-command
topic.
Set a CREATE and DESCRIBE ACL on the resource cluster:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation CREATE --operation DESCRIBE --cluster
Set a DESCRIBE, READ, and WRITE ACL on the
_confluent-command
topic:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
Override Default Configuration Properties¶
You can override the replication factor using
confluent.topic.replication.factor
. For example, when using a Kafka cluster
as a destination with fewer than three brokers (for development and testing), you
should set the confluent.topic.replication.factor
property to 1
.
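For example, for a development cluster with a single broker:
confluent.topic.replication.factor=1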
You can override producer-specific properties by using the
producer.override.*
prefix (for source connectors) and consumer-specific
properties by using the consumer.override.*
prefix (for sink connectors).
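A sketch of a producer override for this source connector (the linger.ms value is illustrative):
producer.override.linger.ms=100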
You can use the defaults or customize the other properties as well. For example,
the confluent.topic.client.id
property defaults to the name of the connector
with -licensing
suffix. You can specify the configuration settings for
brokers that require SSL or SASL for client connections using this prefix.
You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.