Configuration Properties
To use this connector, specify the name of the connector class in the connector.class
configuration property.
connector.class=io.confluent.connect.gcs.GcsSinkConnector
Connector-specific configuration properties are described below.
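For example, a minimal configuration sketch might look like the following (the connector name, topic, bucket name, and bootstrap server are placeholder values to adapt to your environment):
name=gcs-sink
connector.class=io.confluent.connect.gcs.GcsSinkConnector
tasks.max=1
topics=gcs_topic
gcs.bucket.name=my-gcs-bucket
gcs.part.size=5242880
flush.size=3
storage.class=io.confluent.connect.gcs.storage.GcsStorage
format.class=io.confluent.connect.gcs.format.avro.AvroFormat
confluent.topic.bootstrap.servers=localhost:9092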
Connector
format.class
The format class to use when writing data to the store.
- Type: class
- Importance: high
flush.size
Number of records written to store before invoking file commits.
- Type: int
- Importance: high
rotate.interval.ms
The time interval in milliseconds at which to invoke file commits. This configuration ensures that file commits are invoked at every configured interval. It is useful when the data ingestion rate is low and the connector has not written enough messages to commit files. The default value -1 means that this feature is disabled.
- Type: long
- Default: -1
- Importance: high
rotate.schedule.interval.ms
The time interval in milliseconds at which to periodically invoke file commits. This configuration ensures that file commits are invoked at every configured interval. The time of the commit is adjusted to 00:00 of the selected timezone, and the commit is performed at the scheduled time regardless of the previous commit time or the number of messages. This configuration is useful when you have to commit your data based on current server time, such as at the beginning of every hour. The default value -1 means that this feature is disabled.
- Type: long
- Default: -1
- Importance: medium
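For example, to commit files after 1000 records or at the top of every hour in UTC, whichever comes first, a sketch might combine these settings (scheduled rotation is computed against the timezone property described in the Partitioner section):
flush.size=1000
rotate.schedule.interval.ms=3600000
timezone=UTC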
schema.cache.size
The size of the schema cache used in the Avro converter.
- Type: int
- Default: 1000
- Importance: low
enhanced.avro.schema.support
Enable enhanced Avro schema support in AvroConverter: enum symbol preservation and package name awareness.
- Type: boolean
- Default: false
- Importance: low
connect.meta.data
Allow the Connect converter to add its metadata to the output schema.
- Type: boolean
- Default: true
- Importance: low
retry.backoff.ms
The retry backoff in milliseconds. This config is used to notify Kafka Connect to retry delivering a message batch or performing recovery in case of transient exceptions.
- Type: long
- Default: 5000
- Importance: low
filename.offset.zero.pad.width
Width to zero-pad offsets in the store's filenames when offsets are too short, in order to provide fixed-width filenames that can be ordered by simple lexicographic sorting.
- Type: int
- Default: 10
- Valid Values: [0,…]
- Importance: low
avro.codec
The Avro compression codec to be used for output files. Available values: null, deflate, snappy, and bzip2 (the codec source is org.apache.avro.file.CodecFactory).
- Type: string
- Default: null
- Valid Values: [null, deflate, snappy, bzip2]
- Importance: low
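For instance, to write Avro output files compressed with deflate, a configuration sketch might include:
format.class=io.confluent.connect.gcs.format.avro.AvroFormat
avro.codec=deflate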
Schema
schema.compatibility
The schema compatibility rule to use when the connector is observing schema changes. The supported configurations are NONE, BACKWARD, FORWARD, and FULL.
- Type: string
- Default: NONE
- Importance: high
GCS
gcs.bucket.name
The name of the GCS bucket.
- Type: string
- Importance: high
gcs.part.size
The part size, in bytes, used for GCS multipart uploads.
- Type: int
- Default: 26214400
- Valid Values: [5242880,…,2147483647]
- Importance: high
gcs.part.retries
Number of upload retries of a single GCS part. Zero means no retries.
- Type: int
- Default: 3
- Valid Values: [0,…]
- Importance: medium
gcs.parts.prefix
DEPRECATED. Multipart uploads no longer use a part prefix.
- Type: string
- Default: parts
- Valid Values: non-empty string
- Importance: low
gcs.retry.backoff.ms
How long to wait in milliseconds before attempting the first retry of a failed GCS request. Upon a failure, this connector may wait up to twice as long as the previous wait, up to the maximum number of retries. This avoids retrying in a tight loop under failure scenarios.
- Type: long
- Default: 200
- Valid Values: [0,…]
- Importance: low
format.bytearray.extension
Output file extension for ByteArrayFormat. Defaults to ‘.bin’
- Type: string
- Default: .bin
- Importance: low
format.bytearray.separator
String inserted between records for ByteArrayFormat. Defaults to 'System.lineSeparator()' and may contain escape sequences like '\n'. An input record that contains the line separator will look like multiple records in the output GCS object.
- Type: string
- Default: null
- Importance: low
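As a sketch, to write raw bytes with a custom extension and an explicit newline separator (the .dat extension is an arbitrary choice; in a Java properties file, \n is parsed as a newline character):
format.class=io.confluent.connect.gcs.format.bytearray.ByteArrayFormat
format.bytearray.extension=.dat
format.bytearray.separator=\n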
gcs.credentials.path
Path to credentials file. If empty, credentials are read from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
- Type: string
- Default: “”
- Importance: low
gcs.compression.type
Compression type for files written to GCS. Applied when using JsonFormat or ByteArrayFormat. Available values: none, gzip.
- Type: string
- Default: none
- Valid Values: [none, gzip]
- Importance: low
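For example, to write gzip-compressed JSON objects using a dedicated credentials file (the bucket name and file path are placeholders), a sketch might include:
gcs.bucket.name=my-gcs-bucket
gcs.credentials.path=/etc/connect/gcs-credentials.json
format.class=io.confluent.connect.gcs.format.json.JsonFormat
gcs.compression.type=gzip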
Storage
storage.class
The underlying storage layer.
- Type: class
- Importance: high
topics.dir
Top-level directory to store the data ingested from Kafka.
- Type: string
- Default: topics
- Importance: high
store.url
Store’s connection URL, if applicable.
- Type: string
- Default: null
- Importance: high
directory.delim
Directory delimiter pattern.
- Type: string
- Default: /
- Importance: medium
file.delim
File delimiter pattern.
- Type: string
- Default: +
- Importance: medium
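Taken together, these settings control how object names are composed. For illustration, with the defaults above and the default partitioner, a committed Avro object for a hypothetical topic gcs_topic, Kafka partition 0, starting at offset 0, would be named roughly:
topics/gcs_topic/partition=0/gcs_topic+0+0000000000.avro
Here topics comes from topics.dir, / from directory.delim, + from file.delim, and the ten-digit offset from filename.offset.zero.pad.width.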
Partitioner
partitioner.class
The partitioner to use when writing data to the store. You can use DefaultPartitioner, which preserves the Kafka partitions; FieldPartitioner, which partitions the data to different directories according to the value of the partitioning field specified in partition.field.name; or TimeBasedPartitioner, which partitions data according to ingestion time.
- Type: class
- Default: io.confluent.connect.storage.partitioner.DefaultPartitioner
- Importance: high
- Dependents: partition.field.name, partition.duration.ms, path.format, locale, timezone
partition.field.name
The name of the partitioning field when FieldPartitioner is used.
- Type: list
- Default: “”
- Importance: medium
partition.duration.ms
The duration of a partition in milliseconds used by TimeBasedPartitioner. The default value -1 means that TimeBasedPartitioner is not being used.
- Type: long
- Default: -1
- Importance: medium
path.format
This configuration is used to set the format of the data directories when partitioning with TimeBasedPartitioner. The format set in this configuration converts the Unix timestamp to proper directory strings. For example, if you set path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH, the data directories will have the format /year=2015/month=12/day=07/hour=15/.
- Type: string
- Default: “”
- Importance: medium
locale
The locale to use when partitioning with TimeBasedPartitioner. Used to format dates and times. For example, use en-US for US English, en-GB for UK English, or fr-FR for French (in France). These may vary by Java version. See the available locales.
- Type: string
- Default: “”
- Importance: medium
timezone
The timezone to use when partitioning with TimeBasedPartitioner. Used to format and compute dates and times. Use standard short names for timezones such as UTC or (without daylight savings) PST, EST, and ECT, or longer standard names such as America/Los_Angeles, America/New_York, and Europe/Paris. These may vary by Java version. See the available timezones within each locale, such as those within the US English locale.
- Type: string
- Default: “”
- Importance: medium
timestamp.extractor
The extractor that gets the timestamp for records when partitioning with TimeBasedPartitioner. It can be set to Wallclock, Record, or RecordField in order to use one of the built-in timestamp extractors, or be given the fully-qualified class name of a user-defined class that extends the TimestampExtractor interface.
- Type: string
- Default: Wallclock
- Importance: medium
timestamp.field
The record field to be used as timestamp by the timestamp extractor.
- Type: string
- Default: timestamp
- Importance: medium
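Putting the partitioner properties together, a sketch of hourly time-based partitioning keyed on a record field named ts (a hypothetical field name) might look like:
partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
partition.duration.ms=3600000
path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
locale=en-US
timezone=UTC
timestamp.extractor=RecordField
timestamp.field=ts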
Confluent Platform license
confluent.topic.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form host1:port1,host2:port2,…. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
- Type: list
- Importance: high
confluent.topic
Name of the Kafka topic used for Confluent Platform configuration, including licensing information.
- Type: string
- Default: _confluent-command
- Importance: low
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with less than 3 brokers, you must set this to the number of brokers (often 1).
- Type: int
- Default: 3
- Importance: low
Confluent license properties
Note
This connector is proprietary and requires a license. The license information
is stored in the _confluent-command
topic. If the broker requires SSL for
connections, you must include the security-related confluent.topic.*
properties as described below.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments. If you are a subscriber, please contact Confluent Support for more information.
- Type: string
- Default: “”
- Valid Values: Confluent Platform license
- Importance: high
confluent.topic.ssl.truststore.location
The location of the trust store file.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for client and can be used for two-way authentication for client.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for client.
- Type: password
- Default: null
- Importance: high
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: “PLAINTEXT”
- Importance: medium
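For brokers that require SSL for client connections, a sketch of the security-related properties might look like the following (the file paths and placeholder passwords are illustrative):
confluent.topic.bootstrap.servers=kafka1:9093
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/etc/kafka/secrets/truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>
confluent.topic.ssl.keystore.location=/etc/kafka/secrets/keystore.jks
confluent.topic.ssl.keystore.password=<keystore-password>
confluent.topic.ssl.key.password=<key-password>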
License topic configuration
A Confluent enterprise license is stored in the _confluent-command
topic.
This topic is created by default and contains the license that corresponds to
the license key supplied through the confluent.license
property.
Note
No public keys are stored in Kafka topics.
The following describes how the default _confluent-command
topic is
generated under different scenarios:
- A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
- Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing.
You can change the name of the _confluent-command
topic using the
confluent.topic
property (for instance, if your environment has strict
naming conventions). The example below shows this change and the configured
Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that
you can use for development and testing. For a production environment, you add
the normal producer, consumer, and topic configuration properties to the
connector properties, prefixed with confluent.topic.
.
Overriding Default Configuration Properties
You can override the replication factor using confluent.topic.replication.factor. For example, when using a Kafka cluster as a destination with fewer than three brokers (for development and testing), you should set the confluent.topic.replication.factor property to 1.
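For example, for a single-broker development cluster:
confluent.topic.replication.factor=1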
You can override producer-specific properties by using the
confluent.topic.producer.
prefix and consumer-specific properties by using
the confluent.topic.consumer.
prefix.
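As an illustrative sketch, the following overrides one standard producer setting and one standard consumer setting for the licensing clients (the values shown are arbitrary):
confluent.topic.producer.compression.type=lz4
confluent.topic.consumer.max.poll.records=100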
You can use the defaults or customize the other properties as well. For example,
the confluent.topic.client.id
property defaults to the name of the connector
with a -licensing suffix. You can specify the configuration settings for
suffix. You can specify the configuration settings for
brokers that require SSL or SASL for client connections using this prefix.
You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.