.. _gcs_config_options:

Configuration Properties
========================

To use this connector, specify the name of the connector class in the ``connector.class`` configuration property.

.. codewithvars:: properties

    connector.class=io.confluent.connect.gcs.GcsSinkConnector

Connector-specific configuration properties are described below.

Connector
^^^^^^^^^

``format.class``
  The format class to use when writing data to the store.

  * Type: class
  * Importance: high

``flush.size``
  Number of records written to the store before invoking file commits.

  * Type: int
  * Importance: high

``rotate.interval.ms``
  The time interval in milliseconds to invoke file commits. This configuration ensures that file commits are invoked at every configured interval. It is useful when the data ingestion rate is low and the connector has not written enough messages to commit files. The default value -1 means that this feature is disabled.

  * Type: long
  * Default: -1
  * Importance: high

``rotate.schedule.interval.ms``
  The time interval in milliseconds to periodically invoke file commits. This configuration ensures that file commits are invoked at every configured interval. The commit time is adjusted to 00:00 of the selected timezone, and the commit is performed at the scheduled time regardless of the previous commit time or the number of messages. This configuration is useful when you have to commit your data based on current server time, such as at the beginning of every hour. The default value -1 means that this feature is disabled.

  * Type: long
  * Default: -1
  * Importance: medium

``schema.cache.size``
  The size of the schema cache used in the Avro converter.

  * Type: int
  * Default: 1000
  * Importance: low

``enhanced.avro.schema.support``
  Enable enhanced Avro schema support in the Avro converter: enum symbol preservation and package name awareness.

  * Type: boolean
  * Default: false
  * Importance: low

``connect.meta.data``
  Allow the Connect converter to add its metadata to the output schema.

  * Type: boolean
  * Default: true
  * Importance: low

``retry.backoff.ms``
  The retry backoff in milliseconds. This config is used to notify |kconnect-long| to retry delivering a message batch or performing recovery in case of transient exceptions.

  * Type: long
  * Default: 5000
  * Importance: low

``filename.offset.zero.pad.width``
  Width to zero-pad offsets in the store's filenames if the offsets are too short, in order to provide fixed-width filenames that can be ordered by simple lexicographic sorting.

  * Type: int
  * Default: 10
  * Valid Values: [0,...]
  * Importance: low

``avro.codec``
  The Avro compression codec to be used for output files. Available values: null, deflate, snappy and bzip2 (CodecSource is org.apache.avro.file.CodecFactory).

  * Type: string
  * Default: null
  * Valid Values: [null, deflate, snappy, bzip2]
  * Importance: low
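For example, a minimal sketch combining size-based and time-based commits with the Avro format class shipped with the connector; the topic name and the threshold values are illustrative placeholders, not recommendations:

.. codewithvars:: properties

    connector.class=io.confluent.connect.gcs.GcsSinkConnector
    # Placeholder topic name.
    topics=orders
    format.class=io.confluent.connect.gcs.format.avro.AvroFormat
    # Commit a file after 1000 records...
    flush.size=1000
    # ...or after a 15-minute interval, whichever condition is met first.
    rotate.interval.ms=900000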
Schema
^^^^^^

``schema.compatibility``
  The schema compatibility rule to use when the connector is observing schema changes. The supported configurations are NONE, BACKWARD, FORWARD and FULL.

  * Type: string
  * Default: NONE
  * Importance: high

GCS
^^^

``gcs.bucket.name``
  The GCS bucket name.

  * Type: string
  * Importance: high

``gcs.part.size``
  The part size in GCS multipart uploads.

  * Type: int
  * Default: 26214400
  * Valid Values: [5242880,...,2147483647]
  * Importance: high

``gcs.part.retries``
  Number of upload retries of a single GCS part. Zero means no retries.

  * Type: int
  * Default: 3
  * Valid Values: [0,...]
  * Importance: medium

``gcs.parts.prefix``
  DEPRECATED. Multipart uploads no longer use a part prefix.

  * Type: string
  * Default: parts
  * Valid Values: non-empty string
  * Importance: low

``gcs.retry.backoff.ms``
  How long to wait in milliseconds before attempting the first retry of a failed GCS request. Upon a failure, this connector may wait up to twice as long as the previous wait, up to the maximum number of retries. This avoids retrying in a tight loop under failure scenarios.

  * Type: long
  * Default: 200
  * Valid Values: [0,...]
  * Importance: low

``format.bytearray.extension``
  Output file extension for ByteArrayFormat. Defaults to '.bin'.

  * Type: string
  * Default: .bin
  * Importance: low

``format.bytearray.separator``
  String inserted between records for ByteArrayFormat. Defaults to 'System.lineSeparator()' and may contain escape sequences like '\n'. An input record that contains the line separator looks like multiple records in the output GCS object.

  * Type: string
  * Default: null
  * Importance: low

``gcs.credentials.path``
  Path to the credentials file. If empty, credentials are read from the GOOGLE_APPLICATION_CREDENTIALS environment variable.

  * Type: string
  * Default: ""
  * Importance: low

``gcs.compression.type``
  Compression type for files written to GCS. Applied when using JsonFormat or ByteArrayFormat. Available values: none, gzip.

  * Type: string
  * Default: none
  * Valid Values: [none, gzip]
  * Importance: low

Storage
^^^^^^^

``storage.class``
  The underlying storage layer.

  * Type: class
  * Importance: high

``topics.dir``
  Top-level directory to store the data ingested from Kafka.

  * Type: string
  * Default: topics
  * Importance: high

``store.url``
  Store's connection URL, if applicable.

  * Type: string
  * Default: null
  * Importance: high

``directory.delim``
  Directory delimiter pattern.

  * Type: string
  * Default: /
  * Importance: medium

``file.delim``
  File delimiter pattern.

  * Type: string
  * Default: +
  * Importance: medium

Partitioner
^^^^^^^^^^^

``partitioner.class``
  The partitioner to use when writing data to the store. You can use ``DefaultPartitioner``, which preserves the Kafka partitions; ``FieldPartitioner``, which partitions the data to different directories according to the value of the partitioning field specified in ``partition.field.name``; or ``TimeBasedPartitioner``, which partitions data according to ingestion time.

  * Type: class
  * Default: io.confluent.connect.storage.partitioner.DefaultPartitioner
  * Importance: high
  * Dependents: ``partition.field.name``, ``partition.duration.ms``, ``path.format``, ``locale``, ``timezone``

``partition.field.name``
  The name of the partitioning field when ``FieldPartitioner`` is used.

  * Type: list
  * Default: ""
  * Importance: medium

``partition.duration.ms``
  The duration of a partition in milliseconds, used by ``TimeBasedPartitioner``. The default value -1 means that ``TimeBasedPartitioner`` is not being used.

  * Type: long
  * Default: -1
  * Importance: medium

``path.format``
  This configuration is used to set the format of the data directories when partitioning with ``TimeBasedPartitioner``. The format set in this configuration converts the Unix timestamp to proper directory strings. For example, if you set ``path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH``, the data directories have the format ``/year=2015/month=12/day=07/hour=15/``.

  * Type: string
  * Default: ""
  * Importance: medium

``locale``
  The locale to use when partitioning with ``TimeBasedPartitioner``. Used to format dates and times. For example, use ``en-US`` for US English, ``en-GB`` for UK English, or ``fr-FR`` for French (in France). These may vary by Java version. See the `available locales `__.

  * Type: string
  * Default: ""
  * Importance: medium

``timezone``
  The timezone to use when partitioning with ``TimeBasedPartitioner``. Used to format and compute dates and times. Use standard short names for timezones such as ``UTC`` or (without daylight savings) ``PST``, ``EST``, and ``ECT``, or longer standard names such as ``America/Los_Angeles``, ``America/New_York``, and ``Europe/Paris``. These may vary by Java version. See the `available timezones within each locale `__, such as `those within the US English locale `__.

  * Type: string
  * Default: ""
  * Importance: medium

``timestamp.extractor``
  The extractor that gets the timestamp for records when partitioning with ``TimeBasedPartitioner``. It can be set to ``Wallclock``, ``Record`` or ``RecordField`` in order to use one of the built-in timestamp extractors, or be given the fully-qualified class name of a user-defined class that extends the ``TimestampExtractor`` interface.

  * Type: string
  * Default: Wallclock
  * Importance: medium

``timestamp.field``
  The record field to be used as the timestamp by the timestamp extractor.

  * Type: string
  * Default: timestamp
  * Importance: medium
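As a concrete illustration, the following sketch configures hourly, UTC-based partitioning using the properties above, taking the timestamp from a field of each record; the field name ``ts`` is an illustrative placeholder:

.. codewithvars:: properties

    partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
    # One partition directory per hour.
    partition.duration.ms=3600000
    path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
    locale=en-US
    timezone=UTC
    # Use a record field rather than wall-clock time; "ts" is a placeholder.
    timestamp.extractor=RecordField
    timestamp.field=ts

With this setup, the data directories have the form ``/year=2015/month=12/day=07/hour=15/`` as described under ``path.format``.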
.. _gcs-sink-connector-license-config:

|cp| license
^^^^^^^^^^^^

``confluent.topic.bootstrap.servers``
  A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form ``host1:port1,host2:port2,...``. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

  * Type: list
  * Importance: high

``confluent.topic``
  Name of the Kafka topic used for Confluent Platform configuration, including licensing information.

  * Type: string
  * Default: _confluent-command
  * Importance: low

``confluent.topic.replication.factor``
  The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with fewer than 3 brokers, you must set this to the number of brokers (often 1).

  * Type: int
  * Default: 3
  * Importance: low

----------------------------
Confluent license properties
----------------------------

.. include:: ../includes/security-info.rst

.. include:: ../includes/platform-license.rst

.. include:: ../includes/security-configs.rst

.. _gcs_license-topic-configuration:

.. include:: ../includes/platform-license-detail.rst

.. include:: ../includes/overriding-default-config-properties.rst
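For example, a minimal sketch of the licensing-related properties above for a development cluster with a single broker; the bootstrap address is an illustrative placeholder:

.. codewithvars:: properties

    # Placeholder address; point this at the cluster that stores licensing information.
    confluent.topic.bootstrap.servers=localhost:9092
    confluent.topic=_confluent-command
    # With fewer than 3 brokers (here, 1), set the replication factor to the broker count.
    confluent.topic.replication.factor=1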