Google Cloud Spanner Sink Connector Configuration Properties

To use this connector, specify the name of the connector class in the connector.class configuration property.

connector.class=io.confluent.connect.spanner.SpannerSinkConnector

Connector-specific configuration properties are described below.

Connection

gcp.spanner.instance.id

The ID of the Spanner instance to connect to.

  • Type: string
  • Importance: high
gcp.spanner.database.id

The ID of the database where topic tables are located or will be created.

  • Type: string
  • Importance: high
gcp.spanner.credentials.path

The path to the JSON service key file. If empty, credentials are read from the gcp.spanner.credentials.json setting.

  • Type: string
  • Default: “”
  • Importance: high
gcp.spanner.credentials.json

The contents of the JSON service key file. If empty, credentials are read from the gcp.spanner.credentials.path setting.

  • Type: password
  • Default: null
  • Importance: high
gcp.spanner.proxy.url

Google Cloud Spanner proxy settings encoded in URL syntax. Set this property only if you need to access Spanner through a proxy.

  • Type: string
  • Default: “”
  • Importance: low
gcp.spanner.proxy.user

Spanner proxy user. Set this property only if you need to access Spanner through a proxy. Using gcp.spanner.proxy.user instead of embedding the username and password in gcp.spanner.proxy.url enables the password to be hidden in the logs.

  • Type: string
  • Default: null
  • Importance: low
gcp.spanner.proxy.password

Spanner proxy password. Set this property only if you need to access Spanner through a proxy. Using gcp.spanner.proxy.password instead of embedding the username and password in gcp.spanner.proxy.url enables the password to be hidden in the logs.

  • Type: password
  • Default: null
  • Importance: low
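
For example, a minimal connection configuration using a service key file might look like the following sketch; the instance ID, database ID, key-file path, and proxy address are placeholder values:

gcp.spanner.instance.id=my-instance
gcp.spanner.database.id=my-database
gcp.spanner.credentials.path=/path/to/service-key.json

# Only if Spanner must be reached through a proxy (illustrative address):
gcp.spanner.proxy.url=https://proxy.example.com:8080
gcp.spanner.proxy.user=proxy-user
gcp.spanner.proxy.password=proxy-password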

Writes

insert.mode

The insertion mode to use. Supported modes are:

insert

Use the standard INSERT functionality. An error is thrown if the row to be inserted already exists in the table.

update

Use the UPDATE functionality. An error is thrown if the row to update doesn’t exist in the table.

upsert

This mode works like INSERT, except that if the row already exists, its column values are overwritten with the ones provided.

  • Type: string
  • Default: INSERT
  • Valid Values: one of [UPSERT, INSERT, UPDATE]
  • Importance: high
max.batch.size

The maximum number of records that can be batched into a single insert, update, or upsert to Spanner.

  • Type: int
  • Default: 1000
  • Valid Values: [1,…]
  • Importance: medium
error.mode

Specifies how to handle errors that result from writes to Spanner.

  • Type: string
  • Default: fail
  • Valid Values: one of [IGNORE, FAIL, WARN]
  • Importance: medium
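
For example, the following illustrative settings upsert rows, batch up to 500 records per write, and log a warning rather than failing the task on write errors:

insert.mode=UPSERT
max.batch.size=500
error.mode=WARN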

Data Mapping

table.name.format

A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name.

For example, kafka_${topic} for the topic ‘orders’ will map to the table name ‘kafka_orders’.

Spanner constraints for table names are {a-z|A-Z}[{a-z|A-Z|0-9|_}+].

  • Type: string
  • Default: ${topic}
  • Valid Values: Any non-empty string value.
  • Importance: medium
pk.mode

The primary key mode; see also the pk.fields documentation for how the two settings interact. Supported modes are:

kafka

Kafka coordinates are used as the PK.

record_key

Field(s) from the record key are used, which may be a primitive or a struct.

record_value

Field(s) from the record value are used, which must be a struct.

  • Type: string
  • Default: kafka
  • Valid Values: one of [RECORD_VALUE, KAFKA, RECORD_KEY]
  • Importance: high
pk.fields

List of comma-separated primary key field names. The runtime interpretation of this setting depends on the pk.mode:

kafka

A trio representing the Kafka coordinates. If empty, the default values are connect_topic__, connect_partition__, connect_offset__.

record_key

Extracts the specified key fields. If empty, all fields from the key struct are used. If the key is a primitive, exactly one field name must be configured.

record_value

Extracts the desired value fields. If empty, all fields from the value struct are used.

Spanner does not allow Array type columns to be selected as PKs.

  • Type: list
  • Default: “”
  • Importance: medium
fields.whitelist

List of comma-separated record value field names used to filter the columns written to the destination table. If empty, all fields from the record value are used.

Note

The pk.fields setting determines which field(s) form the primary key columns in the destination table, while fields.whitelist applies to the remaining (non-key) columns.

  • Type: list
  • Default: “”
  • Importance: medium
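
Putting these together, the following sketch (the field names id, product, and quantity are hypothetical) writes the topic ‘orders’ to the table kafka_orders, uses the value field id as the primary key, and writes only the product and quantity value fields as non-key columns:

table.name.format=kafka_${topic}
pk.mode=record_value
pk.fields=id
fields.whitelist=product,quantity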

DDL Support

auto.create

Whether to automatically create the destination table, based on the record schema, by issuing CREATE if the table is found to be missing.

  • Type: boolean
  • Default: false
  • Importance: medium
auto.evolve

Whether to automatically add columns to the table schema, by issuing ALTER, when they are found to be missing relative to the record schema.

  • Type: boolean
  • Default: false
  • Importance: medium
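
For example, to let the connector both create missing tables and add missing columns:

auto.create=true
auto.evolve=true

Both settings default to false, so the connector issues no DDL unless you opt in.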

Retries

retry.timeout.ms

The maximum amount of time to retry on errors before failing the task.

  • Type: long
  • Default: 60000
  • Valid Values: [0,…]
  • Importance: medium
request.timeout.ms

The amount of time to wait on a request to Spanner before timing out and retrying.

  • Type: long
  • Default: 6000
  • Valid Values: [0,…]
  • Importance: medium
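
For example, the following illustrative values allow up to two minutes of retries, with each individual request timing out after ten seconds:

retry.timeout.ms=120000
request.timeout.ms=10000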

Confluent Platform license

confluent.topic.bootstrap.servers

A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form host1:port1,host2:port2,…. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

  • Type: list
  • Importance: high

confluent.topic

Name of the Kafka topic used for Confluent Platform configuration, including licensing information.

  • Type: string
  • Default: _confluent-command
  • Importance: low

confluent.topic.replication.factor

The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with fewer than 3 brokers, you must set this to the number of brokers (often 1).

  • Type: int
  • Default: 3
  • Importance: low
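
For example, a single-broker development environment might use:

confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1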

Confluent license properties

Note

This connector is proprietary and requires a license. The license information is stored in the _confluent-command topic. If the broker requires SSL for connections, you must include the security-related confluent.topic.* properties as described below.

confluent.license

Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments.

If you are a subscriber, please contact Confluent Support for more information.

  • Type: string
  • Default: “”
  • Valid Values: Confluent Platform license
  • Importance: high
confluent.topic.ssl.truststore.location

The location of the trust store file.

  • Type: string
  • Default: null
  • Importance: high
confluent.topic.ssl.truststore.password

The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.

  • Type: password
  • Default: null
  • Importance: high
confluent.topic.ssl.keystore.location

The location of the key store file. This is optional for clients and can be used for two-way client authentication.

  • Type: string
  • Default: null
  • Importance: high
confluent.topic.ssl.keystore.password

The store password for the key store file. This is optional for clients and is only needed if ssl.keystore.location is configured.

  • Type: password
  • Default: null
  • Importance: high
confluent.topic.ssl.key.password

The password of the private key in the key store file. This is optional for clients.

  • Type: password
  • Default: null
  • Importance: high
confluent.topic.security.protocol

Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

  • Type: string
  • Default: “PLAINTEXT”
  • Importance: medium

License topic configuration

A Confluent enterprise license is stored in the _confluent-command topic. This topic is created by default and contains the license that corresponds to the license key supplied through the confluent.license property.

Note

No public keys are stored in Kafka topics.

The following describes how the default _confluent-command topic is generated under different scenarios:

  • A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
  • Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.

Here is an example of the minimal properties for development and testing.

You can change the name of the _confluent-command topic using the confluent.topic property (for instance, if your environment has strict naming conventions). The example below shows this change and the configured Kafka bootstrap server.

confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092

The example above shows the minimally required bootstrap server property that you can use for development and testing. For a production environment, you add the normal producer, consumer, and topic configuration properties to the connector properties, prefixed with confluent.topic..
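
For example, if the brokers require SSL, the security-related properties described earlier can be added with the prefix (the truststore path and password below are placeholders):

confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/path/to/truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>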

License topic ACLs

The _confluent-command topic contains the license that corresponds to the license key supplied through the confluent.license property. It is created by default. Connectors that access this topic require the following ACLs configured:

  • CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.
  • DESCRIBE, READ, and WRITE on the _confluent-command topic.

You can provide access either individually for each principal that will use the license or use a wildcard entry to allow all clients. The following examples show commands that you can use to configure ACLs for the resource cluster and _confluent-command topic.

  1. Set a CREATE and DESCRIBE ACL on the resource cluster:

    kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
    --add --allow-principal User:<principal> \
    --operation CREATE --operation DESCRIBE --cluster
    
  2. Set a DESCRIBE, READ, and WRITE ACL on the _confluent-command topic:

    kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
    --add --allow-principal User:<principal> \
    --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
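
To allow all clients with a single wildcard entry instead of listing each principal, the same commands accept the wildcard principal (quoted here to avoid shell expansion):

    kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
    --add --allow-principal 'User:*' \
    --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command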
    

Overriding Default Configuration Properties

You can override the replication factor using confluent.topic.replication.factor. For example, when using a Kafka cluster as a destination with fewer than three brokers (for development and testing), you should set the confluent.topic.replication.factor property to 1.

You can override producer-specific properties by using the confluent.topic.producer. prefix and consumer-specific properties by using the confluent.topic.consumer. prefix.

You can use the defaults or customize the other properties as well. For example, the confluent.topic.client.id property defaults to the name of the connector with a -licensing suffix. You can specify the configuration settings for brokers that require SSL or SASL for client connections using the confluent.topic. prefix.
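
For example, the following illustrative overrides set a custom licensing client ID and tune the underlying producer and consumer (retries and session.timeout.ms are standard Kafka client properties):

confluent.topic.client.id=my-spanner-sink-licensing
confluent.topic.producer.retries=10
confluent.topic.consumer.session.timeout.ms=30000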

You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.