Google BigQuery Sink V2 Connector for Confluent Cloud

You can use the Kafka Connect Google BigQuery Sink V2 connector for Confluent Cloud to export Avro, JSON Schema, Protobuf, or JSON (schemaless) data from Apache Kafka® topics to BigQuery. The BigQuery table schema is based upon information in the Apache Kafka® schema for the topic.

With version 2 of the Google BigQuery Sink connector, you can perform upsert and delete actions on ingested data. Additionally, this version allows you to use the Storage Write API to ingest data directly into BigQuery tables. For more details, see the following section.

Features

  • The connector supports the following functionalities for data ingestion:

    • Upsert functionality: With the upsert functionality, you can insert new data or update existing matching key data. For more details, see Stream table updates with change data capture.

    • Upsert and delete functionality: With the upsert and delete functionality, you can insert new data, update existing matching key data, or delete matching key data for tombstone records. For more details, see Stream table updates with change data capture.

    • Google Cloud BigQuery Storage Write API. The BigQuery Storage Write API combines streaming ingestion and batch loading into a single high-performance API. For more information, see Batch load and stream data with BigQuery Storage Write API. Note that using the Storage Write API may reduce costs for your BigQuery project. Also, note that BigQuery API quotas apply.

      Review the following table for details about the differences between using BATCH LOADING versus STREAMING mode with the BigQuery API. For more information, see Introduction to loading data.

      BATCH LOADING                                                                                     | STREAMING
      Records are available after the commit interval has expired and may not be provided in real time  | Records are available immediately after an append call (minimal latency)
      Requires creating application streams                                                             | Default stream is used
      May cost less                                                                                     | May cost more
      More API quota limits (for example, max streams and buffering)                                    | Fewer quota limits
  • The connector supports OAuth 2.0 for connecting to BigQuery. Note that OAuth is only available when creating a connector using the Cloud Console.

  • The connector supports streaming from a list of topics into corresponding tables in BigQuery.

  • Even though the connector streams records by default (as opposed to running in batch mode), it is scalable because it uses an internal thread pool that streams records in parallel. The thread pool defaults to 10 threads. Note that this applies only to BATCH LOADING and STREAMING modes, not to UPSERT and UPSERT_DELETE modes.

  • The connector supports several time-based table partitioning strategies.

  • The connector supports routing invalid records to the DLQ. This includes any records that have gRPC status code INVALID_ARGUMENT from the BigQuery Storage Write API.

    Note

    DLQ routing does not work if Auto update schemas (auto.update.schemas) is enabled and the connector detects that the failure is due to schema mismatch.

  • The connector supports default_missing_value_interpretation. For more information about default value interpretation for missing values, see Google BigQuery default value settings. Contact Confluent Support to enable default value settings in your connector.

  • The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) input data formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.

  • For Avro, JSON_SR, and PROTOBUF, the connector provides the following configuration properties that support automated table creation and schema updates. You can select these properties in the UI or, if you are using the Confluent CLI, add them to the connector configuration (see the example configuration after this list).

    • auto.create.tables: Automatically create BigQuery tables if they don’t already exist. The connector expects the BigQuery table name to be the same as the topic name. If you create the BigQuery tables manually, make sure the table name matches the topic name. This property adds/updates Kafka record keys as primary keys if ingestion.mode is set to UPSERT or UPSERT_DELETE. Note that you must adhere to BigQuery’s primary key constraints.
    • auto.update.schemas: Automatically update BigQuery schemas. Note that new fields are added as NULLABLE in the BigQuery schema. This property adds/updates Kafka record keys as primary keys if ingestion.mode is set to UPSERT or UPSERT_DELETE. Note that you must adhere to BigQuery’s primary key constraints.
    • sanitize.topics: Automatically sanitize topic names before using them as BigQuery table names. If not enabled, topic names are used as table names. If enabled, the table names created may be different from the topic names.
    • sanitize.field.names: Automatically sanitize field names before using them as column names in BigQuery.

    Note

    New tables and schema updates may take a few minutes to be detected by the Google Client Library. For more information, see the Google Cloud BigQuery API guide.
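
    For example, a connector configuration fragment that turns on automatic table creation and schema updates for an Avro topic might look like the following. This is an illustrative sketch only; it assumes the option labels above (for example, NON-PARTITIONED and ADD NEW FIELDS) are also the literal configuration values, and it omits the connection properties shown in the Quick Start.

      "input.data.format" : "AVRO",
      "auto.create.tables" : "NON-PARTITIONED",
      "auto.update.schemas" : "ADD NEW FIELDS",
      "sanitize.topics" : "true",
      "sanitize.field.names" : "true"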

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Limitations

Be sure to review the following information.

Supported data types

The following sections describe supported BigQuery data types and the associated connector mapping.

General mapping

The following is a general mapping of supported data types for the Google BigQuery Sink V2 connector. Note that this mapping applies even if a table is created or updated outside of the connector lifecycle. For automatic table creation and schema update mapping, see the following section.

BigQuery Data Type | Connector Mapping        | Conditions
JSON               | STRING                   |
GEOGRAPHY          | STRING                   |
INTEGER            | STRING                   |
INTEGER            | INT32                    |
INTEGER            | INT64                    |
FLOAT              | INT8                     |
FLOAT              | INT16                    |
FLOAT              | INT32                    |
FLOAT              | INT64                    |
FLOAT              | STRING                   |
FLOAT              | FLOAT32                  |
FLOAT              | FLOAT64                  |
BOOL               | BOOLEAN                  |
BOOL               | STRING                   | "true" or "false"
BYTES              | BYTES                    |
STRING             | STRING                   |
BIGNUMERIC         | INT16                    |
BIGNUMERIC         | INT32                    |
BIGNUMERIC         | INT64                    |
BIGNUMERIC         | STRING                   |
BIGNUMERIC         | FLOAT32                  |
BIGNUMERIC         | FLOAT64                  |
NUMERIC            | INT16                    |
NUMERIC            | INT32                    |
NUMERIC            | INT64                    |
NUMERIC            | STRING                   |
NUMERIC            | FLOAT32                  |
NUMERIC            | FLOAT64                  |
DATE               | STRING                   | YYYY-MM-DD
DATE               | INT32                    | Number of days since epoch. The valid range is -719162 (0001-01-01) to 2932896 (9999-12-31).
DATE               | INT64                    | Number of days since epoch. The valid range is -719162 (0001-01-01) to 2932896 (9999-12-31).
DATETIME           | STRING                   | YYYY-MM-DD[t|T]HH:mm:ss[.F]
TIMESTAMP          | STRING                   | YYYY-MM-DD HH:mm:ss[.F]
TIMESTAMP          | INT64                    | Microseconds since epoch
TIME               | STRING                   | HH:mm:ss[.F]
TIMESTAMP          | Logical TIMESTAMP        |
TIME               | Logical TIME             |
DATE               | Logical DATE             |
DATE               | Debezium Date            |
TIME               | Debezium MicroTime       |
TIME               | Debezium Time            |
TIMESTAMP          | Debezium MicroTimestamp  |
TIMESTAMP          | Debezium TIMESTAMP       |
TIMESTAMP          | Debezium ZonedTimestamp  |

Table creation and schema update mapping

The following mapping applies when the connector creates tables or updates schemas automatically (that is, when auto.create.tables or auto.update.schemas is enabled).

BigQuery Data Type | Connector Mapping
STRING             | String
FLOAT              | INT8
FLOAT              | INT16
INTEGER            | INT32
INTEGER            | INT64
FLOAT              | FLOAT32
FLOAT              | FLOAT64
BOOLEAN            | Boolean
BYTES              | Bytes
TIMESTAMP          | Logical TIMESTAMP
TIME               | Logical TIME
DATE               | Logical DATE
FLOAT              | Logical Decimal
DATE               | Debezium Date
TIME               | Debezium MicroTime
TIME               | Debezium Time
TIMESTAMP          | Debezium MicroTimestamp
TIMESTAMP          | Debezium TIMESTAMP
TIMESTAMP          | Debezium ZonedTimestamp
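
As an illustration of the mapping above, consider a hypothetical Avro value schema (the record and field names below are made up for this example):

    {
      "type": "record",
      "name": "PageView",
      "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "view_count", "type": "long"},
        {"name": "score", "type": "double"},
        {"name": "is_member", "type": "boolean"},
        {"name": "viewtime", "type": {"type": "long", "logicalType": "timestamp-millis"}}
      ]
    }

If the connector creates the table, the resulting columns would be user_id STRING, view_count INTEGER, score FLOAT, is_member BOOLEAN, and viewtime TIMESTAMP, per the mapping table above.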

Quick Start

Use this quick start to get up and running with the Confluent Cloud Google BigQuery Sink V2 connector. The quick start provides the basics of selecting the connector and configuring it to stream events to a BigQuery data warehouse.

Prerequisites
  • An active Google Cloud account with authorization to create resources.

  • A BigQuery project and a dataset within that project are required. You can create the project using the Google Cloud Console.

  • To use the Storage Write API, the connector user account authenticating with BigQuery must have bigquery.tables.updateData permissions. The minimum permissions are:

    bigquery.datasets.get
    bigquery.tables.create
    bigquery.tables.get
    bigquery.tables.getData
    bigquery.tables.list
    bigquery.tables.update
    bigquery.tables.updateData
    
  • If you leave Auto create tables (auto.create.tables) disabled (the default), you must create the BigQuery table before using the connector (see the sample command after this list).

  • You may need to create a schema in BigQuery, depending on how you set the Auto update schemas property (auto.update.schemas).

    • Auto update schemas set to ADD NEW FIELDS: You do not have to create a schema.

    • Auto update schemas set to DISABLED (the default): You must create a schema in BigQuery (as shown below). The connector does not automatically update the table.

      Auto update schemas set to false
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.
    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
  • The Confluent CLI installed and configured for the cluster.
  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). For more information, see Schema Registry Enabled Environments.
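
If you leave Auto create tables disabled and create the table yourself, one option is the bq command-line tool. The following is a sketch only; the dataset, table, and column names are placeholders and must match your topic name and record schema:

    # Placeholder names: replace the dataset, table, and columns to match your topic and schema
    bq mk --table <my-BigQuery-dataset>.pageviews user_id:STRING,view_count:INTEGER,viewtime:TIMESTAMP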

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

See the Quick Start for Confluent Cloud for installation instructions.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Google BigQuery Sink V2 connector card.

Google BigQuery Storage API Sink Connector Card

Step 4: Enter the connector details

Note

  • Be sure you have all the prerequisites completed.
  • An asterisk ( * ) in the Cloud Console designates a required entry.

At the Add Google BigQuery Sink V2 screen, complete the following:

If you’ve already added Kafka topics, select the topics you want to connect from the Topics list.

To create a new topic, click +Add new topic.

Step 5: Check the results in BigQuery

  1. From the Google Cloud Console, go to your BigQuery project.
  2. Query your datasets and verify that new records are being added.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Be sure you have all the prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following shows example connector properties.

{
    "name" : "confluent-bigquery-storage-sink",
    "connector.class" : "BigQueryStorageSink",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key" : "<my-kafka-api-key>",
    "kafka.api.secret" : "<my-kafka-api-secret>",
    "keyfile" : "....",
    "project" : "<my-BigQuery-project>",
    "datasets" : "<my-BigQuery-dataset>",
    "ingestion.mode" : "STREAMING",
    "input.data.format" : "AVRO",
    "auto.create.tables" : "DISABLED",
    "sanitize.topics" : "true",
    "sanitize.field.names" : "false",
    "tasks.max" : "1",
    "topics" : "pageviews"
}

Note the following property definitions:

  • "name": Sets a name for the new connector.
  • "connector.class": Identifies the connector plugin name.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "topics": Identifies the topic name or a comma-separated list of topic names.

  • "keyfile": This contains the contents of the downloaded Google Cloud credentials JSON file for a Google Cloud service account. See Format service account keyfile credentials for details about how to format and use the contents of the downloaded credentials file as the "keyfile" property value.

  • "input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  • "ingestion.mode": Sets the connector to use either STREAMING or BATCH LOADING. Defaults to STREAMING.

    Review the following table for details about the differences between using BATCH LOADING versus STREAMING mode with the BigQuery API. For more information, see Introduction to loading data.

    BATCH LOADING                                                          | STREAMING
    Records are available after a stream commit and may not be real time   | Records are available for reading immediately after an append call (minimal latency)
    Requires creation of application streams                               | Default stream is used
    May cost less                                                          | May cost more
    More API quota limits (for example, max streams and buffering)         | Fewer quota limits

Note that BigQuery API quotas apply.
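
For example, to run the connector in batch mode with a five-minute commit interval, you could add the following fragment to the configuration file. The values are illustrative only; commit.interval is described in Configuration Properties.

    "ingestion.mode" : "BATCH LOADING",
    "commit.interval" : "300"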

The following are additional properties you can use. See Configuration Properties for all property values and definitions.

  • "auto.create.tables": Designates whether to automatically create BigQuery tables. Defaults to DISABLED. Note that this property is applicable for AVRO, JSON_SR, and PROTOBUF message formats only. The other options available are listed below.

    • NON-PARTITIONED: The connector creates non-partitioned tables.
    • PARTITION by INGESTION TIME: The connector creates tables partitioned by ingestion time. Uses partitioning.type to set partitioning.
    • PARTITION by FIELD: The connector creates tables partitioned using a field in a Kafka record value. The field name is set using the property timestamp.partition.field.name. Uses partitioning.type to set partitioning.

    Note

    New tables and schema updates may take a few minutes to be detected by the Google Client Library. For more information see the Google Cloud BigQuery API guide.

  • "auto.update.schemas": Defaults to DISABLED. Designates whether or not to automatically update BigQuery schemas. If ADD NEW FIELDS is selected, new fields are added with mode NULLABLE in the BigQuery schema. Note that this property is applicable for AVRO, JSON_SR, and PROTOBUF message formats only.

    • "partitioning.type": The time partitioning type to use when creating new partitioned tables. Existing tables are not altered to use this partitioning type. Defaults to DAY.
    • "timestamp.partition.field.name": The name of the field in the value that contains the timestamp to partition by in BigQuery. Used when PARTITION by FIELD is used.
  • "sanitize.topics": Designates whether to automatically sanitize topic names before using them as table names. If not enabled, topic names are used as table names. If enabled, the table names created may be different from the topic names. Source topic names must comply with BigQuery naming conventions even if sanitize.topics is set to true.

  • "sanitize.field.names": Designates whether to automatically sanitize field names before using them as column names in BigQuery. BigQuery specifies that field names can only contain letters, numbers, and underscores. The sanitizer replaces invalid symbols with underscores. If the field name starts with a digit, the sanitizer adds an underscore in front of the field name. Defaults to false.

    Caution

    Fields named a.b and a_b resolve to the same column name (a_b) after sanitizing, which could cause a key duplication error. If sanitizing is not enabled, field names are used as column names as-is.
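
For example, to have the connector create tables partitioned by a timestamp field in the record value, you could combine the properties as follows. This is a sketch only; it assumes the option label PARTITION by FIELD is also the literal configuration value, and the field name created_at is a placeholder:

    "auto.create.tables" : "PARTITION by FIELD",
    "partitioning.type" : "DAY",
    "timestamp.partition.field.name" : "created_at"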

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI. See Unsupported transformations for a list of SMTs that are not supported with this connector.

See Configuration Properties for all property values and definitions.

Step 4: Load the configuration file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file bigquery-storage-sink-config.json

Example output:

Created connector confluent-bigquery-storage-sink lcc-ix4dl

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID          |       Name                      | Status  | Type
+-----------+---------------------------------+---------+------+
lcc-ix4dl   | confluent-bigquery-storage-sink | RUNNING | sink

Step 6: Check the results in BigQuery

  1. From the Google Cloud Console, go to your BigQuery project.
  2. Query your datasets and verify that new records are being added.
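
For example, you can spot-check the destination table with the bq command-line tool (the dataset and table names below are placeholders):

    # Placeholder dataset and table names
    bq query --use_legacy_sql=false 'SELECT * FROM <my-BigQuery-dataset>.pageviews LIMIT 10'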

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.

Connector authentication

You can use a Google Cloud service account or Google OAuth to authenticate the connector with BigQuery. Note that OAuth is only available when using the Confluent Cloud UI to create the connector.

Create a Service Account

For details about how to create a Google Cloud service account and get a JSON credentials file, see Create and delete service account keys. Note the following when creating the service account:

  • The service account must have write access to the BigQuery project containing the dataset.

  • To use the Storage Write API, the connector must have bigquery.tables.updateData permissions. The minimum permissions are listed below; a sample command for granting them follows the keyfile example.

    bigquery.datasets.get
    bigquery.tables.create
    bigquery.tables.get
    bigquery.tables.getData
    bigquery.tables.list
    bigquery.tables.update
    bigquery.tables.updateData
    
  • You create and download a key when creating a service account. The key must be downloaded as a JSON file. It resembles the example below:

    {
      "type": "service_account",
      "project_id": "confluent-123456",
      "private_key_id": ".....",
      "private_key": "-----BEGIN PRIVATE ...omitted... =\n-----END PRIVATE KEY-----\n",
      "client_email": "confluent2@confluent-123456.iam.gserviceaccount.com",
      "client_id": "....",
      "auth_uri": "https://accounts.google.com/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/certs",
      "client_x509_cert_url": "https://www.googleapis.com/robot/metadata/confluent2%40confluent-123456.iam.gserviceaccount.com"
    }
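
One way to grant exactly the permissions listed above is a custom IAM role. The following gcloud commands are a sketch only; the project, role, and service account names are placeholders, and you can instead assign any predefined role that includes these permissions:

    # Sketch only: placeholder project, role, and service account names
    gcloud iam roles create bigquerySinkV2Writer --project=<my-gcp-project> \
      --permissions=bigquery.datasets.get,bigquery.tables.create,bigquery.tables.get,bigquery.tables.getData,bigquery.tables.list,bigquery.tables.update,bigquery.tables.updateData

    # Bind the custom role to the connector's service account
    gcloud projects add-iam-policy-binding <my-gcp-project> \
      --member="serviceAccount:<my-service-account>@<my-gcp-project>.iam.gserviceaccount.com" \
      --role="projects/<my-gcp-project>/roles/bigquerySinkV2Writer"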
    

Format service account keyfile credentials

Formatting the keyfile is only required when using the CLI or Terraform to create the connector, where you must add the service account key directly into the connector configuration.

The contents of the downloaded service account credentials JSON file must be converted to string format before it can be used in the connector configuration.

  1. Convert the JSON file contents into string format.

  2. Add the escape character \ before all \n entries in the Private Key section so that each section begins with \\n (see the example below). The example has been formatted so that the \\n entries are easier to see. Most of the credentials key has been omitted.

    Tip

    A script is available that converts the credentials to a string and also adds the additional escape characters where needed. See Stringify Google Cloud Credentials.

      {
          "name" : "confluent-bigquery-sink",
          "connector.class" : "BigQueryStorageSink",
          "kafka.api.key" : "<my-kafka-api-key>",
          "kafka.api.secret" : "<my-kafka-api-secret>",
          "topics" : "pageviews",
          "keyfile" : "{\"type\":\"service_account\",\"project_id\":\"connect-
          1234567\",\"private_key_id\":\"omitted\",
          \"private_key\":\"-----BEGIN PRIVATE KEY-----
          \\nMIIEvAIBADANBgkqhkiG9w0BA
          \\n6MhBA9TIXB4dPiYYNOYwbfy0Lki8zGn7T6wovGS5\opzsIh
          \\nOAQ8oRolFp\rdwc2cC5wyZ2+E+bhwn
          \\nPdCTW+oZoodY\\nOGB18cCKn5mJRzpiYsb5eGv2fN\/J
          \\n...rest of key omitted...
          \\n-----END PRIVATE KEY-----\\n\",
          \"client_email\":\"pub-sub@connect-123456789.iam.gserviceaccount.com\",
          \"client_id\":\"123456789\",\"auth_uri\":\"https:\/\/accounts.google.com\/o\/oauth2\/
          auth\",\"token_uri\":\"https:\/\/oauth2.googleapis.com\/
          token\",\"auth_provider_x509_cert_url\":\"https:\/\/
          www.googleapis.com\/oauth2\/v1\/
          certs\",\"client_x509_cert_url\":\"https:\/\/www.googleapis.com\/
          robot\/v1\/metadata\/x509\/pub-sub%40connect-
          123456789.iam.gserviceaccount.com\"}",
          "project": "<my-BigQuery-project>",
          "datasets":"<my-BigQuery-dataset>",
          "data.format":"AVRO",
          "tasks.max" : "1"
      }
    
  3. Add all the converted string content to the "keyfile" credentials section of your configuration file as shown in the example above.
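
As an alternative to the Stringify Google Cloud Credentials script mentioned in the tip above, the following sketch shows one way to produce the escaped string with jq, assuming jq is installed. The first command compacts the JSON; the second re-encodes the result as a single JSON string, which escapes the quotes and turns the \n sequences in the private key into \\n. Review the output before pasting it as the "keyfile" value.

    # Sketch only: compact the keyfile, then re-encode it as a single escaped JSON string
    jq -c . <downloaded-credentials>.json | jq -Rs .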

Set up Google OAuth (Bring your own app)

Complete the following steps to set up Google OAuth and get the Client ID and Client secret. This is applicable for the OAuth 2.0 (Bring your own app) BigQuery authentication option. For additional information, see Setting up OAuth 2.0.

  1. Create a Project in Google Cloud Console. If you haven’t already, create a new project in the Google Cloud Console.

  2. Enable the BigQuery API.

    • Navigate to the API Library in the Google Cloud Console.
    • Search for the BigQuery API.
    • Select it and enable it for your project.
  3. Configure the OAuth Consent Screen.

    • Go to the OAuth consent screen in the Credentials section of the Google Cloud Console.

    • Select Internal when prompted to set the User Type. This means only users within your organization can consent to the connector accessing data in BigQuery.

    • In the Scopes section of the Consent screen, enter the following scope:

      https://www.googleapis.com/auth/bigquery
      
    • Fill in the remaining required fields and save.

  4. Create OAuth 2.0 Credentials.

    • In the Google Cloud Console, go to the Credentials page.

    • Click on Create Credentials and select OAuth client ID.

    • Choose the application type Web application.

    • Set up the authorized redirect URI. Enter the following Confluent-provided URI:

      https://confluent.cloud/api/connect/oauth/callback
      
  5. Click Create. You are provided with the Client ID and Client secret.

Authorization Code grant flow

The following are the two stages of the authorization code grant flow when using OAuth 2.0.

  1. Get the authorization code:
    • The Google OAuth API endpoint is invoked by passing query parameters (for example, client_id, redirect_uri, and scope).
    • The user handles authentication (AuthN) and authorization (AuthZ) with Google. Note that Confluent does not have access to the username and password.
    • Google delivers the authorization code as a query parameter to the redirect_uri passed in the original API request.
  2. Exchange the authorization code for an access_token and refresh_token.
    • After invoking the Google OAuth API endpoint and passing the authorization code and other parameters, an access_token is returned and is used as the Bearer Token to access Google BigQuery.
    • When the access_token expires, a new token is requested by passing the refresh_token request to the OAuth API endpoint. No human intervention occurs.
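
Confluent Cloud performs this exchange on your behalf. For illustration only, the following curl sketch shows what stage 2 looks like against Google's standard OAuth 2.0 token endpoint; every parameter value is a placeholder:

    # Illustration only: all values are placeholders
    curl -s https://oauth2.googleapis.com/token \
      -d client_id=<client-id> \
      -d client_secret=<client-secret> \
      -d code=<authorization-code> \
      -d grant_type=authorization_code \
      -d redirect_uri=https://confluent.cloud/api/connect/oauth/callback

When the access_token expires, the same endpoint is called with grant_type=refresh_token and the stored refresh_token in place of the authorization code.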

OAuth Limitations

Note the following limitations when using OAuth for the connector.

  • If you enter the wrong Client ID when authenticating, the UI attempts to launch the consent screen, and then displays the error for an invalid client. Click the back button and re-enter the connector configuration. This is applicable for the OAuth 2.0 (Bring your own app) BigQuery authentication option.
  • If you enter the correct Client ID but enter the wrong Client secret, the UI successfully launches the consent screen and the user is able to authorize. However, the API request fails. The user is redirected to the connect UI provisioning screen. This is applicable for the OAuth 2.0 (Bring your own app) BigQuery authentication option.
  • You must manually revoke permissions for the connector from Third-party apps & services if you delete the connector or switch to a different auth mechanism.
  • Google Cloud allows a maximum of 100 refresh tokens for a single user using a single OAuth 2.0 client ID. For example, a single user can create a maximum of 100 connectors using the same client ID. When this user creates connector 101 using the same client ID, the refresh token for the first connector created is revoked. If you want to run more than 100 connectors, you can use multiple Client IDs or User IDs. Or, you can use a service account.

Legacy to V2 Connector Migration

The BigQuery Sink Legacy connector uses the BigQuery legacy streaming API. The BigQuery Sink V2 connector uses the BigQuery Storage Write API, which is not fully compatible with the legacy streaming API. The following sections describe the known API changes and provide migration recommendations.

Breaking API changes

The following table lists API differences that were identified during connector development and testing. Be aware that there may be additional breaking changes that were not identified.

BigQuery Schema Types                                                               | Legacy InsertAll API                           | Storage Write API
TIMESTAMP                                                                           | Input value is considered seconds since epoch  | Input value is considered microseconds since epoch
DATE                                                                                | INT is not supported                           | INT in range -719162 (0001-01-01) to 2932896 (9999-12-31) is supported
TIMESTAMP, JSON, INTEGER, BIGNUMERIC, NUMERIC, STRING, DATE, TIME, DATETIME, BYTES  | Supports most primitive data type inputs       | See Supported data types
DATE, TIME, DATETIME, TIMESTAMP                                                     | Supports entire datetime canonical format      | Supports a subset of the datetime canonical format
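
For example, an INT64 value of 1696118400 written to a TIMESTAMP column is read by the legacy InsertAll API as seconds since epoch (2023-10-01 00:00:00 UTC), but the Storage Write API reads the same value as microseconds since epoch (roughly 1970-01-01 00:28:16 UTC). With the V2 connector, the producer would need to supply 1696118400000000 to represent the same instant.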

Migration recommendations

If you use the legacy BigQuery Sink connector, do not move your existing pipelines directly to the new connector. Create a test environment and run the new connector against a test pipeline. Once you confirm that there are no backward compatibility issues, move your existing pipelines to the new connector.

A recommended migration strategy is to create a new topic for the BigQuery Sink V2 connector. This ensures that there are no topic-related data type and schema issues, and it minimizes data duplication in the destination table because the connector only consumes offsets written to the new topic.

Configuration Properties

Use the following configuration properties with this connector.

Which topics do you want to get data from?

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list
  • Importance: high

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string
  • Default: default
  • Importance: medium

Input messages

input.key.format

Sets the input Kafka record key format. Valid entries are AVRO, BYTES, JSON, JSON_SR, PROTOBUF, or STRING. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.

  • Type: string
  • Default: BYTES
  • Valid Values: AVRO, BYTES, JSON, JSON_SR, PROTOBUF, STRING
  • Importance: high
input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, and JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.

  • Type: string
  • Default: JSON
  • Importance: high
value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string
  • Default: DefaultReferenceSubjectNameStrategy
  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high
kafka.service.account.id

The Service Account that will be used to generate the API keys to communicate with Kafka Cluster.

  • Type: string
  • Importance: high
kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high

GCP credentials

authentication.method

Select how you want to authenticate with BigQuery.

  • Type: string
  • Default: Google cloud service account
  • Importance: high
keyfile

GCP service account JSON file with write permissions for BigQuery.

  • Type: password
  • Importance: high
oauth.client.id

Client ID of your Google OAuth application.

  • Type: string
  • Importance: high
oauth.client.secret

Client secret of your Google OAuth application.

  • Type: password
  • Importance: high
oauth.refresh.token

OAuth 2.0 refresh token for BigQuery.

  • Type: password
  • Importance: high

BigQuery details

project

ID for the GCP project where BigQuery is located.

  • Type: string
  • Importance: high
datasets

Name of the BigQuery dataset where the table(s) are located.

  • Type: string
  • Importance: high

Ingestion Mode details

ingestion.mode

Select a mode to ingest data into the table. Select STREAMING for reduced latency. Select BATCH LOADING for cost savings. Select UPSERT for upserting records. Select UPSERT_DELETE for upserting and deleting records.

  • Type: string
  • Default: STREAMING
  • Importance: high

Insertion and DDL support

commit.interval

The interval, in seconds, at which the connector attempts to commit streamed records. The value must be between 60 seconds (1 minute) and 14,400 seconds (4 hours).

  • Type: int
  • Default: 60
  • Valid Values: [60,…,14400]
  • Importance: high
topic2table.map

Map of topics to tables (optional). The required format is comma-separated tuples. For example, <topic-1>:<table-1>,<topic-2>:<table-2>,… Note that a topic name must not be modified using a regex SMT while using this option. Note that if this property is used, sanitize.topics is ignored. Also, if the topic-to-table map doesn’t contain the topic for a record, the connector creates a table with the same name as the topic name.

  • Type: string
  • Default: “”
  • Importance: medium
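
For example, the following setting (the topic and table names are placeholders) maps two topics to tables whose names differ from the topic names:

    "topic2table.map": "pageviews:pageviews_v2,orders:orders_raw"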
sanitize.topics

Designates whether to automatically sanitize topic names before using them as table names in BigQuery. If not enabled, topic names are used as table names.

  • Type: boolean
  • Default: true
  • Importance: high
sanitize.field.names

Whether to automatically sanitize field names before using them as field names in BigQuery. BigQuery specifies that field names can only contain letters, numbers, and underscores. The sanitizer replaces invalid symbols with underscores. If the field name starts with a digit, the sanitizer adds an underscore in front of the field name. Caution: Key duplication errors can occur because different fields can resolve to the same name after sanitizing; for example, a.b and a_b both become a_b.

  • Type: boolean
  • Default: false
  • Importance: high
auto.create.tables

Designates whether or not to automatically create BigQuery tables. Note: Supports AVRO, JSON_SR, and PROTOBUF message format only.

  • Type: string
  • Default: DISABLED
  • Importance: high
auto.update.schemas

Designates whether or not to automatically update BigQuery schemas. New fields in record schemas must be nullable. Note: Supports AVRO, JSON_SR, and PROTOBUF message format only.

  • Type: string
  • Default: DISABLED
  • Importance: high
partitioning.type

The time partitioning type to use when creating new partitioned tables. Existing tables will not be altered to use this partitioning type.

  • Type: string
  • Default: DAY
  • Importance: low
timestamp.partition.field.name

The name of the field in the value that contains the timestamp to partition by in BigQuery. This also enables timestamp partitioning for each table.

  • Type: string
  • Importance: low

Consumer configuration

max.poll.interval.ms

The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).

  • Type: long
  • Default: 300000 (5 minutes)
  • Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
  • Importance: low
max.poll.records

The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.

  • Type: long
  • Default: 500
  • Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
  • Importance: low

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int
  • Valid Values: [1,…]
  • Importance: high

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent CLI to manage your resources in Confluent Cloud.
