Snowflake Sink Connector for Confluent Cloud

The fully-managed Snowflake Sink connector for Confluent Cloud maps and persists events from Apache Kafka® topics directly to a Snowflake database, exposing the data to services for querying, enrichment, and analytics. The connector supports Avro, JSON Schema, Protobuf, and JSON (schemaless) input data.

Note

This is the Quick Start for the fully-managed cloud connector. If you are installing the connector locally for Confluent Platform, see the Snowflake Connector for Kafka documentation.

Features

The Snowflake Sink connector provides the following features:

  • Database authentication: Uses private key authentication.
  • Snowflake ingestion methods: The connector supports Snowpipe (the default) and Snowpipe Streaming for ingesting Kafka data. Using Snowpipe Streaming may provide a cost benefit for your Snowflake project.
  • Schematization: Confluent Cloud provides version 2.1.2 of the fully-managed Snowflake Sink connector. This version supports Snowflake schematization (snowflake.enable.schematization). When set to TRUE, the connector provides schema detection and evolution when using Snowpipe Streaming. The default value is FALSE. For more information, see Schema detection and schema evolution.
  • Input data formats: The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) input data formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
  • Select configuration properties: The following properties determine what metadata is included in the RECORD_METADATA column in the Snowflake database table.
    • snowflake.metadata.createtime: If this value is set to false, the CreateTime property value is omitted from the metadata in the RECORD_METADATA column. The default value is true.
    • snowflake.metadata.topic: If this value is set to false, the topic property value is omitted from the metadata in the RECORD_METADATA column. The default value is true.
    • snowflake.metadata.offset.and.partition: If the value is set to false, the Offset and Partition property values are omitted from the metadata in the RECORD_METADATA column. The default value is true.
    • snowflake.metadata.all: If the value is set to false, the metadata in the RECORD_METADATA column is empty. The default value is true.
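
For reference, the following query sketch shows how you might inspect the metadata the connector writes. RECORD_METADATA is a VARIANT column; the table name PRODUCTION.PUBLIC.MY_TOPIC is a hypothetical example, so substitute your own database, schema, and table:

// Inspect the metadata written for one record
select RECORD_METADATA:topic::string      as topic,
       RECORD_METADATA:partition::int     as kafka_partition,
       RECORD_METADATA:offset::int        as kafka_offset,
       RECORD_METADATA:CreateTime::number as create_time
from PRODUCTION.PUBLIC.MY_TOPIC
limit 1;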

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.

Limitations

Be sure to review the following information.

Target table naming guidelines

Note the following table naming guidelines and limitations:

  • The fully-managed Snowflake Sink connector allows you to configure topic:table name mapping. This feature is also supported by the self-managed Snowflake Sink connector.

  • Snowflake itself has limitations on object (table) naming conventions. See Identifier Requirements for details.

  • Kafka is much more permissive with topic naming conventions. You are allowed to use Kafka topic names that break the table name mapping in the Confluent Cloud Snowflake Sink connector.

    When a Kafka topic name does not conform to Snowflake’s table naming limitations (for example, my-topic-name), the connector maps it to a safe table name with an appended hash (for example, my_topic_name_021342). A conforming topic name (for example, my_topic_name) sends results to the expected table named my_topic_name.

  • If the connector needs to adjust the name of the table created for a Kafka topic, there is the potential for identical table names. For example, if you are reading data from Kafka topics numbers+x and numbers-x, the tables created for these topics will both be named NUMBERS_X. To avoid table name duplication, the connector appends a suffix to the table name. The suffix is an underscore followed by a generated hash.
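
To see how the connector mapped topic names to tables, you can list the tables it created. A minimal sketch, assuming the NUMBERS_X example above and a hypothetical PRODUCTION.PUBLIC target schema:

// List tables created for the numbers+x and numbers-x topics, including hash suffixes
show tables like 'NUMBERS_X%' in schema PRODUCTION.PUBLIC;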

Generate a Snowflake key pair

Before the connector can sink data to Snowflake, you need to generate a key pair. Snowflake authentication requires a 2048-bit (minimum) RSA key pair. You add the public key to a Snowflake user account and the private key to the connector configuration (when completing the Quick Start instructions).

Note

  • This procedure generates an unencrypted private key. You can generate and use an encrypted key. If you generate an encrypted key, add the passphrase to your connector configuration in addition to the private key. For information about generating an encrypted key, see Using Key Pair Authentication in the Snowflake documentation.
  • When you use a non-encrypted private key, you might see the following configuration validation error. Check whether your private key is valid or consider using an encrypted private key.
Private key validation error

Creating the key pair

Complete the following steps to generate a key pair.

  1. Generate a private key using OpenSSL.

    openssl genrsa -out snowflake_key.pem 2048
    
  2. Generate the public key referencing the private key.

    openssl rsa -in snowflake_key.pem -pubout -out snowflake_key.pub
    
  3. List the generated Snowflake key files.

    ls -l snowflake_key*
    
    -rw-r--r--  1  1679 Jun  8 17:04 snowflake_key.pem
    -rw-r--r--  1   451 Jun  8 17:05 snowflake_key.pub
    
  4. Show the contents of the public key file.

    cat snowflake_key.pub
    
    -----BEGIN PUBLIC KEY-----
    MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2zIuUb62JmrUAMoME+SX
    vsz9KUCp/cC+Y+kTGfYB3jRDQ06O0UT+yUKMO/KWuc0dUxZ8s9koW5l/n+TBfxIQ
    
    ... omitted
    
    1tD+Ktd/CTXPoVEI2tgCC9Avf/6/9HU3IpV0gL8SZ8U0N5ot4Uw+CSYB3JjMagEG
    bBWZ8Qc26pFk7Fd17+ykH6rEdLeQ9OElc0ZruVwSsa4AxaZOT+rqCCP7FQPzKTtA
    JQIDAQAB
    -----END PUBLIC KEY-----
    
  5. Copy the key. You will add it to a new user in Snowflake. Copy only the part of the key between --BEGIN PUBLIC KEY-- and --END PUBLIC KEY--. You can do this manually or you can use the following command:

    grep -v "BEGIN PUBLIC" snowflake_key.pub | grep -v "END PUBLIC"|tr -d '\r\n'
    

    In the following section you create a user and add the public key.

Creating a user and adding the public key

Open your Snowflake project. Complete the following steps to create a user account and add the public key to this account.

Note

The following steps show Snowflake UI screen captures. You can use the Feedback button at the bottom of the screen to let us know if the screens are not current.

  1. Go to the Worksheets panel and switch to the SECURITYADMIN role.

    Important

    Be sure to set the SECURITYADMIN role in the Worksheets panel (shown below) and not by using the user account drop-down selection. For additional information, see User Management.

    Snowflake security admin role
  2. Run the following query in Worksheets to create a user, and add the public key copied earlier.

    CREATE USER confluent RSA_PUBLIC_KEY='<public-key>';
    

    Make sure to add the public key as a single line in the statement. The following shows what this looks like in Snowflake Worksheets:

    Snowflake sysadmin role creation statements

    Tip

    If you did not set the role to SECURITYADMIN, or if you set the role using the user account drop-down menu, an SQL access control error is displayed.

    SQL access control error: Insufficient privileges to operate on account '<account-name>'
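
To confirm that the public key was attached to the user, you can inspect the user properties. This is a sketch using standard Snowflake commands; look for the RSA_PUBLIC_KEY_FP (fingerprint) property in the output.

// Verify the RSA public key registered for the user
desc user confluent;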
    

Configuring user privileges

Complete the following steps to set the correct privileges for the user you added.

For example, suppose you want to send Apache Kafka® records to a database named PRODUCTION using the schema PUBLIC. The following queries configure the necessary user privileges.

// Use a role that can create and manage roles and privileges:
use role securityadmin;

// Create a Snowflake role with the privileges to work with the connector
create role kafka_connector_role;

// Grant privileges on the database:
grant usage on database PRODUCTION to role kafka_connector_role;

// Grant privileges on the schema:
grant usage on schema PRODUCTION.PUBLIC to role kafka_connector_role;
grant create table on schema PRODUCTION.PUBLIC to role kafka_connector_role;
grant create stage on schema PRODUCTION.PUBLIC to role kafka_connector_role;
grant create pipe on schema PRODUCTION.PUBLIC to role kafka_connector_role;

// Grant the custom role to an existing user:
grant role kafka_connector_role to user confluent;

// Make the new role the default role:
alter user confluent set default_role=kafka_connector_role;
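
To verify the setup, you can review the grants with standard Snowflake commands. A minimal sketch using the role and user created above:

// Confirm the privileges granted to the connector role
show grants to role kafka_connector_role;

// Confirm the role and default role assigned to the user
show grants to user confluent;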

Extracting the private key

You add the private key to your Snowflake connector configuration. Extract the key and put it in a safe place until you set up your connector.

  1. List the generated Snowflake key files.

    ls -l snowflake_key*
    
    -rw-r--r--  1  1679 Jun  8 17:04 snowflake_key.pem
    -rw-r--r--  1   451 Jun  8 17:05 snowflake_key.pub
    
  2. Show the contents of the private key file.

    cat snowflake_key.pem
    
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEA2zIuUb62JmrUAMoME+SXvsz9KUCp/cC+Y+kTGfYB3jRDQ06O
    0UT+yUKMO/KWuc0dUxZ8s9koW5l/n+TBfxIQx+24C2+l9t3TxxaLdf/YCgQwKNR9
    dO9/c+SkX8NfcwUynGEo3wpmdb4hp0X9TfWKX9vG//zK2tndmMUrFY5OcGSSVJYJ
    Wv3gk04sVxhINo5knpgZoUVztxcRLm/vNvIX1tD+Ktd/CTXPoVEI2tgCC9Avf/6/
    9HU3IpV0gL8SZ8U0N5ot4Uw+CSYB3JjMagEGbBWZ8Qc26pFk7Fd17+ykH6rEdLeQ
    
    ... omitted
    
    UfrYj7+p03yVflrsB+nyuPETnRJx41b01GrwJk+75v5EIg8U71PQDWfy1qOrUk/d
    9u25iaVRzi6DFM0ppE76Lh72SKy+m0iEZIXWbV9q6vf46Oz1PrtffAzyi4pyJbe/
    ypQ53f0CgYEA7rE6Dh0tG7EnYfFYrnHLXFC2aVtnkfCMIZX/VIZPX82VGB1mV43G
    qTDQ/ax1tit6RHDBk7VU4Xn545Tgj1z6agYPvHtkhxYTq50xVBXr/xwlMnzUZ9s3
    VjGpMYQANm2seleV6/si54mT4TkUyB7jMgWdFsewtwF60quvxmiA9RU=
    -----END RSA PRIVATE KEY-----
    
  3. Copy the key. You will add it to the connector configuration. Copy only the part of the key between --BEGIN RSA PRIVATE KEY-- and --END RSA PRIVATE KEY--. You can do this manually or you can use the following command:

    grep -v "BEGIN RSA PRIVATE KEY" snowflake_key.pem | grep -v "END RSA PRIVATE KEY"|tr -d '\r\n'
    
  4. Save the key to use later when you complete the Quick Start steps. Alternatively, you can run the previous step again when you need the key for the connector configuration.

Quick Start

Use this quick start to get up and running with the Confluent Cloud Snowflake Sink connector. The quick start provides the basics of selecting the connector and configuring it to consume data from Kafka and persist the data to a Snowflake database.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.
    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

See the Quick Start for Confluent Cloud for installation instructions.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Snowflake Sink connector card.

Snowflake Sink Connector Card

Step 4: Enter the connector details

Note

  • Ensure you have all your prerequisites completed.
  • The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.

At the Add Snowflake Sink Connector screen, complete the following:

If you’ve already populated your Kafka topics, select the topic(s) you want to connect from the Topics list.

To create a new topic, click +Add new topic.

Step 5: Check Snowflake

After the connector is running, verify that messages are populating your Snowflake database table.
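
For example, you can run a quick query in a Snowflake worksheet. A minimal sketch, assuming the connector writes to a hypothetical table PRODUCTION.PUBLIC.MY_TOPIC; substitute your own names:

// Confirm rows are arriving and inspect a sample
select count(*) from PRODUCTION.PUBLIC.MY_TOPIC;
select RECORD_METADATA, RECORD_CONTENT from PRODUCTION.PUBLIC.MY_TOPIC limit 5;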

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.

For Snowflake troubleshooting, see Troubleshooting Issues in the Snowflake documentation.

Note

  • The Snowflake Sink connector does not remove Snowflake pipes when a connector is deleted. For instructions to manually clean up Snowflake pipes, see Dropping Pipes.
  • Snowflake Snowpipe failure can prevent messages from showing up in the target table even though the Snowflake Sink connector wrote them successfully. If this happens, check the Snowflake COPY_HISTORY view, internal stage, or table stage to find the message and associated error, as sketched below. For more about the Snowflake Sink connector workflow, see Workflow for the Kafka Connector.
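
The following sketch shows how you might check recent ingestion history and list the connector’s pipes. COPY_HISTORY is a standard Snowflake Information Schema table function; the table and schema names are hypothetical placeholders:

// Run in the target database: load history for the target table over the last hour
select *
from table(information_schema.copy_history(
  TABLE_NAME => 'MY_TOPIC',
  START_TIME => dateadd(hours, -1, current_timestamp())));

// List pipes created by the connector (drop them manually after deleting the connector)
show pipes in schema PRODUCTION.PUBLIC;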

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "SnowflakeSink",
  "name": "<connector-name>",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "<my-kafka-api-key>",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "topics": "<topic1>, <topic2>",
  "input.data.format": "JSON",
  "snowflake.url.name": "https://wm83168.us-central1.gcp.snowflakecomputing.com:443",
  "snowflake.user.name": "<login-username>",
  "snowflake.private.key": "<private-key>",
  "snowflake.database.name": "<database-name>",
  "snowflake.schema.name": "<schema-name>",
  "tasks.max": "1"
}

Note the following required property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Enter a name for your connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "topics": Enter one topic or multiple comma-separated topics.

  • "input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  • "snowflake.url.name": Enter the URL for accessing your Snowflake account. Use the format https://<account_locator>.<region_id>.<cloud_provider>.snowflakecomputing.com:443. The https:// and 443 port number are optional. For more information, see Account Locator in a Region. Do not use the region ID if your account is in the AWS US West region and you are using AWS PrivateLink.

  • "snowflake.user.name": Enter the user name created earlier. Note that if using the SNOWPIPE_STEAMING ingestion method, you must add the "snowflake.role.name" property. See Configuration Properties for all property values and descriptions.

  • "snowflake.private.key":

    • Enter the private key created earlier as a single line.
    • Enter only the part of the key between --BEGIN RSA PRIVATE KEY-- and --END RSA PRIVATE KEY--.
  • "snowflake.database.name": Enter the database name containing the table to insert rows into.

  • "snowflake.schema.name": Enter the Snowflake Schema name that contains the table to insert rows into.

  • "tasks.max": Enter the number of tasks for the connector. Refer to Confluent Cloud connector limitations for additional information.

The following are optional properties to include in the configuration. These properties affect what metadata is included in the RECORD_METADATA column in the Snowflake database table.

  • "snowflake.metadata.createtime": If this value is set to "false", the CreateTime property value is omitted from the metadata in the RECORD_METADATA column. The default value is "true".
  • "snowflake.metadata.topic": If this value is set to "false", the topic property value is omitted from the metadata in the RECORD_METADATA column. The default value is "true".
  • "snowflake.metadata.offset.and.partition": If the value is set to "false", the Offset and Partition property values are omitted from the metadata in the RECORD_METADATA column. The default value is "true".
  • "snowflake.metadata.all": If the value is set to "false", the metadata in the RECORD_METADATA column is empty. The default value is "true".

Set the following properties to determine when records are flushed to Snowflake. Records are flushed when the first of these thresholds is met. For example, suppose the flush interval is set to 120 seconds and that interval has elapsed since the last flush, but the record-count threshold has not been reached: records are flushed because the time interval was reached first.

  • "buffer.flush.time": The time (in seconds) the connector waits before flushing cached records to Snowflake. The default value is 120 seconds, and the minimum value is 10 seconds. You can configure a longer time interval.

  • "buffer.count.records": Records are cached in a buffer (per partition) before they are flushed to Snowflake. The default value is 10000. This is the minimum number of records. You can configure this to a larger number of records. Records are flushed to Snowflake when the number of records reaches the property value.

  • "buffer.size.bytes": Records are cached in a buffer (per partition) before being written to Snowflake as data files. The buffer size defaults to 5000000 bytes (5 MB). This is the minimum cache size value. Records are flushed to Snowflake when this buffer reaches the property size.

    Note

    When a flush is triggered because the cache reaches 5 MB, you might expect to see a 5 MB data file in Snowflake. Instead, you will see a much smaller file (for example, ~150 KB). This is because the 5 MB of flushed data is converted from Java to UTF, which reduces the file size by 50 percent, and the file is then compressed with gzip, which further reduces the file size by 95 percent. You can inspect the staged file sizes with the sketch after this list.

  • "tasks.max": Enter the maximum number of tasks that the connector will use. Each task is limited to a number of topic partitions based on the buffer.size.bytes property value. For example, a 10 MB buffer size is limited to 50 topic partitions, a 20 MB buffer is limited to 25 topic partitions, 50 MB buffer is limited to 10 topic partitions, and a 100 MB buffer to 5 topic partitions.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI. See Unsupported transformations for a list of SMTs that are not supported with this connector.

See Configuration Properties for all property values and descriptions.

Step 4: Load the properties file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file snowflake-sink.json

Example output:

Created connector confluent-snowflake lcc-ix4dl

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID          |            Name         | Status  | Type
+-----------+-------------------------+---------+------+
lcc-ix4dl   | confluent-snowflake     | RUNNING | sink

Step 6: Check Snowflake

After the connector is running, verify that records are populating your Snowflake database.
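
As a quick check, you can query the per-partition progress the connector has written so far. A minimal sketch, assuming a hypothetical target table PRODUCTION.PUBLIC.MY_TOPIC:

// Highest Kafka offset written per partition, taken from RECORD_METADATA
select RECORD_METADATA:partition::int as kafka_partition,
       max(RECORD_METADATA:offset::int) as max_kafka_offset
from PRODUCTION.PUBLIC.MY_TOPIC
group by kafka_partition;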

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.

For Snowflake troubleshooting, see Troubleshooting Issues in the Snowflake documentation.

Note

  • The Snowflake Sink connector does not remove Snowflake pipes when a connector is deleted. For instructions to manually clean up Snowflake pipes, see Dropping Pipes.
  • Snowflake Snowpipe failure can prevent messages from showing up in the target table even though the Snowflake Sink connector wrote them successfully. If this happens, check the Snowflake COPY_HISTORY view, internal stage, or table stage to find the message and associated error. For more about the Snowflake Sink connector workflow, see Workflow for the Kafka Connector.

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

Which topics do you want to get data from?

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list
  • Importance: high

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string
  • Default: default
  • Importance: medium

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are JSON, AVRO, JSON_SR, or PROTOBUF. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.

  • Type: string
  • Default: JSON
  • Importance: high
input.key.format

Sets the input Kafka record key format. Valid entries are AVRO, JSON_SR, PROTOBUF, STRING, or JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.

  • Type: string
  • Default: STRING
  • Valid Values: AVRO, JSON, JSON_SR, PROTOBUF, STRING
  • Importance: high
key.converter.reference.subject.name.strategy

Set the subject reference name strategy for key. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string
  • Default: DefaultReferenceSubjectNameStrategy
  • Importance: high
value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string
  • Default: DefaultReferenceSubjectNameStrategy
  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high
kafka.service.account.id

The service account that is used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high

How should we connect to your Snowflake database?

snowflake.url.name

The URL for accessing your Snowflake account, in the form of https://<account_name>.<region_id>.snowflakecomputing.com:443. Note that the https:// and port number are optional. The region ID is not used if your account is in the AWS US West region and you are not using AWS PrivateLink.

  • Type: string
  • Importance: high
snowflake.user.name

User login name for the Snowflake account.

  • Type: string
  • Importance: high
snowflake.private.key

The private key to authenticate the user. Include only the key, not the header or footer. If the key is split across multiple lines, remove the line breaks. You can provide either an unencrypted key or an encrypted key. If you use an encrypted key, provide the snowflake.private.key.passphrase parameter so Snowflake can decrypt the key. Use this parameter only if the snowflake.private.key parameter value is encrypted.

  • Type: password
  • Importance: high
snowflake.database.name

The name of the database that contains the table to insert rows into.

  • Type: string
  • Importance: high
snowflake.role.name

Access control role to use when inserting rows into the table. Required when the ingestion method is SNOWPIPE_STREAMING. Not required when the ingestion method is SNOWPIPE; in that case, the connector uses the user’s default role.

  • Type: string
  • Importance: low

Database details

snowflake.schema.name

The name of the schema that contains the table to insert rows into.

  • Type: string
  • Importance: high
snowflake.topic2table.map

Map of topics to tables (optional). Format: comma-separated tuples, for example, <topic-1>:<table-1>,<topic-2>:<table-2>,…

  • Type: string
  • Importance: high

Snowflake connection

snowflake.ingestion.method

Choose the preferred ingestion method. The connector supports SNOWPIPE (the default) and SNOWPIPE_STREAMING for Kafka data ingestion. Using SNOWPIPE_STREAMING may provide a cost benefit for your Snowflake project.

  • Type: string
  • Default: SNOWPIPE
  • Importance: high

Connection details

snowflake.private.key.passphrase

If snowflake.private.key is encrypted, this passphrase is used to decrypt the key. If the value of this parameter is not empty, the connector uses this passphrase to try to decrypt the private key.

  • Type: password
  • Default: [hidden]
  • Importance: medium
snowflake.metadata.createtime

If the value is set to FALSE, the CreateTime property value is omitted from the metadata in the RECORD_METADATA column. The default value is TRUE.

  • Type: boolean
  • Default: true
  • Importance: medium
snowflake.metadata.topic

If the value is set to FALSE, the topic property value is omitted from the metadata in the RECORD_METADATA column. The default value is TRUE.

  • Type: boolean
  • Default: true
  • Importance: medium
snowflake.metadata.offset.and.partition

If the value is set to FALSE, the Offset and Partition property values are omitted from the metadata in the RECORD_METADATA column. The default value is TRUE.

  • Type: boolean
  • Default: true
  • Importance: medium
snowflake.metadata.all

If the value is set to FALSE, the metadata in the RECORD_METADATA column is completely empty. The default value is TRUE.

  • Type: boolean
  • Default: true
  • Importance: medium
snowflake.enable.schematization

Set to TRUE to enable schema detection and evolution for the Kafka connector with Snowpipe Streaming. The default value is FALSE.

  • Type: boolean
  • Default: false
  • Importance: medium
buffer.flush.time

Number of seconds between buffer flushes, where the flush moves records from the connector’s memory cache to the internal stage. The default value is 120 seconds. The minimum value allowed is 10 for snowflake.ingestion.method=SNOWPIPE and 1 for snowflake.ingestion.method=SNOWPIPE_STREAMING. The connector also applies buffer.count.records and buffer.size.bytes=10,000,000 (10 MB); the connector flushes Kafka records to Snowflake when the first of these thresholds is reached.

  • Type: long
  • Default: 120
  • Valid Values: Value must be at least 10 for the SNOWPIPE method or at least 1 for the SNOWPIPE_STREAMING method
  • Importance: low
buffer.count.records

Number of records between buffer flushes, where the flush moves records from the connector’s memory cache to the internal stage. The default and minimum value is 10,000 records. The connector also applies buffer.flush.time and buffer.size.bytes=10,000,000 (10 MB); the connector flushes Kafka records to Snowflake when the first of these thresholds is reached.

  • Type: long
  • Default: 10000
  • Valid Values: [10000,…]
  • Importance: low
buffer.size.bytes

Kafka records are cached in a buffer (per partition) before being written to Snowflake as data files. The buffer size defaults to 10000000 bytes (10 MB). The records are compressed when written to Snowflake, so the size of the cached record buffer may be larger than the size of the resulting data files created in Snowflake.

  • Type: long
  • Default: 10000000
  • Valid Values: [10000000,…,100000000]
  • Importance: low

Error handling

errors.tolerance

Use this property if you would like to configure the connector’s error handling behavior differently from the Connect framework’s default behavior.

  • Type: string
  • Default: all
  • Importance: low

Organize my data by…

key.subject.name.strategy

Determines how to construct the subject name under which the key schema is registered with Schema Registry.

  • Type: string
  • Default: TopicNameStrategy
  • Valid Values: RecordNameStrategy, TopicNameStrategy
  • Importance: medium
value.subject.name.strategy

Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Type: string
  • Default: TopicNameStrategy
  • Valid Values: RecordNameStrategy, TopicNameStrategy
  • Importance: medium

Consumer configuration

max.poll.interval.ms

The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).

  • Type: long
  • Default: 300000 (5 minutes)
  • Valid Values: [60000,…,1800000]
  • Importance: low
max.poll.records

The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.

  • Type: long
  • Default: 500
  • Valid Values: [1,…,500]
  • Importance: low

Number of tasks for this connector

tasks.max

The number of tasks for the connector. Each task is limited to a number of topic partitions based on the buffer.size.bytes configuration, e.g., 10 MB -> 50 Topic Partitions, 20 MB-> 25 Topic Partitions, 50 MB -> 10 Topic Partitions, and 100 MB -> 5 Topic Partitions.

  • Type: int
  • Valid Values: [1,…]
  • Importance: high

Troubleshooting

For Snowflake troubleshooting, see Troubleshooting Issues in the Snowflake documentation.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.

Suggested Reading

The following blog post provides an introduction to the Snowflake Sink connector and a scenario walkthrough.

Blog post: Announcing the Snowflake Sink connector for Apache Kafka in Confluent Cloud

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
