Snowflake Sink Connector for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see the Snowflake Connector for Kafka documentation.

The Kafka Connect Snowflake Sink connector for Confluent Cloud maps and persists events from Apache Kafka® topics directly to a Snowflake database, exposing the data to services for querying, enrichment, and analytics. The connector supports Avro, JSON Schema, Protobuf, and JSON (schemaless) input data from Kafka topics.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Features

The Snowflake Sink connector provides the following features:

  • Database authentication: Uses private key authentication.
  • Input data formats: The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) input data formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • Select configuration properties: The following properties determine what metadata is included in the RECORD_METADATA column in the Snowflake database table. An example of the resulting RECORD_METADATA contents follows this list.
    • snowflake.metadata.createtime: If this value is set to false, the CreateTime property value is omitted from the metadata in the RECORD_METADATA column. The default value is true.
    • snowflake.metadata.topic: If this value is set to false, the topic property value is omitted from the metadata in the RECORD_METADATA column. The default value is true.
    • snowflake.metadata.offset.and.partition: If the value is set to false, the Offset and Partition property values are omitted from the metadata in the RECORD_METADATA column. The default value is true.
    • snowflake.metadata.all: If the value is set to false, the metadata in the RECORD_METADATA column is empty. The default value is true.
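
For reference, with all of these properties left at their default value of true, the RECORD_METADATA value stored with each row is a JSON object along the lines of the following sketch. The topic name and values are illustrative; see Schema of Topics in the Snowflake documentation for the full set of fields, which can also include the record key, schema ID, and headers.

{
  "CreateTime": 1667435456187,
  "topic": "pageviews",
  "partition": 0,
  "offset": 34
}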

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

See Configuration Properties for configuration property values and descriptions. See the Confluent Cloud connector limitations for more information.

Target table naming guidelines

Note the following table naming guidelines and limitations:

  • The fully-managed Snowflake Sink connector allows you to configure topic:table name mapping. This feature is also supported by the self-managed Snowflake Sink connector.

  • Snowflake itself has limitations on object (table) naming conventions. See Identifier Requirements for details.

  • Kafka is much more permissive with topic naming conventions. You are allowed to use Kafka topic names that break the table name mapping in the Confluent Cloud Snowflake Sink connector.

    When a Kafka topic name does not conform to Snowflake’s table naming limitations (for example, my-topic-name), the connector converts the topic name to a safe table name with an appended hash (for example, my_topic_name_021342). A topic whose name already conforms (for example, my_topic_name) sends results to the expected table named my_topic_name.

  • If the connector needs to adjust the name of the table created for a Kafka topic, there is the potential for identical table names. For example, if you are reading data from Kafka topics numbers+x and numbers-x, the tables created for these topics will both be named NUMBERS_X. To avoid table name duplication, the connector appends a suffix to the table name. The suffix is an underscore followed by a generated hash. Alternatively, you can avoid generated names entirely by mapping topics to explicit table names, as shown in the example below.
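
If you prefer to control table names yourself rather than rely on the generated safe names, you can use the optional snowflake.topic2table.map property (described under Configuration Properties) to map each topic to a table name that already meets Snowflake's identifier requirements. The following connector configuration fragment is a sketch; the topic and table names are illustrative:

{
  "topics": "my-topic-name,numbers-x",
  "snowflake.topic2table.map": "my-topic-name:MY_TOPIC_NAME,numbers-x:NUMBERS_X_RAW"
}

With an explicit mapping in place, the connector uses the table names you provide, so it does not need to generate safe names for those topics.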

Generate a Snowflake key pair

Before the connector can sink data to Snowflake, you need to generate a key pair. Snowflake authentication requires 2048-bit (minimum) RSA. You add the public key to a Snowflake user account. You add the private key to the connector configuration (when completing the Quick Start instructions).

Note

This procedure generates an unencrypted private key. You can generate and use an encrypted key. If you generate an encrypted key, add the passphrase to your connector configuration in addition to the private key. For information about generating an encrypted key, see Using Key Pair Authentication in the Snowflake documentation.
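
For example, if you do generate an encrypted key, your connector configuration will include both the key and its passphrase, as in the following sketch (placeholder values shown):

{
  "snowflake.private.key": "<encrypted-private-key-as-a-single-line>",
  "snowflake.private.key.passphrase": "<private-key-passphrase>"
}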

Creating the key pair

Complete the following steps to generate a key pair.

  1. Generate a private key using OpenSSL.

    openssl genrsa -out snowflake_key.pem 2048
    
  2. Generate the public key referencing the private key.

    openssl rsa -in snowflake_key.pem -pubout -out snowflake_key.pub
    
  3. List the generated Snowflake key files.

    ls -l snowflake_key*
    
    -rw-r--r--  1  1679 Jun  8 17:04 snowflake_key.pem
    -rw-r--r--  1   451 Jun  8 17:05 snowflake_key.pub
    
  4. Show the contents of the public key file.

    cat snowflake_key.pub
    
    -----BEGIN PUBLIC KEY-----
    MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2zIuUb62JmrUAMoME+SX
    vsz9KUCp/cC+Y+kTGfYB3jRDQ06O0UT+yUKMO/KWuc0dUxZ8s9koW5l/n+TBfxIQ
    
    ... omitted
    
    1tD+Ktd/CTXPoVEI2tgCC9Avf/6/9HU3IpV0gL8SZ8U0N5ot4Uw+CSYB3JjMagEG
    bBWZ8Qc26pFk7Fd17+ykH6rEdLeQ9OElc0ZruVwSsa4AxaZOT+rqCCP7FQPzKTtA
    JQIDAQAB
    -----END PUBLIC KEY-----
    
  5. Copy the key. You will add it to a new user in Snowflake. Copy only the part of the key between --BEGIN PUBLIC KEY-- and --END PUBLIC KEY--. You can do this manually or you can use the following command:

    grep -v "BEGIN PUBLIC" snowflake_key.pub | grep -v "END PUBLIC" | tr -d '\r\n'
    

    In the following section you create a user and add the public key.

Creating a user and adding the public key

Open your Snowflake project. Complete the following steps to create a user account and add the public key to this account.

Note

The following steps show Snowflake UI screen captures. If you notice that any of the screen captures are outdated, report them to the Confluent docs team.

  1. Go to the Worksheets panel and switch to the SECURITYADMIN role.

    Important

    Be sure to set the SECURITYADMIN role in the Worksheets panel (shown below) and not by using the user account drop-down selection. For additional information, see User Management.

    Snowflake security admin role
  2. Run the following query in Worksheets to create a user and add the public key you copied earlier.

    CREATE USER confluent RSA_PUBLIC_KEY='<public-key>';
    

    Make sure to add the public key as a single line in the statement. The following shows what this looks like in Snowflake Worksheets:

    Snowflake sysadmin role creation statements

    Tip

    If you did not set the role to SECURITYADMIN, or if you set the role using the user account drop-down menu, an SQL access control error is displayed.

    SQL access control error: Insufficient privileges to operate on account '<account-name>'
    

Configuring user privileges

Complete the following steps to set the correct privileges for the user you added.

For example: Suppose you want to send Apache Kafka® records to a database named PRODUCTION using the schema PUBLIC. The following shows the required queries to configure the necessary user privileges.

-- Use a role that can create and manage roles and privileges:
use role securityadmin;

-- Create a Snowflake role with the privileges to work with the connector:
create role kafka_connector_role;

-- Grant privileges on the database:
grant usage on database PRODUCTION to role kafka_connector_role;

-- Grant privileges on the schema:
grant usage on schema PRODUCTION.PUBLIC to role kafka_connector_role;
grant create table on schema PRODUCTION.PUBLIC to role kafka_connector_role;
grant create stage on schema PRODUCTION.PUBLIC to role kafka_connector_role;
grant create pipe on schema PRODUCTION.PUBLIC to role kafka_connector_role;

-- Grant the custom role to an existing user:
grant role kafka_connector_role to user confluent;

-- Make the new role the default role:
alter user confluent set default_role=kafka_connector_role;

Extracting the private key

You add the private key to your Snowflake connector configuration. Extract the key and put it in a safe place until you set up your connector.

  1. List the generated Snowflake key files.

    ls -l snowflake_key*
    
    -rw-r--r--  1  1679 Jun  8 17:04 snowflake_key.pem
    -rw-r--r--  1   451 Jun  8 17:05 snowflake_key.pub
    
  2. Show the contents of the private key file.

    cat snowflake_key.pem
    
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEA2zIuUb62JmrUAMoME+SXvsz9KUCp/cC+Y+kTGfYB3jRDQ06O
    0UT+yUKMO/KWuc0dUxZ8s9koW5l/n+TBfxIQx+24C2+l9t3TxxaLdf/YCgQwKNR9
    dO9/c+SkX8NfcwUynGEo3wpmdb4hp0X9TfWKX9vG//zK2tndmMUrFY5OcGSSVJYJ
    Wv3gk04sVxhINo5knpgZoUVztxcRLm/vNvIX1tD+Ktd/CTXPoVEI2tgCC9Avf/6/
    9HU3IpV0gL8SZ8U0N5ot4Uw+CSYB3JjMagEGbBWZ8Qc26pFk7Fd17+ykH6rEdLeQ
    
    ... omitted
    
    UfrYj7+p03yVflrsB+nyuPETnRJx41b01GrwJk+75v5EIg8U71PQDWfy1qOrUk/d
    9u25iaVRzi6DFM0ppE76Lh72SKy+m0iEZIXWbV9q6vf46Oz1PrtffAzyi4pyJbe/
    ypQ53f0CgYEA7rE6Dh0tG7EnYfFYrnHLXFC2aVtnkfCMIZX/VIZPX82VGB1mV43G
    qTDQ/ax1tit6RHDBk7VU4Xn545Tgj1z6agYPvHtkhxYTq50xVBXr/xwlMnzUZ9s3
    VjGpMYQANm2seleV6/si54mT4TkUyB7jMgWdFsewtwF60quvxmiA9RU=
    -----END RSA PRIVATE KEY-----
    
  3. Copy the key. You will add it to the connector configuration. Copy only the part of the key between --BEGIN RSA PRIVATE KEY-- and --END RSA PRIVATE KEY--. You can do this manually or you can use the following command:

    grep -v "BEGIN RSA PRIVATE KEY" snowflake_key.pem | grep -v "END RSA PRIVATE KEY" | tr -d '\r\n'
    
  4. Save the key in a safe place to use later when you complete the Quick Start steps. Alternatively, you can repeat the previous step when you need the key for the connector configuration.
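
To illustrate where the key ends up: the single-line value produced by the grep command above becomes the value of the snowflake.private.key property in the connector configuration. The key material below is truncated and purely illustrative:

{
  "snowflake.private.key": "MIIEpQIBAAKCAQEA2zIuUb62JmrU...VjGpMYQANm2seleV6/si54mT4TkUyB7jMgWdFsewtwF60quvxmiA9RU="
}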

Quick Start

Use this quick start to get up and running with the Confluent Cloud Snowflake Sink connector. The quick start provides the basics of selecting the connector and configuring it to consume data from Kafka and persist the data to a Snowflake database.

Prerequisites
  • Kafka cluster credentials. You can use one of the following ways to get credentials:
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use the Confluent Cloud CLI or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the Snowflake Sink connector icon.

Snowflake Sink Connector Icon

Step 4: Set up the connection.

Complete the following and click Continue.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.
  1. Select one or more topics.

  2. Enter a connector name.

  3. Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.

  4. Select an Input message format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  5. Enter the Snowflake connection details:

    • Connection URL: Enter the URL for accessing your Snowflake account. Use the format https://<account_name>.<region_id>.snowflakecomputing.com:443. The https:// and 443 port number are optional. Do not use the region ID if your account is in the AWS US West region and you are not using AWS PrivateLink.
    • Connection user name: Enter the user name created earlier.
    • Private key: Enter the private key created earlier as a single line. Enter only the part of the key between --BEGIN RSA PRIVATE KEY-- and --END RSA PRIVATE KEY--.
    • Database name: Enter the database name containing the table to insert rows into.
  6. Enter the Snowflake Schema name that contains the table to insert rows into.

  7. (Optional) Enter the private key passphrase. This is required if you created an encrypted key when generating the key pair.

  8. (Optional) Select whether to include the following metadata in the RECORD_METADATA column of the database table. To include any of these fields, the UI field Whether or not to include metadata column must be set to true (the default).

    • createtime: If this value is set to false, the CreateTime property value is omitted from the metadata in the RECORD_METADATA column. The default value is true.
    • topic: If this value is set to false, the topic property value is omitted from the metadata in the RECORD_METADATA column. The default value is true.
    • offset and partition: If this value is set to false, the Offset and Partition property values are omitted from the metadata in the RECORD_METADATA column. The default value is true.
    • all metadata: If this value is set to false, the metadata in the RECORD_METADATA column is completely empty. The default value is true.

    For details about metadata, see Schema of Topics in the Snowflake documentation.

  9. Set the properties that determine when records are flushed to Snowflake. Records are flushed as soon as the first of these thresholds is reached. For example: the flush interval is set to 120 seconds and that interval has elapsed since the last flush, but the number of records threshold has not been reached. The records are flushed anyway, because the time threshold was reached first.

    • time in seconds: This property defaults to 120 seconds, which is the minimum value allowed. You can set a longer time interval.
    • number of records: This property defaults to 10000 records, which is the minimum value allowed. You can set a larger number of records.
    • record cache size: This property defaults to 10000000 bytes (10 MB), which is the minimum value allowed. The maximum value is 100000000 bytes (100 MB).

    Note

    When a flush is triggered because the cache reaches 10 MB, you might expect to see a 10 MB data file in Snowflake. Instead, you will see a much smaller file (for example, ~250 KB). This is because the 10 MB of flushed data is first converted from Java's in-memory string representation to UTF-8, which reduces the size by about 50 percent, and the resulting file is then compressed with gzip, which reduces it by a further 95 percent.

  10. Enter the maximum number of tasks for the connector to use. Each task is limited to a number of topic partitions based on the buffer.size.bytes property value. For example, a 10 MB buffer is limited to 50 topic partitions, a 20 MB buffer to 25 topic partitions, a 50 MB buffer to 10 topic partitions, and a 100 MB buffer to 5 topic partitions.

  11. Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.

See Configuration Properties for configuration property values and descriptions.

Step 5: Launch the connector.

Verify the connection details and click Launch.

Launch the connector

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running. It may take a few minutes.

Check the connector status

Step 7: Check Snowflake.

After the connector is running, verify that messages are populating your Snowflake database table.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

For Snowflake troubleshooting, see Troubleshooting Issues in the Snowflake documentation.

Note

  • The Snowflake Sink connector does not remove Snowflake pipes when a connector is deleted. For instructions to manually clean up Snowflake pipes, see Dropping Pipes.
  • A Snowflake Snowpipe failure can prevent messages from showing up in the target table even though the Snowflake Sink connector wrote them successfully. If this happens, check the Snowflake COPY_HISTORY view, internal stage, or table stage to find the message and the associated error. For more about the Snowflake Sink connector workflow, see Workflow for the Kafka Connector.

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.

Cloud ETL demo topology diagram

Using the Confluent Cloud CLI

Complete the following steps to set up and run the connector using the Confluent Cloud CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors.

Enter the following command to list available connectors:

ccloud connector-catalog list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

ccloud connector-catalog describe <connector-catalog-name>

For example:

ccloud connector-catalog describe SnowflakeSink

Example output:

Following are the required configs:
connector.class: SnowflakeSink
name
kafka.api.key
kafka.api.secret
input.data.format
snowflake.url.name
snowflake.user.name
snowflake.private.key
snowflake.schema.name
tasks.max
topics

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "SnowflakeSink",
  "name": "<connector-name>",
  "kafka.api.key": "<my-kafka-api-key>",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "topics": "<topic1>, <topic2>",
  "input.data.format": "JSON",
  "snowflake.url.name": "https://wm83168.us-central1.gcp.snowflakecomputing.com:443",
  "snowflake.user.name": "<login-username>",
  "snowflake.private.key": "<private-key>",
  "snowflake.database.name": "<database-name>",
  "snowflake.schema.name": "<schema-name>",
  "tasks.max": "1"
}

Note the following required property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Enter a name for your connector.
  • "topics": Enter one topic or multiple comma-separated topics.
  • "input.data.format": Sets the input message format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • "snowflake.url.name": Enter the URL for accessing your Snowflake account. Use the format https://<account_name>.<region_id>.snowflakecomputing.com:443. The https:// and 443 port number are optional. Do not use the region ID if your account is in the AWS US West region and you are using AWS PrivateLink.
  • "snowflake.user.name": Enter the user name created earlier.
  • "snowflake.private.key":
    • Enter the private key created earlier as a single line.
    • Enter only the part of the key between --BEGIN RSA PRIVATE KEY-- and --END RSA PRIVATE KEY--.
  • "snowflake.database.name": Enter the database name containing the table to insert rows into.
  • "snowflake.schema.name": Enter the Snowflake Schema name that contains the table to insert rows into.
  • "tasks.max": Enter the number of tasks for the connector. Refer to Confluent Cloud connector limitations for additional information.

The following are optional properties to include in the configuration. These properties affect what metadata is included in the RECORD_METADATA column in the Snowflake database table.

  • "snowflake.metadata.createtime": If this value is set to "false", the CreateTime property value is omitted from the metadata in the RECORD_METADATA column. The default value is "true".
  • "snowflake.metadata.topic": If this value is set to "false", the topic property value is omitted from the metadata in the RECORD_METADATA column. The default value is "true".
  • "snowflake.metadata.offset.and.partition": If the value is set to "false", the Offset and Partition property values are omitted from the metadata in the RECORD_METADATA column. The default value is "true".
  • "snowflake.metadata.all": If the value is set to "false", the metadata in the RECORD_METADATA column is empty. The default value is "true".

Set the following properties to determine when records are flushed to Snowflake. Records are flushed as soon as the first of these thresholds is reached. For example: the flush interval is set to 120 seconds and that interval has elapsed since the last flush, but the number of records threshold has not been reached. The records are flushed anyway, because the time threshold was reached first.

  • "buffer.flush.time": The time (in seconds) the connector waits before flushing cached records to Snowflake. The default value is 120 seconds. This is the minimum time interval. You can configure a longer time interval.

  • "buffer.count.records": Records are cached in a buffer (per partition) before they are flushed to Snowflake. The default value is 10000. This is the minimum number of records. You can configure this to a larger number of records. Records are flushed to Snowflake when the number of records reaches the property value.

  • "buffer.size.bytes": Records are cached in a buffer (per partition) before being written to Snowflake as data files. The buffer size defaults to 10000000 bytes (10 MB). This is the minimum cache size value. It is configurable up to 100000000 bytes (100 MB). Records are flushed to Snowflake when this buffer reaches the property size.

    Note

    When a flush is triggered because the cache reaches 10 MB, you might expect to see a 10 MB data file in Snowflake. Instead, you will see a much smaller file (for example, ~250 KB). This is because the 10 MB of flushed data is first converted from Java's in-memory string representation to UTF-8, which reduces the size by about 50 percent, and the resulting file is then compressed with gzip, which reduces it by a further 95 percent.

  • "tasks.max": Enter the maximum number of tasks that the connector will use. Each task is limited to a number of topic partitions based on the buffer.size.bytes property value. For example, a 10 MB buffer size is limited to 50 topic partitions, a 20 MB buffer is limited to 25 topic partitions, 50 MB buffer is limited to 10 topic partitions, and a 100 MB buffer to 5 topic partitions.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.

See Configuration Properties for configuration property values and descriptions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

ccloud connector create --config <file-name>.json

For example:

ccloud connector create --config snowflake-sink.json

Example output:

Created connector confluent-snowflake lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

ccloud connector list

Example output:

ID          |            Name         | Status  | Type
+-----------+-------------------------+---------+------+
lcc-ix4dl   | confluent-snowflake     | RUNNING | sink

Step 6: Check Snowflake.

After the connector is running, verify that records are populating your Snowflake database.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

For Snowflake troubleshooting, see Troubleshooting Issues in the Snowflake documentation.

Note

  • The Snowflake Sink connector does not remove Snowflake pipes when a connector is deleted. For instructions to manually clean up Snowflake pipes, see Dropping Pipes.
  • A Snowflake Snowpipe failure can prevent messages from showing up in the target table even though the Snowflake Sink connector wrote them successfully. If this happens, check the Snowflake COPY_HISTORY view, internal stage, or table stage to find the message and the associated error. For more about the Snowflake Sink connector workflow, see Workflow for the Kafka Connector.

Configuration Properties

The following connector configuration properties are used with the Snowflake Sink connector for Confluent Cloud.

snowflake.url.name

The URL for accessing your Snowflake account. This is in the format https://<account_name>.<region_id>.snowflakecomputing.com:443. The https:// prefix and the 443 port number are optional. Do not use the region ID if your account is in the AWS US West region and you are not using AWS PrivateLink.

  • Type: string
  • Importance: high
snowflake.user.name

The user login name for the Snowflake account.

  • Type: string
  • Importance: high
snowflake.private.key

The private key to authenticate the user. Include only the key, not the header or footer. If the key is split across multiple lines, remove the line breaks. You can provide either an unencrypted or an encrypted key. If you use an encrypted key, also provide the snowflake.private.key.passphrase parameter so Snowflake can decrypt the key; provide the passphrase only if the key is encrypted.

  • Type: password
  • Importance: high
snowflake.database.name

The name of the database containing the table to insert rows into.

  • Type: string
  • Importance: high
snowflake.schema.name

The name of the schema containing the table to insert rows into.

  • Type: string
  • Importance: high
snowflake.topic2table.map

The map of topics to tables (optional). This is in the format of comma-separated tuples. For example, <topic-1>:<table-1>,<topic-2>:<table-2>,...

  • Type: string
  • Importance: high
snowflake.private.key.passphrase

If snowflake.private.key is encrypted, this is the passphrase used to decrypt the private key.

  • Type: password
  • Importance: medium
snowflake.metadata.createtime

If the value is set to false, the CreateTime property value is omitted from the metadata in the RECORD_METADATA column. The default value is true.

  • Type: string
  • Default value: true
  • Importance: medium
snowflake.metadata.topic

If the value is set to false, the topic property value is omitted from the metadata in the RECORD_METADATA column. The default value is true.

  • Type: string
  • Default value: true
  • Importance: medium
snowflake.metadata.offset.and.partition

If the value is set to false, the offset and partition property values are omitted from the metadata in the RECORD_METADATA column. The default value is true.

  • Type: string
  • Default value: true
  • Importance: medium
snowflake.metadata.all

If the value is set to false, the metadata in the RECORD_METADATA column is empty. The default value is true.

  • Type: string
  • Default value: true
  • Importance: medium
buffer.flush.time

Number of seconds between buffer flushes from Kafka memory cache to the internal stage. The default and minimum value is 120 seconds.

  • Type: long
  • Default value: 120
  • Importance: low
buffer.count.records

Records are cached in a buffer (per partition) before they are flushed to Snowflake. The default value is 10000. This is the minimum number of records. You can configure this to a larger number of records. Records are flushed to Snowflake when the number of records reaches the property value.

  • Type: long
  • Default value: 10000
  • Importance: low
tasks.max

The maximum number of tasks for the connector to use. Each task is limited to a number of topic partitions based on the buffer.size.bytes property value. For example, a 10 MB buffer is limited to 50 topic partitions, a 20 MB buffer to 25 topic partitions, a 50 MB buffer to 10 topic partitions, and a 100 MB buffer to 5 topic partitions.

Troubleshooting

For Snowflake troubleshooting, see Troubleshooting Issues in the Snowflake documentation.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

Suggested Reading

The following blog post provides an introduction to the Snowflake Sink connector and a scenario walkthrough.

Blog post: Announcing the Snowflake Sink connector for Apache Kafka in Confluent Cloud

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.

Cloud ETL demo topology diagram