Google Cloud Functions Sink (Legacy) Connector for Confluent Cloud

Tip

Confluent recommends using the Google Cloud Functions Gen 2 Sink connector, which offers support for both Gen 2 and Gen 1 functions while delivering improved performance. For more information, see Google Cloud Functions Gen 2 Sink Connector for Confluent Cloud.

The fully-managed Google Cloud Functions Sink connector for Confluent Cloud integrates Apache Kafka® with Google Cloud Functions. For basic information about functions, see the Google Cloud Functions Documentation.

The connector consumes records from Kafka topics and invokes a Google Cloud Function. Each request sent to Google Cloud Functions can contain up to max.batch.size records.
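For illustration, the following is a minimal sketch of a Python HTTP function that could receive such a batch. It assumes the connector POSTs the batched records as a JSON array in the request body; the entry-point name and the record field names (topic, key, value) are illustrative assumptions, not a documented contract.

# Hypothetical HTTP entry point for a Cloud Function (Python runtime).
# Assumes the batch arrives as a JSON array of record objects; the field
# names used below are illustrative.
def handle_records(request):
    records = request.get_json(silent=True) or []
    for record in records:
        # Process each Kafka record delivered in this invocation.
        print("topic=%s key=%s value=%s" % (
            record.get("topic"), record.get("key"), record.get("value")))
    # A 2xx response marks the invocation as successful; error responses are
    # written to the error topic and handled according to behavior.on.error.
    return ("OK", 200)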

Note

This is a Quick Start for the fully-managed cloud connector. If you are installing the connector locally for Confluent Platform, see Google Functions Sink Connector for Confluent Platform.

Features

The Google Cloud Functions Sink connector provides the following features:

  • Results from Google Cloud Functions are stored in the following topics (see the example following this list):
    • success-<connector-id>
    • error-<connector-id>
  • Supported input data formats are AVRO, JSON_SR (JSON Schema), PROTOBUF, JSON (schemaless), and BYTES. If no schema is defined, values are encoded as plain strings. For example, "name": "Kimberley Human" is encoded as name=Kimberley Human.
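For example, assuming the connector ID lcc-ix4dl used in the CLI example later in this page, you could inspect these result topics with the Confluent CLI:

confluent kafka topic consume success-lcc-ix4dl --from-beginning
confluent kafka topic consume error-lcc-ix4dl --from-beginning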

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.

Limitations

Be sure to review the limitations for this connector before using it.

Quick Start

Use this quick start to get up and running with the Confluent Cloud Google Cloud Functions Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to a target Google Cloud Function.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.

    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
  • The following role must be granted to the connector's service account in your Google Cloud project:

    Google Project Connector Role
  • The function's Trigger type must be set to HTTP, and Require authentication must be selected (see the example following this list).

    HTTP Trigger Type
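For reference, the following gcloud commands sketch one way to deploy and authorize such a Gen 1 function. The function name, entry point, runtime, region, and service account are placeholders consistent with the examples later in this page, and granting the Cloud Functions Invoker role is one common way to satisfy the authenticated-invocation requirement; your setup may differ.

gcloud functions deploy dev-test \
  --runtime=python311 \
  --entry-point=handle_records \
  --trigger-http \
  --no-allow-unauthenticated \
  --region=us-central1

gcloud functions add-iam-policy-binding dev-test \
  --region=us-central1 \
  --member="serviceAccount:<service-account-name>@<project-id>.iam.gserviceaccount.com" \
  --role="roles/cloudfunctions.invoker"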

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

See the Quick Start for Confluent Cloud for installation instructions.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Google Cloud Functions Sink connector card.

Google Cloud Functions Sink Connector Card

Step 4: Enter the connector details

Note

  • Ensure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

At the Add Google Cloud Functions Sink Connector screen, complete the following:

If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.

To create a new topic, click +Add new topic.

Step 5: Check for records

Verify that records are being produced.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.
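For example, for this connector (assuming the plugin name matches the connector class shown in the next step):

confluent connect plugin describe GoogleCloudFunctionsSink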

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "GoogleCloudFunctionsSink",
  "name": "GoogleCloudFunctionsSinkConnector_0",
  "topics": "pageviews_proto",
  "input.data.format": "PROTOBUF",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "****************",
  "kafka.api.secret": "****************************************************************",
  "function.name": "dev-test",
  "project.id": "connect-2021",
  "gcf.credentials.json": "*",
  "tasks.max": "1"
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • "topics": Identifies the topic name or a comma-separated list of topic names.
  • "input.data.format": Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "function.name": The name of your predefined Google Cloud function.

  • "project.id": Your GCP project ID.

  • "gcf.credentials.json": This contains the contents of the downloaded JSON file. See Formatting Google Cloud credentials for details about how to format and use the contents of the downloaded credentials file.

Optional (an example snippet follows this list):

  • "max.batch.size": The maximum number of records to combine when invoking a single Google Cloud function. Defaults to 1 (batching disabled). Accepts values from 1 to 1000.
  • "max.pending.requests": The maximum number of pending requests that can be made to Google Cloud Functions concurrently. Defaults to 1.
  • "request.timeout.ms": The maximum time, in milliseconds, that the connector attempts a request to Google Cloud Functions before timing out (socket timeout). Defaults to 300000 ms (5 minutes).
  • "retry.timeout.ms": The total amount of time, in milliseconds, that the connector exponentially backs off and retries failed requests (for example, when throttled). Response codes that are retried are HTTP 401 Unauthorized and HTTP 500 Internal Server Error. Defaults to 300000 ms (5 minutes). Enter -1 to retry indefinitely.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI. See Unsupported transformations for a list of SMTs that are not supported with this connector.

See Configuration Properties for all property values and definitions.

Formatting Google Cloud credentials

The contents of the downloaded credentials file must be converted to string format before it can be used in the connector configuration.

  1. Convert the JSON file contents into string format.

  2. Add the escape character \ before all \n entries in the Private Key section so that each \n entry becomes \\n. The example below has been formatted with extra line breaks so that the \\n entries are easier to see. Most of the credentials key has been omitted.

    Tip

    A script is available that converts the credentials to a string and also adds additional \ escape characters where needed. See Stringify Google Cloud Credentials.

      {
          "connector.class": "GoogleCloudFunctionsSink",
          "name": "GoogleCloudFunctionsSinkConnector_0",
          "kafka.api.key": "<my-kafka-api-key>",
          "kafka.api.secret": "<my-kafka-api-secret>",
          "topics": "<topic-name>",
          "data.format": "AVRO",
          "function.name": "dev-test",
          "project.id": "connect-2021",
          "gcf.credentials.json": "{\"type\":\"service_account\",\"project_id\":\"connect-
          1234567\",\"private_key_id\":\"omitted\",
          \"private_key\":\"-----BEGIN PRIVATE KEY-----
          \\nMIIEvAIBADANBgkqhkiG9w0BA
          \\n6MhBA9TIXB4dPiYYNOYwbfy0Lki8zGn7T6wovGS5pzsIh
          \\nOAQ8oRolFp\rdwc2cC5wyZ2+E+bhwn
          \\nPdCTW+oZoodY\\nOGB18cCKn5mJRzpiYsb5eGv2fN\/J
          \\n...rest of key omitted...
          \\n-----END PRIVATE KEY-----\\n\",
          \"client_email\":\"pub-sub@connect-123456789.iam.gserviceaccount.com\",
          \"client_id\":\"123456789\",\"auth_uri\":\"https:\/\/accounts.google.com\/o\/oauth2\/
          auth\",\"token_uri\":\"https:\/\/oauth2.googleapis.com\/
          token\",\"auth_provider_x509_cert_url\":\"https:\/\/
          www.googleapis.com\/oauth2\/v1\/
          certs\",\"client_x509_cert_url\":\"https:\/\/www.googleapis.com\/
          robot\/v1\/metadata\/x509\/pub-sub%40connect-
          123456789.iam.gserviceaccount.com\"}",
          "tasks.max": "1"
      }
    
  3. Add all the converted string content to the "gcf.credentials.json" section of your configuration file as shown in the example above.
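If you prefer not to use the Stringify Google Cloud Credentials script, the following minimal Python sketch produces an equivalently escaped string from a downloaded key file; the file name gcp-credentials.json is an assumption.

import json

# Read the downloaded service account key file.
with open("gcp-credentials.json") as f:
    creds = json.load(f)

# Serialize the credentials to a single-line JSON string, then serialize that
# string again so that quotes become \" and the \n entries in the private key
# become \\n. The printed value (including the surrounding quotes) can be
# pasted as the value of "gcf.credentials.json". Note that json.dumps does not
# escape forward slashes; that escaping is optional in JSON.
print(json.dumps(json.dumps(creds)))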

Step 4: Load the properties file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file google-functions-sink-config.json

Example output:

Created connector GoogleCloudFunctionsSinkConnector_0 lcc-ix4dl

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID          |       Name                          | Status  | Type
+-----------+-------------------------------------+---------+------+
lcc-ix4dl   | GoogleCloudFunctionsSinkConnector_0 | RUNNING | sink

Step 6: Check for records

Verify that records are being produced.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.
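For example, assuming the connector ID lcc-ix4dl from the previous steps, the Dead Letter Queue topic typically follows the dlq-<connector-id> naming convention and can be inspected with:

confluent kafka topic consume dlq-lcc-ix4dl --from-beginning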

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

Which topics do you want to get data from?

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list
  • Importance: high

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string
  • Default: default
  • Importance: medium

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Type: string
  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high
kafka.service.account.id

The service account used to generate the API keys that the connector uses to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high

How should we connect to your function?

function.name

The Google Cloud function to invoke.

  • Type: string
  • Importance: high
project.id

The Google Cloud Project ID where the function is deployed.

  • Type: string
  • Importance: high

GCP credentials

gcf.credentials.json

GCP service account JSON file with invoker permission for Functions.

  • Type: password
  • Importance: high

Cloud Function details

max.batch.size

The maximum number of Kafka records to combine in a single function invocation. To disable batching of records, set this value to 1.

  • Type: int
  • Default: 1
  • Valid Values: [1,…]
  • Importance: low
max.pending.requests

The maximum number of pending requests that can be made to Google Cloud Functions concurrently.

  • Type: int
  • Default: 1
  • Valid Values: [1,…,128]
  • Importance: low
request.timeout.ms

The maximum time, in milliseconds, that the connector attempts a request to Google Cloud Functions before timing out (socket timeout).

  • Type: int
  • Default: 300000 (5 minutes)
  • Valid Values: [0,…]
  • Importance: low
retry.timeout.ms

The total amount of time, in milliseconds, that the connector exponentially backs off and retries failed requests (for example, when throttled). Response codes that are retried are HTTP 401 Unauthorized and HTTP 500 Internal Server Error. A value of -1 indicates indefinite retrying.

  • Type: int
  • Default: 300000 (5 minutes)
  • Valid Values: [-1,…]
  • Importance: low

How should we handle errors?

behavior.on.error

The connector’s behavior if the called Google Cloud function returns an error. Valid options are ‘log’ and ‘fail’. ‘log’ logs the error message and continues processing, while ‘fail’ stops the connector when an error occurs.

  • Type: string
  • Default: log
  • Importance: low

Consumer configuration

max.poll.interval.ms

The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).

  • Type: long
  • Default: 300000 (5 minutes)
  • Valid Values: [60000,…,1800000]
  • Importance: low
max.poll.records

The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.

  • Type: long
  • Default: 500
  • Valid Values: [1,…,500]
  • Importance: low

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int
  • Valid Values: [1,…]
  • Importance: high

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
