Pivotal Gemfire Sink Connector for Confluent Platform

You can use the Kafka Connect Pivotal Gemfire connector, currently available as a sink connector, to export data from Apache Kafka® topics to Pivotal Gemfire. The Pivotal Gemfire sink connector periodically polls data from Kafka and, in turn, adds it to Pivotal Gemfire.

Note

This connector is compatible with Pivotal Gemfire 9.x and above.

Prerequisites

The following are required to run the Kafka Connect Pivotal Gemfire Sink Connector:

  • Kafka Broker: Confluent Platform 3.3.0 or above, or Kafka 0.11.0 or above
  • Connect: Confluent Platform 4.0.0 or above, or Kafka 1.0.0 or above

Limitations

  • The connector supports only one task, because it creates only one region object (the client used to write data). More information can be found here.
  • The connector expects non-null keys; records must have explicit keys and values to be exported to Pivotal Gemfire (see the producer sketch below).
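
If you need to attach explicit keys, the Avro console producer can parse them from the input. The following is a minimal sketch, assuming a string key schema and the value schema used later in this quick start:

./bin/kafka-avro-console-producer --broker-list localhost:9092 --topic input_topic \
  --property parse.key=true \
  --property key.separator=: \
  --property key.schema='{"type":"string"}' \
  --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'

Each input line then takes the form "key1":{"f1": "value1"}.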

Install the Pivotal Gemfire Connector

You can install this connector by using the Confluent Hub client (recommended) or you can manually download the ZIP file.

Install the connector using Confluent Hub

Prerequisite
Confluent Hub Client must be installed. This is installed by default with Confluent Enterprise.

Navigate to your Confluent Platform installation directory and run the following command to install the latest (latest) connector version. The connector must be installed on every machine where Connect will run.

confluent-hub install confluentinc/kafka-connect-pivotal-gemfire:latest

You can install a specific version by replacing latest with a version number. For example:

confluent-hub install confluentinc/kafka-connect-pivotal-gemfire:1.0.0-preview

Install the connector manually

Download and extract the ZIP file for your connector and then follow the manual connector installation instructions.
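
For example, assuming the archive version shown above and a plugin directory of /usr/local/share/kafka/plugins (both illustrative), a manual installation might look like the following:

# Extract the downloaded archive into the Connect plugin directory
unzip confluentinc-kafka-connect-pivotal-gemfire-1.0.0-preview.zip \
  -d /usr/local/share/kafka/plugins/

Then make sure plugin.path in your Connect worker configuration includes that directory, for example plugin.path=/usr/local/share/kafka/plugins.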

License

You can use this connector for a 30-day trial period without a license key.

After 30 days, this connector is available under a Confluent enterprise license. Confluent issues enterprise license keys to subscribers, along with providing enterprise-level support for Confluent Platform and your connectors. If you are a subscriber, please contact Confluent Support at support@confluent.io for more information.

See Confluent Platform license for license properties and License topic configuration for information about the license topic.

Configuration Properties

For a complete list of configuration properties for this connector, see Pivotal Gemfire Sink Connector Configuration Properties.

Note

For an example of how to get Kafka Connect connected to Confluent Cloud, see Distributed Cluster in Connect Kafka Connect to Confluent Cloud.

Quick Start

In this quick start, the Pivotal Gemfire connector is used to export data produced by the Avro console producer to a Pivotal Gemfire cache region.

Note

Before you begin: Start the Pivotal Gemfire locator and server. Create a cache region to store the data.
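
If you have not set these up yet, a minimal gfsh session might look like the following. The member names and region type are illustrative; the region name check matches the connector configuration used below:

start locator --name=locator1 --port=10334
start server --name=server1 --locators=localhost[10334]
create region --name=check --type=REPLICATE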

Start the services using the Confluent CLI.

confluent local start

Every service starts in order, printing a message with its status.

Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]
Starting control-center
control-center is [UP]

To import a few records with a simple schema into Kafka, start the Avro console producer as follows:

./bin/kafka-avro-console-producer --broker-list localhost:9092 --topic input_topic \
  --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'

Then, in the console producer, enter the following:

{"f1": "value1"}
{"f1": "value2"}
{"f1": "value3"}

The three records entered are published to the Kafka topic input_topic in Avro format.
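
Optionally, you can verify that the records reached the topic by consuming them back:

./bin/kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic input_topic --from-beginning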

Property-based example

Create a configuration file, gemfire.properties. This configuration is typically used with standalone workers.

name=gemfire-sink
connector.class=io.confluent.connect.pivotal.gemfire.PivotalGemfireSinkConnector
tasks.max=1
topics=input_topic
gemfire.locator.host=localhost
gemfire.locator.port=10334
gemfire.username=<gemfire username>
gemfire.password=<gemfire password>
gemfire.region=check
confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1

Before starting the connector, make sure that the configurations in gemfire.properties are properly set.

Note

Provide either gemfire.locator.host or gemfire.server.host to establish a connection with Pivotal Gemfire and run the connector.
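
For example, to connect through a cache server instead of a locator, you would replace the locator properties with something like the following. The server port property is an assumption here; see the connector configuration reference for the exact names:

gemfire.server.host=localhost
# gemfire.server.port is assumed for illustration; 40404 is Gemfire's default cache server port
gemfire.server.port=40404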

Then start the Pivotal Gemfire connector by loading its configuration with the following command.

Caution

You must include a double dash (--) between the connector name and your flag. For more information, see this post.

confluent local load gemfire-sink -- -d gemfire.properties
{
 "name": "gemfire-sink",
 "config": {
     "name":"gemfire-sink",
     "connector.class":"io.confluent.connect.pivotal.gemfire.PivotalGemfireSinkConnector",
     "tasks.max":"1",
     "topics":"input_topic",
     "gemfire.locator.host":"localhost",
     "gemfire.locator.port":"10334",
     "gemfire.username":"<gemfire username>",
     "gemfire.password":"<gemfire password>",
     "gemfire.region":"check",
     "confluent.topic.bootstrap.servers":"localhost:9092",
     "confluent.topic.replication.factor":"1"
 },
  "tasks": []
}
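
You can confirm that the connector is running by querying the Kafka Connect REST API:

curl -s http://localhost:8083/connectors/gemfire-sink/status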

REST-based example

Use this setting with distributed workers. Write the following JSON to config.json, configure all of the required values, and use the following command to post the configuration to one of the distributed Connect workers. Check here for more information about the Kafka Connect REST API.

{
 "name": "gemfire-sink",
 "config": {
     "name":"gemfire-sink",
     "connector.class":"io.confluent.connect.pivotal.gemfire.PivotalGemfireSinkConnector",
     "tasks.max":"1",
     "topics":"input_topic",
     "gemfire.locator.host":"localhost",
     "gemfire.locator.port":"10334",
     "gemfire.username":"<gemfire username>",
     "gemfire.password":"<gemfire password>",
     "gemfire.region":"check",
     "confluent.topic.bootstrap.servers":"localhost:9092",
     "confluent.topic.replication.factor":"1"
 }
}

Use curl to post the configuration to one of the Kafka Connect workers. Change http://localhost:8083/ to the endpoint of one of your Kafka Connect workers.

curl -s -X POST -H 'Content-Type: application/json' --data @config.json http://localhost:8083/connectors

Use the following command to update the configuration of an existing connector. Note that the PUT endpoint expects only the contents of the config object, not the full JSON wrapper shown above.

curl -s -X PUT -H 'Content-Type: application/json' --data @config.json http://localhost:8083/connectors/gemfire-sink/config
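
To remove the connector when you are finished, send a DELETE request to the same endpoint:

curl -s -X DELETE http://localhost:8083/connectors/gemfire-sink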

Check that the connector started successfully. Review the Connect worker’s log by entering the following:

confluent local log connect

Towards the end of the log you should see that the connector starts, logs a few messages, and then adds data from Kafka to the Pivotal Gemfire check region.

Once the connector has ingested records, check that the data is available in the Pivotal Gemfire check region.

To see the values in the check region, run the following query (for example, in gfsh):

query --query="select * from /check"
Result : true
Limit  : 100
Rows   : 3
Result
-----------
{"f1": "value1"}
{"f1": "value2"}
{"f1": "value3"}

To see the keys in the check region, run:

query --query="select * from /check.keySet"
Result : true
Limit  : 100
Rows   : 3
Result
-----------
kafka1$0$1
kafka1$0$2
kafka1$0$3
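
These keys are generated by the connector; the trailing numbers appear to correspond to the partition and offset of each Kafka record. To fetch a single entry by key (the key shown is taken from the output above), you can use the gfsh get command:

get --key="kafka1$0$1" --region=/check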

Finally, stop the Connect worker and all other Confluent services by running:

confluent local stop

Your output should resemble:

Stopping control-center
control-center is [DOWN]
Stopping ksql-server
ksql-server is [DOWN]
Stopping connect
connect is [DOWN]
Stopping kafka-rest
kafka-rest is [DOWN]
Stopping schema-registry
schema-registry is [DOWN]
Stopping kafka
kafka is [DOWN]
Stopping zookeeper
zookeeper is [DOWN]

You can stop all services and remove any data generated during this quick start by entering the following command:

confluent local destroy

Your output should resemble:

Stopping control-center
control-center is [DOWN]
Stopping ksql-server
ksql-server is [DOWN]
Stopping connect
connect is [DOWN]
Stopping kafka-rest
kafka-rest is [DOWN]
Stopping schema-registry
schema-registry is [DOWN]
Stopping kafka
kafka is [DOWN]
Stopping zookeeper
zookeeper is [DOWN]
Deleting: /var/folders/ty/rqbqmjv54rg_v10ykmrgd1_80000gp/T/confluent.PkQpsKfE