Salesforce Change Data Capture Source Connector for Confluent Platform¶
Salesforce Change Data Capture provides a way to monitor Salesforce records. Salesforce sends a notification when a change to a Salesforce record occurs as part of a create, update, delete, or undelete operation. The Kafka Connect Salesforce Change Data Capture Source connector can be used to capture these change events and write them to a Kafka topic. This connector can be used with either standalone or distributed Connect workers.
Note
The connector’s salesforce.cdc.name property specifies the name of the change event. For standard objects in standard channels, the name of the change event must end with ChangeEvent. For custom objects in standard channels, the name of the change event must end with __ChangeEvent. For example, salesforce.cdc.name=Employee__ChangeEvent. These change events must exist in Salesforce before you run the connector. Refer to the Salesforce Subscription Guide for more information.
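For instance, to subscribe to change events for the standard Lead object (the entity used in the quick start later in this document):
salesforce.cdc.name=LeadChangeEvent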
Note
If you are using Confluent Cloud, see https://docs.confluent.io/cloud/current/connectors/cc-salesforce-source-cdc.html for the cloud Quick Start.
Configuration Properties¶
For a complete list of configuration properties for this connector, see Salesforce Change Data Capture Source Connector Configuration Properties.
Considerations¶
Note the following when using the Salesforce CDC Events source connector.
Replaying Messages¶
Change events are published to the event bus, where they are stored temporarily for 72 hours (3 days). When you create a connector, you can control how stored events are treated when the connector starts for the first time.
- Consume only new change events that arrive after the connector starts, using salesforce.initial.start=latest. This is the default behavior, due largely to API limits.
- Consume all of the stored change events, using salesforce.initial.start=all.
Restarting¶
When the connector operates, it periodically records the replay ID of the last record written to Kafka. When the connector is stopped and then restarted within 24 hours (for standard volume messages) or 72 hours (for high volume messages), the connector continues consuming change events from where it stopped, with no missed events. However, if the connector is stopped for more than 24 or 72 hours, some events are discarded in Salesforce before the connector can read them.
If the connector stops unexpectedly due to a failure, it may not record the replay ID of the last record successfully written to Kafka. When the connector restarts, it resumes from the last recorded replay ID. This means that some events may be duplicated in Kafka.
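For example (with illustrative replay IDs): if the connector last recorded replay ID 1109250 but had already written events through replay ID 1109260 when it failed, the events with replay IDs 1109251 through 1109260 are written to Kafka a second time after the restart.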
Retries¶
In case of failures caused by network issues, you can configure the connector to retry Salesforce requests (once authentication succeeds) using the request.max.retries.time.ms parameter.
The backoff period for each retry attempt uses a randomization function that grows exponentially. If the total time spent retrying a request exceeds request.max.retries.time.ms (15 minutes by default), retries stop and the request fails. This will likely result in a task failure.
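For illustration only: the first retry might wait up to roughly one second, the second up to two seconds, the third up to four seconds, and so on, with each actual wait drawn at random from its window. Once the accumulated retry time reaches request.max.retries.time.ms, the request fails.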
Retries Example¶
Create a sfcdc-source.properties file with the following contents. The salesforce.* credential placeholders must be filled in with the values from your Salesforce account before the connector will start:
name=RetriesExample
kafka.topic=sfcdc-messages
tasks.max=1
connector.class=io.confluent.salesforce.SalesforceCdcSourceConnector
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1
salesforce.consumer.key=< Required Configuration >
salesforce.consumer.secret=< Required Configuration >
salesforce.username=< Required Configuration >
salesforce.password=< Required Configuration >
salesforce.password.token=< Required Configuration >
salesforce.cdc.name=LeadChangeEvent
request.max.retries.time.ms=20000
Run and validate the connector.
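For example, using the local Confluent CLI commands shown later in this quick start:
confluent local load RetriesExample -- -d sfcdc-source.properties
confluent local status RetriesExample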
The connector retries failed requests for a maximum duration of 20000 ms. Retries stop as soon as a Salesforce request succeeds.
Note that the default value for request.max.retries.time.ms is 900000 ms (15 minutes).
Limitations¶
The following limitations apply to this Salesforce connector:
- The connector is limited to one task only.
- The connector is limited by the allocations and limits of the Salesforce account. For example, free org accounts have a cap on the number of events that can occur within a 24-hour period.
Prerequisites¶
The following are required to run the Kafka Connect Salesforce Change Data Capture Source Connector:
- Kafka Broker: Confluent Platform 3.3.0 or above, or Kafka 0.11.0 or above
- Connect: Confluent Platform 4.0.0 or above, or Kafka 1.0.0 or above
- Salesforce Streaming API version 45.0 or above
- Salesforce change notification subscription for Salesforce entities; see Salesforce Change Data Capture Setup.
Install the Salesforce Change Data Capture Connector¶
You can install this connector by using the Confluent Hub client (recommended) or you can manually download the ZIP file.
Install the connector using Confluent Hub¶
- Prerequisite
- Confluent Hub Client must be installed. This is installed by default with Confluent Enterprise.
Navigate to your Confluent Platform installation directory and run the following command to install the latest (latest) connector version. The connector must be installed on every machine where Connect will run.
confluent-hub install confluentinc/kafka-connect-salesforce:latest
You can install a specific version by replacing latest with a version number. For example:
confluent-hub install confluentinc/kafka-connect-salesforce:1.3.0
License¶
You can use this connector for a 30-day trial period without a license key.
After 30 days, this connector is available under a Confluent enterprise license. Confluent issues enterprise license keys to subscribers, along with providing enterprise-level support for Confluent Platform and your connectors. If you are a subscriber, please contact Confluent Support at support@confluent.io for more information.
See Confluent Platform license for license properties and License topic configuration for information about the license topic.
Quick Start¶
This quick start uses the Salesforce Change Data Capture Source connector to capture change events and write them to a Kafka topic.
Salesforce Change Data Capture Setup¶
Following are the steps to enable change notifications for a Salesforce account.
Use https://login.salesforce.com/ to log into an existing account or create a new one.
Tip
You can also use the https://test.salesforce.com/ test environment instead. Make sure to add Leads in the same environment and set salesforce.instance=https://test.salesforce.com in your connector configuration.
Create a new Connected App.
- Select the gear icon in the upper right hand corner and choose Setup.
- Enter App in the Quick Find search box, and choose App Manager in the filtered results.
- Click the New Connected App button in the upper right corner of the Setup panel.
- Supply a Connected App Name, API Name, and Contact Email.
- Select the Enable OAuth Settings checkbox and select the Enable for Device Flow checkbox. These selections enable the connector to use the Salesforce API.
- Under the Select OAuth Scopes field, select all of the items under Available OAuth scopes and add them to the Selected OAuth Scopes.
- Save the new app and press Continue at the prompt.
- Look for the Consumer Key and Consumer Secret in the displayed form. Save these so you can add them to the connector configuration properties file.
Find your Security Token (emailed to you from Salesforce.com). If you need to reset your token or view your profile on Salesforce.com, select Reset My Security Token and follow the instructions.
Finally, subscribe to change event notifications for Salesforce entities. For this quick start, subscribe to the Leads entity.
Following are the steps to add a Lead to Salesforce:
- In the Salesforce UI, find and click on the App launcher icon in the upper left of the page.
- Find and start the Leads application.
- Click the New button, fill out the form, and save it. The Lead has been created.
- Alternatively, leads can be created through a simple Java application; follow the Salesforce documentation.
Start Confluent Platform¶
Start the Confluent services using the following Confluent CLI command:
confluent local start
Important
Do not use the Confluent CLI in production environments.
Property-based example¶
Create a configuration file salesforce-cdc-source.properties with the following content. This file should be placed inside the Confluent Platform installation directory. This configuration is typically used with standalone workers.
name=SalesforceCdcSourceConnector
connector.class=io.confluent.salesforce.SalesforceCdcSourceConnector
tasks.max=1
kafka.topic=< Required Configuration >
salesforce.consumer.key=< Required Configuration >
salesforce.consumer.secret=< Required Configuration >
salesforce.username=< Required Configuration >
salesforce.password=< Required Configuration >
salesforce.password.token=< Required Configuration >
salesforce.cdc.name=< Required Configuration >
salesforce.initial.start=all
confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1
confluent.license=
Note
salesforce.consumer.key and salesforce.consumer.secret are required properties used for OAuth2 secure authentication by salesforce.com. Additional information and tutorials are available at salesforce.com.
Note
Change the confluent.topic.bootstrap.servers property to include your broker address(es), and change the confluent.topic.replication.factor to 3 for staging or production use.
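For example, for a three-broker staging cluster (the broker addresses here are illustrative):
confluent.topic.bootstrap.servers=broker1:9092,broker2:9092,broker3:9092
confluent.topic.replication.factor=3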
Run the connector with this configuration.
confluent local load SalesforceCdcSourceConnector -- -d salesforce-cdc-source.properties
Confirm that the connector is in a RUNNING state.
confluent local status SalesforceCdcSourceConnector
REST-based example¶
Use this configuration with distributed workers. Write the following JSON to config.json, configure all of the required values, and use the command below to post the configuration to one of the distributed Connect workers. See the Kafka Connect REST API documentation for more information.
{
"name" : "SalesforceCdcSourceConenctor",
"config" : {
"connector.class" : "io.confluent.salesforce.SalesforceCdcSourceConnector",
"tasks.max" : "1",
"kafka.topic" : "< Required Configuration >",
"salesforce.cdc.name" : "< Required Configuration >",
"salesforce.initial.start" : "all",
"salesforce.consumer.key" : "< Required Configuration >",
"salesforce.consumer.secret" : "< Required Configuration >",
"salesforce.username" : "< Required Configuration >",
"salesforce.password" : "< Required Configuration >",
"salesforce.password.token" : "< Required Configuration >",
"confluent.topic.bootstrap.servers": "localhost:9092",
"confluent.topic.replication.factor": "1",
"confluent.license": " Omit to enable trial mode "
}
}
Note
Change the confluent.topic.bootstrap.servers property to include your broker address(es), and change the confluent.topic.replication.factor to 3 for staging or production use.
Use curl to post a configuration to one of the Kafka Connect workers. Change http://localhost:8083/ to the endpoint of one of your Kafka Connect worker(s).
curl -s -X POST -H 'Content-Type: application/json' --data @config.json http://localhost:8083/connectors
Use the following command to update the configuration of an existing connector.
curl -s -X PUT -H 'Content-Type: application/json' --data @config.json http://localhost:8083/connectors/SalesforceCdcSourceConnector/config
To consume records written by the connector to the configured Kafka topic, run the following command:
kafka-avro-console-consumer --bootstrap-server localhost:9092 --property schema.registry.url=http://localhost:8081 --topic cdc-topic --from-beginning
The leads that were inserted into Salesforce will be present in the Kafka topic.
Record Schema¶
Change Data Capture Record Format¶
The following example shows the JSON document structure of a Salesforce Change Data Capture Event as received by the Salesforce Change Data Capture Connector.
{
"data":{
"schema":"MbZFVarhwdRnx4cf-8DowQ",
"payload":{
"Status":"Open - Not Contacted",
"LastModifiedDate":"2019-08-14T10:59:15.000Z",
"Company":"Confluent1010",
"CleanStatus":"Pending",
"OwnerId":"0052v00000alfOXAAY",
"CreatedById":"0052v00000alfOXAAY",
"IsConverted":false,
"ChangeEventHeader":{
"commitNumber":10663250415860,
"commitUser":"0052v00000alfOXAAY",
"sequenceNumber":1,
"entityName":"Lead",
"changeType":"CREATE",
"changeOrigin":"com/salesforce/api/rest/46.0",
"transactionKey":"00025790-0623-3300-2724-bdf92c57a1e7",
"commitTimestamp":1565780355000,
"recordIds":[
"00Q2v00001V4bqkEAB"
]
},
"IsUnreadByOwner":true,
"CreatedDate":"2019-08-14T10:59:15.000Z",
"LastModifiedById":"0052v00000alfOXAAY",
"Name":{
"LastName":"ConfluentLastName101"
}
},
"event":{
"replayId":1109257
}
},
"channel":"/data/LeadChangeEvent"
}
Kafka Record Value¶
The following is a sample record written to a Kafka topic for a Change Data Capture event.
{
"Id":"00Q2v00001WOn3zEAD",
"ReplayId":"1115950",
"ChangeEventHeader":{
"entityName":"Lead",
"recordIds":["00Q2v00001WOn3zEAD"],
"changeType":"CREATE",
"changeOrigin":"com/salesforce/api/rest/46.0",
"transactionKey":"00097908-d9b2-d840-5501-ae933a58ee4b",
"sequenceNumber":1,
"commitTimestamp":1568628454000,
"commitUser":"0052v00000alfOXAAY",
"commitNumber":10683824131775
},
"LastName":"ConfluentLastName89",
"FirstName":null,
"Salutation":null,
"Name":null,
"Title":null,
"Company":"Confluent89",
"City":null,
"State":null,
"PostalCode":null,
"Country":null,
"Latitude":null,
"Longitude":null,
"GeocodeAccuracy":null,
"Address":null,
"Phone":null,
"MobilePhone":null,
"Fax":null,
"Email":null,
"Website":null,
"LeadSource":null,
"Status":"Open - Not Contacted",
"Industry":null,
"Rating":null,
"AnnualRevenue":null,
"NumberOfEmployees":null,
"OwnerId":"0052v00000alfOXAAY",
"IsConverted":false,
"ConvertedDate":null,
"ConvertedAccountId":null,
"ConvertedContactId":null,
"ConvertedOpportunityId":null,
"IsUnreadByOwner":true,
"CreatedDate":1568628454000,
"CreatedById":"0052v00000alfOXAAY",
"LastModifiedDate":1568628454000,
"LastModifiedById":"0052v00000alfOXAAY",
"Jigsaw":null,
"JigsawContactId":null,
"CleanStatus":"Pending",
"CompanyDunsNumber":null,
"DandbCompanyId":null,
"EmailBouncedReason":null,
"EmailBouncedDate":null,
"IndividualId":null,
"SICCode__c":null,
"ProductInterest__c":null,
"Primary__c":null,
"CurrentGenerators__c":null,
"NumberofLocations__c":null,
"_ObjectType":"LeadChangeEvent",
"_EventType":"MbZFVarhwdRnx4cf-8DowQ"
}
Note
Salesforce record fields that are unaltered in a Change Data Capture event are expected to have a null value when written to the Kafka topic.
Troubleshooting Connector and Task Failures¶
You can use the Kafka Connect REST API to check the status of the connectors and tasks. If a task or connector has failed, the trace field will include a reason and a stack trace.
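For example, assuming a worker listening at localhost:8083 and the connector name used in this quick start:
curl -s http://localhost:8083/connectors/SalesforceCdcSourceConnector/status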
Authorization Failures
The Salesforce connector must authenticate with Salesforce and establish a connection. If a connection fails because of authentication or authorization errors, the connector will stop immediately. These errors require changes in your Salesforce account which may include creating OAuth tokens. Try to rerun your connector after you make the account changes.
Salesforce Error Codes
Whenever the connector or task fails, it captures a message that includes the Salesforce error code and error message. The message may also include a failure reason or suggested fix. See the Salesforce error codes for more detail about each error code.
Repeated Connection Failures
The Salesforce connector uses a long-lived connection to the Salesforce service. Periodically this connection is lost due to network lag, service disruptions, or network issues. When the connector loses its connection, it automatically attempts to reconnect so that the connector can continue working. In some cases, however, the error reported by Salesforce indicates that the connector is unable to continue. In these cases, the connector task fails and includes the Salesforce error and reason in the task failure message.
If the connector stops frequently because of connection timeouts, consider changing the following connector configuration properties and restarting the connector (a combined sketch follows this list):
- connection.timeout defaults to 30 seconds (30000 milliseconds) and dictates the maximum amount of time that the connector should wait for a single connection attempt to succeed. If the initial connection takes longer than this, the connector task will fail.
- request.max.retries.time.ms defaults to 15 minutes (900000 milliseconds) and specifies the maximum amount of time that the connector should continue to retry a failed request. The actual duration of the delay is random and grows exponentially with each retry. If all retries take longer than this value, the connector task will fail. This does not apply to initial connection attempts, but it does apply to subsequent requests to reconnect. Any requests that result in authorization or authentication failures are not retried.
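A minimal sketch of these overrides added to the connector configuration; the doubled values here are illustrative starting points, not recommendations:
connection.timeout=60000
request.max.retries.time.ms=1800000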
If the failure rate is not affected by tuning these properties, it may be necessary to look at the Connect worker logs for errors, warnings, and exceptions. The log messages typically have more detail than the connector status. The log messages preceding failure messages can provide additional context and may help you identify the cause of the failure.
Enabling Debug Logging¶
The Connect worker log configuration controls how much detail is included in the logs. By default, the worker logs include enough detail to identify basic functionality. Enable DEBUG logs in the Connect worker’s log configuration to include more details. This change must be made on each worker and only takes effect upon worker startup. After you change the log configuration as outlined below on each Connect worker, restart all of the Connect workers. A rolling restart can be used if necessary.
Note
Trace level logging is verbose and contains many more details, and may be useful for solving certain failures. Trace level logging is enabled in the same way as debug level logging, except TRACE is used instead of DEBUG.
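For example, to enable TRACE on just this connector in etc/kafka/connect-log4j.properties, mirror the DEBUG line shown below:
log4j.logger.io.confluent.salesforce=TRACE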
On-Premises Installation
For local or on-premises installations of Confluent Platform, the etc/kafka/connect-log4j.properties file defines the logging configuration of the Connect worker process. To enable DEBUG on just the Salesforce connector, modify the etc/kafka/connect-log4j.properties file to include the following line:
log4j.logger.io.confluent.salesforce=DEBUG
To enable DEBUG on all of the Connect worker’s code, including all connectors, change the log4j.rootLogger= line to use DEBUG instead of INFO. For example, the default log configuration for Connect includes this line:
log4j.rootLogger=INFO, stdout
Change this line to the following to enable DEBUG on all of the Connect worker code:
log4j.rootLogger=DEBUG, stdout
Docker
For Docker containers, the logging configuration is set through environment variables. To enable DEBUG on just the Salesforce connector, use the following environment variable when starting your Confluent Platform Connect container:
CONNECT_LOG4J_LOGGERS="io.confluent.salesforce=DEBUG"
To enable DEBUG log messages on all Connect worker code, including all connectors, use the following environment variable when starting your Confluent Platform Connect container:
CONNECT_LOG4J_ROOT_LOGLEVEL=DEBUG
The value of the CONNECT_LOG4J_LOGGERS environment variable is a comma-separated list of logger=level pairs. For example, the following enables DEBUG on both the Salesforce connector and the Connect framework:
CONNECT_LOG4J_LOGGERS="io.confluent.salesforce=DEBUG,org.apache.kafka.connect=DEBUG"