Salesforce SObject Sink Connector Configuration Properties
The Salesforce SObject Sink connector can be configured using a variety of configuration properties.
Note
These are properties for the self-managed connector. If you are using Confluent Cloud, see Salesforce SObject Sink Connector for Confluent Cloud.
Connection
salesforce.consumer.key
The consumer key for the OAuth application.
Type: string
Importance: high
salesforce.consumer.secret
The consumer secret for the OAuth application.
Type: password
Importance: high
salesforce.password
The Salesforce password the connector should use.
Type: password
Importance: high
salesforce.username
The Salesforce username the connector should use.
Type: string
Importance: high
salesforce.jwt.keystore.password
The keystore password used to enable the OAuth JWT bearer token flow.
Type: password
Default: null
Importance: medium
salesforce.jwt.keystore.path
Path to the keystore containing the key to use in the OAuth JWT bearer token flow.
Type: string
Default: null
Importance: medium
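For example, a sketch of a JWT bearer token flow configuration, assuming placeholder credentials and a hypothetical keystore path:
# Placeholder values and a hypothetical keystore path; adjust for your environment
salesforce.consumer.key=<your-consumer-key>
salesforce.username=integration-user@example.com
salesforce.jwt.keystore.path=/etc/kafka-connect/secrets/salesforce.jks
salesforce.jwt.keystore.password=<keystore-password>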
salesforce.instance
The URL of the Salesforce endpoint to use. When this property is left at its default, the connector uses the endpoint specified in the authentication response.
Type: string
Default: https://login.salesforce.com
Valid Values: A valid URL with a scheme of https or http
Importance: high
salesforce.password.token
The Salesforce security token associated with the username.
Type: password
Default: null
Importance: high
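For example, a minimal username/password connection sketch (all values are placeholders):
# Placeholder credentials for illustration only
salesforce.consumer.key=<your-consumer-key>
salesforce.consumer.secret=<your-consumer-secret>
salesforce.username=integration-user@example.com
salesforce.password=<your-password>
salesforce.password.token=<your-security-token>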
http.proxy
The HTTP(S) proxy host and port the connector should use when talking to Salesforce. This defaults to a blank string, which corresponds to not using a proxy.
Type: string
Default: null
Valid Values: Of the form <host>:<port>, where <host> is a valid hostname or IP address and <port> is a valid port number
Importance: medium
http.proxy.auth.scheme
The authentication scheme to use when authenticating the connector to the HTTP(S) proxy. Basic and NTLM schemes are supported.
Type: string
Default: NONE
Valid Values: One of NONE, NTLM, BASIC
Importance: medium
http.proxy.user
The HTTP(S) proxy username.
Type: string
Default: NONE
Importance: medium
http.proxy.password
The HTTP(S) proxy password.
Type: password
Default: null
Importance: medium
http.proxy.auth.ntlm.domain
The domain to authenticate within when the NTLM scheme is used.
Type: string
Default: null
Importance: medium
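For example, a sketch that routes Salesforce traffic through a proxy requiring Basic authentication (host, port, and credentials are placeholders):
# Hypothetical proxy endpoint and credentials
http.proxy=proxy.example.com:8080
http.proxy.auth.scheme=BASIC
http.proxy.user=<proxy-user>
http.proxy.password=<proxy-password>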
connection.timeout
The amount of time, in milliseconds, to wait while connecting to the Salesforce streaming endpoint.
Type: long
Default: 30000
Valid Values: [5000,…,600000]
Importance: low
curl.logging
If enabled, the connector logs the equivalent curl commands. This is a security risk because your authorization header is displayed in the log file. Use at your own risk.
Type: boolean
Default: false
Importance: low
request.max.retries.time.ms
The maximum time, in milliseconds, that the connector continues to retry requests to Salesforce that fail because of network issues (after authentication has succeeded). The backoff period for each retry attempt uses a randomization function that grows exponentially. If the total time spent retrying a request exceeds this duration (15 minutes by default), retries stop and the request fails, which will likely result in task failure.
Type: long
Default: 900000
Valid Values: [1,…]
Importance: low
salesforce.version
The version of the Salesforce API to use.
Type: string
Default: latest
Valid Values: Matches regex ^(latest|[\d.]+)$
Importance: low
Salesforce SObject Sink
salesforce.object
The Salesforce SObject to perform the sink operation on.
Type: string
Importance: high
topics
One or more Kafka topics to use as data sources for SObjects or events.
Type: string
Importance: high
behavior.on.api.errors
Error handling behavior for REST API calls to Salesforce. Must be one of fail, ignore, or log. fail stops the connector, ignore continues to the next record, and log logs the error and continues to the next record.
Type: string
Default: fail
Valid Values: Matches regex( ^(log|ignore|fail)$ )
Importance: low
override.event.type
A flag to indicate that the Kafka SObject source record EventType (create, update, delete) is overridden to use the operation specified in the salesforce.sink.object.operation configuration setting.
Type: boolean
Default: false
Importance: low
Dependents:
salesforce.sink.object.operation
salesforce.custom.id.field.name
Name of a custom external ID field in the SObject used to structure REST API calls for insert, upsert, delete, and update operations. When salesforce.use.custom.id.field=true, the operations substitute the value of the id field of source records in Kafka into the specified custom external ID field of sink records. This allows the sink connector to match records for operations without having to specify the id field in Salesforce, which is auto-generated.
Type: string
Default: null
Importance: low
salesforce.ignore.fields
Comma-separated list of fields from the source Kafka record to ignore when pushing a record into Salesforce.
Type: string
Default: “”
Importance: low
salesforce.ignore.reference.fields
Flag to prevent reference type fields from being updated or inserted in Salesforce SObjects.
Type: boolean
Default: false
Importance: low
salesforce.sink.object.operation
The Salesforce sink operation to perform on the SObject. One of insert, update, upsert, or delete. The default is insert. This setting takes effect only if override.event.type is true.
Type: string
Default: insert
Valid Values: Matches regex( ^(insert|update|upsert|delete)$ )
Importance: low
salesforce.use.custom.id.field
Flag to indicate whether to use the salesforce.custom.id.field.name for all sink connector operations.
Type: boolean
Default: false
Importance: low
Dependents:
salesforce.custom.id.field.name
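For example, a sketch that forces every sink record to be upserted against a custom external ID field (ExternalId__c is a hypothetical field name):
# ExternalId__c is a placeholder custom field name
override.event.type=true
salesforce.sink.object.operation=upsert
salesforce.use.custom.id.field=true
salesforce.custom.id.field.name=ExternalId__c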
topics.regex
A Java regular expression for matching data source topics.
Type: string
Default: null
Importance: low
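Putting these together with the Connection properties above, a minimal sink configuration sketch might look like the following (verify the connector class name against your installed version; the topic and SObject names are examples):
# Connector class and names below are illustrative
name=salesforce-sobject-sink
connector.class=io.confluent.salesforce.SalesforceSObjectSinkConnector
tasks.max=1
topics=sfdc-leads
salesforce.object=Lead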
Connect Reporter
For more information about Reporter, see Connect Reporter.
reporter.result.topic.name
The name of the topic to produce records to after successfully processing a sink record. Use ${connector} within the pattern to specify the current connector name. Leave blank to disable result reporting.
Type: string
Default: ${connector}-success
Valid Values: After replacing ${connector}, must be a valid topic name containing 1-249 ASCII alphanumeric, +, ., _, and - characters
Importance: medium
reporter.result.topic.replication.factor
The replication factor of the result topic when it is automatically created by this connector. This determines how many broker failures can be tolerated before data loss occurs. This should be 1 in development environments and ALWAYS at least 3 in production environments.
Type: short
Default: 3
Valid Values: [1,…]
Importance: medium
reporter.result.topic.partitions
The number of partitions in the result topic when it is automatically created by this connector. This should match the number of input partitions to handle the potential throughput.
Type: int
Default: 1
Valid Values: [1,…]
Importance: medium
reporter.error.topic.name
The name of the topic to produce records to after each unsuccessful record sink attempt. Use ${connector} within the pattern to specify the current connector name. Leave blank to disable error reporting.
Type: string
Default: ${connector}-error
Valid Values: After replacing ${connector}, must be a valid topic name containing 1-249 ASCII alphanumeric, +, ., _, and - characters
Importance: medium
reporter.error.topic.replication.factor
The replication factor of the error topic when it is automatically created by this connector. This determines how many broker failures can be tolerated before data loss occurs. This should be 1 in development environments and ALWAYS at least 3 in production environments.
Type: short
Default: 3
Valid Values: [1,…]
Importance: medium
reporter.error.topic.partitions
The number of partitions in the error topic when it is automatically created by this connector. This should match the number of input partitions in order to handle the potential throughput.
Type: int
Default: 1
Valid Values: [1,…]
Importance: medium
reporter.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client uses all servers regardless of which bootstrap servers are specified here; this list only impacts the initial hosts used to discover the full set of servers. The list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), it does not need to contain the full set of servers. However, you may want to include more than one in case a server is down.
Type: list
Valid Values: Non-empty list
Importance: high
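For example, a sketch that writes success and error reports to per-connector topics on a local cluster (the server address and replication factors are development values):
# Development values; use replication factor 3 in production
reporter.bootstrap.servers=localhost:9092
reporter.result.topic.name=${connector}-success
reporter.error.topic.name=${connector}-error
reporter.result.topic.replication.factor=1
reporter.error.topic.replication.factor=1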
Formatter
reporter.result.topic.key.format
The format in which the result report key is serialized.
Type: string
Default: json
Valid Values: one of [string, json]
Importance: medium
Dependents:
reporter.result.topic.key.format.schemas.enable, reporter.result.topic.key.format.schemas.cache.size
reporter.result.topic.value.format
The format in which the result report value is serialized.
Type: string
Default: json
Valid Values: one of [string, json]
Importance: medium
Dependents:
reporter.result.topic.value.format.schemas.cache.size, reporter.result.topic.value.format.schemas.enable
reporter.error.topic.key.format
The format in which the error report key is serialized.
Type: string
Default: json
Valid Values: one of [string, json]
Importance: medium
Dependents:
reporter.error.topic.key.format.schemas.cache.size, reporter.error.topic.key.format.schemas.enable
reporter.error.topic.value.format
The format in which the error report value is serialized.
Type: string
Default: json
Valid Values: one of [string, json]
Importance: medium
Dependents:
reporter.error.topic.value.format.schemas.cache.size, reporter.error.topic.value.format.schemas.enable
JSON Formatter
reporter.result.topic.key.format.schemas.cache.size
The maximum number of schemas that can be cached in the JSON formatter.
Type: int
Default: 128
Valid Values: [0,…,2048]
Importance: medium
reporter.result.topic.key.format.schemas.enable
Include schemas within each of the serialized values and keys.
Type: boolean
Default: false
Importance: medium
reporter.result.topic.value.format.schemas.cache.size
The maximum number of schemas that can be cached in the JSON formatter.
Type: int
Default: 128
Valid Values: [0,…,2048]
Importance: medium
reporter.result.topic.value.format.schemas.enable
Include schemas within each of the serialized values and keys.
Type: boolean
Default: false
Importance: medium
reporter.error.topic.key.format.schemas.cache.size
The maximum number of schemas that can be cached in the JSON formatter.
Type: int
Default: 128
Valid Values: [0,…,2048]
Importance: medium
reporter.error.topic.key.format.schemas.enable
Include schemas within each of the serialized values and keys.
Type: boolean
Default: false
Importance: medium
reporter.error.topic.value.format.schemas.cache.size
The maximum number of schemas that can be cached in the JSON formatter.
Type: int
Default: 128
Valid Values: [0,…,2048]
Importance: medium
reporter.error.topic.value.format.schemas.enable
Include schemas within each of the serialized values and keys.
Type: boolean
Default: false
Importance: medium
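For example, a sketch that serializes error report values as JSON with embedded schemas, keeping the default cache size:
# schemas.enable embeds the schema in each serialized value
reporter.error.topic.value.format=json
reporter.error.topic.value.format.schemas.enable=true
reporter.error.topic.value.format.schemas.cache.size=128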
CSFLE configuration
csfle.enabled
Accepts a boolean value. CSFLE is enabled for the connector if csfle.enabled is set to true.
Type: boolean
Default: false
auto.register.schemas
Specifies if the Serializer should attempt to register the Schema with Schema Registry.
Type: boolean
Default: true
Importance: medium
use.latest.version
Only applies when auto.register.schemas is set to false. If auto.register.schemas is set to false and use.latest.version is set to true, then instead of deriving a schema for the object passed to the client for serialization, Schema Registry uses the latest version of the schema in the subject for serialization.
Type: boolean
Default: true
Importance: medium
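For example, a sketch that enables CSFLE and pins serialization to the latest registered schema version (your environment may require additional Schema Registry settings):
# Assumes the subject schema is already registered in Schema Registry
csfle.enabled=true
auto.register.schemas=false
use.latest.version=true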
Confluent Platform license
confluent.topic.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the following form:
host1:port1,host2:port2,...
Since the servers are used only for the initial connection to discover the full cluster membership (which may change dynamically), the list does not need to contain the full set of servers. You may want more than one, though, in case a server is down.
Type: list
Importance: high
confluent.topic
Name of the Kafka topic used for Confluent Platform configuration, including licensing information.
Type: string
Default: _confluent-command
Importance: low
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with fewer than 3 brokers, you must set this to the number of brokers (often 1).
Type: int
Default: 3
Importance: low
Confluent license properties
You can put license-related properties in the connector configuration or, starting with Confluent Platform version 6.0, in the Connect worker configuration instead of in each connector configuration.
This connector is proprietary and requires a license. The license information is stored in the _confluent-command
topic. If the broker requires SSL for connections, you must include the security-related confluent.topic.* properties
as described below.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments. If you are a subscriber, contact Confluent Support for more information.
Type: string
Type: string
Default: “”
Valid Values: Confluent Platform license
Importance: high
confluent.topic.ssl.truststore.location
The location of the trust store file.
Type: string
Default: null
Importance: high
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
Type: password
Default: null
Importance: high
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for the client and can be used for two-way authentication for the client.
Type: string
Default: null
Importance: high
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.
Type: password
Default: null
Importance: high
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for the client.
Type: password
Default: null
Importance: high
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
Type: string
Default: PLAINTEXT
Importance: medium
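For example, if the licensing cluster requires SSL, a sketch of the relevant properties (the truststore location and password are placeholders):
# Placeholder truststore path and password
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/etc/kafka/secrets/truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>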
License topic configuration
A Confluent enterprise license is stored in the _confluent-command topic.
This topic is created by default and contains the license that corresponds to
the license key supplied through the confluent.license property. No public
keys are stored in Kafka topics.
The following describes how the default _confluent-command topic is
generated under different scenarios:
A 30-day trial license is automatically generated for the
_confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing.
You can change the name of the _confluent-command topic using the
confluent.topic property (for instance, if your environment has strict
naming conventions). The example below shows this change and the configured
Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that
you can use for development and testing. For a production environment, you add
the normal producer, consumer, and topic configuration properties to the
connector properties, prefixed with confluent.topic..
License topic ACLs
The _confluent-command topic contains the license that corresponds to the
license key supplied through the confluent.license property. It is created
by default. Connectors that access this topic require the following ACLs
configured:
CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.
DESCRIBE, READ, and WRITE on the _confluent-command topic.
Important
You can also use DESCRIBE and READ without WRITE to restrict access to read-only for license topic ACLs. If a topic exists, the LicenseManager will not try to create the topic.
You can provide access either individually for each principal that will
use the license or use a wildcard entry to
allow all clients. The following examples show commands that you can use to
configure ACLs for the resource cluster and _confluent-command topic.
Set a CREATE and DESCRIBE ACL on the resource cluster:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation CREATE --operation DESCRIBE --cluster
Set a DESCRIBE, READ, and WRITE ACL on the _confluent-command topic:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
Override Default Configuration Properties
You can override the replication factor using confluent.topic.replication.factor. For example, when using a Kafka cluster as a destination with fewer than three brokers (for development and testing), you should set the confluent.topic.replication.factor property to 1.
You can override producer-specific properties by using the
producer.override.* prefix (for source connectors) and consumer-specific
properties by using the consumer.override.* prefix (for sink connectors).
You can use the defaults or customize the other properties as well. For example, the confluent.topic.client.id property defaults to the name of the connector with a -licensing suffix. You can specify the configuration settings for brokers that require SSL or SASL for client connections using this prefix.
You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.
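For example, a development sketch that lowers the license topic replication factor and overrides one consumer property (the overridden setting is shown purely as an illustration):
# max.poll.records is just an example consumer override
confluent.topic.replication.factor=1
consumer.override.max.poll.records=100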