Salesforce Bulk API Sink Connector Configuration Properties
To use this connector, specify the name of the connector class in the
connector.class configuration property.
connector.class=io.confluent.connect.salesforce.SalesforceBulkApiSinkConnector
Connector-specific configuration properties are described below.
Authentication
salesforce.username
The Salesforce username the connector will use.
Type: string
Valid Values: non-empty string
Importance: high
salesforce.password
The Salesforce password the connector will use.
Type: password
Importance: high
salesforce.password.token
The Salesforce security token associated with the username.
Type: password
Importance: high
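For example, a minimal set of authentication properties might look like the following sketch (all values are placeholders, not real credentials):
# Placeholder credentials; replace with your org's values.
salesforce.username=user@example.com
salesforce.password=<password>
salesforce.password.token=<security-token>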
Salesforce Instance
salesforce.instance
The URL of the Salesforce endpoint to use. This directs the connector to use the endpoint specified in the authentication response.
Type: string
Default: https://login.salesforce.com
Valid Values: URI with one of these schemes: https, http
Importance: high
salesforce.version
The version of the Salesforce API to use.
Type: string
Default: 48.0
Importance: high
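As an illustration, to authenticate against a sandbox org you would typically point salesforce.instance at the standard Salesforce sandbox login URL (shown below; adjust the version to match your org):
# Sandbox login endpoint; production orgs use the default https://login.salesforce.com.
salesforce.instance=https://test.salesforce.com
salesforce.version=48.0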
Salesforce SObject
salesforce.object
The Salesforce object name.
Type: string
Valid Values: non-empty string
Importance: high
salesforce.sink.object.operation
The Salesforce sink operation to perform on the SObject. One of the following: INSERT, UPDATE, UPSERT, DELETE. The default is INSERT. This setting takes effect only if override.event.type is set to true.
Type: string
Default: INSERT
Valid Values: UPSERT, INSERT, UPDATE, DELETE
Importance: low
override.event.type
A flag indicating that the EventType (CREATE, UPDATE, DELETE) of the Kafka SObject source record is overridden to use the operation specified in the salesforce.sink.object.operation configuration setting.
Type: boolean
Default: false
Importance: low
salesforce.custom.id.field.name
The name of a custom external ID field in the SObject used to structure REST API calls for INSERT and UPSERT operations. Applies when salesforce.use.custom.id.field=true.
Type: string
Default: null
Importance: low
salesforce.use.custom.id.field
A flag indicating whether to use salesforce.custom.id.field.name for INSERT/UPSERT sink connector operations.
Type: boolean
Default: false
Importance: low
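As a sketch, the following combination performs UPSERT operations keyed on a hypothetical custom external ID field named ExternalId__c (the field name is illustrative; use a field defined on your SObject):
# ExternalId__c is a hypothetical custom field, shown for illustration only.
override.event.type=true
salesforce.sink.object.operation=UPSERT
salesforce.use.custom.id.field=true
salesforce.custom.id.field.name=ExternalId__c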
salesforce.ignore.fields
A comma-separated list of fields from the source Kafka record to ignore when pushing a record into Salesforce.
Type: list
Default: ""
Importance: low
salesforce.ignore.reference.fields
A flag that prevents reference-type fields from being updated or inserted in Salesforce SObjects.
Type: boolean
Default: false
Importance: low
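For example, to drop fields you do not want pushed to Salesforce, such as audit fields, and to skip reference-type fields entirely (the field names shown are illustrative):
# CreatedDate and LastModifiedDate are shown only as example field names.
salesforce.ignore.fields=CreatedDate,LastModifiedDate
salesforce.ignore.reference.fields=true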
Proxy
http.proxy
The HTTP(S) proxy host and port the connector should use when communicating with Salesforce. This defaults to a blank string, which is equivalent to not using a proxy.
Type: string
Default: null
Valid Values: Formatted as <host>:<port>, where <host> is a valid hostname or IP address, and <port> is a valid port number
Importance: low
http.proxy.auth.scheme
The authentication scheme used when authenticating the connector with the HTTP(S) proxy. Basic and NTLM schemes are supported.
Type: string
Default: NONE
Valid Values: one of [NTLM, NONE, BASIC]
Importance: low
http.proxy.user
The proxy username used to connect to Salesforce.
Type: string
Default: ""
Importance: low
http.proxy.password
The proxy password used to connect to Salesforce.
Type: password
Default: [hidden]
Importance: low
http.proxy.auth.ntlm.domain
The domain to authenticate within when the NTLM scheme is used.
Type: string
Default: null
Importance: low
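A hypothetical proxy configuration using the Basic scheme might look like the following (host, port, and credentials are placeholders):
# Placeholder proxy host and credentials.
http.proxy=proxy.example.com:8080
http.proxy.auth.scheme=BASIC
http.proxy.user=proxy-user
http.proxy.password=<proxy-password>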
Error Handling
behavior.on.api.errors
This property sets the error handling behavior for REST API calls to Salesforce. Must be set to fail, ignore, or log: fail stops the connector, ignore continues to the next record, and log logs the error and continues to the next record.
Type: string
Default: fail
Valid Values: one of [fail, log, ignore]
Importance: low
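For example, to keep the connector running while still recording failed REST API calls, you might set:
behavior.on.api.errors=log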
Connect Reporter
For more information about Reporter, see Connect Reporter.
reporter.result.topic.name
The name of the topic to produce records to after successfully processing a sink record. Use ${connector} within the pattern to specify the current connector name. Leave blank to disable this reporting behavior.
Type: string
Default: ${connector}-success
Valid Values: After replacing ${connector}, the value must be a valid topic name containing 1-249 ASCII alphanumeric, +, ., _ and - characters.
Importance: medium
reporter.result.topic.replication.factorThe replication factor of the result topic when it is automatically created by this connector. This determines how many broker failures can be tolerated before data loss occurs. This should be 1 in development environments and ALWAYS at least 3 in production environments.
Type: short
Default: 3
Valid Values: [1,…]
Importance: medium
reporter.result.topic.partitionsThe number of partitions in the result topic when it is automatically created by this connector. This number of partitions should be the same as the number of input partitions to handle the potential throughput.
Type: int
Default: 1
Valid Values: [1,…]
Importance: medium
reporter.error.topic.name
The name of the topic to produce records to after each unsuccessful record sink attempt. Use ${connector} within the pattern to specify the current connector name. Leave blank to disable error reporting behavior.
Type: string
Default: ${connector}-error
Valid Values: After replacing ${connector}, the value must be a valid topic name containing 1-249 ASCII alphanumeric, +, ., _ and - characters.
Importance: medium
reporter.error.topic.replication.factorThe replication factor of the error topic when it is automatically created by this connector. This determines how many broker failures can be tolerated before data loss occurs. This should be 1 in development environments and ALWAYS at least 3 in production environments.
Type: short
Default: 3
Valid Values: [1,…]
Importance: medium
reporter.error.topic.partitionsThe number of partitions in the error topic when it is automatically created by this connector. This number of partitions should be the same as the number of input partitions in order to handle the potential throughput.
Type: int
Default: 1
Valid Values: [1,…]
Importance: medium
reporter.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers regardless of which bootstrap servers are specified here; this list only impacts the initial hosts used to discover the full set of servers. It should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), the list does not need to contain the full set of servers. However, you may want to include more than one in case a server is down.
Type: list
Valid Values: Non-empty list
Importance: high
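A minimal Reporter sketch for a single-broker development cluster might look like the following (the bootstrap server is a placeholder, and a replication factor of 1 is a development-only assumption):
# Development-only settings; use a replication factor of at least 3 in production.
reporter.bootstrap.servers=localhost:9092
reporter.result.topic.name=${connector}-success
reporter.error.topic.name=${connector}-error
reporter.result.topic.replication.factor=1
reporter.error.topic.replication.factor=1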
Formatter
reporter.result.topic.key.format
The format in which the result report key is serialized.
Type: string
Default: json
Valid Values: one of [string, json]
Importance: medium
Dependents: reporter.result.topic.key.format.schemas.enable, reporter.result.topic.key.format.schemas.cache.size
reporter.result.topic.value.format
The format in which the result report value is serialized.
Type: string
Default: json
Valid Values: one of [string, json]
Importance: medium
Dependents: reporter.result.topic.value.format.schemas.cache.size, reporter.result.topic.value.format.schemas.enable
reporter.error.topic.key.format
The format in which the error report key is serialized.
Type: string
Default: json
Valid Values: one of [string, json]
Importance: medium
Dependents: reporter.error.topic.key.format.schemas.cache.size, reporter.error.topic.key.format.schemas.enable
reporter.error.topic.value.format
The format in which the error report value is serialized.
Type: string
Default: json
Valid Values: one of [string, json]
Importance: medium
Dependents: reporter.error.topic.value.format.schemas.cache.size, reporter.error.topic.value.format.schemas.enable
JSON Formatter
reporter.result.topic.key.format.schemas.cache.size
The maximum number of schemas that can be cached in the JSON formatter.
Type: int
Default: 128
Valid Values: [0,…,2048]
Importance: medium
reporter.result.topic.key.format.schemas.enable
Include schemas within each of the serialized values and keys.
Type: boolean
Default: false
Importance: medium
reporter.result.topic.value.format.schemas.cache.size
The maximum number of schemas that can be cached in the JSON formatter.
Type: int
Default: 128
Valid Values: [0,…,2048]
Importance: medium
reporter.result.topic.value.format.schemas.enable
Include schemas within each of the serialized values and keys.
Type: boolean
Default: false
Importance: medium
reporter.error.topic.key.format.schemas.cache.size
The maximum number of schemas that can be cached in the JSON formatter.
Type: int
Default: 128
Valid Values: [0,…,2048]
Importance: medium
reporter.error.topic.key.format.schemas.enable
Include schemas within each of the serialized values and keys.
Type: boolean
Default: false
Importance: medium
reporter.error.topic.value.format.schemas.cache.size
The maximum number of schemas that can be cached in the JSON formatter.
Type: int
Default: 128
Valid Values: [0,…,2048]
Importance: medium
reporter.error.topic.value.format.schemas.enable
Include schemas within each of the serialized values and keys.
Type: boolean
Default: false
Importance: medium
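For example, to embed schemas in the serialized error report values, at the cost of larger records, you might set:
reporter.error.topic.value.format=json
reporter.error.topic.value.format.schemas.enable=true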
CSFLE configuration
csfle.enabled
Accepts a boolean value. CSFLE is enabled for the connector if csfle.enabled is set to true.
Type: boolean
Default: false
auto.register.schemas
Specifies if the Serializer should attempt to register the Schema with Schema Registry.
Type: boolean
Default: true
Importance: medium
use.latest.version
Only applies when auto.register.schemas is set to false. If auto.register.schemas is set to false and use.latest.version is set to true, then instead of deriving a schema for the object passed to the client for serialization, Schema Registry uses the latest version of the schema in the subject for serialization.
Type: boolean
Default: true
Importance: medium
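As a sketch, enabling CSFLE while relying on schemas that are assumed to be pre-registered in Schema Registry might combine these properties as follows:
# Assumes the schemas already exist in Schema Registry.
csfle.enabled=true
auto.register.schemas=false
use.latest.version=true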
Confluent Licensing
confluent.license
Confluent will issue a license key to each subscriber. The license key is a short snippet of text that you can copy and paste. Without the license key, you can use the connector for a 30-day trial period. If you are a subscriber, contact Confluent Support for more information.
Type: string
Default: ""
Valid Values: Confluent Platform license
Importance: high
confluent.topic.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. The list should be in the form host1:port1,host2:port2,.... These servers are used only for the initial connection to discover the full cluster membership, which may change dynamically, so this list doesn't need to contain the full set of servers. You may want more than one in case a server is down.
Type: list
Importance: high
confluent.topic
The name of the Kafka topic used for Confluent Platform configuration, including licensing information.
Type: string
Default: _confluent-command
Importance: low
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with fewer than 3 brokers, you must set this to the number of brokers (often 1).
Type: int
Default: 3
Importance: low
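For example, a development setup against a single broker might use the following (the license key and bootstrap server are placeholders; a replication factor of 1 is appropriate only outside production):
confluent.license=<license-key>
confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1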
Confluent license properties
You can put license-related properties in the connector configuration or, starting with Confluent Platform version 6.0, in the Connect worker configuration instead of in each connector configuration.
This connector is proprietary and requires a license. The license information is stored in the _confluent-command
topic. If the broker requires SSL for connections, you must include the security-related confluent.topic.* properties
as described below.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments. If you are a subscriber, contact Confluent Support for more information.
Type: string
Default: ""
Valid Values: Confluent Platform license
Importance: high
confluent.topic.ssl.truststore.location
The location of the trust store file.
Type: string
Default: null
Importance: high
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
Type: password
Default: null
Importance: high
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for the client and can be used for two-way client authentication.
Type: string
Default: null
Importance: high
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.
Type: password
Default: null
Importance: high
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for the client.
Type: password
Default: null
Importance: high
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
Type: string
Default: PLAINTEXT
Importance: medium
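A sketch for a licensing cluster that requires SSL might look like the following (the truststore path and password are placeholders):
# Placeholder truststore path and password.
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/var/private/ssl/truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>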
License topic configuration
A Confluent enterprise license is stored in the _confluent-command topic.
This topic is created by default and contains the license that corresponds to
the license key supplied through the confluent.license property. No public
keys are stored in Kafka topics.
The following describes how the default _confluent-command topic is
generated under different scenarios:
A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing.
You can change the name of the _confluent-command topic using the
confluent.topic property (for instance, if your environment has strict
naming conventions). The example below shows this change and the configured
Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that
you can use for development and testing. For a production environment, you add
the normal producer, consumer, and topic configuration properties to the
connector properties, prefixed with confluent.topic..
Override Default Configuration Properties
You can override the replication factor using
confluent.topic.replication.factor. For example, when using a Kafka cluster
as a destination with less than three brokers (for development and testing) you
should set the confluent.topic.replication.factor property to 1.
You can override producer-specific properties by using the
producer.override.* prefix (for source connectors) and consumer-specific
properties by using the consumer.override.* prefix (for sink connectors).
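For example, because this is a sink connector, a consumer-specific override might look like the following (max.poll.records is a standard Kafka consumer property, shown here purely for illustration; overrides also depend on the worker's client override policy):
consumer.override.max.poll.records=100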
You can use the defaults or customize the other properties as well. For example,
the confluent.topic.client.id property defaults to the name of the connector
with -licensing suffix. You can specify the configuration settings for
brokers that require SSL or SASL for client connections using this prefix.
You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.