SFTP Source Connector Configuration Properties
General
input.file.parser.format
Parser used to parse fetched files from the SFTP directory.
Importance: high
Type: string
Default: JSON
Valid Values: BINARY, CSV, SCHEMALESS_JSON, JSON
kafka.topic
The Kafka topic to write the data to.
Importance: high
Type: string
batch.size
The number of records that should be returned with each batch.
Importance: low
Type: int
Default Value: 1000
empty.poll.wait.ms
The amount of time to wait if a poll returns an empty list of records.
Importance: low
Type: long
Default Value: 250
Valid values: [1,…,9223372036854775807]
sftp.max.record.size
The maximum size of a record, in bytes, that the connector accepts. If the size of a record exceeds this limit, an error occurs and is handled according to the cleanup policy. The default is 0, which means no limit.
Importance: low
Type: long
Default Value: 0
Valid values: [0,…,9223372036854775807]
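Taken together, a minimal sketch of these general properties in a connector configuration (the parser format and topic name are hypothetical choices; the numeric values are the defaults):
input.file.parser.format=CSV
kafka.topic=sftp-records
batch.size=1000
empty.poll.wait.ms=250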
Connection
sftp.host
SFTP host to connect to.
Type: string
Default: localhost
Importance: high
sftp.port
Port number of the SFTP server.
Type: int
Default: 22
Importance: medium
sftp.username
Username for the SFTP server.
Type: string
Default: foo
Importance: high
sftp.password
Password for the SFTP server. Can be left empty if the TLS properties are configured.
Type: string
Default: pass
Importance: high
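To sanity-check these connection values outside the connector, you can try the same host, port, and user with the OpenSSH sftp client (the values below are the documented defaults, not a recommendation):
sftp -P 22 foo@localhost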
Security
tls.private.key
Private key that will be used for public-key authentication. When authenticating over SSH:
Use a non-empty passphrase.
Replace private key newlines with '\r\n'. For example:
RSA_PRIVATE_KEY=$(awk '{printf "%s\\r\\n", $0}' ssh_host_rsa_key)
Type: password
Default: [hidden]
Importance: low
tls.public.key
Public key that will be used to decrypt the private key if the given private key is encrypted.
Type: password
Default: [hidden]
Importance: low
tls.passphrase
Passphrase that will be used to decrypt the private key if the given private key is encrypted.
Type: password
Default: [hidden]
Importance: low
tls.pemfile
Path to the PEM file.
Type: string
Default: ""
Importance: low
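A hedged sketch of feeding the reformatted key into a connector properties file from a shell; the output file name and passphrase are hypothetical, and the variable expansion happens in the shell, not in Connect:
RSA_PRIVATE_KEY=$(awk '{printf "%s\\r\\n", $0}' ssh_host_rsa_key)
cat > sftp-source.properties <<EOF
tls.private.key=${RSA_PRIVATE_KEY}
tls.passphrase=changeit
EOF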
Kerberos
If Kerberos is enabled, the connector assumes the /etc/krb5.conf file has been correctly configured and points to a KDC that can issue tickets for SFTP. The connector refreshes its TGT when it expires.
To test whether the krb5.conf file and the keytab are configured correctly, use the command kinit -k -t /path/to/user.keytab username.
The connector first looks for a cached ticket on the system; if one does not exist, it uses the configured keytab and principal. Note that if a ticket cached on the system belongs to a different application, the cache may need to be cleared temporarily when starting the connector.
kerberos.user.principal
The principal to use when connecting to SFTP with Kerberos. The sftp.username is still required. Format: username@REALM.
Type: string
Default: ""
Importance: low
kerberos.keytab.path
The path to the keytab file for the SFTP connector principal. This keytab file should only be readable by the connector.
Type: string
Default: ""
Importance: low
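A minimal sketch, with a hypothetical principal and keytab path (note that sftp.username is still required):
sftp.username=connect
kerberos.user.principal=connect@EXAMPLE.COM
kerberos.keytab.path=/etc/security/keytabs/connect.keytab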
Proxy
sftp.proxy.url
Proxy URL for the SFTP connection.
Type: string
Default: ""
Importance: low
proxy.username
Proxy username for the SFTP server, if a proxy is being used.
Type: string
Default: null
Importance: low
proxy.password
Proxy password for the SFTP server, if a proxy is being used.
Type: string
Default: null
Importance: low
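A sketch with hypothetical proxy values (the URL format here is an assumption; check the scheme and port your proxy expects):
sftp.proxy.url=http://proxy.example.com:3128
proxy.username=proxyuser
proxy.password=proxypass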
Auto topic creation
For more information about Auto topic creation, see Configuring Auto Topic Creation for Source Connectors.
Configuration properties accept regular expressions (regex) that are defined as Java regex.
topic.creation.groups
A list of group aliases that are used to define per-group topic configurations for matching topics. A default group always exists and matches all topics.
Type: List of String types
Default: empty
Possible Values: The values of this property refer to any additional groups. A default group is always defined for topic configurations.
topic.creation.$alias.replication.factor
The replication factor for new topics created by the connector. This value must not be larger than the number of brokers in the Kafka cluster. If this value is larger than the number of Kafka brokers, an error occurs when the connector attempts to create a topic. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.
Type: int
Default: n/a
Possible Values: >= 1 for a specific valid value, or -1 to use the Kafka broker's default value.
topic.creation.$alias.partitions
The number of topic partitions created by this connector. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.
Type: int
Default: n/a
Possible Values: >= 1 for a specific valid value, or -1 to use the Kafka broker's default value.
topic.creation.$alias.include
A list of strings that represent regular expressions that match topic names. This list is used to include topics with matching values, and apply this group's specific configuration to the matching topics. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group.
Type: List of String types
Default: empty
Possible Values: Comma-separated list of exact topic names or regular expressions.
topic.creation.$alias.exclude
A list of strings representing regular expressions that match topic names. This list is used to exclude topics with matching values from getting the group's specific configuration. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group. Note that exclusion rules override any inclusion rules for topics.
Type: List of String types
Default: empty
Possible Values: Comma-separated list of exact topic names or regular expressions.
topic.creation.$alias.${kafkaTopicSpecificConfigName}
Any of the topic-level configurations (see Changing Broker Configurations Dynamically) for the version of the Kafka broker where the records will be written. The broker's topic-level configuration value is used if the configuration is not specified for the rule. $alias applies to the default group as well as any group defined in topic.creation.groups.
Type: property values
Default: Kafka broker value
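For example, a sketch that lets most topics use a default group and applies a compacted, single-partition configuration to topics matching a hypothetical configurations\..* pattern:
topic.creation.groups=compacted
topic.creation.default.replication.factor=3
topic.creation.default.partitions=5
topic.creation.compacted.include=configurations\..*
topic.creation.compacted.replication.factor=3
topic.creation.compacted.partitions=1
topic.creation.compacted.cleanup.policy=compact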
CSV Parsing
The configuration properties in this section are specific to the CSV Source connector only.
csv.pre.validate.file.enabled
Flag to enable validating the integrity of all records in the CSV file before processing any of its records. For example, if any record contains a linefeed within an unquoted field (which would break the record), the entire file is considered invalid and no records from that file are processed. The failed file is moved to the configured error path.
Important
You can also configure this property for the Consolidated SFTP Source connector.
If the number of records in a file is larger than the configured batch size, portions of the file may be retrieved from the SFTP server by the connector more than once.
Importance: low
Type: boolean
Default Value: false
csv.case.sensitive.field.names
Flag to determine if the field names in the header row should be treated as case sensitive.
Importance: low
Type: boolean
Default Value: false
csv.rfc.4180.parser.enabled
Flag to determine if the RFC 4180 parser should be used instead of the default parser.
Importance: low
Type: boolean
Default Value: false
csv.first.row.as.header
Flag to indicate if the first row of data contains the header of the file. If true, the position of the columns is determined by the first row of the CSV file, and the column position is inferred from the position of the schema supplied in value.schema. If set to true, the number of columns must be greater than or equal to the number of fields in the schema. If false, and schema.generation.enabled is true, the column names are generated from the pattern column%02d, where %02d is replaced by the column number.
Importance: medium
Type: boolean
Default Value: false
csv.escape.char
The character that indicates a special character, in integer form (ASCII code). Typically, a CSV file uses \ (92).
Importance: low
Type: int
Default Value: 92
csv.file.charset
Character set used to read the file.
Importance: low
Type: string
Default Value: UTF-8
Valid values: Big5, Big5-HKSCS, CESU-8, EUC-JP, EUC-KR, GB18030, GB2312, GBK, IBM-Thai, IBM00858, IBM01140, IBM01141, IBM01142, IBM01143, IBM01144, IBM01145, IBM01146, IBM01147, IBM01148, IBM01149, IBM037, IBM1026, IBM1047, IBM273, IBM277, IBM278, IBM280, IBM284, IBM285, IBM290, IBM297, IBM420, IBM424, IBM437, IBM500, IBM775, IBM850, IBM852, IBM855, IBM857, IBM860, IBM861, IBM862, IBM863, IBM864, IBM865, IBM866, IBM868, IBM869, IBM870, IBM871, IBM918, ISO-2022-CN, ISO-2022-JP, ISO-2022-JP-2, ISO-2022-KR, ISO-8859-1, ISO-8859-13, ISO-8859-15, ISO-8859-2, ISO-8859-3, ISO-8859-4, ISO-8859-5, ISO-8859-6, ISO-8859-7, ISO-8859-8, ISO-8859-9, JIS_X0201, JIS_X0212-1990, KOI8-R, KOI8-U, Shift_JIS, TIS-620, US-ASCII, UTF-16, UTF-16BE, UTF-16LE, UTF-32, UTF-32BE, UTF-32LE, UTF-8, windows-1250, windows-1251, windows-1252, windows-1253, windows-1254, windows-1255, windows-1256, windows-1257, windows-1258, windows-31j, x-Big5-HKSCS-2001, x-Big5-Solaris, x-COMPOUND_TEXT, x-euc-jp-linux, x-EUC-TW, x-eucJP-Open, x-IBM1006, x-IBM1025, x-IBM1046, x-IBM1097, x-IBM1098, x-IBM1112, x-IBM1122, x-IBM1123, x-IBM1124, x-IBM1166, x-IBM1364, x-IBM1381, x-IBM1383, x-IBM300, x-IBM33722, x-IBM737, x-IBM833, x-IBM834, x-IBM856, x-IBM874, x-IBM875, x-IBM921, x-IBM922, x-IBM930, x-IBM933, x-IBM935, x-IBM937, x-IBM939, x-IBM942, x-IBM942C, x-IBM943, x-IBM943C, x-IBM948, x-IBM949, x-IBM949C, x-IBM950, x-IBM964, x-IBM970, x-ISCII91, x-ISO-2022-CN-CNS, x-ISO-2022-CN-GB, x-iso-8859-11, x-JIS0208, x-JISAutoDetect, x-Johab, x-MacArabic, x-MacCentralEurope, x-MacCroatian, x-MacCyrillic, x-MacDingbat, x-MacGreek, x-MacHebrew, x-MacIceland, x-MacRoman, x-MacRomania, x-MacSymbol, x-MacThai, x-MacTurkish, x-MacUkraine, x-MS932_0213, x-MS950-HKSCS, x-MS950-HKSCS-XP, x-mswin-936, x-PCK, x-SJIS_0213, x-UTF-16LE-BOM, X-UTF-32BE-BOM, X-UTF-32LE-BOM, x-windows-50220, x-windows-50221, x-windows-874, x-windows-949, x-windows-950, x-windows-iso2022jp
csv.ignore.leading.whitespace
Sets whether leading white space is ignored. If set to true (the default), white space in front of a quote in a field is ignored.
Importance: low
Type: boolean
Default Value: true
csv.ignore.quotations
Sets whether quotations are ignored. If set to true, quotation characters are ignored during parsing.
Importance: low
Type: boolean
Default Value: false
csv.keep.carriage.return
Flag to determine if the carriage return at the end of the line should be maintained.
Importance: low
Type: boolean
Default Value: false
csv.null.field.indicator
Indicator to determine how the CSV reader determines if a field is null. Valid values are EMPTY_SEPARATORS, EMPTY_QUOTES, BOTH, or NEITHER (the default). For more information, see the Opencsv documentation.
Importance: low
Type: string
Default Value: NEITHER
Valid values: EMPTY_SEPARATORS, EMPTY_QUOTES, BOTH, NEITHER
csv.quote.char
The character that is used to quote a field. This typically happens when the csv.separator.char character is within the data.
Importance: low
Type: int
Default Value: 34
csv.separator.char
The ASCII value of the character that separates each field. Typically, a CSV file uses , (ASCII value 44) and a TSV file uses tab (ASCII value 9). If csv.separator.char is defined as null (0), the RFC 4180 parser is used by default; this is the equivalent of csv.rfc.4180.parser.enabled = true.
Importance: low
Type: int
Default Value: 44
csv.skip.lines
Number of lines to skip at the beginning of the file.
Importance: low
Type: int
Default Value: 0
csv.strict.quotes
Sets the strict quotes setting. If true, characters outside the quotes are ignored.
Importance: low
Type: boolean
Default Value: false
csv.verify.reader
Flag to determine if the reader should be verified.
Importance: low
Type: boolean
Default Value: true
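Putting a few of these together, a hedged sketch for a tab-separated file with a header row (all values are illustrative):
input.file.parser.format=CSV
csv.first.row.as.header=true
csv.separator.char=9
csv.file.charset=UTF-8
csv.skip.lines=0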
File System
input.path
The directory where Kafka Connect reads files that are processed. This directory must exist and be writable by the user running Connect.
Importance: high
Type: string
Valid value: Absolute path to an SFTP directory that exists and is writable.
input.file.pattern
Regular expression to check input file names against. This expression must match the entire filename, the equivalent of Matcher.matches().
Importance: high
Type: string
finished.path
The directory where Connect puts files that are successfully processed. This directory must exist and be writable by the user running Connect.
Importance: high
Type: string
Valid value: Absolute path to an SFTP directory that exists and is writable.
error.path
The directory to place files that have errors. This directory must exist and be writable by the user running Kafka Connect.
Importance: high
Type: string
Valid value: Absolute path to an SFTP directory that exists and is writable.
behavior.on.error
Sets how the connector behaves when errors are encountered while processing records. FAIL stops the connector when any error occurs. IGNORE ignores the current file and continues to the next file for processing. LOG logs the error message and continues to the next file for processing.
Importance: high
Type: string
Default Value: FAIL
Valid values: FAIL, IGNORE, LOG
cleanup.policy
Sets how the connector cleans up files that are successfully processed. NONE leaves the files in place; files left in place may be reprocessed if the connector is restarted. DELETE removes the file from the filesystem. MOVE (the default) moves the file to the finished.path directory.
Importance: medium
Type: string
Default Value: MOVE
Valid values: NONE, DELETE, MOVE
file.minimum.age.ms
The amount of time in milliseconds after the file was last written to before the file can be processed.
Importance: low
Type: long
Default Value: 0
Valid values: [0,…]
processing.file.extension
Before a file is processed, it is renamed to indicate that it is currently being processed. This extension is appended to the end of the file name.
Importance: low
Type: string
Default Value: .PROCESSING
Valid values: regex( ^.*\..+$ )
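A hedged sketch of the file-system properties; all paths and the file pattern are hypothetical:
input.path=/data/sftp/input
finished.path=/data/sftp/finished
error.path=/data/sftp/error
input.file.pattern=orders-.*\.csv
behavior.on.error=IGNORE
cleanup.policy=MOVE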
Schema and Schema Generation
The configuration properties in this section are specific to the CSV Source connector and the JSON Source connector only.
key.schema
The schema for the key written to Kafka.
Importance: high
Type: string
value.schema
The schema for the value written to Kafka.
Importance: high
Type: string
schema.generation.enabled
Flag to determine if schemas should be dynamically generated. If set to true, key.schema and value.schema can be omitted, but schema.generation.key.name and schema.generation.value.name must be set.
Importance: medium
Type: boolean
Default Value: false
schema.generation.key.fields
The field(s) used to build the key schema. This is only used during schema generation. If schema.generation.enabled is true and schema.generation.key.fields is set to an empty list ([]), then key.schema will be of type Struct with empty fields. If schema.generation.enabled is true and schema.generation.key.fields is set to a list of field names, then key.schema will be generated by extracting the listed fields from value.schema.
Importance: medium
Type: list
Default Value: []
schema.generation.key.name
The name of the generated key schema.
Importance: medium
Type: string
Default Value: defaultkeyschemaname
schema.generation.value.name
The name of the generated value schema.
Importance: medium
Type: string
Default Value: defaultvalueschemaname
timestamp.field
The field in the value schema that contains the parsed timestamp for the record. This field cannot be marked as optional and must be a Timestamp.
Importance: medium
Type: string
Default Value: ""
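For example, a sketch with hypothetical schema names and key field:
schema.generation.enabled=true
schema.generation.key.name=com.example.OrderKey
schema.generation.value.name=com.example.OrderValue
schema.generation.key.fields=order_id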
Timestamps
timestamp.mode
Determines how the connector sets the timestamp for the ConnectRecord. If set to FIELD, the timestamp is read from a field in the value; this field cannot be optional, must be a Timestamp, and is specified in timestamp.field. If set to FILE_TIME, the last time the file was modified is used. If set to PROCESS_TIME (the default), the time the record is read is used.
Note
This configuration property works only with the Schemaless JSON Source and Binary Source connectors.
Importance: medium
Type: string
Default Value: PROCESS_TIME
Valid values: FIELD, FILE_TIME, PROCESS_TIME
timestamp.field
The field in the value schema that contains the parsed timestamp for the record. This field cannot be marked as optional and must be a Timestamp.
Importance: medium
Type: string
parser.timestamp.date.formats
The date formats that are expected in the file. This is a list of strings that are used to parse the date fields in order. The most accurate date format should be first in the list. See the Java documentation for more information.
Importance: low
Type: list
Default Value: [yyyy-MM-dd'T'HH:mm:ss, yyyy-MM-dd' 'HH:mm:ss]
parser.timestamp.timezone
The time zone used for all parsed dates.
Importance: low
Type: string
Default Value: UTC
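A sketch that reads the record timestamp from a hypothetical event_time field:
timestamp.mode=FIELD
timestamp.field=event_time
parser.timestamp.date.formats=yyyy-MM-dd'T'HH:mm:ss,yyyy-MM-dd' 'HH:mm:ss
parser.timestamp.timezone=UTC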
CSFLE configuration
csfle.enabled
Accepts a boolean value. CSFLE (client-side field level encryption) is enabled for the connector if csfle.enabled is set to true.
Type: boolean
Default: false
auto.register.schemas
Specifies if the Serializer should attempt to register the Schema with Schema Registry.
Type: boolean
Default: true
Importance: medium
use.latest.version
Only applies when auto.register.schemas is set to false. If auto.register.schemas is set to false and use.latest.version is set to true, then instead of deriving a schema for the object passed to the client for serialization, Schema Registry uses the latest version of the schema in the subject for serialization.
Type: boolean
Default: true
Importance: medium
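A minimal sketch that enables CSFLE and pins serialization to the latest registered schema rather than auto-registering:
csfle.enabled=true
auto.register.schemas=false
use.latest.version=true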
Confluent license properties
confluent.topic.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one though, in case a server is down).
Type: list
Importance: high
confluent.topic
Name of the Kafka topic used for Confluent Platform configuration, including licensing information.
Type: string
Default: _confluent-command
Importance: low
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with fewer than three brokers, you must set this to the number of brokers (often 1).
Type: int
Default: 3
Importance: low
You can put license-related properties in the connector configuration or, starting with Confluent Platform version 6.0, in the Connect worker configuration instead of in each connector configuration.
This connector is proprietary and requires a license. The license information is stored in the _confluent-command
topic. If the broker requires SSL for connections, you must include the security-related confluent.topic.* properties
as described below.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments. If you are a subscriber, contact Confluent Support for more information.
Type: string
Default: ""
Valid Values: Confluent Platform license
Importance: high
confluent.topic.ssl.truststore.location
The location of the trust store file.
Type: string
Default: null
Importance: high
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
Type: password
Default: null
Importance: high
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for the client and can be used for two-way authentication of the client.
Type: string
Default: null
Importance: high
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.
Type: password
Default: null
Importance: high
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for the client.
Type: password
Default: null
Importance: high
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
Type: string
Default: PLAINTEXT
Importance: medium
License topic configuration
A Confluent enterprise license is stored in the _confluent-command topic.
This topic is created by default and contains the license that corresponds to
the license key supplied through the confluent.license property. No public
keys are stored in Kafka topics.
The following describes how the default _confluent-command topic is
generated under different scenarios:
A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing.
You can change the name of the _confluent-command topic using the
confluent.topic property (for instance, if your environment has strict
naming conventions). The example below shows this change and the configured
Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that
you can use for development and testing. For a production environment, you add
the normal producer, consumer, and topic configuration properties to the
connector properties, prefixed with confluent.topic..
License topic ACLs
The _confluent-command topic contains the license that corresponds to the
license key supplied through the confluent.license property. It is created
by default. Connectors that access this topic require the following ACLs
configured:
CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.
DESCRIBE, READ, and WRITE on the _confluent-command topic.
Important
You can also use DESCRIBE and READ without WRITE to restrict access to read-only for license topic ACLs. If a topic exists, the LicenseManager will not try to create the topic.
You can provide access either individually for each principal that will
use the license or use a wildcard entry to
allow all clients. The following examples show commands that you can use to
configure ACLs for the resource cluster and _confluent-command topic.
Set a CREATE and DESCRIBE ACL on the resource cluster:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation CREATE --operation DESCRIBE --cluster
Set a DESCRIBE, READ, and WRITE ACL on the _confluent-command topic:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
Override Default Configuration Properties
You can override the replication factor using
confluent.topic.replication.factor. For example, when using a Kafka cluster
as a destination with fewer than three brokers (for development and testing), you
should set the confluent.topic.replication.factor property to 1.
You can override producer-specific properties by using the
producer.override.* prefix (for source connectors) and consumer-specific
properties by using the consumer.override.* prefix (for sink connectors).
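For example, a sketch for a source connector that lowers the licensing topic's replication factor for a single-broker test cluster and overrides one producer property (the compression choice is illustrative):
confluent.topic.replication.factor=1
producer.override.compression.type=lz4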
You can use the defaults or customize the other properties as well. For example,
the confluent.topic.client.id property defaults to the name of the connector
with -licensing suffix. You can specify the configuration settings for
brokers that require SSL or SASL for client connections using this prefix.
You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.