CSV Source Connector for Confluent Platform

The Kafka Connect CSV Source Connector monitors the SFTP directory specified in input.path for files, reads them as CSV, and converts each record to the strongly typed equivalent specified in key.schema and value.schema. The connector can also auto-generate key.schema and value.schema at runtime if schema.generation.enabled is true.

To use this connector, specify the name of the connector class in the connector.class configuration property.

connector.class=io.confluent.connect.sftp.SftpCsvSourceConnector

Limitations

The CSV Source Connector authenticates with Kerberos only when a keytab is cached on the system, not when the keytab is provided in the configuration. Additionally, the connector works with Kerberos only on Confluent Platform 5.5.4 or later.

CSV Source Connector Examples

The following examples follow the same setup as the Quick Start. Review the Quick Start for help running Confluent Platform and installing the SFTP connector package.

CSV with Schema Example

This example reads CSV files and writes them to Kafka, parsing each record using the schemas specified in key.schema and value.schema.

  1. Set up an SFTP data directory for files to process, generate test data locally, and push it to the SFTP server. For example:

    echo $'1,Salmon,Baitman,sbaitman0@feedburner.com,Male,120.181.75.98,2015-03-01T06:01:15Z,17462.66,IT,#f09bc0\n2,Debby,Brea,dbrea1@icio.us,Female,153.239.187.49,2018-10-21T12:27:12Z,14693.49,CZ,#73893a' > "csv-sftp-source.csv"
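
    One way to push the file to the SFTP server is with the OpenSSH sftp client. This is a sketch that assumes password authentication and that the connector’s input.path is /path/to/data on the server:

    # Upload the generated file into the directory the connector watches.
    sftp -P 22 username@localhost <<EOF
    put csv-sftp-source.csv /path/to/data/
    EOF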
    
  2. Create an sftp.properties file with the following contents:

    name=CsvSchemaSftp
    kafka.topic=sftp-testing-topic
    tasks.max=1
    connector.class=io.confluent.connect.sftp.SftpCsvSourceConnector
    cleanup.policy=NONE
    behavior.on.error=IGNORE
    input.path=/path/to/data
    error.path=/path/to/error
    finished.path=/path/to/finished
    input.file.pattern=csv-sftp-source.csv
    sftp.username=username
    sftp.password=password
    sftp.host=localhost
    sftp.port=22
    csv.first.row.as.header=false
    key.schema={\"name\" : \"com.example.users.UserKey\",\"type\" : \"STRUCT\",\"isOptional\" : false,\"fieldSchemas\" : {\"id\" : {\"type\" : \"INT64\",\"isOptional\" : false}}}
    value.schema={\"name\" : \"com.example.users.User\",\"type\" : \"STRUCT\",\"isOptional\" : false,\"fieldSchemas\" : {\"id\" : {\"type\" : \"INT64\",\"isOptional\" : false},\"first_name\" : {\"type\" : \"STRING\",\"isOptional\" : true},\"last_name\" : {\"type\" : \"STRING\",\"isOptional\" : true},\"email\" : {\"type\" : \"STRING\",\"isOptional\" : true},\"gender\" : {\"type\" : \"STRING\",\"isOptional\" : true},\"ip_address\" : {\"type\" : \"STRING\",\"isOptional\" : true},\"last_login\" : {\"type\" : \"STRING\",\"isOptional\" : true},\"account_balance\" : {\"name\" : \"org.apache.kafka.connect.data.Decimal\",\"type\" : \"BYTES\",\"version\" : 1,\"parameters\" : {\"scale\" : \"2\"},\"isOptional\" : true},\"country\" : {\"type\" : \"STRING\",\"isOptional\" : true},\"favorite_color\" : {\"type\" : \"STRING\",\"isOptional\" : true}}}
    
  3. Load the SFTP CSV Source Connector.

    Caution

    You must include a double dash (--) between the connector name and your flag.

    confluent local services connect connector load CsvSchemaSftp --config sftp.properties
    

    Important

    Don’t use the Confluent CLI in production environments.
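
    As an alternative, you can submit the same configuration through the Kafka Connect REST API. The following is a minimal sketch, assuming the Connect worker’s REST interface listens on localhost:8083; add the remaining properties from sftp.properties, including the JSON-escaped key.schema and value.schema, to the config map in the same way:

    curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
      -d '{
        "name": "CsvSchemaSftp",
        "config": {
          "connector.class": "io.confluent.connect.sftp.SftpCsvSourceConnector",
          "tasks.max": "1",
          "kafka.topic": "sftp-testing-topic",
          "cleanup.policy": "NONE",
          "behavior.on.error": "IGNORE",
          "input.path": "/path/to/data",
          "error.path": "/path/to/error",
          "finished.path": "/path/to/finished",
          "input.file.pattern": "csv-sftp-source.csv",
          "sftp.username": "username",
          "sftp.password": "password",
          "sftp.host": "localhost",
          "sftp.port": "22",
          "csv.first.row.as.header": "false"
        }
      }'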

  4. Validate messages are sent to Kafka serialized with Avro.

    kafka-avro-console-consumer --topic sftp-testing-topic --from-beginning --bootstrap-server localhost:9092
    

TSV Input File Example

The following example loads a TSV file and produces each record to Kafka. The TSV case works just like the CSV Source connector, except that you must provide the extra configuration property csv.separator.char=9; 9 is the ASCII value of the tab character.

  1. Generate a TSV data set using the command below:

    echo $'id\tfirst_name\tlast_name\temail\tgender\tip_address\tlast_login\taccount_balance\tcountry\tfavorite_color\n1\tPadraig\tOxshott\tpoxshott0@dion.ne.jp\tMale\t47.243.121.95\t2016-06-24T22:43:42Z\t15274.22\tJP\t#06708f\n2\tEdi\tOrrah\teorrah1@cafepress.com\tFemale\t158.229.234.101\t2017-03-01T17:52:47Z\t12947.6\tCN\t#5f2aa2' > "tsv-sftp-source.tsv"
    
  2. Create an sftp.properties file with the following contents:

    name=TsvSftp
    tasks.max=1
    connector.class=io.confluent.connect.sftp.SftpCsvSourceConnector
    input.path=/path/to/data
    error.path=/path/to/error
    finished.path=/path/to/finished
    cleanup.policy=NONE
    input.file.pattern=tsv-sftp-source.tsv
    behavior.on.error=IGNORE
    sftp.username=username
    sftp.password=password
    sftp.host=localhost
    sftp.port=22
    kafka.topic=sftp-tsv-topic
    schema.generation.enabled=true
    csv.first.row.as.header=true
    csv.separator.char=9
    
  3. Load the SFTP TSV Source Connector.

    Caution

    You must include a double dash (--) between the connector name and your flag.

    confluent local services connect connector load TsvSftp --config sftp.properties
    

    Important

    Don’t use the Confluent CLI in production environments.
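
  4. Validate messages are sent to Kafka serialized with Avro. This is the same check as in the CSV example, pointed at the TSV topic:

    kafka-avro-console-consumer --topic sftp-tsv-topic --from-beginning --bootstrap-server localhost:9092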

Key-Based TLS Authentication Example

The following connector configuration example shows the configuration properties used when the connector requires TLS private/public key authentication:

name=sftp-sink-connector
topics=sftptopic
tasks.max=1
connector.class=io.confluent.connect.sftp.SftpSinkConnector
tls.enabled=true
tls.private.key=/<path-to-file>/myftpd.key
tls.public.key=/<path-to-file>/myftpd.crt
#sftp.passphrase=<uncomment-if-required>
sftp.username=<sftp-username>
sftp.password=<sftp-password>
confluent.topic.bootstrap.servers=localhost:9092
sftp.host=192.168.2.5
sftp.port=22
flush.size=3
format.class=io.confluent.connect.sftp.sink.format.avro.AvroFormat
storage.class=io.confluent.connect.sftp.sink.storage.SftpSinkStorage
sftp.working.dir=/home/<path-to-files>/
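
If you need a key pair for testing, one option is to generate one with the OpenSSH ssh-keygen tool. This is a sketch; the file names are placeholders, and you may need to convert the public key into the certificate format (for example, the .crt file above) that your SFTP server expects:

# Generate a 4096-bit RSA key pair in PEM format (file names are examples).
ssh-keygen -t rsa -b 4096 -m PEM -f myftpd.key -N ""
# This produces myftpd.key (private key) and myftpd.key.pub (public key).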

Configuration Properties

Connector-specific configuration properties are described below.

General

kafka.topic

The Kafka topic to write the data to.

  • Importance: HIGH
  • Type: STRING
batch.size

The number of records that should be returned with each batch.

  • Importance: LOW
  • Type: INT
  • Default Value: 1000
empty.poll.wait.ms

The amount of time to wait if a poll returns an empty list of records.

  • Importance: LOW
  • Type: LONG
  • Default Value: 250
  • Valid values: [1,…,9223372036854775807]

Connection

sftp.host

The SFTP host to connect to.

  • Type: string
  • Default: localhost
  • Importance: high
sftp.port

Port number of SFTP server.

  • Type: int
  • Default: 22
  • Importance: medium
sftp.username

The username for the SFTP server.

  • Type: string
  • Default: foo
  • Importance: high
sftp.password

The password for the SFTP server.

  • Type: string
  • Default: pass
  • Importance: high

Security

tls.private.key

Private key that will be used for public-key authentication.

  • Type: password
  • Default: [hidden]
  • Importance: low
tls.public.key

Public key that will be used for public-key authentication.

  • Type: password
  • Default: [hidden]
  • Importance: low
tls.passphrase

Passphrase that will be used to decrypt the private key if the given private key is encrypted.

  • Type: password
  • Default: [hidden]
  • Importance: low
tls.pemfile

Path to pemfile.

  • Type: string
  • Default: ""
  • Importance: low

Kerberos

If Kerberos is enabled, the connector assumes the /etc/krb5.conf file has been correctly configured and points to a KDC that can issue tickets for SFTP. The connector refreshes its TGT when it expires.

To test whether the krb5.conf file and the keytab are configured correctly, use the command kinit -k -t /path/to/user.keytab username. The connector first looks for a cached ticket on the system; if one does not exist, it uses the configured keytab and principal. Note that if a ticket cached on the system belongs to a different application, the cache may need to be cleared temporarily when starting the connector.
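
For example, the following commands offer a quick sanity check before starting the connector; the principal and keytab path are placeholders:

# Obtain a TGT non-interactively using the keytab.
kinit -k -t /path/to/user.keytab username
# Confirm the ticket was issued and inspect its expiry time.
klist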

kerberos.user.principal

The principal to use when connecting to SFTP with Kerberos. The sftp.username property is still required. Format: username@REALM.

  • Type: string
  • Default: ""
  • Importance: low
kerberos.keytab.path

The path to the keytab file for the SFTP connector principal. This keytab file should only be readable by the connector.

  • Type: string
  • Default: ""
  • Importance: low
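
Putting these together, a minimal sketch of the Kerberos-related connector properties; the realm, principal, and keytab path are placeholders:

sftp.username=username
kerberos.user.principal=username@EXAMPLE.COM
kerberos.keytab.path=/path/to/user.keytab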

Proxy

sftp.proxy.url

The proxy URL for the SFTP connection.

  • Type: string
  • Default: ""
  • Importance: low
proxy.username

The proxy username for the SFTP server, if a proxy is being used.

  • Type: string
  • Default: null
  • Importance: low
proxy.password

The proxy password for the SFTP server, if a proxy is being used.

  • Type: string
  • Default: null
  • Importance: low
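
For example, a connector that reaches the SFTP server through a proxy might add the following properties; the URL and credentials are placeholders:

sftp.proxy.url=http://proxy.example.com:8080
proxy.username=proxyuser
proxy.password=proxypass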

Auto topic creation

For more information about auto topic creation, see Configuring Auto Topic Creation for Source Connectors.

Note

Configuration properties accept regular expressions (regex) that are defined as Java regex.

topic.creation.groups

A list of group aliases that are used to define per-group topic configurations for matching topics. A default group always exists and matches all topics.

  • Type: List of String types
  • Default: empty
  • Possible Values: The values of this property refer to any additional groups. A default group is always defined for topic configurations.
topic.creation.$alias.replication.factor

The replication factor for new topics created by the connector. This value must not be larger than the number of brokers in the Kafka cluster. If this value is larger than the number of Kafka brokers, an error occurs when the connector attempts to create a topic. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.

  • Type: int
  • Default: n/a
  • Possible Values: >= 1 for a specific valid value or -1 to use the Kafka broker’s default value.
topic.creation.$alias.partitions

The number of topic partitions created by this connector. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.

  • Type: int
  • Default: n/a
  • Possible Values: >= 1 for a specific valid value or -1 to use the Kafka broker’s default value.
topic.creation.$alias.include

A list of strings that represent regular expressions that match topic names. This list is used to include topics with matching values, and apply this group’s specific configuration to the matching topics. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group.

  • Type: List of String types
  • Default: empty
  • Possible Values: Comma-separated list of exact topic names or regular expressions.
topic.creation.$alias.exclude

A list of strings representing regular expressions that match topic names. This list is used to exclude topics with matching values from getting the group’s specific configuration. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group. Note that exclusion rules override any inclusion rules for topics.

  • Type: List of String types
  • Default: empty
  • Possible Values: Comma-separated list of exact topic names or regular expressions.
topic.creation.$alias.${kafkaTopicSpecificConfigName}

Any of the topic-level broker configurations documented in Changing Broker Configurations Dynamically for the version of the Kafka broker where the records will be written. The broker’s topic-level configuration value is used if the configuration is not specified for the rule. $alias applies to the default group as well as any group defined in topic.creation.groups.

  • Type: property values
  • Default: Kafka broker value
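
For example, the following sketch defines the required default group settings plus one additional group that applies log compaction to matching topics; the alias, pattern, and values are illustrative:

topic.creation.groups=compacted
topic.creation.default.replication.factor=3
topic.creation.default.partitions=5
topic.creation.compacted.include=sftp-compacted-.*
topic.creation.compacted.cleanup.policy=compact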

File System

input.path

The directory where Kafka Connect reads files that are processed. This directory must exist and be writable by the user running Connect.

  • Importance: HIGH
  • Type: STRING
  • Valid value: Absolute path to an SFTP directory that exists and is writable.
input.file.pattern

Regular expression to check input file names against. This expression must match the entire filename. The equivalent of Matcher.matches().

  • Importance: HIGH
  • Type: STRING
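
For example, to process every CSV file in input.path rather than a single fixed name (remember that the pattern must match the entire file name):

input.file.pattern=.*\.csv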
finished.path

The directory where Connect puts files that are successfully processed. This directory must exist and be writable by the user running Connect.

  • Importance: HIGH
  • Type: STRING
  • Valid value: Absolute path to an SFTP directory that exists and is writable.
error.path

The directory to place files that have errors. This directory must exist and be writable by the user running Kafka Connect.

  • Importance: HIGH
  • Type: STRING
  • Valid value: Absolute path to an SFTP directory that exists and is writable.
behavior.on.error

Sets how the connector behaves when errors are encountered while processing records. FAIL stops the connector when any error occurs. IGNORE skips the current file and continues to the next file. LOG logs the error message and continues to the next file.

  • Importance: HIGH
  • Type: STRING
  • Default Value: FAIL
  • Valid values: FAIL, IGNORE, LOG
cleanup.policy

Sets how the connector should clean up files that are successfully processed. NONE leaves the files in place. Files left in place may be reprocessed if the connector is restarted. DELETE removes the file from the filesystem. MOVE (the default) moves the file to the finished.path directory.

  • Importance: MEDIUM
  • Type: STRING
  • Default Value: MOVE
  • Valid values: NONE, DELETE, MOVE
file.minimum.age.ms

The amount of time in milliseconds after the file was last written to before the file can be processed.

  • Importance: LOW
  • Type: LONG
  • Default Value: 0
  • Valid values: [0,…]
processing.file.extension

Before a file is processed, it is renamed to indicate that it is currently being processed. This extension is appended to the end of the file name.

  • Importance: LOW
  • Type: STRING
  • Default Value: .PROCESSING
  • Valid values: regex( ^.*\..+$ )

Schema

key.schema

The schema for the key written to Kafka.

  • Importance: HIGH
  • Type: STRING
value.schema

The schema for the value written to Kafka.

  • Importance: HIGH
  • Type: STRING

Schema Generation

schema.generation.enabled

Flag to determine if schemas should be dynamically generated. If set to true, key.schema and value.schema can be omitted, but schema.generation.key.name and schema.generation.value.name must be set.

  • Importance: MEDIUM
  • Type: BOOLEAN
  • Default Value: false
schema.generation.key.fields

The field(s) used to build the key schema. This is only used during schema generation. If schema.generation.enabled is true and schema.generation.key.fields is set to an empty list ([]), key.schema will be a Struct with no fields. If schema.generation.enabled is true and schema.generation.key.fields is set to a list of field names, key.schema is generated by extracting the listed fields from value.schema.

  • Importance: MEDIUM
  • Type: LIST
  • Default Value: []
schema.generation.key.name

The name of the generated key schema.

  • Importance: MEDIUM
  • Type: STRING
  • Default Value: defaultkeyschemaname
schema.generation.value.name

The name of the generated value schema.

  • Importance: MEDIUM
  • Type: STRING
  • Default Value: defaultvalueschemaname
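
For example, a minimal sketch that generates both schemas at runtime, names them, and builds the key from an id column taken from the file header; the names and field are illustrative:

schema.generation.enabled=true
schema.generation.key.fields=id
schema.generation.key.name=com.example.users.UserKey
schema.generation.value.name=com.example.users.User
csv.first.row.as.header=true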

Timestamps

timestamp.mode

Determines how the connector sets the timestamp for the ConnectRecord. If set to FIELD, the timestamp is read from a field in the value; this field cannot be optional, must be a Timestamp, and is specified in timestamp.field. If set to FILE_TIME, the last time the file was modified is used. If set to PROCESS_TIME (the default), the time the record is read is used.

  • Importance: MEDIUM
  • Type: STRING
  • Default Value: PROCESS_TIME
  • Valid values: FIELD, FILE_TIME, PROCESS_TIME
timestamp.field

The field in the value schema that contains the parsed timestamp for the record. This field cannot be marked as optional and must be a Timestamp.

  • Importance: MEDIUM
  • Type: STRING
parser.timestamp.date.formats

The date formats that are expected in the file. This is a list of strings used, in order, to parse the date fields; the most accurate date format should be first in the list. See the Java SimpleDateFormat documentation for more information.

  • Importance: LOW
  • Type: LIST
  • Default Value: [yyyy-MM-dd'T'HH:mm:ss, yyyy-MM-dd' 'HH:mm:ss]
parser.timestamp.timezone

The time zone used for all parsed dates.

  • Importance: LOW
  • Type: STRING
  • Default Value: UTC
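
For example, to take each record’s timestamp from a date column parsed in UTC; the field name and format are illustrative, and the field must be present in the value schema as a non-optional Timestamp:

timestamp.mode=FIELD
timestamp.field=last_login
parser.timestamp.date.formats=yyyy-MM-dd'T'HH:mm:ss
parser.timestamp.timezone=UTC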

CSV Parsing

csv.case.sensitive.field.names

Flag to determine if the field names in the header row should be treated as case sensitive.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: false
csv.rfc.4180.parser.enabled

Flag to determine if the RFC 4180 parser should be used instead of the default parser.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: false
csv.first.row.as.header

Flag to indicate whether the first row of data contains the file’s header. If set to true, the column positions are determined by the first row of the CSV file and are matched against the fields of the schema supplied in value.schema; the number of columns must be greater than or equal to the number of fields in the schema. If set to false and schema.generation.enabled is true, column names are generated using the pattern column%02d, where %02d is replaced by the column number (for example, column01, column02).

  • Importance: MEDIUM
  • Type: BOOLEAN
  • Default Value: false
csv.escape.char

The ASCII value of the character used to escape special characters. Typically, a CSV file uses a backslash (\, ASCII value 92).

  • Importance: LOW
  • Type: INT
  • Default Value: 92
csv.file.charset

Character set used to read the file.

  • Importance: LOW
  • Type: STRING
  • Default Value: UTF-8
  • Valid values: Big5, Big5-HKSCS, CESU-8, EUC-JP, EUC-KR, GB18030, GB2312, GBK, IBM-Thai, IBM00858, IBM01140, IBM01141, IBM01142, IBM01143, IBM01144, IBM01145, IBM01146, IBM01147, IBM01148, IBM01149, IBM037, IBM1026, IBM1047, IBM273, IBM277, IBM278, IBM280, IBM284, IBM285, IBM290, IBM297, IBM420, IBM424, IBM437, IBM500, IBM775, IBM850, IBM852, IBM855, IBM857, IBM860, IBM861, IBM862, IBM863, IBM864, IBM865, IBM866, IBM868, IBM869, IBM870, IBM871, IBM918, ISO-2022-CN, ISO-2022-JP, ISO-2022-JP-2, ISO-2022-KR, ISO-8859-1, ISO-8859-13, ISO-8859-15, ISO-8859-2, ISO-8859-3, ISO-8859-4, ISO-8859-5, ISO-8859-6, ISO-8859-7, ISO-8859-8, ISO-8859-9, JIS_X0201, JIS_X0212-1990, KOI8-R, KOI8-U, Shift_JIS, TIS-620, US-ASCII, UTF-16, UTF-16BE, UTF-16LE, UTF-32, UTF-32BE, UTF-32LE, UTF-8, windows-1250, windows-1251, windows-1252, windows-1253, windows-1254, windows-1255, windows-1256, windows-1257, windows-1258, windows-31j, x-Big5-HKSCS-2001, x-Big5-Solaris, x-COMPOUND_TEXT, x-euc-jp-linux, x-EUC-TW, x-eucJP-Open, x-IBM1006, x-IBM1025, x-IBM1046, x-IBM1097, x-IBM1098, x-IBM1112, x-IBM1122, x-IBM1123, x-IBM1124, x-IBM1166, x-IBM1364, x-IBM1381, x-IBM1383, x-IBM300, x-IBM33722, x-IBM737, x-IBM833, x-IBM834, x-IBM856, x-IBM874, x-IBM875, x-IBM921, x-IBM922, x-IBM930, x-IBM933, x-IBM935, x-IBM937, x-IBM939, x-IBM942, x-IBM942C, x-IBM943, x-IBM943C, x-IBM948, x-IBM949, x-IBM949C, x-IBM950, x-IBM964, x-IBM970, x-ISCII91, x-ISO-2022-CN-CNS, x-ISO-2022-CN-GB, x-iso-8859-11, x-JIS0208, x-JISAutoDetect, x-Johab, x-MacArabic, x-MacCentralEurope, x-MacCroatian, x-MacCyrillic, x-MacDingbat, x-MacGreek, x-MacHebrew, x-MacIceland, x-MacRoman, x-MacRomania, x-MacSymbol, x-MacThai, x-MacTurkish, x-MacUkraine, x-MS932_0213, x-MS950-HKSCS, x-MS950-HKSCS-XP, x-mswin-936, x-PCK, x-SJIS_0213, x-UTF-16LE-BOM, X-UTF-32BE-BOM, X-UTF-32LE-BOM, x-windows-50220, x-windows-50221, x-windows-874, x-windows-949, x-windows-950, x-windows-iso2022jp
csv.ignore.leading.whitespace

Sets whether leading white space is ignored. If set to true (the default), white space in front of a quote in a field is ignored.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: true
csv.ignore.quotations

Sets whether quotation characters are ignored during parsing.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: false
csv.keep.carriage.return

Flag to determine if the carriage return at the end of the line should be maintained.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: false
csv.null.field.indicator

Indicator to determine how the CSV reader determines whether a field is null. Valid values are EMPTY_SEPARATORS, EMPTY_QUOTES, BOTH, or NEITHER (the default). For more information, see the Opencsv documentation.

  • Importance: LOW
  • Type: STRING
  • Default Value: NEITHER
  • Valid values: EMPTY_SEPARATORS, EMPTY_QUOTES, BOTH, NEITHER
csv.quote.char

The character that is used to quote a field. This typically happens when the csv.separator.char character is within the data.

  • Importance: LOW
  • Type: INT
  • Default Value: 34
csv.separator.char

The ASCII value of the character that separates each field. Typically, a CSV file uses a comma (ASCII value 44) and a TSV file uses a tab (ASCII value 9). If csv.separator.char is defined as null (0), the RFC 4180 parser is used; this is the equivalent of setting csv.rfc.4180.parser.enabled=true.

  • Importance: LOW
  • Type: INT
  • Default Value: 44
csv.skip.lines

Number of lines to skip at the beginning of the file.

  • Importance: LOW
  • Type: INT
  • Default Value: 0
csv.strict.quotes

Sets the strict quotes setting. If true, characters outside the quotes are ignored.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: false
csv.verify.reader

Flag to determine if the reader should be verified.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: true