CSV Source Connector for Confluent Platform

This connector monitors the directory specified in input.path for files and reads them as CSVs, converting each of the records to the strongly typed equivalent specified in key.schema and value.schema.

To use this connector, specify the name of the connector class in the connector.class configuration property.

connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector

Connector-specific configuration properties are described below.

CSV Source Connector Examples

The following examples follow the same steps as the Quick Start for installing Confluent Platform and the Spool Dir connectors.

CSV with Schema Example

This example reads CSV files and writes them to Kafka. It parses them using the schemas specified in key.schema and value.schema.

  1. Create a data directory and generate test data.

    curl "https://api.mockaroo.com/api/58605010?count=1000&key=25fd9c80" > "data/csv-spooldir-source.csv"
    
  2. Create a spooldir.properties file with the following contents (an expanded, human-readable rendering of the key.schema value appears after this example):

    name=CsvSchemaSpoolDir
    tasks.max=1
    connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector
    input.path=/path/to/data
    input.file.pattern=csv-spooldir-source.csv
    error.path=/path/to/error
    finished.path=/path/to/finished
    halt.on.error=false
    topic=spooldir-testing-topic
    csv.first.row.as.header=true
    key.schema={\n  \"name\" : \"com.example.users.UserKey\",\n  \"type\" : \"STRUCT\",\n  \"isOptional\" : false,\n  \"fieldSchemas\" : {\n    \"id\" : {\n      \"type\" : \"INT64\",\n      \"isOptional\" : false\n    }\n  }\n}
    value.schema={\n  \"name\" : \"com.example.users.User\",\n  \"type\" : \"STRUCT\",\n  \"isOptional\" : false,\n  \"fieldSchemas\" : {\n    \"id\" : {\n      \"type\" : \"INT64\",\n      \"isOptional\" : false\n    },\n    \"first_name\" : {\n      \"type\" : \"STRING\",\n      \"isOptional\" : true\n    },\n    \"last_name\" : {\n      \"type\" : \"STRING\",\n      \"isOptional\" : true\n    },\n    \"email\" : {\n      \"type\" : \"STRING\",\n      \"isOptional\" : true\n    },\n    \"gender\" : {\n      \"type\" : \"STRING\",\n      \"isOptional\" : true\n    },\n    \"ip_address\" : {\n      \"type\" : \"STRING\",\n      \"isOptional\" : true\n    },\n    \"last_login\" : {\n      \"type\" : \"STRING\",\n      \"isOptional\" : true\n    },\n    \"account_balance\" : {\n      \"name\" : \"org.apache.kafka.connect.data.Decimal\",\n      \"type\" : \"BYTES\",\n      \"version\" : 1,\n      \"parameters\" : {\n        \"scale\" : \"2\"\n      },\n      \"isOptional\" : true\n    },\n    \"country\" : {\n      \"type\" : \"STRING\",\n      \"isOptional\" : true\n    },\n    \"favorite_color\" : {\n      \"type\" : \"STRING\",\n      \"isOptional\" : true\n    }\n  }\n}
    
  3. Load the SpoolDir CSV Source Connector.

    Caution

    You must include a double dash (--) between the connector name and your flag.

    confluent local load spooldir -- -d spooldir.properties
    

    Important

    Don’t use the Confluent CLI in production environments.

  4. Validate messages are sent to Kafka serialized with Avro.

    kafka-avro-console-consumer --topic spooldir-testing-topic --from-beginning --bootstrap-server localhost:9092
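
For readability, this is the key.schema value from step 2 with its escaped newlines expanded. The value.schema entry follows the same structure, with one fieldSchemas entry per column:

    {
      "name" : "com.example.users.UserKey",
      "type" : "STRUCT",
      "isOptional" : false,
      "fieldSchemas" : {
        "id" : {
          "type" : "INT64",
          "isOptional" : false
        }
      }
    }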
    

TSV Input File Example

The following example loads a TSV file and produces each record to Kafka.

  1. Generate a TSV dataset using the command below:

    curl "https://api.mockaroo.com/api/b10f7e90?count=1000&key=25fd9c80" > "tsv-spooldir-source.tsv"
    
  2. Create a spooldir.properties file with the following contents:

    name=TsvSpoolDir
    tasks.max=1
    connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector
    input.path=/path/to/data
    input.file.pattern=tsv-spooldir-source.tsv
    error.path=/path/to/error
    finished.path=/path/to/finished
    halt.on.error=false
    topic=spooldir-tsv-topic
    schema.generation.enabled=true
    csv.first.row.as.header=true
    csv.separator.char=9
    
  3. Load the SpoolDir CSV Source Connector.

    Caution

    You must include a double dash (--) between the connector name and your flag.

    confluent local load spooldir -- -d spooldir.properties
    

    Important

    Don’t use the Confluent CLI in production environments.
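
As an alternative to the Confluent CLI (which is not meant for production use), the same configuration can be submitted as JSON to the Kafka Connect REST API. A sketch, assuming the Connect worker's REST interface listens on localhost:8083:

    curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
      --data '{
        "name": "TsvSpoolDir",
        "config": {
          "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
          "tasks.max": "1",
          "input.path": "/path/to/data",
          "input.file.pattern": "tsv-spooldir-source.tsv",
          "error.path": "/path/to/error",
          "finished.path": "/path/to/finished",
          "halt.on.error": "false",
          "topic": "spooldir-tsv-topic",
          "schema.generation.enabled": "true",
          "csv.first.row.as.header": "true",
          "csv.separator.char": "9"
        }
      }'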

Configuration Properties

General

topic

The Kafka topic to write the data to.

  • Importance: HIGH
  • Type: STRING
batch.size

The number of records that should be returned with each batch.

  • Importance: LOW
  • Type: INT
  • Default Value: 1000
empty.poll.wait.ms

The amount of time to wait if a poll returns an empty list of records.

  • Importance: LOW
  • Type: LONG
  • Default Value: 500
  • Validator: [1,…,9223372036854775807]

Metadata

metadata.field

The name of the field in the value where the metadata will be stored.

  • Importance: LOW
  • Type: STRING
  • Default Value: metadata
metadata.location

The location where metadata about the input file is stored. FIELD stores the metadata in a field in the value of the record. HEADERS stores the metadata as headers on the record. NONE stores no metadata.

  • Importance: LOW
  • Type: STRING
  • Default Value: HEADERS
  • Validator: Matches: NONE, HEADERS, FIELD

File System

error.path

The directory to place files that have errors. This directory must exist and be writable by the user running Kafka Connect.

  • Importance: HIGH
  • Type: STRING
  • Validator: Absolute path to a directory that exists and is writable.
input.file.pattern

Regular expression to check input file names against. This expression must match the entire filename, the equivalent of Matcher.matches() (see the sketch below).

  • Importance: HIGH
  • Type: STRING
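
To illustrate the Matcher.matches() semantics (the expression must cover the whole file name, not just a substring), a minimal Java sketch using a hypothetical pattern for .csv files:

    import java.util.regex.Pattern;

    public class PatternCheck {
        public static void main(String[] args) {
            // Hypothetical pattern: any file ending in .csv. Note the escaped dot.
            Pattern p = Pattern.compile("^.*\\.csv$");

            // true: the expression covers the entire file name
            System.out.println(p.matcher("csv-spooldir-source.csv").matches());

            // false: the trailing .bak is not covered, so this file would not be picked up
            System.out.println(p.matcher("csv-spooldir-source.csv.bak").matches());
        }
    }
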
input.path

The directory where Kafka Connect reads files that are processed. This directory must exist and be writable by the user running Connect.

  • Importance: HIGH
  • Type: STRING
  • Validator: Absolute path to a directory that exists and is writable.
finished.path

The directory where Connect puts files that are successfully processed. This directory must exist and be writable by the user running Connect.

  • Importance: HIGH
  • Type: STRING
halt.on.error

Sets whether the task halts when it encounters an error or continues to the next file.

  • Importance: HIGH
  • Type: BOOLEAN
  • Default Value: true
cleanup.policy

Determines how the connector should clean up files that have been successfully processed. NONE leaves the files in place, which could cause them to be reprocessed if the connector is restarted. DELETE removes the file from the filesystem. MOVE moves the file to a finished directory. MOVEBYDATE moves the file to a finished directory with subdirectories by date.

  • Importance: MEDIUM
  • Type: STRING
  • Default Value: MOVE
  • Validator: Matches: NONE, DELETE, MOVE, MOVEBYDATE
task.partitioner

The task partitioner implementation used when the connector is configured with more than one task. Each task uses it to identify which files it will process, ensuring that each file is assigned to exactly one task.

  • Importance: MEDIUM
  • Type: STRING
  • Default Value: ByName
  • Validator: Matches: ByName
file.buffer.size.bytes

The size of the buffer for the BufferedInputStream used to interact with the file system.

  • Importance: LOW
  • Type: INT
  • Default Value: 131072
  • Validator: [1,…]
file.minimum.age.ms

The amount of time in milliseconds after the file was last written to before the file can be processed.

  • Importance: LOW
  • Type: LONG
  • Default Value: 0
  • Validator: [0,…]
files.sort.attributes

The attributes each file uses to determine the sort order: Name sorts by the name of the file, Length by the length of the file, and LastModified by the LastModified attribute of the file. The Asc and Desc suffixes select ascending or descending order.

  • Importance: LOW
  • Type: LIST
  • Default Value: [NameAsc]
  • Validator: Matches: NameAsc, NameDesc, LengthAsc, LengthDesc, LastModifiedAsc, LastModifiedDesc
processing.file.extension

Before a file is processed, it is renamed to indicate that it is currently being processed. This setting is appended to the end of the file name.

  • Importance: LOW
  • Type: STRING
  • Default Value: .PROCESSING
  • Validator: Matches regex( ^.*\..+$ )

Schema

key.schema

The schema for the key written to Kafka.

  • Importance: HIGH
  • Type: STRING
value.schema

The schema for the value written to Kafka.

  • Importance: HIGH
  • Type: STRING

Schema Generation

schema.generation.enabled

Flag to determine if schemas should be dynamically generated. If set to true, key.schema and value.schema can be omitted, but schema.generation.key.name and schema.generation.value.name must be set.

  • Importance: MEDIUM
  • Type: BOOLEAN
schema.generation.key.fields

The field(s) to use to build a key schema. This is only used during schema generation.

  • Importance: MEDIUM
  • Type: LIST
schema.generation.key.name

The name of the generated key schema.

  • Importance: MEDIUM
  • Type: STRING
  • Default Value: com.github.jcustenborder.kafka.connect.model.Key
schema.generation.value.name

The name of the generated value schema.

  • Importance: MEDIUM
  • Type: STRING
  • Default Value: com.github.jcustenborder.kafka.connect.model.Value
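
Taken together, a minimal schema-generation sketch might look like the following (the id key field and the schema names are illustrative):

    schema.generation.enabled=true
    schema.generation.key.fields=id
    schema.generation.key.name=com.example.users.UserKey
    schema.generation.value.name=com.example.users.User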

Timestamps

timestamp.mode

Determines how the connector sets the timestamp for the ConnectRecord. If set to FIELD, the timestamp is read from a field in the value; this field cannot be optional, must be a Timestamp, and is specified in timestamp.field. If set to FILE_TIME, the last time the file was modified is used. If set to PROCESS_TIME (the default), the time the record is read is used.

  • Importance: MEDIUM
  • Type: STRING
  • Default Value: PROCESS_TIME
  • Validator: Matches: FIELD, FILE_TIME, PROCESS_TIME
timestamp.field

The field in the value schema that contains the parsed timestamp for the record. This field cannot be marked as optional and must be a Timestamp.

  • Importance: MEDIUM
  • Type: STRING
parser.timestamp.date.formats

The date formats that are expected in the file. This is a list of strings that are used to parse the date fields in order. The most accurate date format should be first in the list. See the Java documentation for more information.

  • Importance: LOW
  • Type: LIST
  • Default Value: [yyyy-MM-dd'T'HH:mm:ss, yyyy-MM-dd' 'HH:mm:ss]
parser.timestamp.timezone

The time zone used for all parsed dates.

  • Importance: LOW
  • Type: STRING
  • Default Value: UTC
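
Putting these together, a hypothetical sketch that stamps each record with a timestamp parsed from the file data (the last_login field name is illustrative; the named field must be declared as a non-optional Timestamp in the value schema):

    timestamp.mode=FIELD
    timestamp.field=last_login
    parser.timestamp.date.formats=yyyy-MM-dd'T'HH:mm:ss
    parser.timestamp.timezone=UTC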

CSV Parsing

csv.case.sensitive.field.names

Flag to determine if the field names in the header row should be treated as case sensitive.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: false
csv.rfc.4180.parser.enabled

Flag to determine if the RFC 4180 parser should be used instead of the default parser.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: false
csv.first.row.as.header

Flag to indicate that the first row of data contains the header of the file. If set to true, the column positions are determined by the header row; otherwise, the column positions are inferred from the order of the fields in the schema supplied in value.schema. If set to true, the number of columns must be greater than or equal to the number of fields in the schema.

  • Importance: MEDIUM
  • Type: BOOLEAN
  • Default Value: false
csv.escape.char

The character that indicates a special (escape) character, in integer form. Typically, a CSV file uses the backslash \ (92).

  • Importance: LOW
  • Type: INT
  • Default Value: 92
csv.file.charset

Character set used to read the file.

  • Importance: LOW
  • Type: STRING
  • Default Value: UTF-8
  • Validator: Big5,Big5-HKSCS,CESU-8,EUC-JP,EUC-KR,GB18030,GB2312,GBK,IBM-Thai,IBM00858,IBM01140,IBM01141,IBM01142,IBM01143,IBM01144,IBM01145,IBM01146,IBM01147,IBM01148,IBM01149,IBM037,IBM1026,IBM1047,IBM273,IBM277,IBM278,IBM280,IBM284,IBM285,IBM290,IBM297,IBM420,IBM424,IBM437,IBM500,IBM775,IBM850,IBM852,IBM855,IBM857,IBM860,IBM861,IBM862,IBM863,IBM864,IBM865,IBM866,IBM868,IBM869,IBM870,IBM871,IBM918,ISO-2022-CN,ISO-2022-JP,ISO-2022-JP-2,ISO-2022-KR,ISO-8859-1,ISO-8859-13,ISO-8859-15,ISO-8859-2,ISO-8859-3,ISO-8859-4,ISO-8859-5,ISO-8859-6,ISO-8859-7,ISO-8859-8,ISO-8859-9,JIS_X0201,JIS_X0212-1990,KOI8-R,KOI8-U,Shift_JIS,TIS-620,US-ASCII,UTF-16,UTF-16BE,UTF-16LE,UTF-32,UTF-32BE,UTF-32LE,UTF-8,windows-1250,windows-1251,windows-1252,windows-1253,windows-1254,windows-1255,windows-1256,windows-1257,windows-1258,windows-31j,x-Big5-HKSCS-2001,x-Big5-Solaris,x-COMPOUND_TEXT,x-euc-jp-linux,x-EUC-TW,x-eucJP-Open,x-IBM1006,x-IBM1025,x-IBM1046,x-IBM1097,x-IBM1098,x-IBM1112,x-IBM1122,x-IBM1123,x-IBM1124,x-IBM1166,x-IBM1364,x-IBM1381,x-IBM1383,x-IBM300,x-IBM33722,x-IBM737,x-IBM833,x-IBM834,x-IBM856,x-IBM874,x-IBM875,x-IBM921,x-IBM922,x-IBM930,x-IBM933,x-IBM935,x-IBM937,x-IBM939,x-IBM942,x-IBM942C,x-IBM943,x-IBM943C,x-IBM948,x-IBM949,x-IBM949C,x-IBM950,x-IBM964,x-IBM970,x-ISCII91,x-ISO-2022-CN-CNS,x-ISO-2022-CN-GB,x-iso-8859-11,x-JIS0208,x-JISAutoDetect,x-Johab,x-MacArabic,x-MacCentralEurope,x-MacCroatian,x-MacCyrillic,x-MacDingbat,x-MacGreek,x-MacHebrew,x-MacIceland,x-MacRoman,x-MacRomania,x-MacSymbol,x-MacThai,x-MacTurkish,x-MacUkraine,x-MS932_0213,x-MS950-HKSCS,x-MS950-HKSCS-XP,x-mswin-936,x-PCK,x-SJIS_0213,x-UTF-16LE-BOM,X-UTF-32BE-BOM,X-UTF-32LE-BOM,x-windows-50220,x-windows-50221,x-windows-874,x-windows-949,x-windows-950,x-windows-iso2022jp
csv.ignore.leading.whitespace

Sets whether leading white space is ignored. If set to true (the default), white space in front of a quote in a field is ignored.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: true
csv.ignore.quotations

Sets whether quotations are ignored during parsing.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: false
csv.keep.carriage.return

Flag to determine if the carriage return at the end of the line should be maintained.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: false
csv.null.field.indicator

Indicator to determine how the CSV reader decides whether a field is null. Valid values are EMPTY_SEPARATORS, EMPTY_QUOTES, BOTH, or NEITHER (the default). For more information, see the Opencsv documentation.

  • Importance: LOW
  • Type: STRING
  • Default Value: NEITHER
  • Validator: Matches: EMPTY_SEPARATORS, EMPTY_QUOTES, BOTH, NEITHER
csv.quote.char

The character used to quote a field, in integer form (the default, 34, is the double quote). Quoting is typically needed when the csv.separator.char character appears within the data.

  • Importance: LOW
  • Type: INT
  • Default Value: 34
csv.separator.char

The character that separates each field, in integer form. Typically, a CSV file uses a comma (44) and a TSV file uses a tab (9). If csv.separator.char is defined as null (0), the RFC 4180 parser is used; this is the equivalent of setting csv.rfc.4180.parser.enabled=true.

  • Importance: LOW
  • Type: INT
  • Default Value: 44
csv.skip.lines

Number of lines to skip at the beginning of the file.

  • Importance: LOW
  • Type: INT
  • Default Value: 0
csv.strict.quotes

Sets the strict quotes setting. If true, characters outside the quotes are ignored.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: false
csv.verify.reader

Flag to determine if the reader should be verified.

  • Importance: LOW
  • Type: BOOLEAN
  • Default Value: true
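
For example, a parser configuration sketch for strictly RFC 4180-compliant input (the values are illustrative, not recommendations):

    csv.rfc.4180.parser.enabled=true
    csv.first.row.as.header=true
    csv.null.field.indicator=EMPTY_SEPARATORS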