CSV Source Connector for Confluent Platform¶
The CSV Source connector monitors the directory specified in input.path for files and reads them as CSVs, converting each record to the strongly typed equivalent specified in key.schema and value.schema. If a file has already been processed by this connector, processing resumes from the last offset recorded during the previous run.
To use this connector, specify the name of the connector class in the connector.class
configuration property:
connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector
Connector-specific configuration properties are described below.
CSV Source Connector Examples¶
The examples on this page follow the same steps as the quick start for installing Confluent Platform and the Spool Dir connectors.
CSV with Schema Example¶
This example reads CSV files and writes them to Kafka, parsing each file with the schemas specified in key.schema and value.schema.
Create a data directory and generate test data.
curl "https://api.mockaroo.com/api/58605010?count=1000&key=25fd9c80" > "data/csv-spooldir-source.csv"
Create a spooldir.properties file with the following contents:

name=CsvSchemaSpoolDir
tasks.max=1
connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector
input.path=/path/to/data
input.file.pattern=csv-spooldir-source.csv
error.path=/path/to/error
finished.path=/path/to/finished
halt.on.error=false
topic=spooldir-testing-topic
csv.first.row.as.header=true
key.schema={\n \"name\" : \"com.example.users.UserKey\",\n \"type\" : \"STRUCT\",\n \"isOptional\" : false,\n \"fieldSchemas\" : {\n \"id\" : {\n \"type\" : \"INT64\",\n \"isOptional\" : false\n }\n }\n}
value.schema={\n \"name\" : \"com.example.users.User\",\n \"type\" : \"STRUCT\",\n \"isOptional\" : false,\n \"fieldSchemas\" : {\n \"id\" : {\n \"type\" : \"INT64\",\n \"isOptional\" : false\n },\n \"first_name\" : {\n \"type\" : \"STRING\",\n \"isOptional\" : true\n },\n \"last_name\" : {\n \"type\" : \"STRING\",\n \"isOptional\" : true\n },\n \"email\" : {\n \"type\" : \"STRING\",\n \"isOptional\" : true\n },\n \"gender\" : {\n \"type\" : \"STRING\",\n \"isOptional\" : true\n },\n \"ip_address\" : {\n \"type\" : \"STRING\",\n \"isOptional\" : true\n },\n \"last_login\" : {\n \"type\" : \"STRING\",\n \"isOptional\" : true\n },\n \"account_balance\" : {\n \"name\" : \"org.apache.kafka.connect.data.Decimal\",\n \"type\" : \"BYTES\",\n \"version\" : 1,\n \"parameters\" : {\n \"scale\" : \"2\"\n },\n \"isOptional\" : true\n },\n \"country\" : {\n \"type\" : \"STRING\",\n \"isOptional\" : true\n },\n \"favorite_color\" : {\n \"type\" : \"STRING\",\n \"isOptional\" : true\n }\n }\n}
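The key.schema and value.schema values must each stay on a single line in the properties file; the \n and \" sequences are literal escapes inside the property value. For readability, this is the value.schema JSON from above with those escapes expanded:

{
  "name" : "com.example.users.User",
  "type" : "STRUCT",
  "isOptional" : false,
  "fieldSchemas" : {
    "id" : { "type" : "INT64", "isOptional" : false },
    "first_name" : { "type" : "STRING", "isOptional" : true },
    "last_name" : { "type" : "STRING", "isOptional" : true },
    "email" : { "type" : "STRING", "isOptional" : true },
    "gender" : { "type" : "STRING", "isOptional" : true },
    "ip_address" : { "type" : "STRING", "isOptional" : true },
    "last_login" : { "type" : "STRING", "isOptional" : true },
    "account_balance" : {
      "name" : "org.apache.kafka.connect.data.Decimal",
      "type" : "BYTES",
      "version" : 1,
      "parameters" : { "scale" : "2" },
      "isOptional" : true
    },
    "country" : { "type" : "STRING", "isOptional" : true },
    "favorite_color" : { "type" : "STRING", "isOptional" : true }
  }
}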
Load the SpoolDir CSV Source connector.
Caution
You must include a double dash (--) between the connector name and your flag. For more information, see this post.

confluent local services connect connector load spooldir --config spooldir.properties
Important
Don’t use the Confluent CLI in production environments.
Validate that messages are sent to Kafka serialized with Avro.
kafka-avro-console-consumer --topic spooldir-testing-topic --from-beginning --bootstrap-server localhost:9092
TSV Input File Example¶
The following example loads a TSV file and produces each record to Kafka.
Generate a TSV dataset using the command below:
curl "https://api.mockaroo.com/api/b10f7e90?count=1000&key=25fd9c80" > "tsv-spooldir-source.tsv"
Create a spooldir.properties file with the following contents:

name=TsvSpoolDir
tasks.max=1
connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector
input.path=/path/to/data
input.file.pattern=tsv-spooldir-source.tsv
error.path=/path/to/error
finished.path=/path/to/finished
halt.on.error=false
topic=spooldir-tsv-topic
schema.generation.enabled=true
csv.first.row.as.header=true
csv.separator.char=9
Load the SpoolDir CSV Source connector.
Caution
You must include a double dash (--) between the connector name and your flag. For more information, see this post.

confluent local services connect connector load spooldir --config spooldir.properties
Important
Don’t use the Confluent CLI in production environments.
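Validate that messages are sent to Kafka, as in the CSV example (this assumes the Connect worker uses the Avro converter from the quick start):

kafka-avro-console-consumer --topic spooldir-tsv-topic --from-beginning --bootstrap-server localhost:9092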
Configuration Properties¶
General¶
topic
The Kafka topic to write the data to.
- Importance: high
- Type: string
batch.size
The number of records that should be returned with each batch.
- Importance: low
- Type: int
- Default Value: 1000
empty.poll.wait.ms
The amount of time to wait if a poll returns an empty list of records.
- Importance: low
- Type: long
- Default Value: 500
- Valid values: [1,…,9223372036854775807]
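For example, a hypothetical tuning that returns larger batches and waits longer between empty polls:

batch.size=5000
empty.poll.wait.ms=1000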
Metadata¶
metadata.field
The name of the field in the value where the metadata will be stored.
- Importance: low
- Type: string
- Default Value: metadata
metadata.location
Location where metadata about the input file will be stored. FIELD stores the metadata in a field in the value of the record. HEADERS stores the metadata as headers on the record. NONE stores no metadata about the input file.
- Importance: low
- Type: string
- Default Value: HEADERS
- Valid values: NONE, HEADERS, FIELD
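A minimal sketch of storing file metadata in the record value rather than in headers; the field name file_metadata is illustrative:

# Hypothetical: write input-file metadata into a value field named file_metadata
metadata.location=FIELD
metadata.field=file_metadata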
Auto topic creation¶
For more information about Auto topic creation, see Configuring Auto Topic Creation for Source Connectors.
Configuration properties accept regular expressions (regex) that are defined as Java regex.
topic.creation.groups
A list of group aliases that are used to define per-group topic configurations for matching topics. A default group always exists and matches all topics.
- Type: List of String types
- Default: empty
- Possible Values: The values of this property refer to any additional groups. A default group is always defined for topic configurations.
topic.creation.$alias.replication.factor
The replication factor for new topics created by the connector. This value must not be larger than the number of brokers in the Kafka cluster. If this value is larger than the number of Kafka brokers, an error occurs when the connector attempts to create a topic. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.
- Type: int
- Default: n/a
- Possible Values: >= 1 for a specific valid value, or -1 to use the Kafka broker's default value.
topic.creation.$alias.partitions
The number of topic partitions created by this connector. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.
- Type: int
- Default: n/a
- Possible Values: >= 1 for a specific valid value, or -1 to use the Kafka broker's default value.
topic.creation.$alias.include
A list of strings that represent regular expressions that match topic names. This list is used to include topics with matching values and apply this group's specific configuration to the matching topics. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group.
- Type: List of String types
- Default: empty
- Possible Values: Comma-separated list of exact topic names or regular expressions.
topic.creation.$alias.exclude
A list of strings representing regular expressions that match topic names. This list is used to exclude topics with matching values from getting the group's specific configuration. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group. Note that exclusion rules override any inclusion rules for topics.
- Type: List of String types
- Default: empty
- Possible Values: Comma-separated list of exact topic names or regular expressions.
topic.creation.$alias.${kafkaTopicSpecificConfigName}
Any of the settings documented in Changing Broker Configurations Dynamically for the version of the Kafka broker where the records will be written. The broker's topic-level configuration value is used if the configuration is not specified for the rule. $alias applies to the default group as well as any group defined in topic.creation.groups.
- Type: property values
- Default: Kafka broker value
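A hypothetical example tying these properties together; the group alias compacted and the pattern spooldir.* are illustrative:

# Required defaults, plus one extra group that compacts matching topics
topic.creation.groups=compacted
topic.creation.default.replication.factor=3
topic.creation.default.partitions=5
topic.creation.compacted.include=spooldir.*
topic.creation.compacted.replication.factor=3
topic.creation.compacted.partitions=1
topic.creation.compacted.cleanup.policy=compact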
File System¶
error.path
The directory to place files that have errors. This directory must exist and be writable by the user running Kafka Connect.
- Importance: high
- Type: string
- Valid values: Absolute path to a directory that exists and is writable.
input.file.pattern
Regular expression to check input file names against. This expression must match the entire filename, the equivalent of Matcher.matches(). For valid syntax definitions, see Class Pattern. A sketch appears at the end of this section.
- Importance: high
- Type: string
input.path
The directory where Kafka Connect reads files that are processed. This directory must exist and be writable by the user running Connect.
- Importance: high
- Type: string
- Valid values: Absolute path to a directory that exists and is writable.
finished.path
The directory where Connect puts files that are successfully processed. This directory must exist and be writable by the user running Connect.
- Importance: high
- Type: string
halt.on.error
Sets whether the task halts when it encounters an error or continues to the next file.
- Importance: high
- Type: boolean
- Default Value: true
cleanup.policy
Determines how the connector should clean up files that have been successfully processed. DELETE removes the file from the filesystem. MOVE moves the file to a finished directory. MOVEBYDATE moves the file to a finished directory with subdirectories by date.
- Importance: medium
- Type: string
- Default Value: MOVE
- Valid values: DELETE, MOVE, MOVEBYDATE
task.partitioner
The task partitioner implementation used when the connector is configured with more than one task. Each task uses it to identify which files it will process, ensuring that each file is assigned to exactly one task.
- Importance: medium
- Type: string
- Default Value: ByName
- Valid values: ByName
file.buffer.size.bytes
The size of the buffer for the BufferedInputStream used to interact with the file system.
- Importance: low
- Type: int
- Default Value: 131072
- Valid values: [1,…]
file.minimum.age.ms
The amount of time in milliseconds after the file was last written to before the file can be processed.
- Importance: low
- Type: long
- Default Value: 0
- Valid values: [0,…]
files.sort.attributes
The attributes used to determine the sort order of files. Name sorts by the name of the file. Length sorts by the length of the file, preferring larger files first. LastModified sorts by the LastModified attribute of the file, preferring older files first.
- Importance: low
- Type: list
- Default Value: [NameAsc]
- Valid values: NameAsc, NameDesc, LengthAsc, LengthDesc, LastModifiedAsc, LastModifiedDesc
processing.file.extension
Before a file is processed, it is renamed to indicate that it is currently being processed. This extension is appended to the end of the file name.
- Importance: low
- Type: string
- Default Value: .PROCESSING
- Valid values: Matches regex ^.*\..+$
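A minimal sketch of the file-system settings, assuming hypothetical input files named like orders-2023-01-01.csv; [.] is used instead of \. so the pattern survives properties-file escaping:

# Hypothetical paths and pattern; the pattern must match the entire file name
input.path=/path/to/data
input.file.pattern=^orders-.*[.]csv$
error.path=/path/to/error
finished.path=/path/to/finished
cleanup.policy=MOVEBYDATE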
Schema¶
key.schema
The schema for the key written to Kafka.
- Importance: high
- Type: string
value.schema
The schema for the value written to Kafka.
- Importance: high
- Type: string
Schema Generation¶
schema.generation.enabled
Flag to determine if schemas should be dynamically generated. If set to true,
key.schema
andvalue.schema
can be omitted, butschema.generation.key.name
andschema.generation.value.name
must be set.- Importance: medium
- Type: boolean
schema.generation.key.fields
The field(s) to use to build a key schema. This is only used during schema generation.
- Importance: medium
- Type: list
schema.generation.key.name
The name of the generated key schema.
- Importance: medium
- Type: string
- Default Value: com.github.jcustenborder.kafka.connect.model.Key
schema.generation.value.name
The name of the generated value schema.
- Importance: medium
- Type: string
- Default Value: com.github.jcustenborder.kafka.connect.model.Value
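A minimal sketch of schema generation, as used in the TSV example above; it assumes the data contains an id column, and the schema names are arbitrary:

# Hypothetical: generate schemas instead of supplying key.schema/value.schema
schema.generation.enabled=true
schema.generation.key.fields=id
schema.generation.key.name=com.example.users.UserKey
schema.generation.value.name=com.example.users.User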
Timestamps¶
timestamp.mode
Determines how the connector sets the timestamp for the ConnectRecord. If set to FIELD, the timestamp is read from a field in the value; this field cannot be optional, must be a Timestamp, and is specified in timestamp.field. If set to FILE_TIME, the last time the file was modified is used. If set to PROCESS_TIME (the default), the time the record is read is used.
- Importance: medium
- Type: string
- Default Value: PROCESS_TIME
- Valid values: FIELD, FILE_TIME, PROCESS_TIME
timestamp.field
The field in the value schema that contains the parsed timestamp for the record. This field cannot be marked as optional and must be a Timestamp.
- Importance: medium
- Type: string
parser.timestamp.date.formats
The date formats that are expected in the file. This is a list of strings that are used to parse the date fields in order. The most accurate date format should be first in the list. See the Java documentation for more information.
- Importance: low
- Type: list
- Default Value: [yyyy-MM-dd'T'HH:mm:ss, yyyy-MM-dd' 'HH:mm:ss]
parser.timestamp.timezone
The time zone used for all parsed dates.
- Importance: low
- Type: string
- Default Value: UTC
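A minimal sketch of field-based timestamps, assuming the value schema defines a Timestamp field named last_login:

# Hypothetical: take each record's timestamp from the last_login field
timestamp.mode=FIELD
timestamp.field=last_login
parser.timestamp.date.formats=yyyy-MM-dd'T'HH:mm:ss
parser.timestamp.timezone=UTC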
CSV Parsing¶
csv.case.sensitive.field.names
Flag to determine if the field names in the header row should be treated as case sensitive.
- Importance: low
- Type: boolean
- Default Value: false
csv.rfc.4180.parser.enabled
Flag to determine if the RFC 4180 parser should be used instead of the default parser.
- Importance: low
- Type: boolean
- Default Value: false
csv.first.row.as.header
Flag to indicate if the first row of data contains the header of the file. If set to true, the position of the columns is determined by the first row of the CSV file, and the column positions are matched against the fields of the schema supplied in value.schema. If set to true, the number of columns must be greater than or equal to the number of fields in the schema.
- Importance: medium
- Type: boolean
- Default Value: false
csv.escape.char
The character that indicates a special character, in integer form (ASCII code). Typically, a CSV file uses \ (92).
- Importance: low
- Type: int
- Default Value: 92
csv.file.charset
Character set used to read the file.
- Importance: low
- Type: string
- Default Value: UTF-8
- Valid values: Big5,Big5-HKSCS,CESU-8,EUC-JP,EUC-KR,GB18030,GB2312,GBK,IBM-Thai,IBM00858,IBM01140,IBM01141,IBM01142,IBM01143,IBM01144,IBM01145,IBM01146,IBM01147,IBM01148,IBM01149,IBM037,IBM1026,IBM1047,IBM273,IBM277,IBM278,IBM280,IBM284,IBM285,IBM290,IBM297,IBM420,IBM424,IBM437,IBM500,IBM775,IBM850,IBM852,IBM855,IBM857,IBM860,IBM861,IBM862,IBM863,IBM864,IBM865,IBM866,IBM868,IBM869,IBM870,IBM871,IBM918,ISO-2022-CN,ISO-2022-JP,ISO-2022-JP-2,ISO-2022-KR,ISO-8859-1,ISO-8859-13,ISO-8859-15,ISO-8859-2,ISO-8859-3,ISO-8859-4,ISO-8859-5,ISO-8859-6,ISO-8859-7,ISO-8859-8,ISO-8859-9,JIS_X0201,JIS_X0212-1990,KOI8-R,KOI8-U,Shift_JIS,TIS-620,US-ASCII,UTF-16,UTF-16BE,UTF-16LE,UTF-32,UTF-32BE,UTF-32LE,UTF-8,windows-1250,windows-1251,windows-1252,windows-1253,windows-1254,windows-1255,windows-1256,windows-1257,windows-1258,windows-31j,x-Big5-HKSCS-2001,x-Big5-Solaris,x-COMPOUND_TEXT,x-euc-jp-linux,x-EUC-TW,x-eucJP-Open,x-IBM1006,x-IBM1025,x-IBM1046,x-IBM1097,x-IBM1098,x-IBM1112,x-IBM1122,x-IBM1123,x-IBM1124,x-IBM1166,x-IBM1364,x-IBM1381,x-IBM1383,x-IBM300,x-IBM33722,x-IBM737,x-IBM833,x-IBM834,x-IBM856,x-IBM874,x-IBM875,x-IBM921,x-IBM922,x-IBM930,x-IBM933,x-IBM935,x-IBM937,x-IBM939,x-IBM942,x-IBM942C,x-IBM943,x-IBM943C,x-IBM948,x-IBM949,x-IBM949C,x-IBM950,x-IBM964,x-IBM970,x-ISCII91,x-ISO-2022-CN-CNS,x-ISO-2022-CN-GB,x-iso-8859-11,x-JIS0208,x-JISAutoDetect,x-Johab,x-MacArabic,x-MacCentralEurope,x-MacCroatian,x-MacCyrillic,x-MacDingbat,x-MacGreek,x-MacHebrew,x-MacIceland,x-MacRoman,x-MacRomania,x-MacSymbol,x-MacThai,x-MacTurkish,x-MacUkraine,x-MS932_0213,x-MS950-HKSCS,x-MS950-HKSCS-XP,x-mswin-936,x-PCK,x-SJIS_0213,x-UTF-16LE-BOM,X-UTF-32BE-BOM,X-UTF-32LE-BOM,x-windows-50220,x-windows-50221,x-windows-874,x-windows-949,x-windows-950,x-windows-iso2022jp
csv.ignore.leading.whitespace
Sets whether leading white space is ignored. If set to true (the default), white space in front of a quote in a field is ignored.
- Importance: low
- Type: boolean
- Default Value: true
csv.ignore.quotations
Sets whether quotations are ignored; if set to true, quotation characters in the input are ignored during parsing.
- Importance: low
- Type: boolean
- Default Value: false
csv.keep.carriage.return
Flag to determine if the carriage return at the end of the line should be maintained.
- Importance: low
- Type: boolean
- Default Value: false
csv.null.field.indicator
Indicator to determine how the CSV reader determines whether a field is null. Valid values are EMPTY_SEPARATORS, EMPTY_QUOTES, BOTH, or NEITHER (the default). For more information, see the Opencsv documentation.
- Importance: low
- Type: string
- Default Value: NEITHER
- Valid values: EMPTY_SEPARATORS, EMPTY_QUOTES, BOTH, NEITHER
csv.quote.char
The character that is used to quote a field, in integer form (ASCII code). Quoting is typically needed when the csv.separator.char character appears within the data.
- Importance: low
- Type: int
- Default Value: 34
csv.separator.char
The character that separates each field, in integer form (ASCII code). Typically, a CSV file uses , (44) and a TSV file uses tab (9). If csv.separator.char is defined as null (0), then the RFC 4180 parser is used by default; this is the equivalent of csv.rfc.4180.parser.enabled = true.
- Importance: low
- Type: int
- Default Value: 44
csv.skip.lines
Number of lines to skip at the beginning of the file.
- Importance: low
- Type: int
- Default Value: 0
csv.strict.quotes
Sets the strict quotes setting. If set to true, characters outside the quotes are ignored.
- Importance: low
- Type: boolean
- Default Value: false
csv.verify.reader
Flag to determine if the reader should be verified.
- Importance: low
- Type: boolean
- Default Value: true
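A minimal sketch for a hypothetical pipe-delimited file quoted with single quotes (| is ASCII 124, ' is ASCII 39):

# Hypothetical: pipe separator, single-quote quoting
csv.separator.char=124
csv.quote.char=39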