Extended Log File Format Source Connector for Confluent Platform
This connector is used to stream Extended Log File Format files from a directory while converting the data to a strongly typed schema.
To use this connector, specify the name of this connector class in the connector.class configuration property:
connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirELFSourceConnector
The other connector-specific configuration properties are described below.
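For example, a minimal standalone worker configuration might look like the following sketch. The connector name, topic, directory paths, and file pattern are placeholders; adjust them for your environment.
name=elf-source
connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirELFSourceConnector
tasks.max=1
topic=elf-logs
input.path=/var/spooldir/input
error.path=/var/spooldir/error
finished.path=/var/spooldir/finished
input.file.pattern=^.*\.log$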
Configuration Properties
General
topic
The Kafka topic to write the data to.
Importance: high
Type: string
batch.size
The number of records that should be returned with each batch.
Importance: low
Type: int
Default Value: 1000
empty.poll.wait.ms
The amount of time to wait if a poll returns an empty list of records.
Importance: low
Type: long
Default Value: 500
Valid values: [1,…,9223372036854775807]
Metadata
metadata.field
The name of the field in the value where the metadata will be stored.
Importance: low
Type: string
Default Value: metadata
metadata.location
The location where metadata about the input file will be stored. FIELD - Metadata about the file will be stored in a field in the value of the record. HEADERS - Metadata about the input file will be stored as headers on the record. NONE - No metadata about the input file will be stored.
Importance: low
Type: string
Default Value: HEADERS
Valid values:
NONE, HEADERS, FIELD
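For example, to store the file metadata in the record value instead of record headers, a fragment like the following could be used; the field name shown is purely illustrative:
metadata.location=FIELD
metadata.field=source_file_metadata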
Auto topic creation
For more information about Auto topic creation, see Configuring Auto Topic Creation for Source Connectors.
Configuration properties accept regular expressions (regex) that are defined as Java regex.
topic.creation.groups
A list of group aliases that are used to define per-group topic configurations for matching topics. A default group always exists and matches all topics.
Type: List of String types
Default: empty
Possible Values: The values of this property refer to any additional groups. A default group is always defined for topic configurations.
topic.creation.$alias.replication.factor
The replication factor for new topics created by the connector. This value must not be larger than the number of brokers in the Kafka cluster. If this value is larger than the number of Kafka brokers, an error occurs when the connector attempts to create a topic. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.
Type: int
Default: n/a
Possible Values: >= 1 for a specific valid value, or -1 to use the Kafka broker's default value.
topic.creation.$alias.partitions
The number of topic partitions created by this connector. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.
Type: int
Default: n/a
Possible Values: >= 1 for a specific valid value, or -1 to use the Kafka broker's default value.
topic.creation.$alias.include
A list of strings that represent regular expressions that match topic names. This list is used to include topics with matching values, and apply this group's specific configuration to the matching topics. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group.
Type: List of String types
Default: empty
Possible Values: Comma-separated list of exact topic names or regular expressions.
topic.creation.$alias.exclude
A list of strings representing regular expressions that match topic names. This list is used to exclude topics with matching values from getting the group's specific configuration. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group. Note that exclusion rules override any inclusion rules for topics.
Type: List of String types
Default: empty
Possible Values: Comma-separated list of exact topic names or regular expressions.
topic.creation.$alias.${kafkaTopicSpecificConfigName}
Any of the topic-level broker configurations described in Changing Broker Configurations Dynamically for the version of the Kafka broker where the records will be written. The broker's topic-level configuration value is used if the configuration is not specified for the rule. $alias applies to the default group as well as any group defined in topic.creation.groups.
Type: property values
Default: Kafka broker value
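As an illustration, the fragment below defines the required settings for the default group plus a hypothetical group named production that applies extra partitions and a retention override to matching topics; the group alias, regex, and values are assumptions, not recommendations:
topic.creation.default.replication.factor=3
topic.creation.default.partitions=1
topic.creation.groups=production
topic.creation.production.include=elf\..*
topic.creation.production.replication.factor=3
topic.creation.production.partitions=6
topic.creation.production.retention.ms=604800000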
File System
error.path
A directory to place the files that could not be read successfully. This directory must exist and be writable by the user running Kafka Connect.
Importance: high
Type: string
Valid values: Absolute path to a directory that exists and is writable.
input.file.pattern
Regular expression to check input file names against. This expression must match the entire filename, the equivalent of Matcher.matches(). For valid syntax definitions, see the Java Pattern class documentation.
Importance: high
Type: string
input.path
The directory where Kafka Connect reads files that are processed. This directory must exist and be writable by the user running Connect.
Importance: high
Type: string
Valid values: Absolute path to a directory that exists and is writable.
finished.path
The directory where Connect puts files that are successfully processed. This directory must exist and be writable by the user running Connect.
Importance: high
Type: string
halt.on.error
Sets whether the task halts when it encounters an error or continues to the next file.
Importance: high
Type: boolean
Default Value: true
cleanup.policy
Determines how the connector should clean up files that have been successfully processed. DELETE removes the file from the filesystem. MOVE will move the file to a finished directory. MOVEBYDATE will move the file to a finished directory with subdirectories by date.
Importance: medium
Type: string
Default Value: MOVE
Valid values:
DELETE, MOVE, MOVEBYDATE
task.partitioner
The task partitioner implementation used when the connector is configured to use more than one task. Each task uses it to identify which files it will process, ensuring that each file is assigned to only one task.
Importance: medium
Type: string
Default Value: ByName
Valid values:
ByName
file.buffer.size.bytes
The size of the buffer for the BufferedInputStream that is used to interact with the file system.
Importance: low
Type: int
Default Value: 131072
Valid values: [1,…]
file.minimum.age.ms
The amount of time in milliseconds after the file was last written to before the file can be processed.
Importance: low
Type: long
Default Value: 0
Valid values: [0,…]
files.sort.attributes
The attributes each file will use to determine the sort order. Name is the name of the file. Length is the length of the file, preferring larger files first. LastModified is the LastModified attribute of the file, preferring older files first.
Importance: low
Type: list
Default Value: [NameAsc]
Valid values:
NameAsc, NameDesc, LengthAsc, LengthDesc, LastModifiedAsc, LastModifiedDesc
processing.file.extension
Before a file is processed, it is renamed to indicate that it is currently being processed. This extension is appended to the end of the file name.
Importance: low
Type: string
Default Value: .PROCESSING
Valid values: Matches regex( ^.*\..+$ )
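Putting several of the file system properties together, a configuration fragment might look like the following sketch; the directories and file pattern are placeholders:
input.path=/var/spooldir/input
finished.path=/var/spooldir/finished
error.path=/var/spooldir/error
input.file.pattern=^access.*\.log$
cleanup.policy=MOVEBYDATE
file.minimum.age.ms=5000
files.sort.attributes=LastModifiedAsc
halt.on.error=false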
Schema
key.schema
The schema for the key written to Kafka.
Importance: high
Type: string
value.schema
The schema for the value written to Kafka.
Importance: high
Type: string
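When schemas are supplied explicitly, each is passed as a JSON string. The shape below follows the schema JSON used elsewhere in the spooldir connector family and should be treated as an illustrative assumption; the schema name and field are hypothetical:
key.schema={"name":"com.example.elf.LogEntryKey","type":"STRUCT","isOptional":false,"fieldSchemas":{"cs-uri":{"type":"STRING","isOptional":false}}}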
Schema Generation
schema.generation.enabled
Flag to determine if schemas should be dynamically generated. If set to true, key.schema and value.schema can be omitted, but schema.generation.key.name and schema.generation.value.name must be set.
Importance: medium
Type: boolean
schema.generation.key.fields
The field(s) to use to build a key schema. This is only used during schema generation.
Importance: medium
Type: list
schema.generation.key.name
The name of the generated key schema.
Importance: medium
Type: string
Default Value: com.github.jcustenborder.kafka.connect.model.Key
schema.generation.value.name
The name of the generated value schema.
Importance: medium
Type: string
Default Value: com.github.jcustenborder.kafka.connect.model.Value
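For example, to let the connector generate schemas rather than defining them by hand, a fragment along these lines could be used; the key field and schema names are hypothetical:
schema.generation.enabled=true
schema.generation.key.fields=c-ip
schema.generation.key.name=com.example.elf.AccessLogKey
schema.generation.value.name=com.example.elf.AccessLogValue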
Timestamps
timestamp.mode
Determines how the connector sets the timestamp for the ConnectRecord. If set to FIELD, the timestamp is read from a field in the value. This field cannot be optional and must be a Timestamp. Specify the field in timestamp.field. If set to FILE_TIME, the last time the file was modified is used. If set to PROCESS_TIME (the default), the time the record is read is used.
Importance: medium
Type: string
Default Value: PROCESS_TIME
Valid values:
FIELD, FILE_TIME, PROCESS_TIME
timestamp.field
The field in the value schema that contains the parsed timestamp for the record. This field cannot be marked as optional and must be a Timestamp.
Importance: medium
Type: string
parser.timestamp.date.formats
The date formats that are expected in the file. This is a list of strings that are used to parse the date fields in order. The most accurate date format should be first in the list. See the Java documentation for more information.
Importance: low
Type: list
Default Value: [yyyy-MM-dd'T'HH:mm:ss, yyyy-MM-dd' 'HH:mm:ss]
parser.timestamp.timezone
The time zone used for all parsed dates.
Importance: low
Type: string
Default Value: UTC
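For example, to take the record timestamp from a field parsed out of the file rather than from processing time, a fragment like the following could be used; the field name is hypothetical and the formats simply repeat the defaults:
timestamp.mode=FIELD
timestamp.field=request_timestamp
parser.timestamp.date.formats=yyyy-MM-dd'T'HH:mm:ss,yyyy-MM-dd' 'HH:mm:ss
parser.timestamp.timezone=UTC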