.. _datadog_metrics_sink_connector_config:

Configuration Properties
------------------------

To use this connector, specify the name of the connector class in the ``connector.class`` configuration property.

.. codewithvars:: properties

    connector.class=io.confluent.connect.datadog.metrics.DatadogMetricsSinkConnector

Connector-specific configuration properties are described below.

Credentials
^^^^^^^^^^^

``datadog.api.key``
  The API key used to authenticate all requests made to the Datadog API.

  * Type: string
  * Importance: high

Datadog
^^^^^^^

``datadog.domain``
  Identifies the location of your Datadog service. The value is used in the URL when establishing a connection or sending data to the API. The value must be either ``COM`` or ``EU``.

  * Type: string
  * Importance: high

General
^^^^^^^

``behavior.on.error``
  The connector's behavior if the provided metric does not contain the expected data.

  * Type: string
  * Default: FAIL
  * Valid Values: one of [FAIL, LOG, IGNORE]

    - ``FAIL``: Stops the connector when an error occurs.
    - ``LOG``: Logs the error message to a Kafka topic and continues processing.
    - ``IGNORE``: Continues to process the next set of records.

  * Importance: medium

``max.retry.time.seconds``
  If an error occurs while executing a post request, the connector retries until this time (in seconds) elapses.

  * Type: int
  * Default: 10
  * Valid Values: [0,...]
  * Importance: low
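
As an illustrative sketch only, these connection and error-handling settings might appear together in a connector configuration as shown below. The API key, input topic name, and retry time are placeholders, and ``topics`` is the standard Kafka Connect sink property, which is not described in this section.

.. codewithvars:: properties

    # Datadog connection settings (placeholder API key)
    datadog.api.key=<your-datadog-api-key>
    datadog.domain=COM

    # Error handling: log failed metrics and keep processing,
    # retrying failed post requests for up to 30 seconds
    behavior.on.error=LOG
    max.retry.time.seconds=30

    # Standard sink-connector property naming the input topic (placeholder)
    topics=datadog-metrics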
Reporter
^^^^^^^^

``reporter.result.topic.name``
  The name of the topic to produce records to after successfully processing a sink record. Use ``${connector}`` within the pattern to specify the current connector name. Leave blank to disable this reporting behavior.

  * Type: string
  * Default: ${connector}-success
  * Valid Values: empty, or a topic name that may include the ``${connector}`` substitution; after replacing ``${connector}``, the name must be a valid topic name (matches the regex ``[a-zA-Z0-9._-]{1,249}``)
  * Importance: medium

``reporter.result.topic.replication.factor``
  The replication factor of the result topic when it is automatically created by this connector. This determines how many broker failures can be tolerated before data loss cannot be prevented. This should be 1 in development environments and ALWAYS at least 3 in production environments.

  * Type: short
  * Default: 3
  * Valid Values: [1,...]
  * Importance: medium

``reporter.result.topic.partitions``
  The number of partitions in the result topic when it is automatically created by this connector. This should typically match the number of input partitions in order to handle the potential throughput.

  * Type: int
  * Default: 1
  * Valid Values: [1,...]
  * Importance: medium

``reporter.error.topic.name``
  The name of the topic to produce records to after failing to sink a record. Use ``${connector}`` within the pattern to specify the current connector name. Leave blank to disable error reporting behavior.

  * Type: string
  * Default: ${connector}-error
  * Valid Values: empty, or a topic name that may include the ``${connector}`` substitution; after replacing ``${connector}``, the name must be a valid topic name (matches the regex ``[a-zA-Z0-9._-]{1,249}``)
  * Importance: medium

``reporter.error.topic.replication.factor``
  The replication factor of the error topic when it is automatically created by this connector. This determines how many broker failures can be tolerated before data loss cannot be prevented. This should be 1 in development environments and ALWAYS at least 3 in production environments.

  * Type: short
  * Default: 3
  * Valid Values: [1,...]
  * Importance: medium

``reporter.error.topic.partitions``
  The number of partitions in the error topic when it is automatically created by this connector. This should typically match the number of input partitions in order to handle the potential throughput.

  * Type: int
  * Default: 1
  * Valid Values: [1,...]
  * Importance: medium

``reporter.bootstrap.servers``
  A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client makes use of all servers, irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. The list should be in the form ``host1:port1,host2:port2,...``. Because these servers are only used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

  * Type: list
  * Valid Values: non-empty list
  * Importance: high

Formatter
^^^^^^^^^

``reporter.result.topic.key.format``
  The format in which the result report key is serialized.

  * Type: string
  * Default: json
  * Valid Values: one of [json]
  * Importance: medium
  * Dependents: ``reporter.result.topic.key.format.schemas.enable``, ``reporter.result.topic.key.format.schemas.cache.size``

``reporter.result.topic.value.format``
  The format in which the result report value is serialized.

  * Type: string
  * Default: json
  * Valid Values: one of [json]
  * Importance: medium
  * Dependents: ``reporter.result.topic.value.format.schemas.cache.size``, ``reporter.result.topic.value.format.schemas.enable``

``reporter.error.topic.key.format``
  The format in which the error report key is serialized.

  * Type: string
  * Default: json
  * Valid Values: one of [json]
  * Importance: medium
  * Dependents: ``reporter.error.topic.key.format.schemas.cache.size``, ``reporter.error.topic.key.format.schemas.enable``

``reporter.error.topic.value.format``
  The format in which the error report value is serialized.

  * Type: string
  * Default: json
  * Valid Values: one of [json]
  * Importance: medium
  * Dependents: ``reporter.error.topic.value.format.schemas.cache.size``, ``reporter.error.topic.value.format.schemas.enable``

JSON Formatter
^^^^^^^^^^^^^^

``reporter.result.topic.key.format.schemas.cache.size``
  The maximum number of schemas that can be cached in the JSON formatter.

  * Type: int
  * Default: 128
  * Valid Values: [0,...,2048]
  * Importance: medium

``reporter.result.topic.key.format.schemas.enable``
  Include schemas within each of the serialized values and keys.

  * Type: boolean
  * Default: false
  * Importance: medium

``reporter.result.topic.value.format.schemas.cache.size``
  The maximum number of schemas that can be cached in the JSON formatter.

  * Type: int
  * Default: 128
  * Valid Values: [0,...,2048]
  * Importance: medium

``reporter.result.topic.value.format.schemas.enable``
  Include schemas within each of the serialized values and keys.

  * Type: boolean
  * Default: false
  * Importance: medium

``reporter.error.topic.key.format.schemas.cache.size``
  The maximum number of schemas that can be cached in the JSON formatter.

  * Type: int
  * Default: 128
  * Valid Values: [0,...,2048]
  * Importance: medium

``reporter.error.topic.key.format.schemas.enable``
  Include schemas within each of the serialized values and keys.

  * Type: boolean
  * Default: false
  * Importance: medium

``reporter.error.topic.value.format.schemas.cache.size``
  The maximum number of schemas that can be cached in the JSON formatter.

  * Type: int
  * Default: 128
  * Valid Values: [0,...,2048]
  * Importance: medium

``reporter.error.topic.value.format.schemas.enable``
  Include schemas within each of the serialized values and keys.

  * Type: boolean
  * Default: false
  * Importance: medium

Proxy
^^^^^

``datadog.proxy.url``
  The proxy URL, if a proxy is required. Format: ``protocol://host:port``, for example ``https://localhost:port``.

  * Type: string
  * Importance: low

``datadog.proxy.user``
  The proxy username used for proxy authentication.

  * Type: string
  * Importance: low

``datadog.proxy.password``
  The proxy password used for proxy authentication.

  * Type: string
  * Importance: low
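
As a sketch only, the reporter and proxy settings described above might be combined as shown below. The broker address, proxy host, and proxy credentials are placeholders, and a replication factor of 1 is suitable only for a development environment.

.. codewithvars:: properties

    # Reporter: where success and error reports are written (placeholder broker address)
    reporter.bootstrap.servers=localhost:9092
    reporter.result.topic.name=${connector}-success
    reporter.error.topic.name=${connector}-error
    # Replication factor of 1 is for development only; use at least 3 in production
    reporter.result.topic.replication.factor=1
    reporter.error.topic.replication.factor=1

    # Optional proxy used to reach the Datadog API (placeholder host and credentials)
    datadog.proxy.url=https://proxy.example.com:8080
    datadog.proxy.user=<proxy-username>
    datadog.proxy.password=<proxy-password>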
.. _datadog_metrics_sink_connector_license_config:

|cp| license
^^^^^^^^^^^^

``confluent.topic.bootstrap.servers``
  A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form ``host1:port1,host2:port2,...``. Because these servers are only used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers. You may want more than one, though, in case a server is down.

  * Type: list
  * Importance: high

``confluent.topic``
  Name of the Kafka topic used for |cp| configuration, including licensing information.

  * Type: string
  * Default: _confluent-command
  * Importance: low

``confluent.topic.replication.factor``
  The replication factor for the |ak| topic used for |cp| configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with fewer than 3 brokers, you must set this to the number of brokers (typically 1).

  * Type: int
  * Default: 3
  * Importance: low

----------------------------
Confluent license properties
----------------------------

.. include:: ../includes/security-info.rst

.. include:: ../includes/platform-license.rst

.. include:: ../includes/security-configs.rst

.. _datadog_metrics_sink_license_topic_configuration:

.. include:: ../includes/platform-license-detail.rst

.. include:: ../includes/overriding-default-config-properties.rst
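
As a sketch only (the broker address is a placeholder), a single-broker development environment might override the licensing defaults described above as follows:

.. codewithvars:: properties

    # Placeholder broker address for the licensing cluster
    confluent.topic.bootstrap.servers=localhost:9092
    # Single-broker development environment; the default of 3 is for production use
    confluent.topic.replication.factor=1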