MongoDB Atlas Source Connector for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see the MongoDB Kafka Connector documentation.

The Kafka Connect MongoDB Atlas Source connector for Confluent Cloud moves data from a MongoDB replica set into an Apache Kafka® cluster. The connector configures and consumes change stream event documents and publishes them to a Kafka topic.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Features

Note

This connector supports MongoDB Atlas only and will not work with a self-managed MongoDB database.

The MongoDB Atlas Source connector provides the following features:

  • Topics created automatically: The connector automatically creates Kafka topics using the naming convention: <prefix>.<database-name>.<collection-name>. The topics are created with the properties: topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3. You add the prefix when setting up the connection in the Quick Start steps. If you want to create topics with specific settings, create the topics before running this connector.
  • Database authentication: Uses password authentication.
  • Output data formats: Supports Avro, Byte, JSON (schemaless), JSON Schema, Protobuf, or String output data. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • Large size records: Supports MongoDB documents up to 8 MB in size on dedicated Kafka clusters and up to 2 MB on other cluster types.
  • Select configuration properties:
    • poll.await.time.ms: The amount of time to wait before checking for new results in the change stream.
    • poll.max.batch.size: The maximum number of change stream documents to include in a single batch when polling for new data. This setting can be used to limit the amount of data buffered internally in the connector.
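
For example, a connector that should poll less often but return larger batches could override both properties in its configuration. This is a hypothetical tuning fragment; the values are illustrative only:

"poll.await.time.ms": "10000",
"poll.max.batch.size": "2000"

Longer waits and larger batches reduce polling overhead at the cost of added latency; the defaults (5000 ms and 1000 records) suit most workloads.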

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Configuration properties that are not shown in the Cloud Console use the default values. For more information, see the MongoDB Source Connector Configuration Properties.

For more information, see the Confluent Cloud connector limitations.

Quick Start

Use this quick start to get up and running with the Confluent Cloud MongoDB Atlas Source connector. The quick start provides the basics of selecting the connector and configuring it to stream change events from a MongoDB Atlas database into Kafka topics.

Note

This connector supports MongoDB Atlas only and will not work with a self-managed MongoDB database.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
  • The Confluent Cloud CLI installed and configured for the cluster. See Install and Configure the Confluent Cloud CLI.
  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • Access to a MongoDB database. Note that the connection user must have the find privileged action to query the MongoDB database. For more information, see Query and Write Actions.
  • The connector automatically creates Kafka topics using the naming convention: <prefix>.<database-name>.<collection-name>. The topics are created with the properties: topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3. If you want to create topics with specific settings, create the topics before running this connector (a pre-creation example follows this list).
  • Customers with a VPC-peered Kafka cluster in Confluent Cloud on AWS should consider configuring a PrivateLink Connection between MongoDB Atlas and the AWS VPC. For additional networking considerations, see Internet access to resources. To use static egress IPs, see Static Egress IP Addresses.
  • Kafka cluster credentials. You can use one of the following ways to get credentials:
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use the Confluent Cloud CLI or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
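
For reference, the following sketch shows how a topic might be pre-created with the ccloud CLI before running the connector, with a raised message size limit for documents larger than 2 MB. The topic name assumes a hypothetical prefix mongo, database sample_db, and collection sample_coll:

ccloud kafka topic create mongo.sample_db.sample_coll --partitions 1 --config max.message.bytes=8388608

This mirrors the name the connector would otherwise create automatically, so records flow into the pre-created topic.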

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the MongoDB Atlas Source connector icon.

Step 4: Set up the connection.

Complete the following and click Continue.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.
  1. Enter a connector name.

  2. Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.

  3. Enter a topic prefix. The connector automatically creates Kafka topics using the naming convention: <prefix>.<database-name>.<collection-name>. The topics are created with the properties: topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3. If you want to create topics with specific settings, create the topics before running this connector. If you are using a dedicated cluster and have MongoDB documents larger than 2 MB, create the topic beforehand with the topic configuration max.message.bytes set to a value greater than your largest document size (the maximum is 8388608 bytes).

  4. Enter a JSON object that maps change stream document namespaces to topics. For example, {"db": "dbTopic", "db.coll": "dbCollTopic"} maps all change stream documents from the db database to dbTopic.<collectionName>, except documents from the db.coll namespace, which map to the dbCollTopic topic. To map all messages to a single topic, use *. For example, {"*": "everyThingTopic", "db.coll": "exceptionToTheRuleTopic"} maps all change stream documents to everyThingTopic except the db.coll messages. Note that any prefix configuration still applies. If multiple collections whose records have varying schemas are mapped to a single topic with AVRO, JSON_SR, or PROTOBUF output, multiple schemas are registered under a single subject name. If these schemas are not backward compatible with each other, the connector fails until you change the schema compatibility setting in Confluent Cloud Schema Registry. (A combined illustration of this map and the pipeline from step 9 appears after these steps.)

  5. Enter the MongoDB Atlas database details. For the Connection host, use only the hostname address and not a full URL. For example: cluster4-r5q3r7.gcp.mongodb.net.

  6. Enter your MongoDB collection name. If left blank, all collections in the supplied database are watched.

  7. Enter the amount of time to wait before checking for new results on the change stream. This defaults to 5000 ms (5 seconds).

  8. Enter the maximum number of records to batch together for processing. The default is 1000 records.

  9. Enter an array of JSON objects that represents the pipeline operations to filter or modify the change stream output. For example: [{"$match": {"ns.coll": {"$regex": /^(collection1|collection2)$/}}}] sets the connector to listen to the collection1 and collection2 collections only. The default is an empty array.

  10. Select whether to copy existing data from source collections and convert it to change stream events on the respective topics. Any change to the data that occurs during the copy process is applied after the copy is completed. Note that setting copy.existing to true can lead to duplicated records if the connector restarts. For example, when using a Schema Registry-based output format, if the schemas are not backward compatible with each other, the connector fails and restarts, which causes records to be duplicated. If not selected, this property defaults to false.

  11. Enter a regex that matches the namespaces from which the existing documents are copied. A namespace is represented as databaseName.collectionName. For example, stats\.page.* matches all collections that start with page in the stats database.

  12. Enter an array of JSON objects that describes the pipeline operations to run when copying existing data. It is applied to existing documents that are being copied. The default is an empty array.

  13. Select the output message format: Avro, Byte, JSON (schemaless), JSON Schema, Protobuf, or String. A valid schema must be available in Schema Registry to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  14. Select whether to publish only the fullDocument field instead of the full change stream document. If set to true, the connector automatically sets change.stream.full.document to updateLookup. The default is false.

  15. Select what to return for update operations when using a change stream: default or updateLookup. If set to updateLookup, the change stream includes both a delta describing the changes and a copy of the entire document that was changed. The default option includes only the updated fields, not the full document.

  16. Select an output JSON formatter: DefaultJson, ExtendedJson, or SimplifiedJson. The default is DefaultJson.

  17. Enter the number of tasks for the connector. Refer to Confluent Cloud connector limitations for additional information.

  18. Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.

    Note

    Configuration properties that are not listed use the default values. For default values and property definitions, see the MongoDB Source Connector Configuration Properties.
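
To make steps 4 and 9 concrete, suppose a hypothetical topic prefix mongo and a database named inventory. The following field values route the inventory.orders collection to its own topic and limit the change stream to two collections. Note that when the pipeline is written as strict JSON, the regular expression appears as a string rather than a /.../ literal:

Topic namespace map:

{"inventory.orders": "ordersTopic"}

Pipeline:

[{"$match": {"ns.coll": {"$regex": "^(orders|shipments)$"}}}]

With these values, change stream documents from inventory.orders land in mongo.ordersTopic (the prefix still applies, per step 4), while documents from inventory.shipments land in mongo.inventory.shipments.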

Step 5: Launch the connector.

Verify the connection details by previewing the running configuration. Once you’ve validated that the properties are configured to your satisfaction, click Launch.

Tip

For information about previewing your connector output, see Connector Data Previews.

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running. It may take a few minutes.

Step 7: Check the Kafka topic.

After the connector is running, verify that MongoDB documents are populating the Kafka topic. If copy.existing is set to true and the connector restarts for any reason, you may see duplicated records in the topic.
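
For instance, an insert into a watched collection produces a change stream event document shaped roughly like the following. This is a sketch of the standard MongoDB change event structure; the database, collection, and field values are hypothetical:

{
  "_id": {"_data": "<resume-token>"},
  "operationType": "insert",
  "ns": {"db": "inventory", "coll": "orders"},
  "documentKey": {"_id": {"$oid": "635a8f0a1e0c2b3d4e5f6a7b"}},
  "fullDocument": {"_id": {"$oid": "635a8f0a1e0c2b3d4e5f6a7b"}, "item": "widget", "qty": 5}
}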

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

For additional information about this connector, see the MongoDB Kafka Connector documentation. Note that not all connector features are provided in the Confluent Cloud connector.

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent Cloud CLI to manage your resources in Confluent Cloud.

Using the Confluent Cloud CLI

Complete the following steps to set up and run the connector using the Confluent Cloud CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors.

Enter the following command to list available connectors:

ccloud connector-catalog list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

ccloud connector-catalog describe <connector-catalog-name>

For example:

ccloud connector-catalog describe MongoDbAtlasSource

Example output:

Following are the required configs:
connector.class: MongoDbAtlasSource
name
kafka.api.key
kafka.api.secret
topic.prefix
connection.host
connection.user
connection.password
database
output.data.format
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
    "connector.class": "MongoDbAtlasSource",
    "name": "<my-connector-name>",
    "kafka.api.key": "<my-kafka-api-key>",
    "kafka.api.secret": "<my-kafka-api-secret>",
    "topic.prefix": "<topic-prefix>",
    "connection.host": "<database-host-address>",
    "connection.user": "<database-username>",
    "connection.password": "<database-password>",
    "database": "<database-name>",
    "collection": "<database-collection-name>",
    "poll.await.time.ms": "5000",
    "poll.max.batch.size": "1000",
    "copy.existing": "true",
    "output.data.format": "JSON",
    "tasks.max": "1"
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • (Optional) "topic.prefix": Enter a topic prefix. The connector automatically creates Kafka topics using the naming convention: <prefix>.<database-name>.<collection-name>. The tables are created with the properties: topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3. If you want to create topics with specific settings, create the topics before running this connector. If you are using a dedicated cluster and have a MongoDb document of size more than 2MB, create the topic beforehand with topic config max.message.bytes set equals to more than the largest document size (max is 8388608 bytes).
  • (Optional) "topic.namespace.map": A JSON map that maps change stream document namespaces to topics. For example: {\"db\": \"dbTopic\", \"db.coll\": \"dbCollTopic\"} will map all change stream documents from the db database to dbTopic.<collectionName> apart from any documents from the db.coll namespace which map to the dbCollTopic topic. If you want to map all messages to a single topic use *. For example: {\"*\": \"everyThingTopic\", \"db.coll\": \"exceptionToTheRuleTopic\"} will map all change stream documents to the everyThingTopic apart from the db.coll messages. Note that any prefix configuration will still apply. If multiple collections with records having varying schema are mapped to a single topic with AVRO, JSON_SR, and PROTOBUF, then multiple schemas will be registered under a single subject name. If these schemas are not backward compatible to each other, the connector will fail until you change the schema compatibility in Confluent Cloud Schema Registry.
  • "connection.host": The MongoDB host. Use a hostname address and not a full URL. For example: cluster4-r5q3r7.gcp.mongodb.net.
  • "collection": The collection name. If the property is not used, all collections are watched in the supplied database.
  • (Optional) "poll.await.time.ms": The amount of time to wait before checking for new results in the change stream. If not used, this property defaults to 5000 ms (5 seconds).
  • (Optional) "poll.max.batch.size": The maximum number of change stream documents to include in a single batch when polling for new data. This setting can be used to limit the amount of data buffered internally in the connector. If not used, this property defaults to 1000 records.
  • (Optional) "pipeline": An array of JSON objects that represents the pipeline operations to filter or modify the change stream output. For example: [{"$match": {"ns.coll": {"$regex": /^(collection1|collection2)$/}}}] sets the connector to listen to the collection1 and collection2 collections only. If not used, this property defaults to an empty array.
  • (Optional) "copy.existing": Select whether to copy existing data from source collections and convert them to Change Stream events on the respective topics. Any change to the data that occurs during the copy process is applied once the copy is completed. Note that setting copy.existing to true can lead to duplicated records if the connector restarts. For example: when using Schema Registry-based output format, if the schemas are not backward compatible to each other, the connector will fail and restart, which will cause records to be duplicated. If not used, this property defaults to false.
  • (Optional) "copy.existing.namespace.regex": Regex that matches the namespaces from which the existing documents are copied. A namespace is represented as databaseName.collectionName. For example, stats\.page.* matches all collections that start with page in the stats database.
  • (Optional) "copy.existing.pipeline": An array of JSON objects that describes the pipeline operations to run when copying existing data. It is applied to existing documents that are being copied. If not used, this property defaults to an empty array.
  • "output.data.format": Sets the output message format (data coming from the connector). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • "tasks.max": Enter the number of tasks in use by the connector. Refer to Confluent Cloud connector limitations for additional information.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.

Note

Configuration properties that are not listed use the default values. For default values and property definitions, see MongoDB Source Connector Configuration Properties.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

ccloud connector create --config <file-name>.json

For example:

ccloud connector create --config mongo-db-source.json

Example output:

Created connector confluent-mongodb-source lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

ccloud connector list

Example output:

ID          |            Name           | Status  | Type
+-----------+---------------------------+---------+-------+
lcc-ix4dl   | confluent-mongodb-source  | RUNNING | source

Step 6: Check the Kafka topic.

After the connector is running, verify that MongoDB documents are populating the Kafka topic. If copy.existing is set to true and the connector restarts for any reason, you may see duplicated records in the topic.
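
To spot-check the topic contents from the command line, a command along these lines reads records from the beginning of a topic. The topic name is a placeholder; substitute your prefix, database, and collection:

ccloud kafka topic consume -b <prefix>.<database-name>.<collection-name>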

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

For additional information about this connector, see the MongoDB Kafka Connector documentation. Note that not all connector features are provided in the Confluent Cloud connector.

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent Cloud CLI to manage your resources in Confluent Cloud.
