Consume Stream Shares from Confluent Cloud

Accessing data that is shared with you through Stream Sharing requires that you have or create a Confluent Cloud account.

View stream shares

You can view the data that has been shared with you in the Cloud Console or you can retrieve a list of data shared with you using the Confluent CLI or by making a REST API call.

You can view unredeemed shares from multiple organizations, but after you redeem a share from a particular organization, you can view that share only when you are logged in to the organization in which it was redeemed.
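The REST listing mentioned above can be sketched without sending anything over the network. This example only builds the request; the `/cdx/v1/consumer-shares` path and the Cloud API key Basic-auth scheme are assumptions based on the Confluent Cloud API reference, and the key and secret are placeholders:

```python
import base64
import urllib.request

# Placeholders -- substitute your own Confluent Cloud API key and secret.
API_KEY = "YOUR_CLOUD_API_KEY"
API_SECRET = "YOUR_CLOUD_API_SECRET"

def build_list_shares_request() -> urllib.request.Request:
    """Build (but do not send) a GET request that lists consumer shares."""
    token = base64.b64encode(f"{API_KEY}:{API_SECRET}".encode()).decode()
    return urllib.request.Request(
        "https://api.confluent.cloud/cdx/v1/consumer-shares",
        headers={"Authorization": f"Basic {token}"},
        method="GET",
    )

req = build_list_shares_request()
# Sending it requires network access: urllib.request.urlopen(req)
```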

To access the Stream shares page:

  1. Log in to your Confluent Cloud account.

  2. From the navigation menu, select Stream shares.

    The Stream shares page opens.

  3. View the Data shared with you section. This section contains a tile for each topic that has been shared with you. Each tile shows the organization name and account of the provider that shared the data and how many days are left to access the shared data; if the share is not yet redeemed, the tile also shows how many days remain to redeem the invitation. Click a tile to see an overview of the topic metadata.


Access your stream shares

You should receive an email when data has been shared with you. If you expect shared data but have not received an email, check your junk or spam folders.

By default, you have one week to access the link to shared data or redeem the access token, although the provider can invalidate the link at any time. If you cannot find the email, or if you experience issues while trying to access shared data, you should contact the data provider.

You can redeem a stream share only once. After you receive the credentials for the stream share, you cannot redeem that same stream share again. Once redeemed, the stream share becomes active and remains active until you delete it or the provider revokes access.

You will receive an email if a topic that was shared with you is deleted or if the share is removed.

Tip

If the shared topic is on a cluster that uses a private endpoint, you must set up network connectivity to the Confluent Cloud network of the cluster before you can access the topic. For more information, see Use Google Cloud Private Service Connect with Confluent Cloud, Use AWS PrivateLink with Confluent Cloud, or Use Azure Private Link with Confluent Cloud.

Prerequisites

  • You must already have, or you must create, a Confluent Cloud account associated with the email address to which the data was shared.
  • If the provider of the stream is using a private endpoint, you must have the ID of your cloud provider account to access the stream.

To access the shared data:

  1. Click the Access button in the email. The Confluent Cloud Console will open in a browser.

  2. Using the same email address where the data was shared, log in to your Confluent Cloud account.

  3. In the Read and process shared data box, click the Access button. If the provider of the stream is using a private endpoint, you must enter the ID of your cloud provider account and click Next.

  4. Select the programming language you will use to create a Kafka client and click Next.

  5. Click Generate & download credentials.

    Your credentials are generated and added to the configuration file shown in the browser.

  6. Click Copy.

  7. Save the configuration file.

    The file will resemble the following, containing the bootstrap server, username, and password required to access the shared data.

    # Topic: Users
    # Consumer group ID requires a prefix of stream-share.ss-3vpvd
    # Example: group.id = stream-share.ss-3vpvd.[Add optional consumer group suffix]
    
    # Required connection configs for Kafka producer, consumer, and admin
    bootstrap.servers=SASL_SSL://pkc-ABC123.us-central1.gcp.confluent.cloud:9092
    security.protocol=SASL_SSL
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='ABCDEFGHIJKLMNOP' password='ABCD+hzdHX2aTWVT5+OcD0CPhzMPl+ScjpjBdPN8b1RZu2h9efHeZULkLACpVmko';
    sasl.mechanism=PLAIN
    # Required for correctness in Apache Kafka clients prior to 2.6
    client.dns.lookup=use_all_dns_ips
    
    # Best practice for higher availability in Apache Kafka clients prior to 3.0
    session.timeout.ms=45000
    
    # Best practice for Kafka producer to prevent data loss
    acks=all
    
    # Required connection configs for Confluent Cloud Schema Registry
    schema.registry.url=https://psrc-1a2xyz.us-central1.gcp.confluent.cloud
    basic.auth.credentials.source=USER_INFO
    basic.auth.user.info=EXAMPLE6THZZR5GY:g13tSFeALOM2nPpaF+V368MG+JNPxOoUbDKlFTPB2kRabTgf3vO9jtvClexample
    

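A configuration file in this format can be loaded into a plain dictionary for use with a Kafka client. The following is a minimal stdlib-only sketch; the sample text below reuses values from the example file above:

```python
def parse_properties(text: str) -> dict[str, str]:
    """Parse Java-style properties (key=value lines, '#' comments) into a dict."""
    conf: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")  # split on the first '=' only,
        conf[key.strip()] = value.strip()    # so values may contain '='
    return conf

sample = """\
# Topic: Users
bootstrap.servers=SASL_SSL://pkc-ABC123.us-central1.gcp.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
"""
conf = parse_properties(sample)
```

Splitting on the first `=` only matters for values such as `sasl.jaas.config`, which themselves contain `=` characters.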
Create a Kafka client

After you have the configuration file, you can write a Kafka client application to connect to the shared data. For more information, see Configure clients from the Confluent CLI.
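As the comment at the top of the downloaded file notes, consumer group IDs must start with the share-specific prefix. The following sketch builds a consumer configuration with a valid `group.id`, reusing the example prefix `stream-share.ss-3vpvd` from above (your share's prefix will differ); the consumer loop itself, which assumes the confluent-kafka Python client, is indicated in comments:

```python
# Required group.id prefix taken from the example configuration file above;
# substitute the prefix from your own downloaded file.
GROUP_PREFIX = "stream-share.ss-3vpvd"

def consumer_config(base: dict[str, str], suffix: str = "demo") -> dict[str, str]:
    """Combine the downloaded connection config with a valid consumer group ID."""
    conf = dict(base)  # leave the parsed base config unchanged
    conf["group.id"] = f"{GROUP_PREFIX}.{suffix}"  # prefix is mandatory
    conf["auto.offset.reset"] = "earliest"         # read the topic from the start
    return conf

# Values from the example configuration file above.
base = {
    "bootstrap.servers": "SASL_SSL://pkc-ABC123.us-central1.gcp.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
}
conf = consumer_config(base)

# With confluent-kafka installed, a minimal consumer loop would resemble:
#   from confluent_kafka import Consumer
#   consumer = Consumer(conf)
#   consumer.subscribe(["Users"])
#   while True:
#       msg = consumer.poll(1.0)
#       if msg is not None and msg.error() is None:
#           print(msg.value())
```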