.. _kafkarest_api:

API Reference for |crest-long|
==============================

This topic provides the |crest-long| API reference documentation. You can use the |crest| to produce and consume messages to an |ak-tm| cluster. For a tutorial using the |crest| API, see this `step-by-step guide `__.

.. include:: ../.hidden/docs-common/home/includes/cloud-platform-cta.rst

.. _kakarest-api-content-types:

Content Types
-------------

The |crest| uses content types for both requests and responses to indicate these data properties:

- Serialization format: ``json``
- API version (for example, ``v2`` or ``v3``)
- Embedded formats: ``json``, ``binary``, ``avro``, ``protobuf``, and ``jsonschema``

.. important:: The ``jsonschema`` and ``protobuf`` embedded types are supported beginning with |crest| v2.

|crest| supports the `Avro® `__, `JSON Schema `__, and `Protobuf `__ serialization formats. The versions of the |crest| API are ``v2`` and ``v3``.

The embedded format is the format of the data you are producing or consuming. These formats are embedded into requests or responses in the serialization format. For example, you can provide ``binary`` data in a ``json``-serialized request; in this case the data should be provided as a base64-encoded string.

- For ``v2``, the content type is ``application/vnd.kafka.binary.v2+json``.
- For ``v3``, the content type is ``application/json``.

If your data is JSON, you can use ``json`` as the embedded format and embed it directly:

- For ``v2``, the content type is ``application/vnd.kafka.json.v2+json``.
- For ``v3``, the content type is ``application/json``.

With the ``avro``, ``protobuf``, and ``jsonschema`` embedded types, you can directly embed JSON-formatted data along with a schema (or schema ID) in the request. These types use |sr|, and the schema ID is included in the payload in addition to the data.

- The Avro content type is ``application/vnd.kafka.avro.v2+json``.
- The Protobuf content type is ``application/vnd.kafka.protobuf.v2+json``.
- The JSON Schema content type is ``application/vnd.kafka.jsonschema.v2+json``.

The format for the content type is::

   application/vnd.kafka[.embedded_format].[api_version]+[serialization_format]

For more information, see :ref:`schemaregistry_api`.

The embedded format can be omitted when there are no embedded messages (that is, for metadata requests you can use ``application/vnd.kafka.v2+json``).

The preferred content type for ``v2`` is ``application/vnd.kafka.[embedded_format].v2+json``. However, other less specific content types are permitted, including ``application/vnd.kafka+json`` to indicate no specific API version requirement (the most recent stable version will be used), ``application/json``, and ``application/octet-stream``. The latter two are only supported for compatibility and ease of use. In all cases, if the embedded format is omitted, ``binary`` is assumed.

Although using these less specific values is permitted, to remain compatible with future versions you *should* specify preferred content types in requests and check the content types of responses. Your requests *should* specify the most specific format and version information possible via the HTTP ``Accept`` header.

For ``v2``, you can specify format and version as follows::

   Accept: application/vnd.kafka.v2+json

For ``v3``, do not specify the version. The latest version (``v3``) will be used::

   Accept: application/json

The server also supports content negotiation, so you may include multiple, weighted preferences::

   Accept: application/vnd.kafka.v2+json; q=0.9, application/json; q=0.5

This can be useful when, for example, a new version of the API is preferred but you cannot be certain it is available yet.

..
seealso:: :ref:`rest-api-usage-examples`, which shows how to test the APIs from the command line using `curl `__

Errors
------

All API endpoints use a standard error message format for any requests that return an HTTP status indicating an error (any 400 or 500 status). For example, a request entity that omits a required field may generate the following response:

.. sourcecode:: http

   HTTP/1.1 422 Unprocessable Entity
   Content-Type: application/vnd.kafka.v3+json

   {
       "error_code": 422,
       "message": "records may not be empty"
   }

Although it is good practice to check the status code, you may safely parse the response of any non-DELETE API call and check for the presence of an ``error_code`` field to detect errors. Some error codes are used frequently across the entire API, and you will probably want general-purpose code to handle these, whereas most other error codes need to be handled on a per-request basis.

.. http:any:: /

   :statuscode 401:
      * Error code 40101 -- Kafka authentication error.
   :statuscode 403:
      * Error code 40301 -- Kafka authorization error.
   :statuscode 404:
      * Error code 40401 -- Topic not found.
      * Error code 40402 -- Partition not found.
   :statuscode 422: The request payload is either improperly formatted or contains semantic errors.
   :statuscode 500:
      * Error code 50001 -- ZooKeeper error.
      * Error code 50002 -- Kafka error.
      * Error code 50003 -- Retriable Kafka error. Although the operation failed, it's possible that retrying the request will be successful.
      * Error code 50101 -- Only TLS/SSL endpoints were found for the specified broker, but TLS/SSL is not yet supported for the invoked API.

.. _rest-proxy-v2:

|crest| API v2
--------------

.. tip:: See :ref:`rest-api-usage-examples` to learn how to test these API endpoints from the command line.

------
Topics
------

The topics resource provides information about the topics in your |ak| cluster and their current state. It also lets you produce messages by making ``POST`` requests to specific topics.

..
http:get:: /topics

   Get a list of Kafka topics.

   :>json array topics: List of topic names

   **Example request**:

   .. sourcecode:: http

      GET /topics HTTP/1.1
      Host: kafkaproxy.example.com
      Accept: application/vnd.kafka.v2+json

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      ["topic1", "topic2"]

.. http:get:: /topics/(string:topic_name)

   Get metadata about a specific topic.

   :param string topic_name: Name of the topic to get metadata about

   :>json string name: Name of the topic
   :>json map configs: Per-topic configuration overrides
   :>json array partitions: List of partitions for this topic
   :>json int partitions[i].partition: ID of this partition
   :>json int partitions[i].leader: Broker ID of the leader for this partition
   :>json array partitions[i].replicas: List of replicas for this partition, including the leader
   :>json array partitions[i].replicas[j].broker: Broker ID of the replica
   :>json boolean partitions[i].replicas[j].leader: true if this replica is the leader for the partition
   :>json boolean partitions[i].replicas[j].in_sync: true if this replica is currently in sync with the leader

   :statuscode 404:
      * Error code 40401 -- Topic not found

   **Example request**:

   .. sourcecode:: http

      GET /topics/test HTTP/1.1
      Accept: application/vnd.kafka.v2+json

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "name": "test",
          "configs": {
              "cleanup.policy": "compact"
          },
          "partitions": [
              {
                  "partition": 1,
                  "leader": 1,
                  "replicas": [
                      { "broker": 1, "leader": true, "in_sync": true },
                      { "broker": 2, "leader": false, "in_sync": true }
                  ]
              },
              {
                  "partition": 2,
                  "leader": 2,
                  "replicas": [
                      { "broker": 1, "leader": false, "in_sync": true },
                      { "broker": 2, "leader": true, "in_sync": true }
                  ]
              }
          ]
      }

.. http:post:: /topics/(string:topic_name)

   Produce messages to a topic, optionally specifying keys or partitions for the messages.
   If no partition is provided, one will be chosen based on the hash of the key. If no key is provided, the partition will be chosen for each message in a round-robin fashion.

   For the ``avro``, ``protobuf``, and ``jsonschema`` embedded formats, you must provide information about schemas, and the |crest| must be configured with the URL to access |sr| (``schema.registry.url``). Schemas may be provided as the full schema encoded as a string, or, after the initial request, as the schema ID returned with the first response.

   :param string topic_name: Name of the topic to produce the messages to

   :>json int key_schema_id: The ID for the schema used to produce keys, or null if keys were not used
   :>json int value_schema_id: The ID for the schema used to produce values
   :>jsonarr object offsets: List of partitions and offsets the messages were published to
   :>jsonarr int offsets[i].partition: Partition the message was published to, or null if publishing the message failed
   :>jsonarr long offsets[i].offset: Offset of the message, or null if publishing the message failed
   :>jsonarr long offsets[i].error_code: An error code classifying the reason this operation failed, or null if it succeeded.

      * 1 - Non-retriable Kafka exception
      * 2 - Retriable Kafka exception; the message might be sent successfully if retried

   :>jsonarr string offsets[i].error: An error message describing why the operation failed, or null if it succeeded

   :statuscode 404:
      * Error code 40401 -- Topic not found
   :statuscode 408:
      * Error code 40801 -- Schema registration or lookup failed
   :statuscode 422:
      * Error code 42201 -- Request includes keys and uses a format that requires schemas, but does not include the ``key_schema`` or ``key_schema_id`` fields
      * Error code 42202 -- Request includes values and uses a format that requires schemas, but does not include the ``value_schema`` or ``value_schema_id`` fields
      * Error code 42205 -- Request includes an invalid schema

   **Example binary request**:

   ..
sourcecode:: http

      POST /topics/test HTTP/1.1
      Host: kafkaproxy.example.com
      Content-Type: application/vnd.kafka.binary.v2+json
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

      {
          "records": [
              { "key": "a2V5", "value": "Y29uZmx1ZW50" },
              { "value": "a2Fma2E=", "partition": 1 },
              { "value": "bG9ncw==" }
          ]
      }

   **Example binary response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "key_schema_id": null,
          "value_schema_id": null,
          "offsets": [
              { "partition": 2, "offset": 100 },
              { "partition": 1, "offset": 101 },
              { "partition": 2, "offset": 102 }
          ]
      }

   **Example Avro request**:

   .. sourcecode:: http

      POST /topics/test HTTP/1.1
      Host: kafkaproxy.example.com
      Content-Type: application/vnd.kafka.avro.v2+json
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

      {
          "value_schema": "{\"name\":\"int\",\"type\": \"int\"}",
          "records": [
              { "value": 12 },
              { "value": 24, "partition": 1 }
          ]
      }

   **Example Avro response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "key_schema_id": null,
          "value_schema_id": 32,
          "offsets": [
              { "partition": 2, "offset": 103 },
              { "partition": 1, "offset": 104 }
          ]
      }

   **Example JSON request**:

   .. sourcecode:: http

      POST /topics/test HTTP/1.1
      Host: kafkaproxy.example.com
      Content-Type: application/vnd.kafka.json.v2+json
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

      {
          "records": [
              { "key": "somekey", "value": {"foo": "bar"} },
              { "value": ["foo", "bar"], "partition": 1 },
              { "value": 53.5 }
          ]
      }

   **Example JSON response**:

   ..
sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "key_schema_id": null,
          "value_schema_id": null,
          "offsets": [
              { "partition": 2, "offset": 100 },
              { "partition": 1, "offset": 101 },
              { "partition": 2, "offset": 102 }
          ]
      }

----------
Partitions
----------

The partitions resource provides per-partition metadata, including the current leaders and replicas for each partition. It also allows you to consume and produce messages to a single partition using ``GET`` and ``POST`` requests.

.. http:get:: /topics/(string:topic_name)/partitions

   Get a list of partitions for the topic.

   :param string topic_name: Name of the topic

   :>jsonarr int partition: ID of the partition
   :>jsonarr int leader: Broker ID of the leader for this partition
   :>jsonarr array replicas: List of brokers acting as replicas for this partition
   :>jsonarr int replicas[i].broker: Broker ID of the replica
   :>jsonarr boolean replicas[i].leader: true if this broker is the leader for the partition
   :>jsonarr boolean replicas[i].in_sync: true if the replica is in sync with the leader

   :statuscode 404:
      * Error code 40401 -- Topic not found

   **Example request**:

   .. sourcecode:: http

      GET /topics/test/partitions HTTP/1.1
      Host: kafkaproxy.example.com
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      [
          {
              "partition": 1,
              "leader": 1,
              "replicas": [
                  { "broker": 1, "leader": true, "in_sync": true },
                  { "broker": 2, "leader": false, "in_sync": true },
                  { "broker": 3, "leader": false, "in_sync": false }
              ]
          },
          {
              "partition": 2,
              "leader": 2,
              "replicas": [
                  { "broker": 1, "leader": false, "in_sync": true },
                  { "broker": 2, "leader": true, "in_sync": true },
                  { "broker": 3, "leader": false, "in_sync": false }
              ]
          }
      ]

.. http:get:: /topics/(string:topic_name)/partitions/(int:partition_id)

   Get metadata about a single partition in the topic.
   :param string topic_name: Name of the topic
   :param int partition_id: ID of the partition to inspect

   :>json int partition: ID of the partition
   :>json int leader: Broker ID of the leader for this partition
   :>json array replicas: List of brokers acting as replicas for this partition
   :>json int replicas[i].broker: Broker ID of the replica
   :>json boolean replicas[i].leader: true if this broker is the leader for the partition
   :>json boolean replicas[i].in_sync: true if the replica is in sync with the leader

   :statuscode 404:
      * Error code 40401 -- Topic not found
      * Error code 40402 -- Partition not found

   **Example request**:

   .. sourcecode:: http

      GET /topics/test/partitions/1 HTTP/1.1
      Host: kafkaproxy.example.com
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "partition": 1,
          "leader": 1,
          "replicas": [
              { "broker": 1, "leader": true, "in_sync": true },
              { "broker": 2, "leader": false, "in_sync": true },
              { "broker": 3, "leader": false, "in_sync": false }
          ]
      }

.. http:get:: /topics/(string:topic_name)/partitions/(int:partition_id)/offsets

   Get a summary of the offsets in this topic partition.

   :param string topic_name: Name of the topic
   :param int partition_id: ID of the partition to inspect

   :>json int beginning_offset: First offset in this partition
   :>json int end_offset: Last offset in this partition

   :statuscode 404:
      * Error code 40401 -- Topic not found
      * Error code 40402 -- Partition not found

   **Example request**:

   .. sourcecode:: http

      GET /topics/test/partitions/1/offsets HTTP/1.1
      Host: kafkaproxy.example.com
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "beginning_offset": 10,
          "end_offset": 50
      }

.. _post-topic-string-avro:

..
http:post:: /topics/(string:topic_name)/partitions/(int:partition_id)

   Produce messages to one partition of the topic.

   For the Avro, JSON Schema, and Protobuf embedded formats, you must provide information about schemas. Schemas may be provided as the full schema encoded as a string, or, after the initial request, as the schema ID returned with the first response.

   :param string topic_name: Topic to produce the messages to
   :param int partition_id: Partition to produce the messages to

   :>json int key_schema_id: The ID for the schema used to produce keys, or null if keys were not used
   :>json int value_schema_id: The ID for the schema used to produce values
   :>jsonarr object offsets: List of partitions and offsets the messages were published to
   :>jsonarr int offsets[i].partition: Partition the message was published to. This will be the same as the ``partition_id`` parameter and is provided only to maintain consistency with responses from producing to a topic
   :>jsonarr long offsets[i].offset: Offset of the message
   :>jsonarr long offsets[i].error_code: An error code classifying the reason this operation failed, or null if it succeeded.

      * 1 - Non-retriable Kafka exception
      * 2 - Retriable Kafka exception; the message might be sent successfully if retried

   :>jsonarr string offsets[i].error: An error message describing why the operation failed, or null if it succeeded

   :statuscode 404:
      * Error code 40401 -- Topic not found
      * Error code 40402 -- Partition not found
   :statuscode 422:
      * Error code 42201 -- Request includes keys and uses a format that requires schemas, but does not include the ``key_schema`` or ``key_schema_id`` fields
      * Error code 42202 -- Request includes values and uses a format that requires schemas, but does not include the ``value_schema`` or ``value_schema_id`` fields
      * Error code 42205 -- Request includes an invalid schema

   **Example binary request**:

   ..
sourcecode:: http

      POST /topics/test/partitions/1 HTTP/1.1
      Host: kafkaproxy.example.com
      Content-Type: application/vnd.kafka.binary.v2+json
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

      {
          "records": [
              { "key": "a2V5", "value": "Y29uZmx1ZW50" },
              { "value": "a2Fma2E=" }
          ]
      }

   **Example binary response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "key_schema_id": null,
          "value_schema_id": null,
          "offsets": [
              { "partition": 1, "offset": 100 },
              { "partition": 1, "offset": 101 }
          ]
      }

   **Example Avro request**:

   .. sourcecode:: http

      POST /topics/test/partitions/1 HTTP/1.1
      Host: kafkaproxy.example.com
      Content-Type: application/vnd.kafka.avro.v2+json
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

      {
          "value_schema": "{\"name\":\"int\",\"type\": \"int\"}",
          "records": [
              { "value": 25 },
              { "value": 26 }
          ]
      }

   **Example Avro response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "key_schema_id": null,
          "value_schema_id": 32,
          "offsets": [
              { "partition": 1, "offset": 100 },
              { "partition": 1, "offset": 101 }
          ]
      }

   **Example JSON request**:

   .. sourcecode:: http

      POST /topics/test/partitions/1 HTTP/1.1
      Host: kafkaproxy.example.com
      Content-Type: application/vnd.kafka.json.v2+json
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

      {
          "records": [
              { "key": "somekey", "value": {"foo": "bar"} },
              { "value": 53.5 }
          ]
      }

   **Example JSON response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "key_schema_id": null,
          "value_schema_id": null,
          "offsets": [
              { "partition": 1, "offset": 100 },
              { "partition": 1, "offset": 101 }
          ]
      }

   **Example PROTOBUF request**:

   ..
sourcecode:: http

      POST /topics/test/partitions/1 HTTP/1.1
      Content-Type: application/vnd.kafka.protobuf.v2+json
      Accept: application/vnd.kafka.v2+json, application/json

      {
          "value_schema": "syntax=\"proto3\"; message Foo { string f1 = 1; }",
          "records": [{"value": {"f1": "foo"}}]
      }

   **Example PROTOBUF response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "key_schema_id": null,
          "value_schema_id": 32,
          "offsets": [
              { "partition": 1, "offset": 100 },
              { "partition": 1, "offset": 101 }
          ]
      }

   **Example JSONSCHEMA request**:

   .. sourcecode:: http

      POST /topics/test/partitions/1 HTTP/1.1
      Content-Type: application/vnd.kafka.jsonschema.v2+json
      Accept: application/vnd.kafka.v2+json, application/json

      {
          "value_schema": "{\"type\":\"object\",\"properties\":{\"f1\":{\"type\":\"string\"}}}",
          "records": [{"value": {"f1": "bar"}}]
      }

   **Example JSONSCHEMA response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "key_schema_id": null,
          "value_schema_id": 32,
          "offsets": [
              { "partition": 1, "offset": 100 },
              { "partition": 1, "offset": 101 }
          ]
      }

---------
Consumers
---------

The consumers resource provides access to the current state of consumer groups, lets you create a consumer in a consumer group, and lets you consume messages from topics and partitions. |crest| can convert data stored in Kafka in serialized form into a JSON-compatible embedded format. These formats are supported:

- Raw binary data is encoded as base64 strings
- Avro data is converted into embedded JSON objects
- JSON is embedded directly
- Protobuf
- JSON Schema

Because consumers are stateful, any consumer instances created with the REST API are tied to a specific |crest| instance. A full URL is provided when the instance is created, and it should be used to construct any subsequent requests. Failing to use the returned URL for future consumer requests will result in ``404`` errors because the consumer instance will not be found.
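As a concrete illustration of the ``binary`` embedded format described above, the following sketch decodes the base64-encoded keys and values from a binary-consumer response body. The sample response body is illustrative; the decoding logic is standard base64:

```python
import base64
import json

# Illustrative response body from a binary-format consumer read
# (Accept: application/vnd.kafka.binary.v2+json).
response_body = """
[
    {"topic": "test", "key": "a2V5", "value": "Y29uZmx1ZW50", "partition": 1, "offset": 100},
    {"topic": "test", "key": null, "value": "a2Fma2E=", "partition": 2, "offset": 101}
]
"""

def decode_record(record):
    """Base64-decode the key and value of one binary-embedded record."""
    return {
        **record,
        "key": base64.b64decode(record["key"]) if record["key"] is not None else None,
        "value": base64.b64decode(record["value"]),
    }

records = [decode_record(r) for r in json.loads(response_body)]
for r in records:
    print(r["topic"], r["partition"], r["offset"], r["value"])
```

The same encoding applies in the other direction: when producing with the ``binary`` embedded format, base64-encode the raw bytes before placing them in the ``records`` array.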
If a |crest| instance is shut down, it will attempt to cleanly destroy any consumers before it is terminated.

.. http:post:: /consumers/(string:group_name)

   Create a new consumer instance in the consumer group. The ``format`` parameter controls the deserialization of data from Kafka and the content type that *must* be used in the ``Accept`` header of subsequent read API requests performed against this consumer. For example, if the creation request specifies ``avro`` for the format, subsequent read requests should use ``Accept: application/vnd.kafka.avro.v2+json``.

   Note that the response includes a URL including the host, since the consumer is stateful and tied to a specific |crest| instance. Subsequent examples in this section use a ``Host`` header for this specific |crest| instance.

   :param string group_name: The name of the consumer group to join

   Consumer configuration settings (for example, for the broker connection) may be included in the request body; default values are taken from the |crest| config file.

   :>json string instance_id: Unique ID for the consumer instance in this group.
   :>json string base_uri: Base URI used to construct URIs for subsequent requests against this consumer instance. This will be of the form ``http://hostname:port/consumers/consumer_group/instances/instance_id``.

   :statuscode 409:
      * Error code 40902 -- Consumer instance with the specified name already exists.
   :statuscode 422:
      * Error code 42204 -- Invalid consumer configuration. One of the settings specified in the request contained an invalid value.

   **Example request**:

   .. sourcecode:: http

      POST /consumers/testgroup/ HTTP/1.1
      Host: kafkaproxy.example.com
      Content-Type: application/vnd.kafka.v2+json

      {
          "name": "my_consumer",
          "format": "binary",
          "auto.offset.reset": "earliest",
          "auto.commit.enable": "false"
      }

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "instance_id": "my_consumer",
          "base_uri": "http://proxy-instance.kafkaproxy.example.com/consumers/testgroup/instances/my_consumer"
      }

   **Example PROTOBUF request**:

   ..
sourcecode:: http

      POST /consumers/testgroup/ HTTP/1.1
      Host: kafkaproxy.example.com
      Content-Type: application/vnd.kafka.protobuf.v2+json

      {
          "name": "my_consumer",
          "format": "protobuf",
          "auto.offset.reset": "earliest",
          "auto.commit.enable": "false"
      }

   **Example PROTOBUF response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.protobuf.v2+json

      {
          "instance_id": "my_consumer",
          "base_uri": "http://proxy-instance.kafkaproxy.example.com/consumers/my_protobuf_consumer"
      }

   **Example JSONSCHEMA request**:

   .. sourcecode:: http

      POST /consumers/testgroup/ HTTP/1.1
      Host: kafkaproxy.example.com
      Content-Type: application/vnd.kafka.jsonschema.v2+json

      {
          "name": "my_consumer",
          "format": "jsonschema",
          "auto.offset.reset": "earliest",
          "auto.commit.enable": "false"
      }

   **Example JSONSCHEMA response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.jsonschema.v2+json

      {
          "instance_id": "my_consumer",
          "base_uri": "http://proxy-instance.kafkaproxy.example.com/consumers/my_jsonschema_consumer"
      }

.. http:delete:: /consumers/(string:group_name)/instances/(string:instance)

   Destroy the consumer instance.

   Note that this request *must* be made to the specific |crest| instance holding the consumer instance.

   :param string group_name: The name of the consumer group
   :param string instance: The ID of the consumer instance

   :statuscode 404:
      * Error code 40403 -- Consumer instance not found

   **Example request**:

   .. sourcecode:: http

      DELETE /consumers/testgroup/instances/my_consumer HTTP/1.1
      Host: proxy-instance.kafkaproxy.example.com
      Content-Type: application/vnd.kafka.v2+json

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 204 No Content

.. http:post:: /consumers/(string:group_name)/instances/(string:instance)/offsets

   Commit a list of offsets for the consumer. When the post body is empty, it commits all the records that have been fetched by the consumer instance.
   Note that this request *must* be made to the specific |crest| instance holding the consumer instance.

   :param string group_name: The name of the consumer group
   :param string instance: The ID of the consumer instance

   :<jsonarr offsets: A list of offsets to commit
   :<jsonarr string offsets[i].topic: Name of the topic for which an offset is committed
   :<jsonarr int offsets[i].partition: Partition ID for which an offset is committed
   :<jsonarr int offsets[i].offset: Committed offset
   :<jsonarr string offsets[i].metadata: Metadata for the committed offset

   :statuscode 404:
      * Error code 40402 -- Partition not found
      * Error code 40403 -- Consumer instance not found

   **Example request**:

   .. sourcecode:: http

      GET /consumers/testgroup/instances/my_consumer/offsets HTTP/1.1
      Host: proxy-instance.kafkaproxy.example.com
      Content-Type: application/vnd.kafka.v2+json

      {
          "partitions": [
              { "topic": "test", "partition": 0 },
              { "topic": "test", "partition": 1 }
          ]
      }

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "offsets": [
              { "topic": "test", "partition": 0, "offset": 21, "metadata": "" },
              { "topic": "test", "partition": 1, "offset": 31, "metadata": "" }
          ]
      }

.. http:post:: /consumers/(string:group_name)/instances/(string:instance)/subscription

   Subscribe to the given list of topics or a topic pattern to get dynamically assigned partitions. If a prior subscription exists, it is replaced by the latest subscription.

   :param string group_name: The name of the consumer group
   :param string instance: The ID of the consumer instance

   :<jsonarr topics: A list of topics to subscribe to
   :<jsonarr string topics[i]: Name of the topic

   :statuscode 404:
      * Error code 40403 -- Consumer instance not found

   **Example request**:

   .. sourcecode:: http

      GET /consumers/testgroup/instances/my_consumer/subscription HTTP/1.1
      Host: proxy-instance.kafkaproxy.example.com
      Accept: application/vnd.kafka.v2+json

   **Example response**:

   ..
sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "topics": [ "test1", "test2" ]
      }

.. http:delete:: /consumers/(string:group_name)/instances/(string:instance)/subscription

   Unsubscribe from the topics currently subscribed to.

   Note that this request *must* be made to the specific |crest| instance holding the consumer instance.

   :param string group_name: The name of the consumer group
   :param string instance: The ID of the consumer instance

   :statuscode 404:
      * Error code 40403 -- Consumer instance not found

   **Example request**:

   .. sourcecode:: http

      DELETE /consumers/testgroup/instances/my_consumer/subscription HTTP/1.1
      Host: proxy-instance.kafkaproxy.example.com
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 204 No Content

.. http:post:: /consumers/(string:group_name)/instances/(string:instance)/assignments

   Manually assign a list of partitions to this consumer.

   :param string group_name: The name of the consumer group
   :param string instance: The ID of the consumer instance

   :<jsonarr partitions: A list of partitions to manually assign to this consumer
   :<jsonarr string partitions[i].topic: Name of the topic
   :<jsonarr int partitions[i].partition: Partition ID

   :statuscode 404:
      * Error code 40403 -- Consumer instance not found

   **Example request**:

   .. sourcecode:: http

      GET /consumers/testgroup/instances/my_consumer/assignments HTTP/1.1
      Host: proxy-instance.kafkaproxy.example.com
      Accept: application/vnd.kafka.v2+json

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "partitions": [
              { "topic": "test", "partition": 0 },
              { "topic": "test", "partition": 1 }
          ]
      }

.. http:post:: /consumers/(string:group_name)/instances/(string:instance)/positions

   Overrides the fetch offsets that the consumer will use for the next set of records to fetch.
   :param string group_name: The name of the consumer group
   :param string instance: The ID of the consumer instance

   :<jsonarr offsets: A list of offsets to seek to
   :<jsonarr string offsets[i].topic: Name of the topic
   :<jsonarr int offsets[i].partition: Partition ID
   :<jsonarr int offsets[i].offset: Offset to seek to

   :statuscode 404:
      * Error code 40403 -- Consumer instance not found

.. http:get:: /consumers/(string:group_name)/instances/(string:instance)/records

   Fetch records for the topics or partitions specified using one of the subscribe or assign APIs.

   :param string group_name: The name of the consumer group
   :param string instance: The ID of the consumer instance

   :>jsonarr string topic: The topic
   :>jsonarr string key: The message key, formatted according to the embedded format
   :>jsonarr string value: The message value, formatted according to the embedded format
   :>jsonarr int partition: Partition of the message
   :>jsonarr long offset: Offset of the message

   :statuscode 404:
      * Error code 40403 -- Consumer instance not found
   :statuscode 406:
      * Error code 40601 -- Consumer format does not match the embedded format requested by the ``Accept`` header.

   **Example binary request**:

   .. sourcecode:: http

      GET /consumers/testgroup/instances/my_consumer/records?timeout=3000&max_bytes=300000 HTTP/1.1
      Host: proxy-instance.kafkaproxy.example.com
      Accept: application/vnd.kafka.binary.v2+json

   **Example binary response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.binary.v2+json

      [
          { "topic": "test", "key": "a2V5", "value": "Y29uZmx1ZW50", "partition": 1, "offset": 100 },
          { "topic": "test", "key": "a2V5", "value": "a2Fma2E=", "partition": 2, "offset": 101 }
      ]

   **Example Avro request**:

   .. sourcecode:: http

      GET /consumers/avrogroup/instances/my_avro_consumer/records?timeout=3000&max_bytes=300000 HTTP/1.1
      Host: proxy-instance.kafkaproxy.example.com
      Accept: application/vnd.kafka.avro.v2+json

   **Example Avro response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.avro.v2+json

      [
          { "topic": "test", "key": 1, "value": { "id": 1, "name": "Bill" }, "partition": 1, "offset": 100 },
          { "topic": "test", "key": 2, "value": { "id": 2, "name": "Melinda" }, "partition": 2, "offset": 101 }
      ]

   **Example JSON request**:

   .. sourcecode:: http

      GET /consumers/jsongroup/instances/my_json_consumer/records?timeout=3000&max_bytes=300000 HTTP/1.1
      Host: proxy-instance.kafkaproxy.example.com
      Accept: application/vnd.kafka.json.v2+json

   **Example JSON response**:

   ..
sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.json.v2+json

      [
          { "topic": "test", "key": "somekey", "value": {"foo": "bar"}, "partition": 1, "offset": 10 },
          { "topic": "test", "key": "somekey", "value": ["foo", "bar"], "partition": 2, "offset": 11 }
      ]

-------
Brokers
-------

The brokers resource provides access to the current state of Kafka brokers in the cluster.

.. http:get:: /brokers

   Get a list of brokers.

   :>json array brokers: List of broker IDs

   **Example request**:

   .. sourcecode:: http

      GET /brokers HTTP/1.1
      Host: kafkaproxy.example.com
      Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json

   **Example response**:

   .. sourcecode:: http

      HTTP/1.1 200 OK
      Content-Type: application/vnd.kafka.v2+json

      {
          "brokers": [1, 2, 3]
      }

.. _rest-proxy-v3:

|crest| API v3
--------------

.. include:: includes/server-vs-standalone.rst

When using the API in |cs|, all paths should be prefixed with ``/kafka``, as opposed to Standalone |crest|. For example, the path to list clusters is:

* |cs|: ``/kafka/v3/clusters``
* Standalone |crest|: ``/v3/clusters``

|cs| provides an embedded instance of these APIs on the |ak| brokers for the v3 Admin API. The embedded APIs run on the Confluent HTTP service, ``confluent.http.server.listeners``. Therefore, if you have the HTTP server running, the |crest| v3 API is automatically available to you through the brokers. Note that the :ref:`Metadata Server (MDS) ` is also running on the Confluent HTTP service, as another endpoint available to you with additional configurations.

.. tip:: To learn more, see the following sections:

   - :ref:`rest-api-usage-examples` for how to test these API endpoints from the command line
   - :ref:`confluent-server-rest-config`
   - :ref:`Admin operations in Kafka REST API Features `
   - `Confluent Admin REST APIs demo `__

.. openapi:: ../.hidden/ce-kafka-rest/api/v3/consolidated-openapi.yaml
   :format: markdown
   :group:
   :examples:

..
_rest-api-usage-examples: REST API Usage Examples (curl) ------------------------------ This section provides a few examples of how to call the |crest-api| using `curl `__ commands to quickly test API endpoints from the command line. These examples demo the most recent API version, :ref:`rest-proxy-v3`, and JSON serialization format (see :ref:`kakarest-api-content-types`). (To test API calls for :ref:`rest-proxy-v2`, swap out ``v3`` for ``v2`` and remove “``/kafka``”. REST API v2 commands should not include “``kafka``”. Be sure to reference the :ref:`v2 API documentation `, as not all APIs shown for v3 are available in v2.) .. tip:: For examples of how to call these same APIs from source code for an app, see the `Confluent Admin REST APIs demo `__. A few logistics to take note of: - For your API testing, you may want to use `jq `_ along with ``--silent`` flag for `curl `__ to get nicely formatted output for the given commands. These additional formatting options are used in the examples below. - Although `jq has powerful filtering capabilities `__, you can pipe the ``curl`` and ``jq`` output through simple ``grep`` commands to further filter the results. This is demo'ed in the examples. - To get and set values using the APIs, you must know the URL for your cluster and the cluster ID. You can get this from the :ref:`Cluster Settings ` tab on |c3-short|. (`http://localhost:9021/ `_ on your web browser for a local cluster). - The examples show the default host and port to access the |ak| cluster on a local host (``localhost:8090``). -------------------------------- List and describe known clusters -------------------------------- To list and describe a cluster, use the API endpoint ``GET /clusters`` as shown. 
::

   curl --silent -X GET http://localhost:8090/kafka/v3/clusters/ | jq

Example and result:

::

   curl --silent -X GET http://localhost:8090/kafka/v3/clusters/ | jq

   {
     "kind": "KafkaClusterList",
     "metadata": {
       "self": "http://localhost:8090/kafka/v3/clusters",
       "next": null
     },
     "data": [
       {
         "kind": "KafkaCluster",
         "metadata": {
           "self": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg",
           "resource_name": "crn:///kafka=7cteo6omRwKaUFXj3BHxdg"
         },
         "cluster_id": "7cteo6omRwKaUFXj3BHxdg",
         "controller": {
           "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/brokers/0"
         },
         "acls": {
           "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/acls"
         },
         "brokers": {
           "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/brokers"
         },
         "broker_configs": {
           "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/broker-configs"
         },
         "consumer_groups": {
           "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/consumer-groups"
         },
         "topics": {
           "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics"
         },
         "partition_reassignments": {
           "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/-/partitions/-/reassignment"
         }
       }
     ]
   }

.. tip:: Currently, both |ak| and |crest| are only aware of the |ak| cluster pointed at by the ``bootstrap.servers`` configuration. Therefore, only one |ak| cluster will be returned in the response.

--------------
Create a topic
--------------

To create a topic, use the topics endpoint ``POST /clusters/{cluster_id}/topics`` as shown below.
::

   curl --silent -X POST -H "Content-Type: application/json" \
   --data '{"topic_name": "<topic_name>"}' http://localhost:8090/kafka/v3/clusters/<cluster_id>/topics | jq

Example and result:

::

   curl --silent -X POST -H "Content-Type: application/json" \
   --data '{"topic_name": "my-cool-topic"}' http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics | jq

   {
     "kind": "KafkaTopic",
     "metadata": {
       "self": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic",
       "resource_name": "crn:///kafka=7cteo6omRwKaUFXj3BHxdg/topic=my-cool-topic"
     },
     "cluster_id": "7cteo6omRwKaUFXj3BHxdg",
     "topic_name": "my-cool-topic",
     "is_internal": false,
     "replication_factor": 0,
     "partitions": {
       "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/partitions"
     },
     "configs": {
       "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/configs"
     },
     "partition_reassignments": {
       "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/partitions/-/reassignment"
     }
   }

--------------------------
Describe a specified topic
--------------------------

To get a full description of a specified topic, use the topics endpoint ``GET /clusters/{cluster_id}/topics/{topic_name}`` as shown below.
::

   curl --silent -X GET http://localhost:8090/kafka/v3/clusters/<cluster_id>/topics/<topic_name> | jq

Example and result:

::

   curl --silent -X GET http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic | jq

   {
     "kind": "KafkaTopic",
     "metadata": {
       "self": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic",
       "resource_name": "crn:///kafka=7cteo6omRwKaUFXj3BHxdg/topic=my-cool-topic"
     },
     "cluster_id": "7cteo6omRwKaUFXj3BHxdg",
     "topic_name": "my-cool-topic",
     "is_internal": false,
     "replication_factor": 1,
     "partitions": {
       "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/partitions"
     },
     "configs": {
       "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/configs"
     },
     "partition_reassignments": {
       "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/partitions/-/reassignment"
     }
   }

---------------
List all topics
---------------

To list detailed descriptions of all topics (internal and user-created topics), use the topics endpoint ``GET /clusters/{cluster_id}/topics/``. Example:

::

   curl --silent -X GET http://localhost:8090/kafka/v3/clusters/<cluster_id>/topics | jq

This provides a full description of every topic on the cluster, including replication factors, partitions, configs, and so forth. This output is similar to that of the |ak| command ``kafka-topics --describe`` (``kafka-topics --describe --bootstrap-server localhost:9092``).

--------------------
List all topic names
--------------------

To filter the topic list to show only topic names, use the endpoint ``GET /clusters/{cluster_id}/topics/`` as shown.

::

   curl --silent -X GET http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics | jq | grep '.topic_name'

This provides information similar to the |ak| command ``kafka-topics --list`` (``kafka-topics --list --bootstrap-server localhost:9092``).
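If ``jq`` is not installed on the machine you are testing from, topic names can still be pulled out of the raw response with standard POSIX tools. The following is a minimal sketch, not an official recipe; the ``response`` variable holds an abbreviated, hypothetical response body with the same shape as a real topics result:

.. sourcecode:: shell

   # Abbreviated, hypothetical response body; a real response carries many
   # more fields per topic (kind, metadata, configs, and so forth).
   response='{"data":[{"topic_name":"my-cool-topic"},{"topic_name":"my-hot-topic"}]}'

   # Split the JSON objects onto separate lines, then pull out each
   # topic_name value with sed. Prints one topic name per line.
   echo "$response" | tr ',' '\n' | sed -n 's/.*"topic_name":"\([^"]*\)".*/\1/p'

In a live setup you would replace the ``echo`` with the ``curl`` call shown above. Keep in mind that this kind of text-based extraction is fragile compared to a JSON-aware tool such as ``jq``.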
-----------------------------------
List topics with a specified prefix
-----------------------------------

To list all topics with a specified prefix, use the endpoint ``GET /clusters/{cluster_id}/topics/`` as shown.

::

   curl --silent -X GET http://localhost:8090/kafka/v3/clusters/<cluster_id>/topics | jq | grep '.topic_name' | grep '<prefix>'

Example and result:

::

   curl --silent -X GET http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics | jq | grep '.topic_name' | grep 'my-'

   "topic_name": "my-cool-topic",
   "topic_name": "my-hot-topic",

--------------
Delete a topic
--------------

To delete a specified topic, use the API endpoint ``DELETE /clusters/{cluster_id}/topics/{topic_name}``.

::

   curl --silent -X DELETE http://localhost:8090/kafka/v3/clusters/<cluster_id>/topics/<topic_name>

Example:

::

   curl --silent -X DELETE http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-legacy-topic | jq

You can list the topics again (or check in |c3-short|) to verify that the topic has been deleted.

-----------------
List broker tasks
-----------------

You can list the broker tasks by querying the endpoint ``GET /clusters/{cluster_id}/brokers/-/tasks`` as shown. This call provides more interesting information if, for example, you are running a multi-broker cluster with :ref:`sbc` enabled and are processing enough data to generate active broker tasks.

To list tasks on all brokers:

::

   curl --silent -X GET http://localhost:8090/kafka/v3/clusters/<cluster_id>/brokers/-/tasks | jq

To list the tasks on a specified broker, for example broker 3:

::

   curl --silent -X GET http://localhost:8090/kafka/v3/clusters/<cluster_id>/brokers/3/tasks | jq

See the :ref:`sbc-tutorial` or the :ref:`sbc-docker-demo` to experiment with |sbc|.

Accesslists
-----------

|crest-api| now includes accesslists (``allowlist`` and ``blocklist``) to precisely limit which APIs are accessible.
These lists can be configured with the ``api.endpoints.allowlist`` and ``api.endpoints.blocklist`` configurations, whose values are comma-separated lists of API identifiers. The possible API identifiers are the values of the ``@ResourceName`` annotation with which the API resources (classes or methods) are annotated.

---------
Allowlist
---------

When a non-empty ``allowlist`` is present, only the APIs that match an entry in the ``allowlist`` will be **accessible**. For example, the following configuration makes only the v3 cluster Admin APIs accessible.

.. sourcecode:: properties

   api.endpoints.allowlist=api.v3.clusters.*

---------
Blocklist
---------

When a non-empty ``blocklist`` is present, only the APIs that match an entry in the ``blocklist`` will be **inaccessible**. For example, the following configuration makes only the v3 cluster Admin APIs not accessible.

.. sourcecode:: properties

   api.endpoints.blocklist=api.v3.clusters.*

----------------------------------
Using both Allowlist and Blocklist
----------------------------------

When **both** an ``allowlist`` and a ``blocklist`` are present, only the APIs that match an entry in the ``allowlist`` will be accessible, **except** those that also match an entry in the ``blocklist``, which will not be accessible. The following configuration makes only the v3 cluster Admin APIs accessible, except for ``list``, which is not accessible.

.. sourcecode:: properties

   api.endpoints.allowlist=api.v3.clusters.*
   api.endpoints.blocklist=api.v3.clusters.list

.. tip::

   API identifiers do not support regular expressions. They must exactly match an existing API identifier value. It is just a style choice that the broader, class-level resource identifiers use the ``*`` character; it does not act as a regular expression wildcard.
   In the example given above, ``api.v3.clusters.*`` matches all v3 cluster APIs because ``api.v3.clusters.*`` (including the asterisk) is actually the identifier in the ``@ResourceName`` annotation put on the class containing these APIs.

---------------
API identifiers
---------------

Following is a list of the current API identifiers for |ak| REST.

.. note:: This list is provided as a convenience, and is subject to change as the API evolves. Since it is currently compiled as a manual update, the list is not guaranteed to always be complete.

.. csv-table::
   :header: "API identifiers (v2 and v3)"

   api.v2.brokers.*
   api.v2.brokers.list
   api.v2.consumers.*
   api.v2.consumers.assign
   api.v2.consumers.commit-offsets
   api.v2.consumers.consume-avro
   api.v2.consumers.consume-binary
   api.v2.consumers.consume-json
   api.v2.consumers.consume-json-schema
   api.v2.consumers.consume-protobuf
   api.v2.consumers.create
   api.v2.consumers.delete
   api.v2.consumers.get-assignments
   api.v2.consumers.get-committed-offsets
   api.v2.consumers.get-subscription
   api.v2.consumers.seek-to-beginning
   api.v2.consumers.seek-to-end
   api.v2.consumers.seek-to-offset
   api.v2.consumers.subscribe
   api.v2.consumers.unsubscribe
   api.v2.partitions.*
   api.v2.partitions.get
   api.v2.partitions.get-offsets
   api.v2.partitions.list
   api.v2.produce-to-partition.*
   api.v2.produce-to-partition.avro
   api.v2.produce-to-partition.binary
   api.v2.produce-to-partition.json
   api.v2.produce-to-partition.json-schema
   api.v2.produce-to-partition.protobuf
   api.v2.produce-to-topic.*
   api.v2.produce-to-topic.avro
   api.v2.produce-to-topic.binary
   api.v2.produce-to-topic.json
   api.v2.produce-to-topic.json-schema
   api.v2.produce-to-topic.protobuf
   api.v2.root.*
   api.v2.root.get
   api.v2.root.post
   api.v2.topics.*
   api.v2.topics.get
   api.v2.topics.list
   api.v3.acls.*
   api.v3.acls.create
   api.v3.acls.delete
   api.v3.acls.list
   api.v3.balancer.*
   api.v3.balancer.any-uneven-load.get
   api.v3.balancer.get
   api.v3.broker-configs.*
   api.v3.broker-configs.alter
   api.v3.broker-configs.delete
   api.v3.broker-configs.get
   api.v3.broker-configs.list
   api.v3.broker-configs.update
   api.v3.broker-replica-exclusions.*
   api.v3.broker-replica-exclusions.create
   api.v3.broker-replica-exclusions.delete
   api.v3.broker-replica-exclusions.list
   api.v3.broker-replica-exclusions.search-by-broker
   api.v3.broker-tasks.*
   api.v3.broker-tasks.list
   api.v3.broker-tasks.remove-broker.get
   api.v3.broker-tasks.remove-broker.list
   api.v3.broker-tasks.search-by-type
   api.v3.brokers-configs.*
   api.v3.brokers-configs.list
   api.v3.brokers.*
   api.v3.brokers.broker-tasks.list
   api.v3.brokers.broker-tasks.search-by-type
   api.v3.brokers.delete
   api.v3.brokers.get
   api.v3.brokers.list
   api.v3.cluster-configs.*
   api.v3.cluster-configs.alter
   api.v3.cluster-configs.delete
   api.v3.cluster-configs.get
   api.v3.cluster-configs.list
   api.v3.cluster-configs.update
   api.v3.clusters.*
   api.v3.clusters.get
   api.v3.clusters.list
   api.v3.consumer-assignments.*
   api.v3.consumer-assignments.get
   api.v3.consumer-assignments.list
   api.v3.consumer-group-lag-summary.*
   api.v3.consumer-group-lag-summary.get
   api.v3.consumer-groups.*
   api.v3.consumer-groups.get
   api.v3.consumer-groups.list
   api.v3.consumer-lags.*
   api.v3.consumer-lags.get
   api.v3.consumer-lags.list
   api.v3.consumers.*
   api.v3.consumers.get
   api.v3.consumers.list
   api.v3.last-produced-time.*
   api.v3.link-configs.*
   api.v3.link-configs.alter
   api.v3.link-configs.delete
   api.v3.link-configs.get
   api.v3.link-configs.list
   api.v3.link-configs.update
   api.v3.links.*
   api.v3.links.create
   api.v3.links.delete
   api.v3.links.get
   api.v3.links.list
   api.v3.mirrors.*
   api.v3.mirrors.create
   api.v3.mirrors.failover
   api.v3.mirrors.get
   api.v3.mirrors.list
   api.v3.mirrors.list-all
   api.v3.mirrors.pause
   api.v3.mirrors.promote
   api.v3.mirrors.resume
   api.v3.partition-reassignments.*
   api.v3.partition-reassignments.get
   api.v3.partition-reassignments.list
   api.v3.partition-reassignments.search-by-topic
   api.v3.partitions.*
   api.v3.partitions.get
   api.v3.partitions.list
   api.v3.partitions.replica-statuses.list
   api.v3.produce.*
   api.v3.produce.produce-to-topic
   api.v3.replica-statuses.*
   api.v3.replicas.*
   api.v3.replicas.get
   api.v3.replicas.list
   api.v3.replicas.search-by-broker
   api.v3.topic-configs.*
   api.v3.topic-configs.alter
   api.v3.topic-configs.delete
   api.v3.topic-configs.get
   api.v3.topic-configs.list
   api.v3.topic-configs.update
   api.v3.topics.*
   api.v3.topics.create
   api.v3.topics.delete
   api.v3.topics.get
   api.v3.topics.last-produced-time
   api.v3.topics.list
   api.v3.topics.replica-statuses.list
   api.v3.topics.replica-statuses.list-all

Suggested Resources
-------------------

- Blog post: `Use Cases and Architectures for HTTP and REST APIs with Apache Kafka `__
- `Confluent Admin REST APIs demo `__