Confluent REST Proxy API Reference¶
Content Types¶
The REST Proxy uses content types for both requests and responses to indicate these data properties:
- Serialization format: json
- API version (e.g. v2 or v3)
- Embedded formats: json, binary, avro, protobuf, and jsonschema
Important
The jsonschema and protobuf embedded types are supported beginning with REST Proxy v2.
REST Proxy supports the Avro®, JSON Schema, and Protobuf serialization formats. The versions of the REST Proxy API are v2 and v3.
The embedded format is the format of the data you are producing or consuming. These formats are embedded into requests or responses in the serialization format. For example, you can provide binary data in a json-serialized request; in this case the data should be provided as a base64-encoded string (see the sketch after the following list).
- For v2, the content type will be application/vnd.kafka.binary.v2+json.
- For v3, the content type will be application/json.
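As a rough illustration of the base64 requirement, here is a minimal Python sketch that produces binary records through the v2 API; the proxy address and topic name are hypothetical placeholders, not values from this reference:

import base64
import json

import requests

# Binary keys and values must be base64-encoded strings when they are
# embedded in a json-serialized request body.
key = base64.b64encode(b"key").decode("ascii")          # "a2V5"
value = base64.b64encode(b"confluent").decode("ascii")  # "Y29uZmx1ZW50"

body = {"records": [{"key": key, "value": value}]}

# Hypothetical proxy address and topic name; adjust for your deployment.
resp = requests.post(
    "http://localhost:8082/topics/test",
    headers={
        "Content-Type": "application/vnd.kafka.binary.v2+json",
        "Accept": "application/vnd.kafka.v2+json",
    },
    data=json.dumps(body),
)
print(resp.status_code, resp.json())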
If your data is JSON, you can use json as the embedded format and embed it directly:
- For v2, the content type will be application/vnd.kafka.json.v2+json.
- For v3, the content type will be application/json.
With the avro, protobuf, and jsonschema embedded formats, you can directly embed JSON-formatted data along with a schema (or schema ID) in the request. These formats use Schema Registry, and the schema ID is serialized along with the data.
- The Avro content type is application/vnd.kafka.avro.v2+json.
- The Protobuf content type is application/vnd.kafka.protobuf.v2+json.
- The JSON Schema content type is application/vnd.kafka.jsonschema.v2+json.
The format for the content type is:
application/vnd.kafka[.embedded_format].[api_version]+[serialization_format]
For more information, see Schema Registry API Reference.
The embedded format can be omitted when there are no embedded messages (i.e., for metadata requests you can use application/vnd.kafka.v2+json).
The preferred content type for v2 is application/vnd.kafka.[embedded_format].v2+json.
However, other less specific content types are permitted, including application/vnd.kafka+json to indicate no specific API version requirement (the most recent stable version will be used), application/json, and application/octet-stream. The latter two are only supported for compatibility and ease of use. In all cases, if the embedded format is omitted, binary is assumed. Although using these less specific values is permitted, to remain compatible with future versions you should specify preferred content types in requests and check the content types of responses.
Your requests should specify the most specific format and version information possible via the HTTP Accept header.
For v2, you can specify format and version as follows:
Accept: application/vnd.kafka.v2+json
For v3, do not specify the version. The latest version (v3) will be used:
Accept: application/json
The server also supports content negotiation, so you may include multiple, weighted preferences:
Accept: application/vnd.kafka.v2+json; q=0.9, application/json; q=0.5
This can be useful when, for example, a new version of the API is preferred but you cannot be certain it is available yet.
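For instance, a minimal Python sketch that sends such a weighted Accept header and then checks the content type of the response; the proxy address is a hypothetical placeholder:

import requests

# Prefer the explicit v2 content type, but fall back to plain JSON.
headers = {
    "Accept": "application/vnd.kafka.v2+json; q=0.9, application/json; q=0.5",
}

# Hypothetical proxy address.
resp = requests.get("http://localhost:8082/topics", headers=headers)
print(resp.headers.get("Content-Type"))  # check what the server actually returned
print(resp.json())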
See also
REST API Usage Examples (curl), which show how to test the APIs from the command line using curl
Errors¶
All API endpoints use a standard error message format for any requests that return an HTTP status indicating an error (any 400 or 500 statuses). For example, a request entity that omits a required field may generate the following response:
HTTP/1.1 422 Unprocessable Entity
Content-Type: application/vnd.kafka.v3+json
{
"error_code": 422,
"message": "records may not be empty"
}
Although it is good practice to check the status code, you may safely parse the response of any non-DELETE API call and check for the presence of an error_code field to detect errors.
Some error codes are used frequently across the entire API, and you will probably want to have general-purpose code to handle these, whereas most other error codes will need to be handled on a per-request basis.
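A minimal sketch of this error-checking pattern in Python, assuming a hypothetical proxy address and topic name:

import requests

# Hypothetical proxy address and topic name.
resp = requests.get(
    "http://localhost:8082/topics/does-not-exist",
    headers={"Accept": "application/vnd.kafka.v2+json"},
)

body = resp.json()
if "error_code" in body:
    # General-purpose handling for the common error codes listed below;
    # anything else is handled per request.
    if body["error_code"] == 40401:
        print("Topic not found:", body["message"])
    elif body["error_code"] in (50001, 50002, 50003):
        print("Server-side error:", body["message"])
    else:
        print("Error", body["error_code"], body["message"])
else:
    print("Success:", body)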
- ANY /¶
Status Codes: - 401 Unauthorized –
- Error code 40101 – Kafka Authentication Error.
- 403 Forbidden –
- Error code 40301 – Kafka Authorization Error.
- 404 Not Found –
- Error code 40401 – Topic not found.
- Error code 40402 – Partition not found.
- 422 Unprocessable Entity – The request payload is either improperly formatted or contains semantic errors
- 500 Internal Server Error –
- Error code 50001 – Zookeeper error.
- Error code 50002 – Kafka error.
- Error code 50003 – Retriable Kafka error. Although the operation failed, it’s possible that retrying the request will be successful.
- Error code 50101 – Only SSL endpoints were found for the specified broker, but SSL is not supported for the invoked API yet.
REST Proxy API v2¶
Tip
See REST API Usage Examples (curl) to learn how to test these API endpoints from the command line.
Topics¶
The topics resource provides information about the topics in your Kafka cluster and their current state. It also lets you produce messages by making POST requests to specific topics.
- GET /topics¶
Get a list of Kafka topics.
Response JSON Object: - topics (array) – List of topic names
Example request:
GET /topics HTTP/1.1 Host: kafkaproxy.example.com Accept: application/vnd.kafka.v2+json
Example response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json ["topic1", "topic2"]
- GET /topics/(string: topic_name)¶
Get metadata about a specific topic.
Parameters: - topic_name (string) – Name of the topic to get metadata about
Response JSON Object: - name (string) – Name of the topic
- configs (map) – Per-topic configuration overrides
- partitions (array) – List of partitions for this topic
- partitions[i].partition (int) – the ID of this partition
- partitions[i].leader (int) – the broker ID of the leader for this partition
- partitions[i].replicas (array) – list of replicas for this partition, including the leader
- partitions[i].replicas[j].broker (int) – broker ID of the replica
- partitions[i].replicas[j].leader (boolean) – true if this replica is the leader for the partition
- partitions[i].replicas[j].in_sync (boolean) – true if this replica is currently in sync with the leader
Status Codes: - 404 Not Found –
- Error code 40401 – Topic not found
Example request:
GET /topics/test HTTP/1.1 Accept: application/vnd.kafka.v2+json
Example response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "name": "test", "configs": { "cleanup.policy": "compact" }, "partitions": [ { "partition": 1, "leader": 1, "replicas": [ { "broker": 1, "leader": true, "in_sync": true }, { "broker": 2, "leader": false, "in_sync": true } ] }, { "partition": 2, "leader": 2, "replicas": [ { "broker": 1, "leader": false, "in_sync": true }, { "broker": 2, "leader": true, "in_sync": true } ] } ] }
- POST /topics/(string: topic_name)¶
Produce messages to a topic, optionally specifying keys or partitions for the messages. If no partition is provided, one will be chosen based on the hash of the key. If no key is provided, the partition will be chosen for each message in a round-robin fashion.
For the avro, protobuf, and jsonschema embedded formats, you must provide information about schemas, and the REST Proxy must be configured with the URL to access Schema Registry (schema.registry.url). Schemas may be provided as the full schema encoded as a string or, after the initial request, as the schema ID returned with the first response.
Parameters: - topic_name (string) – Name of the topic to produce the messages to
Request JSON Object: - key_schema (string) – Full schema encoded as a string (e.g. JSON serialized for Avro data)
- key_schema_id (int) – ID returned by a previous request using the same schema. This ID corresponds to the ID of the schema in the registry.
- value_schema (string) – Full schema encoded as a string (e.g. JSON serialized for Avro data)
- value_schema_id (int) – ID returned by a previous request using the same schema. This ID corresponds to the ID of the schema in the registry.
Request JSON Array of Objects: - records – A list of records to produce to the topic.
- records[i].key (object) – The message key, formatted according to the embedded format, or null to omit a key (optional)
- records[i].value (object) – The message value, formatted according to the embedded format
- records[i].partition (int) – Partition to store the message in (optional)
Response JSON Object: - key_schema_id (int) – The ID for the schema used to produce keys, or null if keys were not used
- value_schema_id (int) – The ID for the schema used to produce values.
Response JSON Array of Objects: - offsets (object) – List of partitions and offsets the messages were published to
- offsets[i].partition (int) – Partition the message was published to, or null if publishing the message failed
- offsets[i].offset (long) – Offset of the message, or null if publishing the message failed
- offsets[i].error_code (long) –
An error code classifying the reason this operation failed, or null if it succeeded.
- 1 - Non-retriable Kafka exception
- 2 - Retriable Kafka exception; the message might be sent successfully if retried
- offsets[i].error (string) – An error message describing why the operation failed, or null if it succeeded
Status Codes: - 404 Not Found –
- Error code 40401 – Topic not found
- 422 Unprocessable Entity –
- Error code 42201 – Request includes keys and uses a format that requires schemas, but does not include the key_schema or key_schema_id fields
- Error code 42202 – Request includes values and uses a format that requires schemas, but does not include the value_schema or value_schema_id fields
- Error code 42205 – Request includes invalid schema.
- 408 Request Timeout –
- Error code 40801 – Schema registration or lookup failed.
Example binary request:
POST /topics/test HTTP/1.1 Host: kafkaproxy.example.com Content-Type: application/vnd.kafka.binary.v2+json Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json { "records": [ { "key": "a2V5", "value": "Y29uZmx1ZW50" }, { "value": "a2Fma2E=", "partition": 1 }, { "value": "bG9ncw==" } ] }
Example binary response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "key_schema_id": null, "value_schema_id": null, "offsets": [ { "partition": 2, "offset": 100 }, { "partition": 1, "offset": 101 }, { "partition": 2, "offset": 102 } ] }
Example Avro request:
POST /topics/test HTTP/1.1 Host: kafkaproxy.example.com Content-Type: application/vnd.kafka.avro.v2+json Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json { "value_schema": "{\"name\":\"int\",\"type\": \"int\"}", "records": [ { "value": 12 }, { "value": 24, "partition": 1 } ] }
Example Avro response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "key_schema_id": null, "value_schema_id": 32, "offsets": [ { "partition": 2, "offset": 103 }, { "partition": 1, "offset": 104 } ] }
Example JSON request:
POST /topics/test HTTP/1.1 Host: kafkaproxy.example.com Content-Type: application/vnd.kafka.json.v2+json Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json { "records": [ { "key": "somekey", "value": {"foo": "bar"} }, { "value": [ "foo", "bar" ], "partition": 1 }, { "value": 53.5 } ] }
Example JSON response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "key_schema_id": null, "value_schema_id": null, "offsets": [ { "partition": 2, "offset": 100 }, { "partition": 1, "offset": 101 }, { "partition": 2, "offset": 102 } ] }
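As a hedged illustration of the schema reuse described for this endpoint, the following Python sketch sends the full Avro value schema once and then reuses the value_schema_id returned in the response; the proxy address and topic name are placeholders:

import json
import requests

BASE = "http://localhost:8082"  # hypothetical REST Proxy address
HEADERS = {
    "Content-Type": "application/vnd.kafka.avro.v2+json",
    "Accept": "application/vnd.kafka.v2+json",
}

# First request: send the full Avro value schema as a string.
first = requests.post(
    f"{BASE}/topics/test",
    headers=HEADERS,
    data=json.dumps({
        "value_schema": json.dumps({"name": "int", "type": "int"}),
        "records": [{"value": 12}],
    }),
)
schema_id = first.json()["value_schema_id"]

# Subsequent requests: reference the registered schema by ID only.
requests.post(
    f"{BASE}/topics/test",
    headers=HEADERS,
    data=json.dumps({
        "value_schema_id": schema_id,
        "records": [{"value": 24}],
    }),
)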
Partitions¶
The partitions resource provides per-partition metadata, including the current leaders and replicas for each partition.
It also allows you to consume messages from and produce messages to a single partition using GET and POST requests.
- GET /topics/(string: topic_name)/partitions¶
Get a list of partitions for the topic.
Parameters: - topic_name (string) – the name of the topic
Response JSON Array of Objects: - partition (int) – ID of the partition
- leader (int) – Broker ID of the leader for this partition
- replicas (array) – List of brokers acting as replicas for this partition
- replicas[i].broker (int) – Broker ID of the replica
- replicas[i].leader (boolean) – true if this broker is the leader for the partition
- replicas[i].in_sync (boolean) – true if the replica is in sync with the leader
Status Codes: - 404 Not Found –
- Error code 40401 – Topic not found
Example request:
GET /topics/test/partitions HTTP/1.1 Host: kafkaproxy.example.com Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json
Example response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json [ { "partition": 1, "leader": 1, "replicas": [ { "broker": 1, "leader": true, "in_sync": true }, { "broker": 2, "leader": false, "in_sync": true }, { "broker": 3, "leader": false, "in_sync": false } ] }, { "partition": 2, "leader": 2, "replicas": [ { "broker": 1, "leader": false, "in_sync": true }, { "broker": 2, "leader": true, "in_sync": true }, { "broker": 3, "leader": false, "in_sync": false } ] } ]
- GET /topics/(string: topic_name)/partitions/(int: partition_id)¶
Get metadata about a single partition in the topic.
Parameters: - topic_name (string) – Name of the topic
- partition_id (int) – ID of the partition to inspect
Response JSON Object: - partition (int) – ID of the partition
- leader (int) – Broker ID of the leader for this partition
- replicas (array) – List of brokers acting as replicas for this partition
- replicas[i].broker (int) – Broker ID of the replica
- replicas[i].leader (boolean) – true if this broker is the leader for the partition
- replicas[i].in_sync (boolean) – true if the replica is in sync with the leader
Status Codes: - 404 Not Found –
- Error code 40401 – Topic not found
- Error code 40402 – Partition not found
Example request:
GET /topics/test/partitions/1 HTTP/1.1 Host: kafkaproxy.example.com Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json
Example response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "partition": 1, "leader": 1, "replicas": [ { "broker": 1, "leader": true, "in_sync": true }, { "broker": 2, "leader": false, "in_sync": true }, { "broker": 3, "leader": false, "in_sync": false } ] }
- GET /topics/(string: topic_name)/partitions/(int: partition_id)/offsets¶
Get a summary of the offsets in this topic partition.
Parameters: - topic_name (string) – Name of the topic
- partition_id (int) – ID of the partition to inspect
Response JSON Object: - beginning_offset (int) – First offset in this partition
- end_offset (int) – Last offset in this partition
Status Codes: - 404 Not Found –
- Error code 40401 – Topic not found
- Error code 40402 – Partition not found
Example request:
GET /topics/test/partitions/1/offsets HTTP/1.1 Host: kafkaproxy.example.com Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json
Example response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "beginning_offset": 10, "end_offset": 50 }
- POST /topics/(string: topic_name)/partitions/(int: partition_id)¶
Produce messages to one partition of the topic. For the Avro, JSON Schema, and Protobuf embedded formats, you must provide information about schemas. This may be provided as the full schema encoded as a string or, after the initial request, as the schema ID returned with the first response.
Parameters: - topic_name (string) – Topic to produce the messages to
- partition_id (int) – Partition to produce the messages to
Request JSON Object: - key_schema (string) – Full schema encoded as a string (e.g. JSON serialized for Avro data)
- key_schema_id (int) – ID returned by a previous request using the same schema. This ID corresponds to the ID of the schema in the registry.
- value_schema (string) – Full schema encoded as a string (e.g. JSON serialized for Avro data)
- value_schema_id (int) – ID returned by a previous request using the same schema. This ID corresponds to the ID of the schema in the registry.
- records – A list of records to produce to the partition.
Request JSON Array of Objects: - records[i].key (object) – The message key, formatted according to the embedded format, or null to omit a key (optional)
- records[i].value (object) – The message value, formatted according to the embedded format
Response JSON Object: - key_schema_id (int) – The ID for the schema used to produce keys, or null if keys were not used
- value_schema_id (int) – The ID for the schema used to produce values.
Response JSON Array of Objects: - offsets (object) – List of partitions and offsets the messages were published to
- offsets[i].partition (int) – Partition the message was published to. This will be the same as the partition_id parameter and is provided only to maintain consistency with responses from producing to a topic
- offsets[i].offset (long) – Offset of the message
- offsets[i].error_code (long) –
An error code classifying the reason this operation failed, or null if it succeeded.
- 1 - Non-retriable Kafka exception
- 2 - Retriable Kafka exception; the message might be sent successfully if retried
- offsets[i].error (string) – An error message describing why the operation failed, or null if it succeeded
Status Codes: - 404 Not Found –
- Error code 40401 – Topic not found
- Error code 40402 – Partition not found
- 422 Unprocessable Entity –
- Error code 42201 – Request includes keys and uses a format that requires schemas, but does not include the key_schema or key_schema_id fields
- Error code 42202 – Request includes values and uses a format that requires schemas, but does not include the value_schema or value_schema_id fields
- Error code 42205 – Request includes invalid schema.
Example binary request:
POST /topics/test/partitions/1 HTTP/1.1 Host: kafkaproxy.example.com Content-Type: application/vnd.kafka.binary.v2+json Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json { "records": [ { "key": "a2V5", "value": "Y29uZmx1ZW50" }, { "value": "a2Fma2E=" } ] }
Example binary response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "key_schema_id": null, "value_schema_id": null, "offsets": [ { "partition": 1, "offset": 100 }, { "partition": 1, "offset": 101 } ] }
Example Avro request:
POST /topics/test/partitions/1 HTTP/1.1 Host: kafkaproxy.example.com Content-Type: application/vnd.kafka.avro.v2+json Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json { "value_schema": "{\"name\":\"int\",\"type\": \"int\"}", "records": [ { "value": 25 }, { "value": 26 } ] }
Example Avro response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "key_schema_id": null, "value_schema_id": 32, "offsets": [ { "partition": 1, "offset": 100 }, { "partition": 1, "offset": 101 } ] }
Example JSON request:
POST /topics/test/partitions/1 HTTP/1.1 Host: kafkaproxy.example.com Content-Type: application/vnd.kafka.json.v2+json Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json { "records": [ { "key": "somekey", "value": {"foo": "bar"} }, { "value": 53.5 } ] }
Example JSON response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "key_schema_id": null, "value_schema_id": null, "offsets": [ { "partition": 1, "offset": 100 }, { "partition": 1, "offset": 101 } ] }
Example PROTOBUF request:
POST /topics/test/partitions/1 HTTP/1.1 Content-Type: application/vnd.kafka.protobuf.v2+json Accept: application/vnd.kafka.v2+json, application/json { "value_schema": "syntax=\"proto3\"; message Foo { string f1 = 1; }", "records": [{"value": {"f1": "foo"}}] }
Example PROTOBUF response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "key_schema_id": null, "value_schema_id": 32, "offsets": [ { "partition": 1, "offset": 100 }, { "partition": 1, "offset": 101 } ] }
Example JSONSCHEMA request:
POST /topics/test/partitions/1 HTTP/1.1 Content-Type: application/vnd.kafka.jsonschema.v2+json Accept: application/vnd.kafka.v2+json, application/json { "value_schema": "{\"type\":\"object\",\"properties\":{\"f1\":{\"type\":\"string\"}}}", "records": [{"value": {"f1": "bar"}}] }
Example JSONSCHEMA response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "key_schema_id": null, "value_schema_id": 32, "offsets": [ { "partition": 1, "offset": 100 }, { "partition": 1, "offset": 101 } ] }
Consumers¶
The consumers resource provides access to the current state of consumer groups, and allows you to create a consumer in a consumer group and consume messages from topics and partitions. REST Proxy can convert data stored in Kafka in serialized form into a JSON-compatible embedded format. These formats are supported:
- Raw binary data is encoded as base64 strings
- Avro data is converted into embedded JSON objects, and JSON is embedded directly
- Protobuf
- JSON Schema
Because consumers are stateful, any consumer instances created with the REST API are tied to a specific REST Proxy instance. A full URL is provided when the instance is created and it should be used to construct any subsequent requests. Failing to use the returned URL for future consumer requests will result in 404 errors because the consumer instance will not be found. If a REST Proxy instance is shutdown, it will attempt to cleanly destroy any consumers before it is terminated.
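The following is a rough Python sketch of that lifecycle (create, subscribe, fetch, commit, destroy), assuming a local proxy and made-up group, instance, and topic names; the key point is that every call after creation uses the returned base_uri:

import requests

PROXY = "http://localhost:8082"  # hypothetical proxy address
V2 = "application/vnd.kafka.v2+json"

# 1. Create the consumer instance and capture its base_uri.
created = requests.post(
    f"{PROXY}/consumers/testgroup",
    headers={"Content-Type": V2},
    json={"name": "my_consumer", "format": "binary", "auto.offset.reset": "earliest"},
).json()
base_uri = created["base_uri"]  # use this for every subsequent request

# 2. Subscribe to topics.
requests.post(f"{base_uri}/subscription", headers={"Content-Type": V2},
              json={"topics": ["test"]})

# 3. Fetch records; the Accept header must match the consumer's format (binary here).
records = requests.get(
    f"{base_uri}/records",
    headers={"Accept": "application/vnd.kafka.binary.v2+json"},
).json()

# 4. Commit everything fetched so far, then destroy the instance.
requests.post(f"{base_uri}/offsets", headers={"Content-Type": V2})
requests.delete(base_uri, headers={"Content-Type": V2})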
- POST /consumers/(string: group_name)¶
Create a new consumer instance in the consumer group. The format parameter controls the deserialization of data from Kafka and the content type that must be used in the Accept header of subsequent read API requests performed against this consumer. For example, if the creation request specifies avro for the format, subsequent read requests should use Accept: application/vnd.kafka.avro.v2+json.
Note that the response includes a URL including the host since the consumer is stateful and tied to a specific REST Proxy instance. Subsequent examples in this section use a Host header for this specific REST Proxy instance.
Parameters: - group_name (string) – The name of the consumer group to join
Request JSON Object: - name (string) – Name for the consumer instance, which will be used in URLs for the consumer. This must be unique, at least within the REST Proxy process handling the request. If omitted, falls back on the automatically generated ID. Using automatically generated names is recommended for most use cases.
- format (string) – The format of consumed messages, which is used to convert messages into a JSON-compatible form. Valid values: "binary", "avro", "json", "jsonschema", and "protobuf". If unspecified, defaults to "binary".
- auto.offset.reset (string) – Sets the auto.offset.reset setting for the consumer
- auto.commit.enable (string) – Sets the auto.commit.enable setting for the consumer
- fetch.min.bytes (string) – Sets the fetch.min.bytes setting for this consumer specifically
- consumer.request.timeout.ms (string) – Sets the consumer.request.timeout.ms setting for this consumer specifically. This setting controls the maximum total time to wait for messages for a request if the maximum request size has not yet been reached. It does not affect the underlying consumer-to-broker connection. Default value is taken from the REST Proxy config file.
Response JSON Object: - instance_id (string) – Unique ID for the consumer instance in this group.
- base_uri (string) – Base URI used to construct URIs for subsequent requests against this consumer instance. This will be of the form http://hostname:port/consumers/consumer_group/instances/instance_id.
Status Codes: - 409 Conflict –
- Error code 40902 – Consumer instance with the specified name already exists.
- 422 Unprocessable Entity –
- Error code 42204 – Invalid consumer configuration. One of the settings specified in the request contained an invalid value.
Example request:
POST /consumers/testgroup/ HTTP/1.1 Host: kafkaproxy.example.com Content-Type: application/vnd.kafka.v2+json { "name": "my_consumer", "format": "binary", "auto.offset.reset": "earliest", "auto.commit.enable": "false" }
Example response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "instance_id": "my_consumer", "base_uri": "http://proxy-instance.kafkaproxy.example.com/consumers/testgroup/instances/my_consumer" }
Example PROTOBUF request:
POST /consumers/testgroup/ HTTP/1.1 Host: kafkaproxy.example.com Content-Type: application/vnd.kafka.protobuf.v2+json { "name": "my_consumer", "format": "protobuf", "auto.offset.reset": "earliest", "auto.commit.enable": "false" }
Example PROTOBUF response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.protobuf.v2+json { "instance_id": "my_consumer", "base_uri": "http://proxy-instance.kafkaproxy.example.com/consumers/my_protobuf_consumer" }
Example JSONSCHEMA request:
POST /consumers/testgroup/ HTTP/1.1 Host: kafkaproxy.example.com Content-Type: application/vnd.kafka.jsonschema.v2+json { "name": "my_consumer", "format": "jsonschema", "auto.offset.reset": "earliest", "auto.commit.enable": "false" }
Example JSONSCHEMA response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.jsonschema.v2+json { "instance_id": "my_consumer", "base_uri": "http://proxy-instance.kafkaproxy.example.com/consumers/my_jsonschema_consumer" }
- DELETE /consumers/(string: group_name)/instances/(string: instance)¶
Destroy the consumer instance.
Note that this request must be made to the specific REST Proxy instance holding the consumer instance.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
Example request:
DELETE /consumers/testgroup/instances/my_consumer HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Content-Type: application/vnd.kafka.v2+json
Example response:
HTTP/1.1 204 No Content
- POST /consumers/(string: group_name)/instances/(string: instance)/offsets¶
Commit a list of offsets for the consumer. When the post body is empty, it commits all the records that have been fetched by the consumer instance.
Note that this request must be made to the specific REST Proxy instance holding the consumer instance.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Request JSON Array of Objects: - offsets – A list of offsets to commit for partitions
- offsets[i].topic (string) – Name of the topic
- offsets[i].partition (int) – Partition ID
- offsets[i].offset – The offset to commit
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
Example request:
POST /consumers/testgroup/instances/my_consumer/offsets HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Content-Type: application/vnd.kafka.v2+json { "offsets": [ { "topic": "test", "partition": 0, "offset": 20 }, { "topic": "test", "partition": 1, "offset": 30 } ] }
- GET /consumers/(string: group_name)/instances/(string: instance)/offsets¶
Get the last committed offsets for the given partitions (whether the commit happened by this process or another).
Note that this request must be made to the specific REST Proxy instance holding the consumer instance.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Request JSON Array of Objects: - partitions – A list of partitions to find the last committed offsets for
- partitions[i].topic (string) – Name of the topic
- partitions[i].partition (int) – Partition ID
Response JSON Array of Objects: - offsets – A list of committed offsets
- offsets[i].topic (string) – Name of the topic for which an offset was committed
- offsets[i].partition (int) – Partition ID for which an offset was committed
- offsets[i].offset (int) – Committed offset
- offsets[i].metadata (string) – Metadata for the committed offset
Status Codes: - 404 Not Found –
- Error code 40402 – Partition not found
- Error code 40403 – Consumer instance not found
Example request:
GET /consumers/testgroup/instances/my_consumer/offsets HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Content-Type: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json { "partitions": [ { "topic": "test", "partition": 0 }, { "topic": "test", "partition": 1 } ] }
Example response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json {"offsets": [ { "topic": "test", "partition": 0, "offset": 21, "metadata":"" }, { "topic": "test", "partition": 1, "offset": 31, "metadata":"" } ] }
- POST /consumers/(string: group_name)/instances/(string: instance)/subscription¶
Subscribe to the given list of topics or a topic pattern to get dynamically assigned partitions. If a prior subscription exists, it will be replaced by the latest subscription.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Request JSON Array of Objects: - topics – A list of topics to subscribe
- topics[i].topic (string) – Name of the topic
Request JSON Object: - topic_pattern (string) – A REGEX pattern. The topic_pattern and topics fields are mutually exclusive.
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
- 409 Conflict –
- Error code 40903 – Subscription to topics, partitions and pattern are mutually exclusive.
Example request:
POST /consumers/testgroup/instances/my_consumer/subscription HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Content-Type: application/vnd.kafka.v2+json { "topics": [ "test1", "test2" ] }
Example response:
HTTP/1.1 204 No Content
Example request:
POST /consumers/testgroup/instances/my_consumer/subscription HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Content-Type: application/vnd.kafka.v2+json { "topic_pattern": "test.*" }
Example response:
HTTP/1.1 204 No Content
- GET /consumers/(string: group_name)/instances/(string: instance)/subscription¶
Get the current subscribed list of topics.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Response JSON Array of Objects: - topics – A list of subscribed topics
- topics[i] (string) – Name of the topic
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
Example request:
GET /consumers/testgroup/instances/my_consumer/subscription HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Accept: application/vnd.kafka.v2+json
Example response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "topics": [ "test1", "test2" ] }
- DELETE /consumers/(string: group_name)/instances/(string: instance)/subscription¶
Unsubscribe from topics currently subscribed.
Note that this request must be made to the specific REST Proxy instance holding the consumer instance.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
Example request:
DELETE /consumers/testgroup/instances/my_consumer/subscription HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json
Example response:
HTTP/1.1 204 No Content
- POST /consumers/(string: group_name)/instances/(string: instance)/assignments¶
Manually assign a list of partitions to this consumer.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Request JSON Array of Objects: - partitions – A list of partitions to assign to this consumer
- partitions[i].topic (string) – Name of the topic
- partitions[i].partition (int) – Partition ID
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
- 409 Conflict –
- Error code 40903 – Subscription to topics, partitions and pattern are mutually exclusive.
Example request:
POST /consumers/testgroup/instances/my_consumer/assignments HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Content-Type: application/vnd.kafka.v2+json { "partitions": [ { "topic": "test", "partition": 0 }, { "topic": "test", "partition": 1 } ] }
Example response:
HTTP/1.1 204 No Content
- GET /consumers/(string: group_name)/instances/(string: instance)/assignments¶
Get the list of partitions currently manually assigned to this consumer.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Response JSON Array of Objects: - partitions – A list of partitions that are manually assigned to this consumer
- partitions[i].topic (string) – Name of the topic
- partitions[i].partition (int) – Partition ID
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
Example request:
GET /consumers/testgroup/instances/my_consumer/assignments HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Accept: application/vnd.kafka.v2+json
Example response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "partitions": [ { "topic": "test", "partition": 0 }, { "topic": "test", "partition": 1 } ] }
- POST /consumers/(string: group_name)/instances/(string: instance)/positions¶
Overrides the fetch offsets that the consumer will use for the next set of records to fetch.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Request JSON Array of Objects: - offsets – A list of offsets
- offsets[i].topic (string) – Name of the topic
- offsets[i].partition (int) – Partition ID
- offsets[i].offset (int) – Seek to offset for the next set of records to fetch
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
Example request:
POST /consumers/testgroup/instances/my_consumer/positions HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Content-Type: application/vnd.kafka.v2+json { "offsets": [ { "topic": "test", "partition": 0, "offset": 20 }, { "topic": "test", "partition": 1, "offset": 30 } ] }
Example response:
HTTP/1.1 204 No Content
- POST /consumers/(string: group_name)/instances/(string: instance)/positions/beginning¶
Seek to the first offset for each of the given partitions.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Request JSON Array of Objects: - partitions – A list of partitions
- partitions[i].topic (string) – Name of the topic
- partitions[i].partition (int) – Partition ID
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
Example request:
POST /consumers/testgroup/instances/my_consumer/positions/beginning HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Content-Type: application/vnd.kafka.v2+json { "partitions": [ { "topic": "test", "partition": 0 }, { "topic": "test", "partition": 1 } ] }
Example response:
HTTP/1.1 204 No Content
- POST /consumers/(string: group_name)/instances/(string: instance)/positions/end¶
Seek to the last offset for each of the given partitions.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Request JSON Array of Objects: - partitions – A list of partitions
- partitions[i].topic (string) – Name of the topic
- partitions[i].partition (int) – Partition ID
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
Example request:
POST /consumers/testgroup/instances/my_consumer/positions/end HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Content-Type: application/vnd.kafka.v2+json { "partitions": [ { "topic": "test", "partition": 0 }, { "topic": "test", "partition": 1 } ] }
Example response:
HTTP/1.1 204 No Content
- GET /consumers/(string: group_name)/instances/(string: instance)/records¶
Fetch data for the topics or partitions specified using one of the subscribe/assign APIs.
The format of the embedded data returned by this request is determined by the format specified in the initial consumer instance creation request and must match the format of the Accept header. Mismatches will result in error code 40601.
Note that this request must be made to the specific REST Proxy instance holding the consumer instance.
Parameters: - group_name (string) – The name of the consumer group
- instance (string) – The ID of the consumer instance
Query Parameters: - timeout – Maximum number of milliseconds the REST Proxy will spend fetching records. Other parameters controlling actual time spent fetching records: max_bytes and fetch.min.bytes. Default value is undefined. This parameter is used only if it's smaller than the consumer.timeout.ms that is defined either during consumer instance creation or in the REST Proxy's config file.
- max_bytes – The maximum number of bytes of unencoded keys and values that should be included in the response. This provides approximate control over the size of responses and the amount of memory required to store the decoded response. The actual limit will be the minimum of this setting and the server-side configuration consumer.request.max.bytes. Default is unlimited.
Response JSON Array of Objects: - topic (string) – The topic
- key (string) – The message key, formatted according to the embedded format
- value (string) – The message value, formatted according to the embedded format
- partition (int) – Partition of the message
- offset (long) – Offset of the message
Status Codes: - 404 Not Found –
- Error code 40403 – Consumer instance not found
- 406 Not Acceptable –
- Error code 40601 – Consumer format does not match the embedded format requested by the Accept header.
Example binary request:
GET /consumers/testgroup/instances/my_consumer/records?timeout=3000&max_bytes=300000 HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Accept: application/vnd.kafka.binary.v2+json
Example binary response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.binary.v2+json [ { "topic": "test", "key": "a2V5", "value": "Y29uZmx1ZW50", "partition": 1, "offset": 100 }, { "topic": "test", "key": "a2V5", "value": "a2Fma2E=", "partition": 2, "offset": 101 } ]
Example Avro request:
GET /consumers/avrogroup/instances/my_avro_consumer/records?timeout=3000&max_bytes=300000 HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Accept: application/vnd.kafka.avro.v2+json
Example Avro response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.avro.v2+json [ { "topic": "test", "key": 1, "value": { "id": 1, "name": "Bill" }, "partition": 1, "offset": 100 }, { "topic": "test", "key": 2, "value": { "id": 2, "name": "Melinda" }, "partition": 2, "offset": 101 } ]
Example JSON request:
GET /consumers/jsongroup/instances/my_json_consumer/records?timeout=3000&max_bytes=300000 HTTP/1.1 Host: proxy-instance.kafkaproxy.example.com Accept: application/vnd.kafka.json.v2+json
Example JSON response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.json.v2+json [ { "topic": "test", "key": "somekey", "value": {"foo":"bar"}, "partition": 1, "offset": 10 }, { "topic": "test", "key": "somekey", "value": ["foo", "bar"], "partition": 2, "offset": 11 } ]
Brokers¶
The brokers resource provides access to the current state of Kafka brokers in the cluster.
- GET /brokers¶
Get a list of brokers.
Response JSON Object: - brokers (array) – List of broker IDs
Example request:
GET /brokers HTTP/1.1 Host: kafkaproxy.example.com Accept: application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json
Example response:
HTTP/1.1 200 OK Content-Type: application/vnd.kafka.v2+json { "brokers": [1, 2, 3] }
REST Proxy API v3¶
These APIs are available both on Confluent Server (as a part of Confluent Enterprise) and REST Proxy. When using the API in Confluent Server, all paths should be prefixed with /kafka. For example, the path to list clusters is:
- Confluent Server: /kafka/v3/clusters
- REST Proxy: /v3/clusters
Confluent Server provides an embedded instance of these APIs on the Kafka brokers for the v3 Admin API.
The embedded APIs run on the Confluent HTTP service, confluent.http.server.listeners. Therefore, if you have the HTTP server running, the REST Proxy v3 API is automatically available to you through the brokers.
Note that the Metadata Server (MDS) is also running on the Confluent HTTP service,
as another endpoint available to you with additional configurations.
Tip
See also, the following sections:
- REST API Usage Examples (curl) for how to test these API endpoints from the command line
- Admin REST APIs Configuration Options
- Admin operations in Kafka REST API Features
- Confluent Admin REST APIs demo
Cluster¶
- GET /clusters¶
List Clusters
Returns a list of known Kafka clusters. Currently both Kafka and Kafka REST Proxy are only aware of the Kafka cluster pointed at by the bootstrap.servers configuration. Therefore only one Kafka cluster will be returned in the response.
Example request:
GET /clusters HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of Kafka clusters.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaClusterList", "metadata": { "self": "http://localhost:9391/v3/clusters", "next": null }, "data": [ { "kind": "KafkaCluster", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1", "resource_name": "crn:///kafka=cluster-1" }, "cluster_id": "cluster-1", "controller": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1" }, "acls": { "related": "http://localhost:9391/v3/clusters/cluster-1/acls" }, "brokers": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers" }, "broker_configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/broker-configs" }, "consumer_groups": { "related": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups" }, "topics": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics" }, "partition_reassignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/-/partitions/-/reassignment" } } ] }
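Because every other v3 path requires the cluster_id, a common first step is to look it up from this endpoint. A minimal Python sketch, with a hypothetical proxy address:

import requests

# On REST Proxy the prefix is /v3; on Confluent Server it would be /kafka/v3.
BASE = "http://localhost:8082/v3"  # hypothetical address

clusters = requests.get(f"{BASE}/clusters").json()
cluster_id = clusters["data"][0]["cluster_id"]  # only one cluster is returned

# The cluster_id is then used in every other v3 path, for example listing topics.
topics = requests.get(f"{BASE}/clusters/{cluster_id}/topics").json()
print(topics)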
- GET /clusters/{cluster_id}¶
Get Cluster
Returns the Kafka cluster with the specified cluster_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
GET /clusters/{cluster_id} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The Kafka cluster.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaCluster", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1", "resource_name": "crn:///kafka=cluster-1" }, "cluster_id": "cluster-1", "controller": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1" }, "acls": { "related": "http://localhost:9391/v3/clusters/cluster-1/acls" }, "brokers": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers" }, "broker_configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/broker-configs" }, "consumer_groups": { "related": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups" }, "topics": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics" }, "partition_reassignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/-/partitions/-/reassignment" } }
ACL¶
- GET /clusters/{cluster_id}/acls¶
Search ACLs
Returns a list of ACLs that match the search criteria.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Query Parameters: - resource_type (string) – The ACL resource type.
- resource_name (string) – The ACL resource name.
- pattern_type (string) – The ACL pattern type.
- principal (string) – The ACL principal.
- host (string) – The ACL host.
- operation (string) – The ACL operation.
- permission (string) – The ACL permission.
Example request:
GET /clusters/{cluster_id}/acls HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of ACLs.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaAclList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/acls?principal=alice" }, "data": [ { "kind": "KafkaAcl", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/acls?resource_type=TOPIC&resource_name=topic-&pattern_type=PREFIXED&principal=alice&host=*&operation=ALL&permission=ALLOW" }, "cluster_id": "cluster-1", "resource_type": "TOPIC", "resource_name": "topic-", "pattern_type": "PREFIXED", "principal": "alice", "host": "*", "operation": "ALL", "permission": "ALLOW" }, { "kind": "KafkaAcl", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/acls?resource_type=CLUSTER&resource_name=cluster-1&pattern_type=LITERAL&principal=bob&host=*&operation=DESCRIBE&permission=DENY" }, "cluster_id": "cluster-1", "resource_type": "CLUSTER", "resource_name": "cluster-2", "pattern_type": "LITERAL", "principal": "alice", "host": "*", "operation": "DESCRIBE", "permission": "DENY" } ] }
- POST /clusters/{cluster_id}/acls¶
Create ACLs
Creates an ACL.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
POST /clusters/{cluster_id}/acls HTTP/1.1 Host: example.com Content-Type: application/json { "resource_type": "UNKNOWN", "pattern_type": "UNKNOWN", "principal": "string", "host": "string", "operation": "UNKNOWN", "permission": "UNKNOWN" }
Status Codes: - 201 Created – No Content
- DELETE /clusters/{cluster_id}/acls¶
Delete ACLs
Deletes the list of ACLs that matches the search criteria.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Query Parameters: - resource_type (string) – The ACL resource type.
- resource_name (string) – The ACL resource name.
- pattern_type (string) – The ACL pattern type.
- principal (string) – The ACL principal.
- host (string) – The ACL host.
- operation (string) – The ACL operation.
- permission (string) – The ACL permission.
Status Codes: - 200 OK –
The list of deleted ACLs.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "data": [ { "kind": "KafkaAcl", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/acls?resource_type=TOPIC&resource_name=topic-&pattern_type=PREFIXED&principal=alice&host=*&operation=ALL&permission=ALLOW" }, "cluster_id": "cluster-1", "resource_type": "TOPIC", "resource_name": "topic-", "pattern_type": "PREFIXED", "principal": "alice", "host": "*", "operation": "ALL", "permission": "ALLOW" }, { "kind": "KafkaAcl", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/acls?resource_type=CLUSTER&resource_name=cluster-1&pattern_type=LITERAL&principal=bob&host=*&operation=DESCRIBE&permission=DENY" }, "cluster_id": "cluster-1", "resource_type": "CLUSTER", "resource_name": "cluster-2", "pattern_type": "LITERAL", "principal": "alice", "host": "*", "operation": "DESCRIBE", "permission": "DENY" } ] }
Configs¶
- GET /clusters/{cluster_id}/broker-configs¶
List Cluster Configs
Returns a list of configuration parameters for the specified Kafka cluster.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
GET /clusters/{cluster_id}/broker-configs HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of cluster configs.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaClusterConfigList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/broker-configs", "next": null }, "data": [ { "kind": "KafkaClusterConfig", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/broker-configs/max.connections", "resource_name": "crn:///kafka=cluster-1/broker-config=max.connections" }, "cluster_id": "cluster-1", "config_type": "BROKER", "name": "max.connections", "value": "1000", "is_default": false, "is_read_only": false, "is_sensitive": false, "source": "DYNAMIC_DEFAULT_BROKER_CONFIG", "synonyms": [ { "name": "max.connections", "value": "1000", "source": "DYNAMIC_DEFAULT_BROKER_CONFIG" }, { "name": "max.connections", "value": "2147483647", "source": "DEFAULT_CONFIG" } ] }, { "kind": "KafkaClusterConfig", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/broker-configs/compression.type", "resource_name": "crn:///kafka=cluster-1/broker-config=compression.type" }, "cluster_id": "cluster-1", "config_type": "BROKER", "name": "compression.type", "value": "gzip", "is_default": false, "is_read_only": false, "is_sensitive": false, "source": "DYNAMIC_DEFAULT_BROKER_CONFIG", "synonyms": [ { "name": "compression.type", "value": "gzip", "source": "DYNAMIC_DEFAULT_BROKER_CONFIG" }, { "name": "compression.type", "value": "producer", "source": "DEFAULT_CONFIG" } ] } ] }
- POST /clusters/{cluster_id}/broker-configs:alter¶
Batch Alter Cluster Configs
Updates or deletes a set of Kafka cluster configuration parameters.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
POST /clusters/{cluster_id}/broker-configs:alter HTTP/1.1 Host: example.com Content-Type: application/json { "data": [ { "name": "max.connections", "operation": "DELETE" }, { "name": "compression.type", "value": "gzip" } ] }
Status Codes: - 204 No Content – No Content
- GET /clusters/{cluster_id}/broker-configs/{name}¶
Get Cluster Config
Returns the configuration parameter specified by name.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- name (string) – The configuration parameter name.
Example request:
GET /clusters/{cluster_id}/broker-configs/{name} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The cluster configuration parameter.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaClusterConfig", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/broker-configs/compression.type", "resource_name": "crn:///kafka=cluster-1/broker-config=compression.type" }, "cluster_id": "cluster-1", "config_type": "BROKER", "name": "compression.type", "value": "gzip", "is_default": false, "is_read_only": false, "is_sensitive": false, "source": "DYNAMIC_DEFAULT_BROKER_CONFIG", "synonyms": [ { "name": "compression.type", "value": "gzip", "source": "DYNAMIC_DEFAULT_BROKER_CONFIG" }, { "name": "compression.type", "value": "producer", "source": "DEFAULT_CONFIG" } ] }
- PUT /clusters/{cluster_id}/broker-configs/{name}¶
Update Cluster Config
Updates the configuration parameter specified by name.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- name (string) – The configuration parameter name.
Example request:
PUT /clusters/{cluster_id}/broker-configs/{name} HTTP/1.1 Host: example.com Content-Type: application/json { "value": "gzip" }
Status Codes: - 204 No Content – No Content
- DELETE /clusters/{cluster_id}/broker-configs/{name}¶
Reset Cluster Config
Resets the configuration parameter specified by name to its default value.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- name (string) – The configuration parameter name.
Status Codes: - 204 No Content – No Content
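A hedged Python sketch of the read, update, and reset cycle across the three preceding endpoints; the proxy address and cluster ID are placeholders:

import requests

# Placeholder proxy address and cluster ID.
BASE = "http://localhost:8082/v3/clusters/cluster-1/broker-configs"

# Read the current value of compression.type.
current = requests.get(f"{BASE}/compression.type").json()
print(current["value"], current["source"])

# Update it (a successful update returns 204 No Content).
requests.put(
    f"{BASE}/compression.type",
    headers={"Content-Type": "application/json"},
    json={"value": "gzip"},
)

# Reset it back to its default value.
requests.delete(f"{BASE}/compression.type")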
- GET /clusters/{cluster_id}/brokers/{broker_id}/configs¶
List Broker Configs
Return the list of configuration parameters that belong to the specified Kafka broker.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
Example request:
GET /clusters/{cluster_id}/brokers/{broker_id}/configs HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of broker configs.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaBrokerConfigList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/brokers/1/configs", "next": null }, "data": [ { "kind": "KafkaBrokerConfig", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/brokers/1/configs/max.connections", "resource_name": "crn:///kafka=cluster-1/broker=1/config=max.connections" }, "cluster_id": "cluster-1", "broker_id": 1, "name": "max.connections", "value": "1000", "is_default": false, "is_read_only": false, "is_sensitive": false, "source": "DYNAMIC_BROKER_CONFIG", "synonyms": [ { "name": "max.connections", "value": "1000", "source": "DYNAMIC_BROKER_CONFIG" }, { "name": "max.connections", "value": "2147483647", "source": "DEFAULT_CONFIG" } ] }, { "kind": "KafkaBrokerConfig", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/brokers/1/configs/compression.type", "resource_name": "crn:///kafka=cluster-1/broker=1/config=compression.type" }, "cluster_id": "cluster-1", "broker_id": 1, "name": "compression.type", "value": "gzip", "is_default": false, "is_read_only": false, "is_sensitive": false, "source": "DYNAMIC_BROKER_CONFIG", "synonyms": [ { "name": "compression.type", "value": "gzip", "source": "DYNAMIC_BROKER_CONFIG" }, { "name": "compression.type", "value": "producer", "source": "DEFAULT_CONFIG" } ] } ] }
- POST /clusters/{cluster_id}/brokers/{broker_id}/configs:alter¶
Batch Alter Broker Configs
Updates or deletes a set of broker configuration parameters.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
Example request:
POST /clusters/{cluster_id}/brokers/{broker_id}/configs:alter HTTP/1.1 Host: example.com Content-Type: application/json { "data": [ { "name": "max.connections", "operation": "DELETE" }, { "name": "compression.type", "value": "gzip" } ] }
Status Codes: - 204 No Content – No Content
- GET /clusters/{cluster_id}/brokers/{broker_id}/configs/{name}¶
Get Broker Config
Return the configuration parameter specified by name.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
- name (string) – The configuration parameter name.
Example request:
GET /clusters/{cluster_id}/brokers/{broker_id}/configs/{name} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The broker configuration parameter.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaBrokerConfig", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/brokers/1/configs/compression.type", "resource_name": "crn:///kafka=cluster-1/broker=1/config=compression.type" }, "cluster_id": "cluster-1", "broker_id": 1, "name": "compression.type", "value": "gzip", "is_default": false, "is_read_only": false, "is_sensitive": false, "source": "DYNAMIC_BROKER_CONFIG", "synonyms": [ { "name": "compression.type", "value": "gzip", "source": "DYNAMIC_BROKER_CONFIG" }, { "name": "compression.type", "value": "producer", "source": "DEFAULT_CONFIG" } ] }
- PUT /clusters/{cluster_id}/brokers/{broker_id}/configs/{name}¶
Update Broker Config
Updates the configuration parameter specified by name.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
- name (string) – The configuration parameter name.
Example request:
PUT /clusters/{cluster_id}/brokers/{broker_id}/configs/{name} HTTP/1.1 Host: example.com Content-Type: application/json { "value": "gzip" }
Status Codes: - 204 No Content – No Content
- DELETE /clusters/{cluster_id}/brokers/{broker_id}/configs/{name}¶
Reset Broker Config
Resets the configuration parameter specified by name to its default value.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
- name (string) – The configuration parameter name.
Status Codes: - 204 No Content – No Content
- GET /clusters/{cluster_id}/topics/{topic_name}/configs¶
List Topic Configs
Return the list of configs that belong to the specified topic.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
Example request:
GET /clusters/{cluster_id}/topics/{topic_name}/configs HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of topic configs.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaTopicConfigList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/configs", "next": null }, "data": [ { "kind": "KafkaTopicConfig", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/configs/cleanup.policy", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/config=cleanup.policy" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "name": "cleanup.policy", "value": "compact", "is_default": false, "is_read_only": false, "is_sensitive": false, "source": "DYNAMIC_TOPIC_CONFIG", "synonyms": [ { "name": "cleanup.policy", "value": "compact", "source": "DYNAMIC_TOPIC_CONFIG" }, { "name": "cleanup.policy", "value": "delete", "source": "DEFAULT_CONFIG" } ] }, { "kind": "KafkaTopicConfig", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/configs/compression.type", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/config=compression.type" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "name": "compression.type", "value": "gzip", "is_default": false, "is_read_only": false, "is_sensitive": false, "source": "DYNAMIC_TOPIC_CONFIG", "synonyms": [ { "name": "compression.type", "value": "gzip", "source": "DYNAMIC_TOPIC_CONFIG" }, { "name": "compression.type", "value": "producer", "source": "DEFAULT_CONFIG" } ] } ] }
-
POST
/clusters/{cluster_id}/topics/{topic_name}/configs:alter
¶ Batch Alter Topic Configs
Updates or deletes a set of topic configs.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
Example request:
POST /clusters/{cluster_id}/topics/{topic_name}/configs:alter HTTP/1.1 Host: example.com Content-Type: application/json { "data": [ { "name": "cleanup.policy", "operation": "DELETE" }, { "name": "compression.type", "value": "gzip" } ] }
Status Codes: - 204 No Content – No Content
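A curl sketch of the batch alter request above, assuming the localhost:8090/kafka/v3 base URL used in the curl examples later in this document and the example cluster and topic shown above:
curl --silent -X POST -H "Content-Type: application/json" \
  --data '{"data": [{"name": "cleanup.policy", "operation": "DELETE"}, {"name": "compression.type", "value": "gzip"}]}' \
  http://localhost:8090/kafka/v3/clusters/cluster-1/topics/topic-1/configs:alter
As with the single-config update, a successful call returns 204 No Content.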
-
GET
/clusters/{cluster_id}/topics/{topic_name}/configs/{name}
¶ Get Topic Config
Return the config with the given name.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
- name (string) – The configuration parameter name.
Example request:
GET /clusters/{cluster_id}/topics/{topic_name}/configs/{name} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The topic configuration parameter.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaTopicConfig", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/compression.type", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/config=compression.type" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "name": "compression.type", "value": "gzip", "is_default": false, "is_read_only": false, "is_sensitive": false, "source": "DYNAMIC_TOPIC_CONFIG", "synonyms": [ { "name": "compression.type", "value": "gzip", "source": "DYNAMIC_TOPIC_CONFIG" }, { "name": "compression.type", "value": "producer", "source": "DEFAULT_CONFIG" } ] }
-
PUT
/clusters/{cluster_id}/topics/{topic_name}/configs/{name}
¶ Update Topic Config
Updates the config with the given name.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
- name (string) – The configuration parameter name.
Example request:
PUT /clusters/{cluster_id}/topics/{topic_name}/configs/{name} HTTP/1.1 Host: example.com Content-Type: application/json { "value": "gzip" }
Status Codes: - 204 No Content – No Content
-
DELETE
/clusters/{cluster_id}/topics/{topic_name}/configs/{name}
¶ Reset Topic Config
Resets the config with the given name to its default value.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
- name (string) – The configuration parameter name.
Status Codes: - 204 No Content – No Content
Broker¶
-
GET
/clusters/{cluster_id}/brokers
¶ List Brokers
Return a list of brokers that belong to the specified Kafka cluster.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
GET /clusters/{cluster_id}/brokers HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of brokers.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaBrokerList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/brokers", "next": null }, "data": [ { "kind": "KafkaBroker", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/brokers/1", "resource_name": "crn:///kafka=cluster-1/broker=1" }, "cluster_id": "cluster-1", "broker_id": 1, "host": "localhost", "port": 9291, "configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1/configs" }, "partition_replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1/partition-replicas" } }, { "kind": "KafkaBroker", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/brokers/2", "resource_name": "crn:///kafka=cluster-1/broker=2" }, "cluster_id": "cluster-1", "broker_id": 2, "host": "localhost", "port": 9292, "configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/2/configs" }, "partition_replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/2/partition-replicas" } }, { "kind": "KafkaBroker", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/brokers/3", "resource_name": "crn:///kafka=cluster-1/broker=3" }, "cluster_id": "cluster-1", "broker_id": 3, "host": "localhost", "port": 9293, "configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/3/configs" }, "partition_replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/3/partition-replicas" } } ] }
-
GET
/clusters/{cluster_id}/brokers/{broker_id}
¶ Get Broker
Returns the broker specified by broker_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
Example request:
GET /clusters/{cluster_id}/brokers/{broker_id} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The broker.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaBroker", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/brokers/1", "resource_name": "crn:///kafka=cluster-1/broker=1" }, "cluster_id": "cluster-1", "broker_id": 1, "host": "localhost", "port": 9291, "configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1/configs" }, "partition_replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1/partition-replicas" } }
-
DELETE
/clusters/{cluster_id}/brokers/{broker_id}
¶ Delete Broker
Deletes the broker specified by broker_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
Status Codes: - 202 Accepted – Accepted
- 400 Bad Request –
Illegal broker removal.
Example response:
HTTP/1.1 400 Bad Request Content-Type: application/json { "error_code": 400, "message": "The given broker removal operation for broker 1 from cluster cluster-1 failed for some reason like duplicate broker ids or partitions that would become unavailable as a result the removal. See the broker logs for more details." }
- 404 Not Found –
Broker not found.
Example response:
HTTP/1.1 404 Not Found Content-Type: application/json { "error_code": 404, "message": "Broker not found. Broker: 1 not found in the cluster: cluster-1" }
- 500 Internal Server Error –
Confluent Balancer disabled or not started.
Example response:
HTTP/1.1 500 Internal Server Error Content-Type: application/json { "error_code": 500, "message": "The Confluent Balancer component is disabled or not started yet." }
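A curl sketch of a broker removal, assuming the localhost:8090/kafka/v3 base URL used in the curl examples later in this document; per the status codes above, the request is accepted asynchronously (202) and fails with a 500 if Confluent Balancer is disabled or not yet started:
curl --silent -X DELETE http://localhost:8090/kafka/v3/clusters/cluster-1/brokers/2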
-
GET
/clusters/{cluster_id}/brokers/{broker_id}/partition-replicas
¶ Search Replicas by Broker
Returns the list of replicas assigned to the specified broker.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
Example request:
GET /clusters/{cluster_id}/brokers/{broker_id}/partition-replicas HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of replicas.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaReplicaList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/brokers/1/partition-replicas", "next": null }, "data": [ { "kind": "KafkaReplica", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/2/replicas/1", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=2/replica=1" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 2, "broker_id": 1, "is_leader": true, "is_in_sync": true, "broker": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1" } }, { "kind": "KafkaReplica", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-2/partitions/3/replicas/1", "resource_name": "crn:///kafka=cluster-1/topic=topic-3/partition=3/replica=1" }, "cluster_id": "cluster-1", "topic_name": "topic-2", "partition_id": 3, "broker_id": 1, "is_leader": false, "is_in_sync": true, "broker": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1" } }, { "kind": "KafkaReplica", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-3/partitions/1/replicas/1", "resource_name": "crn:///kafka=cluster-1/topic=topic-3/partition=1/replica=1" }, "cluster_id": "cluster-1", "topic_name": "topic-3", "partition_id": 1, "broker_id": 1, "is_leader": false, "is_in_sync": false, "broker": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1" } } ] }
Consumer Group¶
-
GET
/clusters/{cluster_id}/consumer-groups
¶ List Consumer Groups
Returns the list of consumer groups that belong to the specified Kafka cluster.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
GET /clusters/{cluster_id}/consumer-groups HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of consumer groups.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaConsumerGroupList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups", "next": null }, "data": [ { "kind": "KafkaConsumerGroup", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-1" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-1", "is_simple": false, "partition_assignor": "org.apache.kafka.clients.consumer.RoundRobinAssignor", "state": "STABLE", "coordinator": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1" }, "consumers": { "related": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers" } }, { "kind": "KafkaConsumerGroup", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-2", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-2" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-2", "is_simple": false, "partition_assignor": "org.apache.kafka.clients.consumer.StickyAssignor", "state": "PREPARING_REBALANCE", "coordinator": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/2" }, "consumers": { "related": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-2/consumers" } }, { "kind": "KafkaConsumerGroup", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-3", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-3" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-3", "is_simple": false, "partition_assignor": "org.apache.kafka.clients.consumer.RangeAssignor", "state": "DEAD", "coordinator": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/3" }, "consumers": { "related": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-3/consumers" } } ] }
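A curl sketch for listing consumer groups, assuming the localhost:8090/kafka/v3 base URL used in the curl examples later in this document and the example cluster ID shown above:
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/cluster-1/consumer-groups | jq
Piping the output through jq -r '.data[].consumer_group_id' reduces it to just the group IDs.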
-
GET
/clusters/{cluster_id}/consumer-groups/{consumer_group_id}
¶ Get Consumer Group
Returns the consumer group specified by the consumer_group_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- consumer_group_id (string) – The consumer group ID.
Example request:
GET /clusters/{cluster_id}/consumer-groups/{consumer_group_id} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The consumer group.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaConsumerGroup", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-1" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-1", "is_simple": false, "partition_assignor": "org.apache.kafka.clients.consumer.RoundRobinAssignor", "state": "STABLE", "coordinator": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1" }, "consumers": { "related": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers" } }
-
GET
/clusters/{cluster_id}/consumer-groups/{consumer_group_id}/consumers
¶ List Consumers
Returns a list of consumers that belong to the specified consumer group.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- consumer_group_id (string) – The consumer group ID.
Example request:
GET /clusters/{cluster_id}/consumer-groups/{consumer_group_id}/consumers HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of consumers.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaConsumerList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers", "next": null }, "data": [ { "kind": "KafkaConsumer", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-1", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-1/consumer=consumer-1" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-1", "consumer_id": "consumer-1", "instance_id": "consumer-instance-1", "client_id": "client-1", "assignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-1/assignments" } }, { "kind": "KafkaConsumer", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-2", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-1/consumer=consumer-2" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-1", "consumer_id": "consumer-2", "instance_id": "consumer-instance-2", "client_id": "client-2", "assignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-2/assignments" } }, { "kind": "KafkaConsumer", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-2", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-1/consumer=consumer-2" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-1", "consumer_id": "consumer-2", "instance_id": "consumer-instance-2", "client_id": "client-2", "assignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-2/assignments" } } ] }
-
GET
/clusters/{cluster_id}/consumer-groups/{consumer_group_id}/consumers/{consumer_id}
¶ Get Consumer
Returns the consumer specified by the consumer_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- consumer_group_id (string) – The consumer group ID.
- consumer_id (string) – The consumer ID.
Example request:
GET /clusters/{cluster_id}/consumer-groups/{consumer_group_id}/consumers/{consumer_id} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The consumer.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaConsumer", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-1", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-1/consumer=consumer-1" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-1", "consumer_id": "consumer-1", "instance_id": "consumer-instance-1", "client_id": "client-1", "assignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-1/assignments" } }
-
GET
/clusters/{cluster_id}/consumer-groups/{consumer_group_id}/consumers/{consumer_id}/assignments
¶ List Consumer Assignments
Returns a list of partition assignments for the specified consumer.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- consumer_group_id (string) – The consumer group ID.
- consumer_id (string) – The consumer ID.
Example request:
GET /clusters/{cluster_id}/consumer-groups/{consumer_group_id}/consumers/{consumer_id}/assignments HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of consumer group assignments.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaConsumerAssignmentList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-1/assignments", "next": null }, "data": [ { "kind": "KafkaConsumerAssignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-1/assignments/topic-1/partitions/1", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-1/consumer=consumer-1/assignment=topic=1/partition=1" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-1", "consumer_id": "consumer-1", "topic_name": "topic-1", "partition_id": 1, "partition": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1" } }, { "kind": "KafkaConsumerAssignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-1/assignments/topic-2/partitions/2", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-1/consumer=consumer-1/assignment=topic=2/partition=2" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-1", "consumer_id": "consumer-1", "topic_name": "topic-2", "partition_id": 2, "partition": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-2/partitions/2" } }, { "kind": "KafkaConsumerAssignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-1/assignments/topic-3/partitions/3", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-1/consumer=consumer-1/assignment=topic=3/partition=3" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-1", "consumer_id": "consumer-1", "topic_name": "topic-3", "partition_id": 3, "partition": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-3/partitions/3" } } ] }
-
GET
/clusters/{cluster_id}/consumer-groups/{consumer_group_id}/consumers/{consumer_id}/assignments/{topic_name}/partitions/{partition_id}
¶ Get Consumer Assignment
Returns information about the assignment for the specified consumer to the specified partition.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- consumer_group_id (string) – The consumer group ID.
- consumer_id (string) – The consumer ID.
- topic_name (string) – The topic name.
- partition_id (integer) – The partition ID.
Example request:
GET /clusters/{cluster_id}/consumer-groups/{consumer_group_id}/consumers/{consumer_id}/assignments/{topic_name}/partitions/{partition_id} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The consumer group assignment.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaConsumerAssignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/consumer-groups/consumer-group-1/consumers/consumer-1/assignments/topic-1/partitions/1", "resource_name": "crn:///kafka=cluster-1/consumer-group=consumer-group-1/consumer=consumer-1/assignment=topic=1/partition=1" }, "cluster_id": "cluster-1", "consumer_group_id": "consumer-group-1", "consumer_id": "consumer-1", "topic_name": "topic-1", "partition_id": 1, "partition": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1" } }
Topic¶
-
GET
/clusters/{cluster_id}/topics
¶ List Topics
Returns the list of topics that belong to the specified Kafka cluster.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
GET /clusters/{cluster_id}/topics HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of topics.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaTopicList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics", "next": null }, "data": [ { "kind": "KafkaTopic", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1", "resource_name": "crn:///kafka=cluster-1/topic=topic-1" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "is_internal": false, "replication_factor": 3, "partitions": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions" }, "configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/configs" }, "partition_reassignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/-/reassignments" } }, { "kind": "KafkaTopic", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-2", "resource_name": "crn:///kafka=cluster-1/topic=topic-2" }, "cluster_id": "cluster-1", "topic_name": "topic-2", "is_internal": true, "replication_factor": 4, "partitions": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-2/partitions" }, "configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-2/configs" }, "partition_reassignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-2/partitions/-/reassignments" } }, { "kind": "KafkaTopic", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-3", "resource_name": "crn:///kafka=cluster-1/topic=topic-3" }, "cluster_id": "cluster-1", "topic_name": "topic-3", "is_internal": false, "replication_factor": 5, "partitions": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-3/partitions" }, "configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-3/configs" }, "partition_reassignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-3/partitions/-/reassignments" } } ] }
-
POST
/clusters/{cluster_id}/topics
¶ Create Topic
Creates a topic.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
POST /clusters/{cluster_id}/topics HTTP/1.1 Host: example.com Content-Type: application/json { "topic_name": "topic-X", "partitions_count": 64, "replication_factor": 3, "configs": [ { "name": "cleanup.policy", "value": "compact" }, { "name": "compression.type", "value": "gzip" } ] }
Status Codes: - 201 Created –
The created topic.
Example response:
HTTP/1.1 201 Created Content-Type: application/json { "kind": "KafkaTopic", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-X", "resource_name": "crn:///kafka=cluster-1/topic=topic-X" }, "cluster_id": "cluster-1", "topic_name": "topic-X", "is_internal": false, "replication_factor": 3, "partitions": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-X/partitions" }, "configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-X/configs" }, "partition_reassignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-X/partitions/-/reassignments" } }
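A curl sketch of the create-topic request above (the simplest possible creation, with only a topic name, appears in the curl examples later in this document); this assumes the localhost:8090/kafka/v3 base URL and the example cluster ID cluster-1:
curl --silent -X POST -H "Content-Type: application/json" \
  --data '{"topic_name": "topic-X", "partitions_count": 64, "replication_factor": 3, "configs": [{"name": "cleanup.policy", "value": "compact"}, {"name": "compression.type", "value": "gzip"}]}' \
  http://localhost:8090/kafka/v3/clusters/cluster-1/topics | jq
On success, the response echoes the created topic, as in the 201 example above.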
-
GET
/clusters/{cluster_id}/topics/{topic_name}
¶ Get Topic
Returns the topic with the given topic_name.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
Example request:
GET /clusters/{cluster_id}/topics/{topic_name} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The topic.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaTopic", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1", "resource_name": "crn:///kafka=cluster-1/topic=topic-1" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "is_internal": false, "replication_factor": 3, "partitions": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions" }, "configs": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/configs" }, "partition_reassignments": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/-/reassignments" } }
-
DELETE
/clusters/{cluster_id}/topics/{topic_name}
¶ Delete Topic
Deletes the topic with the given topic_name.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
Status Codes: - 204 No Content – No Content
Partition¶
-
GET
/clusters/{cluster_id}/topics/{topic_name}/partitions
¶ List Partitions
Returns the list of partitions that belong to the specified topic.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
Example request:
GET /clusters/{cluster_id}/topics/{topic_name}/partitions HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of partitions.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaPartitionList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions", "next": null }, "data": [ { "kind": "KafkaPartition", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=1" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 1, "leader": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas/1" }, "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas" }, "reassignment": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/reassignment" } }, { "kind": "KafkaPartition", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/2", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=2" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 2, "leader": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/2/replicas/2" }, "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/2/replicas" }, "reassignment": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/2/reassignment" } }, { "kind": "KafkaPartition", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/3", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=3" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 3, "leader": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/3/replicas/3" }, "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/3/replicas" }, "reassignment": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/3/reassignment" } } ] }
-
GET
/clusters/{cluster_id}/topics/{topic_name}/partitions/{partition_id}
¶ Get Partition
Returns the partition with the given partition_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
- partition_id (integer) – The partition ID.
Example request:
GET /clusters/{cluster_id}/topics/{topic_name}/partitions/{partition_id} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The partition
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaPartition", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=1" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 1, "leader": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas/1" }, "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas" }, "reassignment": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/reassignment" } }
-
GET
/clusters/{cluster_id}/topics/-/partitions/-/reassignment
¶ List All Replica Reassignments
Returns the list of all ongoing replica reassignments in the given Kafka cluster.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
GET /clusters/{cluster_id}/topics/-/partitions/-/reassignment HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The ongoing replica reassignments.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaReassignmentList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/-/partitions/-/reassignment", "next": null }, "data": [ { "kind": "KafkaReassignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/reassignment", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=1/reassignment" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 1, "adding_replicas": [ 1, 2 ], "removing_replicas": [ 3 ], "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas" } }, { "kind": "KafkaReassignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-2/partitions/2/reassignment", "resource_name": "crn:///kafka=cluster-1/topic=topic-2/partition=2/reassignment" }, "cluster_id": "cluster-1", "topic_name": "topic-2", "partition_id": 2, "adding_replicas": [ 1 ], "removing_replicas": [ 2, 3 ], "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-2/partitions/2/replicas" } }, { "kind": "KafkaReassignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-3/partitions/3/reassignment", "resource_name": "crn:///kafka=cluster-1/topic=topic-3/partition=3/reassignment" }, "cluster_id": "cluster-1", "topic_name": "topic-3", "partition_id": 3, "adding_replicas": [ 3 ], "removing_replicas": [ 1, 2 ], "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-3/partitions/3/replicas" } } ] }
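A curl sketch for listing all ongoing reassignments, assuming the localhost:8090/kafka/v3 base URL used in the curl examples later in this document; if no reassignments are in progress, the data array in the response is simply empty:
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/cluster-1/topics/-/partitions/-/reassignment | jq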
-
GET
/clusters/{cluster_id}/topics/{topic_name}/partitions/-/reassignment
¶ Search Replica Reassignments By Topic
Returns the list of ongoing replica reassignments for the given topic.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
Example request:
GET /clusters/{cluster_id}/topics/{topic_name}/partitions/-/reassignment HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The ongoing replica reassignments.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaReassignmentList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/-/partitions/-/reassignment", "next": null }, "data": [ { "kind": "KafkaReassignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/reassignment", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=1/reassignment" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 1, "adding_replicas": [ 1, 2 ], "removing_replicas": [ 3 ], "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas" } }, { "kind": "KafkaReassignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/2/reassignment", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=2/reassignment" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 2, "adding_replicas": [ 1 ], "removing_replicas": [ 2, 3 ], "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/2/replicas" } }, { "kind": "KafkaReassignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/3/reassignment", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=3/reassignment" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 3, "adding_replicas": [ 3 ], "removing_replicas": [ 1, 2 ], "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/3/replicas" } } ] }
-
GET
/clusters/{cluster_id}/topics/{topic_name}/partitions/{partition_id}/reassignment
¶ Get Replica Reassignments
Returns the list of ongoing replica reassignments for the given partition.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
- partition_id (integer) – The partition ID.
Example request:
GET /clusters/{cluster_id}/topics/{topic_name}/partitions/{partition_id}/reassignment HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The ongoing replica reassignments.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaReassignment", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/reassignment", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=1/reassignment" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 1, "adding_replicas": [ 1, 2 ], "removing_replicas": [ 3 ], "replicas": { "related": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas" } }
Replica¶
-
GET
/clusters/{cluster_id}/topics/{topic_name}/partitions/{partition_id}/replicas
¶ List Replicas
Returns the list of replicas for the specified partition.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
- partition_id (integer) – The partition ID.
Example request:
GET /clusters/{cluster_id}/topics/{topic_name}/partitions/{partition_id}/replicas HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of replicas.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaReplicaList", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas", "next": null }, "data": [ { "kind": "KafkaReplica", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas/1", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=1/replica=1" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 1, "broker_id": 1, "is_leader": true, "is_in_sync": true, "broker": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1" } }, { "kind": "KafkaReplica", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas/2", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=1/replica=2" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 1, "broker_id": 2, "is_leader": false, "is_in_sync": true, "broker": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/2" } }, { "kind": "KafkaReplica", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas/3", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=1/replica=3" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 1, "broker_id": 3, "is_leader": false, "is_in_sync": false, "broker": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/3" } } ] }
-
GET
/clusters/{cluster_id}/topics/{topic_name}/partitions/{partition_id}/replicas/{broker_id}
¶ Get Replica
Returns the replica for the specified partition assigned to the specified broker.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- topic_name (string) – The topic name.
- partition_id (integer) – The partition ID.
- broker_id (integer) – The Kafka broker ID.
Example request:
GET /clusters/{cluster_id}/topics/{topic_name}/partitions/{partition_id}/replicas/{broker_id} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The replica.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaReplica", "metadata": { "self": "http://localhost:9391/v3/clusters/cluster-1/topics/topic-1/partitions/1/replicas/1", "resource_name": "crn:///kafka=cluster-1/topic=topic-1/partition=1/replica=1" }, "cluster_id": "cluster-1", "topic_name": "topic-1", "partition_id": 1, "broker_id": 1, "is_leader": true, "is_in_sync": true, "broker": { "related": "http://localhost:9391/v3/clusters/cluster-1/brokers/1" } }
BrokerTask¶
-
GET
/clusters/{cluster_id}/brokers/-/tasks
¶ List Broker Tasks
Returns a list of all tasks for all brokers in the cluster specified with cluster_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
GET /clusters/{cluster_id}/brokers/-/tasks HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of tasks.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaBrokerTaskList", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/-/tasks", "next": null }, "data": [ { "kind": "KafkaBrokerTask", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1/tasks/add-broker", "resource_name": "crn:///kafka=cluster-1/broker=1/task=add-broker" }, "cluster_id": "cluster_id", "broker_id": 1, "task_type": "add-broker", "task_status": "SUCCESS", "sub_task_statuses": { "partition_reassignment_status": "COMPLETED" }, "created_at": "2019-10-12T10:20:40Z", "updated_at": "2019-10-12T10:20:45Z", "broker": { "related": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1" } }, { "kind": "KafkaBrokerTask", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/2/tasks/remove-broker", "resource_name": "crn:///kafka=cluster-1/broker=2/task=remove-broker" }, "cluster_id": "cluster_id", "broker_id": 2, "task_type": "remove-broker", "task_status": "FAILED", "sub_task_statuses": { "partition_reassignment_status": "ERROR", "broker_shutdown_status": "CANCELED" }, "created_at": "2019-10-12T07:20:50Z", "updated_at": "2019-10-12T07:20:55Z", "error_code": 10006, "error_message": "Error while computing the initial remove broker plan for brokers [2] prior to shutdown.", "broker": { "related": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/2" } } ] }
-
GET
/clusters/{cluster_id}/brokers/{broker_id}/tasks
¶ List Broker Tasks of a specific Broker
Returns a list of all broker tasks for the broker specified with broker_id in the cluster specified with cluster_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
Example request:
GET /clusters/{cluster_id}/brokers/{broker_id}/tasks HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of tasks.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaBrokerTaskList", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/-/tasks", "next": null }, "data": [ { "kind": "KafkaBrokerTask", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1/tasks/add-broker", "resource_name": "crn:///kafka=cluster-1/broker=1/task=add-broker" }, "cluster_id": "cluster_id", "broker_id": 1, "task_type": "add-broker", "task_status": "IN_PROGRESS", "sub_task_statuses": { "partition_reassignment_status": "IN_PROGRESS" }, "created_at": "2019-10-12T07:20:50Z", "updated_at": "2019-10-12T07:20:55Z", "broker": { "related": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1" } }, { "kind": "KafkaBrokerTask", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1/tasks/remove-broker", "resource_name": "crn:///kafka=cluster-1/broker=1/task=remove-broker" }, "cluster_id": "cluster_id", "broker_id": 1, "task_type": "remove-broker", "task_status": "FAILED", "sub_task_statuses": { "partition_reassignment_status": "ERROR", "broker_shutdown_status": "CANCELED" }, "created_at": "2019-10-12T07:20:50Z", "updated_at": "2019-10-12T07:20:55Z", "error_code": 10006, "error_message": "Error while computing the initial remove broker plan for brokers [1] prior to shutdown.", "broker": { "related": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1" } } ] }
-
GET
/clusters/{cluster_id}/brokers/-/tasks/{task_type}
¶ List Broker Tasks of a specific TaskType
Returns a list of all broker tasks of the specified task_type in the cluster specified with cluster_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- task_type (string) – The Kafka broker task type.
Example request:
GET /clusters/{cluster_id}/brokers/-/tasks/{task_type} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of tasks.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaBrokerTaskList", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/-/tasks", "next": null }, "data": [ { "kind": "KafkaBrokerTask", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1/tasks/add-broker", "resource_name": "crn:///kafka=cluster-1/broker=1/task=add-broker" }, "cluster_id": "cluster_id", "broker_id": 1, "task_type": "add-broker", "task_status": "IN_PROGRESS", "sub_task_statuses": { "partition_reassignment_status": "IN_PROGRESS" }, "created_at": "2019-10-12T07:20:50Z", "updated_at": "2019-10-12T07:20:55Z", "broker": { "related": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1" } }, { "kind": "KafkaBrokerTask", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/2/tasks/add-broker", "resource_name": "crn:///kafka=cluster-1/broker=2/task=add-broker" }, "cluster_id": "cluster_id", "broker_id": 2, "task_type": "add-broker", "task_status": "FAILED", "sub_task_statuses": { "partition_reassignment_status": "ERROR", "broker_shutdown_status": "CANCELED" }, "created_at": "2019-10-12T07:20:50Z", "updated_at": "2019-10-12T07:20:55Z", "error_code": 10006, "error_message": "Error while computing the initial add broker plan for brokers [2]", "broker": { "related": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/2" } } ] }
-
GET
/clusters/{cluster_id}/brokers/{broker_id}/tasks/{task_type}
¶ Get Single Broker Task
Returns a single broker task specified with task_type for the broker specified with broker_id in the cluster specified with cluster_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
- task_type (string) – The Kafka broker task type.
Example request:
GET /clusters/{cluster_id}/brokers/{broker_id}/tasks/{task_type} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The broker task
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaBrokerTask", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1/tasks/add-broker", "resource_name": "crn:///kafka=cluster-1/broker=1/task=1" }, "cluster_id": "cluster-1", "broker_id": 1, "task_type": "add-broker", "task_status": "FAILED", "sub_task_statuses": { "partition_reassignment_status": "ERROR" }, "created_at": "2019-10-12T07:20:50Z", "updated_at": "2019-10-12T07:20:55Z", "error_code": 10013, "error_message": "The Confluent Balancer operation was overridden by a higher priority operation", "broker": { "related": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1" } }
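A curl sketch for fetching one task, assuming the localhost:8090/kafka/v3 base URL used in the curl examples later in this document and the example broker and task type shown above:
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/cluster-1/brokers/1/tasks/add-broker | jq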
RemoveBrokerTask¶
-
GET
/clusters/{cluster_id}/remove-broker-tasks
¶ List Remove Broker Tasks
Returns a list of remove-broker-tasks for the specified Kafka cluster.
/remove-broker-tasks is deprecated and may be removed in a future release. Use the new /tasks API instead.
Parameters: - cluster_id (string) – The Kafka cluster ID.
Example request:
GET /clusters/{cluster_id}/remove-broker-tasks HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The list of remove broker tasks.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaRemoveBrokerTaskList", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/remove-broker-tasks", "next": null }, "data": [ { "kind": "KafkaRemoveBrokerTask", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/remove-broker-tasks/1", "resource_name": "crn:///kafka=cluster-1/remove-broker-task=1" }, "cluster_id": "cluster-1", "broker_id": 1, "partition_reassignment_status": "FAILED", "broker_shutdown_status": "CANCELED", "error_code": 10006, "error_message": "Error while computing the initial remove broker plan for brokers [1] prior to shutdown.", "broker": { "related": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1" } }, { "kind": "KafkaRemoveBrokerTask", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/remove-broker-tasks/2", "resource_name": "crn:///kafka=cluster-1/remove-broker-task=2" }, "cluster_id": "cluster-1", "broker_id": 2, "partition_reassignment_status": "FAILED", "broker_shutdown_status": "CANCELED", "error_code": 10006, "error_message": "Error while computing the initial remove broker plan for brokers [2] prior to shutdown.", "broker": { "related": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/2" } } ] }
-
GET
/clusters/{cluster_id}/remove-broker-tasks/{broker_id}
¶ Get Remove Broker Task
Returns the remove broker task for the specified broker_id.
Parameters: - cluster_id (string) – The Kafka cluster ID.
- broker_id (integer) – The Kafka broker ID.
Example request:
GET /clusters/{cluster_id}/remove-broker-tasks/{broker_id} HTTP/1.1 Host: example.com
Status Codes: - 200 OK –
The remove broker task.
Example response:
HTTP/1.1 200 OK Content-Type: application/json { "kind": "KafkaRemoveBrokerTask", "metadata": { "self": "http://localhost:9391/kafka/v3/clusters/cluster-1/remove-broker-tasks/1", "resource_name": "crn:///kafka=cluster-1/remove-broker-task=1" }, "cluster_id": "cluster-1", "broker_id": 1, "partition_reassignment_status": "FAILED", "broker_shutdown_status": "CANCELED", "error_code": 10006, "error_message": "Error while computing the initial remove broker plan for brokers [1] prior to shutdown.", "broker": { "related": "http://localhost:9391/kafka/v3/clusters/cluster-1/brokers/1" } }
REST API Usage Examples (curl)¶
This section provides a few examples of how to call the Confluent REST API using curl commands to quickly test API endpoints from the command line. These examples demonstrate the most recent API version, REST Proxy API v3, and the JSON serialization format (see Content Types).
(To test API calls for REST Proxy API v2, swap out v3 for v2 and remove the "/kafka" path segment; v2 commands do not include "kafka". Be sure to reference the v2 API documentation, as not all APIs shown for v3 are available in v2.)
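For instance, a minimal v2 sketch that lists topic names, assuming a standalone REST Proxy listening on its default port 8082 (adjust the host and port for your deployment), looks like this:
curl --silent -X GET -H "Accept: application/vnd.kafka.v2+json" http://localhost:8082/topics | jq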
Tip
For examples of how to call these same APIs from source code for an app, see the Confluent Admin REST APIs demo.
A few logistics to take note of:
- For your API testing, you may want to use jq along with the --silent flag for curl to get nicely formatted output for the given commands. These additional formatting options are used in the examples below.
- Although jq has powerful filtering capabilities, you can also pipe the curl and jq output through simple grep commands to further filter the results. This is demonstrated in the examples.
- To get and set values using the APIs, you must know the URL for your cluster and the cluster ID. You can get these from the Cluster Settings tab on Control Center (http://localhost:9021/ in your web browser for a local cluster), or from the API itself, as shown in the sketch after this list.
- The examples show the default host and port to access the Kafka cluster on a local host (localhost:8090).
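If you prefer to stay on the command line, the cluster ID can also be read directly from the clusters endpoint. This is a minimal sketch, assuming the same localhost:8090 listener used in the examples below and that only one cluster is returned:
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/ | jq -r '.data[0].cluster_id'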
List and describe known clusters¶
To list and describe a cluster, use the API endpoint GET /clusters as shown.
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/ | jq
Example and result:
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/ | jq
"kind": "KafkaClusterList",
"metadata": {
"self": "http://localhost:8090/kafka/v3/clusters",
"next": null
},
"data": [
{
"kind": "KafkaCluster",
"metadata": {
"self": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg",
"resource_name": "crn:///kafka=7cteo6omRwKaUFXj3BHxdg"
},
"cluster_id": "7cteo6omRwKaUFXj3BHxdg",
"controller": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/brokers/0"
},
"acls": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/acls"
},
"brokers": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/brokers"
},
"broker_configs": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/broker-configs"
},
"consumer_groups": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/consumer-groups"
},
"topics": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics"
},
"partition_reassignments": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/-/partitions/-/reassignment"
Tip
Currently, both Kafka and REST Proxy are only aware of the Kafka cluster pointed at by the bootstrap.servers configuration. Therefore, only one Kafka cluster will be returned in the response.
Create a topic¶
To create a topic, use the topics endpoint POST /clusters/{cluster_id}/topics as shown below.
curl --silent -X POST -H "Content-Type: application/json" \
--data '{"topic_name": "<topic-name>"}' http://localhost:8090/kafka/v3/clusters/<cluster-id>/topics | jq
Example and result:
curl --silent -X POST -H "Content-Type: application/json" \
--data '{"topic_name": "my-cool-topic"}' http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics | jq
"kind": "KafkaTopic",
"metadata": {
"self": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic",
"resource_name": "crn:///kafka=7cteo6omRwKaUFXj3BHxdg/topic=my-cool-topic"
},
"cluster_id": "7cteo6omRwKaUFXj3BHxdg",
"topic_name": "my-cool-topic",
"is_internal": false,
"replication_factor": 0,
"partitions": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/partitions"
},
"configs": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/configs"
},
"partition_reassignments": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/partitions/-/reassignment"
Describe a specified topic¶
To get a full description of a specified topic, use the topics endpoint GET /clusters/{cluster_id}/topics/{topic_name} as shown below.
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/<cluster-id>/topics/<topic-name> | jq
Example and result:
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic | jq
"kind": "KafkaTopic",
"metadata": {
"self": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic",
"resource_name": "crn:///kafka=7cteo6omRwKaUFXj3BHxdg/topic=my-cool-topic"
},
"cluster_id": "7cteo6omRwKaUFXj3BHxdg",
"topic_name": "my-cool-topic",
"is_internal": false,
"replication_factor": 1,
"partitions": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/partitions"
},
"configs": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/configs"
},
"partition_reassignments": {
"related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/partitions/-/reassignment"
List all topics¶
To list detailed descriptions of all topics (internal and user-created topics), use the topics endpoint GET /clusters/{cluster_id}/topics/.
Example:
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/<cluster-id>/topics | jq
This will provide a full description of every topic on the cluster, including replication factors, partitions, configs, and so forth. This output is similar to the Kafka command kafka-topics --describe (kafka-topics --describe --bootstrap-server localhost:9092).
List all topic names¶
To filter the topic list to show only topic names, use the endpoint GET /clusters/{cluster_id}/topics/ as shown.
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics | jq | grep '.topic_name'
This provides information similar to the Kafka command kafka-topics --list (kafka-topics --list --bootstrap-server localhost:9092).
List topics with a specified prefix¶
To list all topics with a specified prefix, use the endpoint GET /clusters/{cluster_id}/topics/ as shown.
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/<cluster-id>/topics | jq | grep '.topic_name' | grep '<prefix>'
Example and result:
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics | jq | grep '.topic_name' | grep 'my-'
"topic_name": "my-cool-topic",
"topic_name": "my-hot-topic",
Delete a topic¶
To delete a specified topic, use the API endpoint DELETE /clusters/{cluster_id}/topics/{topic_name}.
curl --silent -X DELETE http://localhost:8090/kafka/v3/clusters/<cluster-id>/topics/{<topic-name>}
Example:
curl --silent -X DELETE http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-legacy-topic
You can list topics again (or check on Control Center) to verify that the topic has been deleted.
List broker tasks¶
You can list the broker tasks by querying the endpoint GET /clusters/{cluster_id}/brokers/-/tasks as shown.
This call will provide more interesting information if, for example, you are running a multi-broker cluster with Self-Balancing Clusters enabled and processing enough data to generate active broker tasks.
To list tasks on all brokers:
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/<cluster-id>/brokers/-/tasks | jq
To list the tasks on a specified broker, for example broker 3:
curl --silent -X GET http://localhost:8090/kafka/v3/clusters/<cluster-id>/brokers/3/tasks | jq
See the Self-Balancing Tutorial or the Self-Balancing Clusters Demo (Docker) to experiment with Self-Balancing.
Blocklist¶
Confluent REST Proxy now includes a blocklist that limits which APIs are exposed. Operators can select which APIs are exposed in the REST Proxy interface using the api.endpoints.blocklist configuration property. For example:
api.endpoints.blocklist= api.v3.partitions.list,api.v3.cluster-configs.*
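As another illustration, the following sketch blocks all of the v2 produce APIs. The property names come from the table below; the configuration file name is an assumption (adjust it to your installation), and the change typically requires a REST Proxy restart to take effect:
# kafka-rest.properties (file location depends on your installation)
api.endpoints.blocklist=api.v2.produce-to-topic.*,api.v2.produce-to-partition.*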
Below is the full list of APIs that can be blocked using api.endpoints.blocklist.
Blockable APIs (v2 and v3) |
---|
api.v2.brokers.* |
api.v2.brokers.list |
api.v2.consumers.* |
api.v2.consumers.assign |
api.v2.consumers.commit-offsets |
api.v2.consumers.consume-avro |
api.v2.consumers.consume-binary |
api.v2.consumers.consume-json |
api.v2.consumers.consume-json-schema |
api.v2.consumers.consume-protobuf |
api.v2.consumers.create |
api.v2.consumers.delete |
api.v2.consumers.get-assignments |
api.v2.consumers.get-committed-offsets |
api.v2.consumers.get-subscription |
api.v2.consumers.seek-to-beginning |
api.v2.consumers.seek-to-end |
api.v2.consumers.seek-to-offset |
api.v2.consumers.subscribe |
api.v2.consumers.unsubscribe |
api.v2.partitions.* |
api.v2.partitions.get |
api.v2.partitions.get-offsets |
api.v2.partitions.list |
api.v2.produce-to-partition.* |
api.v2.produce-to-partition.avro |
api.v2.produce-to-partition.binary |
api.v2.produce-to-partition.json |
api.v2.produce-to-partition.json-schema |
api.v2.produce-to-partition.protobuf |
api.v2.produce-to-topic.* |
api.v2.produce-to-topic.avro |
api.v2.produce-to-topic.binary |
api.v2.produce-to-topic.json |
api.v2.produce-to-topic.json-schema |
api.v2.produce-to-topic.protobuf |
api.v2.root.* |
api.v2.root.get |
api.v2.root.post |
api.v2.topics.* |
api.v2.topics.get |
api.v2.topics.list |
api.v3.acls.* |
api.v3.acls.create |
api.v3.acls.delete |
api.v3.acls.list |
api.v3.broker-configs.* |
api.v3.broker-configs.alter |
api.v3.broker-configs.delete |
api.v3.broker-configs.get |
api.v3.broker-configs.list |
api.v3.broker-configs.update |
api.v3.brokers.* |
api.v3.brokers.get |
api.v3.brokers.list |
api.v3.cluster-configs.* |
api.v3.cluster-configs.alter |
api.v3.cluster-configs.delete |
api.v3.cluster-configs.get |
api.v3.cluster-configs.list |
api.v3.cluster-configs.update |
api.v3.clusters.* |
api.v3.clusters.get |
api.v3.clusters.list |
api.v3.consumer-assignments.* |
api.v3.consumer-assignments.get |
api.v3.consumer-assignments.list |
api.v3.consumer-groups.* |
api.v3.consumer-groups.get |
api.v3.consumer-groups.list |
api.v3.consumers.* |
api.v3.consumers.get |
api.v3.consumers.list |
api.v3.partition-reassignments.* |
api.v3.partition-reassignments.get |
api.v3.partition-reassignments.list |
api.v3.partition-reassignments.search-by-topic |
api.v3.partitions.* |
api.v3.partitions.get |
api.v3.partitions.list |
api.v3.replicas.* |
api.v3.replicas.get |
api.v3.replicas.list |
api.v3.replicas.search-by-broker |
api.v3.topic-configs.* |
api.v3.topic-configs.alter |
api.v3.topic-configs.delete |
api.v3.topic-configs.get |
api.v3.topic-configs.list |
api.v3.topic-configs.update |
api.v3.topics.* |
api.v3.topics.create |
api.v3.topics.delete |
api.v3.topics.get |
api.v3.topics.list |