.. _kafkarest_intro:
|crest-long| for Apache Kafka
=============================
The |crest-long| provides a RESTful interface to an |ak-tm| cluster, making it
easy to produce and consume messages, view the state of the cluster, and perform
administrative actions without using the native |ak| protocol or clients.
Some example use cases are:
* Reporting data to |ak| from any frontend app built in any language not supported by official `Confluent clients `_
* Ingesting messages into a stream processing framework that doesn’t yet support |ak|
* Scripting administrative actions
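For example, producing to a topic over HTTP reduces to one small JSON request against the proxy. A minimal sketch of the v2 produce request shape (the topic name, record contents, and default address ``localhost:8082`` are illustrative):

```python
import json

# Sketch of a v2 produce request using the JSON embedded format.
# The topic name and record contents are illustrative; the address
# assumes the proxy's default port, 8082.
topic = "jsontest"
url = f"http://localhost:8082/topics/{topic}"
headers = {"Content-Type": "application/vnd.kafka.json.v2+json"}
body = {"records": [{"key": "user-42", "value": {"clicks": 3}}]}
payload = json.dumps(body)
# POSTing `payload` with `headers` to `url` (for example with curl or
# any HTTP client) writes the record to the topic.
```

Any HTTP client in any language can issue this request, which is what makes the proxy useful for platforms without a native |ak| client.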
There is a plugin available for |crest| that helps authenticate incoming
requests and propagates the authenticated principal to requests to |ak|. This
enables |crest| clients to utilize the multi-tenant security features of
the |ak| broker. For more information, see :ref:`kafkarest_security` and
:ref:`kafka-rest-security-plugins-install`.
The Admin REST APIs allow you to create and manage topics, manage MDS, and produce to and consume from topics.
The Admin REST APIs are available in these forms:
- You can deploy :ref:`Confluent Server `,
which exposes Admin REST APIs directly on the brokers by default. |cs| is shipped with |cpe|.
- Admin REST APIs are being incrementally added to :cloud:`Confluent Cloud|overview.html`,
as documented at :cloud:`Confluent Cloud|api.html`.
- You can deploy standalone :ref:`Kafka REST Proxy nodes `, which in addition to Produce and Consume APIs, also
offer Admin REST APIs as of :ref:`API v3 `.
.. include:: ../.hidden/docs-common/home/includes/cloud-platform-cta.rst
API availability
----------------
.. include:: includes/server-vs-standalone.rst
Features
--------
The following functionality is currently exposed and available through Confluent REST APIs.
* **Metadata** - Most metadata about the cluster -- brokers, topics,
partitions, and configs -- can be read using ``GET`` requests for the
corresponding URLs.
* **Producers** - Instead of exposing producer objects, the API accepts produce
requests targeted at specific topics or partitions and routes them all through
a small pool of producers.
* Producer configuration - Producer instances are shared, so configs cannot
be set on a per-request basis. However, you can adjust settings globally by
passing new producer settings in the REST Proxy configuration. For example,
you might pass in the ``compression.type`` option to enable site-wide
compression to reduce storage and network overhead.
* **Consumers** - Consumers are stateful and therefore tied to specific REST
  Proxy instances. Offset commits can be either automatic or explicitly
  requested by the user. Consumers are currently limited to one thread each;
  use multiple consumers for higher throughput. The REST Proxy uses either the
  high-level consumer (v1 API) or the new 0.9 consumer (v2 API) to implement
  consumer groups that can read from topics. Note that the v1 API has been
  marked for deprecation.
* Consumer configuration - Although consumer instances are not shared, they do
share the underlying server resources. Therefore, limited configuration
options are exposed via the API. However, you can adjust settings globally
by passing consumer settings in the REST Proxy configuration.
* **Data Formats** - The |crest| can read and write data using JSON, raw bytes
  encoded with base64, or JSON-encoded Avro, Protobuf, or JSON Schema. With
  Avro, Protobuf, or JSON Schema, schemas are registered and validated against
  |sr|.
* **REST Proxy Clusters and Load Balancing** - The |crest| is designed to
support multiple instances running together to spread load and can safely be
run behind various load balancing mechanisms (e.g. round robin DNS, discovery
services, load balancers) as long as instances are
:ref:`configured correctly`.
* **Simple Consumer** - The high-level consumer should generally be
preferred. However, it is occasionally useful to use low-level read
operations, for example to retrieve messages at specific offsets.
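The consumer lifecycle described above maps onto a short sequence of HTTP calls: create a consumer instance in a group, subscribe it to topics, poll for records, and delete the instance when done. A sketch of the v2 request shapes as (method, path, body) tuples; the group, instance, and topic names are hypothetical and the address assumes the proxy's defaults:

```python
# Sketch of the v2 consumer workflow. Group name "my-group", instance
# name "ci-1", and topic "jsontest" are hypothetical placeholders.
base = "http://localhost:8082/consumers/my-group"

create = ("POST", base,
          {"name": "ci-1", "format": "json", "auto.offset.reset": "earliest"})
subscribe = ("POST", f"{base}/instances/ci-1/subscription",
             {"topics": ["jsontest"]})
# Polling is a GET with an Accept header of application/vnd.kafka.json.v2+json.
poll = ("GET", f"{base}/instances/ci-1/records", None)
# Deleting the instance lets the proxy free the consumer's resources.
close = ("DELETE", f"{base}/instances/ci-1", None)
```

Because the instance is tied to one proxy node, all follow-up calls must go to the host that handled the create request.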
.. include:: ../includes/cp-demo-tip.rst
.. _kafka-rest-admin-features:
* **Admin operations** - With the :ref:`API v3 `, you can create
or delete topics, and update or reset topic configurations. For hands-on examples,
see the `Confluent Admin REST APIs demo `__.
(To start the demo, clone the Confluent `demo-scene `__
repository from GitHub then follow the guide for the `Confluent Admin REST APIs demo `__.)
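As a concrete sketch, creating a topic through API v3 is a single POST whose body names the topic, its partition count, replication factor, and any configs. The cluster ID below is a placeholder; real IDs come from ``GET /v3/clusters``:

```python
import json

# Sketch of an API v3 topic-creation request. CLUSTER_ID is a
# placeholder value, and the address assumes the proxy's defaults.
cluster_id = "CLUSTER_ID"
url = f"http://localhost:8082/v3/clusters/{cluster_id}/topics"
body = {
    "topic_name": "orders",
    "partitions_count": 3,
    "replication_factor": 3,
    "configs": [{"name": "cleanup.policy", "value": "compact"}],
}
payload = json.dumps(body)
# POST `payload` with Content-Type: application/json to create the topic;
# a DELETE to /v3/clusters/{cluster_id}/topics/orders removes it again.
```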
Just as important, here's a list of features that *aren't* yet supported:
* **Multi-topic Produce Requests** - Currently each produce request may only
  address a single topic or topic-partition. Most use cases do not require
  multi-topic produce requests; they would introduce additional complexity into
  the API, and clients can easily split data across multiple requests if
  necessary.
* **Most Producer/Consumer Overrides in Requests** - Only a few key overrides are exposed in
the API (but global overrides can be set by the administrator). The reason is
two-fold. First, proxies are multi-tenant and therefore most user-requested
overrides need additional restrictions to ensure they do not impact other
  users. Second, tying the API too closely to the implementation restricts
  future API improvements; this is especially important with the upcoming
  consumer implementation.
* **Allow different serializers for key and value** - Currently, |crest|
  chooses the serializer based on the ``Content-Type`` header; as a result, the
  key and value serializers must be the same.
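To illustrate the constraint, in the binary embedded format the single ``Content-Type`` applies to both halves of each record: key and value must each be a base64 string. A minimal sketch:

```python
import base64
import json

# One Content-Type selects the serializer for the whole request, so key
# and value share an encoding. In the binary embedded format, both must
# be base64 strings.
headers = {"Content-Type": "application/vnd.kafka.binary.v2+json"}
record = {
    "key": base64.b64encode(b"user-42").decode("ascii"),
    "value": base64.b64encode(b"raw bytes").decode("ascii"),
}
payload = json.dumps({"records": [record]})
```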
Installation
------------
Before starting the |crest| you must start |ak| and |sr|. For installation instructions, see :ref:`installation`.
Deployment
----------
Starting the |crest-long| service is simple once its dependencies are
running:
.. sourcecode:: bash

   # Start the REST Proxy. The default settings automatically work with the
   # default settings for local Kafka nodes.
   bin/kafka-rest-start etc/kafka-rest/kafka-rest.properties
If you installed Debian or RPM packages, you can simply run ``kafka-rest-start``
as it will be on your ``PATH``. The ``kafka-rest.properties`` file contains
:ref:`configuration settings`. The default configuration
included with the REST Proxy has convenient defaults for a local testing
setup and should be modified for a production deployment. By default, the server
starts bound to port 8082 and does not specify a unique instance ID (required to
safely run multiple proxies concurrently).
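For instance, a production-oriented ``kafka-rest.properties`` might set a unique instance ID and an explicit listener. The values below are illustrative, and ``bootstrap.servers`` should point at your own brokers:

```properties
# Illustrative production settings; adjust for your environment.
id=kafka-rest-1
listeners=http://0.0.0.0:8082
bootstrap.servers=PLAINTEXT://broker1:9092,PLAINTEXT://broker2:9092
```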
If you started the service in the background, you can use the following
command to stop it:
.. sourcecode:: bash

   bin/kafka-rest-stop
Development
-----------
To build a development version, you may need development versions of
`common `_,
`rest-utils `_, and
`schema-registry `_. After
installing these, you can build the |crest-long|
with Maven. All the standard lifecycle phases work. During development, use
.. sourcecode:: bash

   mvn -f kafka-rest/pom.xml compile
to build,
.. sourcecode:: bash

   mvn -f kafka-rest/pom.xml test
to run the unit and integration tests, and
.. sourcecode:: bash

   mvn exec:java
to run an instance of the proxy against a local |ak| cluster (using the default
configuration included with |ak|).
To create a packaged version, optionally skipping the tests:
.. sourcecode:: bash

   mvn -f kafka-rest/pom.xml package [-DskipTests]
This will produce a version ready for production in
``target/kafka-rest-$VERSION-package`` containing a directory layout similar
to the packaged binary versions. You can also produce a standalone fat jar using the
``standalone`` profile:
.. sourcecode:: bash

   mvn -f kafka-rest/pom.xml package -P standalone [-DskipTests]
generating
``target/kafka-rest-$VERSION-standalone.jar``, which includes all the
dependencies as well.
To run a local |ak| and |crest| cluster for testing:

.. sourcecode:: bash

   ./testing/environments/minimal/run.sh
Suggested Resources
-------------------
- To learn more see this tutorial: `Getting Started with Apache Kafka and Confluent REST Proxy `__.
- Blog post: `Putting Apache Kafka to REST: Confluent REST Proxy 6.0 `__
- Blog post: `Use Cases and Architectures for HTTP and REST APIs with Apache Kafka `__
- `Confluent Admin REST APIs demo `__