Kafka Security & the Confluent Platform
The Schema Registry currently supports connecting to Kafka over SSL, and HTTPS for REST calls. However, SASL authentication for Kafka and ZooKeeper is not yet supported; it is planned for a future release.
The REST Proxy currently supports HTTPS for REST calls. However, Kafka and ZooKeeper security features are not yet supported; these are planned for a future release. A current workaround is to run a secured Kafka cluster that also accepts PLAINTEXT clients such as the REST Proxy.
Since 0.9.0, Kafka brokers can listen on multiple ports, each with its own configured security protocol. This is done by adding a PLAINTEXT port to the listeners configuration, but it is critical that access via this port be restricted to trusted clients only. Network segmentation and/or authorization ACLs can be used to restrict access to trusted IPs in such cases. If neither is used, the cluster is wide open and can be accessed by anyone.
Here is an example broker config with both PLAINTEXT and SSL ports enabled:
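A minimal sketch of such a broker configuration might look like the following; the hostname, ports, keystore paths, and passwords are all placeholders, not values from this document:

```properties
# server.properties (sketch): one PLAINTEXT and one SSL listener
listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092

# SSL settings for the SSL listener (placeholder paths and passwords)
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit
```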
When connecting to the PLAINTEXT port, you need to set security.protocol=PLAINTEXT in the producer and consumer properties. The documentation on adding security to a running cluster provides additional insight into broker support for multiple security protocols, as well as instructions for enabling security on a running cluster.
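For example, a producer or consumer connecting to the PLAINTEXT listener might use properties like the following sketch (the bootstrap address is a placeholder; clients connecting to the SSL port would instead set security.protocol=SSL along with truststore settings):

```properties
# client properties (sketch): connect to the unauthenticated PLAINTEXT port
bootstrap.servers=broker1:9091
security.protocol=PLAINTEXT
```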
Here are a few additional observations/strategies to keep in mind when securing your Kafka cluster along with Kafka-REST and Schema Registry:
- With respect to Kafka ACLs: One potential option is to allow access to the PLAINTEXT port only for internal apps/tools via network segmentation, and then assume that the ANONYMOUS user is always an internal user. Another option is to take advantage of the fact that Kafka ACLs can also be applied on a per-host basis. For details, see the reference to kafka-acls in the section on ACLs and authorization.
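As a sketch of the per-host approach, an ACL granting the ANONYMOUS user read access to a topic only from specific hosts might look like the following; the IP addresses, topic name, and ZooKeeper address are placeholders:

```shell
$ bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:ANONYMOUS \
    --allow-host 198.51.100.10 --allow-host 198.51.100.11 \
    --operation Read --topic test-topic
```

Repeating --allow-host restricts the grant to exactly those hosts; any other host connecting as ANONYMOUS is denied.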
- Schema Registry
Relatively few services need access to the Schema Registry, and they are likely internal, so you can restrict access via firewall rules and/or network segmentation.
Note that if you have enabled ZooKeeper authentication in your cluster, then you will need to create the topic that schemas are stored in manually. As before, you must provide the path to the JAAS login file:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=<path to JAAS conf file>"
$ bin/kafka-topics --create --topic _schemas --partitions 1 --replication-factor 3 --zookeeper <zookeeper host:port>
The topic must be configured with exactly one partition, and the replication factor should be greater than one (we recommend 3 for production environments). Note also that the topic name must match the Schema Registry configuration kafkastore.topic (the default is “_schemas”).
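If you create the topic under a non-default name, the Schema Registry's properties file must agree. A sketch, with a hypothetical topic name:

```properties
# schema-registry.properties (sketch): must match the topic created above
kafkastore.topic=_schemas
```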
Additionally, if you have enabled Kafka authorization, you will need to grant read and write access to this topic to the world.
$ export KAFKA_OPTS="-Djava.security.auth.login.config=<path to JAAS conf file>"
$ bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal 'User:*' --allow-host '*' --operation Read --topic _schemas
$ bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal 'User:*' --allow-host '*' --operation Write --topic _schemas
Once the Schema Registry has been updated with security support, these broad permissions can be narrowed.