March 24, 2022

How (and Why) to Use Kafka With ZooKeeper


Apache Kafka and ZooKeeper are two of the most successful open source products in the computing world today. From small startup organizations to Fortune 100 companies, businesses rely on Kafka with ZooKeeper for their data streaming needs.

In this blog, we discuss how Kafka and ZooKeeper combine to facilitate streaming data, and the emerging features in Kafka (like KRaft Mode) that could change how teams approach problems currently solved with ZooKeeper.


Apache Kafka and ZooKeeper Overview

Originally developed by LinkedIn, Kafka was open sourced in 2011 through the Apache Software Foundation, where it graduated from the Apache Incubator on October 23, 2012. The distributed event store and stream-processing platform was named after the author Franz Kafka by its lead developer, Jay Kreps, because it is a “system optimized for writing.”

ZooKeeper, on the other hand, was originally developed by Yahoo in the early 2000s and started out as a Hadoop sub-project. It was built to manage and streamline big data cluster processes and to fix bugs that occurred during the deployment of distributed clusters. In 2008, it was donated to the Apache Software Foundation and was promoted soon thereafter to a top-level ASF project.

What Is Apache Kafka?

Kafka is a distributed event store and stream-processing platform; put simply, it takes data from producers and streams it out to consumers.

These producers and consumers can also be thought of as “inputs” and “outputs,” where data is taken from an “input” system and consumed by an “output” system.
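As a concrete illustration, here is a minimal producer sketch using the Kafka Java client. The broker address (localhost:9092) and the topic name (orders) are assumptions for the example, not anything prescribed by Kafka itself:

```java
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

// Minimal sketch: an "input" system pushing records into Kafka.
// Broker address and topic name are illustrative assumptions.
public class InputProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Each record carries a key and a value onto the topic.
            producer.send(new ProducerRecord<>("orders", "order-1", "created"));
        }
    }
}
```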

Kafka Brokers

The main vehicle for this movement of data is the Kafka broker. The Kafka broker handles all requests from all clients (producers and consumers, as well as metadata requests). It also manages the replication of data across the cluster, as well as within topics and partitions.
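Brokers answer metadata requests directly. As a rough sketch, the Java Admin API can ask any broker to describe the cluster it belongs to (the broker address is again an assumption):

```java
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.Node;
import java.util.Properties;

// Minimal sketch: a metadata request served by the brokers.
// Broker address is an illustrative assumption.
public class DescribeCluster {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            for (Node node : cluster.nodes().get()) {
                System.out.println("Broker " + node.id() + " at " + node.host() + ":" + node.port());
            }
        }
    }
}
```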

[Diagram: a typical Kafka cluster]

Kafka Topics

A Kafka topic is a grouping of messages used to organize them for production and consumption. A producer places messages, or records, onto a given topic, and a consumer then reads those records from the same topic. A topic is further broken down into partitions that house a number of records, each identified by a unique offset in the partition log.

These records are stored as an immutable sequence, and they can be spread across multiple Kafka partitions as well as multiple brokers. This parallelism is what allows multiple consumers to read simultaneously across an entire cluster, at rates ranging from tens to hundreds of thousands of records per second. This functionality makes Kafka second to none when it comes to sheer speed and throughput.
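To see partitions and offsets in practice, here is a minimal consumer sketch. The broker address, group ID (order-readers), and topic name (orders) are illustrative assumptions:

```java
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

// Minimal sketch: an "output" system reading records and their offsets.
// Broker address, group id, and topic name are illustrative assumptions.
public class OutputConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-readers");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        try (Consumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            // Each record reports which partition it lives in and its offset there.
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("partition %d, offset %d: %s%n",
                                  record.partition(), record.offset(), record.value());
            }
        }
    }
}
```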

To coordinate all of this data flow between consumers, producers, and brokers, Kafka leverages Apache ZooKeeper.


What Is Apache ZooKeeper?

ZooKeeper is utilized by several open-source projects to provide a highly reliable control plane for distributed coordination of clustered applications through a hierarchical key-value store. The suite of services provided by ZooKeeper includes distributed configuration services, synchronization services, leader election services, and a naming registry.

Other projects besides Kafka that utilize ZooKeeper services include Hadoop, HBase, Solr, Spark, and NiFi, among others.

A core concept of ZooKeeper is the znode. A ZooKeeper znode is a data node tracked by a stat structure that records data changes, ACL changes, a version number, and timestamps. The version number, along with the timestamp, allows ZooKeeper to coordinate updates and cache information. Each change to a znode’s data results in a version change, which is used to make sure znode changes are applied correctly.
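As a small illustration of the stat structure and versioning, here is a sketch using the ZooKeeper Java client; the connection string and the /demo path are assumptions for the example:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Minimal sketch: create a znode, update it, and watch its version climb.
// Connection string and znode path are illustrative assumptions.
public class ZnodeVersions {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10_000, event -> { });

        zk.create("/demo", "v1".getBytes(),
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        Stat stat = new Stat();
        zk.getData("/demo", false, stat);
        System.out.println("version=" + stat.getVersion()); // 0 after creation

        // setData takes the expected version, so out-of-date updates are rejected.
        zk.setData("/demo", "v2".getBytes(), stat.getVersion());
        zk.getData("/demo", false, stat);
        System.out.println("version=" + stat.getVersion()); // now 1

        zk.close();
    }
}
```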



How Kafka and ZooKeeper Work Together

Kafka and ZooKeeper work in conjunction to form a complete Kafka cluster, with ZooKeeper providing the aforementioned distributed clustering services and Kafka handling the actual data streams and connectivity to clients.

At a detailed level, ZooKeeper handles leader election for Kafka brokers and manages service discovery as well as cluster topology, so that each broker knows when brokers have entered or exited the cluster, when a broker dies, and which broker is the preferred leader for a given topic/partition pair. It also tracks when topics are created or deleted from the cluster and maintains a topic list. In general, ZooKeeper provides an in-sync view of the Kafka cluster.
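As a rough sketch of that in-sync view, the ZooKeeper Java client can read the paths a ZooKeeper-mode Kafka cluster maintains. The connection string is an assumption; /brokers/topics and /controller are the standard paths used by ZooKeeper-based Kafka:

```java
import org.apache.zookeeper.ZooKeeper;
import java.util.List;

// Minimal sketch: the cluster view ZooKeeper maintains for Kafka.
// Assumes a ZooKeeper ensemble at localhost:2181 backing a running,
// ZooKeeper-mode Kafka cluster.
public class ZkClusterView {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10_000, event -> { });

        // The topic list tracked by ZooKeeper.
        List<String> topics = zk.getChildren("/brokers/topics", false);
        System.out.println("Topics: " + topics);

        // The current controller, stored as JSON in the /controller znode.
        byte[] controller = zk.getData("/controller", false, null);
        System.out.println("Controller: " + new String(controller));

        zk.close();
    }
}
```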

Kafka, on the other hand, is dedicated to handling the actual connections from the clients (producers and consumers), as well as managing the topic logs, topic log partitions, consumer groups, and individual offsets.
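For the Kafka side of that division of labor, a minimal sketch with the Java Admin API can list the committed offsets a broker tracks for a consumer group. The broker address and the group name (my-group) are assumptions:

```java
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import java.util.Map;
import java.util.Properties;

// Minimal sketch: consumer groups and committed offsets, managed by Kafka itself.
// Broker address and group name are illustrative assumptions.
public class GroupOffsets {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            Map<TopicPartition, OffsetAndMetadata> offsets =
                admin.listConsumerGroupOffsets("my-group")
                     .partitionsToOffsetAndMetadata().get();
            offsets.forEach((tp, om) ->
                System.out.println(tp + " -> offset " + om.offset()));
        }
    }
}
```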

Controller Election

The Kafka controller is a broker, elected with the help of ZooKeeper, that is responsible for managing partitions, partition leadership, and replicas, in addition to performing general cluster housekeeping such as partition assignments and maintaining an in-sync replica (ISR) list.

The controller election relies heavily on ZooKeeper, and there can only be one controller at a time. To elect the controller, each broker attempts to create an “ephemeral node” (a znode that persists only as long as the session that created it remains active) called /controller. The first broker to create this ephemeral node assumes the role of controller, and each successive broker request receives a “node already exists” message. Once the controller is established, it is assigned a “controller epoch,” which is a versioning mechanism. The current controller epoch is broadcast across the cluster, and if a broker receives a controller request from an older controller epoch, it is ignored. If a controller failure occurs, the same election process is repeated, and a newer controller epoch is created and broadcast to the cluster.
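The ephemeral-node race described above can be sketched directly with the ZooKeeper Java client. This is illustrative only: real brokers write a JSON payload to /controller and manage epochs, and in a live cluster the node would already exist.

```java
import org.apache.zookeeper.*;

// Minimal sketch of the ephemeral-node election pattern described above.
public class ControllerElection {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10_000, event -> { });
        byte[] brokerId = "1".getBytes();
        try {
            // EPHEMERAL: the znode disappears when this session ends,
            // which is what triggers re-election on controller failure.
            zk.create("/controller", brokerId,
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            System.out.println("Won election: acting as controller");
        } catch (KeeperException.NodeExistsException e) {
            System.out.println("Another broker is already the controller");
        }
        // Closing the session releases the ephemeral node, allowing a new election.
        zk.close();
    }
}
```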

Cluster Membership

A Kafka cluster consists of a group of brokers, each identified by a unique numeric ID. When the brokers connect to their configured ZooKeeper instances, a group znode is created, with each broker creating an ephemeral znode under that group znode. If a broker fails, the ephemeral nature of its znode means it is removed from the group znode. This process is repeated as brokers enter and exit the cluster.
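A minimal sketch of observing that membership, assuming the standard /brokers/ids path and a local ZooKeeper, might set a watch for broker entry and exit:

```java
import org.apache.zookeeper.*;
import java.util.List;

// Minimal sketch: watch the broker membership znode for changes.
// Assumes the standard /brokers/ids path used by ZooKeeper-based Kafka.
public class MembershipWatch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10_000, event -> { });

        Watcher watcher = event -> {
            if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
                System.out.println("A broker joined or left the cluster");
            }
        };
        // The watch fires once on the next membership change.
        List<String> brokers = zk.getChildren("/brokers/ids", watcher);
        System.out.println("Current brokers: " + brokers);

        Thread.sleep(60_000); // keep the session alive to observe changes
        zk.close();
    }
}
```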

Topic Configuration

For each Kafka topic there is a set of topic configurations, which can be set both per topic and globally. Each topic configuration has a default value that can be overridden at the global or per-topic level. These topic configurations include, but are not limited to, the replication factor, which sets the number of replicas required for a given topic. Maximum message size, flush rate (which forces an fsync of data to the logs after a set number of messages), unclean leader election, and message retention are also common topic configuration items.
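As an illustration, per-topic overrides can be supplied when creating a topic through the Java Admin API. The topic name, partition and replica counts, and config values below are assumptions chosen to mirror the settings mentioned above:

```java
import org.apache.kafka.clients.admin.*;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

// Minimal sketch: create a topic with per-topic config overrides.
// Topic name, counts, and values are illustrative assumptions.
public class CreateTopicWithConfigs {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("orders", 6, (short) 3) // 6 partitions, replication factor 3
                .configs(Map.of(
                    "retention.ms", "604800000",      // keep messages 7 days
                    "max.message.bytes", "1048576",   // 1 MiB max message size
                    "unclean.leader.election.enable", "false"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```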

Access Control Lists

Apache Kafka comes with a built-in authorizer that leverages ZooKeeper to store access control lists, or ACLs. ACLs are a convenient way to manage access to Kafka resources such as topics and consumer groups. If an authorizer is enabled in the Kafka server.properties, setting ACLs is required, because resources that have no associated ACL are then accessible only by super users.
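For example, a sketch using the Java Admin API could grant a principal read access to a topic; the principal (User:alice) and topic name are illustrative assumptions:

```java
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.acl.*;
import org.apache.kafka.common.resource.*;
import java.util.Collections;
import java.util.Properties;

// Minimal sketch: grant a principal read access to a topic via the Admin API.
// The principal, host wildcard, and topic name are illustrative assumptions.
public class GrantRead {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            AclBinding binding = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "orders", PatternType.LITERAL),
                new AccessControlEntry("User:alice", "*",
                                       AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singleton(binding)).all().get();
        }
    }
}
```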

Quotas

Kafka brokers have the ability to control the broker resources utilized by clients in the form of quotas. Kafka implements two main types of client quotas to this end: network bandwidth quotas, defined by a byte-rate threshold, and request rate quotas, defined by a CPU utilization threshold as a percentage of network and I/O threads. Runaway consumers and producers can, in effect, DoS a given broker, group of brokers, or even other clients; setting quotas can prevent this from happening.

Quotas can apply to user+client-id combinations or to user or client-id groups, and the most specific quota matching a given connection is applied. For each connection in a quota group, the quota configuration is shared across the entire group. For example, if (user="my-user", client-id="my-client") has a produce quota of 10MB/sec, this is shared across all producer instances of user "my-user" with the client-id "my-client". Default quotas can be overridden at any quota level that needs to be lower or higher than the default, similar to per-topic configuration overrides.
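The 10MB/sec example above could be configured with the Java Admin API along these lines (the broker address is an assumption; producer_byte_rate is the quota key for produce throughput):

```java
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.quota.*;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

// Minimal sketch: set a 10MB/sec produce quota for
// (user="my-user", client-id="my-client"). Broker address is assumed.
public class SetQuota {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            ClientQuotaEntity entity = new ClientQuotaEntity(Map.of(
                ClientQuotaEntity.USER, "my-user",
                ClientQuotaEntity.CLIENT_ID, "my-client"));
            ClientQuotaAlteration alteration = new ClientQuotaAlteration(entity,
                List.of(new ClientQuotaAlteration.Op(
                    "producer_byte_rate", 10.0 * 1024 * 1024))); // bytes/sec
            admin.alterClientQuotas(Collections.singleton(alteration)).all().get();
        }
    }
}
```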


Is ZooKeeper Still Useful for Kafka Deployments?

With the most recent releases of Kafka, KIP-500 (or “ZooKeeper removal”) has been introduced to the mainline releases. It arrived as an “early-access” item that was not ready for production due to a few lingering issues: upgrading a ZooKeeper-based cluster was not supported, support for some topic configurations was still lacking, and some security features were missing.

With that in mind, ZooKeeper will continue to be relevant to production clusters for quite some time. However, with some of the improvements made in KIP-500 recently, it’s probably not a bad idea for most organizations to start those upgrade/migration wheels turning.

KIP-500

KIP-500, also known as “Replace ZooKeeper with a Metadata Quorum,” was created on October 30, 2019, first merged into version 2.8, and released on April 19, 2021. This KIP introduces the Kafka Raft, or KRaft, mode of self-managed metadata quorum and completely removes the need for ZooKeeper to manage a Kafka cluster. One of the most important improvements in KIP-500 is significantly decreased rebalancing times across clusters, making the entry and exit of brokers faster and the cluster much more fault tolerant.

KIP-500 was released as an “early-access” item and was considered not suitable for production in the 3.0 and 3.1 releases. In the 3.3 release notes, KRaft was marked production-ready.

Kafka Raft Metadata Mode

When running Kafka in “KRaft mode,” instead of relying on ZooKeeper for elections, Kafka uses the Raft consensus protocol internally among the brokers.
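As a rough sketch of what this looks like in configuration, a single-node KRaft sandbox might carry settings like the following in its server.properties. The node ID, ports, and the combined broker/controller role are assumptions for illustration:

```properties
# KRaft mode: this node acts as both broker and controller (sandbox setup)
process.roles=broker,controller
node.id=1
# Servers eligible for Raft leadership, as id@host:port
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
```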

In Raft consensus, a broker is either a leader or a follower, and the leader is responsible for replicating the log to the followers. The leader sends regular heartbeats to inform followers of its status. Each follower has a timeout setting; if a heartbeat is not received within that timeout, the follower changes its status to “candidate” and requests that a leader election occur.

Instead of a “controller epoch,” in KRaft mode a leader is denoted by its “term”; like the controller epoch, a new term is initialized whenever a new leader is elected. In KRaft mode, only a small group of pre-selected servers can become the leader. These servers eligible for leadership may consist of all the brokers in a smaller cluster, or a subset of brokers in a larger cluster.


Final Thoughts

Kafka and ZooKeeper make a powerful combination, and work together to power big-data engines for large and small organizations around the world. However, with the introduction of KIP-500 and major API changes in 3.1.0, the Kafka landscape is changing significantly. The developers have indicated that ZooKeeper mode will be deprecated in Kafka 3.5 and that only KRaft mode will be supported in Kafka 4.0 (estimated release date: August 2023).

If you're considering a Kafka and ZooKeeper deployment, be sure to think through what these changes might mean for your organization. If you have questions along the way, our experts are standing by to help.

Editor's Note: This blog was updated in October 2022 to include information about the current Kafka release (3.3) and planned future releases. 
