My Learning Notes on Apache Kafka

 

Why Apache Kafka?

  • Distributed
  • Resilient
  • Fault Tolerant
  • Horizontally scalable:
    • Can scale to 100s of brokers
    • Can scale to millions of messages per second
  • High performance (latency of less than 10 ms) – real time
  • High Throughput
  • Open Source

 

Use cases of Kafka:

  • Messaging System – Millions of messages can be sent and received in real time, using Kafka.
  • Activity Tracking – Kafka can be used to aggregate user activity data such as clicks, navigation and searches from the different websites of an organization; such user activity can be sent to a real-time monitoring system and to a Hadoop system for offline processing.
  • Real Time Stream Processing – Kafka can be used to process a continuous stream of information in real time and pass it to stream processing systems such as Storm.
  • Log Aggregation – Kafka can be used to collect physical log files from multiple systems and store them in a central location such as HDFS.
  • Commit Log Service – Kafka can be used as an external commit log for distributed systems.
  • Event Sourcing – A time ordered sequence of events can be maintained through Kafka.
  • Gather metrics from many different locations
  • Application Logs gathering
  • Stream processing (with the Kafka Streams API or Spark for example)
  • Decoupling of system dependencies
  • Integration with Spark, Flink, Storm, Hadoop and many other Big Data technologies

 

Kafka solves the following problems:

  • How data is transported with different protocols (TCP, HTTP, REST, FTP, JDBC, gRPC, etc.)
  • How data is parsed for different data formats (Binary, CSV, JSON, Avro, Thrift, Protocol Buffers, etc.)
  • How data is shaped and may change.

Decoupling of Data Streams & Systems

Source Target Decoupling

 

Decoupling Kafka

 

 

Apache Kafka Architecture:

 

Kafka Architecture

 

Kafka Big Picture

 

Kafka Messaging

KAFKA JARGON:

BROKERS, PRODUCERS, CONSUMERS, TOPICS, PARTITIONS AND OFFSETS:

Topics:

A topic is a particular stream of data.

  • Similar to a table in a database (without all the constraints)
  • You can have as many topics as you want
  • A topic is identified by its name.
  • Topics are split into Partitions.

Example:

Topics Example

 

Partitions:

  • Partitions are similar to columns in a table.
  • Each partition is ordered.
  • Each message within a partition gets an incremental id, called offset.
  • You as a user have to specify the number of partitions for a topic (see the creation sketch below).
  • The first message to a partition gets offset 0, and each later message gets the next incremental offset; offsets are unbounded and keep growing.
  • Each partition can have a different number of messages (and therefore a different latest offset), since partitions are independent.

Partitions Overview
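For illustration, here is a minimal sketch (my own, not from the course) of creating a topic with a chosen number of partitions using the Java AdminClient. The broker address, the topic name "my-topic", and the partition and replication counts are placeholders:

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    import java.util.Collections;
    import java.util.Properties;

    public class CreateTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder address: any reachable broker works as a bootstrap broker
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // Placeholder topic: 3 partitions, replication factor 1
                NewTopic topic = new NewTopic("my-topic", 3, (short) 1);
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
        }
    }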

 

Offsets & a Few Gotchas:

  • Offsets are like an auto-incrementing primary key: they keep increasing and cannot be changed or updated.
  • Offsets only have a meaning for a specific partition.
    • Which means Partition 0, Offset 0 is different from Partition 1, Offset 0.
  • Order is guaranteed only within a partition (not across partitions).
  • Data in Kafka is kept only for a limited time (the default retention period is one week).
  • Offsets keep incrementing; they never go back to 0.
  • Once data is written to a partition, it can't be changed (data within a partition is immutable). If you want to write a new message, you write it at the end.
  • Data is assigned randomly to a partition unless a key is provided.

 

Brokers:

  • A Kafka cluster is composed of multiple brokers (servers)
  • Each broker is identified by an integer ID; brokers cannot be given names like “My Broker”.
  • Each broker contains certain topic partitions. Each broker holds some of the data, but not all of it, because Kafka is distributed.
  • After connecting to any broker (called a bootstrap broker), you will be connected to the entire cluster.
  • A good number to get started is 3 brokers, but some big clusters have over 100 brokers.

Cluster of Brokers Example:

Cluster of Brokers

 

Brokers and Topics

  • There is no relationship between the Broker number and the Partition Number
  • If a topic has more partitions than there are brokers, then some broker (effectively chosen at random) will hold more than one partition of that topic, so that all of the topic's partitions are placed.
    • Ex: if Topic-C has 4 partitions and there are 3 brokers (101, 102, 103), one of the brokers will hold two of its partitions.

Broker and Topic Partitions

Topic Replication Factor:

  • Topics should have a replication factor > 1 (usually between 2 and 3)
  • This way if a broker is down, another broker can serve the data.

 

Topic Replication Factor

Topic Replication Factor Failure

 

Concept of Leader for a Partition

  • At any given time, ONLY ONE broker can be leader for a given partition.
  • ONLY that leader can receive and serve data for a partition
  • The other brokers will synchronize the data.
  • Therefore each partition has one leader and multiple ISRs (in-sync replicas).
  • If Broker 101 is lost, an ELECTION happens and Partition 0 on Broker 102 becomes the leader (because it was an in-sync replica). When Broker 101 comes back alive, it tries to become the leader again after replicating the data back from the current leader. All of this is handled internally by Kafka.

Leader and ISR decisions are made through Zookeeper.

STAR indicates the LEADER

Leader of a Partition

 

Producers:

  • Producers write data to topics (which is made of partitions)
  • Producers automatically know which broker and partition to write to, so the developer doesn’t need to specify that.
  • In case of Broker failures, Producers will automatically recover.
  • If a producer sends data without a key, the data is sent round robin across partitions (a little bit of data to each of the brokers in the cluster).
  • Producer can choose to receive acknowledgement of data writes.
  • There are 3 acknowledgement modes (see the producer sketch below):
    • acks=0 -> Producer won’t wait for acknowledgment (possible data loss)
    • acks=1 -> Producer will wait for the leader’s acknowledgment (limited data loss). This is the default.
    • acks=all -> Leader + replicas acknowledgment (no data loss)

 

Producers
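A minimal producer sketch showing where acks is configured (my own illustration; the broker address and topic name are placeholders, and acks=all is just one of the three choices):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    import java.util.Properties;

    public class SimpleProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // acks=0 (no wait), acks=1 (leader only), acks=all (leader + replicas)
            props.put(ProducerConfig.ACKS_CONFIG, "all");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // No key: messages are spread round robin across partitions
                producer.send(new ProducerRecord<>("my-topic", "hello kafka"));
            }
        }
    }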

 

Producers with Message Keys:

  • Producers can choose to send a KEY with a message. A key can be a string, a number, etc.
  • If key == null, data is sent round robin across the partitions (and hence the brokers) in the cluster.
  • If a key is sent, then all messages for that key will always go to the same partition.
  • A key is sent when you need message ordering for a specific field (ex: truck_id).
  • Once a key lands on a specific partition, it will ALWAYS go to that same partition. However, we CANNOT choose in advance which partition a particular key maps to.
    • Ex: if data with key “truck_id_123” is going to partition 0, then a producer sending data with key truck_id_123 will ALWAYS have it go to partition 0.
    • Similarly, if data with key “truck_id_345” is going to partition 1, then a producer sending data with key truck_id_345 will ALWAYS have it go to partition 1.
  • The key-to-partition mapping is guaranteed by key hashing, which depends on the number of partitions (see the sketch below).

Producers with Message Keys
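A sketch of producing with a key (my own illustration; the topic "truck_gps" and the key/value contents are placeholders). The Java client's default partitioner hashes the key (murmur2) modulo the number of partitions, which is why the same key keeps landing on the same partition as long as the partition count doesn't change:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    import java.util.Properties;

    public class KeyedProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // All messages with key "truck_id_123" hash to the same partition,
                // so ordering is preserved per truck.
                producer.send(new ProducerRecord<>("truck_gps", "truck_id_123", "lat=12.97,lon=77.59"));
            }
        }
    }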

 

Consumers

  • Consumers read data from a topic (identified by name)
  • Consumers know which broker to read from
  • In case of broker failures, consumers know how to recover
  • Data is read in order within each partition.
  • There is no ordering guarantee across partitions (see the image above for Partition 1 and Partition 2).

Consumers

 

Consumer Groups

How do consumers read data from all these partitions?

  • Consumers read data in consumer groups (a sketch follows below).
  • A consumer can be any client: a Java application, an application written in another language, or a command-line utility.
  • Each consumer within a group reads from exclusive partitions.

Consumer Groups
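A minimal consumer-group sketch (my own illustration; the group id "my-application", topic name and broker address are placeholders). Running several copies of this program with the same group.id makes Kafka split the topic's partitions among them:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class GroupConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Consumers sharing this group.id divide the topic's partitions among themselves
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-application");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> r : records) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                r.partition(), r.offset(), r.value());
                    }
                }
            }
        }
    }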

What if you have too many consumers?

  • If you have more consumers than partitions, some consumers will be inactive.
  • In the image below, if Consumer 3 goes down, Consumer 4 becomes active. Ideally, you have at most as many consumers as partitions.

More Consumers than Partitions

 

Consumer Offsets

  • Kafka stores the offsets at which a consumer group has been reading.
  • The committed offsets live in a Kafka topic named “__consumer_offsets” (a double underscore, then “consumer”, an underscore, and “offsets”).
  • When a consumer in a group has processed data received from Kafka, it should commit its offsets to the “__consumer_offsets” topic. By default, the consumer does this automatically.
  • If a consumer dies, it will be able to resume from where it left off, thanks to the committed consumer offsets!

Consumer Offsets

 

Delivery Semantics for Consumers

  • Consumers choose when to commit offsets
  • There are 3 delivery semantics:

At most once:

  • Offsets are committed as soon as the message is received.
  • If the processing goes wrong, the message will be lost (it won’t be read again)

At least once (usually preferred):

  • Offsets are committed only after the message is processed.
  • If the processing goes wrong, the message will be read again.
  • This can result in duplicate processing of messages, so make sure your processing is idempotent (i.e., processing the same message again won’t impact your systems). A manual-commit sketch follows below.
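A sketch of at-least-once consumption (my own illustration, with placeholder names): auto-commit is disabled and offsets are committed only after the batch has been processed, so a crash mid-batch causes redelivery rather than data loss:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class AtLeastOnceConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-application");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Disable auto-commit so offsets are committed only after processing
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> r : records) {
                        process(r); // make this idempotent: redelivery is possible on failure
                    }
                    consumer.commitSync(); // commit only after the batch is processed
                }
            }
        }

        private static void process(ConsumerRecord<String, String> r) {
            System.out.println(r.value());
        }
    }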

Exactly Once:

  • Can be achieved for Kafka-to-Kafka workflows using the Kafka Streams API (other stream processors, such as Spark, may offer similar guarantees).
  • For Kafka-to-external-system workflows (like a database), use an idempotent consumer (to make sure there are no duplicates while inserting into the final database).

Kafka Broker Discovery

  • Every Kafka broker is also called a “bootstrap server”
  • That means that you only need to connect to one broker, and you will be connected to the entire cluster.
  • Each broker knows about all brokers, topics and partitions (metadata).
  • A Kafka client can therefore connect to any broker and discover the rest of the cluster automatically (a sketch follows below).

Kafka Broker Discovery
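A small sketch (my own illustration) of bootstrap discovery: the client is given a single broker address, and from it learns about every broker in the cluster:

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    import java.util.Properties;

    public class ClusterDiscovery {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // One bootstrap broker is enough; the client fetches metadata about the rest
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                admin.describeCluster().nodes().get()
                     .forEach(node -> System.out.println("Broker " + node.idString() + " @ " + node.host()));
            }
        }
    }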

 

ZOOKEEPER

  • Zookeeper manages brokers. It holds the brokers together (keeps a list of them)
  • Zookeeper helps in performing leader election for partitions: when a broker goes down, a replica on another broker becomes the leader.
  • Zookeeper sends notifications to Kafka in case of changes (e.g., a new topic, a broker dies, a broker comes up, a topic is deleted, etc.).
  • Kafka cannot work without Zookeeper, so Zookeeper must be started first.
  • Zookeeper by design operates with an odd number of servers (3, 5, 7) in production.
  • Zookeeper also follows the concept of leaders and followers: one Zookeeper server is the leader (handles writes) and the rest are followers (handle reads).
  • Zookeeper does NOT store consumer offsets with Kafka > v0.10

 

Zookeeper for kafka

 

Kafka Guarantees

  • Messages are appended to a topic partition in the order they are sent.
  • Consumers read messages in the order stored in a topic partition
  • With a replication factor of N, producers and consumers can tolerate up to N-1 brokers being down.
  • This is why a replication factor of 3 is a good idea:
    • Allows for one broker to be taken down for maintenance
    • Allows for another broker to be taken down unexpectedly
  • As long as the number of partitions remains constant for a topic (no new partitions), the same key will always go to the same partition

 

Resources:

https://www.udemy.com/apache-kafka/learn/v4/

https://youtu.be/U4y2R3v9tlY

Source Code:

https://courses.datacumulus.com/kafka-beginners-bu5

http://media.datacumulus.com/kafka-beginners/code.zip

https://github.com/simplesteph/kafka-beginners-course
