Apache Kafka’s Delivery Guarantees

Why and where to use each of Kafka's three delivery guarantees: at-most-once semantics, at-least-once semantics, and exactly-once semantics.


Apache Kafka is an open-source platform for reading and writing real-time streaming data. With so many benefits to using big data, solutions like Kafka are invaluable for businesses of all sizes and industries.

When working with streaming data platforms like Kafka, we would like to have "delivery guarantees," i.e. the ability to know how (and how many times) the system will attempt to deliver a message.

What Are Kafka’s Delivery Guarantees?

So what are Kafka’s delivery guarantees, and how does Kafka work behind the scenes to guarantee this behavior?

Kafka uses a producer-consumer pattern to work with streaming data. Some processes are producers responsible for sending messages, and others are consumers responsible for receiving and processing them.

However, working with real-time streaming data across distributed systems presents unique challenges: messages may be accidentally lost, or sent multiple times if the consumer does not acknowledge receiving them.

We can illustrate these problems with the following analogy. Suppose that you want to send an important letter to someone far away, but you aren’t certain whether the letter will arrive.

You have essentially three options:

  • Send the letter only one time and just hope for the best.
  • Continue to send the letter multiple times at regular intervals until you receive an acknowledgment from the recipient.
  • Request the postal service to give you proof of delivery so that you know the letter has safely arrived.

These options are analogous to the multiple delivery guarantees available in Kafka.

You may choose one or more of these options, depending on the application and use case:

  • At-most-once delivery: The producer will send messages at most one time, and will not try sending them again if it receives an error or timeout message from the broker. This means that information will be lost if the consumer crashes before it has finished processing all the current messages on the topic.
  • At-least-once delivery: The producer will send messages at least one time. If the producer receives an error or timeout message from the broker, then it will attempt to resend the message. This means that messages should never be lost, although the producer may duplicate work (which may also result in duplicate outputs).
  • Exactly-once delivery: The message will be delivered exactly one time. Failures and retries may occur, but the consumer is guaranteed to receive a given message only once.

Choosing between at-most-once, at-least-once, and exactly-once delivery will depend on which factors you want to prioritize.

If you care more about the consumer receiving a message and don’t mind duplicate outputs or extra work from the producer, then at-least-once delivery is a good option.

Certain use cases (such as financial transactions) require the guarantee of exactly-once delivery (e.g. to avoid accidentally duplicating a withdrawal or deposit).

At-Most-Once Delivery in Apache Kafka

At-most-once delivery is the simplest option for producer/consumer architectures because it requires no additional effort from developers.

Any messages that are lost due to errors or disruptions are simply disregarded.

This makes at-most-once delivery ideal for use cases such as Internet of Things (IoT) sensors, which are constantly sending new data and measurements.
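In practice, at-most-once behavior comes down to configuration: the producer never retries, and the consumer commits offsets before it is done processing. A hedged sketch, assuming the confluent-kafka Python client (the broker address and group ID below are hypothetical; key names follow librdkafka configuration):

```python
# Hedged sketch: at-most-once style settings for the confluent-kafka
# Python client. The broker address and group ID are placeholders.

# Producer: fire-and-forget -- no broker acknowledgment, no retries.
producer_config = {
    "bootstrap.servers": "localhost:9092",  # assumed broker address
    "acks": 0,        # do not wait for broker acknowledgment
    "retries": 0,     # never resend on error or timeout
}

# Consumer: commit offsets automatically in the background, so a crash
# mid-processing can skip (lose) the in-flight messages.
consumer_config = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "iot-readings",             # hypothetical group name
    "enable.auto.commit": True,
}
```

With these settings, a lost message is simply gone; nothing in the pipeline will try to recover it.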


At-Least-Once Delivery in Apache Kafka

At-least-once delivery requires the producer to maintain extra state about message status and to resend failed messages.

This means that at-least-once delivery sacrifices some performance in exchange for the guarantee that all messages will be delivered.

At-least-once delivery is ideal for use cases such as analytics, where all messages must be received but some duplication of messages is acceptable.
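The trade-off described above can be seen in a toy simulation (plain Python, no real Kafka involved): a producer that resends until it sees an acknowledgment will never lose a message, but a lost acknowledgment produces a duplicate.

```python
# Toy simulation of at-least-once delivery: the producer resends a message
# until an acknowledgment makes it back. If an ack is lost in transit, the
# message is delivered more than once -- but it is never lost.

def deliver_at_least_once(message, broker_log, ack_lost_first_time=True):
    """Append `message` to the broker log, retrying until acked.

    Returns the number of send attempts that were needed.
    """
    attempts = 0
    while True:
        attempts += 1
        broker_log.append(message)   # the broker receives every attempt
        # The first ack is dropped in this scenario; later acks arrive.
        ack_arrived = not (ack_lost_first_time and attempts == 1)
        if ack_arrived:
            return attempts

broker_log = []
attempts = deliver_at_least_once("reading-42", broker_log)

print(attempts)     # 2 -- the first ack was dropped, so the producer retried
print(broker_log)   # ['reading-42', 'reading-42'] -- duplicated, not lost
```

For analytics workloads, the duplicate entry is usually harmless noise; for something like a payment, it is exactly the problem that exactly-once semantics exists to solve.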


Exactly Once Delivery in Apache Kafka

With exactly-once delivery enabled, Apache Kafka guarantees that a given message will be delivered once and only once. In a real-time, distributed environment, however, this is no small technical feat.
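From the application's point of view, enabling exactly-once behavior is largely a configuration change. A hedged sketch, again assuming the confluent-kafka Python client (the broker address, group ID, and transactional ID are hypothetical; key names follow librdkafka configuration):

```python
# Hedged sketch: exactly-once style settings for the confluent-kafka
# Python client. Addresses and IDs below are placeholders.

producer_config = {
    "bootstrap.servers": "localhost:9092",    # assumed broker address
    "enable.idempotence": True,               # producer ID + sequence numbers
    "transactional.id": "payments-writer-1",  # hypothetical; stable per instance
}

# The consumer must only read messages from committed transactions.
consumer_config = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "payments-processor",         # hypothetical group name
    "isolation.level": "read_committed",
    "enable.auto.commit": False,              # commit offsets with the transaction
}
```

The sections below explain what the broker does behind these settings to make the guarantee hold.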


Achieving exactly-once delivery in Apache Kafka is complicated by network issues, system crashes, and various other errors that disrupt the standard “read-process-write” pattern. These failures lead to mistakes such as:

  • Multiple writes: Suppose that the consumer has processed a message and written output, but the producer does not receive the consumer’s acknowledgment of the message (e.g. due to network delays or outage). This causes the producer to retry sending the message, which leads to duplicate outputs.
  • Multiple reads: Now suppose that the consumer has processed a message from the producer, but is in the midst of writing when the system crashes. This causes the consumer to reread the message when the system restarts, which again leads to duplicate outputs.

Kafka has several ways to deal with these potential issues and enable its exactly-once delivery semantics.

  • Idempotent producer: With idempotency enabled in Kafka, all messages are tagged with a producer ID and a sequence number. If the producer retries a send, the broker accepts the message only if it has not already seen that producer ID and sequence number. This prevents the same message from being written twice after a failure or crash.
  • Transactions: Like database transactions, Kafka transactions use an “all-or-nothing” approach: either all of the messages in a transaction are committed, or none of them are. Kafka 0.11 introduced the transaction coordinator and transaction log components that make transactions in Kafka atomic. More specifically, the transaction coordinator keeps track of producer IDs to fence off “zombie instances” that are still running after a system restart, while the transaction log records the state of each transaction at each step of the process.
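The idempotent-producer check can be modeled in a few lines of plain Python (a toy model, not Kafka's actual broker logic): the broker remembers the last sequence number accepted per producer ID and drops any retry it has already seen.

```python
# Toy model of the idempotent-producer check: the broker tracks the highest
# sequence number accepted per producer ID and rejects any message whose
# (producer ID, sequence number) pair it has already seen.

class TopicPartition:
    def __init__(self):
        self.log = []        # accepted messages, in order
        self.last_seq = {}   # producer ID -> last accepted sequence number

    def append(self, producer_id, seq, message):
        """Accept the message only if its sequence number is new."""
        if self.last_seq.get(producer_id, -1) >= seq:
            return False     # duplicate retry -- silently dropped
        self.last_seq[producer_id] = seq
        self.log.append(message)
        return True

tp = TopicPartition()
tp.append("producer-A", 0, "withdraw $50")
tp.append("producer-A", 1, "deposit $20")
accepted = tp.append("producer-A", 1, "deposit $20")  # retry after a lost ack

print(accepted)  # False -- the duplicate is rejected
print(tp.log)    # ['withdraw $50', 'deposit $20'] -- each message appears once
```

The real broker is stricter (it also rejects out-of-order gaps in the sequence), but the deduplication idea is the same.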

How Adservio Can Help

With Apache Kafka, it’s easier than ever to work with massive amounts of real-time streaming data—as long as you know what you’re doing.

If you don’t have a fleet of Kafka experts available in-house, it’s a wise choice to join forces with a professional and proven IT service provider who can help you install, deploy, and maintain your Kafka environment.

Adservio's team of professionals helps companies achieve digital excellence across a wide range of use cases: application development, analytics, software delivery, process automation, and more.

If you need assistance with Kafka, let us know your problem and we will take care of providing the solution.

Get in touch with our team of experts and let us know about your business needs and objectives.

Published on October 20, 2021

