Apache Kafka Transaction Semantics

There are three types of Kafka transaction semantics: at-most-once, at-least-once, and exactly-once. Why and where should you use each of them?


Apache Kafka is an open-source platform used for reading and writing massive amounts of real-time streaming data. Together with other Apache big data projects such as Spark and Hive, you can use Kafka to build a data pipeline that enables real-time analytics.

Companies such as Goldman Sachs, Target, Intuit, and Pinterest use Kafka in their daily operations, as do 60 percent of Fortune 100 companies.

What are Transactions in Apache Kafka?

In Apache Kafka, a transaction is a group of one or more messages guaranteed to be either committed or discarded.

This is similar to the notion of a database transaction, a single unit of work containing one or more tasks that must all succeed or fail.

Transactions in Apache Kafka are necessary because many Kafka use cases require highly accurate behavior.

For example, financial institutions that handle real-time streaming data about user deposits and withdrawals need this information to be processed exactly once, no more and no less, to avoid having an incorrect balance.

Apache Kafka Transaction Semantics

The issue of Kafka transaction semantics comes into play when a network or computer failure occurs.

In this situation, messages might be missed or duplicated when going from the producer (the author of the message) to the consumer (the receiver of the message).

This leads to three types of transaction semantics in Kafka:

  • At-most-once semantics: Messages will be sent only once, regardless of whether the consumer has received them.
  • At-least-once semantics: Messages will potentially be sent more than once if the consumer has not acknowledged receiving them.
  • Exactly-once semantics: Each message is delivered and processed exactly once, with neither loss nor duplication.

1. At-most-once semantics

At-most-once semantics means that the consumer will receive the message either one or zero times, and there is no guarantee that the consumer will receive any given message.

As such, at-most-once semantics is only suitable for contexts where it is acceptable that the system will occasionally lose a message.
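
Kafka has no single "at-most-once" switch, but the behavior is commonly approximated with a fire-and-forget producer that never waits for or retries acknowledgments. The following is a minimal sketch rather than a definitive recipe; the broker address and the topic name "events" are assumptions for illustration.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AtMostOnceProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "0");                   // do not wait for any broker acknowledgment
        props.put("retries", "0");                // never resend; a lost message stays lost
        props.put("enable.idempotence", "false"); // idempotence requires acks=all, so disable it explicitly

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Fire and forget: no callback and no get() on the returned Future,
            // so a delivery failure is never noticed and never retried.
            producer.send(new ProducerRecord<>("events", "user-42", "deposit:100"));
        }
    }
}
```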

2. At-least-once semantics

At-least-once semantics means that the consumer is guaranteed to receive the message one or more times. This may be achieved through two different methods:

  • The producer detects when a message has failed to deliver and resends it.
  • The consumer continuously requests messages that have not been delivered.

At-least-once semantics is suitable for contexts where all messages need to be delivered, but without the extra stipulation that they are only delivered once.
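
On the consumer side, at-least-once behavior typically comes from committing offsets only after records have been processed, so that a crash mid-batch leads to redelivery rather than loss; on the producer side, acks=all with retries enabled plays the matching role. Below is a minimal consumer sketch, with the broker address, group ID, and topic name assumed for illustration.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "balance-updater");            // hypothetical consumer group
        props.put("enable.auto.commit", "false");            // commit manually, only after processing
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));            // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);   // a crash here means the whole batch is delivered again
                }
                consumer.commitSync(); // acknowledge only once every record has been processed
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.key() + " -> " + record.value());
    }
}
```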

3. "Exactly-once" semantics

Exactly-once semantics is the ultimate goal of message brokers like Kafka.

Given a single message, the goal is for a consumer to process that message exactly once, without duplicating work and without the producer having to resend the data.

The good news is that exactly-once semantics is achievable with Apache Kafka, although it needs to be handled with care.

The potential issues with achieving exactly-once semantics are:

  • Producers may retry sending messages that have already been successfully written to the broker but whose acknowledgment was lost (e.g., due to a network failure), creating duplicates.
  • The consumer may crash after processing the input message and writing the output but before marking the input as consumed.

   This causes the consumer to reprocess the input message once it restarts, leading to duplicate output messages.

  • In a distributed system, duplicate messages can also be caused by "zombie instances": when an application appears to crash or lose connectivity, a new instance is started to replace it.

    If the original instance is in fact still running, both instances consume the same messages and write duplicate outputs.
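
Kafka's answer to the first of these problems is the idempotent producer, and its answer to the remaining two is transactions, described in the next section. As a rough configuration sketch (the broker address and the transactional.id are illustrative assumptions):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;

public class ExactlyOnceProducerConfig {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");                             // required for idempotence
        props.put("enable.idempotence", "true");              // broker de-duplicates retried batches
        props.put("transactional.id", "orders-processor-1");  // hypothetical stable ID; enables zombie fencing
        // The caller must invoke initTransactions() once before the first transaction.
        return new KafkaProducer<>(props);
    }
}
```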

How Do Transactions Work in Apache Kafka?

Apache Kafka resolves these issues in several ways to achieve exactly-once semantics.

First, Kafka treats the messages in a transaction as part of an atomic unit: either the producer will successfully write all of the messages or none of them.

If an error occurs partway through the processing of the transaction, the entire transaction will be aborted, and none of its messages can be read by the consumer.

For a transaction to be considered atomic, Kafka consumers must read the input message and write the output message together in the same operation, or not at all.
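
With the Java client, such an atomic "consume, transform, produce" loop looks roughly like the sketch below; the broker address, topic names, group ID, and transactional.id are placeholders, and error handling is omitted for brevity. Because the consumed offsets are sent to the same transaction as the output records, the read and the write are committed or aborted as one unit.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

public class TransactionalCopy {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        consumerProps.put("group.id", "copy-app");                  // hypothetical consumer group
        consumerProps.put("enable.auto.commit", "false");           // offsets are committed inside the transaction
        consumerProps.put("isolation.level", "read_committed");     // never read messages from aborted transactions
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("transactional.id", "copy-app-1");        // hypothetical stable producer ID
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.initTransactions();            // also completes or aborts transactions left open by a crashed run
            consumer.subscribe(List.of("input"));   // hypothetical input topic

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) continue;

                producer.beginTransaction();
                Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (ConsumerRecord<String, String> record : records) {
                    producer.send(new ProducerRecord<>("output", record.key(), record.value()));
                    offsets.put(new TopicPartition(record.topic(), record.partition()),
                                new OffsetAndMetadata(record.offset() + 1));
                }
                // Committing the consumed offsets through the producer puts the "mark input as consumed"
                // step inside the same atomic transaction as the output messages.
                producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                producer.commitTransaction();
            }
        }
    }
}
```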

Second, Kafka deals with "zombie instances" by giving each producer a unique ID number that can be used to identify itself, even in the event of a restart.

When a producer starts up, Kafka requires it to check in with the Kafka broker, which looks for any open transactions corresponding to that ID.

If there are any pending transactions left open by a previous instance of that producer, the broker completes or aborts them before the new producer can proceed.

Any producers with the same ID but an older epoch (a number associated with the ID) are treated as zombie instances and excluded from the network.
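
In the Java client, a fenced zombie surfaces as a ProducerFencedException, which the application must treat as fatal; most other failures can simply abort the transaction and be retried. The following is a minimal sketch of that handling, following the pattern from the KafkaProducer documentation, assuming a transactional producer that has already called initTransactions().

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;

public class FencingAwareSend {
    static void sendInTransaction(KafkaProducer<String, String> producer,
                                  String topic, String key, String value) {
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>(topic, key, value));
            producer.commitTransaction();
        } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException fatal) {
            // ProducerFencedException means a producer with the same transactional.id and a newer
            // epoch has registered: this instance is a zombie, so close it and stop. The other two
            // exceptions are also unrecoverable for this producer instance.
            producer.close();
        } catch (KafkaException recoverable) {
            // Any other Kafka error: discard the whole transaction; the caller may retry it.
            producer.abortTransaction();
        }
    }
}
```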

Together, these two practices ensure that consumers will only receive all the messages in a transaction, or none of them (if the transaction remains open or is aborted).

How Adservio Can Help

Kafka is a powerful tool for working with real-time streaming data, but only if you know how to use it.

In most cases, it pays to join forces with a professional, experienced IT service provider for Apache Kafka who can help with everything from roadmaps and strategic planning to long-term support and maintenance.

Adservio is an IT and technology consulting partner that helps companies achieve digital excellence.

If you have any questions about implementing Apache Kafka within your organization, we can help.

Get in touch with our team of experts today to chat about your business needs and objectives.

Published on October 19, 2021
