Apache Beam over Apache Kafka Stream processing

Stella · Jun 14, 2018 · Viewed 8.4k times

What are the differences between Apache Beam and Apache Kafka with respect to Stream processing? I am trying to grasp the technical and programmatic differences as well.

Please help me understand, based on your experience.

Answer

Guillaume Braibant · Oct 25, 2018

Beam is a unified programming API: you write the pipeline once and execute it on an underlying stream processing engine (a "runner") such as Flink, Spark or Google Cloud Dataflow.
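As an illustration, a minimal Beam pipeline in Java might look like the sketch below (the class name and the sample elements are invented for the example); the same pipeline code can run on the Direct runner, Flink, Spark or Dataflow, selected through the pipeline options:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

public class BeamWordCount {
    public static void main(String[] args) {
        // The runner (Direct, Flink, Spark, Dataflow, ...) is chosen via the
        // pipeline options; the pipeline code itself stays the same.
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline pipeline = Pipeline.create(options);

        // A tiny in-memory input just for the example.
        PCollection<String> words =
            pipeline.apply(Create.of("beam", "kafka", "beam"));

        // Count occurrences of each element; the result could be written to any sink.
        words.apply(Count.perElement());

        pipeline.run().waitUntilFinish();
    }
}
```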

Kafka is mainly an integration platform: it offers a topic-based messaging system that standalone applications use to communicate with each other.
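For example, a minimal sketch of a standalone application publishing to a topic could look like this (the topic name, key and value are hypothetical; another application would read the same topic with a consumer):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Any standalone application can publish to a topic; other applications
        // subscribe to the same topic with a consumer to receive the messages.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-42", "created"));
        }
    }
}
```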

On top of this messaging system (and the Producer/Consumer API), Kafka offers the Kafka Streams API to perform stream processing, using messages as data and topics as inputs and outputs. Kafka Streams applications are standalone Java applications that act as regular Kafka Consumers and Producers (this is important for understanding how these applications are managed and how the workload is shared among stream processing application instances).
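A minimal Kafka Streams sketch, assuming String keys and values and hypothetical topic names, could look like this; note that it is an ordinary Java main() that simply connects to the brokers:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The application.id also serves as the consumer group id.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read from one topic, transform each record, write to another topic.
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase())
             .to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Because the application.id doubles as the consumer group id, starting several instances of this application makes them share the partitions of the input topic, which is how the workload is distributed across instances.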

In short, Kafka Streams applications are standalone Java applications that run outside the Kafka cluster, consume data from the Kafka cluster and write their results back to the Kafka cluster. With most other stream processing platforms, the stream processing applications run inside the cluster engine (and are managed by that engine), read their input from somewhere else and write their results somewhere else.

One big difference between the Kafka Streams and Beam APIs is that Beam distinguishes between bounded and unbounded data within a data stream, whereas Kafka Streams does not make that distinction. As a result, handling bounded data with the Kafka Streams API has to be done manually, using time or session windows to gather the data.
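To make the Kafka side concrete, here is a sketch of such a manually defined window, assuming a recent Kafka Streams version, default String serdes configured, and a hypothetical "events" topic: a five-minute tumbling window approximates a bounded chunk of the otherwise unbounded stream.

```java
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.TimeWindows;

public class WindowedCount {
    // Kafka Streams always treats a topic as an unbounded stream, so a "bounded"
    // chunk of data has to be approximated with a window you define yourself.
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("events");
        events.groupByKey()
              .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
              .count(); // KTable<Windowed<String>, Long>: one count per key per window
        return builder.build();
    }
}
```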