7. STREAMING

7.1

Kafka Streams

Kafka Streams simplifies application development by building on the Kafka producer and consumer libraries and leveraging the native capabilities of Kafka to offer data parallelism, distributed coordination, fault tolerance, and operational simplicity.

  • Lightweight ETL library within Kafka

  • Java application

  • Highly scalable and fault-tolerant

  • No need to create a separate processing cluster

  • Supports exactly-once processing capabilities

  • One-record-at-a-time processing (no batching)

  • Viable for all types of applications

  • First-class integration with Kafka

  • Supports interactive queries to unify the worlds of streams and databases

  • Millisecond processing latency
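
To make the "lightweight library" point concrete, here is a minimal sketch of a Kafka Streams application written with the Scala DSL (kafka-streams-scala). The topic names, application id, and broker address are placeholders, and import paths can differ slightly between Kafka versions.

  import java.util.Properties
  import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
  import org.apache.kafka.streams.scala.StreamsBuilder
  import org.apache.kafka.streams.scala.ImplicitConversions._
  import org.apache.kafka.streams.scala.serialization.Serdes._

  object UppercaseApp extends App {
    // Plain application configuration: no cluster to provision, just a JVM process
    // that talks to the existing Kafka brokers.
    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app")      // placeholder id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")  // placeholder broker

    // One-record-at-a-time processing: read, transform, and write back to Kafka.
    val builder = new StreamsBuilder()
    builder.stream[String, String]("lines-topic")   // placeholder input topic
      .mapValues(_.toUpperCase)
      .to("upper-topic")                            // placeholder output topic

    val streams = new KafkaStreams(builder.build(), props)
    streams.start()
    sys.addShutdownHook(streams.close())
  }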

Stream Partitions and Tasks

The messaging layer of Kafka partitions data for storing and transporting it. Kafka Streams partitions data for processing it. In both cases, this partitioning is what enables data locality, elasticity, scalability, high performance, and fault tolerance.

Kafka Streams uses the concepts of partitions and tasks as logical units of its parallelism model based on Kafka topic partitions. There are close links between Kafka Streams and Kafka in the context of parallelism:

  • Each stream partition is a totally ordered sequence of data records and maps to a Kafka topic partition.

  • A data record in the stream maps to a Kafka message from that topic.

Threads

Kafka Streams allows the user to configure the number of threads that the library can use to parallelize processing within an application instance.
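
The thread count is an ordinary configuration property. A minimal sketch of setting it (application id, broker address, and the value 4 are illustrative):

  import java.util.Properties
  import org.apache.kafka.streams.StreamsConfig

  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app")     // placeholder id
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")  // placeholder broker
  // num.stream.threads: how many stream threads this application instance runs.
  // Each thread executes one or more tasks, so more threads means more parallelism
  // per instance, bounded by the number of input topic partitions.
  props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, "4")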

State Stores

Kafka Streams provides so-called state stores, which stream processing applications can use to store and query data, an important capability when implementing stateful operations.

  • Every stream task in a Kafka Streams application may embed one or more local state stores that can be accessed via APIs to store and query data required for processing.

  • These state stores can be a RocksDB database, an in-memory hash map, or another convenient data structure.

  • Kafka Streams offers fault-tolerance and automatic recovery for local state stores.
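
As an illustration, a grouped count materializes its running totals in a local state store. The sketch below assumes the Scala DSL; the topic name "click-events" and store name "clicks-per-user" are made up, and import paths may vary slightly across Kafka versions.

  import org.apache.kafka.streams.scala.StreamsBuilder
  import org.apache.kafka.streams.scala.ImplicitConversions._
  import org.apache.kafka.streams.scala.serialization.Serdes._
  import org.apache.kafka.streams.scala.kstream.Materialized

  val builder = new StreamsBuilder()

  // count() keeps its running totals in a local state store (RocksDB by default).
  // Naming the store makes it queryable via interactive queries, and Kafka Streams
  // backs it with a changelog topic for fault tolerance and automatic recovery.
  val clicksPerUser =
    builder.stream[String, String]("click-events")   // placeholder topic
      .groupByKey
      .count()(Materialized.as("clicks-per-user"))   // placeholder store name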

7.2

The fundamental distinction is whether each piece of new data is processed as soon as it arrives, or instead at a later time as part of a set of new data. That distinction divides processing into two categories: batch processing and stream processing.

In batch processing, newly arriving data elements are collected into a group. The whole group is then processed at a future time (as a batch, hence the term “batch processing”). Spark Streaming is an example of a system designed to support micro-batch processing.

Instead of processing the stream one record at a time, Spark Streaming discretizes the streaming data into tiny, sub-second micro-batches: its Receivers accept data in parallel and buffer it in the memory of Spark's worker nodes.
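
For illustration, a minimal DStream word count shows the micro-batch model: records received over each one-second interval are processed together as one small batch. The host and port are placeholders; this is a sketch, not a complete deployment.

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  // Every batch interval (1 second here) the records buffered by the receiver
  // are turned into a small RDD and processed as one micro-batch.
  val conf = new SparkConf().setAppName("MicroBatchWordCount")
  val ssc  = new StreamingContext(conf, Seconds(1))

  val lines  = ssc.socketTextStream("localhost", 9999)   // receiver buffers data on the workers
  val counts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
  counts.print()

  ssc.start()
  ssc.awaitTermination()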

7.3

Architecture

The key idea in Structured Streaming is to treat a live data stream as a table that is being continuously appended to. This leads to a new stream processing model that is very similar to a batch processing model.

You express your streaming computation as a standard batch-like query, as if on a static table, and Spark runs it as an incremental query on the unbounded input table.

Let’s understand this model in more detail.

  • Consider the input data stream as the “Input Table”. Every data item that is arriving on the stream is like a new row being appended to the Input Table.

  • A query on the input will generate the “Result Table”. Every trigger interval (say, every 1 second), new rows get appended to the Input Table, which eventually updates the Result Table.

  • Whenever the result table gets updated, we would want to write the changed result rows to an external sink.

  • The “Output” is defined as what gets written out to the external storage. Note that Structured Streaming does not materialize the entire table.

  • It reads the latest available data from the streaming data source, processes it incrementally to update the result, and then discards the source data.

  • It only keeps around the minimal intermediate state data required to update the result (e.g. the intermediate counts in the word-count sketch below).
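
The canonical streaming word count makes this model concrete: the socket stream plays the role of the Input Table, the running counts form the Result Table, and only those counts are kept as state. The host, port, and console sink are illustrative choices.

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder.appName("StructuredWordCount").getOrCreate()
  import spark.implicits._

  // Each line arriving on the socket is a new row appended to the unbounded Input Table.
  val lines = spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()

  // A batch-like query; Spark runs it incrementally, keeping only the running
  // counts (the Result Table) as state rather than the raw input rows.
  val wordCounts = lines.as[String]
    .flatMap(_.split(" "))
    .groupBy("value")
    .count()

  val query = wordCounts.writeStream
    .outputMode("complete")   // write the whole updated Result Table each trigger
    .format("console")
    .start()

  query.awaitTermination()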

Watermarking

Now consider what happens if one of the events arrives late to the application. For example, say, a word generated at 12:04 (i.e. event time) could be received by the application at 12:11. The application should use the time 12:04 instead of 12:11 to update the older counts for the window 12:00 - 12:10.

This occurs naturally in our window-based grouping: Structured Streaming can maintain the intermediate state for partial aggregates for a long period of time so that late data can update aggregates of old windows correctly.

However, to run this query for days, it’s necessary for the system to bound the amount of intermediate in-memory state it accumulates.

This means the system needs to know when an old aggregate can be dropped from the in-memory state because the application is not going to receive late data for that aggregate any more.

To enable this, Spark introduced watermarking, which lets the engine automatically track the current event time in the data and attempt to clean up old state accordingly. You define the watermark of a query by specifying the event-time column and a threshold on how late the data is expected to be in terms of event time. For a specific window starting at time T, the engine will maintain state and allow late data to update it until (max event time seen by the engine - late threshold) exceeds T.

In other words, late data within the threshold will be aggregated, but data later than the threshold will start getting dropped.
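
A sketch of declaring a watermark, using the built-in rate source (which generates timestamp and value columns) as a stand-in for a real event stream; the window sizes, threshold, and sink are arbitrary choices for illustration:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{window, col}

  val spark  = SparkSession.builder.appName("WatermarkSketch").getOrCreate()
  val events = spark.readStream.format("rate").option("rowsPerSecond", "10").load()

  // Declare that events may arrive up to 10 minutes late in event time.
  // Late rows within the threshold still update their old window's count;
  // state for windows older than (max event time seen - 10 minutes) is dropped.
  val windowedCounts = events
    .withWatermark("timestamp", "10 minutes")
    .groupBy(window(col("timestamp"), "10 minutes", "5 minutes"))
    .count()

  windowedCounts.writeStream
    .outputMode("update")   // emit only windows whose counts changed in this trigger
    .format("console")
    .start()
    .awaitTermination()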

Input Sources

  • File source - Reads files written in a directory as a stream of data. Supported file formats are text, csv, json, orc, and parquet. Note that the files must be atomically placed in the given directory, which, in most file systems, can be achieved by file move operations.

  • Kafka source - Reads data from Kafka. It’s compatible with Kafka broker versions 0.10.0 or higher.

  • Socket source (for testing) - Reads UTF8 text data from a socket connection. The listening server socket is at the driver. Note that this should be used only for testing as this does not provide end-to-end fault-tolerance guarantees.
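
A sketch of reading from the Kafka source (this assumes the spark-sql-kafka connector is on the classpath; broker address and topic name are placeholders):

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder.appName("KafkaSourceSketch").getOrCreate()

  val df = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")   // placeholder brokers
    .option("subscribe", "events")                          // placeholder topic
    .load()

  // Kafka records arrive as binary key/value columns plus metadata
  // (topic, partition, offset, timestamp); cast them to strings to work with text data.
  val messages = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")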

Output Sinks

  • File sink - Stores the output to a directory.

  • Kafka sink - Stores the output to one or more topics in Kafka.

  • Foreach sink - Runs arbitrary computation on the records in the output.

  • Console sink (for debugging) - Prints the output to the console/stdout every time there is a trigger.

  • Memory sink (for debugging) - The output is stored in memory as an in-memory table.
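
A few of these sinks in sketch form, again using the rate source as a stand-in input; all paths and names are placeholders:

  import org.apache.spark.sql.SparkSession

  val spark  = SparkSession.builder.appName("SinkSketch").getOrCreate()
  val stream = spark.readStream.format("rate").option("rowsPerSecond", "5").load()

  // File sink: append-only Parquet output; the checkpoint directory is what
  // provides the sink's fault-tolerance guarantees.
  stream.writeStream
    .format("parquet")
    .option("path", "/tmp/rate-output")              // placeholder path
    .option("checkpointLocation", "/tmp/rate-ckpt")  // placeholder path
    .start()

  // Console sink (debugging): print every trigger's output to stdout.
  stream.writeStream.format("console").start()

  // Memory sink (debugging): keep the output in an in-memory table that can be
  // queried with ordinary SQL while the stream runs.
  stream.writeStream.format("memory").queryName("rate_table").start()
  spark.sql("SELECT COUNT(*) FROM rate_table").show()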

7.4

The “Output” is defined as what gets written out to the external storage. The output can be written out in one of the following modes:

  • Complete Mode - The entire updated Result Table will be written to the external storage. It is up to the storage connector to decide how to handle writing of the entire table.

  • Append Mode (default) - Only the new rows appended in the Result Table since the last trigger will be written to the external storage. This is applicable only on the queries where existing rows in the Result Table are not expected to change.

  • Update Mode - Only the rows that were updated in the Result Table since the last trigger will be written to the external storage (available since Spark 2.1.1). Note that this is different from the Complete Mode in that this mode only outputs the rows that have changed since the last trigger. If the query doesn’t contain aggregations, it will be equivalent to Append mode.

Note that each mode is applicable on certain types of queries.
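
For instance, with a windowed aggregation over the rate source, the three modes differ only in what the sink receives at each trigger (window sizes and the console sink are illustrative):

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{window, col}

  val spark = SparkSession.builder.appName("OutputModeSketch").getOrCreate()
  val rate  = spark.readStream.format("rate").option("rowsPerSecond", "5").load()

  val counts = rate.groupBy(window(col("timestamp"), "10 seconds")).count()

  // Complete mode: rewrite the entire Result Table every trigger (aggregation queries).
  counts.writeStream.outputMode("complete").format("console").start()

  // Update mode: write only the windows whose counts changed since the last trigger.
  counts.writeStream.outputMode("update").format("console").start()

  // Append mode (default): a row is emitted only once it can no longer change;
  // for an aggregation this needs a watermark so old windows can be finalized.
  rate.withWatermark("timestamp", "30 seconds")
    .groupBy(window(col("timestamp"), "10 seconds"))
    .count()
    .writeStream.outputMode("append").format("console").start()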
