# Dataflow Programming

Dataflow programming (DFP) is a programming paradigm where program execution is conceptualized as data flowing through a series of operations or transformations. Each operation may be represented as a node in a graph. Nodes are connected by directed arcs through which data flows. A node performs its operation when its input data are available. It sends out the result on the output arcs.

The rise of social media (2000s) and IoT (2010s) has created a need to analyse data quickly and derive timely insights. There's also a need to explore and exploit parallel architectures. Dataflow programming aids parallelization without the added complexity that comes with traditional concurrent programming.

Dataflow programming includes two sub-fields: Flow-Based Programming (FBP) and Reactive Programming. Dataflow programs can be created in text or visually. There are many languages that support this paradigm.

## Discussion

• Could you explain dataflow programming with an example?

Consider the simple program shown in the figure: A = X + Y; B = Y / 10; C = A * B. In a traditional von Neumann architecture, these statements would be executed sequentially. Thus, the program would execute in three units of time.

In dataflow programming, we represent the program as a dataflow graph that has three inputs (X, Y, 10), three operations (+, /, *) and one output (C). This is a directed graph. Nodes represent instructions or operations. Arcs represent variables. Equivalently, arcs represent data dependencies between instructions. The value of Y is duplicated since it's sent to two instructions.

In the dataflow execution model, this program takes only two units of time. This is because the addition and division instructions can execute in parallel. For each of these instructions, their respective inputs are available at the start. They have no dependencies on each other and therefore can be parallelized.
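The two-step execution above can be sketched in Python. This is a minimal illustration, not a real dataflow runtime; the `fire()` helper and node names are assumptions for the example.

```python
# Minimal sketch of dataflow execution for C = (X + Y) * (Y / 10).
# fire() runs every node whose inputs are all available.

def fire(ready):
    """Run each ready node (op, args) and return its output by name."""
    return {name: op(*args) for name, (op, args) in ready.items()}

X, Y = 5, 20

# Time step 1: '+' and '/' both have all inputs, so they can fire in parallel.
step1 = fire({
    "A": (lambda a, b: a + b, (X, Y)),   # A = X + Y
    "B": (lambda a, b: a / b, (Y, 10)),  # B = Y / 10
})

# Time step 2: '*' becomes ready only once A and B arrive on its input arcs.
step2 = fire({"C": (lambda a, b: a * b, (step1["A"], step1["B"]))})

print(step2["C"])  # (5 + 20) * (20 / 10) = 50.0
```

Because the nodes in step 1 share no arcs, a real scheduler is free to run them on separate cores.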

A factory assembly line offers an analogy. Just as raw materials and semi-finished products move on a conveyor belt from one station to the next, data in DFP moves from one node to the next.

• What are the main benefits of dataflow programming?

With the rise of multi-core architectures, there's a need to exploit these via parallel execution. Von Neumann architecture is not conducive to parallelism because of its reliance on a global program counter and global updatable memory. With DFP, each node uses local memory and executes when its inputs are ready.

DFP enables parallel execution without extra burden on the programmer. There's no need to deal with multiple threads, semaphores and deadlocks. Each node in the dataflow graph is independent of others and executes without side effects. A node executes as soon as its inputs are available. If multiple nodes are ready to execute, they can execute in parallel.

DFP has enabled many visual programming languages that provide a more user-friendly interface so that even non-technical users can write programs. Such languages are also suited for rapid prototyping.

DFP promotes loose coupling between software components that's well-suited for service-oriented architectures. Its functional style makes it easier to reason about system behaviour. Its elegance lies in its simplicity.

DFP is deterministic. This enables mathematical analysis and proofs.

• What are the main characteristics of dataflow programming?

A program is defined by its nodes, I/O ports and links. A node can have multiple inputs and outputs. Links connect nodes. Links can be viewed as buffers if they need to hold multiple data packets. This is useful when sending and receiving nodes operate at different speeds. Buffers also enable pipelining whereby nodes can execute simultaneously.
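The buffered link between two nodes can be sketched with a bounded queue and two threads. This is an illustrative sketch, not a dataflow framework; the producer/consumer node logic is assumed for the example.

```python
import queue
import threading

# Two nodes connected by a bounded buffer (the link). Producer and consumer
# run simultaneously, illustrating pipelining between nodes of different speeds.
link = queue.Queue(maxsize=4)  # buffer holds at most 4 data packets
results = []

def producer():
    for i in range(10):
        link.put(i * i)    # blocks when the buffer is full (back pressure)
    link.put(None)         # sentinel marking end of stream

def consumer():
    while True:
        packet = link.get()
        if packet is None:
            break
        results.append(packet + 1)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(results)  # [1, 2, 5, 10, 17, 26, 37, 50, 65, 82]
```

The bounded `maxsize` is what lets a fast sender coexist with a slow receiver without unbounded memory growth.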

A data packet can carry primitive or compound, structured or unstructured values, and even include other data packets. When multiple links join, packets are merged or concatenated. For link splitting, packets are duplicated (shallow or deep copy). Equivalently, nodes can do the join and split.
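The shallow-versus-deep distinction for packet duplication can be shown with Python's standard `copy` module; the compound packet here is an assumed example.

```python
import copy

# When a link splits, the same packet goes to two downstream nodes.
packet = {"values": [1, 2, 3]}

shallow = copy.copy(packet)      # both copies share the inner list
deep = copy.deepcopy(packet)     # fully independent duplicate

packet["values"].append(4)       # upstream mutation after the split
print(shallow["values"])  # [1, 2, 3, 4] -- visible via the shared list
print(deep["values"])     # [1, 2, 3]   -- unaffected
```

A deep copy preserves the side-effect-free model; a shallow copy is cheaper but re-introduces shared mutable state between nodes.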

Execution can follow either a push or a pull model. Push does eager evaluation: nodes react to changes, typically via callbacks. Pull does lazy evaluation: values are computed only on demand. Nodes in the push model fire when inputs are available, whereas nodes in the pull model fire when output is requested.
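Both models can be sketched in a few lines of Python; the `PushSource` class and `pull_source` generator are illustrative names, not library APIs.

```python
# Push (eager): a source invokes subscriber callbacks as soon as data arrives.
class PushSource:
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def emit(self, value):
        for cb in self.subscribers:   # downstream nodes fire immediately
            cb(value)

pushed = []
src = PushSource()
src.subscribe(lambda v: pushed.append(v * 2))
src.emit(3)   # node fires because its input just became available

# Pull (lazy): a generator computes a value only when the consumer asks.
def pull_source():
    for v in (3, 4, 5):
        yield v * 2

gen = pull_source()
print(pushed)     # [6] -- already computed, eagerly
print(next(gen))  # 6  -- computed now, on demand
```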

Execution can be synchronous or asynchronous. With synchronous execution, each node takes one "unit of time". Fast nodes will wait for slow nodes to complete before outputting the results. With asynchronous execution, outputs are never delayed and systems can run at full speed.

• What are some traits expected of dataflow languages?

Dataflow languages are expected to be free from side effects. There's locality of effect. Scheduling is determined from data dependencies. Variables may be seen as values since they're assigned once and never modified. Order of statements is not important.

An example of a side effect is a function modifying variables in an outer scope. Locality of effect means that instructions don't have far-reaching data dependencies. Such dependencies arise easily in imperative languages through reuse of the same variables or through goto statements. Neither of these should be allowed by dataflow languages. The language should be designed such that it's easy to identify data dependencies and generate graphs that expose parallelism.

To address these, we can prohibit global variables and call-by-reference. Arrays are not modified but created anew when changes are made. Thus, arrays are treated as values rather than objects. Even errors are handled by error values instead of setting global status flags and interrupting the program. Since variables are really values, the statement S=X+Y is seen as a definition, not an assignment. In fact, languages that do all processing by applying operators to values have been called applicative languages.
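The "arrays as values" idea can be sketched with Python tuples, which are immutable; the `set_at` helper is an assumption for illustration.

```python
# Instead of mutating an array in place, each change produces a new value,
# so names bind values, never updatable storage.

def set_at(arr, index, value):
    """Return a new tuple with arr[index] replaced; the original is untouched."""
    return arr[:index] + (value,) + arr[index + 1:]

xs = (1, 2, 3)          # immutable: a value, not an updatable object
ys = set_at(xs, 1, 99)  # "modification" yields a fresh value

print(xs)  # (1, 2, 3) -- unchanged, so data dependencies stay local
print(ys)  # (1, 99, 3)
```

Because `xs` can never change after definition, any node reading it is free to fire at any time without coordination.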

• Could you list some dataflow languages?

Dataflow programming is not limited to visual programming. Early languages were textual with no graphical representation. Visual languages enable end-user programming. Textual languages are faster to work with but need expert knowledge.

Textual languages of the early 1980s were VAL (MIT) and Id (University of California, Irvine). Lucid was developed for program verification but can also be used for dataflow computation. In the late 1980s, SISAL was invented. Based on VAL, it offered Pascal-like syntax and outperformed Fortran on some benchmarks. Other textual languages include CAL, COStream, Lustre, Joule, Oz, Swift (scripts), SIGNAL, SystemVerilog, and VHDL. The make build tool commonly used on UNIX systems is also an example of dataflow programming.

Among DFVPLs are Keysight VEE, KNIME, LabVIEW, Microsoft Visual Programming Language, Orange, Prograph, Pure Data, Quartz Composer, and Simulink. More examples include Node-RED, Max/MSP, and Lego Mindstorms.

Even spreadsheets are considered examples of visual DFP. Each cell is a node. When a cell value is updated, it sends its new value to other cells that depend on it.
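Spreadsheet-style propagation can be sketched as follows. The `Sheet` class, cell names and the naive fixed-point recalculation are assumptions for the example, not how real spreadsheet engines work.

```python
# Each cell is a node; updating a cell re-evaluates cells that depend on it.
class Sheet:
    def __init__(self):
        self.values = {}    # cell name -> current value
        self.formulas = {}  # cell name -> function of the sheet's values

    def set_value(self, cell, value):
        self.values[cell] = value
        self.recalc()

    def set_formula(self, cell, fn):
        self.formulas[cell] = fn
        self.recalc()

    def recalc(self):
        # Naive approach: recompute every formula enough times to reach
        # a fixed point (real engines walk the dependency graph instead).
        for _ in range(len(self.formulas) + 1):
            for cell, fn in self.formulas.items():
                self.values[cell] = fn(self.values)

s = Sheet()
s.set_value("A1", 10)
s.set_value("A2", 20)
s.set_formula("A3", lambda v: v["A1"] + v["A2"])  # A3 depends on A1, A2
print(s.values["A3"])  # 30
s.set_value("A1", 15)  # the change propagates to the dependent cell
print(s.values["A3"])  # 35
```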

• What other programming paradigms are closely related to dataflow programming?

Flow-Based Programming (FBP) is a subclass of DFP. While DFP can be synchronous or asynchronous, FBP is always asynchronous. FBP allows multiple input ports, has bounded buffers and applies back pressure when buffers fill up. FBP implementations exist for Java, C#, C++, Lua and JavaScript. DrawFBP and Slang are visual languages for FBP.

Reactive Programming is event-driven, asynchronous and non-blocking. It's usually supported by callbacks and functional composition (declarative style of programming). Like DFP, focus is on the flow of data rather than the flow of control. Programming abstractions that support this include futures/promises, reactive streams, and dataflow variables. Akka Streams, Ratpack, Reactor, RxJava, and Vert.x are libraries for JVM that support reactive programming. In JavaScript, there's Bacon.js.
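The callback-plus-functional-composition style can be sketched in Python, loosely in the spirit of libraries like RxJava; the `Stream` class here is illustrative, not a real reactive-streams API.

```python
# A tiny push-based stream with declarative map/filter composition.
class Stream:
    def __init__(self):
        self.callbacks = []

    def subscribe(self, cb):
        self.callbacks.append(cb)

    def emit(self, value):
        for cb in self.callbacks:   # push the event downstream
            cb(value)

    def map(self, fn):
        out = Stream()
        self.subscribe(lambda v: out.emit(fn(v)))
        return out

    def filter(self, pred):
        out = Stream()
        self.subscribe(lambda v: out.emit(v) if pred(v) else None)
        return out

seen = []
clicks = Stream()
# The pipeline is declared once; events then flow through it as they occur.
clicks.map(lambda v: v * 10).filter(lambda v: v > 15).subscribe(seen.append)

for v in (1, 2, 3):
    clicks.emit(v)

print(seen)  # [20, 30]
```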

Like DFP, Functional Programming is free from side effects, mostly determinate and has locality of effect. However, it's possible to have a convoluted functional program that can't be implemented as a dataflow graph. It's been said that dataflow languages are essentially functional but use an imperative syntax.

Carl Hewitt's Actor Model from the 1970s is relevant to DFP. Actors are equivalent to nodes. Many languages such as Scala support the actor model.

• What are some shortcomings or challenges with dataflow programming?

Visual dataflow languages can be cumbersome for large programs with many nodes. Iterations and conditions are hard to represent visually. Debugging is difficult since multiple operations happen concurrently. In general, better tooling is needed.

Many DFVPLs aim to make programming easier for end users. The original motivation to exploit parallelism has become secondary.

DFP's deterministic nature can be a problem. Some use cases (such as ticket booking or database access) are inherently non-deterministic. To solve this, DFP must allow for non-deterministic merging of multiple input streams.
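A non-deterministic merge node can be sketched with two producer threads feeding one queue; the stream names are assumed for the example.

```python
import queue
import threading

# Two input streams merge into one output link; arrival order depends on
# how the scheduler interleaves the producers, so it's non-deterministic.
merged = queue.Queue()

def produce(stream_id, count):
    for i in range(count):
        merged.put((stream_id, i))

a = threading.Thread(target=produce, args=("bookings", 3))
b = threading.Thread(target=produce, args=("queries", 3))
a.start(); b.start()
a.join(); b.join()

out = [merged.get() for _ in range(6)]
# The multiset of packets is fixed, but their interleaving is not.
print(sorted(out))
```

Determinism is lost only at the merge; downstream of it, the graph behaves like ordinary dataflow over a single stream.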

Spreadsheets, when seen via the dataflow paradigm, have their limitations. Formulas are hard to inspect. They're not suited for refactoring and reuse. Deployment and scalability are practical issues. However, solutions have been proposed to address all of these.

Among modern languages, Clojure, Groovy, Scala, Ruby, F# and C# support DFP but offer no syntactic support for dataflow variables. Even simple dataflow examples in F# and C# require a lot of boilerplate code.

## Milestones

1950

The idea of dataflow networking can be traced to the work of John von Neumann and other researchers in the 1940s and 1950s in the context of the "neuron net" model of finite state automata. In the early 1960s, Carl Adam Petri introduces the Petri Net model for concurrent systems. A dataflow network is a network of computers connected by communication links. Nodes are activated by the arrival of messages on their links.

1966

Sutherland proposes a two-dimensional graphical method to define programs. He gives examples of written statements and their graphical equivalents. Where a symbol appears in two places, two equivalent connections appear in the graphical form. Intermediate values need not be named in the graph. Graphs are also explicit about multiple outputs. He uses the terms "flow input" and "flow output".

1973

Kosinski publishes a paper titled A data flow language for operating systems programming. Programs use function composition. Primitive notions of iteration, recursion, conditionals, and data replication/aggregation/selection are employed. Researchers look to create dataflow languages since they find that imperative languages can't be compiled easily for dataflow hardware.

1974

Davis creates the first visual dataflow language called Data-Driven Nets (DDNs). Dataflow graphs become programming aids rather than simply representations of textual programs. Practical visual programming languages appear only in the early 1980s with Graphical Programming Language (GPL) and Function Graph Language (FGL). However, their development is hampered by graphical limitations of current hardware.

1975

Weng develops the Textual Data-Flow Language (TDFL), one of the first purpose-built dataflow languages. Many other languages follow suit: LAU (1976), Lucid (1977), Id (1978), LAPSE (1978), VAL (1979), Cajole (1981), DL1 (1981), SISAL (1983) and Valid (1984).

1978

Denning notes that dataflow languages are being developed at MIT, at IBM T.J. Watson Research Center, at the University of California at Irvine, and at IRIA Laboratory in France. A compiler would translate language syntax into a dataflow graph to be executed on an interpreter. When an operator is ready to run because its operands are available on the input links, the interpreter sends them as a packet to an extractor network, which routes the packet to a function unit. Results are fed back through an insertion network.

1985

Though DFP was expected to eclipse von Neumann architecture, by mid-1980s it's clear that this hasn't happened. Researchers realize that this is probably because parallelism in dataflow architectures is too fine-grained. This starts a trend towards hybrid architectures and coarse-grained parallelism. Many dataflow instructions are grouped together and run in sequence, though obeying the rules of dataflow execution model.

1990

Due to computers with better graphics, this decade sees the rise of Dataflow Visual Programming Languages (DFVPLs). LabVIEW (1986) and NL (1993) are two well-known languages of this era. The main motivation for DFVPLs is software engineering rather than parallelism. DFVPLs enable programmers to create programs by wiring dataflow graphs.

## Cite As

Devopedia. 2020. "Dataflow Programming." Version 4, November 17. Accessed 2022-10-09. https://devopedia.org/dataflow-programming