Insights on Reactive Programming

In computing, reactive programming is a declarative programming paradigm concerned with data streams and the propagation of change. With this paradigm it is possible to express static (e.g., arrays) or dynamic (e.g., event emitters) data streams with ease, and also to communicate that an inferred dependency exists within the associated execution model, which facilitates the automatic propagation of changed data.

For example, in an imperative programming setting, a := b + c would mean that a is being assigned the result of b + c at the instant the expression is evaluated, and later the values of b and c can be changed with no effect on the value of a. In reactive programming, by contrast, the value of a is automatically updated whenever the value of b or c changes, without the program having to re-execute the statement a := b + c to determine the presently assigned value of a.
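
To make this concrete, here is a minimal Python sketch of the reactive behaviour, assuming a hypothetical Cell abstraction (the class and function names are invented for illustration, not taken from any particular library): a is recomputed automatically whenever b or c is reassigned.

    # A minimal sketch of the idea, not any particular library's API.
    class Cell:
        """Holds a value and re-runs dependent computations when it changes."""
        def __init__(self, value=None):
            self._value = value
            self._listeners = []

        def get(self):
            return self._value

        def set(self, value):
            self._value = value
            for listener in self._listeners:
                listener()          # re-run each dependent computation

        def on_change(self, listener):
            self._listeners.append(listener)

    b = Cell(1)
    c = Cell(2)
    a = Cell()

    def recompute_a():
        a.set(b.get() + c.get())

    # Register the dependency "a depends on b and c", then compute once.
    b.on_change(recompute_a)
    c.on_change(recompute_a)
    recompute_a()

    print(a.get())   # 3
    b.set(10)        # a is updated automatically, no manual re-execution
    print(a.get())   # 12

In a purely imperative version, the second print would still show 3, because nothing re-executes the assignment after b changes.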

Another example is a hardware description language such as Verilog, where reactive programming enables changes to be modeled as they propagate through circuits.

Reactive programming has been proposed as a way to simplify the creation of interactive user interfaces and near-real-time system animation.

For example, in a model–view–controller (MVC) architecture, reactive programming can facilitate changes in an underlying model that are reflected automatically in an associated view.
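
As a rough illustration of that MVC scenario, the following Python sketch (Model and View are hypothetical names, not a specific framework's API) lets a view subscribe to a model so that every change to the model re-renders the view without any explicit refresh call.

    # Sketch of a reactive model-view link; the names are illustrative only.
    class Model:
        def __init__(self, title):
            self._title = title
            self._subscribers = []

        def subscribe(self, callback):
            self._subscribers.append(callback)
            callback(self._title)        # render once with the current state

        def set_title(self, title):
            self._title = title
            for callback in self._subscribers:
                callback(title)          # push the change to every subscribed view

    class View:
        def render(self, title):
            print(f"<h1>{title}</h1>")   # stand-in for a real UI update

    model = Model("Draft")
    view = View()
    model.subscribe(view.render)         # prints <h1>Draft</h1>
    model.set_title("Published")         # prints <h1>Published</h1> automatically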

Several popular approaches are employed in the creation of reactive programming languages. One is the specification of dedicated languages that are specific to various domain constraints; such constraints are usually characterized by real-time, embedded computing, or hardware description. Another approach involves the specification of general-purpose languages that include support for reactivity. Other approaches are articulated in the definition and use of programming libraries, or embedded domain-specific languages, that enable reactivity alongside or on top of the programming language. The choice among these approaches involves trade-offs in language capability. In general, the more restricted a language is, the more its associated compilers and analysis tools are able to inform developers (e.g., by analyzing whether programs are able to execute in actual real time). However, such gains in specificity can come at the cost of reduced general applicability.

A variety of models and semantics govern the family of reactive programming. We can loosely split them along the following dimensions:

  • Synchrony: whether the underlying model of time is synchronous or asynchronous
  • Determinism: deterministic versus non-deterministic evaluation processes and results
  • Update process: callbacks versus dataflow versus actors

Essence of implementations:

Reactive programming language runtimes are represented by a graph that identifies the dependencies among the involved reactive values. In such a graph, nodes represent the act of computing and edges model dependency relationships. The runtime uses this graph to keep track of which computations must be executed anew once an involved input changes value.
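
A toy version of such a runtime graph might look like the following sketch (purely illustrative, not a real runtime): each node holds a computation, edges record which nodes depend on which, and a change to a source node re-runs everything downstream.

    # Toy dependency graph: nodes are computations, edges are dependencies.
    class Node:
        def __init__(self, compute, inputs=()):
            self.compute = compute           # function of the input node values
            self.inputs = list(inputs)       # edges: the nodes this one reads
            self.dependents = []             # edges: the nodes that read this one
            for node in self.inputs:
                node.dependents.append(self)
            self.value = None
            self.refresh()

        def refresh(self):
            self.value = self.compute(*(n.value for n in self.inputs))
            for dependent in self.dependents:
                dependent.refresh()          # propagate the change downstream

        def set(self, value):
            # Only meaningful for source nodes (nodes with no inputs).
            self.compute = lambda: value
            self.refresh()

    price = Node(lambda: 100.0)
    tax_rate = Node(lambda: 0.2)
    tax = Node(lambda p, r: p * r, inputs=(price, tax_rate))
    total = Node(lambda p, t: p + t, inputs=(price, tax))

    print(total.value)   # 120.0
    price.set(50.0)      # the change flows through tax into total
    print(total.value)   # 60.0

Note that this naive scheme can re-evaluate a node more than once per change (total is refreshed twice above); real runtimes usually order updates, for example topologically, to avoid such redundant work.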

Change propagation algorithms:

Various abstract implementation approaches enable the specification of reactive programming; in each, the flow of data is explicitly described using such a graph. The most common change propagation algorithms are listed below, followed by a short sketch contrasting the first two:

  • pull
  • push
  • hybrid push-pull
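
Here is a small, hypothetical sketch contrasting the first two strategies: in pull propagation the consumer recomputes a value on demand, while in push propagation the producer forwards new values to its dependents at the moment they arrive. A hybrid push-pull system combines both, typically pushing invalidation notices and pulling actual values lazily.

    # Pull: the dependent asks for the value only when it needs it.
    class PullCell:
        def __init__(self, compute):
            self.compute = compute

        def get(self):
            return self.compute()        # recomputed lazily, on demand

    # Push: the source drives updates to registered dependents eagerly.
    class PushCell:
        def __init__(self, value):
            self.value = value
            self.dependents = []

        def set(self, value):
            self.value = value
            for callback in self.dependents:
                callback(value)          # propagation happens at write time

    readings = [1, 2, 3]
    average = PullCell(lambda: sum(readings) / len(readings))
    print(average.get())                 # 2.0, computed only when asked for

    sensor = PushCell(0)
    sensor.dependents.append(lambda v: print(f"new reading: {v}"))
    sensor.set(42)                       # prints immediately, nobody had to ask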

Degrees of explicitness:

Reactive programming languages can range from very explicit ones, where data flows are set up by using arrows, to implicit ones, where the data flows are derived from language constructs that look similar to those of imperative or functional programming. For example, in implicitly lifted functional reactive programming (FRP), a function call might implicitly cause a node in a data flow graph to be constructed. Reactive programming libraries for dynamic languages (such as the Lisp “Cells” and Python “Trellis” libraries) can construct a dependency graph from runtime analysis of the values read during a function’s execution, allowing data flow specifications to be both implicit and dynamic.
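
The runtime-analysis idea can be sketched roughly as follows (a toy illustration of the general technique, not the actual Cells or Trellis API): while a formula runs, every cell it reads registers the formula as a dependent, so the dependency graph is built implicitly and can differ from one evaluation to the next.

    # Toy implicit dependency tracking: reads during evaluation build the graph.
    _active_formula = None

    class TrackedCell:
        def __init__(self, value):
            self._value = value
            self._dependents = set()

        def get(self):
            if _active_formula is not None:
                self._dependents.add(_active_formula)   # record the read
            return self._value

        def set(self, value):
            self._value = value
            for dependent in list(self._dependents):
                dependent()                             # re-run each dependent formula

    def formula(fn):
        """Run fn now, and re-run it whenever a cell it read changes."""
        def run():
            global _active_formula
            _active_formula = run
            try:
                fn()
            finally:
                _active_formula = None
        run()
        return run

    x = TrackedCell(2)
    y = TrackedCell(3)
    formula(lambda: print("x + y =", x.get() + y.get()))   # prints x + y = 5
    x.set(10)                                              # prints x + y = 13 automatically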

Sometimes the term reactive programming refers to the architectural level of software engineering, where individual nodes in the data flow graph are ordinary programs that communicate with each other.

Static or dynamic:

Reactive programming can be purely static where the data flows are set up statically, or be dynamic where the data flows can change during the execution of a program.

The use of data switches in the data flow graph can, to some extent, make a static data flow graph appear dynamic, and thus blur the distinction slightly. True dynamic reactive programming, however, could use imperative programming to reconstruct the data flow graph.

Higher-order reactive programming:

Reactive programming could be said to be of higher order if it supports the idea that data flows can be used to construct other data flows. That is, the resulting value of a data flow is another data flow graph that is executed using the same evaluation model as the first.
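
A rough sketch of the idea (illustrative only, not a specific library): the events carried by an outer stream are themselves streams, and a flattening step subscribes to whichever inner stream is current, which is how many reactive libraries handle streams of streams.

    # Toy streams: the outer stream's events are themselves streams.
    class Stream:
        def __init__(self):
            self._subscribers = []

        def subscribe(self, callback):
            self._subscribers.append(callback)

        def emit(self, value):
            for callback in self._subscribers:
                callback(value)

    def flatten_latest(outer):
        """Return a stream forwarding events from the most recent inner stream."""
        out = Stream()
        state = {"current": None}

        def on_inner(inner):
            state["current"] = inner
            def forward(value):
                if state["current"] is inner:   # ignore stale inner streams
                    out.emit(value)
            inner.subscribe(forward)

        outer.subscribe(on_inner)
        return out

    clicks_per_page = Stream()             # outer: emits one stream per page visited
    flattened = flatten_latest(clicks_per_page)
    flattened.subscribe(lambda v: print("click:", v))

    page1 = Stream()
    clicks_per_page.emit(page1)
    page1.emit("button A")                 # prints click: button A

    page2 = Stream()
    clicks_per_page.emit(page2)
    page2.emit("button B")                 # prints click: button B
    page1.emit("stale click")              # ignored: page1 is no longer current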

Similarities with observer pattern:

Reactive programming has principal similarities with the observer pattern commonly used in object-oriented programming. However, integrating the data flow concepts into the programming language would make it easier to express them and could therefore increase the granularity of the data flow graph. For example, the observer pattern commonly describes data-flows between whole objects/classes, whereas object-oriented reactive programming could target the members of objects/classes.
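
The granularity point can be sketched as follows (the class names are invented for illustration): a classic observer is notified about the whole object, so every observer reacts to every change, whereas member-level reactivity re-runs only the computations that depend on the field that actually changed.

    # Observer pattern: one notification channel for the whole object.
    class Person:
        def __init__(self, name, age):
            self.name, self.age = name, age
            self.observers = []

        def update(self, **fields):
            self.__dict__.update(fields)
            for observer in self.observers:
                observer(self)                 # every observer reacts to any change

    # Member-level reactivity: one channel per field.
    class ReactivePerson:
        def __init__(self, name, age):
            self._fields = {"name": name, "age": age}
            self._watchers = {"name": [], "age": []}

        def watch(self, field, callback):
            self._watchers[field].append(callback)

        def set(self, field, value):
            self._fields[field] = value
            for callback in self._watchers[field]:
                callback(value)                # only dependents of this field run

    p = Person("Ada", 36)
    p.observers.append(lambda person: print("re-render the whole card"))
    p.update(age=37)                           # fires even though name is unchanged

    r = ReactivePerson("Ada", 36)
    r.watch("name", lambda v: print("re-render name label:", v))
    r.watch("age", lambda v: print("re-render age label:", v))
    r.set("age", 37)                           # only the age label is re-rendered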

The above is a brief overview of Reactive Programming. Watch this space for more updates on the latest trends in Technology.
