deterministic implementation of concurrency
Deterministic treatments of concurrency are largely confined to synchronous programming for reactive systems, sometimes web-based but often involving embedded devices controlled by real-time operating systems. For more general-purpose computing in what are considered to be asynchronous environments, non-deterministic concurrency, expressed by process algebras and their derived languages, is often chosen. Is there an alternative to non-determinism in the general case?
There have been recent efforts to synchronise computer networks covering large geographical areas to nanosecond precision. In the physical sciences, simulations of environments nearly always involve a multi-dimensional array of values being updated deterministically in each cycle of a global clock. When researchers require a notion of non-determinism, they tend to rely on pseudo-random numbers generated by deterministic machines, rather than process algebras with explicit non-deterministic choice operators. It is in any case possible to model the effects of non-determinism in a synchronised, deterministic environment without the use of random numbers.
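As a rough illustration (not drawn from the post or from any particular simulation package), the following Python sketch updates a two-dimensional grid in lockstep on each tick of a global clock, with any apparent "non-determinism" supplied by a seeded, fully reproducible pseudo-random generator:

```python
# Minimal sketch of a deterministic, clocked simulation in the style described
# above. Every cell reads the old grid and writes the new one on each global
# tick; perturbations come from a seeded PRNG, so the whole run is repeatable.
import random

WIDTH, HEIGHT, TICKS, SEED = 8, 8, 100, 42

def step(grid, rng):
    """One global clock cycle: all cells update from the same snapshot."""
    new = [[0.0] * WIDTH for _ in range(HEIGHT)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            neighbours = [grid[(y + dy) % HEIGHT][(x + dx) % WIDTH]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            noise = rng.uniform(-0.01, 0.01)   # reproducible "non-determinism"
            new[y][x] = sum(neighbours) / len(neighbours) + noise
    return new

rng = random.Random(SEED)                      # same seed, same run, every time
grid = [[rng.random() for _ in range(WIDTH)] for _ in range(HEIGHT)]
for _ in range(TICKS):
    grid = step(grid, rng)
```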
A two-way choice made by an active module in some clock tick can be simulated by having that module read a specific bit, with the module restricted from reading that bit until that cycle. If necessary, the cycle in which a code segment is activated to make a choice can itself be determined by a random number, as well as the choice. The absence of a global clock can be simulated, if necessary, by restricting modules from monitoring other modules' timestamps. A significant issue with the original presentation was the absence of a treatment of concurrent systems in which the degree of parallelism is unknown to the compiler before runtime, and in which interacting modules dynamically fork and join at runtime.
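A speculative sketch of the choice mechanism described at the start of this paragraph is given below; the class and method names are purely illustrative, and this is not the Space or Synchronic A-Ram machinery. A module's two-way choice is resolved by a bit it is only permitted to read at a designated cycle, and no random numbers are involved:

```python
# Illustrative sketch only: a two-way choice resolved by a bit that becomes
# readable at a designated clock cycle. Names and structure are hypothetical.

class ChoiceBit:
    def __init__(self, value, visible_from_cycle):
        self._value = value
        self._visible_from = visible_from_cycle

    def read(self, cycle):
        # The module is barred from reading the bit before its release cycle.
        if cycle < self._visible_from:
            raise RuntimeError("bit not yet readable at cycle %d" % cycle)
        return self._value

class Module:
    def __init__(self, choice_bit, choice_cycle):
        self._bit = choice_bit
        self._choice_cycle = choice_cycle
        self.branch = None

    def tick(self, cycle):
        # The choice is made only in the designated cycle.
        if cycle == self._choice_cycle and self.branch is None:
            self.branch = "left" if self._bit.read(cycle) == 0 else "right"

# Deterministic run: the outcome is fixed by the bit's value and release cycle.
bit = ChoiceBit(value=1, visible_from_cycle=5)
mod = Module(bit, choice_cycle=5)
for cycle in range(10):
    mod.tick(cycle)
print(mod.branch)   # -> "right"
```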
As a result of addressing this issue, a novel parallelization of the Finite State Machine called the synchronic state diagram (SSD) has been devised. With greater functionality than the Petri Net, the SSD can deterministically model solutions to the motivating examples for process algebras, as well as the gamut of Van der Aalst et al.'s Parallel Workflow Patterns. Unlike Harel's Statecharts and Lee's treatment of concurrent finite state machines in the Ptolemy II environment, SSDs are holistic and integrated representations: they are not based on the Cartesian product, and do not exhibit a combinatorial explosion of state tuples.
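The toy comparison below is not the SSD formalism; it only illustrates why a representation built from co-active state markers, rather than from Cartesian products of component states, avoids the blow-up in the number of state tuples:

```python
# Illustrative contrast (NOT the SSD formalism): composing ten concurrent
# three-state components by Cartesian product versus keeping one co-active
# state marker per component.
from itertools import product

machines = [["idle", "busy", "done"] for _ in range(10)]   # 10 components, 3 states each

# Product construction: one composite state per tuple of component states.
product_states = list(product(*machines))
print(len(product_states))        # 3**10 = 59049 composite states

# Co-active representation: one current-state marker per component.
co_active = {i: states[0] for i, states in enumerate(machines)}
print(len(co_active))             # 10 markers, drawn from 30 component states in total
```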
At the highest level of program design in Space, SSDs have now replaced the Finite State System of co-active states that was originally presented. The claim that Space and the Synchronic A-Ram are fully general-purpose models for classical, discrete computation can now be better justified. As one might expect, a parallelized notion of finite state also has a wide range of applications in computer architecture, including the design of control systems for SoCs, FPGAs, and FPOAs such as Synchronic Engines. The organisation of control circuitry for the latter's processing elements now seems clear, and EDA development is in progress.