
Synchronic Computation (SC) is an alternative paradigm in computer science, spanning theory, hardware, and software. The approach argues that natural language and its offshoots contain flaws which become visible when they are adapted for parallel and concurrent computing environments, and that several further language structures originating in sequential environments have been adapted for parallelism in problematic ways. The result is a cascade of sub-optimal outcomes in syntactic and semantic structures, compilers, and machine-model and software conceptualisations of parallelism.

SC proposes a broad solution to these issues, involving a textual, DAG-based language in which syntactic forms are strings of strings of short strings of words, and in which the name of a semantic relation or function on code objects references not only a type but also a hardware resource instance in an associated abstract machine, such as an ALU at a low level of abstraction. A DAG-like and partially semantic conception of type is put forward, in which data structures and state transformers (code) are treated as distinct, rather than equally as first-class objects. In a series of incrementally beneficial stages, the strategy bootstraps a succession of technical contributions in language formats, compiler features, and machine models, addressing programmability, compilation, and performance issues in parallelism and concurrency, along with other foundational issues.
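The idea of a name referencing both a type and a machine resource can be illustrated with a minimal sketch. This is a hypothetical construction, not SC's actual syntax or semantics; the `Resource` and `Binding` names are invented for illustration only.

```python
# Hypothetical sketch (not SC's concrete language): a name in code is
# bound not only to a type signature but also to a specific hardware
# resource instance in an abstract machine, e.g. a particular ALU.
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    kind: str    # e.g. "ALU"
    index: int   # which physical instance of that unit

@dataclass(frozen=True)
class Binding:
    name: str
    signature: tuple     # type of the operation, e.g. (int, int) -> int
    resource: Resource   # the machine resource the name refers to

# "add0" denotes both an (int, int) -> int operation and ALU #0;
# "add1" has the same type but denotes a distinct physical unit.
add0 = Binding("add0", (("int", "int"), "int"), Resource("ALU", 0))
add1 = Binding("add1", (("int", "int"), "int"), Resource("ALU", 1))

# Same type, distinct resources: because resource identity is part of a
# name's meaning, two uses of different names cannot silently contend
# for the same unit.
assert add0.signature == add1.signature
assert add0.resource != add1.resource
```

Under this reading, resource allocation becomes visible in the program text itself rather than being deferred entirely to a scheduler.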

The approach is called synchronic because linguistic and machine models are assumed to be synchronised environments, in which asynchrony is introduced only towards the end of the compilation process, to accommodate the practical difficulty of synchronising large machines. The advantage is that in a clocked, deterministic environment it is easy to bypass race hazards, resource contention, and deadlocks, in such a way that the late introduction of asynchrony does not affect program semantics.
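Why race hazards disappear in a clocked, deterministic setting can be shown with a minimal sketch, assuming a lockstep model in which every unit reads a frozen snapshot of cycle t and all writes commit together at cycle t+1 (the `step` function and its units are invented for illustration, not part of SC):

```python
# Hypothetical lockstep simulation: all units compute from the same
# snapshot of the current cycle, then all updates commit atomically,
# so no interleaving of reads and writes can cause a race hazard.

def step(state, units):
    """Advance one clock cycle deterministically."""
    snapshot = dict(state)                                  # frozen view of cycle t
    updates = {name: fn(snapshot) for name, fn in units.items()}
    state.update(updates)                                   # atomic commit for cycle t+1
    return state

# Two units that each depend on the other's previous value -- a classic
# race under asynchronous interleaving, harmless here.
units = {
    "a": lambda s: s["b"] + 1,
    "b": lambda s: s["a"] + 1,
}
state = {"a": 0, "b": 0}
for _ in range(3):
    state = step(state, units)
print(state)  # same result regardless of the order units are evaluated in
```

Because the outcome depends only on the snapshot, not on evaluation order, the semantics survive a later, careful reintroduction of asynchrony.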

It has long been taken as a given that, because of (among other things) the practical difficulties of synchronisation, asynchrony should be assumed from the outset in the development of formal models for parallelism, such as process algebras and distributed programming. Much of the effort in process algebra is devoted to techniques and proofs for ensuring that asynchronous processes are free of hazards and deadlocks, while having little to say about machine models. Synchronic Computation, by contrast, bypasses these issues while providing a methodology for designing machines for specialised and more general-purpose architectures. Physical constraints can be introduced incrementally into the design process in a least-restrictive order, reducing conceptual bias towards preconceived architectural types.

Updated 20 Sep 2024.