Synchronic Computation is an alternative paradigm in Computer Science, encompassing theory, hardware and software. The latter two arise out of a novel approach to theoretical computer science, which analyses the foundations of human and formal languages with a view to developing non-serial, multi-perspectival language forms. These forms in turn give rise to novel machine models for foundational informatics and high performance parallel computation.
The approach is called synchronic because linguistic and machine models are assumed to be synchronised environments, and asynchrony is only introduced towards the end of a compilation process, to accommodate the practical difficulties of synchronising large machines. The advantage is that in a clocked, deterministic environment it is easy to bypass race hazards, resource contention and deadlocks, in such a way that the late introduction of asynchrony does not affect program semantics.
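To illustrate why a clocked, deterministic environment bypasses race hazards, here is a minimal sketch (not drawn from the source, and not the paradigm's actual formalism): every component computes its next state from a frozen snapshot of the current cycle, and all updates are committed together on the clock tick, so the result is independent of evaluation order.

```python
def tick(state, rules):
    """Advance one clock cycle: every rule reads the same frozen snapshot,
    so the outcome does not depend on the order in which rules run."""
    snapshot = dict(state)  # frozen view of the state at cycle t
    return {name: rule(snapshot) for name, rule in rules.items()}

# Two registers that swap values each cycle. With in-place asynchronous
# updates this is a classic race hazard; under the clocked model it is
# deterministic by construction.
rules = {
    "a": lambda s: s["b"],
    "b": lambda s: s["a"],
}

state = {"a": 0, "b": 1}
state = tick(state, rules)  # -> {"a": 1, "b": 0}
state = tick(state, rules)  # -> {"a": 0, "b": 1}
```

The names `tick` and `rules` are hypothetical; the point is only that once determinism is guaranteed at the synchronous level, asynchrony can be introduced later without altering program semantics.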
It has been taken as a given that, because of the practical difficulties of synchronisation (amongst other things), asynchrony should be assumed from the outset in the development of formal models for parallelism, such as process algebras and distributed programming. Much of the effort in process algebra is devoted to techniques and proofs for ensuring that asynchronous processes are hazard and deadlock free, whilst having little to say about machine models. Synchronic Computation, on the other hand, bypasses these issues, whilst providing a methodology for the design of machines with specialised and more general purpose architectures. Physical constraints can be incrementally introduced into the design process in a least restrictive order, thus reducing conceptual bias towards pre-conceived architectural types.
An 8-page summary of synchronic computation from 2011.