Posts by Alex Berka
Pivot
Recent work on developing a chip architecture suggests that existing FPGAs are more suitable targets for synchronic computation than earlier thought. Strategically and practically, this would offer considerable benefits. Consequently, a pivot away from chip design towards an exclusive focus on novel forms of high-level synthesis is being considered.
The Synchronic Approach
Isynchronise is developing innovative approaches to languages, compilers and reconfigurable processors, arising from a novel textual language system and associated families of formal models of computation. The basic insight is that textual programming languages have until now rested on a natural-language-derived conception of the relationship between syntax and semantics, which evolved historically in a sub-optimal manner and complicates the description of many-to-many relationships. This issue has had far-reaching consequences, obscuring promising avenues in mathematics, computer science and high-performance computation, and significant opportunities are consequently waiting to be exploited.
Current projects include the development of programming languages, radically new compiler technologies, and high performance reconfigurable architectures supporting control intensive operations.
Public forum discussion on SC
There is some good discussion on the spatial approach and synchronic computation here: http://lambda-the-ultimate.org/node/5614
Interpretable Machine Learning
Interpretable machine learning is potentially landmark research. Developed originally at Duke University and then at the University of Maine, ProtoPNet provides a means of resolving the mystery of how deep neural networks classify inputs.
What future for the Von Neumann paradigm?
What future is there for the Von Neumann paradigm of computer architecture? AMD is to acquire Xilinx, after Intel acquired Altera. Is this the beginning of a shift away from the ISA paradigm, or a last-ditch attempt to rescue it?
Will microfluidics solve thermal issues with 3D-ICs?
The promise of monolithic 3D integrated circuits, with multiple die layers connected through monolithic vias, has been held back in part by their thermal characteristics. This research from Ecole Polytechnique Federale de Lausanne may offer a solution by combining microfluidics and electronics within the same die layer to produce a monolithically integrated cooling structure.
#3dChips #integratedcircuits #coolingtechnology
Retuning transistors to replace LUTs?
Currently, logic functions in reconfigurable architectures are implemented using relatively large-scale LUTs. This research from Nanjing University and the National Institute for Materials Science in Japan shows how just a few transistors making up the same logic circuit can be tuned to implement a 2:1 multiplexer, a D-latch, and a 1-bit full adder and subtractor. #fpgas #circuits
BCTCS 2020
There will be a conference talk on the alpha-Ram family at BCTCS 2020 in April, if the conference is still going ahead:
The alpha-ram family - bit level models for parallelism and concurrency.
Abstract
Amongst the standard formal models of computation, there are no bit-level machine models for parallelism and concurrency that permit computer simulations, in tractable amounts of time and space, of not just trivial programming constructs but also more complex high-level programs. Such a machine would provide a basis for investigating processes running on a basic device, rather than in a formalism abstracted from hardware, without introducing biases from the particulars of higher level architectures. The α-Ram family of deterministic machines provides not only simple semantics and neutral machine platforms for language design, but also opportunities for developing specialized and more general-purpose architectures. Physical constraints can be introduced incrementally into the design process in a least restrictive order, thereby reducing bias towards pre-conceived architectural types.
Deterministic implementation of concurrency
Deterministic treatments of concurrency are largely confined to synchronous programming for reactive systems, sometimes web-based but often involving embedded devices controlled by real-time operating systems. For more general-purpose computing in what are considered to be asynchronous environments, nondeterministic concurrency, expressed by process algebras and their derived languages, is often chosen. Is there an alternative to non-determinism in the general case?
There have been recent efforts to synchronise computer networks covering large geographical areas to nanosecond precision. In the physical sciences, simulations of environments nearly always involve a multi-dimensional array of values being updated deterministically in each cycle of a global clock. When researchers require a notion of non-determinism, they tend to rely on random numbers generated by deterministic machines, rather than process algebras with explicit non-deterministic choice operators. It is in any case possible to model the effects of non-determinism in a synchronised, deterministic environment, without the use of random numbers.
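To make that pattern concrete, here is a minimal sketch in Python, purely illustrative and not part of any Isynchronise toolchain: a two-dimensional array of values is updated deterministically on every tick of a global clock, and any apparent randomness comes from a seeded pseudo-random generator rather than from a non-deterministic choice operator.

import random

def step(grid):
    # Compute the next grid from the current one; every cell is updated
    # in the same global clock tick, using only the previous state.
    n, m = len(grid), len(grid[0])
    nxt = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            neighbours = [grid[(i + di) % n][(j + dj) % m]
                          for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))]
            nxt[i][j] = 0.5 * grid[i][j] + 0.5 * sum(neighbours) / 4
    return nxt

def simulate(n=8, m=8, ticks=100, seed=42):
    rng = random.Random(seed)          # deterministic pseudo-randomness
    grid = [[rng.random() for _ in range(m)] for _ in range(n)]
    for _ in range(ticks):             # one global clock cycle per iteration
        grid = step(grid)              # all cells updated in lock step
    return grid

Run simulate() twice with the same seed and the results are identical: determinism is preserved even where "random" behaviour is wanted.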
A two-way choice made by an active module in some clock tick can be simulated by reading a specific bit, with that module being restricted from reading the bit until that cycle. If necessary, both the choice itself and the cycle in which a code segment is activated to make it can be determined by random numbers. The absence of a global clock can likewise be simulated by restricting modules from monitoring other modules' timestamps. A significant issue with the original presentation was the absence of a treatment of concurrent systems in which the degree of parallelism is opaque to the compiler before runtime, and where interactive modules dynamically fork and join during runtime.
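A minimal sketch of the choice-as-bit-read simulation described above, again in Python and again purely illustrative (the module and cycle names are invented for the example, and are not drawn from Space or the α-Ram): a module's two-way choice at a given clock tick is resolved by reading a bit that was written deterministically in advance, and the module is barred from inspecting that bit before its designated cycle.

import random

rng = random.Random(7)                                   # deterministic source of "choices"
choice_bits = [rng.randint(0, 1) for _ in range(100)]    # pre-written choice bits

class Module:
    def __init__(self, name, choice_cycle):
        self.name = name
        self.choice_cycle = choice_cycle     # the only tick at which the bit may be read

    def run(self, tick):
        if tick < self.choice_cycle:
            return f"{self.name}: waiting"                # bit not yet visible to this module
        if tick == self.choice_cycle:
            bit = choice_bits[tick]                       # read the pre-written bit
            return f"{self.name}: took branch {'A' if bit == 0 else 'B'}"
        return f"{self.name}: continuing"

modules = [Module("m0", choice_cycle=3), Module("m1", choice_cycle=5)]
for tick in range(8):                                     # global clock
    print(tick, [m.run(tick) for m in modules])

Every run produces the same trace, yet from each module's point of view the branch taken at its choice cycle is unpredictable until that cycle arrives.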
As a result of addressing this last issue, a novel parallelization of the Finite State Machine, called the synchronic state diagram (SSD), has been devised. With greater functionality than the Petri Net, SSDs can deterministically model solutions to the motivating examples for process algebras, as well as the gamut of Van der Aalst et al's Parallel Workflow Patterns. Unlike Harel's Statecharts and Lee's treatment of concurrent finite state in the Ptolemy II environment, SSDs are holistic and integrated representations: they are not based on the cartesian product, and do not exhibit a combinatorial explosion of state tuples.
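The combinatorial explosion mentioned above is easy to quantify. The snippet below is illustrative only, and says nothing about how SSDs themselves are represented: it builds the cartesian-product state space of several small FSMs, the construction underlying product-style compositions, and shows how the number of state tuples grows as k^N for N machines of k states each.

from itertools import product

def product_states(machines):
    # Each machine is just a list of state names; the product automaton's
    # state space is the set of all tuples, one state per machine.
    return list(product(*machines))

k, N = 4, 6                                    # 6 machines with 4 states each
machines = [[f"s{i}" for i in range(k)] for _ in range(N)]
states = product_states(machines)
print(len(states), "state tuples")             # 4 ** 6 = 4096
# Doubling N squares the count: 12 machines already give 16,777,216 tuples.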
At the highest level of program design in Space, SSDs have now replaced the Finite State System of co-active states that was originally presented. The claim that Space and the Synchronic A-Ram are fully general-purpose models for classical, discrete computation can now be better justified. As one might expect, a parallelized notion of finite state also has a wide range of applications in computer architecture, including the design of control systems for SoCs, FPGAs and FPOAs, including Synchronic Engines. The organisation of control circuitry for the latter's processing elements now seems clear, and EDA development is in progress.