General
Isynchronise’s Next Chapter
Isynchronise has been an open source project with two aims: to identify the legacy issues, inherited from natural language, that impact parallelism and high performance computing; and to refactor formal and programming language structures, along with the downstream models of computation. To advance the project generally, and to develop new architectural concepts for a system control chip and a programmable DSP accelerator called AlphaCore, Isynchronise is transitioning to a for-profit enterprise, in order to attract the venture capital required to fund chip development.
Why This Change?
While the non-profit phase allowed a proof of concept, the new structure provides the agility and access to capital needed to realise the strategy: to attract the investment required to bring AlphaCore to market faster, and to move beyond pilot programs to large-scale deployments that affect millions.
Architecture for Programmable Digital Signal Processing
New work involves a programmable DSP aimed at advanced predictive modelling, image enhancement, computer vision, and adaptive filtering. Although the Synchronic approach offers significant benefits, its wire-based connectivity poses a chip-area challenge; that challenge does not, however, apply in the context of a programmable DSP.
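As one concrete example of the adaptive-filtering workloads mentioned above, the sketch below implements a standard least-mean-squares (LMS) adaptive FIR filter in NumPy. The filter length and step size are illustrative choices only, and nothing here is taken from the AlphaCore design.

```python
import numpy as np

def lms_filter(x, d, num_taps=4, mu=0.05):
    """Adapt FIR weights w so the filter output y tracks the desired signal d.

    x  : input signal (1-D array)
    d  : desired signal, same length as x
    mu : step size controlling adaptation speed vs. stability
    """
    w = np.zeros(num_taps)
    y = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        window = x[n - num_taps + 1:n + 1][::-1]  # most recent sample first
        y[n] = w @ window                          # filter output
        e = d[n] - y[n]                            # instantaneous error
        w += 2 * mu * e * window                   # gradient-descent update
    return y, w
```

A typical use is system identification: feed the same input to an unknown FIR system and to the adaptive filter, and the weights converge towards the unknown system's taps.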
Pivot
Recent work on developing a chip architecture suggests that existing FPGAs are more suitable as targets for synchronic computation than earlier thought. Strategically and practically, this would offer considerable benefits. Consequently, a pivot away from chip design, towards an exclusive focus on novel forms of high-level synthesis, is being considered.
The Synchronic Approach
Isynchronise is developing innovative approaches to languages, compilers, and reconfigurable processors, arising from a novel textual language system and associated families of formal models of computation. The basic insight is that textual programming languages have until now rested on a natural-language conception of the relationship between syntax and semantics, one that evolved historically in a sub-optimal manner and complicates the description of many-to-many relationships. This issue has had far-reaching consequences, obscuring promising avenues in mathematics, computer science, and high performance computation; as a result, significant opportunities are now waiting to be exploited.
Current projects include the development of programming languages, radically new compiler technologies, and high performance reconfigurable architectures supporting control intensive operations.
Public forum discussion on SC
There is some good discussion of the spatial approach and synchronic computation at Lambda the Ultimate: http://lambda-the-ultimate.org/node/5614
Interpretable Machine Learning
Interpretable machine learning is potentially landmark research. ProtoPNet, developed originally at Duke University and continued at the University of Maine, provides a means of resolving the mystery of how deep neural networks classify inputs: it explains a classification by pointing to prototypical parts of training images that resemble parts of the input.
What future for the Von Neumann paradigm?
What future for the Von Neumann paradigm of computer architecture? AMD is to acquire Xilinx, after Intel acquired Altera. Is this the beginning of a shift away from the ISA, or a last-ditch attempt to rescue it?
Will microfluidics solve thermal issues with 3D-ICs?
The promise of monolithic 3D integrated circuits, with multiple die layers connected through monolithic vias, has been held back in part by their thermal characteristics. This research from École Polytechnique Fédérale de Lausanne may offer a solution by combining microfluidics and electronics within the same die layer to produce a monolithically integrated cooling structure.
#3dChips #integratedcircuits #coolingtechnology
Retuning transistors to replace LUTs?
Currently, logic functions in reconfigurable architectures are implemented using relatively large-scale LUTs. This research from Nanjing University and the National Institute for Materials Science in Japan shows how just a few transistors making up the same logic circuit can be re-tuned to implement a 2:1 multiplexer, a D-latch, and a 1-bit full adder and subtractor. #fpgas #circuits
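For reference, the logic functions named above (the 2:1 multiplexer, full adder, and full subtractor) can be written out as plain Boolean functions. The sketch below defines them in Python purely to show the truth tables involved; it says nothing about the transistor-level implementation in the cited research.

```python
def mux2to1(sel, a, b):
    """2:1 multiplexer: output a when sel is 0, b when sel is 1."""
    return b if sel else a

def full_adder(a, b, cin):
    """1-bit full adder: returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def full_subtractor(a, b, bin_):
    """1-bit full subtractor computing a - b - bin_: returns (difference, borrow_out)."""
    d = a ^ b ^ bin_
    bout = ((~a & 1) & (b | bin_)) | (b & bin_)
    return d, bout
```

In a LUT-based FPGA each of these functions would occupy one or more LUT entries; the cited work instead re-tunes the transistors of a single small circuit to realise them.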
BCTCS 2020
There will be a conference talk on α-Rams at BCTCS 2020 in April, if the event is still going ahead:
The α-Ram family: bit-level models for parallelism and concurrency.
Abstract
Amongst the standard formal models of computation, there are no bit-level machine models for parallelism and concurrency that permit computer simulation, in tractable amounts of time and space, of not just trivial programming constructs but also more complex high-level programs. Such a machine would provide a basis for investigating processes running on a basic device, rather than in a formalism abstracted from hardware, without introducing biases from the particulars of higher-level architectures. The α-Ram family of deterministic machines provides not only simple semantics and neutral machine platforms for language design, but also opportunities for developing specialised and more general-purpose architectures. Physical constraints can be introduced incrementally into the design process, in a least-restrictive order, thereby reducing bias towards pre-conceived architectural types.
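To illustrate what a simulable bit-level machine model looks like in general, the toy simulator below executes programs over a bit-addressable memory. The instruction set here (SET0, SET1, BR, CBR, HALT) is invented for this sketch and is not the α-Ram instruction set; it only conveys the flavour of simulating computation directly on bits.

```python
def run(program, memory, max_steps=1000):
    """Execute a program over a bit-addressable memory.

    program : list of tuples, e.g. ("SET1", addr) or ("CBR", addr, target)
    memory  : list of bits (0/1), mutated in place and returned
    """
    pc = 0
    for _ in range(max_steps):
        op, *args = program[pc]
        if op == "HALT":
            return memory
        if op == "SET0":                 # clear the bit at args[0]
            memory[args[0]] = 0
        elif op == "SET1":               # set the bit at args[0]
            memory[args[0]] = 1
        elif op == "BR":                 # unconditional jump to args[0]
            pc = args[0]
            continue
        elif op == "CBR":                # jump to args[1] if bit args[0] is 1
            if memory[args[0]]:
                pc = args[1]
                continue
        pc += 1
    return memory
```

For example, a five-instruction program can compute the NOT of bit 0 into bit 1 by branching on bit 0 and writing the opposite value. Even this toy shows why such models simulate in tractable time and space: each step touches a single bit.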