Architectures & compute bottlenecks

We bring together experts in materials science, system architecture and neuromorphic algorithms to devise strategies for accelerating existing and future neuromorphic workloads, and to develop the materials, devices and circuits needed to build such accelerators. We iteratively map algorithmic compute requirements onto hardware devices and architectures and, in turn, propagate physical device constraints back into adjustments of the algorithms. To assess overall system performance, we co-simulate physics, systems and algorithms in simulation frameworks and build functional prototypes.

Artificial & convolutional (deep) neural networks

The workhorses of today’s neuromorphic applications are artificial and convolutional neural networks (ANNs/CNNs), architectures that loosely mimic the human brain. When they contain more than a handful of hidden (internal) layers, these networks are called “deep,” and they have been extremely successful, for example at classifying images. The math behind ANNs/CNNs is mainly linear algebra, in particular matrix-matrix and matrix-vector multiplications, which we map to electrical and optical crossbar arrays.
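As a minimal sketch of why crossbars fit this workload (an idealized model, not a description of any specific IBM device): a weight matrix is stored as an array of conductances, input activations are applied as row voltages, and by Ohm’s law and Kirchhoff’s current law each column current is the corresponding dot product, so the whole matrix-vector multiplication happens in one analog step. The function name `crossbar_mvm` and the example values are illustrative.

```python
import numpy as np

def crossbar_mvm(conductances: np.ndarray, voltages: np.ndarray) -> np.ndarray:
    """Ideal crossbar: column current I_j = sum_i v_i * G[i, j].

    Per device, Ohm's law gives i = v * g; per column, Kirchhoff's
    current law sums the device currents, yielding the matrix-vector
    product of the conductance matrix with the input voltages.
    """
    return voltages @ conductances

# Weights stored as a 3x2 grid of conductances (rows = inputs, cols = outputs).
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.3, 0.1]])
v = np.array([1.0, 2.0, 3.0])  # input voltages applied to the rows

print(crossbar_mvm(G, v))  # currents read out on the two columns
```

A digital processor would need one multiply-accumulate per matrix entry; the idealized crossbar performs all of them in parallel in the analog domain, which is the source of the expected speed and energy advantage.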

Recurrent networks: reservoir computing

During inference in ANNs/CNNs, data flows in a single direction, from input to output. Recurrent neural networks, in contrast, contain internal feedback paths. Although feedback allows more efficient and compact networks, it also makes them harder to train, requiring more complex algorithms and raising convergence-stability issues.

A special case of a recurrent network is the reservoir computer, in which only the synaptic weights of the output layer are trained. The recurrent part of the network, the reservoir, remains fixed and untrained. This makes training much more efficient, at the expense of some computational flexibility. The feedback paths in reservoir networks induce temporal dependencies, which are governed by the physical properties of the reservoir. Electromagnetic wave interference and other high-dimensional physical systems have been used to build reservoirs.
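The split between a fixed recurrent part and a trained readout can be sketched with a tiny echo state network (a standard software form of reservoir computing; this toy, with illustrative names like `run_reservoir`, is not any group’s specific implementation): the reservoir weights are random and never updated, and only the linear output layer is fitted, here in closed form with ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, random reservoir: these weights are never trained.
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stable dynamics

def run_reservoir(u: np.ndarray) -> np.ndarray:
    """Drive the fixed recurrent dynamics with input u (T x n_in); return states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)  # feedback creates temporal memory
        states.append(x)
    return np.array(states)

# Toy time-series task: predict the next sample of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
X = run_reservoir(u[:-1])  # reservoir states
Y = u[1:]                  # one-step-ahead targets

# Train ONLY the linear readout, via ridge regression in closed form.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ W_out
print("train MSE:", np.mean((pred - Y) ** 2))
```

Because only `W_out` is fitted, training reduces to one linear solve instead of backpropagation through time, which is why a physical system with rich dynamics can stand in for the reservoir itself.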

Recurrent systems, and reservoir systems in particular, have shown great promise in processing time series (audio/speech, financial data, etc.). Training of the output layer can be accelerated in the same way as for ANNs/CNNs, and we are also working to map the entire recurrent layer, the reservoir, onto different physical systems suited to various applications.

Ask the expert

Jonas Weiss

IBM Research scientist

## EU projects

Phase-Change Switch

Exploiting the abrupt metal-insulator transition of vanadium dioxide for electronic circuits and systems

NeuRAM^{3}

NEUral computing aRchitectures in Advanced Monolithic 3D-VLSI nano-technologies

PHRESCO

PHotonic REServoir COmputing

## SNF funding

NAPRECO

Novel Architectures for Photonic Reservoir Computing