The need for neuromorphic hardware

The exploding amount of data generated around the globe and the increasing complexity of computational tasks require a new generation of algorithms and processors to continue solving problems efficiently with artificial computing systems. Huge progress has been made in software-based approaches that enable machine-learning algorithms to classify and analyze massive data sets. These new algorithms currently run on classical von Neumann computing architectures, which are the workhorse of our IT infrastructure and were originally developed and optimized for traditional workloads. However, entering the cognitive era requires novel architectures to accelerate and efficiently execute new learning algorithms.

The cognitive era requires novel architectures to accelerate and efficiently execute new learning algorithms.

—IBM scientist Stefan Abel

There are different approaches to realizing such hardware platforms. Besides using large clusters of GPUs to accelerate the vector/matrix multiplications needed for training deep neural networks, dedicated CMOS architectures for low-power cognitive computing have been fabricated successfully [1]. Inspired by the efficiency of the human brain, researchers are looking even beyond such concepts by investigating “native” neuromorphic hardware concepts, such as crossbar arrays of memristors [2].

Reservoir computing

At IBM Research – Zurich we are developing novel architectures that can solve cognitive tasks natively in hardware. In this respect, “reservoir computing” represents an example of trainable systems that can classify and predict dynamic data. A reservoir computing system is a recurrent neural network in which the information flow is nonlinear due to internal feedback loops. Compared to the widespread feedforward neural networks (Figure 1a), training recurrent systems is inherently more difficult. By removing the synaptic weights within the network of hidden nodes (Figure 1b), reservoir computing greatly reduces the training complexity of recurrent networks, albeit at the expense of reduced computational flexibility. However, since the introduction of reservoir computing [3, 4], excellent performance has been demonstrated in many tasks, for example nonlinear channel equalization, spoken digit recognition, and time-series prediction.


Fig. 1. Schematics of (a) a feedforward neural network and (b) a reservoir computing system. Compared to general recurrent neural networks, there are no synaptic weighting elements in the network of hidden nodes.
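
The difference between Figures 1a and 1b can be made concrete in software. Below is a minimal sketch in the echo-state-network flavor of reservoir computing [3]: the input and internal connections are generated randomly and then never trained. The network size, weight ranges, and scaling factor are illustrative assumptions, not values from this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the article.
n_in, n_res = 1, 100

# Input and internal (hidden-node) connections are generated once and never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))

# Scale the recurrent weights so their spectral radius is below 1,
# which gives the reservoir a fading memory of past inputs.
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def run_reservoir(u_seq):
    """Drive the reservoir with a 1-D input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        # Nonlinear state update with internal feedback (the recurrent loop).
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states.append(x.copy())
    return np.array(states)  # shape: (time steps, n_res)
```

Only the readout that sits on top of these states is trained, as sketched after Figure 2.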


The basic principle of reservoir computing is depicted in Figure 2 for the example of a classification task: when separating multiple classes would require a complex, high-order separation function, a reservoir system simplifies the classification task by transforming the input signal into a high-dimensional feature space. When this transformation is chosen properly, states belonging to different classes can be separated by hyperplanes. Such planes can be determined by applying linear regression to the output states of the reservoir, a rather low-cost computational task compared to the training requirements of general recurrent neural networks.


Fig. 2. Principle of reservoir computing: The input states are transformed into a high-dimensional feature space in which classification can be performed with a linear operation.
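
Continuing the sketch above, the linear readout can be computed in closed form with (ridge-regularized) linear regression on the collected reservoir states. The sequences `u_seq` and `y_seq` and the regularization value are hypothetical placeholders.

```python
def train_readout(states, y_seq, ridge=1e-6):
    """Fit a linear readout: solve (X^T X + ridge*I) w = X^T y in closed form."""
    X = np.asarray(states)
    y = np.asarray(y_seq)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

# Hypothetical usage: classify each time step of an input sequence into +/-1.
# states = run_reservoir(u_seq)          # transform into high-dimensional feature space
# w_out = train_readout(states, y_seq)   # cheap linear regression
# predictions = np.sign(states @ w_out)  # separating hyperplane
```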


Generally, a reservoir system must provide three key features: First, it has to transform the input signal nonlinearly. Second, the reservoir must provide a fading memory, which allows signals provided at different times to intermix while the influence of old inputs gradually decays. Third, the transformation of the signals must operate close to the bifurcation point while remaining robust against noise.
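
The fading-memory requirement can be illustrated with the `run_reservoir` sketch above (pulse position and sequence length are arbitrary): a single input pulse perturbs the reservoir state, the perturbation lingers long enough to mix with later inputs, and it eventually dies out.

```python
# Fading memory, illustrated with the run_reservoir sketch above.
pulse = np.zeros(200)
pulse[10] = 1.0                      # a single, isolated input pulse
states = run_reservoir(pulse)
energy = np.linalg.norm(states, axis=1)

print(energy[9])    # ~0: nothing has happened yet
print(energy[11])   # large: the pulse has excited the reservoir
print(energy[60])   # small again: the memory of the pulse fades away
```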

Computing in the optical domain

As these requirements can be mapped directly onto hardware, various concepts relying on different physical effects, including mechanics, electronics, and optics, have been realized. At IBM Research – Zurich, we are developing new integrated photonic circuits to create ultrafast reservoir computing systems. We are extending early concepts of silicon photonic delay lines [5] by embedding non-volatile optical synapses, nonlinear optical elements [6], and optical amplifiers [7] into hardware structures. These measures will allow us to increase the performance and network size beyond those of state-of-the-art photonic reservoir systems.


References

[1] Merolla, P. et al. “A million spiking-neuron integrated circuit with a scalable communication network and interface,” Science 345, 668–673 (2014).

[2] Gokmen, T., Vlasov, Y. “Acceleration of deep neural network training with resistive cross-point devices: Design considerations,” Front. Neurosci. 10, 1–13 (2016).

[3] Jaeger, H., Haas, H. “Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication,” Science 304, 78–80 (2004).

[4] Maass, W., Natschläger, T., Markram, H. “Real-time computing without stable states: a new framework for neural computation based on perturbations,” Neural Comput. 14, 2531–2560 (2002).

[5] Vandoorne, K. et al. “Experimental demonstration of reservoir computing on a silicon photonics chip,” Nat. Commun. 5, 3541 (2014).

[6] Abel, S. et al. “A hybrid barium titanate–silicon photonics platform for ultraefficient electro-optic tuning,” J. Light. Technol. 34, 1688–1693 (2016).

[7] Hofrichter, J. et al. “A mode-engineered hybrid III-V-on-silicon photodetector,” Proc. European Conference on Optical Communication (ECOC), 1–3 (2015).

Ask the experts

Stefan Abel

IBM Research scientist

Jean Fompeyrine

IBM Research scientist



EU project


PHRESCO

PHotonic REServoir COmputing