During inference in ANNs/CNNs, data flows in a single direction, from input to output. Recurrent neural networks, in contrast, contain internal feedback paths. Although this feedback helps create more efficient and compact networks, such networks are harder to train because the training algorithms are more complex and their convergence is less stable.
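To make the contrast concrete, the sketch below compares a single feedforward pass with a recurrent update that feeds the previous state back in. It is a hypothetical NumPy example; the weights, dimensions, and tanh nonlinearity are illustrative assumptions, not a description of any specific network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5

# Feedforward pass: the output depends only on the current input.
W_ff = rng.normal(size=(n_hidden, n_in))
x_t = rng.normal(size=n_in)
h_ff = np.tanh(W_ff @ x_t)

# Recurrent update: an internal feedback path mixes the previous state
# back in, so the state depends on the entire input history.
W_in = rng.normal(size=(n_hidden, n_in))
W_rec = rng.normal(size=(n_hidden, n_hidden))
h_prev = np.zeros(n_hidden)
h_t = np.tanh(W_in @ x_t + W_rec @ h_prev)
```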
A special case of recurrent networks is reservoir computing, in which only the synaptic weights of the output layer are trained; the recurrent part of the network, the reservoir, remains fixed. This makes training far more efficient at the expense of some computational flexibility. The feedback paths of the reservoir induce temporal dependencies that are governed by its physical properties, and reservoirs have been built from EM wave interference and other high-dimensional physical systems.
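A minimal sketch of this idea, in the echo-state-network style, follows. The reservoir size, spectral-radius scaling, and leak rate are illustrative assumptions rather than values from any particular physical reservoir.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 200

# Fixed, untrained reservoir: random input and recurrent weights.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W_res = rng.normal(size=(n_res, n_res))

# Rescale so the spectral radius is below 1, a common heuristic for
# stable temporal dynamics (the "echo state property").
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))

def run_reservoir(inputs, leak=0.3):
    """Drive the fixed reservoir with an input sequence; collect the states."""
    h = np.zeros(n_res)
    states = []
    for x in inputs:
        h = (1 - leak) * h + leak * np.tanh(W_in @ np.atleast_1d(x) + W_res @ h)
        states.append(h.copy())
    return np.array(states)

states = run_reservoir(np.sin(0.1 * np.arange(100)))  # (100, n_res) trajectory
```

Only the linear map from these reservoir states to the output is ever trained; the random matrices above stay untouched.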
Recurrent systems, and reservoir systems in particular, have shown great promise in processing time series (audio/speech, financial data, etc.). Training of the output layer can be accelerated in the same way as for ANNs/CNNs, and we are also working to map the entire recurrent layer, the reservoir, onto different physical systems suited to various applications.
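As a hedged illustration of readout-only training, the sketch below fits a linear readout by ridge regression to predict the next sample of a sine wave from the reservoir state. The task, washout length, and regularization constant are assumptions chosen for the example, and the reservoir setup from the previous sketch is repeated so the code runs on its own.

```python
import numpy as np

rng = np.random.default_rng(2)
n_res = 200

# Fixed random reservoir (as in the sketch above), driven by a sine wave.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))

u = np.sin(0.1 * np.arange(2000))           # input time series
h = np.zeros(n_res)
states = []
for x in u:
    h = 0.7 * h + 0.3 * np.tanh(W_in[:, 0] * x + W_res @ h)
    states.append(h.copy())
H = np.array(states)

# Train only the linear readout: ridge regression from reservoir states
# to the next input sample (one-step-ahead prediction).
washout = 100                               # discard the initial transient
X, y = H[washout:-1], u[washout + 1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out                            # readout prediction
print("train MSE:", np.mean((pred - y) ** 2))
```

Because the readout is linear, this training step reduces to solving one regularized least-squares problem, which is why it is so much cheaper than backpropagating through the recurrent dynamics.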