Biologically-inspired online training algorithms with neuromorphic hardware
Artificial Neural Networks (ANNs) are at the heart of today's artificial intelligence systems, which have demonstrated outstanding and even super-human performance. These networks were initially inspired by the operating principles of the human brain and have since evolved in parallel with further research in neuroscience. Meanwhile, the neuroscience community has developed the Spiking Neural Network (SNN) model, which incorporates more findings from biology, such as biologically realistic temporal dynamics. Despite their great achievements, ANNs remain only remotely connected to these findings and to the powerful human brain. This leads to several drawbacks, including the inability to learn online from incoming sensor data streams in a low-power setting, which limits their applicability to battery-powered devices at the edge. One promising research avenue for closing the gap between ANNs and their biological counterparts is to incorporate findings from neuroscience. Moreover, there have been many developments in hardware accelerators, including neuromorphic hardware based on resistive memory technologies, which enable more efficient implementations of SNNs.
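To illustrate the temporal dynamics that distinguish SNNs from conventional ANNs, the following is a minimal NumPy sketch of a leaky integrate-and-fire (LIF) neuron layer. The function name, decay factor, and soft-reset scheme are illustrative choices, not a specification of the models used in this project.

```python
import numpy as np

def lif_step(v, x, w, alpha=0.9, v_th=1.0):
    """One time step of a leaky integrate-and-fire (LIF) layer.

    v: membrane potentials, x: input spikes, w: input weights.
    alpha is the membrane decay factor, v_th the firing threshold.
    """
    v = alpha * v + x @ w          # leaky integration of weighted input
    s = (v >= v_th).astype(float)  # emit a spike where the threshold is crossed
    v = v - s * v_th               # soft reset after spiking
    return v, s

# Drive 3 LIF neurons with random input spikes for 10 time steps
rng = np.random.default_rng(0)
w = rng.normal(0, 0.5, size=(5, 3))
v = np.zeros(3)
spikes = []
for _ in range(10):
    x = (rng.random(5) < 0.3).astype(float)
    v, s = lif_step(v, x, w)
    spikes.append(s)
print(np.stack(spikes).shape)  # (10, 3)
```

The state variable v carries information across time steps, which is the key difference from a stateless ANN layer and the reason SNNs can process temporal data streams natively.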
The aim of this project is to combine insights from neuroscience with advances in hardware accelerators to form a powerful system capable of solving challenging real-world tasks. The ultimate goal is to build a proof-of-concept system that demonstrates online learning on in-memory neuromorphic hardware. To do so, we will take advantage of Phase Change Memory (PCM) devices, one of the most mature resistive memory technologies, and leverage biologically plausible neural networks as well as online learning algorithms, such as e-prop or OSTL. Following up on our recent work, we will investigate methods to increase the energy efficiency as well as the performance of the employed SNNs and learning algorithms.
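The appeal of online learning rules such as e-prop and OSTL is that weight updates are computed forward in time from locally available quantities, without backpropagation through time. The sketch below shows the general flavor of such an eligibility-trace update; the trace form, surrogate derivative, and random stand-in learning signal are our simplifications for illustration, not the exact rules from the referenced papers.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_rec = 4, 3
w = rng.normal(0, 0.5, size=(n_in, n_rec))
alpha, lr = 0.9, 0.01
elig = np.zeros_like(w)   # one eligibility trace per synapse
v = np.zeros(n_rec)       # membrane potentials

for t in range(20):
    x = rng.random(n_in)                    # input at time t
    v = alpha * v + x @ w                   # leaky membrane dynamics
    psi = 1.0 / (1.0 + np.abs(v - 1.0))     # surrogate derivative near threshold
    elig = alpha * elig + np.outer(x, psi)  # local, online trace update
    L = rng.normal(0, 0.1, size=n_rec)      # stand-in for a learning signal
    w -= lr * L * elig                      # update applied at every step,
                                            # no backpropagation through time
print(w.shape)  # (4, 3)
```

Because the trace and the update depend only on the current step's quantities, memory cost stays constant in the sequence length, which is what makes such rules attractive for low-power online adaptation at the edge.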
We will also realize the network architecture with PCM devices using an in-memory computing neuromorphic hardware set-up developed at IBM in Zurich. This requires investigating the performance of the developed algorithms under hardware constraints. To demonstrate the performance of the developed networks and learning algorithms, regression or classification tasks will be used, with a focus on online adaptation.
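Typical constraints when mapping trained weights onto analog memory devices include a limited conductance range, limited programming resolution, and programming noise. A minimal sketch of how such constraints can be emulated in software follows; the function name, value range, level count, and noise model are assumptions for illustration, not parameters of the actual PCM hardware.

```python
import numpy as np

def program_to_pcm(w, n_levels=16, sigma=0.02, rng=None):
    """Emulate mapping software weights onto PCM conductances:
    clip to the device range, quantize to a limited number of
    levels, and add Gaussian programming noise."""
    rng = rng or np.random.default_rng()
    w = np.clip(w, -1.0, 1.0)                   # limited conductance range
    step = 2.0 / (n_levels - 1)
    w_q = np.round(w / step) * step             # limited programming resolution
    return w_q + rng.normal(0, sigma, w.shape)  # programming noise

w = np.linspace(-1.2, 1.2, 7)
w_pcm = program_to_pcm(w, rng=np.random.default_rng(0))
print(w_pcm.shape)  # (7,)
```

Evaluating networks and learning rules through such a device model before deployment is a common way to study robustness to hardware non-idealities.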
Requirements
- Strong programming skills in Python.
- Experience with the TensorFlow machine-learning framework.
- Analytical and problem-solving skills.
- Excellent communication and team skills.
IBM is committed to diversity at the workplace. With us you will find an open, multicultural environment. Excellent flexible working arrangements enable all genders to strike the desired balance between their professional development and their personal lives.
How to apply
If you are interested in this exciting position, please submit your most recent curriculum vitae, your diplomas, as well as a motivational letter.
For technical questions, please contact:
Dr. Stanislaw Wozniak firstname.lastname@example.org
Dr. Angeliki Pantazi email@example.com
- G. Bellec et al., A solution to the learning dilemma for recurrent networks of spiking neurons. Nat. Commun., 11(3625):1–15, July 2020.
- T. Bohnstingl et al., Biologically-inspired training of spiking recurrent neural networks with neuromorphic hardware. AICAS, May 2022.
- T. Bohnstingl et al., Online Spatio-Temporal Learning in Deep Neural Networks. IEEE Trans. Neural Networks Learn. Syst., pages 1–15, March 2022.
- R. Khaddam-Aljameh et al., HERMES Core – A 14nm CMOS and PCM-based In-Memory Compute Core using an array of 300ps/LSB Linearized CCO-based ADCs and local digital processing. In 2021 Symposium on VLSI Technology, pages 1–2. IEEE, June 2021.