On synaptic learning rules for spiking neurons - with Friedemann Zenke - #11
Description
Today’s AI is largely based on supervised learning of neural networks using the backpropagation-of-error synaptic learning rule. This rule relies on differentiating continuous activation functions and is therefore not directly applicable to spiking neurons. Today’s guest developed the SuperSpike algorithm to address this problem, and he has more recently developed a more biologically plausible learning rule based on self-supervised learning. We talk about both.
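As a rough illustration (not taken from the episode itself), here is a minimal sketch of the surrogate-gradient idea that approaches like SuperSpike build on: the spike keeps its hard, non-differentiable threshold on the forward pass, while the backward pass substitutes a smooth surrogate derivative so that gradients can flow. The fast-sigmoid surrogate shape and the steepness value below are assumptions chosen for illustration, not the exact method discussed in the episode.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a smooth surrogate derivative on the backward pass.

    Forward: emits a spike (1.0) when the membrane potential exceeds threshold.
    Backward: replaces the ill-defined Heaviside derivative with the derivative
    of a fast sigmoid, 1 / (beta * |u| + 1)^2 (an assumed surrogate shape).
    """
    beta = 10.0  # steepness of the surrogate; hypothetical value for illustration

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0.0).float()  # 1 if above threshold, else 0

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.beta * membrane_potential.abs() + 1.0) ** 2
        return grad_output * surrogate


# Usage: gradients now flow through the otherwise non-differentiable spike.
u = torch.randn(5, requires_grad=True)   # membrane potentials relative to threshold
spikes = SurrogateSpike.apply(u)
loss = spikes.sum()
loss.backward()
print(u.grad)                            # nonzero, thanks to the surrogate derivative
```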
More Episodes
In September, Paul Middlebrooks, producer of the podcast BrainInspired, and I both attended a neuro-AI workshop aboard a coastal liner cruising the Norwegian fjords. We decided to make two joint podcasts with some of the participants, in which we discuss the role of AI in neuroscience. In this second...
Published 10/11/24