Episodes
Today’s AI is largely based on supervised learning of neural networks using the backpropagation-of-error synaptic learning rule. This learning rule relies on differentiation of continuous activation functions and is thus not directly applicable to spiking neurons. Today’s guest has developed the algorithm SuperSpike to address this problem. He has also recently developed a more biologically plausible learning rule based on self-supervised learning. We talk about both.
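The core idea behind surrogate-gradient approaches such as SuperSpike can be sketched in a few lines: the spike nonlinearity is a Heaviside step whose derivative is zero almost everywhere, so in the backward pass it is replaced by a smooth surrogate. The fast-sigmoid surrogate and the toy one-step neuron below are illustrative choices of my own, not the algorithm's actual implementation.

```python
import numpy as np

def spike(u):
    # Spike nonlinearity: Heaviside step on the membrane potential u
    # (relative to threshold). Its true derivative is zero almost everywhere.
    return np.heaviside(u, 0.0)

def surrogate_grad(u, beta=1.0):
    # Smooth stand-in for the Heaviside derivative; this fast-sigmoid form
    # is similar in spirit to surrogates used for spiking networks.
    return 1.0 / (1.0 + beta * np.abs(u)) ** 2

# Toy one-step neuron: membrane potential u = w * x - threshold.
w, x, threshold = 0.5, 1.0, 1.0
u = w * x - threshold          # -0.5: below threshold, no spike

# Chain rule with the surrogate: d(spike)/dw ~ surrogate_grad(u) * du/dw.
# The true gradient would be zero here; the surrogate gives a usable signal.
grad_w = surrogate_grad(u) * x
```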
Published 04/27/24
Over the last ten years or so, the MindScope project at the Allen Institute in Seattle has pursued an industrial-lab-like approach to studying the mouse visual cortex in unprecedented detail using electrophysiology, optophysiology, optical imaging and electron microscopy. Together with collaborators at Allen, today’s guest has worked to integrate these data into a large-scale neural network model, and in the podcast he talks about their ambitious endeavor.
Published 03/30/24
Today’s guest is a pioneer in both computational neuroscience and artificial intelligence (AI) and has had a front-row seat during their development. His many contributions include, for example, the invention of the Boltzmann machine with Ackley and Hinton in the mid-1980s. In this “vintage” episode, recorded in late 2019, he describes the joint births of these adjacent scientific fields and outlines how they came about.
Published 03/16/24
Today’s guest has argued that the present dominant way of doing systems neuroscience in mammals (large-scale electric or optical recordings of neural activity combined with data analysis) will be inadequate for understanding how their brains work. Instead, he proposes to focus on the simple roundworm C. elegans, with only 302 neurons, and try to reverse engineer it by means of optical stimulation and recordings, and modern machine-learning techniques.
Published 03/02/24
Over the last decade topological analysis has been established as a new tool for the analysis of spiking data. Today’s guest has been a pioneer in adapting this mathematical technique for use in our field and explains concepts and example applications. We also talk about the so-called threshold-linear network model, a generalization of Hopfield networks exhibiting much richer dynamics, where Carina has done some exciting mathematical explorations.
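For concreteness, a threshold-linear network evolves as dx/dt = -x + [Wx + b]+, where [·]+ is the rectification nonlinearity. The minimal simulation below (with weights and inputs invented for illustration, not taken from the guest's work) shows one of the simplest behaviors these networks support, winner-take-all competition between two mutually inhibiting units.

```python
import numpy as np

def relu(v):
    # Threshold-linear (rectification) nonlinearity [v]+.
    return np.maximum(v, 0.0)

def simulate_tln(W, b, x0, dt=0.01, steps=2000):
    # Forward-Euler integration of dx/dt = -x + [W x + b]+,
    # the standard threshold-linear network dynamics.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + relu(W @ x + b))
    return x

# Two mutually inhibiting units with unequal external drive:
# the more strongly driven unit wins and silences the other.
W = np.array([[0.0, -0.8],
              [-0.8, 0.0]])
b = np.array([1.0, 0.5])
x_final = simulate_tln(W, b, x0=[0.1, 0.1])  # converges near [1, 0]
```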
Published 02/03/24
Not all interesting network activity occurs in cortex. Networks in the spinal cord, the long thin tubular structure extending downwards from the neck, are responsible for setting up the rhythmic motor activity needed for moving around. How do these so-called central pattern generators work? Today’s guest has, together with colleagues in Copenhagen, developed a neuron-based network theory for how these rhythmic oscillations may arise even without pacemaker neurons driving the collective.
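A classic toy illustration of rhythm generation without pacemaker cells is the half-center oscillator: two rate units coupled by reciprocal inhibition, each with slow adaptation, alternate rhythmically even though neither unit oscillates on its own. The sketch below is a generic textbook model with parameters chosen for illustration, not the Copenhagen group's specific theory.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def half_center(T=200.0, dt=0.01, I=1.0, w_inh=2.0,
                g_adapt=2.0, tau=1.0, tau_a=10.0):
    # Two mutually inhibiting rate units with slow adaptation variables.
    # Alternation emerges from the network interaction plus adaptation
    # ("release" mechanism); there is no intrinsic pacemaker.
    r = np.array([0.5, 0.0])   # slight asymmetry to break the tie
    a = np.zeros(2)
    trace = []
    for _ in range(int(T / dt)):
        inhibition = w_inh * r[::-1]          # each unit inhibited by the other
        r += dt / tau * (-r + relu(I - inhibition - g_adapt * a))
        a += dt / tau_a * (-a + r)            # slow activity-dependent adaptation
        trace.append(r.copy())
    return np.array(trace)

trace = half_center()
# Look at unit 0 in the second half of the run, after transients:
# it swings between active and fully suppressed phases.
r0_late = trace[len(trace) // 2:, 0]
```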
Published 01/06/24
We know a lot about how neurons in the primary visual cortex (V1) of mammals respond to visual stimuli. But how does the vast information contained in the spiking of millions of neurons in V1 give rise to our visual percepts? The guest’s theory is that V1 acts as a “saliency detector” directing the gaze to the most important object in the visual scene. Then V1, in collaboration with higher visual areas, determines what this object is in an iterative feedforward-feedback loop.
Published 12/09/23
A key goal of computational neuroscience is to build mathematical models linking single-neuron activity to systems-level activity. The guest has taken some bold steps in this direction by developing and exploring a multi-area model for the macaque visual cortex, and later also a model for the human cortex, using millions of simplified spiking neuron models.   We discuss the many design choices, the challenge of running the models, and what has been learned so far.
Published 11/18/23
It is widely thought that spikes (action potentials) are the main carrier of information in the brain. But what is the neural code, that is, what aspects of the spike trains carry the information? The detailed temporal structure or maybe only the average firing rate?  And is there information in the correlation between spike trains in populations of similar neurons?   The guest has thought about these and other coding questions throughout his career.
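The rate-versus-timing question can be made concrete with a toy example: two spike trains with identical average firing rates but very different temporal structure, distinguished here by the coefficient of variation (CV) of their inter-spike intervals. The spike times are invented for illustration and carry no claim about the guest's work.

```python
import numpy as np

# Two spike trains over 1 s with identical firing rates (10 Hz) but
# different temporal structure: one perfectly regular, one bursty.
regular = np.arange(0.05, 1.0, 0.1)                # evenly spaced spike times (s)
bursty = np.array([0.10, 0.11, 0.12, 0.13, 0.14,   # burst 1
                   0.60, 0.61, 0.62, 0.63, 0.64])  # burst 2

def rate(spikes, T=1.0):
    # Average firing rate: spike count divided by observation window.
    return len(spikes) / T

def cv_isi(spikes):
    # Coefficient of variation of inter-spike intervals:
    # ~0 for a regular train, large for bursty/irregular ones.
    isi = np.diff(spikes)
    return isi.std() / isi.mean()

# A pure rate code cannot tell these trains apart; a timing-sensitive
# statistic such as the ISI CV separates them immediately.
```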
Published 11/04/23
Starting from the pioneering work of Hodgkin, Huxley and Rall in the 1950s and 60s, we have a well-founded biophysics-based mathematical understanding of how neurons integrate signals from other neurons and generate action potentials. Today’s guest wrote the classic book “Biophysics of Computation” on the subject in 1998. We discuss its contents, what has changed in the last 25 years, and also touch on his other main research interest: consciousness.
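As a reminder of what this biophysical framework looks like in practice, here is a minimal forward-Euler integration of the standard Hodgkin-Huxley squid-axon model with textbook parameters. It is a sketch of the classic equations, not code from the book; with a suprathreshold current the model fires repetitively.

```python
import numpy as np

# Standard Hodgkin-Huxley squid-axon parameters.
# Units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2.
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent rate functions for the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate_hh(I_ext=10.0, T=50.0, dt=0.01):
    # Forward-Euler integration of the four coupled HH equations,
    # starting from the resting state near -65 mV.
    V, m, h, n = -65.0, 0.053, 0.596, 0.317
    trace = []
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)   # fast sodium current
        I_K = g_K * n**4 * (V - E_K)          # delayed-rectifier potassium
        I_L = g_L * (V - E_L)                 # passive leak
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        trace.append(V)
    return np.array(trace)

V = simulate_hh()
# Count action potentials as upward crossings of 0 mV.
spikes = int(np.sum((V[1:] >= 0.0) & (V[:-1] < 0.0)))
```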
Published 10/28/23
The book “Models of the Mind” published in 2021 gives an excellent popular account of the history and questions of interest in theoretical neuroscience. I could think of no other person more suitable to invite for the inaugural episode of the podcast than its author Grace Lindsay. In the podcast we discuss highlights from the book as well as recent developments and the future of our field.  
Published 10/13/23