Neuron Bursts Can Mimic Famous AI Learning Strategy

AI engineers solved the credit assignment problem for machines with a powerful algorithm called backpropagation, popularized in a 1986 paper by David Rumelhart, Geoffrey Hinton and Ronald Williams. It’s now the workhorse that powers learning in the most successful AI systems, known as deep neural networks, which have hidden layers of artificial “neurons” between their input and output layers. And now, in a paper published in Nature Neuroscience in May, scientists may finally have found an equivalent that could work in real time in living brains.
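To make the idea concrete, here is a minimal sketch of backpropagation on a toy problem (XOR): output error flows backward through a hidden layer to assign credit to every weight. The architecture, hyperparameters, and task are arbitrary illustrative choices of mine, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

initial_mse = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)

for _ in range(5000):
    # forward pass: activity flows input -> hidden -> output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: the output error is propagated back through
    # the network to assign credit (or blame) to every weight
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final_mse = np.mean((out - y) ** 2)
```

The hard part for biology is the backward pass: each synapse needs an error signal computed far downstream, which is exactly what the burst mechanism below is proposed to deliver.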

This is a really big deal. Credit assignment is a storied problem, and this advance might even lead to a Nobel Prize in time.

In the new model, the team treated neuron bursts as a third output signal: a stream of 1s so close together that it effectively becomes a 2. Rather than encoding anything about the external world, the 2 acts as a “teaching signal,” telling other neurons whether to strengthen or weaken their connections to each other based on the error accrued at the top of the circuit.
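A toy sketch of that idea: the postsynaptic neuron emits either a single spike (a 1) or a burst (a 2), and whether bursts occur more or less often than some baseline tells active synapses to strengthen or weaken. This is only my illustration of the concept, not the authors' actual model (which involves dendritic inputs and short-term plasticity); the baseline, learning rate, and hand-supplied error are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def burst_teaching_step(w, pre_spikes, error, lr=0.1, baseline=0.2):
    """One plasticity update. A top-of-circuit error shifts the
    probability that the postsynaptic neuron bursts (emits a 2)
    instead of firing a single spike (a 1)."""
    p_burst = np.clip(baseline + error, 0.0, 1.0)
    # sample the postsynaptic event: 2 = burst, 1 = single spike
    post = 2 if rng.random() < p_burst else 1
    # bursting above baseline potentiates synapses that were active;
    # single spikes (below-baseline bursting) depress them
    dw = lr * (float(post == 2) - baseline) * pre_spikes
    return w + dw, post

w = np.zeros(4)
pre = np.array([1.0, 0.0, 1.0, 0.0])   # which presynaptic inputs spiked
# a sustained positive error should strengthen the active synapses
for _ in range(200):
    w, post = burst_teaching_step(w, pre, error=0.6)
```

Note that the "information" channel (whether the neuron fired at all) and the "teaching" channel (single spike vs. burst) coexist in the same output stream, which is what lets the circuit learn in real time without pausing for a separate error-delivery phase.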

It seems like, if they've figured this out, it has implications for autism or ADHD. Perhaps in those cases the brain is getting too much or too little of this teaching signal.
