Biological to Artificial and Back: How a Core AI Algorithm May Work in the Brain

Blame is the main game when it comes to learning.

I know that sounds bizarre, but hear me out. Neural circuits made up of thousands of neurons, if not more, control every single one of your thoughts, reasonings, and behaviors. Take sewing a face mask as an example: somehow, a set of neurons has to link up in specific ways to make sure you don't poke your finger with a sharp needle. You'll fail at the beginning before getting better at protecting your hand while sewing uniform stitches with efficiency.

So the question is, of those neurons that eventually allow you to sew with ease, which ones, or which connections between which ones, were to blame initially for your injuries? Are those the same ones responsible for your eventual proficiency? How exactly does the brain learn through mistakes?

In a new paper, some of the brightest minds in AI, including Dr. Geoffrey Hinton, the godfather of deep learning, and folks at DeepMind, the poster child of neuro-AI crossovers, argue that ideas behind a core algorithm that drives deep learning also operate within the brain. The algorithm, called backpropagation, was the spark that fired up the current revolution of deep learning as the de facto machine learning behemoth. At its core, backprop is an extremely effective way to assign blame to connections in artificial neural networks and drive better learning outcomes. While there's no solid proof yet that the algorithm also operates in the brain, the authors laid out several ideas that neuroscientists could potentially test in living brain tissue.

It's a highly controversial idea, partly because it was brought up years ago by AI researchers and refuted by neuroscientists as biologically impossible. Yet recently, deep learning techniques and neuroscience principles have become increasingly entangled in a constructive feedback loop of ideas. As the authors argue, now may be a good time to revisit the possibility that backpropagation, the heart of deep learning, may also exist in some form in biological brains.

"We think that backprop offers a conceptual framework for understanding how the cortex learns, but many mysteries remain with regard to how the brain could approximate it," the authors conclude. If true, it means that our biological brains somehow came up with principles for designing artificial ones that, incredibly, loosely mirror evolution's slow sculpting of our own brains through genes. AI, the product of our brains, would amazingly become a way to understand a core mystery of how we learn.

The neuroscience dogma of learning in the brain is the idea that neurons that "fire together, wire together." In essence, during learning, neurons connect to each other through synapses into a network, which slowly refines itself and allows us to learn a task, like sewing a mask.

But how exactly does that work? A neural network is kind of like a democracy of individuals who are only in contact with their neighbors. Any single neuron only receives input from its upstream partners and passes information along to its downstream ones. In neuroscience parlance, how strong these connections are depends on synaptic weights: think of a weight as a firmer or looser handshake, a stronger or weaker transfer of information. A stronger synaptic weight isn't always better. The main point of learning is to somehow tune the weights of the entire population so that the outcome is the one we want, that is, stitching cloth rather than pricking your finger.
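To make "synaptic weight" concrete, here's a toy sketch in Python (the numbers are made up for illustration) of how a single artificial neuron combines its inputs: each upstream signal gets scaled by a weight, and the weighted sum decides how strongly the neuron fires.

```python
import numpy as np

# Made-up example: one neuron receiving signals from three upstream partners.
inputs = np.array([0.9, 0.2, 0.5])    # firing of upstream neurons
weights = np.array([0.1, 0.8, -0.4])  # synaptic weights: the "handshake" strength

# The neuron sums each input scaled by its weight...
drive = np.dot(weights, inputs)

# ...and a squashing function turns that drive into a firing level.
firing = 1.0 / (1.0 + np.exp(-drive))  # sigmoid
print(firing)  # learning means nudging the weights so this output is more useful
```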

Think of it as a voting scenario in which neurons are individual voters who are socially isolated and only in contact with their immediate neighbors. The community, as a whole, knows who they want to vote for. But then an opponent gets elected. So the question is, where did things go awry, and how can the network as a whole fix it?

It's obviously not a perfect analogy, but it does illustrate the problem of assigning blame. Neuroscientists generally agree that neural networks adjust the synaptic weights of their member neurons to push the outcome towards something better, a process we call learning. But in order to adjust weights, the network first has to know which connections to adjust.

Enter backpropagation. In deep learning, where multiple layers of artificial neurons connect to each other, the same blame problem exists. Back in 1986, Hinton and his colleagues David Rumelhart and Ronald Williams found that by observing how far a network's output misses the desired mark, it's possible to mathematically compute an error signal. This signal can then be passed back through the network's layers, with each layer receiving its own error signal derived from the layer above it. Hence the name backpropagation.

It's kind of like five people passing a basketball down a line, and the last throw misses. The coach, in this case backpropagation, starts from the final player, judges how likely it was his or her fault, and moves back down the line to figure out who needs adjustment. In an artificial neural network, adjustment means changing the synaptic weights.
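For readers who want to see the blame-passing in motion, here's a minimal NumPy sketch (a toy two-layer network; the names and numbers are mine, not the paper's). The key lines turn the output miss into a "delta" and pass it back through the weights to the layer below.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input
target = np.array([1.0])        # desired output
W1 = rng.normal(size=(4, 3))    # input -> hidden synaptic weights
W2 = rng.normal(size=(1, 4))    # hidden -> output synaptic weights

# Forward pass: information flows layer by layer.
h = sigmoid(W1 @ x)             # hidden layer activity
y = sigmoid(W2 @ h)             # network output

# How far did the output miss its mark?
error = y - target

# Backward pass: turn that miss into per-layer error signals ("deltas").
delta2 = error * y * (1 - y)            # blame at the output layer
delta1 = (W2.T @ delta2) * h * (1 - h)  # blame passed back to the hidden layer

# Each layer's gradient says how to adjust its own synaptic weights.
grad_W2 = np.outer(delta2, h)
grad_W1 = np.outer(delta1, x)
```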

The next step is for the network to run the same problem again. This time around, the ball goes in. That means whatever adjustments the backprop coach made worked. The network adopts the new synaptic weights, and the learning cycle continues.
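Reusing the variables from the sketch above, the full cycle the coach runs looks like this: forward pass, measure the miss, propagate the blame, adjust the weights, and try again. (The learning rate of 0.5 is an arbitrary choice for the toy example.)

```python
learning_rate = 0.5
for step in range(200):
    h = sigmoid(W1 @ x)                     # forward pass
    y = sigmoid(W2 @ h)
    error = y - target                      # how far off was the throw?

    delta2 = error * y * (1 - y)            # backpropagate the blame
    delta1 = (W2.T @ delta2) * h * (1 - h)

    W2 -= learning_rate * np.outer(delta2, h)  # adopt the new weights
    W1 -= learning_rate * np.outer(delta1, x)  # ...and go around again
```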

Sound like a logical way of learning? Totally! Backprop, in combination with other algorithms, has made deep learning the dominant technique behind facial recognition, language translation, and AI's wins against humans in Go and poker.

"The reality is that in deep neural networks, learning by following the gradient of a performance measure works really well," the authors said. Our only other example of efficient learning is our own brain. So is there any chance that the ideas behind backprop also exist in the brain?
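In modern practice, that "gradient following" is close to a one-liner, because frameworks compute the backprop pass automatically. Here's a tiny illustration using JAX (the model, data, and learning rate are placeholders of my choosing):

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    pred = jnp.tanh(x @ w)            # a tiny one-layer model
    return jnp.mean((pred - y) ** 2)  # the performance measure

x = jnp.ones((5, 3))                  # placeholder data
y = jnp.zeros(5)
w = jnp.array([0.1, -0.2, 0.3])

grads = jax.grad(loss)(w, x, y)       # backprop, done for us
w = w - 0.1 * grads                   # one step down the gradient
```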

Thirty years ago, the answer was a resounding "hell no." There are many reasons, but a main one is that artificial neural networks aren't set up the way biological ones are, and the way backprop works mathematically just can't translate literally to what we know about our own brains. For example, backprop requires an error signal to travel along the same paths as the initial feed-forward computation, that is, the information pathway that generated the result in the first place, but our brains aren't wired that way.

The algorithm also changes synaptic weights through a direct feedback signal. Biological neurons, in general, don't. They can change their connections through more input or through other types of regulation (hormones, chemical transmitters, and whatnot), but using the same physical branches and synapses for both forward and feedback signals, without getting them mixed up, was considered impossible. Add to that the fact that synapses are literally where our brains store data, and the problem becomes even more complicated.

The authors of the new paper have a rather elegant solution: don't take backprop literally, just adopt its main principles. Here are two examples.

One: while the brain can't physically use feedback signals to change its synaptic weights, we do know it uses other mechanisms to change its connections. Rather than an entire biological network using the final outcome to try to change synaptic weights at all levels, the authors argue, the brain could instead alter the ability of neurons to fire, and in turn locally change synaptic weights, so that the next time around you don't prick your finger. It may sound like nit-picking, but the shift turns something rather impossible in the brain into an idea that could work based on what we know about brain computations.
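One way to picture that idea (this is my own toy sketch, not the paper's algorithm): feedback doesn't carry an explicit error to the synapse; instead it nudges a neuron's activity toward a better value, and the synapse then strengthens or weakens using only quantities available locally, namely the presynaptic firing and the difference between the nudged and original activity.

```python
import numpy as np

learning_rate = 0.1
pre = np.array([0.8, 0.1, 0.6])  # upstream (presynaptic) firing
w = np.array([0.2, 0.5, -0.3])   # current synaptic weights

activity = np.dot(w, pre)        # the neuron's original activity
nudged = activity + 0.05         # feedback nudges the firing slightly "better"
                                 # (an arbitrary upward nudge for this sketch)

# Local rule: each synapse changes based only on its own input
# and the activity difference it can "see" at the neuron.
w += learning_rate * (nudged - activity) * pre
```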

As for the problem of neural branches supporting both feedforward computing signals and feedback adjustment signals, the authors argue that recent findings in neuroscience clearly show that neurons aren't a uniform blob when it comes to computation. Rather, they're divided into segments, with each compartment receiving different inputs and computing in slightly different ways. This means it's not crazy to hypothesize that neurons can simultaneously support and integrate multiple types of signals, including error signals, while maintaining their memory and computational prowess.
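A cartoon of that compartment idea (again a toy sketch of my own, loosely inspired by dendritic-error models in the literature, not a claim about the paper's exact scheme): feedforward input arrives at one compartment, feedback at another, and the feedback compartment gates how much the feedforward synapses change.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.4, 0.9])      # feedforward input (e.g., to basal dendrites)
fb = 0.3                      # feedback signal (e.g., to the apical tuft)
w_ff = np.array([0.5, -0.2])  # feedforward synaptic weights

basal = np.dot(w_ff, x)       # one compartment integrates the feedforward drive
rate = sigmoid(basal)         # ...and mostly sets the neuron's firing

# The apical compartment integrates feedback separately and
# modulates plasticity rather than directly driving the output.
dw = 0.1 * fb * x             # feedback-gated change to feedforward weights
w_ff += dw
```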

That's the simple distillation. Many more details are explained in the paper, which makes a good read. For now, the idea of backprop-like signals in the brain remains a conjecture; neuroscientists will have to carry out wet lab experiments to see if empirical data supports it. If the theory actually plays out in the brain, however, it's another layer, perhaps an extremely fundamental one, that links biological learning with AI. It would be a level of convergence previously unimaginable.

Image Credit: Gerd Altmann from Pixabay

