In the meantime, here's another photo of the hummer currently visiting Melissa Hahn's yard on the north shore of Long Island.
Let's resume our peek under the hood of the brain. In my last post I wrote about the crucial importance of Hebbian synapses. Each cortical neuron receives thousands of these synapses from its input cells, and in turn makes thousands of synapses on its targets. The strength of a synapse determines how much signal each input contributes to the overall output of a neuron. The strength of a synapse is in turn regulated by the sizes of the signals that reach it - the input ("presynaptic") and output ("postsynaptic") signals. Of course this creates something of a vicious circle: the stronger a synapse, the more that input contributes to the output, tending to further strengthen the synapse. But as long as some mechanism prevents all the synapses, or the overall output, from becoming too strong, this "Hebbian" rule leads to selective growth of some synapses at the expense of others, in a way that reflects the overall correlation structure of the input stream.
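For concreteness, here's a minimal numerical sketch of this selective growth (in Python; all the numbers are made up, and the explicit normalization step is just a stand-in for whatever mechanism the brain actually uses to keep the overall strength bounded):

```python
import numpy as np

# Toy Hebbian learning with normalization (illustrative only).
# Each synapse w[i] grows in proportion to the product of its own
# input x[i] and the neuron's output y; renormalizing afterwards
# keeps total strength bounded, so synapses driven by correlated
# inputs grow at the expense of the rest.

rng = np.random.default_rng(0)
n = 20
w = np.full(n, 1.0 / np.sqrt(n))      # start all synapses equal
eta = 0.01                            # learning rate

for step in range(10000):
    x = rng.normal(0.0, 1.0, n)       # independent background activity
    x[:5] += 2.0 * rng.normal()       # first 5 inputs share a common signal
    y = w @ x                         # postsynaptic output
    w += eta * y * x                  # Hebb: strengthen co-active synapses
    w /= np.linalg.norm(w)            # normalization prevents runaway growth

print(np.round(w, 2))  # strength concentrates on the 5 correlated inputs
```

After enough steps the weights pile up on the five correlated inputs: the neuron has, in this cartoon sense, discovered the correlation structure of its input stream.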
Ideally one wants particular neurons to adjust their synaptic "recipe" so that the output of a neuron tends to track not the raw sensory data, but one of the underlying "causes" of that data. The idea is that what we literally see (e.g. a particular set of pixel intensities) reflects the occurrence of certain objects in front of us - perhaps a cup, or a hand, or both. A "cup" neuron that fires whenever there's a cup would be very useful, because it could trigger a particular action (e.g. drinking). But because the sets of pixel levels produced by all possible images of cups are extremely variable, and overlap with those produced by hands, it's not possible to build a "cup" neuron in one step. It has to be done gradually, over many levels, by assembling more primitive features (lines, curves, handles etc). Hebbian synapses can slowly build such networks of cause-detecting neurons. Of course, what we are doing when we "understand" something is successfully identifying the causal structure, or "meaning", of our immediate or past experience. In the same way, we gradually learn to recognize and use a completely unfamiliar object, or even a scientific or legal concept.
Does the brain actually have Hebbian synapses, and how do they work? Recent research has revealed that almost all synapses are Hebbian. Each synapse contains specialized, intricate machinery that allows it to measure the voltages of both its input and output neurons and adjust its strength accordingly. If a synapse fails to strengthen over a long period, it is removed, and new ones, from alternative inputs, are created in its place. Describing this fascinating machinery would take too long here, but there is a master chemical, ionized calcium, that acts as the key signal that both input and output voltages are strong, and that ultimately triggers strengthening.
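In cartoon form (and this is a deliberate oversimplification on my part - the product rule and the thresholds below are stand-ins for the real biochemistry, not measurements), the calcium logic works something like this:

```python
# Cartoon of calcium-gated plasticity, not a biophysical model.
# Calcium entry requires both a presynaptic signal and postsynaptic
# depolarization, so the calcium level roughly tracks their product;
# only high calcium triggers strengthening. In some calcium-based
# models an intermediate level instead triggers weakening.

def update_strength(strength, pre, post,
                    ltp_threshold=0.6, ltd_threshold=0.3, step=0.05):
    calcium = pre * post                 # both voltages strong -> high calcium
    if calcium > ltp_threshold:
        return strength + step           # strengthen (long-term potentiation)
    if calcium > ltd_threshold:
        return strength - step           # intermediate calcium can weaken
    return strength                      # low calcium: no change
```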
With this background, we can finally look directly at my own scientific research. I am interested in a simple, rather obvious, but previously unaddressed question. Given that Hebbian synapses exist and play a central role in learning to understand the world, how accurate must they be? In particular, because there are a quadrillion of them, it must be quite difficult for the brain to ensure that the right ones get tweaked at the right time: the needle-in-a-billion-haystacks problem. At first glance the Hebbian rule seems to solve this problem: each synapse adjusts its own strength according to its own unique past history of coupled input and output voltages, WITHOUT AFFECTING THE STRENGTH OF ANY OF THE OTHER QUADRILLION SYNAPSES.
The problem is that synapses, as well as being Hebbian (individually adjusting their strength according to the past history of their conjoint pre- and postsynaptic voltages), must also do something else: they must rapidly communicate the signal of the presynaptic axon to the postsynaptic neuron. This means synapses have to do two contradictory things: communicate and be independent, spread and stick, be coupled and be isolated. My co-workers and I hypothesized that real biological Hebbian synapses (unlike those of theorists, modellers, and artificial intelligence experts) might not be able to accomplish these two contradictory tasks perfectly. In concrete terms, we think that the calcium master signal which tells a synapse to strengthen might very occasionally leak out of that synapse and affect some of its neighbors. Indeed, there is some, rather controversial, evidence that this can happen.
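To make the hypothesis concrete in update-rule terms, the leak might look something like this (the spillover fraction b and the nearest-neighbor geometry are modeling assumptions of mine, not measured quantities):

```python
import numpy as np

# Hebbian updating corrupted by crosstalk: a fraction b of each
# synapse's strengthening signal spills over to its two dendritic
# neighbors, so updates are no longer perfectly synapse-specific.
# b = 0 recovers the ideal, fully independent Hebb rule.

def leaky_hebb_update(w, x, y, eta=0.01, b=0.05):
    ideal = eta * y * x                                      # synapse-specific updates
    blurred = np.convolve(ideal, [b, 1 - 2 * b, b], "same")  # calcium leak to neighbors
    w = w + blurred
    return w / np.linalg.norm(w)                             # keep total strength bounded
```

Note that the kernel [b, 1 - 2b, b] sums to one, so the leak doesn't change the total amount of strengthening - it just delivers some of it to the wrong addresses.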
What this means is that the grand speculation I outlined about Hebbian synapses leading automatically to networks of intelligent neurons might collapse in practice. This would mean that a century of neuroscience "advances" would have been on the wrong track, and that something else must be going on in the brain (dancing flames? tiny ghosts? quantum microtubules?).
We set out to test this possibility, and see if the available theoretical models of self-organizing neural networks that can exhibit intelligent behavior (and that have recently started to achieve practical success, e.g. "Siri", "Watson", self-driving cars etc) would indeed collapse if their synapses were even very slightly imperfect.
To do this we made a simple computer model of a Hebbian neural network learning to extract "causes" from "observations". We modeled the simplest possible situation: a single neuron learning to extract a single cause from a stream of input data. This model is far simpler than those needed to learn to recognize objects, but we reasoned that if even the simplest, most robust models cannot work as advertised, more realistic and complicated models are unlikely to do better. I'll describe the results in more detail in the next post (and I'll also let you know about a new development in the lawsuit saga), but the sketch below shows the kind of setup I mean.
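Here is a toy version of the experiment - all the parameters are illustrative placeholders, not the ones from our actual models, and the crosstalk uses the same nearest-neighbor spillover as the sketch above:

```python
import numpy as np

# Toy version of the experiment (parameters are placeholders): a
# single linear neuron, trained by a normalized Hebb rule, tries to
# recover one hidden "cause" direction from noisy observations.

rng = np.random.default_rng(1)
n = 100
cause = rng.normal(0.0, 1.0, n)
cause /= np.linalg.norm(cause)          # the hidden direction to be learned

def train(b, steps=30000, eta=0.005):
    w = rng.normal(0.0, 0.1, n)
    w /= np.linalg.norm(w)
    for _ in range(steps):
        x = rng.normal() * cause + 0.3 * rng.normal(0.0, 1.0, n)  # cause + noise
        y = w @ x                                                 # neuron's output
        update = eta * y * x                                      # ideal Hebbian term
        update = np.convolve(update, [b, 1 - 2 * b, b], "same")   # crosstalk blur
        w = w + update
        w /= np.linalg.norm(w)
    return abs(w @ cause)               # 1.0 means the cause was found exactly

print(train(b=0.0))    # perfect, independent synapses
print(train(b=0.05))   # slightly leaky synapses: how much does learning degrade?
```

The interesting question is the last line: what happens to the alignment score as b creeps up from zero.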