About:
Exploring new approaches to machine-hosted neural-network simulation, and the science behind them.

Your moderator: John Repici
A programmer who is obsessed with giving experimenters a better environment for developing biologically guided neural network designs. Author of an introductory book on the subject, titled "Netlab Loligo: New Approaches to Neural Network Simulation". Book reviewers are needed! Can you help?
The Netlab development effort has led to a new method and device that produces learning factors for pre-synaptic neurons. The need to provide learning factors for pre-synaptic neurons was first addressed by backpropagation (Werbos, 1974). The new method differs from backpropagation in that its use is not restricted to feed-forward-only networks. This new learning algorithm and method, called Influence Learning, is described here and in other entries in this blog (see the Resources section below).
Influence Learning is based on a simple conjecture. It assumes that those forward neurons that are exercising the most influence over responses to the immediate situation will be more attractive to pre-synaptic neurons. That is, for the purpose of forming or strengthening connections, active pre-synaptic neurons will be most attracted to forward neurons that are exercising the most influence.
Perhaps the most important thing to understand about this process is that these determinations are based entirely on activities taking place while signals (stimuli) are propagating through the network. Unlike backpropagation, there is no need for an externally generated error signal to be pushed through the network in backwards order and with ever-diminishing magnitude.
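To make the contrast with backpropagation concrete, here is a minimal sketch of a forward-only, influence-guided weight update. This is not the patented Influence Learning method; the influence measure and the update rule below are illustrative assumptions only, chosen to show that everything can be computed from activity observed during normal forward propagation, with no backward error pass.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 4, 3
W = rng.normal(scale=0.5, size=(n_post, n_pre))  # pre -> post weights

def forward(x):
    # Ordinary forward propagation of a stimulus.
    return np.tanh(W @ x)

x = rng.random(n_pre)   # pre-synaptic activity (the stimulus)
y = forward(x)          # activity of the forward (post-synaptic) neurons

# Assumed influence measure: how strongly each forward neuron is
# driving activity right now (magnitude of its output).
influence = np.abs(y)

# Active pre-synaptic neurons strengthen their connections to the most
# influential forward neurons -- a Hebbian-like, forward-only update.
eta = 0.1
W += eta * np.outer(influence * np.sign(y), x)
```

Note that the update uses only quantities available during the forward pass (`x`, `y`), which is the property the text emphasizes: no externally calculated error signal is propagated backward, so arbitrary feedback topologies remain possible.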
Support In Biological Observations
While influence learning in artificial neural network simulations is new, it is based on biological observations and underpinnings from discoveries made over twenty years ago. One of the biological observations that led to the above speculation about attraction to the exercise of influence was discussed briefly in the book The Neuron: Cell and Molecular Biology.
An experiment described in that book shows what happens when the axon of a target neuron is cut (or pharmacologically blocked). In that experiment, the pre-synaptic connections to the target neuron began to retract after its axon was cut. That is, the axons making pre-synaptic connections to the target neuron retracted when it no longer made synaptic connections to its own post-synaptic neurons.
The book also described how, when the target neuron's axon was unblocked (or grew back), the axons from pre-synaptic neurons immediately began to reform and re-establish connections with the target. Based on these observations, the book offered the following possibility.
"...Maintenance of presynaptic inputs may depend on a post-synaptic factor that is transported from the terminal back toward the soma."
The following diagram depicts these observations schematically.
Figure 1 - Loss of presynaptic terminals after axotomy.
The Experiment As Depicted
The above image (Fig 1) shows what happens when the axon of a target neuron is cut. As shown, when its axon is cut or blocked, the axons of pre-synaptic neurons making contact with the target neuron begin to recede. When the target neuron's cut axon grows back, its pre-synaptic neurons begin to re-establish connections with it.
How This Relates To Influence Learning
This is one of the observations from the neurobiology field that led to the development of the influence learning algorithm.
The influence learning method models this return factor, and further hypothesizes that:
Whatever the pharmacological medium, the factor serves as an attractive agent to pre-synaptic neurons — The method posits that it is one of the underlying mechanisms employed by active pre-synaptic neurons (or more specifically, their growth cones). Pre-synaptic neurons use this mechanism for the purpose of dynamically determining what neurons to form or strengthen connections with. Connections may be excitatory or inhibitory.
The production or efficacy of the attractive agent is based, in part, on synaptic and post-synaptic signaling activity occurring at forward neurons — The amount of attraction (or repulsion) is directly related to the amount of activity being influenced in forward neurons by activity at the output of the target neuron.
Activity at attracted growth cones may also be important — Attractiveness may also be affected by activity on the axons of pre-synaptic (attracted) neurons.
The attractive agent is a product of normal signal propagation of stimuli and action potentials — As has been alluded to, the attraction of pre-synaptic neurons is based entirely on propagation of signals (called action potentials in the biological literature). This does away with the need to produce an externally calculated error signal and propagate it backward through the network. That, in turn, allows any feedback configuration to be used without restriction.
In the machine implementation of influence learning, these factors are reduced to practice as Influence Factors, with names like OI_E and OI_I.
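The text names the factors OI_E and OI_I but does not define them, so the sketch below is a hypothetical reading: it assumes they separate the influence a target neuron exercises through its excitatory connections from the influence it exercises through its inhibitory ones. The formulas are illustrative assumptions, not the patented definitions.

```python
import numpy as np

rng = np.random.default_rng(1)

# The target neuron's outgoing weights and the current activity of its
# forward (post-synaptic) neurons, observed during signal propagation.
w_out = rng.normal(size=8)
fwd_activity = rng.random(8)

# Signed contribution of the target neuron to each forward neuron.
contrib = w_out * fwd_activity

# Assumed split: OI_E aggregates influence exercised through excitatory
# connections, OI_I aggregates influence through inhibitory ones.
OI_E = contrib[contrib > 0].sum()
OI_I = -contrib[contrib < 0].sum()
```

Under this reading, a pre-synaptic growth cone could consult OI_E and OI_I to decide whether to form or strengthen an excitatory or an inhibitory connection to the target, again using only quantities produced by normal forward propagation.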
The specific biological mechanisms that produce the observed phenomenon are likely to be varied. In the experiment described above, it is not really known whether the connections from pre-synaptic neurons actually retracted, or whether the synapses remained but simply became inactive (latent) when the target neuron's axon was blocked. Likewise, the reformation of "connections" may be caused by existing synapses becoming stronger, or by axonal (or dendritic) pathfinding mechanisms. In fact, these two possibilities are not mutually exclusive, and it is very likely that both are employed in biological brains. The deciding factors may include, for example, the developmental phase of the organism, or the presence or absence of cyclic chemical factors.
What If?
This may be the germ of what humans recognize as a form of "influence" at higher levels. Influence Learning asserts that these factors allow pre-synaptic neurons to determine which forward neurons are exercising the most influence. They then use that determination to decide which forward neurons they are more likely to connect with. In some cases, however, these calculated factors may cause pre-synaptic neurons to be repulsed by the exercise of influence.
Assume that these added assertions are correct, and that these factors are a primary cause of attraction and aversion to influential forward activity. Is there any connection between this observed cellular-level mechanism and the macro-behavioral attraction to influence that can be observed in humans and other high-level organisms?
Resources
The Neuron: Cell and Molecular Biology
As stated, the biological observations are cited directly from the descriptions in this book. The schematic figure is mine, but it is based on a figure in the book (18-23 on page 502) which depicts a similar experiment. In the book, however, the experiment is illustrated with drawings of biological neurons, not schematics. The same figure and discussion can also be found in the first edition of the book (copyright 1991). I will ask for permission to use the book's figure, and will use it if permitted, but that's a long shot.
"...In one of many surprise findings, Northwestern University scientists have discovered that axons can operate in reverse: they can send signals to the cell body, too. . ."
— Oh, did I mention? The explanation in this blog entry is based on a discussion that can be found in a 20-year-old neurobiology textbook. Awkward!
Introducing Influence Learning
A quick overview of the new learning method and mechanism, focusing mostly on its advantages over existing methods. Some descriptive material is included.
Influence Learning Gets A Patent
An entry announcing the award of a patent for the learning method discussed here. Influence Learning is just a nickname I've given it, to avoid having to refer to it by the very long patent title.