About:
Exploring new approaches to machine-hosted
neural-network simulation, and the science
behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters
a better environment for developing biologically-guided
neural network designs. Author of
an introductory book on the subject titled:
"Netlab Loligo: New Approaches to Neural Network
Simulation". BOOK REVIEWERS ARE NEEDED!
Can you help?
A neural network innovation described in the book: Netlab Loligo has been awarded a patent (#7,904,398). Of the innovations described in the book, it is the second to receive letters patent (so far). The patent is titled:
“Artificial Synapse Component Using Multiple Distinct Learning Means With Distinct Predetermined Learning Acquisition Times”
Patent titles serve mainly as an aid for future patent searchers. The patented innovation, along with the underlying concepts and principles that led to it, is described and discussed in the book, where it is referred to simply as “Multitemporal Synapses.”
The primary advantage imparted by the innovation is that it gives adaptive systems a present moment in time. This allows them to quickly and intricately adapt to the detailed response needs of their present situation, without cluttering up long term memories with the minute details of those responses.
Multitemporal Synapses
This blog entry attempts to describe Multitemporal Synapses. When time permits, I will provide a new blog entry with a clearer explanation using book excerpts (P.S. see above entry). It will be geared specifically to laymen. If you are interested, please subscribe to the feed.
Influence Learning Gets A Patent
Influence-Based Learning was the first of Netlab's innovations to be granted a patent. This latest patent makes two (and counting, stay tuned).
The Netlab development effort has led to a new method and device that produces learning factors for pre-synaptic neurons. The need to provide such learning factors was first addressed by backpropagation (Werbos, 1974). The new method differs from backpropagation in that its use is not restricted to feed-forward-only networks. This new learning algorithm and method, called Influence Learning, is described here and in other entries in this blog (see the Resources section below).
Influence Learning is based on a simple conjecture. It assumes that those forward neurons that are exercising the most influence over responses to the immediate situation will be more attractive to pre-synaptic neurons. That is, for the purpose of forming or strengthening connections, active pre-synaptic neurons will be most attracted to forward neurons that are exercising the most influence.
Perhaps the most relevant thing to understand about this process is that these determinations are based entirely on activities taking place while signals (stimuli) are propagating through the network. Unlike backpropagation, there is no need for an externally generated error signal to be pushed through the network, in backwards order, and in ever-diminishing magnitudes.
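To make the idea concrete, here is a minimal sketch in Python. It is an illustration only: the function names, the activation rule, and the way "influence" is measured (each forward neuron's share of its layer's total activity) are all assumptions made for this sketch, not the patented algorithm. What it does show is the key property described above: learning factors are derived entirely from activity during the forward propagation of a stimulus, with no backward-propagated error signal.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 4, 3
weights = rng.uniform(0.1, 0.5, size=(n_pre, n_post))  # synaptic strengths

def forward(pre_activity, weights):
    """Simple weighted-sum activation for the forward (post-synaptic) layer."""
    return np.maximum(0.0, pre_activity @ weights)

def influence_update(pre_activity, weights, lr=0.05):
    """Strengthen connections from active pre-synaptic neurons toward the
    forward neurons currently exercising the most influence.

    'Influence' is approximated here as each forward neuron's share of the
    total activity in its layer -- an assumption for this sketch."""
    post_activity = forward(pre_activity, weights)
    total = post_activity.sum() or 1.0
    influence = post_activity / total
    # Active pre-synaptic neurons are attracted to influential forward
    # neurons: a Hebbian-style update, computed during the forward pass,
    # with no externally generated error signal pushed backwards.
    weights += lr * np.outer(pre_activity, influence)
    return weights

stimulus = np.array([1.0, 0.0, 1.0, 0.5])
weights = influence_update(stimulus, weights)
```

Note that only rows for active pre-synaptic neurons change: an inactive pre-synaptic neuron (zero activity) contributes nothing to the outer product, so its connections are left alone.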
Support In Biological Observations
While influence learning is new to artificial neural network simulations, it is based on biological observations and underpinnings from discoveries made over twenty years ago. One of the biological observations that led to the above conjecture about attraction to the exercise of influence was discussed briefly in the book The Neuron: Cell and Molecular Biology.
An experiment described in that book shows what happens when you cut (or pharmacologically block) the axon of a target neuron. In that experiment, the pre-synaptic connections to the target neuron began to retract after its axon was cut. That is, the axons making pre-synaptic connections to the modified neuron withdrew when it no longer made synaptic connections to its own post-synaptic neurons.
The book also described how, when the target neuron’s axon was unblocked (or grew back), the axons from pre-synaptic neurons immediately began to re-form and re-establish connections with the target. Based on these observations, the following possibility was asserted:
"...Maintenance of presynaptic inputs may depend on a post-synaptic factor that is transported from the terminal back toward the soma."
The following diagram depicts these observations schematically.
A set of constructs and methods introduced and described in the book: Netlab Loligo improves the ability of systems built with them to adapt to current short-term situations, and to learn from those short-term experiences over the long term.
A New Learning Theory That Predicts A “Present Moment”
How do we, as biological organisms, manage to keep so much finely detailed information in our brains about how to respond to any given situation? That is, how do we manage to keep countless tiny intricacies stored away in our “subconscious” ready to be called upon at just the right time, right when we need them in the present moment?
According to this theory of learning, the answer to that question is: We don't.
Instead, our long term connections—those that immediately drive our responses at all times—are only concerned with getting us started in any given “present.” Responses stored in long-term connections start us along a trajectory that makes it easier for us to learn whatever short-term, detailed responses are needed for any given detailed situation.
Connections that drive short-term responses, on the other hand, form spontaneously in-the-moment, and quickly adapt to whatever present situation we currently find ourselves in. Just as significantly, connections driving short-term responses tend to dissipate as quickly as they form. This theory essentially says that each connection in the brain that drives responses (physical or internal) includes multiple distinct connection strengths, which each increase and decrease at different rates of speed.
How It's Done
Multi-temporality is achieved in Netlab's simulation environment by providing multiple weights per connection point (i.e., synapse); these are referred to as Multitemporal[Note 1] synapses. Each of the multiple weights associated with a given synapse represents a connection strength, and can be set to acquire and retain its strength at a different rate from the others. The methods also specify Weight-To-Weight Learning, a means of teaching a given weight in the set using the values of other weights from the same connection. Together, these constructs provide all the functionality required to model the theory of learning discussed above.
Following is a graphic excerpted from the book: Netlab Loligo, which shows a neuron containing three different weights for each connection point. Each weight is given its own learning algorithm, with its own learning rate and forget rate.
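The mechanism just described can be sketched in a few lines of Python. This is an illustration only: the class name, the particular rate values, and the simple move-toward-signal learning rule are all assumptions made for this sketch, not the book's implementation. It shows a synapse holding three weights (fast, medium, slow), each acquiring and forgetting strength at its own rate, with weight-to-weight learning letting the slow (long-term) weight learn from the fast (in-the-moment) weight at the same connection.

```python
class MultitemporalSynapse:
    """One connection point holding multiple weights, each with its own
    learning rate and forget rate (fast, medium, slow)."""

    def __init__(self):
        self.rates = [(0.5, 0.4), (0.1, 0.05), (0.01, 0.001)]  # (learn, forget)
        self.weights = [0.0, 0.0, 0.0]

    def learn(self, signal):
        """Each weight moves toward the current signal at its own learn rate."""
        for i, (lr, _) in enumerate(self.rates):
            self.weights[i] += lr * (signal - self.weights[i])

    def forget(self):
        """Each weight decays toward zero at its own forget rate."""
        for i, (_, fr) in enumerate(self.rates):
            self.weights[i] *= (1.0 - fr)

    def weight_to_weight(self, src=0, dst=2, rate=0.05):
        """Weight-to-weight learning: teach the slow weight using the fast
        weight's current value at the same connection."""
        self.weights[dst] += rate * (self.weights[src] - self.weights[dst])

syn = MultitemporalSynapse()
for _ in range(10):          # a brief "present moment" of strong stimulus
    syn.learn(1.0)
    syn.weight_to_weight()   # the long-term weight slowly absorbs the episode
fast_after = syn.weights[0]  # fast weight is now near full strength
for _ in range(10):          # stimulus gone: short-term detail dissipates
    syn.forget()
```

After the forgetting phase, the fast weight has collapsed back toward zero while the slow weight retains a trace of the episode: the detailed in-the-moment response fades, but a long-term residue of the experience remains.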
It was the end of a tough, on-again off-again breakup, when Experimentalist and Theorist Scientist finally parted ways.
Experimentalist (Experimenter to his friends) remained deeply committed and humble throughout the whole ordeal.
If you asked, he would tell you, "I am nothing without Theorist."
Theorist, however, felt trapped and constrained. She increasingly saw Experimenter as holding her back; as being too rigid and set in his ways.
Where Are They Now?
Quite some time has passed now, since that fateful, final day. Experimentalist Scientist has not fared as well as Theorist. He spends many lonely nights in the lab, gathering information, letting it lead him where it will, to new experiments that answer more seemingly uninteresting questions. He is adrift, and no longer seems to have any sense of purpose. He has confided to me that he doesn't know what it's all for anymore, though he sometimes seeks emotional solace in the hospital district, helping sick people.
On a brighter note, Theorist seems to be handling the breakup much better. It probably doesn't hurt that she has clearly retained the richer, more stylish friends. Their accelerating lifestyles are colliding in so many different and quarky ways it makes your head feel like it's spinning on a string.
After the final break (and some speculate, even before it was over) Theorist began spending a great deal of time with Philosopher Theologian. The two of them have gotten along swimmingly together, and have even started doing that thing where they finish each other's sentences, a fact which causes no end of heartbreak to Experimenter and his friends.
Some relationships are just classics, like Bogey and Bacall. It doesn't matter if we have chosen Theorist's circle, or Experimenter's, we are all deeply saddened by the apparent hopelessness of ever seeing these two back together. When they were together, they were a towering beacon of clarity and a brilliant light shining in all of our lives. Now we are left with only the old familiar tower.
Remember the Moog synthesizer days? Just get the waveform and the attack/decay envelopes right and you could perfectly match the sound of any musical instrument.
Not quite. Each instrument was played in a particular way by a human, and those particular ways added slight imperfections in timing and tone. Music turned out to be as much about the dance performed to make it, as it was about the sounds that were made.
Stanford University School of Medicine has developed a relatively simple new imaging technique that provides a precise way to capture the synapses of a connectome, with pinpoint 3D positional accuracy and considerable contextual resolution.
Stanford has performed a study (see below) which, admittedly, was done primarily to showcase the new technique. That said, the study managed to produce a very impressive new find:
“In the course of the study, whose primary purpose was to showcase the new technique’s application to neuroscience, Smith and his colleagues discovered some novel, fine distinctions within a class of synapses previously assumed to be identical.”