About:
Exploring new approaches to machine-hosted neural-network simulation, and the science behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters
a better environment for developing biologically-guided
neural network designs. Author of
an introductory book on the subject titled:
"Netlab Loligo: New Approaches to Neural Network
Simulation". BOOK REVIEWERS ARE NEEDED!
Can you help?
Wednesday, January 26. 2011
A set of constructs and methods introduced and described in the book Netlab Loligo improves the ability of systems built with them to adapt to current, short-term situations, and to learn from those short-term experiences over the long term.
How do we, as biological organisms, manage to keep so much finely detailed information in our brains about how to respond to any given situation? That is, how do we manage to keep countless tiny intricacies stored away in our “subconscious” ready to be called upon at just the right time, right when we need them in the present moment?
According to this theory of learning, the answer to that question is: We don't.
Instead, our long-term connections—those that immediately drive our responses at all times—are only concerned with getting us started in any given “present.” Responses stored in long-term connections start us along a trajectory that makes it easier to learn whatever short-term, detailed responses are needed for any given detailed situation.
Connections that drive short-term responses, on the other hand, form spontaneously in the moment, and quickly adapt to whatever present situation we currently find ourselves in. Just as significantly, connections driving short-term responses tend to dissipate as quickly as they form. This theory essentially says that each connection in the brain that drives responses (physical or internal) includes multiple distinct connection strengths, each of which increases and decreases at its own rate.
Multi-temporality is achieved in Netlab's simulation environment by providing multiple weights per connection point (i.e., synapse); connection points so equipped are referred to as Multitemporal [Note 1] synapses. Each of the multiple weights associated with a given synapse represents a connection strength, and can be set to acquire and retain its strength at a different rate from the others. The methods also specify Weight-To-Weight Learning, which is a means of teaching a given weight in the set using the values of other weights from the same connection. Together, these constructs provide all the functionality required to model the theory of learning discussed above.
Following is a graphic excerpted from the book Netlab Loligo, which shows a neuron containing three different weights for each connection point. Each weight is given its own learning algorithm, with its own learning rate and forget rate.

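To make these ideas concrete, here is a minimal Python sketch (not Netlab's actual implementation) of a connection point holding three weights, each with its own learning rate and forget rate, along with a simple weight-to-weight step in which each slower weight learns from its faster neighbor. The class name, rate values, and combination rule are illustrative stand-ins only.

    class MultitemporalSynapse:
        """Toy sketch: one connection point holding three weights that
        acquire and lose strength at different speeds."""

        def __init__(self):
            # (learning rate, forget rate) per weight; illustrative values only
            self.rates = [(0.5, 0.1),       # fast: adapts in the moment, dissipates quickly
                          (0.05, 0.01),     # medium
                          (0.005, 0.0001)]  # slow: long-term, nearly permanent
            self.weights = [0.0, 0.0, 0.0]

        def learn(self, adjustment):
            """Apply one externally computed adjustment to every weight,
            each at its own speed, then let each weight decay at its own rate."""
            for i, (lr, fr) in enumerate(self.rates):
                self.weights[i] += lr * adjustment
                self.weights[i] *= (1.0 - fr)

        def weight_to_weight(self, rate=0.01):
            """Weight-to-weight learning (sketch): each slower weight is
            nudged toward the value of its faster neighbor, so short-term
            detail gradually informs long-term strength."""
            for slow in range(1, len(self.weights)):
                fast = slow - 1
                self.weights[slow] += rate * (self.weights[fast] - self.weights[slow])

        def strength(self):
            # One simple way to combine the multiple strengths into a single
            # signaling value; the book's combination rule may differ.
            return sum(self.weights)

    syn = MultitemporalSynapse()
    for _ in range(100):        # a burst of short-term experience
        syn.learn(1.0)
        syn.weight_to_weight()
    print(syn.weights)          # the fast weight saturates quickly; the slow one creeps up

In this toy version the fast weight plays the role of the in-the-moment connection, while the weight-to-weight step slowly consolidates what it captures into the longer-lived weights.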
Monday, January 17. 2011
"We've learned to fly the air as birds,
we've learned to swim the seas as fish,
yet we haven't learned to walk the Earth as brothers and sisters."
- Reverend Martin Luther King Jr.
Sunday, December 19. 2010
It was the end of a tough, on-again, off-again breakup when Experimentalist and Theorist Scientist finally parted ways.
Experimentalist (Experimenter to his friends) remained deeply committed and humble throughout the whole ordeal.
If you asked, he would tell you, "I am nothing without Theorist."
Theorist, however, felt trapped and constrained. She increasingly saw Experimenter as holding her back; as being too rigid and set in his ways.
Where Are They Now?
Quite some time has passed now since that fateful, final day. Experimentalist Scientist has not fared as well as Theorist. He spends many lonely nights in the lab, gathering information and letting it lead him where it will, to new experiments that answer ever more seemingly uninteresting questions. He is adrift, and no longer seems to have any sense of purpose. He has confided to me that he doesn't know what it's all for anymore, though he sometimes seeks emotional solace in the hospital district, helping sick people.
On a brighter note, Theorist seems to be handling the breakup much better. It probably doesn't hurt that she has clearly retained the richer, more stylish friends. Their accelerating lifestyles are colliding in so many different and quarky ways it makes your head feel like it's spinning on a string.
After the final break (and, some speculate, even before it was over), Theorist began spending a great deal of time with Philosopher Theologian. The two of them have gotten along swimmingly together, and have even started doing that thing where they finish each other's sentences, a fact which causes no end of heartbreak to Experimenter and his friends.
Some relationships are just classics, like Bogey and Bacall. It doesn't matter if we have chosen Theorist's circle, or Experimenter's, we are all deeply saddened by the apparent hopelessness of ever seeing these two back together. When they were together, they were a towering beacon of clarity and a brilliant light shining in all of our lives. Now we are left with only the old familiar tower.
Oh well.
Tuesday, November 30. 2010
Remember the Moog synthesizer days? Just get the waveform and the attack/decay envelopes right and you could perfectly match the sound of any musical instrument.
Not quite. Each instrument was played in a particular way by a human, and those particular ways added slight imperfections in timing and tone. Music turned out to be as much about the dance performed to make it as about the sounds that were made.
I wonder if this extends to language.
-djr
Wednesday, November 17. 2010
Stanford University School of Medicine has developed a relatively simple new imaging technique that captures the synapses of a connectome with pinpoint 3D positional accuracy and considerable contextual resolution.
Stanford has performed a study (quoted below), which was admittedly done primarily to showcase the new technique. That said, the study managed to produce a very impressive new find.
“In the course of the study, whose primary purpose was to showcase the new technique’s application to neuroscience, Smith and his colleagues discovered some novel, fine distinctions within a class of synapses previously assumed to be identical.”
Sunday, October 17. 2010
Influence Based Learning, one of two new learning methods described in the book Netlab Loligo, has just been awarded a United States Patent. The official title of the patent is:
“Feedback-Tolerant Method And Device Producing Weight-Adjustment Factors For Pre-Synaptic Neurons In Artificial Neural Networks”
The title is a mouthful, designed primarily to help future patent searchers determine whether their great idea has already been discovered and patented. The method is fully described and discussed in the book, where it is simply referred to as Influence Learning.
As the patent title expresses, one of the benefits it imparts over existing learning algorithms is that it is feedback-tolerant. It works fine with current-day feed-forward networks configured as "slabs," but it also allows connecting neurons back to pre-synaptic neurons. That is, it allows feedback, which means you no longer have to configure your network with "hidden layers" if you don't want to. You are free to use any connectome you'd like, as the sketch below illustrates.
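To make "any connectome you'd like" a little more concrete, here is a small hypothetical sketch (not Netlab's actual API) that describes a network as a plain directed graph and checks it for feedback loops. A feedback-tolerant learning method simply doesn't care what this check returns; a back-propagation-style trainer would have to reject any graph where it returns True.

    # Hypothetical network description: a plain directed graph.
    # With a feedback-tolerant learning rule, cycles are allowed,
    # so no layered "slab" structure is required.
    connections = {
        # pre-synaptic neuron -> post-synaptic neurons
        "A": ["B", "C"],
        "B": ["C", "A"],    # feedback: B projects back to A
        "C": ["C", "out"],  # even a self-loop is allowed
    }

    def has_feedback(conns):
        """Detect cycles with a depth-first search."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {n: WHITE for n in conns}

        def visit(n):
            color[n] = GRAY
            for m in conns.get(n, []):
                if color.get(m, WHITE) == GRAY:
                    return True   # back edge found: the graph has a loop
                if color.get(m, WHITE) == WHITE and visit(m):
                    return True
            color[n] = BLACK
            return False

        return any(visit(n) for n in conns if color[n] == WHITE)

    print(has_feedback(connections))  # True: this connectome contains loops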
Tuesday, September 7. 2010
Influence learning is one of two new learning algorithms that have emerged (so far) from the Netlab development effort. This blog entry contains a brief overview of how it works and some of the advantages it brings to the task of neural-network weight adjustment.
This learning method is based on the notion that—like their collective counterparts—neurons may be attracted to, and occasionally repulsed by, the exercise of influence by others. In the case of neurons, the "others" would be other neurons. As simple as that notion sounds, it produces a learning method with a number of interesting benefits and advantages over the current crop of learning algorithms.
A neuron using influence learning is not nosy, and does not concern itself with how its post-synaptic (forward) neurons are learning. It simply trusts that their job is to learn, and that they are doing their job. In other words, a given neuron fully expects and assumes that other neurons within the system are learning. Each one treats the post-synaptic neurons that are exercising the most influence as role models for adjusting connection strengths. The norm is for neurons to see influential forward neurons as positive role models, but they may also see them as negative role models.
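To make the notion concrete, here is a greatly simplified toy sketch in Python. It leaves out nearly everything that makes the full method work (the precise influence measure, normalization, timing details, and so on), and its names and formulas are illustrative stand-ins rather than the patented rule itself.

    def influence(activation, fan_out_weights):
        """One toy measure of how strongly a post-synaptic neuron is
        currently exerting influence on the neurons *it* drives."""
        return abs(activation) * sum(abs(w) for w in fan_out_weights)

    def adjust_weights(pre_activation, weights_to_post, post_influences,
                       rate=0.1, role=+1):
        """Shift each connection strength in proportion to how influential
        its post-synaptic neuron is. role=+1 treats influential neurons as
        positive role models; role=-1 treats them as negative ones. Note
        that no error term is fed back from an output layer, which is why
        feedback loops pose no problem for a rule shaped like this."""
        total = sum(post_influences) or 1.0
        return [w + role * rate * pre_activation * (infl / total)
                for w, infl in zip(weights_to_post, post_influences)]

    # A neuron firing at 0.8, connected to three post-synaptic neurons:
    post_infl = [influence(0.9, [0.5, 1.2]),  # very influential
                 influence(0.2, [0.1]),       # barely influential
                 influence(0.6, [0.7, 0.3])]
    print(adjust_weights(0.8, [0.4, 0.4, 0.4], post_infl))

The connection leading to the most influential neuron receives the largest share of the adjustment, which is the "role model" idea in its crudest form.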
As you might guess, the first benefit is simplicity. The method does not try to hide a lack of new ideas behind a wall of new computational complexity. It is a simple new method based on a simple, almost axiomatic observation, and it can be implemented with relatively little computational power.
Influence Learning is completely free of feedback restrictions. That is, network connection structures may be designed with any type or amount of feedback looping. The learning mechanism will continue to adapt connection strengths properly, regardless of how complex the feedback scheme is. The types of feedback designers are free to employ include servo feedback, which places the outside world (or some network structure that is closer to the outside world) directly in the signaling feedback path.
This type of "servo feedback" is shown graphically in figure 6-5 of the book, which has been reproduced here.

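As a toy illustration of what it means for the outside world to sit directly in the feedback path, consider the following sketch (illustrative only, and not taken from the book): a single "neuron" drives an actuator, the world changes as a result, and the sensed state of the world comes back as the neuron's next input.

    # Toy servo-feedback loop: the world itself closes the signal path.
    world_state = 0.0   # something out in the world the network acts on
    target = 1.0        # where we would like the world to end up
    w = 0.6             # the neuron's single connection strength

    for step in range(10):
        sensed_error = target - world_state  # input arrives from the world
        output = w * sensed_error            # the network's response
        world_state += 0.5 * output          # acting changes the world...
        # ...and the changed world is the feedback signal next time around.
        print(f"step {step}: world_state = {world_state:.3f}")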