About:
Exploring new approaches to machine-hosted neural-network simulation, and the science behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters
a better environment for developing biologically-guided
neural network designs. Author of
an introductory book on the subject titled:
"Netlab Loligo: New Approaches to Neural Network
Simulation". BOOK REVIEWERS ARE NEEDED!
Can you help?
It's called the fifth printing, but that is a bit of a misnomer. These days the actual physical printings are done in small runs in multiples of six. It is really more like the fifth “edit.”
In practice, corrections have been made to each new edit while its previous edit was running. After five edits, there have been hundreds of corrections. The issues addressed have included a lot of out-of-place commas, over-used hyphens, typos, and more grammar errors than I'd like to admit.
How do you know you've got the fifth (or better) printing?
Easy. Look at the bar code on the back cover. If the price code (the smaller bar code, to the right of the ISBN) says “90000” (no price specified), you have an older copy. If it reads “54795” (USD $47.95), you have the fifth or better edit.
In the book, Netlab Loligo, repeated calls are made for true random number generators (TRNGs) to be included in all CPUs, or at least in those intended for use in neural network applications. Naturally, I was very excited to see a headline about Intel having developed one with general-purpose use in mind.
Intel's Low-Power “True” Random Number Generator
IEEE has an article about a new “true” random number generator from Intel that has been 10 years in development. Its primary advantage is that, while it is a true RNG, it operates entirely in the digital domain, using digital devices to obtain its randomness from the hardware. The slow, energy-hogging analog technology normally needed to glean randomness from quantum phenomena has been eliminated. It has a few quirks, such as the need to force the outputs of its two mutex inverters high, and the seemingly unavoidable need to compensate using averaging techniques. I expand just a little on these quirks below.
In the spirit of not critiquing something without also offering at least a sincere attempt at a solution, I've put forward a quick (if dirty) attempt at an “all logic gates” DTRNG (Digital True Random Number Generator) below. Only the equations were scratched out at the IEEE blog; I've since produced a circuit diagram graphic, which is included here as well.
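For anyone who wants to play with the “averaging”/whitening idea in software first, here is a toy sketch of von Neumann debiasing, one classic way to compensate a biased raw bit stream, assuming successive raw bits are independent. It is only an illustration of the compensation concept; it is not Intel's conditioning logic, and it is not the DTRNG circuit described above. The 60/40 bias and all names are made up for the example.

    import random

    def biased_source(p_one=0.6):
        """Stand-in for a raw hardware entropy source with a known bias
        (the 60/40 split is purely illustrative)."""
        while True:
            yield 1 if random.random() < p_one else 0

    def von_neumann_debias(bits):
        """Consume raw bits in pairs: 01 -> 0, 10 -> 1, 00 and 11 are
        discarded. If successive raw bits are independent, the surviving
        bits are unbiased, at the cost of throwing most of them away."""
        it = iter(bits)
        for a, b in zip(it, it):
            if a != b:
                yield a

    if __name__ == "__main__":
        debiased = von_neumann_debias(biased_source())
        sample = [next(debiased) for _ in range(10000)]
        print("fraction of ones:", sum(sample) / len(sample))  # ~0.5

Hardware designs tend to prefer cheaper whitening (XOR trees, LFSRs, or cryptographic conditioning) because von Neumann pairing discards so much of the raw stream, but the sketch captures the basic compensation idea.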
From the press release:
“Once infected by spores, the worker ants ... leave the nest, find a small shrub and start climbing. The fungi directs all ants to the same kind of leaf: about 25 centimeters [(9.8 inches)] above the ground and at a precise angle to the sun (though the favored angle varies between fungi). How the fungi do this is a mystery.”
A neural network innovation described in the book Netlab Loligo has been awarded a patent (#7,904,398). Of the innovations described in the book, it is the second to receive letters patent (so far). The patent is titled:
“Artificial Synapse Component Using Multiple Distinct Learning Means With Distinct Predetermined Learning Acquisition Times”
Patent titles serve mainly as an aid for future patent searchers. The patented innovation, along with the underlying concepts and principles that led to it, is described and discussed in the book, where it is referred to simply as “Multitemporal Synapses.”
The primary advantage imparted by the innovation is that it gives adaptive systems a present moment in time. This allows them to quickly and intricately adapt to the detailed response needs of their present situation, without cluttering up long-term memories with the minute details of those responses.
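To make the idea a little more concrete, here is a deliberately over-simplified toy sketch. It is not the mechanism claimed in the patent; it only illustrates the general notion of a synapse holding two weight components with distinct acquisition times, one fast and quickly decaying, one slow and persistent. The class name, constants, and update rules are arbitrary choices for the example.

    class MultitemporalSynapseToy:
        """Toy only: two weight components with distinct acquisition times.
        The effective weight is their sum; the fast component decays, so
        moment-to-moment detail never accumulates in the slow component."""

        def __init__(self, fast_rate=0.5, slow_rate=0.01, fast_decay=0.2):
            self.w_fast = 0.0          # short acquisition time, short retention
            self.w_slow = 0.0          # long acquisition time, long retention
            self.fast_rate = fast_rate
            self.slow_rate = slow_rate
            self.fast_decay = fast_decay

        @property
        def weight(self):
            # The effective synaptic weight seen by the neuron.
            return self.w_fast + self.w_slow

        def learn(self, learning_signal):
            # Both components see the same signal, acquired at different rates.
            self.w_fast += self.fast_rate * learning_signal
            self.w_slow += self.slow_rate * learning_signal

        def step(self):
            # Once per simulation step: the fast component relaxes toward zero.
            self.w_fast *= (1.0 - self.fast_decay)

In a simulation, the fast component is what lets the network respond to the present situation, while only the slow component survives to become long-term memory.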
Multitemporal Synapses
This blog entry tries to describe Multitemporal Synapses. When time permits, I will try to provide a new blog entry with a clearer explanation using book excerpts (P.S. see the above entry). It will be specifically geared to laymen. If you are interested, please subscribe to the feed.
Influence Learning Gets A Patent
Influence Based Learning was the first of Netlab's innovations to be granted a patent. This latest patent makes two (and counting, stay tuned).
The Netlab development effort has led to a new method and device that produces learning factors for pre-synaptic neurons. The need to provide learning factors for pre-synaptic neurons was first addressed by backpropagation (Werbos, 1974). The new method differs from backpropagation in that its use is not restricted to feed-forward-only networks. This new learning algorithm and method, called Influence Learning, is described here and in other entries in this blog (see the Resources section below).
Influence Learning is based on a simple conjecture. It assumes that those forward neurons that are exercising the most influence over responses to the immediate situation will be more attractive to pre-synaptic neurons. That is, for the purpose of forming or strengthening connections, active pre-synaptic neurons will be most attracted to forward neurons that are exercising the most influence.
Perhaps the most relevant thing to understand about this process is that these determinations are based entirely on activities taking place while signals (stimuli) are propagating through the network. Unlike backpropagation, there is no need for an externally generated error signal to be pushed through the network, in backwards order, and in ever-diminishing magnitudes.
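The exact influence measure and connection-forming rules are spelled out in the book and the patent; the sketch below is only a rough illustration of the overall shape of a forward-pass-only update. The proxy used for influence here (a forward neuron's activity weighted by the magnitude of its outgoing connections) is an assumption made for the example, as are the function name, shapes, and learning rate.

    import numpy as np

    def influence_update(pre_activity, post_activity,
                         w_pre_to_post, w_post_out, rate=0.05):
        """Illustrative sketch only (the real rule differs in its details).

        pre_activity  : (P,)   activities of pre-synaptic neurons
        post_activity : (Q,)   activities of forward (post-synaptic) neurons
        w_pre_to_post : (P, Q) weights from pre- to post-synaptic neurons
        w_post_out    : (Q, R) the forward neurons' own outgoing weights
        """
        # Assumed proxy: a forward neuron exercises more influence when it is
        # active and drives its own downstream targets strongly.
        influence = post_activity * np.abs(w_post_out).sum(axis=1)

        # Active pre-synaptic neurons strengthen their connections toward the
        # most influential forward neurons. Everything here is computed during
        # the forward pass; no error signal is propagated backwards.
        w_pre_to_post += rate * np.outer(pre_activity, influence)
        return w_pre_to_post

One call per stimulus presentation, right after the normal forward pass has produced the activities, is all the sketch needs; nothing has to run in reverse order through the layers.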
Support In Biological Observations
While influence learning in artificial neural network simulations is new, it is based on biological observations and underpinnings from discoveries made over twenty years ago. One of the biological observations that led to the above speculation about attraction to the exercise of influence was discussed briefly in the book The Neuron: Cell and Molecular Biology.
An experiment described in that book shows what happens when you cut (or pharmacologically block) the axon of a target neuron. In that experiment, the pre-synaptic connections to the target neuron began to retract after its axon was cut. That is, the axons making pre-synaptic connections to the target neuron withdrew when it no longer made synaptic connections to its own post-synaptic neurons.
The book also described how, when the target neuron’s axon was unblocked (or grew back), the axons from pre-synaptic neurons immediately began to re-form and re-establish connections with the target. Based on these observations, the following possibility was asserted.
"...Maintenance of presynaptic inputs may depend on a post-synaptic factor that is transported from the terminal back toward the soma."
The following diagram depicts these observations schematically.