About:
Exploring new approaches to machine-hosted
neural-network simulation, and the science
behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters
a better environment for developing biologically-guided
neural network designs. Author of
an introductory book on the subject titled:
"Netlab Loligo: New Approaches to Neural Network
Simulation". BOOK REVIEWERS ARE NEEDED!
Can you help?
Scientists at UC Berkeley have taken brain scans of subjects in an fMRI machine while they watched a movie clip. They then reconstructed the movie the subjects were watching using only the brain-scan data and a database of 18 million seconds of random video gleaned from the web.
First, they used fMRI imaging to measure brain activity in visual cortex as a person looked at several hours of movies. They then used those data to develop computational models that could predict the pattern of brain activity that would be elicited by any arbitrary movies (i.e., movies that were not in the initial set). Next, they used fMRI to measure brain activity elicited by a second set of movies that were also distinct from the first set. Finally, they used the computational models to process the elicited brain activity, and reconstruct the movies in the second set.
The amount of new understanding this could allow us to gather about mind-brain correlates and first person knowledge should be considerable. If this lives up to the hype, a lot of new research ideas should come out of it. Keeping fingers crossed here.
In the above clip, the movie that each subject viewed while in the fMRI is shown in the upper-left position. Reconstructions for three subjects are shown in the three rows at the bottom. All of these reconstructions were obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. The reconstruction in the far-left column is the Average High Posterior (AHP). The reconstruction in the second column is the Maximum a Posteriori (MAP). The other columns represent less likely reconstructions. The AHP is obtained by simply averaging the 100 most likely clips in the reconstruction library. These reconstructions show that the process is very consistent, though the quality of the reconstructions does depend somewhat on the quality of the brain-activity data recorded from each subject. [source: Gallant Lab (see resources below)]
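The two reconstruction rules described above are simple enough to sketch in a few lines. The sketch below is my own illustrative reading, not the Gallant Lab's code: it assumes we already have a posterior probability for every clip in the library, then takes either the single most likely clip (MAP) or the pixel-wise average of the 100 most likely clips (AHP). All names, shapes, and the random stand-in data are hypothetical.

```python
import numpy as np

def ahp_reconstruction(library_frames, posteriors, top_k=100):
    """Average High Posterior: average the frames of the top_k most
    likely clips. library_frames: (n_clips, height, width) array;
    posteriors: (n_clips,) posterior probability of each clip."""
    top = np.argsort(posteriors)[-top_k:]      # indices of the top_k clips
    return library_frames[top].mean(axis=0)    # pixel-wise average

def map_reconstruction(library_frames, posteriors):
    """Maximum a Posteriori: simply the single most likely clip."""
    return library_frames[np.argmax(posteriors)]

# Tiny demo with random data standing in for the 18-million-second library.
rng = np.random.default_rng(0)
frames = rng.random((500, 8, 8))   # 500 clips of 8x8 "video"
post = rng.random(500)             # pretend posterior for each clip
ahp = ahp_reconstruction(frames, post)
print(ahp.shape)                   # one averaged 8x8 frame
```

Averaging many likely clips is why the AHP reconstructions look blurry but stable: individually wrong details tend to cancel, while features shared by all the high-posterior clips survive.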
It's called the fifth printing, but that is a bit of a misnomer. These days the actual physical printings are done in small runs in multiples of six. It is really more like the fifth “edit.”
In practice, corrections have been made to each new edit while its previous edit was being printed. After five edits, there have been hundreds of corrections. The issues addressed have included a lot of out-of-place commas, overused hyphens, typos, and more grammar errors than I'd like to admit.
How do you know you've got the fifth (or better) printing?
Easy. Look at the bar code on the back cover. If the price code (the smaller bar code, to the right of the ISBN) reads “90000” (no price specified), you have an older copy. If it reads “54795” (USD $47.95), you have the fifth or later edit.
In the book, Netlab Loligo, repeated calls are made for true random number generators (TRNGs) to be included in all CPUs, or at least in those intended for use in neural network applications. Naturally, I was very excited to see a headline about Intel having developed one with general-purpose use in mind.
Intel's Low-Power “True” Random Number Generator
IEEE has an article about a new “true” random number generator from Intel that has been ten years in development. Its primary advantage is that, while it is a true RNG, it operates entirely in the digital domain, using digital devices to obtain randomness from hardware. The slow, energy-hogging analog technology normally needed to glean randomness from quantum phenomena has been eliminated. It has a few quirks, such as the need to force the outputs of its two cross-coupled inverters high, and the seemingly unavoidable need to compensate with averaging techniques. I expand just a little on these quirks below.
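To give a feel for what post-processing of a raw hardware bitstream looks like, here is the classic von Neumann extractor, a textbook whitening technique. To be clear, this is a stand-in example, not Intel's actual conditioning logic (which the article describes only as averaging-based): it removes bias from a stream of independent but biased bits by consuming them in pairs.

```python
def von_neumann_debias(bits):
    """Classic von Neumann extractor: read raw bits in pairs,
    emit 0 for a (0,1) pair, 1 for a (1,0) pair, and discard
    the (0,0) and (1,1) pairs. If the source bits are independent,
    the output is unbiased, at the cost of throwing bits away."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)   # the first bit of an unequal pair is the output
    return out

raw = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1]   # example raw (possibly biased) stream
print(von_neumann_debias(raw))          # -> [0, 1, 0]
```

The cost is clear from the example: ten raw bits shrink to three output bits, which is one reason hardware designs prefer cheaper averaging or XOR-folding schemes even though those give weaker guarantees.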
In the spirit of not critiquing something without at least offering a sincere attempt at a solution, I've put forward a quick (if dirty) attempt at an “all logic gates” DTRNG (Digital True Random Number Generator) below. Only the equations were scratched out at the IEEE blog; I've since produced a circuit-diagram graphic, which is included here as well.
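For readers who want the flavor of an all-digital TRNG without reading the circuit diagram, here is a toy simulation of one generic approach: a fast free-running ring oscillator sampled by a slower, independent clock, where accumulated phase jitter between the two makes each sampled bit hard to predict. This is a conceptual illustration only, not the circuit in my diagram, and the parameters are arbitrary.

```python
import random

def ring_oscillator_bits(n_bits, period=1.0, jitter=0.05, sample_interval=7.3):
    """Sample a square wave of the given period at jittery intervals.
    In real hardware the jitter lives in the oscillator itself; for this
    sketch, putting it in the sample times is an equivalent simplification."""
    bits, t = [], 0.0
    for _ in range(n_bits):
        t += sample_interval + random.gauss(0.0, jitter)  # jittery sample time
        phase = (t % period) / period                     # where in the cycle we landed
        bits.append(1 if phase < 0.5 else 0)              # oscillator output at that instant
    return bits

random.seed(1)  # seeded only so the demo is repeatable
print(ring_oscillator_bits(16))
```

Note that a real design would still need health tests and a conditioning stage (such as the averaging the Intel article mentions), because ring-oscillator bits can be correlated when the sampling clock and oscillator drift into a near-integer frequency ratio.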
From the press release:
“Once infected by spores, the worker ants ... leave the nest, find a small shrub and start climbing. The fungi directs all ants to the same kind of leaf: about 25 centimeters [(9.8 inches)] above the ground and at a precise angle to the sun (though the favored angle varies between fungi). How the fungi do this is a mystery.”
A neural network innovation described in the book Netlab Loligo has been awarded a patent (#7,904,398). Of the innovations described in the book, it is the second to receive letters patent (so far). The patent is titled:
“Artificial Synapse Component Using Multiple Distinct Learning Means With Distinct Predetermined Learning Acquisition Times”
Patent titles serve mainly as an aid for future patent searchers. The patented innovation, along with the underlying concepts and principles that led to it, is described and discussed in the book, where it is simply referred to as “Multitemporal Synapses.”
The primary advantage imparted by the innovation is that it gives adaptive systems a present moment in time. This allows them to adapt quickly and intricately to the detailed response needs of their present situation, without cluttering up long-term memories with the minute details of those responses.
Multitemporal Synapses
This blog entry tries to describe Multitemporal Synapses. When time permits, I will provide a new entry with a clearer explanation using book excerpts (P.S. see above entry). It will be geared specifically to laymen. If you are interested, please subscribe to the feed.
Influence Learning Gets A Patent
Influence Based Learning was the first of Netlab's innovations to be granted a patent. This latest patent makes two (and counting, stay tuned).