About:
Exploring new approaches to machine-hosted
neural-network simulation, and the science
behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters
a better environment for developing biologically-guided
neural network designs. Author of
an introductory book on the subject titled:
"Netlab Loligo: New Approaches to Neural Network
Simulation". BOOK REVIEWERS ARE NEEDED!
Can you help?
Other Blogs/Sites:
Neural Networks
Hardware (Robotics, etc.)
Friday, September 23. 2011
It's called the fifth printing, but that is a bit of a misnomer. These days the actual physical printings are done in small runs in multiples of six. It is really more like the fifth “edit.”
In practice, corrections have been made to each new edit while its previous edit was running. After five edits, there have been hundreds of corrections. The issues addressed have included many out-of-place commas, overused hyphens, typos, and more grammar errors than I'd like to admit.
How do you know you've got the fifth (or better) printing?
Easy. Look at the bar code on the back cover. If the price code (the smaller bar code to the right of the ISBN) reads “90000” (no price specified), you have an older copy. If it reads “54795” (USD $47.95), you have the fifth or later edit.
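The five-digit add-on in question is the standard EAN-5 price supplement used on book covers. A minimal sketch of how it decodes ("90000 = no price" and "leading 5 = US dollars" are the standard Bookland conventions; the function name is mine):

```python
def decode_ean5_addon(code):
    """Decode the 5-digit EAN-5 price add-on printed beside a book's ISBN bar code.

    "90000" conventionally means no price is specified; a leading "5"
    means a US-dollar price, with the remaining four digits giving
    dollars and cents.
    """
    if code == "90000":
        return "no price specified"
    if code.startswith("5"):
        dollars, cents = int(code[1:3]), int(code[3:5])
        return f"USD ${dollars}.{cents:02d}"
    return "other currency or price scheme"
```

So “54795” decodes to USD $47.95, and “90000” marks an older copy.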
Tuesday, May 10. 2011
Hi,
The Preface of the book has now been added to the Excerpts Pages that are available here at the site. -djr
Tuesday, September 7. 2010
Influence learning is one of two new learning algorithms that have emerged (so far) from the Netlab development effort. This blog entry contains a brief overview of how it works, and of some of the advantages it brings to the task of neural-network weight adjustment.
This learning method is based on the notion that—like their collective counterparts—neurons may be attracted to, and occasionally repulsed by, the exercise of influence by others. In the case of neurons, the "others" would be other neurons. As simple as that notion sounds, it produces a learning method with a number of interesting benefits and advantages over the current crop of learning algorithms.
A neuron using influence learning is not nosy: it does not concern itself with how its post-synaptic (forward) neurons are learning. It simply assumes that the other neurons in the system are doing their job, which is to learn. Each neuron treats the post-synaptic neurons exercising the most influence as role models for adjusting its connection strengths. The norm is for neurons to see influential forward neurons as positive role models, but they may also see them as negative role models.
As you might guess, the first benefit is simplicity. The method does not try to hide a lack of new ideas behind a wall of new computational complexity. It is a simple new method based on an almost axiomatic observation, and it can be implemented with relatively little computational power.
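The role-model idea can be sketched in a few lines. This is a hypothetical reading, not the book's actual update rule: "influence" is simply taken here as the magnitude of a forward neuron's recent output, and every name is illustrative.

```python
import numpy as np

def influence_update(weights, pre_activity, post_activity, lr=0.01):
    """One hypothetical influence-learning step for a neuron's outgoing weights.

    weights[j]       -- connection strength to forward (post-synaptic) neuron j
    post_activity[j] -- recent output of forward neuron j
    Influence is read as |output|: the most active forward neurons act
    as role models and attract the most weight adjustment.
    """
    influence = np.abs(post_activity)
    role_model = influence / (influence.sum() + 1e-12)  # each neuron's share of total influence
    # Move each weight in the direction of its role model's activity,
    # scaled by that neuron's influence share and by local pre-synaptic activity.
    weights = weights + lr * pre_activity * role_model * np.sign(post_activity)
    return weights
```

Note that nothing in the update requires an error signal propagated backward through the network, which is consistent with the "not nosy" framing above.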
Influence learning is completely free of feedback restrictions. That is, network connection structures may be designed with any type or amount of feedback looping, and the learning mechanism will continue to adapt connection strengths properly, regardless of how complex the feedback scheme is. The types of feedback designers are free to employ include servo feedback, which places the outside world (or some network structure closer to the outside world) directly in the signaling feedback path.
This type of "servo feedback" is shown graphically in figure 6-5 of the book, which has been reproduced here.
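The servo-feedback arrangement can be sketched as a plain act-and-sense loop, with the environment sitting directly in the signaling path. Both callables below are illustrative stand-ins, not Netlab API:

```python
def run_servo_loop(network_step, environment_step, x0, steps=10):
    """Drive a network whose output passes through the outside world
    before returning as its next input (servo feedback)."""
    x = x0
    trace = []
    for _ in range(steps):
        y = network_step(x)      # the network acts on the world
        x = environment_step(y)  # the world's response becomes the next input
        trace.append((y, x))
    return trace
```

With, say, `network_step = lambda x: 0.5 * x` and `environment_step = lambda y: y + 1.0`, the loop settles toward a fixed point: the network and the world shape each other turn by turn.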
Thursday, August 12. 2010
A recent USC study applies a new technique that allows researchers to more closely map the brain's wiring. One goal of the study is to clarify our current understanding of the connection structure of brains. Another is to try to settle the raging "it's an internetwork" / "it's a hierarchical pyramid" debate.
The Netlab abstraction is designed to facilitate a similar, but slightly different concept of brain wiring-structure, which is visually depicted in the cover art of the book:
Any Connectome You'd Like No 1
Netlab's idealized wiring-structure requirement (simplified)
The above diagram should be seen as a cross-section through a sphere, so the word "donut" in this entry's title takes some license. The interior/exterior connection model, as depicted, does seem to find at least passing observational support in the USC study, e.g.:
"The circuits showed up as patterns of circular loops, suggesting that at least in this part of the rat brain, the wiring diagram looks like a distributed network."
-djr
Sunday, July 18. 2010
This entry explores a cross-section of excerpts from the book. The cross-section, in this case, is the need for neural networks to interact with a complex external environment.
"Milieu, An External Environment:
Last but certainly not least: the population of similar cells simply could not learn, or exhibit any of the behaviors described, if they had no external environment with which to interact. This clearly axiomatic observation is so taken for granted in studies of biological organisms, that it is often overlooked when modeling them."
"On the other hand, consider the abstract concept of pure unsupervised learning. In a practical sense, this isn't really possible or doable either. If you place a learner directly into an isolation tank at birth, it will not learn. "
"...Of course, when the system is adaptive, and the external environment sits directly in the feedback path [diagram], this relationship can be stated in the opposite direction as well. That is to say, it is simultaneously about making output signals more consistent for each given set of input stimuli. Essentially, if we take both directions into account, it is the world changing the network, changing the world, changing the network, ... ad infinitum."
. . .
"To summarize, feedback loops of signals originating in the brain and returning can remain inside the brain, go outside the brain but remain inside the organism, or include complex chains of causal activities completely outside of the organism."
"We have seen that we can not expect a biological learning entity to learn anything, if we simply place it directly into an isolation tank at birth. We shouldn't expect anything that we build to exhibit sentient behaviors either, if we don't give it the ability to be an integral, interacting part of the complex world around it. Without such interactive capabilities, anything we build will have a restriction that is functionally identical to a biological organism in an isolation-tank. Even if all other facilities to support consciousness are in place, it will not be capable of using them, if denied the ability to interact with a complex milieu."
Netlab Loligo - New Approaches to Neural Network Simulation
—
This is the book. If you have an interest in machine-hosted neural network simulations and have been looking for something that radically departs from the current list of formulaic, paint-by-numbers approaches, this book may be just what you've been looking for. It is in stock and shipping now, if you would like to check it out prior to the press release being distributed. It is also available from Amazon.com
Not sure? Check out the excerpts pages for a little bit of a peek into the book.
Tuesday, May 4. 2010
Netlab Loligo - New Approaches to Neural Network Simulation
—
This book describes Netlab Loligo, which is, conceptually, a device-oriented breadboard system for neural network design and development. There is a Matlab tool-kit with the same name out there; don't let that fool you. Netlab Loligo is different. It is decidedly NOT just another attempt to mathematically extend existing data points collected from 20-year-old laboratory observations.
The book describes my attempt, as a programmer, to abstract (for the purpose of implementing on a computer) some of the knowledge and understanding gained in Cell and Molecular Neurobiology. It also describes, in general terms, the resulting suite of software utilities (a device-description language compiler, “linker”, and runtime), which I am currently working to make available through a web-based application.
It will be a couple (few) more weeks before I start promoting it, but it is in stock and shipping now for those who want to check it out right away. If you have an interest in machine-hosted neural network simulations and have been looking for something that departs from the current formulaic, paint-by-numbers approaches, this book may be just what you've been looking for.
Not sure? Check out the excerpts pages for a little bit of a peek into the book.
Also, if you are somebody known in the field of neural networks, robotics, cognitive or behavioral sciences, or mind/brain research, and you influence purchasing decisions, contact me. Reviewers are needed.