About:
Exploring new approaches to machine-hosted neural-network simulation, and the science behind them.
Your moderator:
John Repici
A programmer obsessed with giving experimenters a better environment for developing biologically-guided neural network designs. Author of an introductory book on the subject titled "Netlab Loligo: New Approaches to Neural Network Simulation". BOOK REVIEWERS ARE NEEDED! Can you help?
Other Blogs/Sites:
Neural Networks
Hardware (Robotics, etc.)
Friday, March 8. 2013
Developing an A.I. robot?:
Program it to cry when it detects something very bad or very good.
Developing a conscious robot?:
Program it to want to cry when it detects something very bad or very good.
- Related Background
- Related Glossary Entries
Tuesday, January 1. 2013
Some recent glossary updates have included:
- Connection Strength [Refreshed]
Banging away some more on the "basics" drum. Attempting to address some common confusion surrounding weight-values vs. the connection-strengths they represent.
- Catastrophic Forgetting [New]
A definition of a fairly common term from the neural network literature, which labels one of the major problems encountered with conventional networks. The entry also includes a discussion of how the problem is fully resolved by Multitemporal Synapses.
- Interference [New]
Definition and some resources regarding current understanding and ideas on interference in neuronal memory acquisition.
- The Stability Plasticity Problem [New]
Definition of another term from neural network literature, which is used as a more general label for the problem that is responsible for catastrophic forgetting. This entry also explains how the problem it labels is resolved by Multitemporal Synapses.
- Memory [Refreshed]
Added some pics to the section on non-neuronal biological learning, along with a terse discussion of things like herding, schooling, and flocking behaviors. These are used to demonstrate how learning can occur in biological systems via an extra-neuronal mechanism.
- Multitemporal Synapses [Refreshed]
Better explanations and editing plus a diagram (5.1) from the book (originally from the patent application).
- Kinetic Depth Effect [Refreshed]
Added content, plus reorganized.
- Multilayer Perceptron (MLP) [New]
Quick add to support other entries.
Friday, April 20. 2012
“Certainly, one of the most relevant and obvious characteristics of a present moment is that it goes away, and that characteristic must be represented internally.”
Stated plainly[1], the principle behind multitemporal synapses is that we maintain the blunt “residue” of past lessons in long-term connections, while everything else is quickly forgotten, and learned over again, in the instant. In other words, we re-learn the detailed parts of our responses as we are confronted with each new current situation.[2]
One of the primary benefits of applying this principle, in the form of multitemporal synapses, is a neural network construct that is completely free of the usual problems associated with catastrophic forgetting. When you eliminate catastrophic forgetting from your neural network structure, the practical result is the ability to develop networks that continuously learn from their surroundings, just like their natural counterparts.
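The two-timescale idea can be pictured with a toy model. The sketch below is a hypothetical illustration of my own, not Netlab's actual implementation; the class name, learning rates, and decay constant are all invented for demonstration. The effective weight is the sum of a slowly-adjusted long-term component (the blunt "residue") and a rapidly-decaying short-term component (the in-the-moment detail).

```python
class TwoTimescaleSynapse:
    """Hypothetical sketch: effective weight = a slow long-term 'residue'
    plus a fast short-term component that decays at every time step."""

    def __init__(self, lr_long=0.01, lr_short=0.5, decay=0.8):
        self.w_long = 0.0     # blunt residue of past lessons (persists)
        self.w_short = 0.0    # in-the-moment detail (quickly forgotten)
        self.lr_long = lr_long
        self.lr_short = lr_short
        self.decay = decay

    @property
    def weight(self):
        # The synapse's effective strength combines both timescales.
        return self.w_long + self.w_short

    def learn(self, error_signal):
        # The long-term component absorbs only a small fraction of each
        # lesson, while the short-term component tracks the current moment.
        self.w_long += self.lr_long * error_signal
        self.w_short += self.lr_short * error_signal

    def step(self):
        # With no reinforcement, the detail fades; the residue remains.
        self.w_short *= self.decay

s = TwoTimescaleSynapse()
for _ in range(10):       # a repeated lesson
    s.learn(1.0)
    s.step()
for _ in range(20):       # the situation moves on; detail decays away
    s.step()
print(s.w_long, s.w_short)   # residue persists; detail is nearly gone
```

The point of the sketch is only that forgetting the short-term component does not erase the long-term residue, so the next similar situation starts from a useful baseline rather than from scratch.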
One major challenge with conventional neural network models has been how to maintain connections that store enough intricate, in-the-moment response detail to deal with any contingency the system may encounter. Conventionally, such details would overwhelm long-term lessons stored in permanent connection weights. This characteristic of conventional neural network models is known as The Stability Plasticity Problem, and is the underlying cause of "catastrophic forgetting."
When an artificial neural network that has learned a training set of responses then encounters a new response to be learned, the result is usually 'catastrophic forgetting' of all earlier learning. Training on the new detail alters connections that the network maintains in a holistic (global) fashion. Because of this, it is almost certain that such a change will radically alter the outputs that were desired for the original training set.
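The effect is easy to reproduce even in a tiny model. The sketch below is illustrative only: a single linear unit trained by plain gradient descent learns a response for one input pattern, then trains on a second, overlapping pattern, and the originally-learned response drifts away because both tasks share the same weights.

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(w, x, target, lr=0.1, steps=200):
    """Plain gradient descent on squared error for one linear unit."""
    for _ in range(steps):
        err = dot(w, x) - target
        for i in range(len(w)):
            w[i] -= lr * err * x[i]
    return w

w = [0.0, 0.0]

# Task A: the input pattern [1, 1] should produce 1.0
x_a, t_a = [1.0, 1.0], 1.0
train(w, x_a, t_a)
out_a_before = dot(w, x_a)     # task A is learned (output near 1.0)

# Task B: the overlapping pattern [1, 0] should produce 0.0
x_b, t_b = [1.0, 0.0], 0.0
train(w, x_b, t_b)
out_b = dot(w, x_b)            # task B is learned (output near 0.0)...
out_a_after = dot(w, x_a)      # ...but task A's output has drifted badly

print(out_a_before, out_b, out_a_after)
```

Because the second task adjusts a weight that the first task also depends on, training B to perfection silently destroys half of A's learned response; nothing in the training procedure protects the earlier lesson.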
[Read more...]
Monday, March 19. 2012
The McGurk effect is a perceptual illusion that shows how our perception of reality can be affected by interactions between multiple senses. The presentation of the McGurk effect demonstrated in the following video also shows, convincingly, that our visual processes can completely override our auditory perception of speech, at least in certain circumstances.
In the above video, you will see the speaker's lips form an 'f'-sound. You will “hear” an 'f'-sound even though the actual sound being produced is a 'b'-sound (dubbed in over the video).
In this video, the 'f' perception reported by your eyes completely overrides the 'b' perception reported by your ears. Can we conclude, from this, that visual processing in the brain is given full priority over auditory processing?
That may be a bit hasty.
[Read more...]
Saturday, March 3. 2012
The site was switched to a new hosting service at the end of February. The blog and glossary were the pieces I was most anxious about, but they seem to have handled the move just fine.
So far, this host seems to be providing much faster responses. It should also provide better up-time.
Responses have gone from often taking 40-70 seconds down to less than ten. In fact, I haven't counted a single response greater than 12 seconds yet.
The previous provider would regularly (about once a month) make changes that completely hid most, or all, of the site's content from the search-engines and in-links. Those down-times would typically last from two to six days. Many down-times, including the last one, only ended when I wrote some defensive code to work around their new server-settings.
Hoping this provider will do better in that department as well.
So far, I'm happy with it.
Thursday, January 19. 2012
Spent some time today doing minor edits to glossary entries. Of all the small edits, the most significant change made was to add the following section to the entry for weights.
“ . . . . . . .
Netlab's Compatibility Mode
ANN models that use floating-point signed-value weights in the conventional fashion are math-centric. That is, they are typically concerned only with the signed numeric weight value, rather than with the connection strength represented by its absolute value. In this case, for example, increasing the weight value will make it more positive, regardless of whether it represents an excitatory or inhibitory connection.
Netlab's default behavior is to operate directly on connection-strength representations, regardless of how they are implemented internally. Netlab neurons facilitate the conventional practice, however, by allowing it to be specified in the learning method for each weight-layer.
The table below shows how Netlab facilitates compatibility with existing practices. The table documents how the translation is carried out between the traditional math-centric convention, and Netlab's connection-strength-centric convention.
    Operation  | Excitatory connection        | Inhibitory connection
    -----------+------------------------------+------------------------------
    Increase   | Increase Connection Strength | Decrease Connection Strength
    Decrease   | Decrease Connection Strength | Increase Connection Strength

Translations performed when conventional adjustment practice is specified for a connection.
”
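The translation the table describes can be sketched in a few lines. This is a hypothetical illustration of my own; the function names are not Netlab's API. A strength-centric delta acts on the absolute value of the weight (so "increase" always means "strengthen", whatever the connection type), while the conventional math-centric delta acts on the signed value directly.

```python
def adjust_strength(weight, delta):
    """Strength-centric adjustment: a positive delta always strengthens
    the connection, whether it is excitatory (+) or inhibitory (-)."""
    sign = -1.0 if weight < 0 else 1.0    # inhibitory vs. excitatory
    strength = abs(weight) + delta
    return sign * max(strength, 0.0)      # a strength can't go negative

def adjust_value(weight, delta):
    """Conventional math-centric adjustment: delta acts on the signed
    value, so '+' strengthens excitatory but weakens inhibitory."""
    return weight + delta

# Excitatory connection (+0.5): both conventions strengthen it.
print(adjust_strength(0.5, 0.1), adjust_value(0.5, 0.1))

# Inhibitory connection (-0.5): strengthening means "more negative",
# so the two conventions now move the weight in opposite directions.
print(adjust_strength(-0.5, 0.1), adjust_value(-0.5, 0.1))
```

For excitatory connections the two conventions agree; for inhibitory connections the same "increase" request moves the signed weight in opposite directions, which is exactly the translation shown in the table above.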
One possible analogy for the conventional, value-based, adjustment practice is that of adjusting for a specific water temperature from a faucet. If the water is too cold, for example, adjusting the weight value is comparable to simultaneously increasing the hot and reducing the cold (hot being the negative inhibitory weights, and cold being the positive excitatory weights in this analogy). Conversely, if the water is too hot, it is adjusted by simultaneously decreasing the hot, and increasing the cold.
In this way, Netlab is able to fully support the practice of working directly with the numeric value of a signed weight, but it also supports its own alternative strategy of adjusting connection strength representations. This strategy seems to be more representative of what has been learned about the cell, and molecular biology of neurons. The faucet analogy used above to describe the value-based adjustment is not sufficient to describe this strategy [1].
- Related glossary entries:
===========
Notes:
[1] - This is not to say the connection-strength adjustment strategy can't be related with an analogy, just that I have been too lazy, or too unfocused to come up with one that feels satisfyingly apt.
Thursday, January 5. 2012
Linguists have recently discovered [1] that almost all words are metaphorical at their base, and some people (e.g., me) posit that they all are. Though speculative, it is at least conceivable that even the sub-language signaling in the brain, which eventually leads to language, is also metaphorical. Consider that the bell may become a metaphor for food in the mind of Pavlov's dog.
Language is also able to relate ambiguity about the concepts it conveys. The word “life,” for example, can mean life-biology, or life-consciousness. Up until now, it has been perfectly acceptable to use these two meanings interchangeably. There simply has never been an instance of consciousness that existed outside of a biological body — at least none that we could directly experience with our physical senses.
Things may be changing now. . .
[Read more...]