Influence learning is one of two new learning algorithms that have emerged (so far) from the Netlab development effort. This blog entry contains a brief overview describing how it works, and some of the advantages it brings to the task of neural network weight-adjustment.
This learning method is based on the notion that—like their collective counterparts—neurons may be attracted to, and occasionally repulsed by, the exercise of influence by others. In the case of neurons, the "others" would be other neurons. As simple as that notion sounds, it produces a learning method with a number of interesting benefits and advantages over the current crop of learning algorithms.
A neuron using influence learning is not nosy, and does not concern itself with how its post-synaptic (forward) neurons are learning. It simply trusts that their job is to learn, and that they are doing their job. In other words, a given neuron fully expects and assumes that the other neurons in the system are learning. Each one treats the post-synaptic neurons that are exercising the most influence as role models for adjusting connection-strengths. The norm is for neurons to see influential forward neurons as positive role models, but neurons may also see influential forward neurons as negative role models.
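To make that concrete, here is a minimal sketch of the idea, assuming purely for illustration that a forward neuron's influence can be stood in for by the magnitude of its output. This is not the actual update rule from the book or the patent:

```python
import numpy as np

def adjust_weights(weights, own_output, forward_outputs, rate=0.01):
    """Illustrative sketch only; not the Netlab Loligo update rule.

    weights         : connection-strengths to the forward (post-synaptic) neurons
    own_output      : this neuron's current output (scalar)
    forward_outputs : current outputs of the forward neurons
    """
    # Approximate "influence" by how strongly each forward neuron is signaling;
    # here, simply the magnitude of its output.
    influence = np.abs(np.asarray(forward_outputs))

    # Forward neurons exercising the most influence act as role models:
    # connections toward them receive the largest adjustments.
    role_model_share = influence / (influence.sum() + 1e-9)
    return np.asarray(weights) + rate * own_output * role_model_share
```

Notice that nothing in this sketch needs to know how the forward neurons compute or learn; only their outputs are observed.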
As you might guess, the first benefit is simplicity. The method does not try to hide a lack of new ideas behind a wall of new computational complexity. It is a simple new method based on a simple, almost axiomatic observation, and it can be implemented with relatively little computational power.
Influence learning is completely free of feedback restrictions. That is, network connection-structures may be designed with any type or amount of feedback looping. The learning mechanism can properly adapt connection-strengths regardless of how complex the feedback scheme is. The types of feedback designers are free to employ include servo feedback, which places the outside world (or some network structure that is closer to the outside world) directly in the signaling feedback path.
This type of "servo-feedback" is shown graphically in figure 6-5 of the book, which has been re-produced here.
A single network can be spread out over any number of processors running asynchronously and in parallel.
Influence learning mechanisms are concerned only with factors within neurons that are nearby from a connection standpoint. Because of this, a designer is free to build a very large neural network in which smaller parts of the network run on many different processors, each with its own storage, and even its own clock-speed.
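As a sketch of why that works (the data structures here are assumptions for illustration, not the book's interfaces): each neuron's adjustment reads nothing beyond the latest outputs of its directly connected forward neurons, so a partitioned network only has to exchange boundary activations between processors, never global error terms or synchronized gradients.

```python
from dataclasses import dataclass, field

@dataclass
class Neuron:
    # Ids of forward (post-synaptic) neurons; these may live on another processor.
    forward_ids: list
    weights: dict = field(default_factory=dict)   # connection-strengths, keyed by forward id
    output: float = 0.0

def local_adjust(neuron: Neuron, forward_outputs: dict, rate: float = 0.01) -> None:
    """forward_outputs maps a forward neuron's id to its latest output.

    The values can come from local memory or from a message sent by another
    process; nothing else about the rest of the network is required, which is
    what makes asynchronous, multi-processor operation straightforward.
    """
    for j in neuron.forward_ids:
        influence = abs(forward_outputs.get(j, 0.0))
        neuron.weights[j] = neuron.weights.get(j, 0.0) + rate * neuron.output * influence
```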
Learning based on influence over outputs is inherently self-limiting and does not over-train weights. Neural networks employing influence-based learning can be "always on", continuously adapting to their environment. This is sometimes called "online learning", though in common usage that term makes no distinction between continual and continuous learning.
Online vs. Continuous - As stated, the term online learning is often used to convey the concept of learning methods that are merely continual. Conventionally, neural network systems are trained on a particular, representative set of training examples (sometimes called "exemplars"). The learning function in a traditional neural network is turned off once it has been "fully" trained on that set of exemplars. At the time it is placed into service, it has no ability to learn how to respond to any new situations it may encounter. That said, in some cases systems have been developed that allow learning to be turned on and off while they are in service, and this has been termed online learning.
The difference between continuous and continual becomes an important distinction, however, when discussing influence learning. Influence learning does not rely on an externally derived indication of when to turn connection-weight adjustments on or off. Weight-adjustments are tacitly limited based on need. This allows adaptation to be always on, continuously making adjustments as needed for the present situation. In other words, a system can be designed to continuously adapt to the environment with which it interacts. This is especially true when influence learning is combined with multitemporal synapses and weight-to-weight learning.
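As a rough illustration of the contrast (the self-limiting scheme below is only an assumed stand-in, not the mechanism from the book): an always-on system folds adaptation into every signal pass, and the adjustment tapers by itself rather than being switched off by an external training schedule.

```python
import numpy as np

def respond(inputs, weights):
    # Placeholder forward pass for this sketch.
    return np.tanh(weights @ inputs)

def serve(stream, weights, rate=0.01, bound=1.0):
    """Always-on operation (illustrative sketch only).

    Every response is also an adaptation step, and the step size is limited
    by each weight's remaining headroom toward a soft bound, so there is no
    separate train/deploy switch.
    """
    for inputs in stream:                    # the live environment, not a training set
        outputs = respond(inputs, weights)
        headroom = np.clip(bound - np.abs(weights), 0.0, None)
        weights += rate * np.outer(outputs, inputs) * headroom
        yield outputs
```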
As stated, the learning method sees no justifiable reason to know anything about the inner workings its role models (post-synaptic neurons) use to achieve learning. It simply assumes that whatever method they are using is working, and then uses their activity as a template to drive its own learning process.
Because the learning algorithm permits feedback, there is no longer any requirement that neural networks be structured in the traditional layered configuration. That said, a designer may freely choose to limit his or her neural network to be a pure feed-forward structure, with its characteristic layers. It is also the case that networks designed with feedback will often form their own layers as they learn.
In these cases, there are no longer any restrictions on the number of layers such networks can employ. Influence learning is concerned only with activity that occurs during signal propagation. There is no reverse propagation of ever-diminishing error values back to pre-synaptic (formerly "hidden") layers of neurons.
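Here is a sketch of what that looks like in simulation (the graph representation, activation function, and adjustment rule are illustrative assumptions, not the book's): the connection graph may freely contain loops, and weight adjustment happens during the same forward sweep that propagates signals, with no reverse pass of error values.

```python
import math

def sweep(connections, weights, outputs, rate=0.01):
    """One propagation step over an arbitrary connection graph (loops allowed).

    connections : {source_id: [target_id, ...]}, may freely contain cycles
    weights     : {(source_id, target_id): connection-strength}
    outputs     : {neuron_id: output from the previous step}, one entry per neuron
    """
    # Signal propagation: each neuron sums its weighted incoming signals.
    new_outputs = {}
    for i in outputs:
        net = sum(weights[(k, i)] * outputs[k]
                  for k, targets in connections.items() if i in targets)
        new_outputs[i] = math.tanh(net)

    # Local adjustment during the same sweep; no error is propagated backward.
    for (i, j), w in weights.items():
        weights[(i, j)] = w + rate * new_outputs[i] * abs(new_outputs[j])

    return new_outputs
```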
This post should give you a fairly good idea of the inner workings and underlying concept of this new learning method. If you get it, and you're interested, you will find a much more detailed description and discussion in the book "Netlab Loligo: New Approaches to Neural Network Simulation", which is available here, or from Amazon.com.
It should also be noted that this learning method is now patented, and descriptions do not constitute grant of license.