In Netlab™, there are two different senses, or connotations, of the word convergence, which describe two related types of convergence:
- Adaptive Convergence is just "convergence." This is the conventional form of the word, as it is used within most existing artificial neural network literature. It describes a set of weights during supervised training, as they begin to find (converge on) the values needed to produce the correct (trained) response.
- Reactive Convergence is the special sense of the word convergence, which is used in Netlab™. It describes convergence of propagating signals within networks that employ signal feedback (i.e., "reactive feedback"). It has nothing to do with changes in weight-values or training.
In short, adaptive convergence describes convergence of weight values, while reactive convergence describes convergence of signal values.
There is a physical, neurobiological connotation of the word as well, which will also be described in this entry (see the section below titled "Convergence In Biology").
. . . . .
Adaptive Convergence
(or just: Convergence)
When used without a qualifier in the field of neural networks, convergence is generally understood to mean the conventional usage. In the context of conventional artificial neural networks, convergence describes a progression towards a network state where the network has learned to properly respond to a set of training patterns within some margin of error. A convergence error of 10 percent, for example, means a network has converged on a training set when it produces output responses that are within ten percent of the desired output values for all the input patterns in the trained repertoire.
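The ten-percent criterion above can be sketched as a small test. This is a minimal sketch; the function and parameter names are illustrative assumptions, not part of any Netlab™ API:

```python
def has_converged(outputs, targets, tolerance=0.10):
    """True when every output is within `tolerance` (here 10%)
    of its desired target value -- i.e. the network has
    converged on this training set.

    Note: a target of exactly zero demands an exact match here;
    a real test would pick a suitable absolute tolerance too.
    """
    return all(
        abs(o - t) <= tolerance * abs(t)
        for o, t in zip(outputs, targets)
    )

# All responses within 10% of the targets -> converged.
print(has_converged([0.95, 1.05], [1.0, 1.0]))   # True
# One response is off by 20% -> not converged.
print(has_converged([0.80, 1.00], [1.0, 1.0]))   # False
```

The check is all-or-nothing over the repertoire: a single out-of-tolerance response is enough to keep the network in the "not converged" state.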
There is a form of adaptive convergence which is peculiar to Netlab™ and its multitemporal synapses. This usage will be described further, in its own section, below (titled "Mixed-Mode Convergence Scenarios").
. . . . .
Reactive Convergence

In Netlab™ the term convergence may also be used to describe a process that is entirely based on the propagation of signals (stimuli) through the network. Unlike the conventional ANN usage, this connotation has nothing to do with training, or changing weight values. When used in this fashion, it will usually, though not always, be qualified as reactive convergence.
In conventional feed-forward-only networks, the term "reactive convergence" has no meaning, since, without feedback, there are no "convergence" dynamics that need to be described in such a fashion.
In neural networks that employ reactive feedback, the network may oscillate, or be unstable, for multiple iterations before it settles on a given set of responses to a given set of inputs. In this case, the network will be said to converge when (or where) the oscillations (or "ringing") settle and the network is producing some usable output. The output is not required to be static (steady) to be converged, only to be providing usable, correct responses to a given present-moment encounter. Such responses may, in fact, be cyclical in nature, over time.
Note also that the network doesn't necessarily converge if it has not fully learned how to respond to a given situation. In this case, the oscillations may well serve as a form of trial and error for whatever learning processes might be used, but the un-converged character of the stimuli, when described in reactive terms, is purely a description of the reactive, propagating signals, and not of any changes in the connection-weights.
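The settling behavior described above can be sketched with a single scalar feedback signal. The names and the one-signal model are assumptions for illustration, not Netlab™ internals:

```python
def settle(feedback_gain, external_input, tol=1e-6, max_iters=1000):
    """Propagate one feedback signal until successive values stop
    changing (the 'ringing' settles), or report non-convergence."""
    signal = 0.0
    for step in range(max_iters):
        new_signal = external_input + feedback_gain * signal
        if abs(new_signal - signal) < tol:
            return new_signal, step   # reactively converged
        signal = new_signal
    return None, max_iters            # never settled

# With |gain| < 1 the loop "rings" (alternates in sign of change)
# but settles near input / (1 - gain).
value, steps = settle(feedback_gain=-0.5, external_input=1.0)

# With gain = -1 the signal alternates between 0 and 1 forever:
# the network does not reactively converge.
unsettled, _ = settle(feedback_gain=-1.0, external_input=1.0)
```

In the second call the signal keeps bouncing between extremes, which is exactly the un-converged, purely signal-level condition described above: nothing about the weights has changed in either run.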
This reactive connotation of the word "convergence" is an exact match for how it is used in electronics design (see the section below on SPICE®).
. . . . .
Mixed-Mode Convergence Scenarios
In that last example, a network that hadn't reactively converged had signal values that were oscillating between extremes. Notice that this is similar to how the weight values might be said to behave in a network that has not achieved adaptive (conventional) convergence.

In this sense, the "un-converged" label being applied to the reactive signal propagation looks almost identical to how the un-converged nature of the weight values is expressed.
In Netlab™ networks, it is often the case that both of these forms of convergence are happening simultaneously and in tandem with each other. The erratically changing signal values help the short-term weights within multitemporal synapses "hunt" for values that mimic responses already started by the long-term weights.
. . . . .
Convergence In Biology
When describing biological neural networks, there is yet another definition, which is completely unrelated to the definitions described above.

In biological terms, convergence refers to the combining of multiple signals from multiple sources into a smaller number of signals from a new, smaller set of sources. For example, a neuron may combine thousands of signals coming in through thousands of synapses into a single, representative output on its axon. The term may also be used, similarly, to describe the phenomenon of multiple sensory receptors feeding information to a smaller number of neural cells.
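As a toy sketch of this fan-in, many synaptic inputs can be combined into one axonal value. The weighted sum and logistic squashing here are illustrative assumptions, not a model of any specific cell:

```python
import math

def neuron_output(synaptic_inputs, weights):
    """Converge many incoming signals into one output: a weighted
    sum over all synapses, squashed to a single axon value."""
    total = sum(w * s for w, s in zip(weights, synaptic_inputs))
    return 1.0 / (1.0 + math.exp(-total))   # logistic squashing

# 2000 tiny synaptic signals converge onto one axonal output.
out = neuron_output([0.001] * 2000, [0.5] * 2000)
print(out)   # a single scalar, ~0.731
```

The point is purely structural: thousands of input lines on the left, one representative signal on the right.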
. . . . .
Convergence In Electronics Design
(e.g., as used in SPICE® simulations)
Electronic circuits don't normally provide learning facilities, so their only convergence is reactive. When designing electronic circuits in SPICE®, convergence is based on the reactive settling of the circuit values over multiple iterations. If the values do not settle after some pre-specified number of iterations, a "no convergence" error is produced.
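In the same spirit, that inner loop can be sketched as a fixed-point iteration with a pre-specified iteration cap. The names here are hypothetical, for illustration only, and not actual SPICE® internals:

```python
import math

def iterate_to_convergence(update, x0=0.0, tol=1e-9, max_iters=100):
    """Repeat a node-value update until successive values settle;
    raise the classic error if the iteration cap is reached first."""
    x = x0
    for _ in range(max_iters):
        x_next = update(x)
        if abs(x_next - x) < tol:
            return x_next            # settled: converged
        x = x_next
    raise RuntimeError("no convergence")

# A contracting update settles: x -> cos(x) converges to ~0.739085
# well inside 100 iterations.
root = iterate_to_convergence(math.cos)

# A non-contracting update (e.g. x -> x + 1) would exhaust the cap
# and raise RuntimeError("no convergence") instead.
```

Real simulators use far more elaborate update steps (Newton iterations over all node voltages), but the convergence test, and the failure mode, have this same shape.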
. . . . .
The term "convergence," without a qualifier, means:
- Reactive convergence when talking about electronics design
- Adaptive convergence when talking about neural network simulations
One of the best general definitions of the word convergence that I have seen was in a search-engine blurb for a link that was 404 at WordIQ.com:

"Convergence - Definition. Convergence means approaching a definite value, as time goes on; or approaching a definite point, or a common view or opinion, or a fixed state of affairs."