Unless otherwise indicated, all glossary content is:
(C) Copyright 2008-2022
Dominic John Repici
~ ALL RIGHTS RESERVED ~
No part of this Content may be copied without the express written permission of Dominic John Repici.
 
Convergence

 
In Netlab™, the word convergence is used in two different senses, or connotations, describing two related but distinct types of convergence.

  • Adaptive Convergence is just "convergence." This is the conventional form of the word, as used in most existing artificial neural network literature. It describes the weights of a network during supervised training as they find (converge on) the values needed to produce the correct (trained) responses.

  • Reactive Convergence is the special sense of the word used in Netlab™. It describes convergence of propagating signals within networks that employ signal feedback (i.e., "reactive feedback"). It has nothing to do with changes in weight values or training.

Simply put, adaptive convergence describes convergence of weight values, while reactive convergence describes convergence of signal values.

There is a physical, neurobiological connotation of the word as well, which will also be described in this entry (see the section below titled "Convergence In Biology").



. . . . .
Adaptive Convergence
(or just: Convergence)


When used without a qualifier in the field of neural networks, convergence is generally understood in its conventional sense. In conventional artificial neural networks, convergence describes a progression toward a state in which the network has learned to respond properly to a set of training patterns within some margin of error. A convergence error of 10 percent, for example, means the network has converged on a training set when it produces output responses that are within ten percent of the desired output values for every input pattern in the trained repertoire.
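
That criterion can be sketched in a few lines of Python. This is purely illustrative and not Netlab™ code; the network object, its respond() method, and the treatment of the 10-percent margin as an absolute tolerance on outputs in the 0-to-1 range are all assumptions:

    def has_converged(network, training_set, margin=0.10):
        # Illustrative adaptive-convergence test: the network has converged on
        # the training set only if every output is within `margin` of its
        # desired value for every pattern in the set.
        for inputs, desired in training_set:
            outputs = network.respond(inputs)    # assumed forward-pass method
            for out, target in zip(outputs, desired):
                if abs(out - target) > margin:   # outside the allowed error
                    return False
        return True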

There is a form of adaptive convergence that is peculiar to Netlab™ and its multitemporal synapses. This usage is described further in its own section below (titled "Mixed-Mode Convergence Scenarios").



. . . . .
Reactive Convergence


In Netlab™ the term convergence may also be used to describe a process that is based entirely on the propagation of signals (stimuli) through the network. Unlike the conventional ANN usage, this connotation has nothing to do with training or changing weight values. When used in this fashion, it will usually (though not always) be qualified as reactive convergence.

In conventional feed-forward-only networks, the term "reactive convergence" has no meaning, since, without feedback, there are no "convergence" dynamics that need to be described in such a fashion.

In neural networks that employ reactive feedback, the network may oscillate or be unstable for multiple iterations before it settles on a given set of responses to a given set of inputs. In this case, the network is said to converge when (or where) the oscillations (or "ringing") settle and the network is producing some usable output. The output is not required to be static (steady) to be considered converged, only to provide usable, correct responses to the present-moment encounter. Such responses may, in fact, be cyclical over time.

Note also that the network will not necessarily converge if it has not fully learned how to respond to a given situation. In this case, the oscillations may well serve as a form of trial and error for whatever learning processes are used, but the un-converged characteristic, when described in reactive terms, refers purely to the reactive, propagating signals and not to any changes in the connection weights.
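
As a rough illustration (again, not Netlab™ code), a feedback network can be re-propagated on the same input until its output signals stop changing appreciably. The step() method and the tolerance below are assumptions, and this simple check only recognizes the static case; as noted above, a converged response may actually be cyclical, which a check like this would not detect:

    def settle(network, inputs, tol=1e-4, max_iters=100):
        # Illustrative reactive-convergence loop: re-propagate the same input
        # through a feedback network until the output signals stop changing
        # appreciably.  Returns the settled outputs, or None if the network is
        # still "ringing" after max_iters passes.
        previous = network.step(inputs)          # assumed one propagation pass
        for _ in range(max_iters):
            current = network.step(inputs)
            change = max(abs(c - p) for c, p in zip(current, previous))
            if change < tol:
                return current                   # signals have settled
            previous = current
        return None                              # not (reactively) converged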

This reactive connotation of the word "convergence" is an exact match for how it is used in electronics design (see the section below on SPICE®).



. . . . .
Mixed-Mode Convergence Scenarios


In that last example, a network that had not reactively converged had signal values oscillating between extremes. Notice that this is similar to how the weight values might be said to behave in a network that has not achieved adaptive (conventional) convergence.

In this sense, the "un-converged" label applied to the reactive signal propagation looks almost identical to the way the un-converged state of the weight values is expressed.

In Netlab™ networks, it is often the case that both forms of convergence are happening simultaneously, in tandem. The erratically changing signal values help the short-term weights within multitemporal synapses "hunt" for values that mimic responses already started by the long-term weights.
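
The interplay can be caricatured roughly as follows. This is emphatically not Netlab™'s actual synapse model; it simply assumes an effective weight made of a slowly trained long-term part plus a fast, decaying short-term part, with arbitrary rate and decay values:

    class MultitemporalSynapseSketch:
        # Toy illustration only -- NOT Netlab's actual synapse model.  The
        # effective weight is a slowly trained long-term part plus a fast,
        # decaying short-term part; rates and decay values are arbitrary.
        def __init__(self, long_term=0.5):
            self.long_term = long_term    # adapts slowly (adaptive convergence)
            self.short_term = 0.0         # "hunts" quickly, decays toward zero

        def weight(self):
            return self.long_term + self.short_term

        def fast_update(self, error_signal, rate=0.2, decay=0.9):
            # Nudged on every propagation pass, while signals may still be
            # oscillating (reactive convergence in progress).
            self.short_term = decay * self.short_term - rate * error_signal

        def slow_update(self, error_signal, rate=0.01):
            # Ordinary, much slower training of the long-term weight.
            self.long_term -= rate * error_signal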



. . . . .
Convergence In Biology
(Physical Convergence)


When describing biological neural networks, there is yet another definition, one that is completely unrelated to the definitions described above.

In biological terms, convergence refers to the combining of multiple signals from multiple sources into a smaller number of signals from a new, smaller set of sources. For example, a neuron may combine thousands of signals arriving through thousands of synapses into a single, representative output on its axon. The term is also used, similarly, to describe multiple sensory receptors feeding information to a smaller number of neural cells.
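
A trivially simple sketch of that many-to-one fan-in (the counts and weight ranges below are arbitrary assumptions, not biological values):

    import random

    # Toy fan-in example: thousands of incoming signals "converge" onto one
    # output signal.
    incoming = [random.uniform(0.0, 1.0) for _ in range(10_000)]
    weights = [random.uniform(-0.001, 0.001) for _ in incoming]
    output = sum(w * s for w, s in zip(weights, incoming))
    print(f"10,000 inputs converged to one output value: {output:.4f}")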



. . . . .
Convergence In Electronics Design
(e.g., as used in SPICE® simulations)


Electronic circuits don't normally provide learning facilities, so their only form of convergence is reactive. When designing electronic circuits in SPICE®, convergence is based on the settling of the circuit values over multiple iterations. If the values do not settle after some pre-specified number of iterations, a "no convergence" error is produced.
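
The same idea in miniature: an iterative solver repeats its update step until successive node-voltage estimates agree within a tolerance, and reports a "no convergence" error if the iteration limit is reached first. The loop below is a placeholder, not actual SPICE® internals:

    def solve_until_converged(update_step, node_voltages, tol=1e-6, max_iters=200):
        # Illustrative SPICE-style iteration: repeat the solver's update step
        # until successive node-voltage estimates agree to within `tol`.
        for _ in range(max_iters):
            new_voltages = update_step(node_voltages)  # e.g., one Newton-Raphson pass
            if max(abs(n - v) for n, v in zip(new_voltages, node_voltages)) < tol:
                return new_voltages                    # converged
            node_voltages = new_voltages
        raise RuntimeError("no convergence")           # limit reached before settling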



. . . . .
Unqualified Convergence


The term "convergence," without a qualifier, means:
  • Reactive convergence when talking about electronics design
  • Adaptive convergence when talking about neural network simulations

One of the best general definitions of the word convergence that I have seen was in a search-engine blurb for a link that was 404 at WordIQ.com:
Convergence - Definition. Convergence means approaching a definite value, as time goes on; or approaching a definite point, or a common view or opinion, or a fixed state of affairs.


Also: Kinetic Depth Effect        
