Also called Feedback Signals
Feedback describes the use of output signals that are fed back to serve as input signals. Feedback signals are usually used in tandem with other input signals, and cause Turing-indeterminate responses in the systems that employ them.
The term does not necessarily describe direct feedback, from the output of a neuron to its own input. It may also describe indirect feedback, as when the output of a given neuron is connected to pre-synaptic neuron(s) that eventually connect (possibly through other neurons) back to the signal-producing neuron's input.
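Both cases can be illustrated with a small sketch. The neuron indices, the edge list, and the `feeds_back` helper below are all hypothetical, invented only to show the distinction; an edge `(i, j)` means neuron `i`'s output feeds neuron `j`'s input.

```python
# Hypothetical sketch: direct vs. indirect feedback in a tiny network.
edges = [
    (0, 0),                   # direct feedback: neuron 0 feeds its own input
    (0, 1), (1, 2), (2, 0),   # indirect feedback: 0 -> 1 -> 2 -> back to 0
]

def feeds_back(start, edges):
    """Return True if a signal leaving `start` can reach `start`'s own input."""
    seen, frontier = set(), {start}
    while frontier:
        nxt = {j for (i, j) in edges if i in frontier}
        if start in nxt:
            return True
        frontier = nxt - seen
        seen |= nxt
    return False

print(feeds_back(0, edges))            # True: a loop (direct or indirect) exists
print(feeds_back(0, [(0, 1), (1, 2)])) # False: strictly forward signaling
```

Removing the direct edge `(0, 0)` still yields `True`, because the indirect path through neurons 1 and 2 closes the loop.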
Finally, and possibly most importantly, the output of a given neuron or set of neurons may produce signals that affect the external environment. The changes in the external environment are sensed by input sensors which, in turn, produce signals that eventually form part of the input to the neurons that produced the initial output.
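This environmental loop can be sketched as a simple act-then-sense cycle. The `motor_output` and `sense` functions and the numeric values below are illustrative assumptions, not part of any particular system:

```python
# Hypothetical sketch: feedback closed through the external environment.
# A motor neuron's output changes the environment; a sensor reads the
# changed environment back in as input on the next step.
env_state = 0.0

def motor_output(sensed):
    # illustrative neuron: its output depends on what was just sensed
    return 0.5 * sensed + 1.0

def sense(state):
    # illustrative sensor: reads the current environment state
    return state

for step in range(3):
    action = motor_output(sense(env_state))
    env_state += action   # the output alters the environment...
    # ...and that alteration is sensed as input on the next iteration

print(env_state)  # 4.75
```

No wire connects the neuron's output to its input here; the loop exists only because the environment carries the signal back around.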
In Neural Networks
Many traditional neural network learning algorithms restrict network topologies to forward signaling only. That is, they do not allow feedback in the design of the neural network. This restriction leads to topological layers in these designs, which are sometimes referred to as Multilayer Perceptrons, or MLPs.
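A minimal sketch of such a feedforward-only topology follows; the weights, layer sizes, and activation choice are arbitrary illustrations. Signals flow strictly from one layer to the next, so no feedback is possible:

```python
# Minimal sketch of an MLP-style feedforward topology: each layer's output
# becomes the next layer's input, and nothing flows backward.
import math

def layer(inputs, weights):
    # each row of `weights` holds one neuron's input weights
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

x = [1.0, -1.0]                                # input layer
hidden = layer(x, [[0.5, 0.2], [-0.3, 0.8]])   # hidden layer: 2 neurons
output = layer(hidden, [[1.0, -1.0]])          # output layer: 1 neuron
print(len(output))  # 1
```

Because the topology is a chain of layers with no cycles, a single left-to-right pass fully evaluates the network, which is what makes these algorithms simple but also what rules feedback out.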
Netlab introduces a new learning algorithm, called influence learning, which is based on attraction to the influence exerted over other neurons during signal propagation. This algorithm is completely feedback-tolerant.
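To illustrate what feedback tolerance requires, the sketch below shows one generic way to evaluate a network whose weight graph contains a cycle: propagate signals iteratively and let activations settle toward a fixed point. This is a hypothetical illustration of feedback-tolerant signal propagation in general, not Netlab's actual influence-learning implementation; the weights, inputs, and iteration count are invented.

```python
# Hypothetical sketch (not Netlab's influence-learning code): evaluating a
# network with a feedback cycle by iterating signal propagation until the
# activations settle, rather than making a single feedforward pass.
import math

# weights[i][j]: connection from neuron j to neuron i; note the 0 <-> 1 cycle
weights = [[0.0, 0.4],
           [0.6, 0.0]]
external = [1.0, 0.0]   # external input driving each neuron
act = [0.0, 0.0]        # initial activations

for _ in range(50):     # repeated propagation lets the cycle settle
    act = [math.tanh(external[i] + sum(weights[i][j] * act[j]
                                       for j in range(2)))
           for i in range(2)]

print([round(a, 3) for a in act])
```

A single-pass layered evaluation cannot handle the 0 ↔ 1 cycle at all; any feedback-tolerant algorithm must instead cope with signals that keep circulating, as the iteration above does.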