In neural networks, information is represented conceptually in the strengths of the connections between neurons, or between outside signal sources and neurons. In artificial neural networks, connection strengths are typically encoded as the values of connection weights at a neuron's inputs. Each input signal is modulated by its connection weight, which determines how much of that signal is conveyed by the synapse.
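
As a rough illustration, here is a minimal Python sketch (the signal and weight values are made up for the example, not taken from any particular library or model) of how connection weights modulate a neuron's input signals:

    # Minimal sketch: each input signal is scaled (modulated) by its
    # connection weight, determining how much of it the synapse conveys.
    inputs  = [0.5, 1.0, 0.25]     # hypothetical input signals
    weights = [0.5, -0.25, 1.0]    # hypothetical connection weights

    weighted  = [x * w for x, w in zip(inputs, weights)]
    net_input = sum(weighted)      # the neuron's combined input

    print(weighted)    # [0.25, -0.25, 0.25]
    print(net_input)   # 0.25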

. . . . . . .
Connection-Strength vs. Weight Value

In conventional neural network models, weight values are numbers that represent both the strength of a connection and its type (inhibitory or excitatory). The connection type can be accommodated in a variety of ways, but it is usually represented by the sign of the weight value: a negative weight represents an inhibitory connection, and a positive weight represents an excitatory one.

There is some confusion associated with these math-centric models, which use signed, floating-point numbers to represent type:
  • Inhibitory connection-strengths are enhanced (made stronger) by being decreased (made more negative). Conversely, they are reduced by being increased (made less negative).

  • Excitatory connection-strengths, on the other hand, are enhanced by being increased (made more positive), and reduced by being decreased (made less positive).

Stated more generally: a connection-strength is reduced when its weight-value moves closer to zero, and increased when its weight-value moves farther from zero.

Thus, when a weight-value is negative, increasing it (making it more positive) reduces the strength of the connection between the pre- and post-synaptic neurons.

One more way to say it: in math-centric systems, the strength of the connection is represented by the absolute value (magnitude) of the weight, regardless of its sign (negative or positive).
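
To make the convention concrete, here is a small Python sketch (the helper names are illustrative only, not part of Netlab™ or any other framework) in which the strength is the weight's absolute value and the sign carries the connection type:

    import math

    def connection_strength(w):
        # Strength is the magnitude of the weight, regardless of sign.
        return abs(w)

    def connection_type(w):
        # The sign carries the type: negative = inhibitory,
        # non-negative = excitatory.
        return "inhibitory" if w < 0 else "excitatory"

    def strengthen(w, delta):
        # Strengthening moves the weight farther from zero, in the
        # direction of its existing sign.
        return w + math.copysign(delta, w)

    w = -0.4                        # an inhibitory connection
    print(connection_type(w))       # inhibitory
    print(connection_strength(w))   # 0.4
    print(strengthen(w, 0.1))       # -0.5 (more negative = stronger inhibition)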

More about weight-values vs. connection-strengths, and how Netlab™ accommodates this existing convention, can be found in the section titled "Netlab's Compatibility Mode" in this glossary's entry for weights.

. . . . . . .
Alternative Representations

There are many ways to represent connection-strengths in neural networks. Any mechanism that can be altered to mimic the effect of changing the strength of a connection between two entities can be used to represent connection-strengths. A variable resistor is probably the most obvious example. These days, in fact, much excitement is being generated around the possibility of using memristors as weights in neural network circuits.

That said, it is important to understand that any mechanism that modulates the strength of interactions between two things can be thought of, in a general sense, as a provider of connection-strengths. To give an extreme example, a gate between two fenced areas can be left half open, thereby modulating the connection strength between the two areas by half. This reduces the flow of livestock between them.
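
As a loose sketch of that general idea (the classes below are hypothetical, not Netlab™ code), any mechanism that exposes a "how much gets through" knob can play the role of a connection-strength:

    # Two unrelated mechanisms, each modulating how much of a signal
    # passes between two points -- both act as connection-strengths.

    class VariableResistor:
        def __init__(self, conductance):
            self.conductance = conductance      # higher = stronger connection

        def transmit(self, signal):
            return signal * self.conductance

    class Gate:
        def __init__(self, fraction_open):
            self.fraction_open = fraction_open  # 0.0 closed .. 1.0 fully open

        def transmit(self, flow):
            return flow * self.fraction_open

    half_open = Gate(0.5)
    print(half_open.transmit(100))  # 50.0 -- half the flow gets through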

Also: Weight, Inhibitory Input, Excitatory Input

