A set of constructs and methods introduced and described in the book Netlab Loligo will improve the ability of systems constructed with them to adapt to current short-term situations, and to learn from those short-term experiences over the long term.
How do we, as biological organisms, manage to keep so much finely detailed information in our brains about how to respond to any given situation? That is, how do we manage to keep countless tiny intricacies stored away in our “subconscious” ready to be called upon at just the right time, right when we need them in the present moment?
According to this theory of learning, the answer to that question is: We don't.
Instead, our long-term connections (those that immediately drive our responses at all times) are only concerned with getting us started in any given “present.” Responses stored in long-term connections start us along a trajectory that makes it easier to learn whatever short-term, detailed responses are needed for any given detailed situation.
Connections that drive short-term responses, on the other hand, form spontaneously in the moment, and quickly adapt to whatever present situation we currently find ourselves in. Just as significantly, connections driving short-term responses tend to dissipate as quickly as they form. This theory essentially says that each connection in the brain that drives responses (physical or internal) includes multiple distinct connection strengths, each of which increases and decreases at a different rate.
Multi-temporality is achieved in Netlab's simulation environment by providing multiple weights per connection point (i.e., synapse); these are referred to as multitemporal [Note 1] synapses. Each of the multiple weights associated with a given synapse represents a connection strength, and each can be set to acquire and retain its strength at a different rate from the others. The methods also specify weight-to-weight learning, a means of teaching a given weight in the set using the values of other weights from the same connection. Together these constructs provide all the functionality required to model the theory of learning discussed above.
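To make this arrangement concrete, here is a minimal sketch of such a connection point in Python. This is not code from the book: the class name, the three-way fast/medium/slow split, the specific rates, and the summation rule for combining weights are all illustrative assumptions.

```python
# A minimal sketch (not the book's actual implementation) of a
# multitemporal synapse: one connection point holding several weights,
# each with its own learning rate and forget rate.

class MultitemporalSynapse:
    def __init__(self):
        # (learning_rate, forget_rate) per weight: fast, medium, slow.
        # These values are purely illustrative.
        self.rates = [(0.50, 0.30), (0.05, 0.01), (0.001, 0.0001)]
        self.weights = [0.0, 0.0, 0.0]

    def effective_strength(self):
        # The synapse's momentary contribution is taken here as the
        # sum of all of its weights (one plausible combination rule).
        return sum(self.weights)

    def update(self, error_signal):
        # Every weight sees the same teaching signal, but each
        # acquires and sheds strength on its own timescale.
        for i, (learn, forget) in enumerate(self.rates):
            self.weights[i] += learn * error_signal
            self.weights[i] -= forget * self.weights[i]
```

The point of the structure is that the slow weight changes little from one moment to the next, while the fast weight swings quickly and fades just as quickly.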
Following is a graphic excerpted from the book Netlab Loligo, showing a neuron that contains three different weights for each connection point. Each weight is given its own learning algorithm, with its own learning rate and forget rate.
A variety of analogies and metaphors are employed in the book to relate and clarify the underlying concept of this theory of learning. They range from an analogy based on the electron probability cloud that forms around a molecular nucleus, to the following explanation, which asks you to step back and observe your own responses as a means of noticing how you spontaneously adapt to short-term response needs.
Giving Adaptive Systems A “Present”
If you imagine yourself in any present moment, you will be better able to understand the motivation, theory, and driving goal behind the multitemporal synapse mechanisms described above. Imagine yourself, for example, sitting in a pub or café with some friends, engaged in a conversation. In the present moment, you are quickly acquiring new short-term “correct” responses. You don't realize it, but the connection-based responses you are learning in the here-and-now are also fading away just as quickly. It is like a rowboat in a stream that is constantly flowing against it: forward effort (learning, in this analogy) is quickly overtaken by the current pushing the boat back toward its starting location.
In fact, your short-term learned responses (memories) don't last long at all: from just a second or two to perhaps a minute or two. That is okay, because your short-term responses are just as quickly acquired, sometimes taking less than a second to form. This is because the blunt essence of past response-connections, which has been impressed into longer-term connection mechanisms at the same synapses, is driving the beginnings of a “correct” response. That is, the long-term weighted responses are immediately moving muscles and changing internal activation levels in the present, even before your short-term response connections have begun to form. This “nudge in the right direction” by blunt (or coarse) long-term learned responses permits new, short-lived, short-term responses to be acquired very quickly “in the moment.” This, in turn, produces a short-term “present” response capable of much more finely detailed contextual responses than would be possible if the details were all kept in a single, long-term connection space.
Stated plainly, multitemporal synapses, combined with weight-to-weight learning, provide a highly detailed set of “fine,” or “specific,” responses that are learned “in the moment,” driven by already-underway “coarse,” or “blunt,” responses maintained for the same connection points in longer-term connection mechanisms. This capability, made achievable by the multi-weight synapses and neuron components discussed above, is extremely useful and practical for adaptive systems, such as industrial robotics.
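A hedged sketch of this “nudge in the right direction” dynamic, reusing the multi-weight idea from above (the additive combination and all numeric values are assumptions for illustration, not the book's specification):

```python
# Illustrative: a slow weight, learned long ago, immediately drives a
# roughly correct response, while a fast weight learns the fine detail
# of the present moment within a few updates and then fades again.

slow = 0.6          # blunt long-term response, already in place
fast = 0.0          # short-term weight, empty at the start of the moment
target = 0.75       # the finely detailed "correct" response right now

for step in range(5):
    response = slow + fast          # the slow weight gets us started
    error = target - response
    fast += 0.5 * error             # the fast weight adapts in the moment
    print(step, round(response, 3))
# The response climbs from 0.6 toward 0.75 within a handful of steps;
# once the moment passes and updates stop, a forget rate would return
# the fast weight toward zero.
```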
While some of the mathematically oriented works of the past two decades allude to the above goal, for example by adopting titles like “Long Short-Term Memory (LSTM),” a careful reading reveals that none of them actually provides for it in any substantial way.
The underlying principle is simple. In order to produce an internal representation of any real-world phenomenon, a brain must be able to represent, or model, that phenomenon's most essential characteristics. In order to produce an internal representation of a present moment, the same rule applies. Certainly one of the most relevant and obvious characteristics of a present moment is that it goes away, and this characteristic must be represented internally. To represent this particular characteristic of a present-moment phenomenon, its internal representation must include a component that continually falls away. The faster weights within multitemporal synapses provide this representational facility for the immediate present, while longer-term present moments can be represented by connection weights that decay more slowly.
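As a rough illustration of that continually falling-away component, the loop below (with purely illustrative forget rates) shows how weights that learned the same event decay at different speeds: the fast weight is nearly gone within ten steps, while the slow weight barely changes.

```python
# Illustrative only: how a single learned value decays at different rates.
forget_rates = {"fast": 0.5, "medium": 0.05, "slow": 0.001}
weights = {name: 1.0 for name in forget_rates}  # one learning event

for step in range(1, 11):
    for name, rate in forget_rates.items():
        weights[name] *= (1.0 - rate)
    print(step, {n: round(w, 4) for n, w in weights.items()})
# After 10 steps the fast weight has fallen to about 0.001 of its
# original value, while the slow weight retains roughly 99% of it.
```

These mechanisms provide: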
- The ability to adapt and respond to an intricately detailed present moment, without cluttering long-term memory with limitless numbers of intricate short-term details.
- The ability to extract, from each of many present moments, a long-term connectome that represents a blunt starting response, making adaptation to future, similar present moments faster and easier.
- The ability to learn continuously, without ever needing to externally control when learning is turned on or off.
- The learning method defined as part of multitemporal synapses, weight-to-weight learning, imposes no restrictions on the amount or type of feedback used in a continuously learning system (see the sketch after this list). This has not been true of past learning techniques.
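The book defines the actual weight-to-weight learning rule in detail; as a hypothetical sketch of the general idea, the snippet below nudges a slow weight toward a fast weight at the same synapse, so that repeated short-term responses gradually impress a blunt long-term trace. The function name, transfer rate, and direction of transfer are all assumptions made for illustration.

```python
# Hypothetical sketch of weight-to-weight learning: the slow weight at a
# synapse is taught from the fast weight's current value, so repeated
# short-term responses gradually impress themselves into long-term strength.

def weight_to_weight_update(fast_w, slow_w, transfer_rate=0.01):
    # Move the slow weight a small step toward the fast weight.
    return slow_w + transfer_rate * (fast_w - slow_w)

fast, slow = 0.8, 0.0   # a strong in-the-moment response, no long-term trace yet
for _ in range(200):    # many repetitions of similar present moments
    slow = weight_to_weight_update(fast, slow)
print(round(slow, 3))   # ~0.69: the blunt essence has been retained long-term
```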
The book provides a full explanation of the observations about biological learning and temporality that have led to these mechanisms and methods.
=======================
[Note 1] On occasion the word polytemporal has been used to refer to these connection points, but going forward they will be referred to as multitemporal, in deference to a more traditional writing style.
YEAH baby! The second of the innovations described in the book Netlab Loligo has been awarded a patent (#7,904,398), covering methods and devices implementing multitemporal synapses.