
Unless otherwise indicated, all glossary content is:
(C) Copyright 2008-2022
Dominic John Repici
~ ALL RIGHTS RESERVED ~
No part of this Content may be copied without the express written permission of Dominic John Repici.
Memory

 
Memory is any measurable change in a system that has a direct causal relationship to an experienced event. In other words, memory is any persistent change to a system (i.e., to its properties, characteristics, or attributes) that is caused by an externally experienced event.

Memory (the caused change to the characteristics of a system) can be permanent or temporary. In order to be memory, the change must remain for some period after the causal event stops.



In computers

In computers, memory is represented and implemented as a group of cells, each capable of holding a representation of a Boolean (logical) 1 or 0. The state of each cell is usually represented as a voltage, such as +5 volts for 1 (true) and 0 volts for 0 (false). Because these values admit no valid intermediate states, they are binary; hence the memory cells represent binary bits. A 1 or 0 can be written into an individually accessible cell, and the state of that specific cell can be polled at a later time.

Note that what makes computer memory valuable is not simply that it can hold logical bits (which are, in turn, capable of encoding information), but that each bit is individually addressable. That is, a group of memory cells has an associated behavior that must also be implemented in order to make the memory usable. In essence, this behavior (repeatable addressing of individual bits within the collection) is just as important to, and therefore just as much a part of, the more general concept of memory as the storage itself. The combination of storage with the behavior of individual addressing is a very good analogy for learning.
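The two ingredients described above, persistent binary cells plus repeatable individual addressing, can be sketched in a few lines of Python. The class name and interface here are illustrative, not taken from any real system:

```python
# A minimal sketch (names are invented for illustration): memory as a group
# of binary cells combined with the behavior of individual addressing.

class BitMemory:
    """A group of cells, each holding a Boolean 1 or 0, individually addressable."""

    def __init__(self, size):
        self.cells = [0] * size  # each cell persists until rewritten

    def write(self, address, bit):
        # Cells are binary: no intermediate value is valid.
        if bit not in (0, 1):
            raise ValueError("cells are binary: only 0 or 1 is valid")
        self.cells[address] = bit

    def read(self, address):
        # Polling a cell later returns the state caused by the earlier write:
        # a persistent change that outlasts the causal event.
        return self.cells[address]

mem = BitMemory(8)
mem.write(3, 1)
print(mem.read(3))  # → 1
print(mem.read(4))  # → 0
```

Without the addressing behavior (`read`/`write` by cell), the same list of cells could still hold bits, but it could not be used as memory.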



In biology

Memory in biological systems is also implemented through behavior. In the case of populations of individual cells, memory is achieved through attrition, which is the base mechanism in the process of adaptation. Attrition and adaptation also produce memory at the higher level, in groups of organisms, which act as individual cells within societies and civilizations. For example:

Example of non-neuronal memory in biological systems — Over generations, predators perceive groups of prey animals (schools of fish, flocks of birds, herds of beasts) as a single large adversary, causing a learned grouping behavior in the prey-species as a whole.

In the above example, prey animals that do not stay close to the group are picked off by predators, lowering their odds of reproducing. Over time this leaves only those that exhibit a tendency to stay close to others in the group. One valid concern is that the learning group in this example consists of individuals that themselves have neurons. While the prey animals do have neurons, those neurons are not the driving factor producing the memory in the larger, corporate organism; still, the concern deserves a direct answer.

To address this concern, we may consider similar learning behaviors in groups of organisms that have no neurons at all. The book "Netlab Loligo" presents one such scenario ("A Seven Step Explanation," starting on page 43).




The example from the book (see the fragment of fig. 3.1, above) considers the generational dynamics of a group of single cells. Each cell in the group has a propulsion mechanism that changes speed based on the amount of nutrients detected in the surrounding environment. In that discussion, it can be seen that the cells will learn a variety of behaviors as a group, based on generational attrition. They will even learn to "congregate" based on similarities in their propulsion responses to nutrients.
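The generational dynamics described above can be sketched as a toy selection loop. This is not the book's model; it is a minimal illustration in which every parameter (population size, cull rate, mutation noise, the "useful" response value) is invented:

```python
import random

# Toy sketch of attrition-based memory (all parameters are invented):
# each cell has a heritable propulsion response to nutrients, and cells
# whose response is poorly matched to the environment are culled each
# generation. The surviving population comes to "remember" the environment.

random.seed(0)
TARGET = 0.7  # the propulsion response the environment happens to reward

# Start with a population of random propulsion responses in [0, 1].
population = [random.random() for _ in range(200)]

for generation in range(50):
    # Attrition: the cells whose response is farthest from useful are lost.
    population.sort(key=lambda r: abs(r - TARGET))
    survivors = population[:100]
    # Survivors reproduce, passing on their response with small variation.
    population = [min(1.0, max(0.0, r + random.gauss(0, 0.02)))
                  for r in survivors for _ in range(2)]

mean = sum(population) / len(population)
print(round(mean, 2))  # the group's mean response settles near TARGET
```

No individual cell learns anything here; the persistent change (the memory) exists only in the composition of the group, exactly as in the prey-grouping example.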

Finally, one can also observe similar behavior-based memory formation in plants. While plants have no neurons, they must attract pollinator organisms. Those plants that are relatively more attractive to pollinators have increased odds of reproducing, while those that are less attractive are culled. This scenario does not require neurons on the part of the learning organism, but it does require them in the pollinators. This is because the pollinators must form some internal (first-person) perception of the colors and shapes exhibited by the plants. Insects, via their neurons, associate such perceptions with sustenance. In the beauty and diversity of flowers, we get to glimpse an accurate, though symbolic, representation of their perceptions.

These three examples demonstrate three different configurations for acquiring memory. In the first, there is an effector group (the predators) and a learning group (the prey); both have neurons, but neurons are not required in the learning group. In the second, there is no effector group, and the learning group (the single cells) acquires memory without any neurons being involved in the process. Finally, in the third example, the learning group (the plants) has no neurons, but the acquisition of memories depends on neuronal associative behavior within an effector group (the insects).




In neurobiology

Neuron cells have processes that act, functionally, as individual cells do. That is, a neuron has processes that can, themselves, come and go without the entire cell having to die or reproduce. For example, neurons form synapses, which are essentially information connections from other neurons. These connections can experience adaptation: they can die off completely, or new connections can form. In essence, the synapses themselves take the place of entire cells. This permits the cells, with all their complex and costly machinery, to remain, while their synapses engage in the adaptive processes normally associated with cell death, birth, and development.

Synapses can also model adaptation in a more continuous fashion. Not only can synapses form and die off (taking the place of cell birth and death, respectively); they can also modulate the strength of their interconnection based on adaptive pressures.

At the other extreme, memory and learning based on adaptation via cell death is also seen in biological brains. Most (but not all) such learning occurs at prenatal or early neonatal stages of development, when entire populations of neurons are being born and then dying off.



In Neural Networks

Memory is usually implemented by altering the strengths of synaptic connections between neurons. In traditional artificial neural networks (ANNs), these connections are usually represented by numeric values called weights, or connection-weights. There is no reason, however, that they may not be represented in other ways, including, for example, the conductance or resistance of active or passive elements within an electronic circuit. Memristors may, in fact, be used in this capacity one day.
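The idea of memory as modulated connection-weights can be illustrated with a Hebbian-style update rule ("cells that fire together wire together"). This is a generic textbook rule, not a claim about any particular network; the learning rate and activity pattern below are invented for illustration:

```python
# A minimal sketch of memory as connection-weights: a Hebbian-style update.
# The learning rate and the input pattern are invented for illustration.

def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen each weight in proportion to correlated pre/post activity."""
    return [w + lr * x * post for w, x in zip(weights, pre)]

weights = [0.0, 0.0, 0.0]
pattern = [1.0, 0.0, 1.0]  # presynaptic activity to be "remembered"

# Repeated co-activation leaves a persistent change in the weights:
for _ in range(5):
    post = 1.0  # postsynaptic neuron active alongside the pattern
    weights = hebbian_update(weights, pattern, post)

print([round(w, 1) for w in weights])  # → [0.5, 0.0, 0.5]
```

The weights now echo the experienced pattern, which satisfies the definition at the top of this entry: a persistent, measurable change to the system caused by an experienced event.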



. . . . . . .
Resources

Also: Learning     The Stability-Plasticity Problem     Memristor

 
 
Web-based glossary software: (c) Creativyst, 2001-2022