About:
Exploring new approaches to machine hosted
neural-network simulation, and the science
behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters
a better environment for developing biologically-guided
neural network designs. Author of
an introductory book on the subject titled:
"Netlab Loligo: New Approaches to Neural Network
Simulation". BOOK REVIEWERS ARE NEEDED!
Can you help?
There are many easily guessed advantages to moving consciousness to a machine platform. If, or when, "we" as a society become machine-based beings:
We will be able to move from one body to another with the same ease that we biology-based beings now change vehicles.
We will be able to live and work in the vacuum of space without having to take along a very-hard-to-maintain bubble of pressurized air.
We will be able to travel on a beam of electro-magnetic energy to far-away worlds where spare bodies have been shipped.
And those faraway worlds will not need to have a biosphere, or have bubbles of biosphere constructed/cultivated.
In like fashion, we will be able to move to and from orbits and Lagrange points where bodies have been previously placed.
Our sustenance will not be limited to carbohydrates that require a biosphere in which to grow. Most sources will be harvestable directly from light, heat, and kinetic energy, all of which exist in the vacuum of space.
While this is by no means a complete list, there are likely to be many less-obvious advantages, too. This one, for example:
We will be able to have multiple independent bodies, each with its own short-term situational memories, working off of a single set of long-term experiential memories that have been accumulated over time.
Consider multiple bodies working in a manufacturing environment, each working off of a single experienced individual's learning and acquired expertise in manufacturing processes. Each maintains its own short-term memories, which form and decay quickly in response to the fine-grained details of its immediate, individual situation.
It may even be possible for the single individual's long-term experiential memory to continue learning from each body's short-term situational connections as they form and decay in response to their own current situations.
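A toy sketch of this arrangement may help make it concrete. All class names, sizes, and rates here are illustrative assumptions, not anything from an actual Netlab design: several "bodies" share one slowly-updated long-term weight set, while each body keeps its own fast-forming, fast-decaying short-term weights.

```python
import numpy as np

N = 8  # number of connections (toy size)

class SharedLongTermMemory:
    """One slowly-learned weight set shared by all bodies."""
    def __init__(self, n, learn_rate=0.01):
        self.weights = np.zeros(n)
        self.learn_rate = learn_rate

    def consolidate(self, short_term_weights):
        # absorb only a small fraction of each body's short-term response
        self.weights += self.learn_rate * (short_term_weights - self.weights)

class Body:
    """Each body has its own blank-slate short-term weight set."""
    def __init__(self, shared, n, decay=0.5):
        self.shared = shared
        self.short_term = np.zeros(n)
        self.decay = decay

    def respond(self, situation):
        # short-term weights form quickly around the immediate situation...
        self.short_term += situation
        response = self.shared.weights + self.short_term
        # ...and decay quickly once the situation has passed
        self.short_term *= self.decay
        return response

shared = SharedLongTermMemory(N)
bodies = [Body(shared, N) for _ in range(3)]
for body in bodies:
    body.respond(np.ones(N))
    shared.consolidate(body.short_term)  # long-term memory keeps learning
```

The design choice being illustrated is simply the separation of state: the situational memory lives in each `Body`, while the experiential memory lives once, in the shared object, and accumulates gradually from every body's activity.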
"Recent observations have thoroughly established that order in groups of small particles, easily visible under a low-power microscope, can be caused spontaneously by Brownian-like movement of smaller spheres that in turn is caused by random molecular motion." — from: a paper by Frank Lambert at Entropysite.
. . . . . . .
References:
Adams, M.; Dogic, Z.; Keller, S.L.; Fraden, S. Nature 1998, 393, 349-352 and references therein.
Laird, B. B. J. Chem. Educ. 1999, 76, 1388-1390.
Dinsmore, A. E.; Wong, D. T.; Nelson, P.; Yodh, A. G. Phys. Rev. Lett. 1998, 80, 409-412.
Scientists have developed measurement techniques for observing spike-timing-dependent associative memory formation that are much more precise than previous methods. In the process, they have found that a fairly established proposition of STDP (spike-timing-dependent plasticity) theory may not always be correct.
Specifically, it has long been held that a spike preceding a spike on a related synapse strengthens the association, while the same spike trailing the related synapse's spike weakens it (i.e., tends it toward extinction). The experimenters found, however, that connections in a specific class of excitatory neurons were strengthened regardless of the firing order (leading or trailing) of the two connections.
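The classical pair-based STDP rule being called into question can be written down directly. This is a standard textbook form; the parameter values here are illustrative, not taken from the experiments discussed above:

```python
import math

def stdp_weight_change(dt, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Classical pair-based STDP.

    dt = t_post - t_pre in milliseconds. Positive dt (presynaptic
    spike leads) strengthens the synapse; negative dt (presynaptic
    spike trails) weakens it. The finding discussed above suggests
    some excitatory connections strengthen for either sign of dt,
    which this classical rule cannot reproduce.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)    # potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)  # depression
    return 0.0
```

Under this rule, `stdp_weight_change(5.0)` is positive while `stdp_weight_change(-5.0)` is negative; the reported result would instead require a positive change in both cases for the neuron class in question.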
While we're on Chalmers interviews, here are ten minutes of him talking about his "hard problem" of consciousness. His description of the hard problem has finally moved us forward, off of the obfuscatory kludge that was Turing's "test." Turing merely tested the ability of an algorithm to fool a person on the other side of a screen into thinking it was conscious. That test has led to ever more complex implementations of programs like ELIZA, which, at their core, are the antithesis of consciousness.
Here, presented for your enjoyment, is David Chalmers:
One of the primary problems that traditional connectionists, and their neural networks, have been unable to solve can be stated like this:
How do biological learners deal with the dichotomy of needing to provide extremely detailed responses to any given situation, while possessing non-infinite resources with which to hold all those details?
Netlab's take on this problem has been quite different from the traditionally espoused theories and solutions [Note].
In a nutshell, Netlab recognizes that natural neural networks don't try to hold every single detail ever experienced. Instead, like all biological solutions, they act as the ultimate realists and make the best of their real circumstances. Since they can't keep every single detail they've ever learned about how to respond to each new situation, they instead adapt to each new situation. How they do that can be summed up in three steps:
The present moment requires a blank slate every time we encounter it. That is, it requires memories (connection strengths) that form very quickly in response, and then decay very quickly when we no longer need them. The only alternative is to keep every tiny detail of every response to past "present moments" we've ever experienced.
In order for blank short-term weights to adapt and respond quickly to new experiences, we must draw upon longer-term learning. We have found, however, that a new response can't simply be inserted into existing responses represented in the strengths of connections between neurons. Each learned response maintained in long-term connections must be taught by being interleaved with other responses, where each response-presentation can have only a slight effect on long-term connections.
This problem is solved by breaking up the speed of learning, and of decay, into two (or, likely, more) different temporal spaces. The first is a short-term weight-space, which decays very quickly, and which uses responses begun by long-term responses to promote very fast learning (think hand-over-hand prompting). The other side of this coin is a set of long-term connection strengths, at the respective connections, which learn a little bit from each short-term response re-learned, as needed, in the short-term weights.
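The two temporal weight-spaces described above can be sketched as follows. This is a toy illustration under assumed names and learning/decay rates, not Netlab's actual implementation: the fast weights form and decay quickly around each presented response, while the slow weights absorb a small amount from each re-learned fast response.

```python
import numpy as np

class MultitemporalSynapses:
    """Toy two-timescale weights: fast weights form and decay quickly;
    slow weights learn a little from each fast response."""
    def __init__(self, n, fast_rate=0.5, fast_decay=0.3, slow_rate=0.01):
        self.fast = np.zeros(n)   # short-term, blank-slate weights
        self.slow = np.zeros(n)   # long-term, slowly accumulated weights
        self.fast_rate = fast_rate
        self.fast_decay = fast_decay
        self.slow_rate = slow_rate

    def step(self, target):
        # long-term weights "prompt" the response (hand-over-hand);
        # fast weights quickly learn whatever error remains
        error = target - (self.slow + self.fast)
        self.fast += self.fast_rate * error
        # slow weights take only a slight step from each fast response
        self.slow += self.slow_rate * self.fast
        # short-term weights decay quickly when no longer reinforced
        self.fast *= (1.0 - self.fast_decay)

syn = MultitemporalSynapses(4)
target = np.array([1.0, 0.5, 0.0, -0.5])
for _ in range(500):
    syn.step(target)
# after many interleaved presentations, the slow weights carry
# most of the response and the fast weights have decayed away
```

Note the interaction the steps describe: early on, the fast weights do nearly all the work; as presentations repeat, the slow weights take over, and the fast weights shrink toward zero because there is less and less residual error for them to learn.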
. . . . . . .
Note
Though the worst of the church have been trying (perhaps a little too hard?) to re-write the recent history (the past 20 years or so), a cursory reading of the non-back-filled literature easily demonstrates that the field had been mostly oblivious to this solution until after multitemporal synapses were introduced. [Plagiarism Index], e.g.,
Complementary Learning Systems (CLS)
What else can one do, but document the behavior, and hope the truth eventually prevails?