About:
Exploring new approaches to machine-hosted neural-network simulation, and the science behind them.
Your moderator: John Repici
A programmer who is obsessed with giving experimenters a better environment for developing biologically-guided neural network designs. Author of an introductory book on the subject, titled "Netlab Loligo: New Approaches to Neural Network Simulation". BOOK REVIEWERS ARE NEEDED! Can you help?
Linguists have recently discovered [1] that almost all words are metaphorical at their base, and some people (e.g., me) posit that they all are. Though speculative, it is at least conceivable that even the sub-language signaling in the brain, which eventually leads to language, is also metaphorical. Consider that the bell may become a metaphor for food in the mind of Pavlov's dog.
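(As a side note for the programmers reading: the bell-for-food association itself is easy to sketch with the standard Rescorla-Wagner learning rule. The little Python snippet below is only an illustration of how a neutral stimulus comes to stand in for food; it is not part of the Netlab project, and the learning-rate and asymptote values are arbitrary assumptions.)

# A minimal sketch of classical (Pavlovian) conditioning using the standard
# Rescorla-Wagner update rule. Illustration only: it shows how a neutral
# stimulus (the bell) gradually acquires associative strength for food.

ALPHA = 0.3   # learning rate (salience of the bell); arbitrary value
LAMBDA = 1.0  # maximum associative strength the food (US) can support

def condition(trials: int) -> list[float]:
    """Pair the bell with food for `trials` trials; return the bell's strength over time."""
    v_bell = 0.0
    history = []
    for _ in range(trials):
        # Prediction error: how much the food surprised the animal this trial.
        error = LAMBDA - v_bell
        v_bell += ALPHA * error
        history.append(v_bell)
    return history

if __name__ == "__main__":
    for trial, strength in enumerate(condition(10), start=1):
        print(f"trial {trial:2d}: bell -> food association = {strength:.3f}")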
Language is also able to convey ambiguity about the concepts it expresses. The word “life,” for example, can mean life-biology, or life-consciousness. Up until now, it has been perfectly acceptable to use these two meanings interchangeably. There simply has never been an instance of consciousness that existed outside of a biological body; at least, none that we could directly experience with our physical senses.
Merry Christmas, Happy Hanukkah, and Happy New Year. May your days be filled with happiness, love, and joy this Christmas season, and may your new year be a blessing to you and others.
The Glossary entry for William of Ockham here at the site has a new section titled “In Other Words?”. This new section attempts to provide a nutshell explanation of William's original advice more accurately than the nutshell statement commonly used today. The advice in question is commonly referred to as Ockham's Razor. Here's the suggested new nutshell definition from the glossary entry.
"Always express things using the most general representation possible for the context in which the representation is being used."
The glossary entry goes on to clarify that this is just an attempted improvement over the vague statement currently in fashion, and it welcomes other suggestions.
The book on the Netlab project often returns to the notion that learning is merely a form of adaptation and that, conversely, adaptation is merely a form of long-term learning. This, in turn, all fits under the umbrella notion that memory is behavior.
The idea that learning is adaptation, and that adaptation is learning, is put forward as a possibility, mainly as a better means of discussing the concepts. This (in my opinion) provides a clearer and more unified understanding of how memory works in biological organisms. It could be very wrong, of course, so it's important to describe it properly. That way it, and not a straw man, can be critiqued. This article represents one such attempt to properly describe it...
Batesian Mimicry
Batesian mimicry occurs when a non-noxious, non-poisonous plant or animal projects the appearance of a poisonous one, allowing it to avoid being eaten by predators.
The logic goes that predators which have partaken of the poisonous organism and survived would have become very sick, and would have learned to avoid ingesting anything that appears to be that organism in the future. This includes organisms that are not poisonous, but merely look, or act, like the poisonous one.
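To tie this back to the learning-is-adaptation theme, here is a small, purely illustrative Python sketch of that avoidance logic. The predator class, the appearance tuples, and the similarity threshold are invented stand-ins of mine, not anything from the book; the point is only that a single bad meal, plus appearance-based generalization, is enough to protect the harmless mimic.

# A hedged sketch of the learned-avoidance logic described above: a predator
# that gets sick after eating a noxious prey item learns to avoid anything
# whose appearance is similar, which protects harmless Batesian mimics too.

def similarity(a: tuple, b: tuple) -> float:
    """Crude appearance similarity: fraction of matching features."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

class Predator:
    def __init__(self, avoid_threshold: float = 0.6):
        self.bad_experiences = []          # appearances that preceded sickness
        self.avoid_threshold = avoid_threshold

    def will_eat(self, appearance: tuple) -> bool:
        # Refuse anything that looks too much like something that made it sick.
        return all(similarity(appearance, bad) < self.avoid_threshold
                   for bad in self.bad_experiences)

    def eat(self, appearance: tuple, poisonous: bool) -> None:
        if poisonous:
            self.bad_experiences.append(appearance)   # learned aversion

model_prey   = ("orange", "black-stripes", "slow")   # the noxious model
mimic_prey   = ("orange", "black-stripes", "fast")   # harmless Batesian mimic
cryptic_prey = ("brown", "plain", "fast")            # harmless, no mimicry

predator = Predator()
predator.eat(model_prey, poisonous=True)             # one bad meal...

print(predator.will_eat(mimic_prey))    # False: the mimic is now avoided too
print(predator.will_eat(cryptic_prey))  # True: no resemblance, still on the menu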
Dennis Ritchie, the creator of the C programming language, died on Saturday after a long illness. The language, arguably, changed the world. It can be found at the heart of most modern computer applications, operating systems, and successor programming languages.
Dennis Ritchie, creator of the C programming language
This article provides a layman's-level discussion of neural network technology within the framework of a rough historical sketch. Neural networks are described by presenting an overview of just one of the many routes the field has taken over the last half-century or so.
It is not for those interested in a full history of neural networks (i.e., connectionism). It is just a quick backgrounder, which should suffice to give readers a little perspective on how we got from "there" to "here." The actual history of this field is storied, and sometimes even checkered and controversial. I highly recommend that anybody who is interested get a good book or two on the subject.
This entry will also serve as a place to accumulate links to resources and information on the subject of neural networks and their history at this layman's level.
Scientists at UC Berkeley have taken brain scans of subjects in an fMRI machine while they watched a movie clip. They then reconstructed the movie the subjects were watching using only the brain scan data, and a database of 18 million seconds of random video gleaned from the web.
First, they used fMRI imaging to measure brain activity in visual cortex as a person looked at several hours of movies. They then used those data to develop computational models that could predict the pattern of brain activity that would be elicited by any arbitrary movies (i.e., movies that were not in the initial set). Next, they used fMRI to measure brain activity elicited by a second set of movies that were also distinct from the first set. Finally, they used the computational models to process the elicited brain activity, and reconstruct the movies in the second set.
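For readers who want a feel for the mechanics, here is a highly simplified Python sketch of that pipeline, using synthetic data. The linear encoding model, the correlation-based scoring, and the toy "frames" are my own simplifying assumptions, not the Gallant Lab's actual methods or code; the sketch only mirrors the four steps described above, ending with the averaging that produces the Average High Posterior reconstruction mentioned in the caption below.

# A highly simplified sketch of the reconstruction logic described above:
# 1) fit an encoding model from clip features to measured voxel activity,
# 2) predict the activity each library clip *would* evoke,
# 3) rank library clips by how well that prediction matches the new brain data,
# 4) average the top-ranked clips (the "Average High Posterior" reconstruction).

import numpy as np

rng = np.random.default_rng(0)
n_train, n_library, n_features, n_voxels = 200, 1000, 50, 300

# Stand-ins for movie-clip features and fMRI voxel responses (synthetic data).
train_features = rng.normal(size=(n_train, n_features))
true_weights   = rng.normal(size=(n_features, n_voxels))
train_activity = train_features @ true_weights + rng.normal(scale=0.5,
                                                            size=(n_train, n_voxels))

# Step 1: encoding model (plain least squares, for brevity).
weights, *_ = np.linalg.lstsq(train_features, train_activity, rcond=None)

# Step 2: predicted activity for every clip in a large "prior" library.
library_features = rng.normal(size=(n_library, n_features))
library_frames   = rng.random(size=(n_library, 32, 32))     # toy frames to average
predicted        = library_features @ weights

# Step 3: measured activity for a held-out clip; score each library clip by
# correlation between its predicted activity and the measurement.
test_features = rng.normal(size=(1, n_features))
test_activity = test_features @ true_weights + rng.normal(scale=0.5, size=(1, n_voxels))
scores = np.array([np.corrcoef(p, test_activity[0])[0, 1] for p in predicted])

# Step 4: AHP reconstruction = average of the 100 best-matching library clips.
top100 = np.argsort(scores)[-100:]
ahp_reconstruction = library_frames[top100].mean(axis=0)
map_reconstruction = library_frames[scores.argmax()]        # single best clip
print(ahp_reconstruction.shape, map_reconstruction.shape)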
The amount of new understanding this could allow us to gather about mind-brain correlates and first-person knowledge should be considerable. If this lives up to the hype, a lot of new research ideas should come out of it. Keeping fingers crossed here.
In the above clip, the movie that each subject viewed while in the fMRI is shown in the upper-left position. Reconstructions for three subjects are shown in the three rows at the bottom. All of these reconstructions were obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. The reconstruction at far left is the Average High Posterior (AHP). The reconstruction in the second column is the Maximum a Posteriori (MAP). The other columns represent less likely reconstructions. The AHP is obtained by simply averaging over the 100 most likely movies in the reconstruction library. These reconstructions show that the process is very consistent, though the quality of the reconstructions does depend somewhat on the quality of the brain activity data recorded from each subject. [source: Gallant Lab (see resources below)]