Cognitive Architecture for Direction of Attention Founded on Subliminal Memory Searches, Pseudorandom and Nonstop

arXiv.org Artificial Intelligence

To explain how a brain works logically, human associative memory is modeled with logical and memory neurons corresponding to standard digital circuits. The resulting cognitive architecture incorporates basic psychological elements such as short-term and long-term memory. Novel to the architecture are memory searches using cues chosen pseudorandomly from short-term memory. Recalls, alternated with sensory images at a rate of many tens per second, are analyzed subliminally as an ongoing process to determine a direction of attention in short-term memory.
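
A minimal sketch of that search loop may make the idea concrete: cues are drawn pseudorandomly from short-term memory, each cue triggers an associative recall from long-term memory, and recalls are scored against the current sensory image to pick a direction of attention. The dictionary memory, the word-overlap score, and the fixed cycle count below are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of the attention-direction loop described above. The dictionary
# memory, the word-overlap score, and the fixed cycle count are illustrative
# assumptions, not the paper's formulation.
import random

LONG_TERM = {                      # long-term memory: cue -> associated recall
    "coffee": "kitchen this morning",
    "deadline": "unfinished report",
    "rain": "grey sky and rain on the window",
}

def search_step(short_term, sensory_image, rng):
    """One subliminal cycle: pseudorandom cue -> associative recall -> comparison."""
    cue = rng.choice(short_term)                 # cue chosen pseudorandomly from STM
    recall = LONG_TERM.get(cue, "")              # recall retrieved from LTM
    score = len(set(recall.split()) & set(sensory_image.split()))
    return cue, score                            # how well the recall matches the senses

def direct_attention(short_term, sensory_image, cycles=30, seed=0):
    """Run many cycles (standing in for 'many tens per second') and pick a winner."""
    rng = random.Random(seed)
    results = [search_step(short_term, sensory_image, rng) for _ in range(cycles)]
    return max(results, key=lambda cue_score: cue_score[1])[0]

# The cue whose recall best matches current sensation becomes the focus of attention.
print(direct_attention(["coffee", "deadline", "rain"], "dark clouds and rain outside"))
```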


Crystal-clear memories of a bacterium

Science

Information storage in DNA is the cornerstone of biology. Interestingly, prokaryotes can store information at specific loci in their DNA to remember encounters with invaders, such as bacteriophages (viruses that infect bacteria). Short samples of invader DNA are inserted as "spacers" into the CRISPR array. The array thus holds samples of invading DNA at a defined locus, where they are recognized and further processed by Cas proteins. This enables bacteria to respond adaptively and specifically to invading DNA they have encountered before.
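
As a loose computational analogy, the spacer array behaves like a list of short sequence samples that incoming DNA is checked against. The sketch below is a toy model under that analogy; the spacer length and exact-substring matching are simplifying assumptions, not actual CRISPR biochemistry.

```python
# Toy model of the CRISPR "adaptive memory" described above: short samples of
# invader DNA are stored as spacers, and later sequences are checked against them.
# The spacer length and exact-substring matching are simplifying assumptions.
SPACER_LEN = 8

class CrisprArray:
    def __init__(self):
        self.spacers = []                              # defined locus holding past samples

    def acquire(self, invader_dna):
        """Insert a short sample of the invader as a new spacer."""
        self.spacers.insert(0, invader_dna[:SPACER_LEN])   # newest spacers lead the array

    def recognizes(self, dna):
        """Return True if any stored spacer matches the incoming sequence."""
        return any(spacer in dna for spacer in self.spacers)

array = CrisprArray()
array.acquire("ATGCCGTAACGTTAGC")                 # first encounter: remember the phage
print(array.recognizes("TTATGCCGTAGGCATT"))       # later encounter with related DNA -> True
print(array.recognizes("GGGGCCCCAAAATTTT"))       # unfamiliar DNA -> False
```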


Training a neural network in phase-change memory beats GPUs

#artificialintelligence

Compared to a typical CPU, a brain is remarkably energy-efficient, in part because it combines memory, communication, and processing in a single execution unit: the neuron. A brain also has a vast number of neurons, which lets it handle many tasks in parallel. Attempts to run neural networks on traditional CPUs run up against these fundamental mismatches: only a few operations can be executed at a time, and shuttling data to and from memory is slow. As a result, neural networks have tended to be both computationally and energy intensive.
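
The contrast the article draws is with in-memory computing: in a phase-change memory array, the weights stay put as device conductances, inputs are applied as voltages, and each column's output current is effectively a dot product. The NumPy sketch below only mimics that behaviour numerically; the device model is an illustrative assumption, not the hardware described in the article.

```python
# Conceptual sketch of the in-memory multiply-accumulate that analog devices such
# as phase-change memory perform. NumPy stands in for the physics here; the
# device model is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductance matrix = stored weights (never moved)
v = rng.uniform(0.0, 1.0, size=4)        # input activations applied as voltages

# Ohm's law per cell and Kirchhoff's current law per column: I_j = sum_i v_i * G_ij.
# The computation happens where the weights are stored, so no weight shuffling occurs.
column_currents = v @ G
print(column_currents)                   # one analog step yields the whole matrix-vector product
```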


Even you can have the memory of a champion memorizer

Los Angeles Times

The making of a memory champion, it turns out, is not so different from the making of any other great athlete. To triumph in sport, athletes sculpt muscle and sinew and lash them together with head and heart to deliver optimum performance. To perform extraordinary feats of memorization, memory champions strengthen distinct groups of structures scattered throughout the brain. And then, they groove the connections that lash those groups together until the whole system works like a well-oiled machine. In short, memory champions are not born that way.


A Practical Approach to Sizing Neural Networks

arXiv.org Artificial Intelligence

Memorization is worst-case generalization. Based on MacKay's information-theoretic model of supervised machine learning, this article discusses how to practically estimate the maximum size of a neural network given a training data set. First, we present four easily applicable rules to analytically determine the capacity of neural network architectures. This allows the efficiency of different network architectures to be compared independently of a task. Second, we introduce and experimentally validate a heuristic method to estimate the neural network capacity requirement for a given dataset and labeling. This allows the required size of a neural network for a given problem to be estimated. We conclude the article with a discussion of the consequences of sizing the network wrongly, which include both increased computational effort during training and reduced generalization capability.
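
As a rough illustration of the sizing question (not the paper's actual four rules or heuristic), one can compare a crude per-parameter capacity proxy against the bits needed to memorize an arbitrary labeling of a dataset. The one-bit-per-parameter figure and the layer-wise accounting below are assumptions made for the sketch.

```python
# Back-of-envelope sketch of the sizing idea described above: compare a crude
# capacity estimate of an architecture against the bits needed to memorize a
# labeling. The one-bit-per-parameter proxy and the layer-wise accounting are
# illustrative assumptions, not the paper's exact rules.
import math

def mlp_capacity_bits(layer_sizes):
    """Crude capacity proxy: one bit per trainable parameter (weights + biases)."""
    return sum((fan_in + 1) * fan_out
               for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]))

def labeling_bits(num_samples, num_classes):
    """Bits required to memorize an arbitrary labeling of the dataset."""
    return num_samples * math.log2(num_classes)

capacity = mlp_capacity_bits([784, 64, 10])      # e.g. an MNIST-sized MLP
needed = labeling_bits(60_000, 10)
print(f"capacity ~ {capacity} bits, memorization needs ~ {needed:.0f} bits")
print("oversized for pure memorization" if capacity > needed else
      "cannot memorize an arbitrary labeling")
```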