How Short-Term Memory Is Important For Your Brain And Mind

International Business Times

When you need to remember a phone number, a shopping list or a set of instructions, you rely on what psychologists and neuroscientists refer to as working memory. It's the ability to hold and manipulate information in mind over brief intervals. It's for things that are important to you in the present moment, but not 20 years from now. Researchers believe working memory is central to the functioning of the mind. It correlates with more general abilities and outcomes – things like intelligence and scholastic attainment – and is linked to basic sensory processes.


Neuromodulation of Word Meaning Selection

AAAI Conferences

Processes of word meaning generation, word association and understanding are known to be impaired in schizophrenia and related diseases. Word meaning selection requires the involvement of prefrontal cortex and processes of working memory and selective attention. Under the dopaminergic hypothesis of schizophrenia, the normal neuromodulatory activation of prefrontal cortex for the performance of working memory-related tasks is disturbed. We present a model of selective attention and its modulation by dopamine and show how abnormal levels of dopamine availability may lead to some of the observed impairments in word meaning selection, namely (a) failure to construct contextually appropriate meanings and (b) intrusions of phonological and episodic associative links within semantic processing.
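The abstract does not give the model equations, but dopaminergic neuromodulation of prefrontal cortex is often formalized as a gain parameter on the competition between candidate interpretations. The toy sketch below (all names and numbers are illustrative, not taken from the paper) shows how lowering that gain flattens a softmax competition among candidate word meanings, so that weaker phonological or episodic associates can intrude on the contextually appropriate meaning.

```python
import numpy as np

def select_meaning(context_support, gain):
    """Softmax competition among candidate word meanings.

    context_support: evidence for each candidate meaning, combining semantic
        context with weaker phonological/episodic associations.
    gain: illustrative stand-in for dopaminergic modulation of prefrontal gain.
    """
    logits = gain * np.asarray(context_support, dtype=float)
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Candidate meanings for an ambiguous word, e.g. "bank":
# [river bank (fits the context), financial bank, a phonological associate]
support = [2.0, 1.2, 0.4]

print(select_meaning(support, gain=4.0))  # high gain: the contextually appropriate meaning dominates
print(select_meaning(support, gain=0.5))  # low gain: competition flattens, intrusions become likely
```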


AI Software Reveals the Inner Workings of Short-term Memory

#artificialintelligence

Research by neuroscientists at the University of Chicago shows how short-term, or working, memory uses networks of neurons differently depending on the complexity of the task at hand. The researchers used modern artificial intelligence (AI) techniques to train computational neural networks to solve a range of complex behavioral tasks that required storing information in short-term memory. The AI networks were based on the biological structure of the brain and revealed two distinct processes involved in short-term memory: a "silent" process in which the brain stores short-term memories without ongoing neural activity, and a second, more active process in which circuits of neurons fire continuously. The study, led by Nicholas Masse, PhD, a senior scientist at UChicago, and senior author David Freedman, PhD, professor of neurobiology, was published this week in Nature Neuroscience.
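The article does not include the training code, but a typical task of this kind is a delayed match-to-sample trial: hold a sample stimulus across a delay, then report whether a test stimulus matches it. Everything in the sketch below (task parameters, network size, training loop) is illustrative and is not the study's implementation.

```python
# Minimal sketch (illustrative, not the study's code) of training a recurrent
# network on a delayed match-to-sample task that requires short-term memory.
import torch
import torch.nn as nn

def make_batch(batch=64, n_stim=4, delay=10):
    sample = torch.randint(n_stim, (batch,))
    test = torch.randint(n_stim, (batch,))
    target = (sample == test).long()                  # 1 = match, 0 = non-match
    T = 1 + delay + 1                                 # sample, delay, test periods
    x = torch.zeros(T, batch, n_stim)
    x[0] = nn.functional.one_hot(sample, n_stim).float()
    x[-1] = nn.functional.one_hot(test, n_stim).float()
    return x, target

class MemoryRNN(nn.Module):
    def __init__(self, n_stim=4, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(n_stim, hidden)             # recurrent circuit
        self.readout = nn.Linear(hidden, 2)           # match / non-match decision

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.readout(h[-1])                    # decide at the end of the trial

model = MemoryRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                               # toy training loop
    x, y = make_batch()
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```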


Exploring LSTMs

#artificialintelligence

This, then, is a deep neural network: it takes an image input, returns an activity output, and – just as we might learn to detect patterns in puppy behavior without knowing anything about dogs (after seeing enough corgis, we discover common characteristics like fluffy butts and drumstick legs; next, we learn advanced features like splooting) – in between it learns to represent images through hidden layers of representations. To make use of the movie's sequential nature, though, we want more: instead of simply taking an image and returning an activity, an RNN also maintains internal memories about the world (weights assigned to different pieces of information) to help perform its classifications. Note that the hidden state computed at time \(t\) (\(h_t\), our internal knowledge) is fed back at the next time step. So what we'd like is for the network to learn how to update its beliefs (scenes without Bob shouldn't change Bob-related information, scenes with Alice should focus on gathering details about her), in a way that its knowledge of the world evolves more gently.
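As a concrete picture of that recurrence, here is a minimal numpy sketch: the plain RNN step recomputes the hidden state wholesale at every frame, while the gated step (a simplified, GRU-style stand-in for the mechanism the post builds toward, not code from the post) lets the network keep most of its old knowledge when a frame is irrelevant.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_step(x_t, h_prev, Wx, Wh, b):
    """Plain RNN: the hidden state h_t is recomputed from scratch at every step."""
    return np.tanh(x_t @ Wx + h_prev @ Wh + b)

def gated_step(x_t, h_prev, p):
    """Gated update (simplified, GRU-style): an update gate z decides, per unit,
    how much old knowledge to keep and how much to overwrite, so irrelevant
    frames (scenes without Bob) can leave most of the state untouched."""
    z = sigmoid(x_t @ p["Wxz"] + h_prev @ p["Whz"] + p["bz"])       # how much to update
    h_cand = np.tanh(x_t @ p["Wxh"] + h_prev @ p["Whh"] + p["bh"])  # candidate new knowledge
    return (1 - z) * h_prev + z * h_cand                            # knowledge evolves gently

# Tiny usage example with random weights (dimensions are arbitrary).
rng = np.random.default_rng(0)
d_x, d_h = 8, 16
p = {k: rng.normal(scale=0.1, size=s) for k, s in {
    "Wxz": (d_x, d_h), "Whz": (d_h, d_h), "bz": (d_h,),
    "Wxh": (d_x, d_h), "Whh": (d_h, d_h), "bh": (d_h,)}.items()}
h = np.zeros(d_h)
for x_t in rng.normal(size=(5, d_x)):   # a short sequence of 5 "frames"
    h = gated_step(x_t, h, p)
```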


Exploring LSTMs

#artificialintelligence

The first time I learned about LSTMs, my eyes glazed over. It turns out LSTMs are a fairly simple extension to neural networks, and they're behind a lot of the amazing achievements deep learning has made in the past few years. So I'll try to present them as intuitively as possible – in such a way that you could have discovered them yourself. Imagine we have a sequence of images from a movie, and we want to label each image with an activity (is this a fight? are the characters talking? are the characters eating?). One way is to ignore the sequential nature of the images and build a per-image classifier that considers each image in isolation.
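As a toy illustration of that sequence-agnostic baseline (the frame size, labels and architecture below are assumptions, not from the post), every frame is pushed through the same small image classifier, with no memory of what came before:

```python
import torch
import torch.nn as nn

# Hypothetical activity labels for each movie frame.
ACTIVITIES = ["fighting", "talking", "eating"]

class FrameClassifier(nn.Module):
    """Classifies a single frame; it knows nothing about earlier frames."""
    def __init__(self, n_classes=len(ACTIVITIES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, frame):
        return self.net(frame)

model = FrameClassifier()
movie = torch.rand(10, 3, 64, 64)        # 10 RGB frames, treated as independent images
labels = model(movie).argmax(dim=1)      # one prediction per frame, no context carried over
```

The obvious weakness is that a single frame is often ambiguous on its own; the frames around it would disambiguate it, which is exactly what the recurrent models the post goes on to describe are meant to exploit.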