Yes, you can improve your memory. Here's how.

Popular Science

Think back to grade school. Do you remember those standardized tests where you were given lists of meaningless words to memorize? Then, after taking a challenging math or reading section, you had to write down as many words from that list as you could remember? If you were terrible at this, don't fret. Just as you can train yourself to ace the SATs, there's a trick to becoming a master memorizer of random words.



Crystal-clear memories of a bacterium

Science

Information storage in DNA is the cornerstone of biology. Interestingly, prokaryotes can store information in specific loci in their DNA to remember encounters with invaders (such as bacteriophages, viruses that infect bacteria). Short samples of DNA from invaders are inserted as "spacers" into the CRISPR array. The array thus contains samples of DNA invaders in a defined locus that is recognized by Cas proteins that further process this information. This enables bacteria to adaptively and specifically respond to invading DNA that they have experienced before.
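The spacer mechanism can be pictured as an append-only memory plus a lookup. The sketch below is a loose computational analogy only, not biology: the class and method names are invented, spacers are shortened for readability, and real acquisition involves Cas1–Cas2 complexes, PAM recognition, and crRNA processing that this omits entirely.

```python
# Loose computational analogy for CRISPR adaptive memory (illustrative only;
# all names here are invented for the sketch).

class CrisprArray:
    SPACER_LEN = 8  # real spacers are roughly 26-72 nt; shortened here

    def __init__(self):
        self.spacers = []  # ordered record of past invaders

    def acquire(self, invader_dna):
        """Store a short sample ("spacer") of an invader's DNA."""
        spacer = invader_dna[:self.SPACER_LEN]
        if spacer not in self.spacers:
            self.spacers.append(spacer)

    def recognizes(self, dna):
        """Check incoming DNA against every stored spacer."""
        return any(spacer in dna for spacer in self.spacers)

array = CrisprArray()
array.acquire("ATGCCGTAACGGT")       # first encounter: sample is stored
print(array.recognizes("ATGCCGTA"))  # True: previously seen invader
print(array.recognizes("GGGGCCCC"))  # False: never encountered
```

The point of the analogy is the "defined locus" part: recognition works because the samples sit in one known place that the matching machinery (Cas proteins, here the `recognizes` loop) always consults.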


Did you know cavemen were already dealing with "Big Data" issues?

@machinelearnbot

Nowadays everyone is talking about data & analytics. Executives are figuring out how to turn it into value, researchers are publishing papers about it in droves, and students are enrolling en masse in data science programs. This might lead one to believe that data and analytics are a recent phenomenon. Indeed, there are convincing reasons to think analytics is more important and relevant than ever. However, the underlying idea of analytics – the urge to understand the nature of our reality and to act accordingly – is as old as humanity itself.


Generative Temporal Models with Spatial Memory for Partially Observed Environments

arXiv.org Machine Learning

In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent's representations during training or via use as part of an explicit planning mechanism. However, their application in practice has been limited to simplistic environments, due to the difficulty of training such models in larger, potentially partially-observed and 3D environments. In this work we introduce a novel action-conditioned generative model of such challenging environments. The model features a non-parametric spatial memory system in which we store learned, disentangled representations of the environment. Low-dimensional spatial updates are computed using a state-space model that makes use of knowledge on the prior dynamics of the moving agent, and high-dimensional visual observations are modelled with a Variational Auto-Encoder. The result is a scalable architecture capable of performing coherent predictions over hundreds of time steps across a range of partially observed 2D and 3D environments.
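The core data structure in the abstract, a non-parametric spatial memory, can be sketched as a store of latent codes keyed by position, with nearest-neighbour retrieval when the agent revisits a location. This is an illustrative stand-in only: in the actual model the codes are disentangled VAE representations and positions come from a learned state-space model, whereas here both are plain Python values and the names are invented.

```python
import math

# Minimal sketch of a non-parametric spatial memory: write latent codes
# keyed by the agent's position, read back the nearest one. Illustrative
# only; not the paper's implementation.

class SpatialMemory:
    def __init__(self):
        self.entries = []  # (position, latent code) pairs written so far

    def write(self, position, code):
        """Store a latent code at the agent's current position."""
        self.entries.append((tuple(position), code))

    def read(self, position):
        """Retrieve the code stored nearest to the queried position."""
        _, code = min(self.entries, key=lambda e: math.dist(e[0], position))
        return code

memory = SpatialMemory()
memory.write((0.0, 0.0), "code-at-origin")
memory.write((5.0, 5.0), "code-far-away")
print(memory.read((0.2, -0.1)))  # nearest entry: "code-at-origin"
```

"Non-parametric" here means the memory grows with experience rather than being squeezed into fixed network weights, which is what lets predictions stay coherent over hundreds of steps when the agent returns to previously seen parts of the environment.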