memory
- North America > United States > California > San Francisco County > San Francisco (0.15)
- Europe > Austria > Vienna (0.14)
- Europe > Sweden > Stockholm > Stockholm (0.06)
- (23 more...)
- Health & Medicine (0.94)
- Transportation > Ground > Rail (0.93)
- Information Technology > Information Management > Search (0.69)
- Information Technology > Artificial Intelligence > Natural Language (0.69)
- Information Technology > Sensing and Signal Processing > Image Processing (0.48)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.47)
On-Device Training Under 256KB Memory
On-device training enables a model to adapt to new data collected from sensors by fine-tuning a pre-trained model. Users can benefit from customized AI models without having to transfer their data to the cloud, protecting their privacy. However, training memory consumption is prohibitive for IoT devices, which have tiny memory resources. We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to low bit-precision and the lack of normalization; (2) the limited hardware resources (memory and computation) do not allow full backpropagation.
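The memory gap the abstract points to can be shown with a back-of-the-envelope sketch (the layer sizes and the one byte per activation element are my own illustrative assumptions, not the paper's configuration): full backpropagation must retain every intermediate activation for the backward pass, while updating only the last layers lets earlier activations be discarded.

```python
# Rough training-memory estimate (KB) for fine-tuning a small CNN on-device.
# Layer sizes are hypothetical; the point is that full backpropagation must
# keep every intermediate activation, while a sparse update (e.g. only the
# last layers) can discard most of them.

def training_memory_kb(activation_sizes, trainable, bytes_per_elem=1):
    """activation_sizes: per-layer activation element counts.
    trainable: per-layer bools; a layer's input activation must be
    stored only if that layer's weights receive gradient updates."""
    stored = sum(a for a, t in zip(activation_sizes, trainable) if t)
    return stored * bytes_per_elem / 1024

acts = [128*128*16, 64*64*32, 32*32*64, 16*16*128, 1000]   # elements per layer
full = training_memory_kb(acts, [True] * 5)                # full backprop
sparse = training_memory_kb(acts, [False, False, False, True, True])
```

Under these toy numbers, full backpropagation needs several hundred KB of activation storage alone, while the sparse update fits comfortably under the 256KB budget.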
Attention Approximates Sparse Distributed Memory
While Attention has come to be an important mechanism in deep learning, there remains limited intuition for why it works so well. Here, we show that, under certain data conditions, Transformer Attention can be closely related to Kanerva's Sparse Distributed Memory (SDM), a biologically plausible associative memory model. We confirm that these conditions are satisfied in pre-trained GPT2 Transformer models. We discuss the implications of the Attention-SDM map and provide new computational and biological interpretations of Attention.
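A minimal sketch of the correspondence (the vectors and the inverse temperature `beta` below are illustrative choices of mine, not taken from the paper): softmax attention weights decay roughly exponentially in query-key distance, mirroring how SDM's circle-intersection read weights fall off with Hamming distance from the query address.

```python
import math
import random

# Softmax attention over unit-norm keys: the weight on each key decays
# (approximately) exponentially with its distance from the query, which is
# the behavior SDM's circle-intersection read approximates.
random.seed(0)
d, n = 64, 10

def unit(v):
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

keys = [unit([random.gauss(0, 1) for _ in range(d)]) for _ in range(n)]
# A query lying near key 0, lightly perturbed.
query = unit([k + 0.1 * random.gauss(0, 1) for k in keys[0]])

beta = 8.0                                   # inverse temperature (assumed)
scores = [beta * sum(q * k for q, k in zip(query, key)) for key in keys]
m = max(scores)                              # stabilize the softmax
exps = [math.exp(s - m) for s in scores]
weights = [e / sum(exps) for e in exps]
```

The nearby key dominates the normalized weights, while distant (near-orthogonal) keys receive exponentially smaller mass.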
Improved Regret Bounds for Tracking Experts with Memory
We address the problem of sequential prediction with expert advice in a non-stationary environment, with long-term memory guarantees in the sense of Bousquet and Warmuth [4]. We give a linear-time algorithm that improves on the best known regret bound [27]. This algorithm incorporates a relative entropy projection step, which is advantageous over previous weight-sharing approaches when weight updates carry implicit costs, as in, for example, portfolio optimization. We give an algorithm to compute this projection step in linear time, which may be of independent interest.
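As a hedged illustration of a relative-entropy projection step (the constraint set here, a uniform weight floor `eps` on the simplex, is my simplification and not necessarily the set used in the paper): the KKT conditions give the projection a simple clamp-and-rescale form.

```python
def kl_projection(v, eps):
    """Relative-entropy projection of a distribution v onto the set
    {w : w_i >= eps, sum_i w_i = 1}.  The KKT conditions give the form
    w_i = max(eps, c * v_i), with c chosen so the result sums to 1.
    Simple O(n log n) variant; a linear-time algorithm would replace
    the sort with selection."""
    assert eps * len(v) < 1.0
    free_mass, tail = 1.0, sum(v)
    c = free_mass / tail
    for vi in sorted(v):              # clamp the smallest coordinates first
        if c * vi >= eps:
            break
        free_mass -= eps              # this coordinate is clamped at eps
        tail -= vi
        c = free_mass / tail
    return [max(eps, c * vi) for vi in v]
```

For example, projecting `[0.01, 0.99]` with floor `0.1` clamps the small coordinate to `0.1` and rescales the rest to `0.9`, keeping the total mass at one.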
Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards
Reinforcement learning with sparse rewards is challenging because an agent can rarely obtain non-zero rewards; hence, gradient-based optimization of parameterized policies can be incremental and slow. Recent work demonstrated that using a memory buffer of previous successful trajectories can result in more effective policies. However, existing methods may overly exploit past successful experiences, which can encourage the agent to adopt sub-optimal and myopic behaviors. In this work, instead of focusing on good experiences with limited diversity, we propose to learn a trajectory-conditioned policy to follow and expand diverse past trajectories from a memory buffer. Our method allows the agent to reach diverse regions in the state space and improve upon the past trajectories to reach new states. We empirically show that our approach significantly outperforms count-based exploration methods (parametric approach) and self-imitation learning (parametric approach with non-parametric memory) on various complex tasks with local optima. In particular, without using expert demonstrations or resetting to arbitrary states, we achieve state-of-the-art scores within five billion frames on challenging Atari games such as Montezuma's Revenge and Pitfall.
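A toy sketch of a diverse trajectory memory (the bucketing scheme and uniform sampling below are my own illustration, not the paper's exact mechanism): trajectories are bucketed by a discretized end state, the best-return trajectory per bucket is retained, and buckets are sampled uniformly, rather than greedily by return, to preserve the diversity of goals the trajectory-conditioned policy can follow and extend.

```python
import random

class TrajectoryMemory:
    """Keeps the best-return trajectory per discretized end-state cell."""

    def __init__(self):
        self.best = {}                  # cell -> (return, trajectory)

    def add(self, trajectory, ret, cell):
        # Replace only if this trajectory improves on the cell's best.
        if cell not in self.best or ret > self.best[cell][0]:
            self.best[cell] = (ret, trajectory)

    def sample(self, rng=random):
        # Uniform over cells, not over returns: distant, low-return cells
        # are sampled as often as lucrative ones, preserving diversity.
        cell = rng.choice(sorted(self.best))
        return self.best[cell][1]
```

Conditioning the policy on a sampled trajectory then asks it to reach that region of the state space and improve from there.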
AI creates new 'memories' of Palestinians killed in Gaza
People are using AI to create new 'memories' of Palestinians killed by Israel in Gaza, generating touching images of them with their loved ones.
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (1.00)
- Europe > Russia (0.27)
- Asia > Russia (0.27)
- (6 more...)
AI-native Memory 2.0: Second Me
Jiale Wei, Xiang Ying, Tao Gao, Fangyi Bao, Felix Tao, Jingbo Shang
Human interaction with the external world fundamentally involves the exchange of personal memory, whether with other individuals, websites, applications, or, in the future, AI agents. A significant portion of this interaction is redundant, requiring users to repeatedly provide the same information across different contexts. Existing solutions, such as browser-stored credentials, autofill mechanisms, and unified authentication systems, have aimed to mitigate this redundancy by serving as intermediaries that store and retrieve commonly used user data. The advent of large language models (LLMs) presents an opportunity to redefine memory management through an AI-native paradigm: SECOND ME. SECOND ME acts as an intelligent, persistent memory offload system that retains, organizes, and dynamically utilizes user-specific knowledge. By serving as an intermediary in user interactions, it can autonomously generate context-aware responses, prefill required information, and facilitate seamless communication with external systems, significantly reducing cognitive load and interaction friction. Unlike traditional memory storage solutions, SECOND ME extends beyond static data retention by leveraging LLM-based memory parameterization. This enables structured organization, contextual reasoning, and adaptive knowledge retrieval, facilitating a more systematic and intelligent approach to memory management. As AI-driven personal agents like SECOND ME become increasingly integrated into digital ecosystems, SECOND ME further represents a critical step toward augmenting human-world interaction with persistent, contextually aware, and self-optimizing memory systems. We have open-sourced the fully localizable deployment system at GitHub: https://github.com/Mindverse/Second-Me.
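A hypothetical sketch of the "memory offload" idea described above (the class, its slot-based store, and the prefill logic are entirely my illustration; SECOND ME itself uses LLM-based memory parameterization rather than a lookup table): the intermediary retains user-specific facts once and prefills them into later requests, so the user only supplies what is missing.

```python
class MemoryOffload:
    """Toy persistent memory intermediary: retain facts, prefill requests."""

    def __init__(self):
        self.facts = {}                  # slot name -> stored value

    def retain(self, slot, value):
        self.facts[slot] = value

    def prefill(self, required_slots):
        """Fill the slots we know; report the rest back to the user."""
        filled = {s: self.facts[s] for s in required_slots if s in self.facts}
        missing = [s for s in required_slots if s not in self.facts]
        return filled, missing
```

After retaining a name and email once, a later form asking for name, email, and phone comes back with two slots prefilled and only the phone left for the user, which is the redundancy reduction the abstract describes.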
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
Snapchat launches AI Dreams tool that transforms your selfies into hyper-realistic images - including mermaids and Renaissance-era royals
It's no secret that filters can transform us into almost anything - whether it be a dog or a dancing hotdog. But Snapchat has now taken this up a notch, with the launch of a new tool that uses artificial intelligence (AI) to completely reimagine your photographs. The so-called 'Dreams' feature will allow users to create fantasy-themed AI selfies in just a few taps - and the results are unbelievably realistic. Deep-sea mermaids and Renaissance-era royals are among the initial pack of eight complimentary Dreams that can be created, while others start at $0.99. The AI tool will be launched first in Australia and New Zealand, before making its way to other Snapchatters across the globe in a couple of weeks.
- Oceania > New Zealand (0.26)
- Oceania > Australia (0.26)
Robotics: Nicolas Mansard, coordinator of the MEMMO project, winner of the Stars of Europe - Actu IA
Created in 2013, the Stars of Europe awards recognize the coordinators of European collaborative research projects. On December 6, Sylvie Retailleau, Minister of Higher Education and Research, presented trophies to twelve winners at a ceremony at the Quai Branly Museum. Among them was Nicolas Mansard, CNRS researcher in robotics at LAAS-CNRS and holder of the ANITI chair "Artificial and natural movement", recognized for coordinating the MEMMO (Memory of Motion) project. Funded by the Horizon 2020 program over a four-year period, MEMMO is a collaborative project initiated in 2018 that brought together a consortium of 10 European partners with a budget of €4 million: LAAS-CNRS (France), IDIAP (Switzerland), University of Edinburgh (UK), Max Planck Institute (Germany), Oxford University (UK), Trento University (Italy), PAL-Robotics (Spain), Wandercraft (France), Airbus (France), Costain (UK) and APAJH (France). "I would like to thank the people who helped me coordinate this project. It is a project put together by a consortium of young researchers. It was a great pride for me to be chosen to coordinate this project." "We wanted to prove that it was possible to generate complex motions for arbitrary robots with arms and legs interacting in a dynamic environment in real time."
- Europe > France (0.91)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.26)
- Europe > Switzerland (0.26)
- (2 more...)