'King of Silver Dollars' coin could fetch over $1M at auction

Popular Science

Coin collectors consider the rare 1804 dollar one of the field's most desirable trophies. A 19th-century coin widely considered to be "the King of Silver Dollars" is hitting the auction block next week as part of Heritage's FUN US Coins Signature Auction. The Adams-Carter 1804 Class III Draped Bust dollar is one of only 16 known examples of the 1804 silver dollar. The valuable silver coin is the undisputed headliner of the 38-item Presidio Collection, which will be auctioned January 14-17.


Fetch.ai: An Architecture for Modern Multi-Agent Systems

Wooldridge, Michael J., Bagoly, Attila, Ward, Jonathan J., La Malfa, Emanuele, Licks, Gabriel Paludo

arXiv.org Artificial Intelligence

Recent surges in LLM-driven intelligent systems largely overlook decades of foundational multi-agent systems (MAS) research, resulting in frameworks with critical limitations such as centralization and inadequate trust and communication protocols. This paper introduces the Fetch.ai architecture, an industrial-strength platform designed to bridge this gap by facilitating the integration of classical MAS principles with modern AI capabilities. We present a novel, multi-layered solution built on a decentralized foundation of on-chain blockchain services for verifiable identity, discovery, and transactions. This is complemented by a comprehensive development framework for creating secure, interoperable agents, a cloud-based platform for deployment, and an intelligent orchestration layer where an agent-native LLM translates high-level human goals into complex, multi-agent workflows. We demonstrate the deployed nature of this system through a decentralized logistics use case where autonomous agents dynamically discover, negotiate, and transact with one another securely. Ultimately, the Fetch.ai stack provides a principled architecture for moving beyond current agent implementations towards open, collaborative, and economically sustainable multi-agent ecosystems.
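The discover-negotiate-transact loop the abstract describes can be sketched in plain Python. This is an illustrative toy, not Fetch.ai code: the in-memory `Registry`, the agent classes, and the pricing logic are all stand-ins for the on-chain identity, discovery, and transaction services.

```python
# Toy sketch of agent discovery and negotiation (no Fetch.ai APIs; all names
# here are illustrative stand-ins for the decentralized on-chain services).
class Registry:
    """Stand-in for a decentralized directory mapping services to agents."""
    def __init__(self):
        self._services = {}

    def register(self, agent, service):
        self._services.setdefault(service, []).append(agent)

    def discover(self, service):
        return list(self._services.get(service, []))


class CarrierAgent:
    """Toy logistics agent that quotes a price for a delivery job."""
    def __init__(self, name, rate_per_km):
        self.name, self.rate = name, rate_per_km

    def quote(self, distance_km):
        return self.rate * distance_km


def best_offer(registry, service, distance_km):
    """Discover providers, collect quotes, and pick the cheapest offer."""
    offers = [(agent.quote(distance_km), agent)
              for agent in registry.discover(service)]
    return min(offers, key=lambda o: o[0]) if offers else None


reg = Registry()
reg.register(CarrierAgent("truck-a", 0.9), "delivery")
reg.register(CarrierAgent("truck-b", 0.7), "delivery")
price, winner = best_offer(reg, "delivery", 100)
```

In the deployed system this loop would additionally involve verifiable identities and an on-chain settlement step; the sketch only shows the market mechanics.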


Dogs can learn and remember how toys work

Popular Science

The process is similar to how human babies learn that bowls and spoons are used for eating. We've long known that dogs are pretty smart. They may "picture" objects in their heads just like us, communicate with button boards, and can even understand fairly complicated words. They also appear to categorize objects by function, understanding how similar types of toys work even if they don't look alike.


Making FETCH! Happen: Finding Emergent Dog Whistles Through Common Habitats

Sasse, Kuleen, Aguirre, Carlos, Cachola, Isabel, Levy, Sharon, Dredze, Mark

arXiv.org Artificial Intelligence

WARNING: This paper contains content that may be upsetting or offensive to some readers. Dog whistles are coded expressions with dual meanings: one intended for the general public (outgroup) and another that conveys a specific message to an intended audience (ingroup). Often, these expressions are used to convey controversial political opinions while maintaining plausible deniability and slipping past content moderation filters. Identification of dog whistles relies on curated lexicons, which struggle to stay up to date. We introduce FETCH!, a task for finding novel dog whistles in massive social media corpora. We find that state-of-the-art systems fail to achieve meaningful results across three distinct social media case studies. We present EarShot, a novel system that combines the strengths of vector databases and Large Language Models (LLMs) to efficiently and effectively identify new dog whistles.
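The "vector database plus LLM" combination can be sketched as a two-stage retrieve-then-verify pipeline. Everything below is illustrative rather than the paper's code: the bag-of-words embedding stands in for learned text embeddings in a vector database, and the toy corpus is invented.

```python
# Illustrative two-stage "retrieve, then verify" pipeline in the spirit of
# EarShot. The bag-of-words embedding and corpus are toy stand-ins.
import numpy as np

def embed(text, vocab):
    """Toy bag-of-words vector over a fixed vocabulary, L2-normalized."""
    toks = text.lower().split()
    v = np.array([toks.count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve_candidates(seed_terms, corpus, k=2):
    """Stage 1: rank posts by cosine similarity to known dog-whistle seeds."""
    vocab = sorted({w for t in corpus + seed_terms for w in t.lower().split()})
    seed_vec = np.mean([embed(t, vocab) for t in seed_terms], axis=0)
    return sorted(corpus, key=lambda p: -float(embed(p, vocab) @ seed_vec))[:k]

corpus = [
    "totally ordinary post about gardening",
    "the coded phrase shows up in this post",
    "another post repeating the coded phrase",
]
top = retrieve_candidates(["coded phrase"], corpus, k=2)
# Stage 2 (not shown): prompt an LLM to judge whether each retrieved post
# uses the expression with a hidden ingroup meaning.
```

The retrieval stage keeps the LLM's workload tractable on massive corpora: only the nearest candidates are passed on for the expensive verification step.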


Environmentally Adaptive Control Including Variance Minimization Using Stochastic Predictive Network with Parametric Bias: Application to Mobile Robots

Kawaharazuka, Kento, Shinjo, Koki, Kawamura, Yoichiro, Okada, Kei, Inaba, Masayuki

arXiv.org Artificial Intelligence

In this study, we propose a predictive model composed of a recurrent neural network with parametric bias and stochastic elements, and an environmentally adaptive robot control method that minimizes variance using this model. Robots that have flexible bodies, or whose states can only be partially observed, are difficult to model, and their predictive models often exhibit stochastic behavior. In addition, the physical state of the robot and the surrounding environment change over time, so the predictive model must also change online. Therefore, in this study, we construct a learning-based stochastic predictive model, implemented as a neural network that embeds this information from the robot's experience, and develop a control method that lets the robot avoid unstable motion with large variance while adapting to the current environment. This method is verified on a mobile robot in simulation and on the physical Fetch robot.
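The control idea — choose actions that trade goal error against predicted variance — can be shown with a minimal one-dimensional sketch. The toy dynamics, variance model, and candidate set below are invented for illustration; the paper's model is a learned recurrent network conditioned on a parametric bias.

```python
# Minimal sketch of variance-minimizing action selection (toy 1-D model;
# the real predictive model is a stochastic RNN with parametric bias).
import numpy as np

def predict(state, control, bias):
    """Stand-in for the learned model: mean and variance of the next state.
    The bias term models environment-specific dynamics."""
    mean = state + 0.1 * control + 0.05 * bias
    var = 0.01 + 0.1 * control ** 2   # larger actions are less predictable
    return mean, var

def choose_control(state, goal, bias, candidates, w_var=1.0):
    """Pick the candidate minimizing goal error plus predicted variance."""
    def cost(u):
        mean, var = predict(state, u, bias)
        return (mean - goal) ** 2 + w_var * var
    return min(candidates, key=cost)

candidates = np.linspace(-1.0, 1.0, 21)
u = choose_control(0.0, 0.05, 0.0, candidates)
```

With the variance penalty active, the controller prefers small, stable actions; setting `w_var=0` recovers a purely goal-seeking (and potentially unstable) choice.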


Where to Fetch: Extracting Visual Scene Representation from Large Pre-Trained Models for Robotic Goal Navigation

Li, Yu, Li, Dayou, Zhao, Chenkun, Wang, Ruifeng, Song, Ran, Zhang, Wei

arXiv.org Artificial Intelligence

To complete a complex task in which a robot navigates to a goal object and fetches it, the robot needs a good understanding of both the instructions and the surrounding environment. Large pre-trained models have shown the capability to interpret tasks defined via language descriptions. However, previous methods that attempt to integrate large pre-trained models with daily tasks are not competent at many robotic goal navigation tasks due to a poor understanding of the environment. In this work, we present a visual scene representation built with large-scale visual language models to form a feature representation of the environment capable of handling natural language queries. Combined with large language models, this method can parse language instructions into action sequences for a robot to follow, and accomplish goal navigation by querying the scene representation. Experiments demonstrate that our method enables the robot to follow a wide range of instructions and complete complex goal navigation tasks.


FETCH: A Memory-Efficient Replay Approach for Continual Learning in Image Classification

Weißflog, Markus, Protzel, Peter, Neubert, Peer

arXiv.org Artificial Intelligence

Class-incremental continual learning is an important area of research, as static deep learning methods fail to adapt to changing tasks and data distributions. In previous works, promising results were achieved using replay and compressed replay techniques. In the field of regular replay, GDumb achieved outstanding results but requires a large amount of memory. This problem can be addressed by compressed replay techniques. The goal of this work is to evaluate compressed replay in the pipeline of GDumb. We propose FETCH, a two-stage compression approach. First, the samples from the continual data stream are encoded by the early layers of a pre-trained neural network. Second, the samples are compressed before being stored in the episodic memory. Following GDumb, the remaining classification head is trained from scratch using only the decompressed samples from the replay memory. We evaluate FETCH in different scenarios and show that this approach can increase accuracy on CIFAR10 and CIFAR100. In our experiments, simple compression methods (e.g., quantization of tensors) outperform deep autoencoders. In the future, FETCH could serve as a baseline for benchmarking compressed replay learning in constrained memory scenarios.
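The two-stage idea — encode with frozen early layers, then compress before storing — can be sketched with a simple tensor-quantization example. The random projection below is a stand-in for the pre-trained backbone, and the min/max uint8 scheme is one of the "simple compression methods" the abstract mentions, not necessarily the paper's exact implementation.

```python
# Sketch of two-stage compression for a replay buffer: frozen encoder
# followed by uint8 quantization (the projection W stands in for the
# early layers of a pre-trained network).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 64)) / np.sqrt(512)  # frozen "early layers"

def encode(x):
    """Stage 1: map a raw sample to a compact feature vector."""
    return x @ W

def quantize(f):
    """Stage 2: min/max-scale features into uint8 before storage."""
    lo, hi = f.min(), f.max()
    scale = (hi - lo) / 255.0
    if scale == 0:
        scale = 1.0
    q = np.round((f - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize(q, lo, scale):
    """Recover approximate features when the head is trained."""
    return q.astype(np.float32) * scale + lo

x = rng.standard_normal(512)          # one incoming sample
f = encode(x)
q, lo, scale = quantize(f)            # this is what the episodic memory stores
f_hat = dequantize(q, lo, scale)
```

Storage per sample drops from 64 float32 values (256 bytes) to 64 bytes plus two scalars, at the cost of a bounded per-element quantization error.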


GraphSnapShot: Graph Machine Learning Acceleration with Fast Storage and Retrieval

Liu, Dong, Waleffe, Roger, Jiang, Meng, Venkataraman, Shivaram

arXiv.org Artificial Intelligence

In our recent research, we have developed a framework called GraphSnapShot, which has proven a useful tool for graph learning acceleration. The core idea of GraphSnapShot is to capture and update the state of local graph structures dynamically, much like taking snapshots of a graph. GraphSnapShot is designed to efficiently capture, store, and update dynamic snapshots of graph data, enabling us to track patterns in the structure of graph networks. This technique is useful for most graph learning tasks that rely on topology analysis or involve constantly evolving networks, such as social media analysis, biological networks, or any system where the relationships between entities change over time. The key component of GraphSnapShot is the GraphSDSampler, which can efficiently capture, update, retrieve, and store graph snapshots of topology while performing computation at the same time, making graph learning computation significantly faster. In experiments, GraphSnapShot improves computation speed significantly compared to the traditional NeighborSampler implemented in DGL, and reduces GPU and memory usage during training with little loss of accuracy. These results show that GraphSnapShot has the potential to be a powerful tool for accelerating training on large graphs.
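The snapshot idea — reuse a cached neighbor sample instead of resampling on every access, refreshing it periodically — can be sketched in plain Python. The class below is an illustrative stand-in for the GraphSDSampler, which in the real system works on GPU alongside DGL rather than on a Python dict.

```python
# Toy snapshot-style neighbor sampler: cached samples are reused across
# accesses and refreshed every `refresh_every` calls (illustrative only).
import random

class SnapshotSampler:
    def __init__(self, adj, fanout, refresh_every=100):
        self.adj = adj                  # node -> list of neighbors
        self.fanout = fanout            # neighbors kept per snapshot
        self.refresh_every = refresh_every
        self.cache = {}                 # node -> cached neighbor sample
        self.calls = {}                 # node -> access count

    def sample(self, node):
        hits = self.calls.get(node, 0)
        # Resample on first access and periodically thereafter; otherwise
        # serve the stored snapshot without touching the full graph.
        if node not in self.cache or hits % self.refresh_every == 0:
            nbrs = self.adj.get(node, [])
            self.cache[node] = random.sample(nbrs, min(self.fanout, len(nbrs)))
        self.calls[node] = hits + 1
        return self.cache[node]

sampler = SnapshotSampler({0: [1, 2, 3], 1: [0]}, fanout=2)
first = sampler.sample(0)
second = sampler.sample(0)   # served from the cached snapshot
```

The saving comes from amortization: repeated accesses to the same neighborhood hit the snapshot cache instead of re-running the sampler over the full adjacency structure.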


CppFlow: Generative Inverse Kinematics for Efficient and Robust Cartesian Path Planning

Morgan, Jeremy, Millard, David, Sukhatme, Gaurav S.

arXiv.org Artificial Intelligence

In this work we present CppFlow, a novel and performant planner for the Cartesian Path Planning problem, which finds valid trajectories up to 129x faster than current methods while also succeeding on more difficult problems where others fail. At the core of the proposed algorithm is a learned, generative Inverse Kinematics solver, which can efficiently produce entire promising candidate solution trajectories on the GPU. Precise, valid solutions are then found through classical approaches such as differentiable programming, global search, and optimization. By combining approaches from these two paradigms we get the best of both worlds: efficient approximate solutions from generative AI, made exact using the guarantees of traditional planning and optimization. We evaluate our system against other state-of-the-art methods on a set of established baselines, as well as new ones introduced in this work, and find that our method significantly outperforms others in the time to find a valid solution and in planning success rate, and performs comparably in trajectory length over time. The work is made open source and available for use upon acceptance.
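The "generative seeding, then classical refinement" pattern can be illustrated on a toy 2-link planar arm. Here batch random sampling stands in for the learned generative IK model, and a damped least-squares loop stands in for CppFlow's optimization stack; both substitutions are for illustration only.

```python
# Two-stage IK sketch: batch candidate generation, then local refinement
# (toy 2-link planar arm; not CppFlow's actual solver).
import numpy as np

L1, L2 = 1.0, 1.0

def fk(q):
    """Forward kinematics: end-effector position for joint angles q."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

rng = np.random.default_rng(0)
target = np.array([1.2, 0.8])

# Stage 1: "generative" seeding -- score many candidates at once and keep
# the most promising one (a learned model would propose these directly).
cands = rng.uniform(-np.pi, np.pi, size=(2000, 2))
errs = np.linalg.norm(np.array([fk(c) for c in cands]) - target, axis=1)
q = cands[np.argmin(errs)]

# Stage 2: classical refinement -- damped least-squares steps make the
# approximate seed exact.
for _ in range(50):
    e = target - fk(q)
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    J = np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                  [ L1 * c1 + L2 * c12,  L2 * c12]])
    step = np.linalg.solve(J.T @ J + 1e-6 * np.eye(2), J.T @ e)
    q = q + np.clip(step, -0.5, 0.5)
```

The division of labor mirrors the abstract: the cheap generative stage supplies a good basin of attraction, and the classical stage provides the precision guarantees.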


MOSAIC: Learning Unified Multi-Sensory Object Property Representations for Robot Perception

Tatiya, Gyan, Francis, Jonathan, Wu, Ho-Hsiang, Bisk, Yonatan, Sinapov, Jivko

arXiv.org Artificial Intelligence

A holistic understanding of object properties across diverse sensory modalities (e.g., visual, audio, and haptic) is essential for tasks ranging from object categorization to complex manipulation. Drawing inspiration from cognitive science studies that emphasize the significance of multi-sensory integration in human perception, we introduce MOSAIC (Multi-modal Object property learning with Self-Attention and Integrated Comprehension), a novel framework designed to facilitate the learning of unified multi-sensory object property representations. While it is undeniable that visual information plays a prominent role, we acknowledge that many fundamental object properties extend beyond the visual domain to encompass attributes like texture, mass distribution, or sounds, which significantly influence how we interact with objects. In MOSAIC, we leverage this profound insight by distilling knowledge from the extensive pre-trained Contrastive Language-Image Pre-training (CLIP) model, aligning these representations not only across vision but also haptic and auditory sensory modalities. Through extensive experiments on a dataset where a humanoid robot interacts with 100 objects across 10 exploratory behaviors, we demonstrate the versatility of MOSAIC in two task families: object categorization and object-fetching tasks. Our results underscore the efficacy of MOSAIC's unified representations, showing competitive performance in category recognition through a simple linear probe setup and excelling in the fetch object task under zero-shot transfer conditions. This work pioneers the application of CLIP-based sensory grounding in robotics, promising a significant leap in multi-sensory perception capabilities for autonomous systems. We have released the code, datasets, and additional results: https://github.com/gtatiya/MOSAIC.
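The distillation idea — align non-visual sensory embeddings with a frozen CLIP-style teacher so objects can be recognized zero-shot from other modalities — can be sketched with synthetic data. All numbers and the linear "student" below are invented stand-ins; MOSAIC trains a self-attention encoder on real multi-sensory robot data.

```python
# Sketch of cross-modal distillation: fit a "student" projection so haptic
# features land in a frozen vision-embedding space (all data synthetic).
import numpy as np

rng = np.random.default_rng(0)
n_obj, d_clip, d_haptic = 200, 64, 96

# Frozen "teacher": one CLIP-like vision embedding per object.
clip_vision = rng.standard_normal((n_obj, d_clip))

# Synthetic haptic readings, correlated with the vision embeddings.
mix = rng.standard_normal((d_clip, d_haptic))
haptic_raw = clip_vision @ mix + 0.01 * rng.standard_normal((n_obj, d_haptic))

# "Student": a linear map into the teacher space, fit by least squares as a
# stand-in for training an encoder under a distillation loss.
W, *_ = np.linalg.lstsq(haptic_raw, clip_vision, rcond=None)
haptic_emb = haptic_raw @ W

# Zero-shot "fetch": identify an object from haptics alone by matching
# against its vision embedding.
query = 7
scores = haptic_emb @ clip_vision[query]
picked = int(np.argmax(scores))
```

Once the modalities share one embedding space, a language- or vision-specified target can be matched against haptic or auditory observations without any per-object supervision, which is the zero-shot transfer setting the abstract highlights.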