Loynd, Ricky
Magentic-One: A Generalist Multi-Agent System for Solving Complex Tasks
Fourney, Adam, Bansal, Gagan, Mozannar, Hussein, Tan, Cheng, Salinas, Eduardo, Zhu, Erkang, Niedtner, Friederike, Proebsting, Grace, Bassman, Griffin, Gerrits, Jack, Alber, Jacob, Chang, Peter, Loynd, Ricky, West, Robert, Dibia, Victor, Awadallah, Ahmed, Kamar, Ece, Hosn, Rafah, Amershi, Saleema
Modern AI agents, driven by advances in large foundation models, promise to enhance our productivity and transform our lives by augmenting our knowledge and capabilities. To achieve this vision, AI agents must effectively plan, perform multi-step reasoning and actions, respond to novel observations, and recover from errors, to successfully complete complex tasks across a wide range of scenarios. In this work, we introduce Magentic-One, a high-performing open-source agentic system for solving such tasks. Magentic-One uses a multi-agent architecture where a lead agent, the Orchestrator, plans, tracks progress, and re-plans to recover from errors. Throughout task execution, the Orchestrator directs other specialized agents to perform tasks as needed, such as operating a web browser, navigating local files, or writing and executing Python code. We show that Magentic-One achieves statistically competitive performance to the state-of-the-art on three diverse and challenging agentic benchmarks: GAIA, AssistantBench, and WebArena. Magentic-One achieves these results without modification to core agent capabilities or to how they collaborate, demonstrating progress towards generalist agentic systems. Moreover, Magentic-One's modular design allows agents to be added or removed from the team without additional prompt tuning or training, easing development and making it extensible to future scenarios. We provide an open-source implementation of Magentic-One, and we include AutoGenBench, a standalone tool for agentic evaluation. AutoGenBench provides built-in controls for repetition and isolation to run agentic benchmarks in a rigorous and contained manner -- which is important when agents' actions have side-effects. Magentic-One, AutoGenBench, and detailed empirical performance evaluations of Magentic-One, including ablations and error analysis, are available at https://aka.ms/magentic-one
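The Orchestrator-led pattern described in the abstract can be sketched as a simple loop in which a lead agent maintains a progress ledger, delegates each step to a specialist, and re-plans on failure. This is an illustrative sketch only: the class and agent names (Orchestrator, WebSurfer, Coder) and the toy planner are hypothetical stand-ins, not the Magentic-One implementation or API.

```python
# Hedged sketch of an Orchestrator-led multi-agent loop (illustrative,
# not the actual Magentic-One code). The lead agent keeps a ledger of
# progress, routes each step to a specialist agent, and re-plans when a
# step reports failure.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    agents: dict[str, Callable[[str], str]]          # specialist name -> action
    ledger: list[str] = field(default_factory=list)  # progress log

    def plan(self, task: str) -> list[tuple[str, str]]:
        # Toy planner: route browsing steps to WebSurfer, code to Coder.
        return [("open docs page", "WebSurfer"), ("write script", "Coder")]

    def run(self, task: str, max_steps: int = 5) -> list[str]:
        plan = self.plan(task)
        steps_taken = 0
        while plan and steps_taken < max_steps:
            step, agent_name = plan.pop(0)
            result = self.agents[agent_name](step)
            self.ledger.append(f"{agent_name}: {result}")
            steps_taken += 1
            if result.startswith("error"):  # stalled: re-plan (simplified)
                plan = self.plan(task)
        return self.ledger

agents = {
    "WebSurfer": lambda step: f"done ({step})",
    "Coder": lambda step: f"done ({step})",
}
log = Orchestrator(agents).run("summarize a web page")
```

Because planning, routing, and the specialists are decoupled, adding or removing an agent only changes the `agents` dictionary, mirroring the modularity claim in the abstract.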
PLEX: Making the Most of the Available Data for Robotic Manipulation Pretraining
Thomas, Garrett, Cheng, Ching-An, Loynd, Ricky, Frujeri, Felipe Vieira, Vineet, Vibhav, Jalobeanu, Mihai, Kolobov, Andrey
Transformers [1] have led to breakthroughs in training large-scale general representations for computer vision (CV) and natural language processing (NLP) [2], enabling zero-shot adaptation and fast finetuning [3]. At the same time, despite impressive progress, transformer-based representations haven't shown the same versatility for robotic manipulation. Some attribute this gap to the lack of suitable training data for robotics [3]. We argue instead that data relevant to training robotic manipulation models is copious but has important structure that most existing training methods ignore and fail to leverage. These insights lead us to propose a novel transformer-based architecture, called PLEX, that is capable of effective learning from realistically available robotic manipulation datasets. We observe that robotics-relevant data falls into three major categories: (1) Video-only data, which contain high-quality and potentially description-annotated demonstrations for an immense variety of tasks but have no explicit action information for a robot to mimic; (2) Data containing matching sequences of percepts and actions, which are less plentiful than pure videos and don't necessarily correspond to meaningful tasks [4], but capture valuable correlations between a robot's actions and changes in the environment and are easy to collect on a given robot; (3) Small sets of high-quality sensorimotor demonstrations for a target task in a target environment. Thus, a scalable model architecture for robotic manipulation must be able to learn primarily from videos, while being extra data-efficient on sensorimotor training sequences and the small amount of target demonstrations. PLEX, the PLanning-EXecution architecture we propose, is designed to take advantage of data sources of these types.
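The three-way data taxonomy above can be summarized as a small routing table showing which part of a planning-execution model each data source can train. The planner/executor component names follow the abstract's PLanning-EXecution split, but the routing function itself is an illustrative assumption, not the paper's training procedure.

```python
# Illustrative summary of the three data categories from the abstract and
# which component of a planner/executor model each can plausibly train.
# The routing below is an assumption made for clarity, not PLEX itself.

VIDEO_ONLY = "video_only"        # (1) demonstrations without action labels
SENSORIMOTOR = "sensorimotor"    # (2) percept-action pairs, task-agnostic
TARGET_DEMOS = "target_demos"    # (3) small, task- and robot-specific sets

def components_trained_on(category: str) -> list[str]:
    """Which model components a data category can supervise."""
    routing = {
        VIDEO_ONLY: ["planner"],                # learn what should happen
        SENSORIMOTOR: ["executor"],             # learn how actions move the world
        TARGET_DEMOS: ["planner", "executor"],  # finetune the full stack
    }
    return routing[category]
```

The asymmetry in the table is the abstract's core argument: plentiful videos carry no actions, so only the executor needs the scarcer action-labeled data.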
Relational Attention: Generalizing Transformers for Graph-Structured Tasks
Diao, Cameron, Loynd, Ricky
Transformers flexibly operate over sets of real-valued vectors representing task-specific entities and their attributes, where each vector might encode one wordpiece token and its position in a sequence, or some piece of information that carries no position at all. But as set processors, standard transformers are at a disadvantage in reasoning over more general graph-structured data where nodes represent entities and edges represent relations between entities. To address this shortcoming, we generalize transformer attention to consider and update edge vectors in each transformer layer. We evaluate this relational transformer on a diverse array of graph-structured tasks, including the large and challenging CLRS Algorithmic Reasoning Benchmark. There, it dramatically outperforms state-of-the-art graph neural networks expressly designed to reason over graph-structured data. Our analysis demonstrates that these gains are attributable to relational attention's inherent ability to leverage the greater expressivity of graphs over sets. Graph-structured problems turn up in many domains, including knowledge bases (Hu et al., 2021; Bordes et al., 2013), communication networks (Leskovec et al., 2010), citation networks (McCallum et al., 2000), and molecules (Debnath et al., 1991; Zhang et al., 2020b). One example is predicting the bioactive properties of a molecule, where the atoms of the molecule are the nodes of the graph and the bonds are the edges. Along with their ubiquity, graph-structured problems vary widely in difficulty. For example, certain graph problems can be solved with a simple multi-layer perceptron, while others are quite challenging and require explicit modeling of relational characteristics. Graph Neural Networks (GNNs) are designed to process graph-structured data, including the graph's (possibly directed) edge structure and (in some cases) features associated with the edges.
[Figure 1: The relational transformer.]
Standard transformers lack the relational inductive biases (Battaglia et al., 2018) that are explicitly built into the most commonly used GNNs; instead, entities carrying domain-specific attributes (like position) are encoded as vectors for input to the same transformer architecture across different domains. (Work was done during an internship at Microsoft Research.)
[Figure 2: Categories of GNNs and Transformers, compared in terms of transformer machinery and edge vector incorporation.]
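The core mechanism described above, attention that both reads and writes edge vectors in every layer, can be sketched in a few lines of numpy. This is a deliberate simplification for illustration, not the paper's exact formulation: the weight matrices (Wq, Wk, Wv, Wek, Wev, We) and the additive way edges enter keys, values, and the edge update are assumptions made for clarity.

```python
# Minimal sketch of edge-aware attention (my simplification, not the
# paper's exact relational attention): keys and values for pair (i, j)
# mix node j's vector with the edge vector e_ij, and each layer also
# refreshes the edge vectors from their endpoint nodes.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relational_attention(nodes, edges, Wq, Wk, Wv, Wek, Wev, We):
    """One simplified layer.
    nodes: (n, d) node vectors; edges: (n, n, d) one vector per directed pair.
    """
    n, d = nodes.shape
    q = nodes @ Wq                                  # (n, d) queries from nodes
    k = nodes[None, :, :] @ Wk + edges @ Wek        # (n, n, d) edge-aware keys
    v = nodes[None, :, :] @ Wv + edges @ Wev        # (n, n, d) edge-aware values
    logits = np.einsum("id,ijd->ij", q, k) / np.sqrt(d)
    attn = softmax(logits, axis=-1)                 # (n, n) attention weights
    new_nodes = np.einsum("ij,ijd->id", attn, v)    # node update
    # Edge update: refresh each edge vector from its endpoint nodes
    # (summing endpoints is a simplification of the paper's update).
    pair = nodes[:, None, :] + nodes[None, :, :]    # (n, n, d)
    new_edges = edges + pair @ We
    return new_nodes, new_edges

rng = np.random.default_rng(0)
n, d = 3, 4
W = lambda: rng.normal(size=(d, d)) * 0.1
nodes, edges = rng.normal(size=(n, d)), rng.normal(size=(n, n, d))
new_nodes, new_edges = relational_attention(
    nodes, edges, W(), W(), W(), W(), W(), W())
```

The key contrast with standard attention is visible in the shapes: keys and values are per-pair tensors of shape (n, n, d) rather than per-node matrices, which is what lets edge features influence attention and be updated in turn.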
MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control
Wagener, Nolan, Kolobov, Andrey, Frujeri, Felipe Vieira, Loynd, Ricky, Cheng, Ching-An, Hausknecht, Matthew
Simulated humanoids are an appealing research domain due to their physical capabilities. Nonetheless, they are also challenging to control, as a policy must drive an unstable, discontinuous, and high-dimensional physical system. One widely studied approach is to utilize motion capture (MoCap) data to teach the humanoid agent low-level skills (e.g., standing, walking, and running) that can then be re-used to synthesize high-level behaviors. However, even with MoCap data, controlling simulated humanoids remains very hard, as MoCap data offers only kinematic information. Finding physical control inputs to realize the demonstrated motions requires computationally intensive methods like reinforcement learning. Thus, despite the publicly available MoCap data, its utility has been limited to institutions with large-scale compute. In this work, we dramatically lower the barrier for productive research on this topic by training and releasing high-quality agents that can track over three hours of MoCap data for a simulated humanoid in the dm_control physics-based environment. We release MoCapAct (Motion Capture with Actions), a dataset of these expert agents and their rollouts, which contain proprioceptive observations and actions. We demonstrate the utility of MoCapAct by using it to train a single hierarchical policy capable of tracking the entire MoCap dataset within dm_control and show the learned low-level component can be re-used to efficiently learn downstream high-level tasks. Finally, we use MoCapAct to train an autoregressive GPT model and show that it can control a simulated humanoid to perform natural motion completion given a motion prompt. Videos of the results and links to the code and dataset are available at https://microsoft.github.io/MoCapAct.
NAIL: A General Interactive Fiction Agent
Hausknecht, Matthew, Loynd, Ricky, Yang, Greg, Swaminathan, Adith, Williams, Jason D.
Interactive Fiction (IF) games are complex textual decision making problems. This paper introduces NAIL, an autonomous agent for general parser-based IF games. NAIL won the 2018 Text Adventure AI Competition, where it was evaluated on twenty unseen games. This paper describes the architecture, development, and insights underpinning NAIL's performance.