Agents


Computer, is my experiment finished? Researchers discuss the use of AI agents in their research

#artificialintelligence

Everyone knows that the Computer--an artificial intelligence (AI)-like entity--on a Star Trek spaceship does everything from brewing tea to compiling complex analyses of flux data. But how is AI used at real research facilities? How can AI agents--computer programs that can act based on a perceived environment--help scientists discover next-generation batteries or quantum materials? Three staff members at the National Synchrotron Light Source II (NSLS-II) described how AI agents support scientists using the facility's research tools. As a U.S. Department of Energy (DOE) Office of Science user facility located at DOE's Brookhaven National Laboratory, NSLS-II offers its experimental capabilities to scientists from all over the world, who use it to reveal the mysteries of materials for tomorrow's technology.



Creating Emergent Behaviors with Reinforcement Learning and Unreal Engine

#artificialintelligence

In the following article I discuss how to generate emergent behavior in AI characters using Unreal Engine, Reinforcement Learning, and the free machine learning plugin MindMaker. The aim is that the interested reader can use this as a guide for creating emergent behavior in their own game project or embodied AI character. Emergent behavior refers to behaviors that are not pre-programmed but develop organically in response to environmental stimuli. Emergent behavior is common to many, if not all, forms of life, being a function of evolution itself. More recently, it has also become a feature of embodied artificial agents. When one employs emergent behavior methods, one does not rigidly program specific actions for the AI but instead allows its behavior to "evolve" through some adaptive algorithm such as genetic programming, reinforcement learning, or Monte Carlo methods.
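To make the idea concrete, here is a minimal sketch of one such adaptive algorithm, tabular Q-learning, in which behavior emerges from reward feedback rather than hand-scripted rules. The toy one-dimensional environment and all names are illustrative only; they are not the MindMaker plugin's actual interface.

```python
# Minimal tabular Q-learning sketch: behavior "emerges" from reward
# feedback rather than hand-scripted rules. The toy environment and
# all names here are illustrative, not the MindMaker plugin's API.
import random
from collections import defaultdict

ACTIONS = ["left", "right"]

def step(state, action):
    """Toy 1-D world: reach position 5 for reward, fall off 0 for penalty."""
    state += 1 if action == "right" else -1
    if state >= 5:
        return state, 1.0, True   # goal reached
    if state <= 0:
        return state, -1.0, True  # fell off the edge
    return state, 0.0, False

q = defaultdict(float)            # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state, done = 2, False
    while not done:
        # Epsilon-greedy: explore occasionally, otherwise exploit.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

print({k: round(v, 2) for k, v in q.items()})
```

After training, "walk right toward the goal" dominates the table even though no such rule was ever written down, which is the emergent-behavior pattern the article describes.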


Is reinforcement (machine) learning overhyped?

#artificialintelligence

Imagine you are about to sit down to play a game with a friend. But this isn't just any friend -- it's a computer program that doesn't know the rules of the game. It does, however, understand that it has a goal, and that goal is to win. Because this friend doesn't know the rules, it starts by making random moves. Some of them make absolutely no sense, and winning for you is easy.
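The excerpt's trial-and-error idea can be shown in a few lines: an agent that knows only that winning pays off starts with random moves and gradually prefers those that led to wins. The game, the exploration bonus, and all names below are illustrative stand-ins.

```python
# Hedged illustration of the excerpt's trial-and-error idea: the
# "friend" knows only that winning pays off, so it starts essentially
# at random and gradually prefers the moves that led to wins.
import random

moves = ["rock", "paper", "scissors"]
wins = {m: 0 for m in moves}
plays = {m: 1 for m in moves}     # start at 1 to avoid division by zero

def opponent():
    return "rock"                 # you always play rock; the agent must discover paper

for game in range(200):
    # Early on this is essentially random; over time the win-rate
    # estimates steer the agent toward the winning move.
    agent = max(moves, key=lambda m: wins[m] / plays[m] + random.uniform(0, 0.5))
    plays[agent] += 1
    if (agent, opponent()) == ("paper", "rock"):
        wins[agent] += 1

print(max(moves, key=lambda m: wins[m] / plays[m]))  # -> 'paper'
```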


Overview - Power Virtual Agents

#artificialintelligence

Power Virtual Agents lets you create powerful AI-powered chatbots for a range of requests--from providing simple answers to common questions to resolving issues requiring complex conversations. Engage with customers and employees in multiple languages across websites, mobile apps, Facebook, Microsoft Teams, or any channel supported by the Azure Bot Framework. These bots can be created easily, without the need for data scientists or developers. Power Virtual Agents is available both as a standalone web app and as a discrete app within Microsoft Teams; most of the functionality is the same in both.


ACT-1: How Adept Is Building the Future of AI with Action Transformers

#artificialintelligence

One of AI's most ambitious goals is to build systems that can do everything a human can. GPT-3 can write and Stable Diffusion can paint, but neither can interact with the world directly. AI companies have been trying to create such intelligent agents for ten years. This now seems to be changing. One of my latest articles covers Google's PaLM-SayCan (PSC), a robot powered by PaLM, the best large language model to date. PSC's language module can interpret human requests expressed in natural language and transform them into high-level tasks, which can be further broken down into elemental actions.
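The decomposition pattern the excerpt describes--natural-language request to high-level tasks to elemental actions--can be sketched as two lookup layers. This is purely hypothetical scaffolding to show the shape of the pipeline; it is not PaLM-SayCan's or Adept's actual code, and all names are invented.

```python
# Purely hypothetical sketch of the decomposition pattern described
# above: a request becomes high-level tasks, which expand into
# elemental actions. Not PaLM-SayCan's or Adept's actual code.
from typing import List

# Hand-written stand-in for the language model's interpretation step.
HIGH_LEVEL = {
    "bring me a drink": ["go_to(kitchen)", "pick_up(can)", "go_to(user)", "hand_over(can)"],
}

# Each high-level task expands into primitive, executable actions.
PRIMITIVES = {
    "go_to(kitchen)": ["turn(90)", "move_forward(3.0)"],
    "pick_up(can)": ["open_gripper()", "lower_arm()", "close_gripper()", "raise_arm()"],
    "go_to(user)": ["turn(180)", "move_forward(3.0)"],
    "hand_over(can)": ["extend_arm()", "open_gripper()"],
}

def plan(request: str) -> List[str]:
    actions = []
    for task in HIGH_LEVEL[request]:
        actions.extend(PRIMITIVES[task])
    return actions

print(plan("bring me a drink"))
```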


Altruistic Hedonic Games

Journal of Artificial Intelligence Research

Hedonic games are coalition formation games in which players have preferences over the coalitions they can join. For a long time, all models for representing hedonic games were based on selfish players only. Among the known ways of representing hedonic games compactly, we focus on friend-oriented hedonic games and propose a novel model for them that takes into account not only the players' own preferences but also their friends' preferences. Depending on the order in which players look at their own or their friends' preferences, we distinguish three degrees of altruism: selfish-first, equal-treatment, and altruistic-treatment preferences. We study both the axiomatic properties of these games and the computational complexity of problems related to various common stability concepts.
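A rough sketch of the three degrees of altruism described in the abstract, built on friend-oriented valuations: the weighting constant M and the exact aggregation rules below are illustrative readings of the abstract, not the paper's precise definitions.

```python
# Rough sketch of the three degrees of altruism over friend-oriented
# valuations. The constant M and aggregation details are illustrative;
# see the paper for the precise definitions.
def friend_oriented_value(player, coalition, friends, n):
    """v_i(C): each friend in C counts +n, each non-friend counts -1."""
    others = coalition - {player}
    f = len(others & friends[player])
    return n * f - (len(others) - f)

def utilities(player, coalition, friends, n, M=1000):
    v_own = friend_oriented_value(player, coalition, friends, n)
    in_c = coalition & friends[player]
    v_friends = (sum(friend_oriented_value(f, coalition, friends, n) for f in in_c)
                 / len(in_c)) if in_c else 0.0
    return {
        "selfish_first": M * v_own + v_friends,          # own value dominates
        "equal_treatment": (v_own + sum(friend_oriented_value(f, coalition, friends, n)
                                        for f in in_c)) / (1 + len(in_c)),
        "altruistic_treatment": v_own + M * v_friends,   # friends' values dominate
    }

# Players 0..3; player 0 is friends with 1 and 2.
friends = {0: {1, 2}, 1: {0}, 2: {0}, 3: set()}
print(utilities(0, {0, 1, 2, 3}, friends, n=4))
```

The three dictionary entries differ only in which term carries the large weight, which mirrors the order-of-consideration distinction the abstract draws.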


Motion Planning Under Uncertainty with Complex Agents and Environments via Hybrid Search

Journal of Artificial Intelligence Research

As autonomous systems and robots are applied to more real world situations, they must reason about uncertainty when planning actions. Mission success oftentimes cannot be guaranteed and the planner must reason about the probability of failure. Unfortunately, computing a trajectory that satisfies mission goals while constraining the probability of failure is difficult because of the need to reason about complex, multidimensional probability distributions. Recent methods have seen success using chance-constrained, model-based planning. However, the majority of these methods can only handle simple environment and agent models. We argue that there are two main drawbacks of current approaches to goal-directed motion planning under uncertainty. First, current methods suffer from an inability to deal with expressive environment models such as 3D non-convex obstacles. Second, most planners rely on considerable simplifications when computing trajectory risk including approximating the agent’s dynamics, geometry, and uncertainty. In this article, we apply hybrid search to the risk-bound, goal-directed planning problem. The hybrid search consists of a region planner and a trajectory planner. The region planner makes discrete choices by reasoning about geometric regions that the autonomous agent should visit in order to accomplish its mission. In formulating the region planner, we propose landmark regions that help produce obstacle-free paths. The region planner passes paths through the environment to a trajectory planner; the task of the trajectory planner is to optimize trajectories that respect the agent’s dynamics and the user’s desired risk of mission failure. We discuss three approaches to modeling trajectory risk: a CDF-based approach, a sampling-based collocation method, and an algorithm named Shooting Method Monte Carlo. These models allow computation of trajectory risk with more complex environments, agent dynamics, geometries, and models of uncertainty than past approaches. A variety of 2D and 3D test cases are presented including a linear case, a Dubins car model, and an underwater autonomous vehicle. The method is shown to outperform other methods in terms of speed and utility of the solution. Additionally, the models of trajectory risk are shown to better approximate risk in simulation.
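In the spirit of the article's shooting-method approach, trajectory risk can be estimated by rolling out noisy dynamics many times and counting the fraction of trajectories that collide. The dynamics, noise model, and obstacle below are toy stand-ins, not the paper's models.

```python
# Hedged sketch of Monte Carlo trajectory-risk estimation: sample many
# noisy rollouts of a plan and estimate the probability that a
# trajectory hits an obstacle. All models here are toy stand-ins.
import random

def rollout(controls, noise_std=0.05):
    """Integrate a noisy single-integrator; return the visited states."""
    x, y, states = 0.0, 0.0, []
    for ux, uy in controls:
        x += ux + random.gauss(0, noise_std)
        y += uy + random.gauss(0, noise_std)
        states.append((x, y))
    return states

def hits_obstacle(states, center=(1.0, 0.0), radius=0.25):
    return any((x - center[0]) ** 2 + (y - center[1]) ** 2 <= radius ** 2
               for x, y in states)

def estimate_risk(controls, samples=5000):
    failures = sum(hits_obstacle(rollout(controls)) for _ in range(samples))
    return failures / samples

# A straight-line plan that skirts the obstacle; check the estimate
# against a user-specified bound on the probability of failure.
plan = [(0.2, 0.08)] * 10
risk = estimate_risk(plan)
print(f"estimated failure probability: {risk:.3f}",
      "OK" if risk <= 0.2 else "too risky")
```

This is the simplest version of the idea; the article's planners additionally optimize the trajectory subject to the risk bound rather than merely checking a fixed plan.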


GitHub - xavierpuigf/virtualhome: API to run VirtualHome, a Multi-Agent Household Simulator

#artificialintelligence

VirtualHome is an interactive platform to simulate complex household activities via programs. A key aspect of VirtualHome is that it allows complex interactions with the environment, such as picking up objects, switching appliances on and off, opening appliances, etc. Our simulator can easily be called with a Python API: write the activity as a simple sequence of instructions, which then get rendered in VirtualHome. You can choose between different agents and environments, as well as modify environments on the fly. The platform allows you to simulate multi-agent activities and can serve as an environment to train agents for embodied AI tasks.
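A sketch of the usage pattern the README describes: connect to the simulator, reset an environment, and render an activity written as a sequence of instruction strings. Import paths, method names, character assets, and object ids are taken from the repository's examples and may differ between versions; treat them as assumptions.

```python
# Sketch of the documented usage pattern; exact import path, script
# syntax, and object ids vary across VirtualHome versions.
from virtualhome.simulation.unity_simulator import comm_unity

comm = comm_unity.UnityCommunication()   # assumes the Unity executable is running
comm.reset(0)                            # load environment 0
comm.add_character('Chars/Female1')      # spawn an agent (asset name assumed)

# An activity is a simple sequence of instruction strings; the (1)
# ids must match object instances in the loaded environment.
script = [
    '<char0> [Walk] <fridge> (1)',
    '<char0> [Open] <fridge> (1)',
    '<char0> [Close] <fridge> (1)',
]
success, message = comm.render_script(script, recording=True, frame_rate=10)
print(success, message)
```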


RLlib for Deep Hierarchical Multiagent Reinforcement Learning

#artificialintelligence

Reinforcement learning (RL) is an effective method for solving problems that require agents to learn the best way to act in complex environments. RLlib is a powerful tool for applying reinforcement learning to problems where there are multiple agents or where agents must take on multiple roles. There are many resources for learning about RLlib from a theoretical or academic perspective, but there is a lack of materials for learning how to use RLlib to solve your own practical problems. This tutorial helps to fill that gap. If you want to get right into RLlib, feel free to skip to the next section. Thorndike observed that some behaviors in animals arise from a gradual "stamping in" [Thorndike, 1898].
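For orientation, here is a hedged sketch of RLlib's multi-agent configuration pattern in the Ray 2.x style. MultiAgentCartPole ships with RLlib's examples, but module paths and config methods move between Ray versions, so treat this as a starting point rather than a pinned recipe; hierarchical setups map high- and low-level agents to different policies in the same way.

```python
# Hedged sketch of RLlib's multi-agent configuration (Ray 2.x style).
# Module paths and result keys shift between Ray versions.
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.examples.env.multi_agent import MultiAgentCartPole

config = (
    PPOConfig()
    .environment(MultiAgentCartPole, env_config={"num_agents": 2})
    .multi_agent(
        # One policy per role; a hierarchical problem would map
        # high-level and low-level agents to different policies here.
        policies={"agent_0", "agent_1"},
        policy_mapping_fn=lambda agent_id, *args, **kwargs: f"agent_{agent_id}",
    )
)

algo = config.build()
for i in range(3):
    result = algo.train()
    print(i, result["episode_reward_mean"])
```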