Collaborating Authors

 United States Naval Research Laboratory


Human-Agent Teaming as a Common Problem for Goal Reasoning

AAAI Conferences

Human-agent teaming is a difficult yet relevant problem domain to which many goal reasoning systems are well suited, due to their ability to accept outside direction and their (relatively) human-understandable internal state. We propose a formal model, and multiple variations on a multi-agent problem, to clarify and unify research in goal reasoning. We describe examples of these concepts, and propose standard evaluation methods for goal reasoning agents that act as a member of a team or on behalf of a supervisor.


Comparing Reward Shaping, Visual Hints, and Curriculum Learning

AAAI Conferences

Common approaches to learning complex tasks in reinforcement learning include reward shaping, environmental hints, and curricula. Yet few studies examine how these approaches compare to each other, when one might be preferred, or how they may complement each other. As a first step in this direction, we compare reward shaping, hints, and curricula for a Deep RL agent in the game of Minecraft. We ask whether reward shaping, visual hints, or the curriculum has the greatest impact on performance, which we measure as the time to reach the target, the distance from the target, the cumulative reward, and the number of actions taken. Our analyses show that performance is most affected by the curriculum used and by visual hints; shaping had less impact. For similar navigation tasks, the results suggest that designing an effective curriculum and providing appropriate hints improve performance the most.
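To make the contrast concrete, the two main approaches compared above can be sketched as follows. This is a hypothetical illustration, not the paper's code: the reward terms, distances, and stage schedule are assumptions chosen for a generic grid-navigation task like the Minecraft one described.

```python
# Hypothetical sketch of the two techniques compared in the abstract:
# (1) reward shaping: augment the sparse goal reward with a dense
#     distance-based term so the agent gets feedback every step;
# (2) a curriculum: start episodes progressively farther from the target
#     as training advances.

def shaped_reward(pos, target, reached, step_cost=-0.01, goal_bonus=1.0):
    """Sparse goal reward plus a distance-based shaping term."""
    # Manhattan distance on a grid; closer to the target is better.
    dist = abs(pos[0] - target[0]) + abs(pos[1] - target[1])
    shaping = -0.1 * dist
    return (goal_bonus if reached else 0.0) + step_cost + shaping

def curriculum_start_distance(episode, stages=(2, 5, 10),
                              episodes_per_stage=1000):
    """Curriculum: spawn distance from the target grows with training."""
    stage = min(episode // episodes_per_stage, len(stages) - 1)
    return stages[stage]
```

A shaped reward changes every transition's feedback, whereas a curriculum only changes the distribution of starting states; the abstract's finding is that, for this navigation task, the latter (plus visual hints) mattered more.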


Robotic Swarms as Solids, Liquids and Gasses

AAAI Conferences

There have been significant advances in developing algorithms that allow researchers to examine these behaviors in simulation (Luke et al. 2005), generally assuming noise-free estimates of the agents' own, neighbors' and targets' positions. However, the actual information flow into biological agents, in terms of the sensing, processing and …

… each phase of the mission. Secondly, based on our everyday experience with physical objects in our environment, the three major physical states of matter, solid, liquid and gas, represent a natural and intuitive means of describing the types of motions a swarm of mobile robots can perform as they cluster, transit or wander (Gage 1992).