Learning Hierarchical Control For Multi-Agent Capacity-Constrained Systems
Vallon, Charlott, Pinto, Alessandro, Stellato, Bartolomeo, Borrelli, Francesco
This paper introduces a novel data-driven hierarchical control scheme for managing a fleet of nonlinear, capacity-constrained autonomous agents in an iterative environment. We propose a control framework consisting of a high-level dynamic task assignment and routing layer and a low-level motion planning and tracking layer. Each layer of the control hierarchy uses a data-driven Model Predictive Control (MPC) policy, maintaining bounded computational complexity at each calculation of a new task assignment or actuation input. We utilize collected data to iteratively refine estimates of agent capacity usage, and update MPC policy parameters accordingly. Our approach leverages tools from iterative learning control to integrate learning at both levels of the hierarchy, and coordinates learning between levels in order to maintain closed-loop feasibility and performance improvement of the connected architecture.
Survey of Human Models for Verification of Human-Machine Systems
Wang, Timothy E., Pinto, Alessandro
We survey the landscape of human operator modeling, ranging from the early cognitive models developed in artificial intelligence to more recent formal task models developed for model-checking of human-machine interactions. We review human performance modeling and human factors studies in the context of aviation, including models of how the pilot interacts with automation in the cockpit. The purpose of the survey is to assess the applicability of available state-of-the-art models of human operators for the design, verification, and validation of future safety-critical aviation systems that exhibit higher levels of autonomy but still require human operators in the loop. These systems include single-pilot aircraft and NextGen air traffic management. We discuss the gaps in existing models and propose future research to address them.
Assurance for Autonomy -- JPL's past research, lessons learned, and future directions
Feather, Martin S., Pinto, Alessandro
Robotic space missions have long depended on automation, defined in the 2015 NASA Technology Roadmaps as "the automatically-controlled operation of an apparatus, process, or system using a pre-planned set of instructions (e.g., a command sequence)," to react to events when a rapid response is required. Autonomy, defined there as "the capacity of a system to achieve goals while operating independently from external control," is required when a wide variation in circumstances precludes responses from being pre-planned; instead, autonomy follows an on-board deliberative process to determine the situation, decide the response, and manage its execution. Autonomy is increasingly called for to support adventurous space mission concepts, as an enabling capability or as a significant enhancer of the science value that those missions can return. But if autonomy is to be allowed to control these missions' expensive assets, all parties in the lifetime of a mission, from proposers through ground control, must have high confidence that autonomy will perform as intended to keep the asset safe and, if possible, accomplish the mission objectives. Mission assurance is a key contributor to providing this confidence, yet assurance practices honed over decades of spaceflight have relatively little experience with autonomy. To remedy this situation, researchers in JPL's software assurance group have been involved in the development of techniques specific to the assurance of autonomy. This paper summarizes over two decades of this research, and offers a vision of where further work is needed to address open issues.
Metaphysics of Planning Domain Descriptions
Srivastava, Siddharth (United Technologies Research Center, Berkeley) | Russell, Stuart (University of California Berkeley) | Pinto, Alessandro (United Technologies Research Center, Berkeley)
STRIPS-like languages (SLLs) have fostered immense advances in automated planning. In practice, SLLs are used to express highly abstract versions of real-world planning problems, leading to more concise models and faster solution times. Unfortunately, as we show in the paper, simple ways of abstracting solvable real-world problems may lead to SLL models that are unsolvable, SLL models whose solutions are incorrect with respect to the real-world problem, or models that are inexpressible in SLLs. There is some evidence that such limitations have restricted the applicability of AI planning technology in the real world, as is apparent in the case of task and motion planning in robotics. We show that the situation can be ameliorated by a combination of increased expressive power — for example, allowing angelic nondeterminism in action effects — and new kinds of algorithmic approaches designed to produce correct solutions from initially incorrect or non-Markovian abstract models.
Metaphysics of Planning Domain Descriptions
Srivastava, Siddharth (United Technologies, Berkeley) | Russell, Stuart (University of California, Berkeley) | Pinto, Alessandro (United Technologies, Berkeley)
Domain models for sequential decision making typically represent abstract versions of real-world systems. In practice, such representations are compact, easy to maintain, and afford faster solution times. Unfortunately, as we show in this paper, simple ways of abstracting solvable real-world problems may lead to models whose solutions are incorrect with respect to the real-world problem. There is some evidence that such limitations have restricted the applicability of SDM technology in the real world, as is apparent in the case of task and motion planning in robotics. We show that the situation can be ameliorated by a combination of increased expressive power---for example, allowing angelic nondeterminism in action effects---and new kinds of algorithmic approaches designed to produce correct solutions from initially incorrect or non-Markovian abstract models.