Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
As technology becomes more advanced, those who design, use, and are otherwise affected by it want to know that it will perform correctly, understand why it does what it does, and know how to use it appropriately. In essence, they want to be able to trust the systems being designed. In this survey we present assurances: the means by which users can understand how to trust autonomous systems. Trust between humans and autonomy is reviewed, and the implications for the design of assurances are highlighted. A survey of existing research related to assurances is presented. Much of the surveyed research originates from fields such as interpretable, comprehensible, transparent, and explainable machine learning, as well as human-computer interaction, human-robot interaction, and e-commerce. Several key ideas are extracted from this work to refine the definition of assurances. The design of assurances is found to depend strongly not only on the capabilities of the autonomous system, but also on the characteristics of the human user and the appropriate trust-related behaviors. Several directions for future research are identified and discussed.
Beer, R. Dirk (Pacific Science and Engineering Group) | Rieth, Cory A. (Pacific Science and Engineering Group) | Tran, Randy (Pacific Science and Engineering Group) | Cook, Maia B. (Pacific Science and Engineering Group)
The increasing prevalence and complexity of robotic and autonomous systems (RAS), and promising applications of hybrid multi-human multi-RAS teams across a wide range of domains, pose a challenge to user interface designers, autonomy researchers, system developers, program managers, and manning/personnel analysts. These stakeholders need a principled, generalizable approach to analyze these teams in an operational context in order to design effective team configurations and human-system interfaces. To meet this need, we have developed a theoretical framework and software simulation that support analysis to understand and predict the type and number of human-RAS and human-human interaction task demands imposed by the mission and operational context. We extend previous research to include multi-human multi-RAS teams, and emphasize generalizability across a wide range of current and future RAS technologies and military and commercial applications. To ensure that our framework is grounded in mission and operational realities, we validated the framework structure with domain experts. The framework characterizes Operational Context, Team Configuration, and Interaction Task Demands, and defines relationships between these constructs. These relationships are complex, and prediction of Interaction Task Demands quickly becomes difficult even for small teams. Therefore, to support analysis, we developed a software simulation (Beer, Rieth, Tran, & Cook, 2016) that predicts these demands and allows testing and validation of the framework. The framework and simulation presented here provide a step forward in the development of a systematic, well-defined, principled process to analyze the design tradeoffs and requirements for a wide range of future hybrid multi-human multi-RAS teams.
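The abstract notes that Interaction Task Demands grow complex even for small teams. As a purely illustrative sketch (not the authors' framework or simulation, whose internals the abstract does not describe), the toy Python model below shows why: with every human-RAS pairing and every human-human pairing treated as a potential interaction channel, the channel count, and hence the demand estimate, grows combinatorially with team size. All class names, fields, and the demand formula here are invented assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical toy model loosely inspired by the framework's three constructs:
# Operational Context, Team Configuration, and Interaction Task Demands.
# Names, fields, and the demand formula are illustrative assumptions, not the
# published framework.

@dataclass
class OperationalContext:
    mission_phases: List[str]   # assumed: e.g. ["transit", "search"]
    tempo: float                # assumed: interaction-triggering events per hour

@dataclass
class TeamConfiguration:
    humans: int                 # number of human operators
    ras_platforms: int          # number of robotic/autonomous systems

def interaction_task_demands(ctx: OperationalContext,
                             team: TeamConfiguration) -> float:
    """Toy estimate: treat each human-RAS pairing and each human-human
    pairing as an interaction channel, each generating tasks in proportion
    to mission tempo and the number of mission phases."""
    human_pairs = team.humans * (team.humans - 1) / 2
    channels = team.humans * team.ras_platforms + human_pairs
    return channels * ctx.tempo * len(ctx.mission_phases)

ctx = OperationalContext(mission_phases=["transit", "search"], tempo=4.0)
team = TeamConfiguration(humans=2, ras_platforms=3)
# 2*3 human-RAS channels + 1 human-human pair = 7 channels; 7 * 4.0 * 2 phases
print(interaction_task_demands(ctx, team))  # → 56.0
```

Even this crude model makes the abstract's point concrete: doubling team size more than doubles the channel count, so an analyst quickly needs simulation support rather than hand calculation.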