Competency Model Approach to AI Literacy: Research-based Path from Initial Framework to Model
Faruqe, Farhana, Watkins, Ryan, Medsker, Larry
The recent developments in Artificial Intelligence (AI) technologies challenge educators and educational institutions to respond with curriculum and resources that prepare students of all ages with the foundational knowledge and skills for success in the AI workplace. Research on AI Literacy could lead to an effective and practical platform for developing these skills. We propose and advocate for a pathway for developing AI Literacy as a pragmatic and useful tool for AI education. Such a discipline requires moving beyond a conceptual framework to a multi-level competency model with associated competency assessments. This approach to AI Literacy could guide the future development of instructional content as we prepare a range of groups (i.e., consumers, co-workers, collaborators, and creators). As an initial step toward a roadmap for AI Literacy research, we propose a research matrix for expanding the areas of competency and assessment; carrying it out will require a systematic and coordinated effort supported by publication outlets and research funding.
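The abstract stays at the framework level; as a purely illustrative sketch (the group names come from the abstract, but the competency names, levels, and assessment fields below are hypothetical placeholders, not the paper's model), a multi-level competency model and its research matrix might be encoded like this:

```python
from dataclasses import dataclass

# Groups are from the abstract; levels are invented for illustration.
GROUPS = ["consumer", "co-worker", "collaborator", "creator"]
LEVELS = ["awareness", "application", "evaluation"]

@dataclass
class Competency:
    name: str                # e.g., "identify AI-driven features"
    group: str               # which audience it targets
    level: str               # depth expected of that audience
    assessment: str = "TBD"  # how mastery would be measured

def research_matrix(competencies):
    """Map each (group, level) cell to its competencies, exposing
    which cells still lack competencies or assessments."""
    matrix = {(g, l): [] for g in GROUPS for l in LEVELS}
    for c in competencies:
        matrix[(c.group, c.level)].append(c)
    return matrix

example = [Competency("identify AI-driven features", "consumer", "awareness")]
print(research_matrix(example)[("consumer", "awareness")])
```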
Monitoring Trust in Human-Machine Interactions for Public Sector Applications
Faruqe, Farhana, Watkins, Ryan, Medsker, Larry
The work reported here addresses the capacity of psychophysiological sensors and measures using electroencephalogram (EEG) and galvanic skin response (GSR) to detect levels of trust in humans using AI-supported Human-Machine Interaction (HMI). Improvements to the analysis of EEG and GSR data may yield models that perform as well as, or better than, traditional tools. A challenge in analyzing EEG and GSR data is the large amount of training data required, owing to the large number of variables in the measurements. Researchers have routinely used standard machine-learning classifiers such as artificial neural networks (ANN), support vector machines (SVM), and k-nearest neighbors (KNN). Traditionally, these have provided few insights into which features of the EEG and GSR data drive the most and least accurate predictions, making it harder to improve the HMI and the human-machine trust relationship. A key ingredient in applying trust-sensor research to practical settings and in monitoring trust in work environments is understanding which features contribute to trust and then reducing the amount of data needed for practical applications. We used Local Interpretable Model-agnostic Explanations (LIME) as a method to reduce the volume of data required to monitor and enhance trust in HMI systems, a technology that could be valuable for governmental and public sector applications. Explainable AI can make HMI systems transparent and promote trust. From customer service in government agencies and community-level non-profit public service organizations to national military and cybersecurity institutions, many public sector organizations are increasingly concerned with having effective and ethical HMI and with services that are trustworthy, unbiased, and free of unintended negative consequences.
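The abstract does not give implementation details; a minimal sketch of the kind of pipeline it describes, assuming tabular EEG/GSR features, a scikit-learn SVM (one of the classifiers the abstract names), and the `lime` package (the feature names, data, and trust labels below are invented stand-ins), could look like this:

```python
import numpy as np
from sklearn.svm import SVC
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical stand-in for preprocessed EEG/GSR features: rows are trials,
# columns are features such as band powers or GSR peak counts (names invented).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))
y_train = rng.integers(0, 2, size=200)  # 0 = low trust, 1 = high trust
feature_names = ["eeg_alpha", "eeg_beta", "eeg_theta",
                 "gsr_mean", "gsr_peaks", "gsr_slope"]

# One of the standard classifiers the abstract mentions (SVM).
clf = SVC(probability=True).fit(X_train, y_train)

# LIME explains an individual prediction by fitting a local linear model,
# yielding a per-feature weight for that trial.
explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 class_names=["low trust", "high trust"],
                                 mode="classification")
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=6)

# Features with consistently large weights across trials are candidates to
# keep; the rest could be dropped to reduce the data needed in deployment.
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```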
Responding to Challenges in the Design of Moral Autonomous Vehicles
Zhao, Helen (Johns Hopkins University) | Dimovitz, Kirsten (The George Washington University) | Staveland, Brooke (The George Washington University) | Medsker, Larry (The George Washington University)
One major example of promising ‘smart’ technology in the public sector is the autonomous vehicle (AV). AVs are expected to yield numerous social benefits, such as increasing traffic efficiency, decreasing pollution, and decreasing traffic accidents by 90%. However, a 2016 study by Bonnefon et al. argued that manufacturers and regulators face a major design challenge in balancing competing public preferences: a moral preference for “utilitarian” algorithms; a consumer preference for vehicles that prioritize passenger safety; and a policy preference for minimal government regulation of vehicle algorithm design. Our paper responds to the 2016 study, calling into question the importance of explicitly moral algorithms and the seriousness of the challenge identified by Bonnefon et al. We conclude that the ‘social dilemma’ is probably overstated. Given that attempts to resolve the ‘social dilemma’ are likely to delay the rollout of socially beneficial AVs, we urge further research validating Bonnefon et al.’s conclusions and encourage manufacturers and regulators to commercialize AVs as soon as possible. We discuss the implications of this AV example for the larger context of Cognitive Assistance in other application areas and for the government and public policies now being discussed.
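For readers unfamiliar with the terminology, the contrast between the two algorithm types the abstract names can be shown with a deliberately toy decision rule (entirely illustrative; neither Bonnefon et al. nor this paper specifies such code, and the scenario and harm counts are invented):

```python
# Toy contrast between "utilitarian" and passenger-protective AV algorithms.

def utilitarian_choice(options):
    """Pick the action minimizing total expected harm,
    counting passengers and pedestrians equally."""
    return min(options, key=lambda o: o["passengers_harmed"] + o["pedestrians_harmed"])

def passenger_protective_choice(options):
    """Pick the action minimizing harm to passengers first,
    breaking ties by pedestrian harm."""
    return min(options, key=lambda o: (o["passengers_harmed"], o["pedestrians_harmed"]))

# A stylized dilemma: swerve (harming the passenger) or stay the course
# (harming several pedestrians).
options = [
    {"name": "swerve", "passengers_harmed": 1, "pedestrians_harmed": 0},
    {"name": "stay",   "passengers_harmed": 0, "pedestrians_harmed": 3},
]
print(utilitarian_choice(options)["name"])           # swerve
print(passenger_protective_choice(options)["name"])  # stay
```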
Reports on the 2014 AAAI Fall Symposium Series
Cohen, Adam B. (Independent Consultant) | Chernova, Sonia (Worcester Polytechnic Institute) | Giordano, James (Georgetown University Medical Center) | Guerin, Frank (University of Aberdeen) | Hauser, Kris (Duke University) | Indurkhya, Bipin (AGH University of Science and Technology) | Leonetti, Matteo (University of Texas at Austin) | Medsker, Larry (Siena College) | Michalowski, Martin (Adventium Labs) | Sonntag, Daniel (German Research Center for Artificial Intelligence) | Stojanov, Georgi (American University of Paris) | Tecuci, Dan G. (IBM Watson, Austin) | Thomaz, Andrea (Georgia Institute of Technology) | Veale, Tony (University College Dublin) | Waltinger, Ulli (Siemens Corporate Technology)
The AAAI 2014 Fall Symposium Series was held Thursday through Saturday, November 13–15, at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The titles of the seven symposia were Artificial Intelligence for Human-Robot Interaction, Energy Market Prediction, Expanding the Boundaries of Health Informatics Using AI, Knowledge, Skill, and Behavior Transfer in Autonomous Robots, Modeling Changing Perspectives: Reconceptualizing Sensorimotor Experiences, Natural Language Access to Big Data, and The Nature of Humans and Machines: A Multidisciplinary Discourse. The highlights of each symposium are presented in this report.
The program of the Artificial Intelligence for Human-Robot Interaction symposium also included six keynote presentations, a funding panel, a community panel, and multiple breakout sessions. The keynote presentations, given by speakers who have worked on AI for HRI for many years, focused on the larger intellectual picture of this subfield. Each speaker was asked to address, from his or her personal perspective, why HRI is an AI problem and how AI research can bring us closer to the reality of humans interacting with robots on everyday tasks. Speakers included Rodney Brooks (Rethink Robotics), Manuela Veloso (Carnegie Mellon University), Michael Goodrich (Brigham Young University), Benjamin Kuipers (University of Michigan), Maja Mataric (University of Southern California), and Brian Scassellati (Yale University).