SalienTrack: providing salient information for semi-automated self-tracking feedback with model explanations
Wang, Yunlong, Liu, Jiaying, Park, Homin, Schultz-McArdle, Jordan, Rosenthal, Stephanie, Lim, Brian Y.
Self-tracking can improve people's awareness of their unhealthy behaviors and provide insights toward behavior change. Prior work has explored how self-trackers reflect on their logged data, but it remains unclear how much they learn from the tracking feedback, and which information is most useful. Indeed, the feedback can still be overwhelming, and making it concise can improve learning by increasing focus and reducing interpretation burden. We conducted a field study of mobile food logging with two feedback modes (manual journaling and automatic annotation of food images) and identified learning differences regarding nutrition, assessment, behavioral, and contextual information. We propose a Self-Tracking Feedback Saliency Framework to define when to provide feedback, on which specific information, why those details, and how to present them (as manual inquiry or automatic feedback), and we present SalienTrack as an implementation of these requirements. Using the data collected from the user study, we trained a machine learning model to predict whether a user would learn from each tracked event. Using explainable AI (XAI) techniques, we identified the most salient features per instance and why they led to positive learning outcomes. We discuss implications for learnability in self-tracking, and how adding model explainability expands opportunities for improving the feedback experience.
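The closing pipeline in this abstract (train a classifier on tracked events, then explain per-instance predictions) follows a common pattern that can be sketched compactly. The snippet below is a minimal illustration of that pattern, not SalienTrack's actual implementation: the feature names, synthetic data, and the simple occlusion-style saliency score are all hypothetical stand-ins.

```python
# Hypothetical sketch of the abstract's train-then-explain pipeline, not the
# SalienTrack implementation: fit a classifier that predicts whether a user
# learns from a tracked event, then rank features per instance with a simple
# occlusion-style saliency. Feature names and data are invented stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["calories", "logged_manually", "novel_food", "time_of_day"]
X = rng.normal(size=(200, len(feature_names)))   # stand-in tracked-event features
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)   # "did the user learn?" classifier

def salient_features(x, background):
    """Score each feature by how much replacing it with the dataset mean
    changes the predicted probability of a positive learning outcome."""
    base = model.predict_proba(x[None])[0, 1]
    scores = {}
    for j, name in enumerate(feature_names):
        masked = x.copy()
        masked[j] = background[j]
        scores[name] = base - model.predict_proba(masked[None])[0, 1]
    return sorted(scores.items(), key=lambda kv: -abs(kv[1]))

print(salient_features(X[0], X.mean(axis=0)))    # most salient features first
```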
Impact of Explanation on Trust of a Novel Mobile Robot
Rosenthal, Stephanie, Carter, Elizabeth J.
One challenge with introducing robots into novel environments is the misalignment between supervisor expectations and reality, which can greatly affect a user's trust and continued use of the robot. We performed an experiment to test whether the presence of an explanation of expected robot behavior affected a supervisor's trust in an autonomous robot. We measured trust both subjectively through surveys and objectively through a dual-task experiment design that captures supervisors' neglect tolerance (i.e., their willingness to perform their own task while the robot acts autonomously). Our objective results show that explanations can help counteract the novelty effect of seeing a new robot perform in an unknown environment. During the first trials of the robot's behavior, participants who received an explanation were more likely to focus on their own task, at the risk of neglecting their robot-supervision task, than those who did not receive an explanation. However, this effect diminished after multiple trials, and participants who received explanations ended up as trusting of the robot's behavior as those who did not. Interestingly, participants were not able to identify their own changes in trust through their survey responses, demonstrating that the dual-task design captured subtler changes in a supervisor's trust.
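As a rough illustration of the objective measure described above, neglect tolerance can be summarized as the time a participant lets pass between checks on the robot. The sketch below assumes a hypothetical, simplified log of check timestamps; it is not the study's actual instrument or analysis.

```python
# Illustrative only: summarizing neglect tolerance from a hypothetical log of
# timestamps (seconds) at which a participant checked on the robot. This is a
# simplification, not the study's actual instrument or analysis.
def check_gaps(robot_check_times):
    """Durations between consecutive checks on the robot; longer gaps mean the
    participant stayed on their own task longer (higher neglect tolerance,
    read here as a behavioral proxy for trust)."""
    return [b - a for a, b in zip(robot_check_times, robot_check_times[1:])]

mean = lambda xs: sum(xs) / len(xs)
early_trial = [0, 12, 20, 31, 45]   # frequent checks: low neglect tolerance
late_trial = [0, 55, 118]           # sparse checks: higher tolerance
print(mean(check_gaps(early_trial)), mean(check_gaps(late_trial)))  # 11.25 59.0
```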
Understanding Convolutional Networks with APPLE: Automatic Patch Pattern Labeling for Explanation
Konam, Sandeep, Quah, Ian, Rosenthal, Stephanie, Veloso, Manuela
With the success of deep learning, recent efforts have focused on analyzing how learned networks make their classifications. We are interested in analyzing the network output based on the network structure and the information flow through the network layers. We contribute an algorithm for (1) analyzing a deep network to find neurons that are 'important' to the network's classification outcome, and (2) automatically labeling the patches of the input image that activate these important neurons. We propose several measures of neuron importance and demonstrate that our technique can be used to gain insight into, and explain, how a network decomposes an image to make its final classification.
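To make the two steps concrete, here is a minimal PyTorch sketch in the spirit of the algorithm: score convolutional channels by the magnitude of activation times gradient for the predicted class, then crop the input region under the top channel's strongest activation. This is not the paper's APPLE implementation; the network, the layer choice, and the fixed-size receptive-field approximation are all illustrative assumptions.

```python
# Minimal sketch (not the paper's APPLE implementation): rank conv channels by
# |activation x gradient| for the predicted class, then crop the input region
# under the top channel's strongest activation. The network, layer, and patch
# size are illustrative assumptions.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # untrained stand-in network
image = torch.randn(1, 3, 224, 224)            # stand-in input image

store = {}
def hook(module, inputs, output):
    output.retain_grad()                       # keep grads for a non-leaf tensor
    store["act"] = output

h = model.layer4.register_forward_hook(hook)   # assumed layer of interest
logits = model(image)
logits[0, logits.argmax()].backward()          # gradient of the top class score
h.remove()

act = store["act"]                                        # (1, C, H, W) activations
importance = (act * act.grad).abs().mean(dim=(2, 3))[0]   # one score per channel
top = int(importance.argmax())

# Map the top channel's peak activation back to an input patch (a crude
# receptive-field approximation using the feature map's spatial stride).
fmap = act[0, top]
row, col = divmod(int(fmap.argmax()), fmap.shape[1])
stride = image.shape[-1] // fmap.shape[-1]
patch = image[0, :, row * stride:(row + 1) * stride + 64,
                    col * stride:(col + 1) * stride + 64]
print(f"most important channel: {top}, patch shape: {tuple(patch.shape)}")
```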
Towards Visual Explanations for Convolutional Neural Networks via Input Resampling
Lengerich, Benjamin J., Konam, Sandeep, Xing, Eric P., Rosenthal, Stephanie, Veloso, Manuela
The predictive power of neural networks often comes at the cost of model interpretability. Several techniques have been developed for explaining model outputs in terms of input features; however, it is difficult to translate such interpretations into actionable insight. Here, we propose a framework to analyze predictions in terms of the model's internal features by inspecting information flow through the network. Given a trained network and a test image, we select neurons by two metrics, both measured over a set of images created by perturbations of the input image: (1) the magnitude of the correlation between the neuron activation and the network output, and (2) the precision of the neuron activation. We show that the former metric selects neurons that exert large influence over the network output, while the latter selects neurons that activate on generalizable features. By comparing the sets of neurons selected by these two metrics, our framework suggests a way to investigate the internal attention mechanisms of convolutional neural networks.
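The two selection metrics can be sketched schematically with NumPy, as below. The perturbation set, the activation threshold, and the way 'precision' is operationalized here are assumptions for illustration, not the paper's exact definitions.

```python
# Schematic NumPy sketch of the two selection metrics over a perturbation set.
# activations[i, n] is neuron n on perturbed image i; outputs[i] is the
# network score for the predicted class. How perturbations are generated and
# how "precision" is thresholded are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
activations = rng.random((50, 8))                   # 50 perturbed images, 8 neurons
outputs = activations[:, 2] + 0.1 * rng.random(50)  # toy dependence on neuron 2

# Metric 1: magnitude of the correlation between each neuron and the output.
a = activations - activations.mean(axis=0)
o = outputs - outputs.mean()
corr = np.abs(a.T @ o) / (np.linalg.norm(a, axis=0) * np.linalg.norm(o))

# Metric 2: "precision" as the fraction of a neuron's firings that coincide
# with a high network output.
fires = activations > activations.mean(axis=0)
high = outputs > np.median(outputs)
precision = (fires & high[:, None]).sum(axis=0) / np.maximum(fires.sum(axis=0), 1)

print("top neurons by correlation:", np.argsort(-corr)[:3])
print("top neurons by precision:  ", np.argsort(-precision)[:3])
```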
Vision-Language Fusion for Object Recognition
Shiang, Sz-Rung (Carnegie Mellon University) | Rosenthal, Stephanie (Carnegie Mellon University) | Gershman, Anatole (Carnegie Mellon University) | Carbonell, Jaime (Carnegie Mellon University) | Oh, Jean (Carnegie Mellon University)
While recent advances in computer vision have driven object recognition rates sharply upward, there is still much room for improvement. In this paper, we develop an algorithm to improve object recognition by integrating human-generated contextual information with vision algorithms. Specifically, we examine how interactive systems such as robots can utilize two types of contextual information: verbal descriptions of an environment and human-labeled datasets. We propose a re-ranking schema, MultiRank, for object recognition that can efficiently combine such information with computer vision results. In our experiments, we achieve accuracy improvements of up to 9.4% and 16.6% over the vision-only recognizers using the oracle and the detected bounding boxes, respectively. We conclude that our algorithm can make a significant impact on object recognition in robotics and beyond.
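The following toy sketch conveys the general flavor of fusing a context prior with vision confidences; it is not the MultiRank algorithm itself, and the labels, scores, and linear fusion rule are purely illustrative.

```python
# Not the paper's MultiRank algorithm -- just the general flavor of re-ranking:
# fuse per-label vision confidences with a context prior derived from a verbal
# scene description. Labels, scores, and the linear fusion rule are invented.
def rerank(vision_scores, context_prior, alpha=0.7):
    """Blend vision scores with context priors (alpha weights the vision side)
    and return labels sorted by the normalized fused score."""
    fused = {label: alpha * v + (1 - alpha) * context_prior.get(label, 0.0)
             for label, v in vision_scores.items()}
    total = sum(fused.values()) or 1.0
    return sorted(((l, s / total) for l, s in fused.items()), key=lambda x: -x[1])

vision = {"mug": 0.30, "bowl": 0.45, "kettle": 0.25}   # recognizer confidences
context = {"mug": 0.6, "kettle": 0.3, "bowl": 0.1}     # from "a kitchen counter..."
print(rerank(vision, context))   # the context prior lifts "mug" above "bowl"
```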
Reports of the 2016 AAAI Workshop Program
Albrecht, Stefano (The University of Texas at Austin) | Bouchard, Bruno (Université du Québec à Chicoutimi) | Brownstein, John S. (Harvard University) | Buckeridge, David L. (McGill University) | Caragea, Cornelia (University of North Texas) | Carter, Kevin M. (MIT Lincoln Laboratory) | Darwiche, Adnan (University of California, Los Angeles) | Fortuna, Blaz (Bloomberg L.P. and Jozef Stefan Institute) | Francillette, Yannick (Université du Québec à Chicoutimi) | Gaboury, Sébastien (Université du Québec à Chicoutimi) | Giles, C. Lee (Pennsylvania State University) | Grobelnik, Marko (Jozef Stefan Institute) | Hruschka, Estevam R. (Federal University of São Carlos) | Kephart, Jeffrey O. (IBM Thomas J. Watson Research Center) | Kordjamshidi, Parisa (University of Illinois at Urbana-Champaign) | Lisy, Viliam (University of Alberta) | Magazzeni, Daniele (King's College London) | Marques-Silva, Joao (University of Lisbon) | Marquis, Pierre (Université d'Artois) | Martinez, David (MIT Lincoln Laboratory) | Michalowski, Martin (Adventium Labs) | Shaban-Nejad, Arash (University of California, Berkeley) | Noorian, Zeinab (Ryerson University) | Pontelli, Enrico (New Mexico State University) | Rogers, Alex (University of Oxford) | Rosenthal, Stephanie (Carnegie Mellon University) | Roth, Dan (University of Illinois at Urbana-Champaign) | Sinha, Arunesh (University of Southern California) | Streilein, William (MIT Lincoln Laboratory) | Thiebaux, Sylvie (The Australian National University) | Tran, Son Cao (New Mexico State University) | Wallace, Byron C. (University of Texas at Austin) | Walsh, Toby (University of New South Wales and Data61) | Witbrock, Michael (Lucid AI) | Zhang, Jie (Nanyang Technological University)
The Workshop Program of the Association for the Advancement of Artificial Intelligence's Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) was held at the beginning of the conference, February 12-13, 2016. Workshop participants met and discussed issues with a selected focus -- providing an informal setting for active exchange among researchers, developers and users on topics of current interest. To foster interaction and exchange of ideas, the workshops were kept small, with 25-65 participants. Attendance was sometimes limited to active participants only, but most workshops also allowed general registration by other interested individuals. The AAAI-16 Workshops were an excellent forum for exploring emerging approaches and task areas, for bridging the gaps between AI and other fields or between subfields of AI, for elucidating the results of exploratory research, or for critiquing existing approaches. The fifteen workshops held at AAAI-16 were Artificial Intelligence Applied to Assistive Technologies and Smart Environments (WS-16-01), AI, Ethics, and Society (WS-16-02), Artificial Intelligence for Cyber Security (WS-16-03), Artificial Intelligence for Smart Grids and Smart Buildings (WS-16-04), Beyond NP (WS-16-05), Computer Poker and Imperfect Information Games (WS-16-06), Declarative Learning Based Programming (WS-16-07), Expanding the Boundaries of Health Informatics Using AI (WS-16-08), Incentives and Trust in Electronic Communities (WS-16-09), Knowledge Extraction from Text (WS-16-10), Multiagent Interaction without Prior Coordination (WS-16-11), Planning for Hybrid Systems (WS-16-12), Scholarly Big Data: AI Perspectives, Challenges, and Ideas (WS-16-13), Symbiotic Cognitive Systems (WS-16-14), and World Wide Web and Population Health Intelligence (WS-16-15).
CoBots: Robust Symbiotic Autonomous Mobile Service Robots
Veloso, Manuela (Carnegie Mellon University) | Biswas, Joydeep (Carnegie Mellon University) | Coltin, Brian (Carnegie Mellon University) | Rosenthal, Stephanie (Carnegie Mellon University)
We research and develop autonomous mobile service robots as Collaborative Robots, i.e., CoBots. Over the last three years, our four CoBots have autonomously navigated our multi-floor office buildings for more than 1,000 km, as the result of the integration of multiple perceptual, cognitive, and actuation representations and algorithms. In this paper, we identify a few core aspects of our CoBots that underlie their robust functionality. Their reliable mobility in varying indoor environments comes from a novel episodic non-Markov localization. Service tasks requested by users are the input to a scheduler that can consider different types of constraints, including transfers among multiple robots. With symbiotic autonomy, the CoBots proactively seek external sources of help to fill in for their inevitable occasional limitations. We present sampled results from a deployment and conclude with a brief review of other features of our service robots.
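As a toy illustration of the kind of multi-robot task assignment the scheduler performs, the sketch below greedily assigns requested tasks to whichever robot can start them soonest. The real CoBot scheduler handles far richer constraints (time windows, inter-robot transfers) than this hypothetical example.

```python
# Toy greedy task assignment in the spirit of the scheduler described above;
# the real CoBot scheduler handles richer constraints (time windows,
# inter-robot transfers). Locations, travel model, and durations are invented.
def assign_tasks(tasks, robots, travel, task_minutes=5):
    """tasks: [(task_id, location)]; robots: {name: (location, free_at_min)};
    travel(a, b) -> minutes. Greedily give each task to the robot that can
    start it soonest."""
    schedule = []
    for task_id, loc in tasks:
        name = min(robots, key=lambda r: robots[r][1] + travel(robots[r][0], loc))
        at, free = robots[name]
        start = free + travel(at, loc)
        robots[name] = (loc, start + task_minutes)
        schedule.append((task_id, name, start))
    return schedule

travel = lambda a, b: abs(ord(a) - ord(b))   # toy distances along one corridor
robots = {"CoBot1": ("A", 0), "CoBot2": ("D", 0)}
print(assign_tasks([("deliver mail", "B"), ("escort visitor", "C")], robots, travel))
```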
Look versus Leap: Computing Value of Information with High-Dimensional Streaming Evidence
Rosenthal, Stephanie (Independent Researcher) | Bohus, Dan (Microsoft Research) | Kamar, Ece (Microsoft Research) | Horvitz, Eric (Microsoft Research)
A key decision facing autonomous systems with access to streams of sensory data is whether to act on current evidence or to wait for additional information that might enhance the utility of taking an action. Computing the value of information is particularly difficult with streaming high-dimensional sensory evidence. We describe a belief projection approach to reasoning about information value in these settings, using models that infer future beliefs over states given streaming evidence. These belief projection models can be learned from data or constructed via direct assessment of parameters, and they fit naturally into modular, hierarchical state inference architectures. We describe principles of using belief projection and present results drawn from an implementation of the methodology within a conversational system.
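The look-versus-leap trade-off can be illustrated with a small worked value-of-information calculation: compare the best expected utility of acting on the current belief against the expected best utility after a projected observation, minus a delay cost. The numbers and the two-state, two-action setup below are illustrative, not the paper's conversational-system models.

```python
# Stylized value-of-information calculation with illustrative numbers (not the
# paper's conversational-system models): compare acting on the current belief
# now against waiting one step for projected evidence, with a small delay cost.
import numpy as np

belief = np.array([0.6, 0.4])            # P(state) now: [engaged, leaving]
utility = np.array([[5.0, -10.0],        # U[action, state]: act
                    [0.0,   0.0]])       #                    no-op
best_now = (utility @ belief).max()      # best expected utility of acting now

# Belief projection: distribution over next-step beliefs given the evidence
# stream (consistent with the current belief in expectation).
projected = [(0.5, np.array([1.0, 0.0])),    # evidence confirms engagement
             (0.5, np.array([0.2, 0.8]))]    # evidence suggests leaving
delay_cost = 0.5
best_later = sum(p * (utility @ b).max() for p, b in projected) - delay_cost

print(f"act now: {best_now:.2f}, wait: {best_later:.2f}, "
      f"value of waiting: {best_later - best_now:.2f}")
```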
Mobile Robot Planning to Seek Help with Spatially-Situated Tasks
Rosenthal, Stephanie (Carnegie Mellon University) | Veloso, Manuela (Carnegie Mellon University)
Indoor autonomous mobile service robots can overcome their hardware and algorithmic limitations by asking humans for help. In this work, we focus on mobile robots that need human assistance at specific spatially-situated locations (e.g., to push buttons in an elevator or to make coffee in the kitchen). We address the problem of what the robot should do when no humans are present at such help locations. Because the robots are mobile, we argue that they should proactively seek help, traveling to offices or other occupied locations to bring people to the help locations. Such planning involves many trade-offs, including the wait time at the help location before seeking help, and the time and potential interruption required to find someone in an office and bring them along. To choose appropriate parameters for these decisions, we first conduct a survey to understand potential helpers' preferences in terms of travel distance, interruptibility, and frequency of providing help. We then use these results to contribute a decision-theoretic algorithm that evaluates the possible choices of offices and plans where to proactively seek help. Our algorithm aims to minimize both the number of office interruptions and the overall task completion time.
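A small decision-theoretic sketch of this trade-off is given below: for each candidate office, a round trip plus a weighted interruption penalty is compared against the expected wait for a passerby at the help location. The cost model and numbers are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative cost model (not the paper's exact formulation): for each office,
# weigh a round trip plus a weighted interruption penalty against the expected
# wait for a passerby at the help location.
def best_plan(offices, expected_wait, interruption_weight=2.0):
    """offices: list of (name, travel_time_min, p_occupied, interruption_cost).
    Returns the cheapest option among waiting and each office visit."""
    options = [("wait at help location", expected_wait)]
    for name, travel, p_occ, interrupt in offices:
        cost = (2 * travel                                  # round trip to fetch help
                + p_occ * interruption_weight * interrupt   # social cost of interrupting
                + (1 - p_occ) * expected_wait)              # empty office: wait anyway
        options.append((f"seek help in {name}", cost))
    return min(options, key=lambda o: o[1])

offices = [("office 101", 1.5, 0.9, 1.0), ("office 205", 4.0, 0.6, 0.5)]
print(best_plan(offices, expected_wait=12.0))   # -> ('seek help in office 101', 6.0)
```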