Chu, Vivian
SiRoK: Situated Robot Knowledge - Understanding the Balance Between Situated Knowledge and Variability
Daruna, Angel Andres (Institute for Robotics and Intelligent Machines, Georgia Institute of Technology) | Chu, Vivian (Institute for Robotics and Intelligent Machines, Georgia Institute of Technology) | Liu, Weiyu (Institute for Robotics and Intelligent Machines, Georgia Institute of Technology) | Hahn, Meera (Institute for Robotics and Intelligent Machines, Georgia Institute of Technology) | Khante, Priyanka (The University of Texas at Austin) | Chernova, Sonia (Institute for Robotics and Intelligent Machines, Georgia Institute of Technology) | Thomaz, Andrea (The University of Texas at Austin)
General-purpose robots operating in a variety of environments, such as homes or hospitals, require a way to integrate abstract knowledge that is generalizable across domains with local, domain-specific observations. In this work, we examine different types and sources of data, with the goal of understanding how locally observed data and abstract knowledge might be fused. We introduce the Situated Robot Knowledge (SiRoK) framework that integrates probabilistic abstract knowledge and semantic memory of the local environment. In a series of robot and simulation experiments, we examine the tradeoffs in the reliability and generalization of both data sources. Our robot experiments show that the variability of object properties and locations in our knowledge base is indicative of the time it takes to generalize a concept and its validity in the real world. The results of our simulations corroborate those of our robot experiments and give us insights into which source of knowledge to use for 31 types of object classes that exist in the real world.
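A minimal sketch of the idea behind this abstract, not the SiRoK implementation itself: estimating an object property from either abstract (domain-general) knowledge or local observations, and preferring whichever source shows lower variability, since the abstract reports that variability is indicative of reliability and generalization time. All names (fuse_property, min_local) and the variance-based rule are illustrative assumptions.

# Hypothetical sketch: fusing an abstract prior over an object property
# with locally observed data, preferring the less variable source.
from statistics import mean, variance


def fuse_property(abstract_samples, local_samples, min_local=3):
    """Return (estimate, source) for an object property from two data sources.

    abstract_samples: values aggregated from domain-general knowledge.
    local_samples: values observed by the robot in its own environment.
    Falls back to the abstract source until enough local data exists.
    """
    if len(local_samples) < min_local:
        return mean(abstract_samples), "abstract"
    # Prefer the source with lower sample variance (a stand-in for the
    # variability measure discussed in the abstract).
    if variance(local_samples) <= variance(abstract_samples):
        return mean(local_samples), "local"
    return mean(abstract_samples), "abstract"


# Toy usage: estimated shelf height (in meters) for a mug.
estimate, source = fuse_property([0.9, 1.1, 1.4, 0.7], [1.0, 1.02, 0.98])
print(f"use {source} knowledge: {estimate:.2f} m")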
Reports on the 2017 AAAI Spring Symposium Series
Bohg, Jeannette (Max Planck Institute for Intelligent Systems) | Boix, Xavier (Massachusetts Institute of Technology) | Chang, Nancy (Google) | Churchill, Elizabeth F. (Google) | Chu, Vivian (Georgia Institute of Technology) | Fang, Fei (Harvard University) | Feldman, Jerome (University of California at Berkeley) | González, Avelino J. (University of Central Florida) | Kido, Takashi (Preferred Networks in Japan) | Lawless, William F. (Paine College) | Montaña, José L. (University of Cantabria) | Ontañón, Santiago (Drexel University) | Sinapov, Jivko (University of Texas at Austin) | Sofge, Don (Naval Research Laboratory) | Steels, Luc (Institut de Biologia Evolutiva) | Steenson, Molly Wright (Carnegie Mellon University) | Takadama, Keiki (University of Electro-Communications) | Yadav, Amulya (University of Southern California)
A rise in real-world applications of AI has stimulated significant interest from the public, media, and policy makers. Along with this increasing attention has come a media-fueled concern about purported negative consequences of AI, which often overlooks the societal benefits that AI is delivering and can deliver in the near future. To address these concerns, the symposium on Artificial Intelligence for the Social Good (AISOC-17) highlighted the benefits that AI can bring to society right now. It brought together AI researchers, practitioners, experts, and policy makers from a wide variety of domains. It is also important to remember that having a very sharp distinction of AI for social good research is not always feasible, and often unnecessary. While there has been significant progress, there still exist many major challenges facing the design of effective AI-based approaches to deal with the difficulties in real-world domains. One of the challenges is interpretability, since most algorithms for AI for social good problems need to be used by human end users. Second, the lack of access to valuable data that could be crucial to the development of appropriate algorithms is yet another challenge. Third, the data that we get from the real world is often noisy and
Exploring Affordances Using Human-Guidance and Self-Exploration
Chu, Vivian (Georgia Institute of Technology) | Thomaz, Andrea L. (Georgia Institute of Technology)
Our work is aimed at service robots deployed in human environments that will need many specialized object manipulation skills. We believe robots should leverage end-users to quickly and efficiently learn the affordances of objects in their environment. Prior work has shown that this approach is promising because people naturally focus on showing salient rare aspects of the objects (Thomaz and Cakmak 2009). We replicate these prior results and build on them to create a semi-supervised combination of self and guided learning. We compare three conditions: (1) learning through self-exploration, (2) learning from demonstrations provided by 10 naive users, and (3) self-exploration seeded with the user demonstrations. Initial results suggest benefits of a mixed-initiative approach.
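A minimal sketch of the third condition described above (self-exploration seeded with user demonstrations), not the paper's actual system: demonstrated actions are tried first, and once they are exhausted the robot samples its own actions. The function and parameter names (explore_affordances, budget) and the toy door example are illustrative assumptions.

# Hypothetical sketch: self-exploration seeded with user demonstrations.
import random


def explore_affordances(demonstrated_actions, action_space, try_action, budget=20):
    """Collect (action, effect) pairs, starting from the demonstrated seed.

    demonstrated_actions: actions shown by end-users (guided phase).
    action_space: actions the robot can sample on its own (self-exploration).
    try_action: callable executing an action and returning the observed effect.
    """
    experiences = []
    queue = list(demonstrated_actions)  # guided phase runs first
    for _ in range(budget):
        action = queue.pop(0) if queue else random.choice(action_space)
        experiences.append((action, try_action(action)))
    return experiences


# Toy usage: a "door" that only opens when pulled with enough force.
effects = explore_affordances(
    demonstrated_actions=[("pull", 0.8)],
    action_space=[("push", f) for f in (0.2, 0.5, 0.8)] + [("pull", 0.2)],
    try_action=lambda a: "opens" if a == ("pull", 0.8) else "no effect",
)
print(effects[:3])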
An HRI Approach to Learning from Demonstration
Akgun, Baris (Georgia Institute of Technology) | Bullard, Kalesha (Georgia Institute of Technology) | Chu, Vivian (Georgia Institute of Technology) | Thomaz, Andrea (Georgia Institute of Technology)
The goal of this research is to enable robots to learn new things from everyday people. For years, the AI and Robotics community has sought to enable robots to efficiently learn new skills from a knowledgeable human trainer, and prior work has focused on several important technical problems. This vast amount of research in the field of robot Learning from Demonstration has by and large only been evaluated with expert humans, typically the system's designer, neglecting a key point: this interaction takes place within a social structure that can guide and constrain the learning problem. We believe that addressing this point will be essential for developing systems that can learn from everyday people who are not experts in Machine Learning or Robotics. Our work focuses on new research questions involved in letting robots learn from everyday human partners (e.g., What kind of input do people want to provide a machine learner? How does their mental model of the learning process affect this input? What interfaces and interaction mechanisms can help people provide better input from a machine learning perspective?). Often our research begins with an investigation into the feasibility of a particular machine learning interaction, which leads to a series of research questions around re-designing both the interaction and the algorithm to better suit learning with end-users. We believe this equal focus on both the Machine Learning and the HRI contributions is key to making progress toward the goal of machines learning from humans. In this abstract we briefly overview four different projects that highlight our HRI approach to the problem of Learning from Demonstration.