
Collaborating Authors

 Daniele, Andrea F.


Enhancing scientific exploration of the deep sea through shared autonomy in remote manipulation

arXiv.org Artificial Intelligence

Acknowledgments: The authors would like to acknowledge primary support from the National Science Foundation National Robotics Initiative, which made this research possible, additional support from NASA's PSTAR program, and in-kind support from the NOAA Ocean Exploration Cooperative Institute with ship and robotic vehicle operations during 2021 Pacific Ocean demonstrations in the San Pedro Basin. The authors would also like to thank the captain and crew of the R/V Nautilus, the NUI robotic vehicle operations team, and the study participants who volunteered to assist with performance testing of the SHARC and conventional robotic manipulation systems. AP would like to acknowledge support from the National Science Foundation Graduate Research Fellowship under Grant No. 2141064 and the Link Foundation.

Funding: National Science Foundation, National Robotics Initiative grant IIS-1830500 (RC); National Science Foundation, National Robotics Initiative grant IIS-1830660 (MW); National Aeronautics and Space Administration, Planetary Science and Technology Through Analog Research grant NNX16AL08G (RC).

Author contributions: Conceptualization: AFD, AP, GB, MRW, RC. Methodology: AFD, AP, GB, MRW, RC. Investigation: AFD, AP, GB, MRW, RC. Visualization: AFD, AP, GB, RC. Funding acquisition: MRW, RC. Project administration: MRW, RC. Supervision: MRW, RC. Writing - original draft: AFD, AP, GB, MRW, RC. Writing - review & editing: AFD, AP, GB, MRW, RC.

Competing interests: The authors declare that they have no competing interests.

Data and materials availability: All data are available in the main text or the supplementary materials.

NOTE: This is the author's version of the work. It is posted here by permission of the AAAS for personal use, not for redistribution. The definitive version was published in Science Robotics on 23 Aug 2023, DOI: 10.1126/scirobotics.adi5227.


Accessible Interfaces for the Development and Deployment of Robotic Platforms

arXiv.org Artificial Intelligence

Accessibility is one of the most important features in the design of robots and their interfaces. This thesis proposes methods that improve the accessibility of robots for three different target audiences: consumers, researchers, and learners. In order for humans and robots to work together effectively, they both must be able to communicate with each other. We tackle the problem of generating route instructions that are readily understandable by novice humans for the navigation of a priori unknown indoor environments. We then move on to the related problem of enabling robots to understand natural language utterances in the context of learning to operate articulated objects (e.g., fridges, drawers) by leveraging kinematic models. Next, we turn our focus to the development of accessible and reproducible robotic platforms for scientific research. We propose a new concept for reproducible robotics research that integrates development and benchmarking, so that reproducibility is obtained "by design" from the beginning of the research and development process. We then propose a framework called SHARC (SHared Autonomy for Remote Collaboration) to improve accessibility for underwater robotic intervention operations. SHARC allows multiple remote scientists to efficiently plan and execute high-level sampling procedures using an underwater manipulator while deferring low-level control to the robot. Lastly, we develop the first hardware-based MOOC in AI and robotics. This course allows learners to study autonomy hands-on by having real robots make their own decisions and accomplish broadly defined tasks. We design a new robotic platform from the ground up to support this new learning experience. A fully browser-based interface, based on leading tools and technologies for code development, testing, validation, and deployment, serves to maximize the accessibility of these educational resources.
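To make the shared-autonomy division of labor described for SHARC concrete, the sketch below shows one plausible way such a system could be organized: remote operators send compact, high-level sampling goals over a low-bandwidth link and approve the resulting plan, while the vehicle turns goals into low-level manipulator motion onboard. All names here (SamplingGoal, OnboardPlanner, SharedAutonomyLoop) are hypothetical placeholders, not the actual SHARC interfaces.

```python
# Illustrative sketch only: class and method names are hypothetical and do not
# correspond to the real SHARC codebase. The point is the division of labor:
# operators send compact, high-level goals; the robot plans and executes
# low-level motion onboard.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SamplingGoal:
    """A high-level request a remote scientist might send (a few bytes)."""
    target_xyz: Tuple[float, float, float]  # where to sample, in the vehicle frame
    tool: str                               # e.g. "push_core" or "suction"


class OnboardPlanner:
    """Runs on the vehicle: turns goals into manipulator waypoints."""

    def plan(self, goal: SamplingGoal) -> List[Tuple[float, ...]]:
        # A real system would run inverse kinematics and collision checking here;
        # this stand-in just returns a trivial approach-then-contact "trajectory".
        approach = tuple(c + 0.1 for c in goal.target_xyz)
        return [approach, goal.target_xyz]


class SharedAutonomyLoop:
    """Operators defer low-level control; they only approve or reject plans."""

    def __init__(self) -> None:
        self.planner = OnboardPlanner()

    def handle_goal(self, goal: SamplingGoal, operator_approves) -> bool:
        waypoints = self.planner.plan(goal)
        if not operator_approves(waypoints):  # human stays in the loop at plan level
            return False
        for wp in waypoints:                  # low-level execution stays onboard
            print(f"moving manipulator toward {wp} with {goal.tool}")
        return True


if __name__ == "__main__":
    loop = SharedAutonomyLoop()
    goal = SamplingGoal(target_xyz=(1.2, 0.4, -0.3), tool="push_core")
    loop.handle_goal(goal, operator_approves=lambda plan: len(plan) > 0)
```

The design choice this sketch highlights is that only goals and plan approvals cross the satellite link, which is what makes remote collaboration practical over the limited bandwidth available at sea.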


Inferring Compact Representations for Efficient Natural Language Understanding of Robot Instructions

arXiv.org Artificial Intelligence

The speed and accuracy with which robots are able to interpret natural language are fundamental to realizing effective human-robot interaction. A great deal of attention has been paid to developing models and approximate inference algorithms that improve the efficiency of language understanding. However, existing methods still attempt to reason over a representation of the environment that is flat and unnecessarily detailed, which limits scalability. An open problem is then to develop methods capable of producing the most compact environment model sufficient for accurate and efficient natural language understanding. We propose a model that leverages environment-related information encoded within instructions to identify the subset of observations and perceptual classifiers necessary to perceive a succinct, instruction-specific environment representation. The framework uses three probabilistic graphical models trained from a corpus of annotated instructions to infer salient scene semantics, perceptual classifiers, and grounded symbols. Experimental results on two robots operating in different environments demonstrate that by exploiting the content and the structure of the instructions, our method learns compact environment representations that significantly improve the efficiency of natural language symbol grounding.
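The pipeline the abstract describes (instruction → salient classifiers → compact environment model → symbol grounding) can be illustrated with the minimal sketch below. It is not the authors' implementation: a hard-coded keyword map stands in for the learned probabilistic graphical models, and the matcher at the end is a toy stand-in for grounding inference.

```python
# Minimal sketch of the general pipeline, NOT the paper's implementation:
# a keyword lookup replaces the learned models that infer which perceptual
# classifiers are salient for a given instruction.
from typing import Dict, List

# Full library of perceptual classifiers the robot could run (expensive).
ALL_CLASSIFIERS = ["door", "chair", "table", "mug", "hallway", "elevator"]

# Stand-in for the learned instruction -> salient-classifier model.
KEYWORD_TO_CLASSIFIER: Dict[str, str] = {
    "door": "door", "doorway": "door", "chair": "chair",
    "table": "table", "mug": "mug", "hall": "hallway", "elevator": "elevator",
}


def infer_salient_classifiers(instruction: str) -> List[str]:
    """Pick the subset of classifiers the instruction actually needs."""
    words = instruction.lower().split()
    salient = {cls for word in words
               for key, cls in KEYWORD_TO_CLASSIFIER.items() if key in word}
    return sorted(salient)


def build_compact_world(observations: List[dict], salient: List[str]) -> List[dict]:
    """Keep only detections produced by the salient classifiers."""
    return [obs for obs in observations if obs["label"] in salient]


def ground_symbols(instruction: str, world: List[dict]) -> List[dict]:
    """Ground noun phrases to objects in the compact world model (toy matcher)."""
    return [obs for obs in world if obs["label"] in instruction.lower()]


if __name__ == "__main__":
    instruction = "go through the door and stop next to the table"
    observations = [  # pretend output of running every classifier everywhere
        {"label": "door", "pose": (2.0, 1.0)},
        {"label": "chair", "pose": (3.5, 0.5)},
        {"label": "table", "pose": (4.0, 2.0)},
        {"label": "mug", "pose": (4.1, 2.1)},
    ]
    salient = infer_salient_classifiers(instruction)      # ['door', 'table']
    compact = build_compact_world(observations, salient)  # drops chair and mug
    print(ground_symbols(instruction, compact))
```

The efficiency gain in the paper comes from the same mechanism the toy version exhibits: grounding reasons over the two relevant objects rather than every detection the full classifier library would produce.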