Method lets humans help robots 'see' to get around stuff - Futurity

#artificialintelligence

Researchers have developed a new strategy that lets humans help robots "see" their environments and carry out tasks. Just like us, robots can't see through walls; sometimes they need a little help to get where they're going. The strategy, called Bayesian Learning IN the Dark (BLIND for short), is a new solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time.
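The article gives no implementation details, but the interaction pattern it describes (a planner proposing motions while a human supplies the perception the robot lacks) can be sketched as follows. This is a minimal illustrative toy: the grid geometry, the veto model, and the greedy search are all assumptions made for the example, not BLIND itself.

```python
# A minimal, hypothetical sketch of human-in-the-loop planning in the
# spirit of BLIND: the planner proposes short motions greedily, and a
# human operator vetoes any motion that enters space the robot cannot
# see into. Geometry and veto model are illustrative assumptions.

def human_vetoes(p):
    """Stand-in for the human operator: vetoes points inside an unseen
    wall spanning x in [4, 6], which has an opening where y > 2."""
    return 4.0 <= p[0] <= 6.0 and p[1] <= 2.0

def plan_with_human(start, goal, step=1.0, max_iters=500):
    path = [start]
    for _ in range(max_iters):
        cur = path[-1]
        if (cur[0] - goal[0]) ** 2 + (cur[1] - goal[1]) ** 2 <= step ** 2:
            return path + [goal]
        # four candidate motions, tried in order of progress toward the goal
        candidates = [(cur[0] + dx, cur[1] + dy)
                      for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step))]
        candidates.sort(key=lambda p: (p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2)
        for nxt in candidates:
            # ask the human before committing; never revisit a point
            if nxt not in path and not human_vetoes(nxt):
                path.append(nxt)
                break
        else:
            return None  # every neighbor vetoed or already visited
    return None

# the planner detours around the unseen wall using only the human's vetoes
print(plan_with_human((0.0, 0.0), (10.0, 0.0)))
```

A full system would presumably maintain a probabilistic belief over the unseen space rather than a hard veto list; the sketch only illustrates the interaction pattern.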


MIT work raises a question: Can robots be teammates with humans rather than slaves? - ZDNet

#artificialintelligence

The image that most of society has of robots is that of slaves: creations that can be forced to do what humans want. Researchers at the Massachusetts Institute of Technology have formed an interesting take on the robot question that is less about slavery and more about cooperation. They observed that language is a function of humans cooperating on tasks, and imagined how robots might use language when working with humans to achieve some result. The word "team" appears prominently at the top of the paper, "Decision-Making for Bidirectional Communication in Sequential Human-Robot Collaborative Tasks," written by scientists Vaibhav V. Unhelkar, Shen Li, and Julie A. Shah of the Computer Science and Artificial Intelligence Laboratory at MIT and posted on the MIT website on March 31st. The use of the word "team" is significant given the structure of the experiment the scientists designed.


Learning Sampling Distributions Using Local 3D Workspace Decompositions for Motion Planning in High Dimensions

arXiv.org Artificial Intelligence

Earlier work has shown that reusing experience from prior motion planning problems can improve the efficiency of similar, future motion planning queries. However, for robots with many degrees of freedom, these methods exhibit poor generalization across different environments and often require large datasets that are impractical to gather. We present SPARK and FLAME, two experience-based frameworks for sampling-based planning applicable to complex manipulators in 3D environments. Both combine samplers associated with features from a workspace decomposition into a global biased sampling distribution. SPARK decomposes the environment based on exact geometry, while FLAME is more general and uses an octree-based decomposition obtained from sensor data. We demonstrate the effectiveness of SPARK and FLAME on a Fetch robot tasked with challenging pick-and-place manipulation problems. Our approaches can be trained incrementally and significantly improve performance with only a handful of examples, generalizing better over diverse tasks and environments as compared to prior approaches.
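The abstract's central construction, combining feature-specific local samplers into one global biased sampling distribution, can be illustrated with a short sketch. Everything below (the Gaussian local samplers, the mixture weights, the uniform fallback component, the 2-DOF configuration space) is an assumption for illustration, not the SPARK/FLAME implementation.

```python
import random

# Sketch: each workspace feature contributes a local sampler, and the
# global sampling distribution is a weighted mixture of those local
# samplers plus a uniform component, so the planner still explores the
# whole configuration space.

class LocalSampler:
    """Gaussian sampler associated with one workspace region/feature."""
    def __init__(self, mean, std, weight):
        self.mean, self.std, self.weight = mean, std, weight

    def sample(self, rng):
        return [rng.gauss(m, self.std) for m in self.mean]

def make_global_sampler(local_samplers, bounds, uniform_frac=0.2):
    """Mix feature-specific samplers into one biased distribution."""
    total = sum(s.weight for s in local_samplers)

    def sample(rng):
        # uniform fallback keeps coverage of the full space
        if rng.random() < uniform_frac or total == 0:
            return [rng.uniform(lo, hi) for lo, hi in bounds]
        # otherwise pick a local sampler proportionally to its weight
        r = rng.random() * total
        for s in local_samplers:
            r -= s.weight
            if r <= 0:
                return s.sample(rng)
        return local_samplers[-1].sample(rng)

    return sample

rng = random.Random(1)
samplers = [LocalSampler(mean=[0.5, 0.5], std=0.1, weight=3.0),
            LocalSampler(mean=[2.0, 1.0], std=0.2, weight=1.0)]
draw = make_global_sampler(samplers, bounds=[(0, 3), (0, 2)])
print([draw(rng) for _ in range(3)])
```

In a sampling-based planner such as RRT or PRM, `draw` would simply replace the default uniform sampler, biasing tree or roadmap growth toward regions that were useful in prior experience.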


Using Deep Learning to Bootstrap Abstractions for Hierarchical Robot Planning

arXiv.org Artificial Intelligence

This paper addresses the problem of learning abstractions that boost robot planning performance while providing strong guarantees of reliability. Although state-of-the-art hierarchical robot planning algorithms allow robots to efficiently compute long-horizon motion plans for achieving user-desired tasks, these methods typically rely upon environment-dependent state and action abstractions that need to be hand-designed by experts. We present a new approach for bootstrapping the entire hierarchical planning process. This allows us to compute abstract states and actions for new environments automatically, using the critical regions predicted by a deep neural network with an auto-generated robot-specific architecture. We show that the learned abstractions can be used with a novel multi-source bi-directional hierarchical robot planning algorithm that is sound and probabilistically complete. An extensive empirical evaluation on twenty different settings using holonomic and non-holonomic robots shows that (a) our learned abstractions provide the information necessary for efficient multi-source hierarchical planning, and (b) this approach of learning abstractions and planning outperforms state-of-the-art baselines by nearly a factor of ten in terms of planning time on test environments not seen during training.
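As a rough illustration of the two-level structure the abstract describes, the sketch below plans over hand-coded abstract regions (standing in for the critical regions a trained network would predict) and then refines each abstract state into a concrete waypoint. The region graph, geometry, and refinement step are hypothetical simplifications, not the paper's algorithm.

```python
from collections import deque
import random

# Hand-coded "critical regions" as axis-aligned boxes (xmin, xmax, ymin, ymax);
# in the paper these would be predicted by a deep network.
REGIONS = {
    "doorway_A": (4.0, 5.0, 0.0, 1.0),
    "corridor":  (5.0, 9.0, 0.0, 1.0),
    "doorway_B": (9.0, 10.0, 0.0, 1.0),
}
ADJACENT = {  # abstract actions = transitions between connected regions
    "start": ["doorway_A"],
    "doorway_A": ["corridor"],
    "corridor": ["doorway_B"],
    "doorway_B": ["goal"],
}

def abstract_plan(src, dst):
    """Breadth-first search over the abstract region graph."""
    frontier, parents = deque([src]), {src: None}
    while frontier:
        node = frontier.popleft()
        if node == dst:
            plan, cur = [], node
            while cur is not None:
                plan.append(cur)
                cur = parents[cur]
            return plan[::-1]
        for nxt in ADJACENT.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                frontier.append(nxt)
    return None

def refine(plan, rng):
    """Turn each abstract state into a concrete waypoint by sampling
    inside its region (trivial stand-in for low-level motion planning)."""
    waypoints = []
    for state in plan:
        if state in REGIONS:
            xmin, xmax, ymin, ymax = REGIONS[state]
            waypoints.append((rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)))
    return waypoints

rng = random.Random(0)
plan = abstract_plan("start", "goal")
print(plan)            # high-level plan over abstract states
print(refine(plan, rng))  # concrete waypoints realizing that plan
```

The paper's planner is considerably richer (multi-source, bi-directional, and probabilistically complete); the sketch only shows why predicted critical regions make the high-level search and its low-level refinement cheap.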


Designing Human-Robot Coexistence Space

arXiv.org Artificial Intelligence

As human-robot interactions become ubiquitous, the environment surrounding these interactions will have a significant impact on the safety and comfort of the human and on the effectiveness and efficiency of the robot. Although most robots are designed to work in spaces created for humans, many environments, such as living rooms and offices, can and should be redesigned to enhance and improve human-robot collaboration and interactions. This work uses an autonomous wheelchair as an example and investigates computational design of human-robot coexistence spaces. Given the room size and the objects $O$ in the room, the proposed framework computes optimal layouts of $O$ that satisfy both human preferences and the navigation constraints of the wheelchair. The key enabling technique is a motion planner that can efficiently evaluate hundreds of similar motion planning problems. Our implementation shows that the proposed framework can produce a design in around three to five minutes on average, compared to 10 to 20 minutes without the proposed motion planner. Our results also show that the proposed method produces reasonable designs even for tight spaces and for users with different preferences.
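The key idea, evaluating many similar motion planning problems cheaply across candidate layouts, can be sketched as follows: one roadmap is built once, and each candidate layout only re-validates the roadmap's edges against its own obstacles instead of planning from scratch. The scoring terms, disc obstacles, and toy preference model below are assumptions for illustration, not the paper's planner.

```python
import itertools
import random

ROOM = (10.0, 10.0)  # hypothetical room dimensions

def build_roadmap(n_nodes=60, radius=3.0, seed=0):
    """Build a PRM-style roadmap once; it is shared by all layouts."""
    rng = random.Random(seed)
    nodes = [(rng.uniform(0, ROOM[0]), rng.uniform(0, ROOM[1]))
             for _ in range(n_nodes)]
    edges = [(i, j) for i, j in itertools.combinations(range(n_nodes), 2)
             if (nodes[i][0] - nodes[j][0]) ** 2
              + (nodes[i][1] - nodes[j][1]) ** 2 <= radius ** 2]
    return nodes, edges

def edge_blocked(a, b, obstacles, steps=10):
    """Cheap collision check: sample points along the segment."""
    for t in (k / steps for k in range(steps + 1)):
        x, y = a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])
        for ox, oy, r in obstacles:  # obstacles are (x, y, radius) discs
            if (x - ox) ** 2 + (y - oy) ** 2 <= r ** 2:
                return True
    return False

def navigable_fraction(nodes, edges, obstacles):
    """Share of roadmap edges that survive this layout: a proxy for
    how freely the wheelchair can move around."""
    ok = sum(1 for i, j in edges
             if not edge_blocked(nodes[i], nodes[j], obstacles))
    return ok / max(1, len(edges))

def preference_cost(obstacles):
    """Toy human preference: furniture should sit near a wall."""
    def wall_dist(o):
        x, y, _ = o
        return min(x, y, ROOM[0] - x, ROOM[1] - y)
    return sum(wall_dist(o) for o in obstacles)

def score(nodes, edges, obstacles):
    return navigable_fraction(nodes, edges, obstacles) \
           - 0.1 * preference_cost(obstacles)

nodes, edges = build_roadmap()
rng = random.Random(42)
# twenty candidate layouts of three pieces of furniture
layouts = [[(rng.uniform(1, 9), rng.uniform(1, 9), 1.0) for _ in range(3)]
           for _ in range(20)]
best = max(layouts, key=lambda obs: score(nodes, edges, obs))
print("best layout:", [(round(x, 1), round(y, 1)) for x, y, _ in best])
```

Because the roadmap is fixed, each additional layout costs only the edge re-validation, which is the kind of amortization that plausibly explains the reported drop from 10 to 20 minutes down to three to five.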