By embedding engineered bacteria into the fingers of a robot arm, researchers have created a biohybrid bot that can "taste" -- and they think it could lead to a future in which robots are better equipped to respond to the world around them. For their study, which was published in the journal Science Robotics on Wednesday, a team from the University of California, Davis, and Carnegie Mellon University engineered E. coli bacteria to produce a fluorescent protein when they encountered the chemical IPTG. They then placed the engineered bacteria into wells built into a robot arm's flexible grippers. Finally, they covered the wells with a porous membrane that would keep the bacteria in place while letting liquids reach the cells. To test the system, the researchers had the arm reach into a water bath that sometimes contained IPTG.
Automata, a robotics firm in London, thinks it can fix this lag in uptake. Its robotic arm costs just $7,500 and is sold under the name Eva (yes, it is named after the robot in WALL-E). The company hopes to widen access to robots by focusing only on the more basic functions that small firms actually need. It is backed by $9.5 million from several investors, including robotics giant ABB.
Motion planning problems can be simplified by admissible projections of the configuration space onto sequences of lower-dimensional quotient spaces, called sequential simplifications. To exploit sequential simplifications, we present the Quotient-space Rapidly-exploring Random Trees (QRRT) algorithm. QRRT takes as input a start and a goal configuration, and a sequence of quotient spaces. The algorithm grows trees on the quotient spaces both sequentially and simultaneously to guarantee dense coverage. QRRT is shown (1) to be probabilistically complete, and (2) to reduce runtime by at least one order of magnitude. However, our experiments show that the runtime varies substantially between different quotient-space sequences. To find out why, we perform an additional experiment, showing that the narrower an environment, the more a quotient-space sequence can reduce runtime.
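The core idea above can be illustrated with a toy sketch (this is not the authors' implementation; function names, parameters, and the two-stage structure here are illustrative assumptions): grow an RRT on a one-dimensional quotient space (the x-axis projection of a 2-D configuration space), then reuse its nodes to bias sampling when growing the full-space tree.

```python
import random

STEP = 0.5  # maximum extension per coordinate

def steer(q_near, q_rand):
    # Move from q_near toward q_rand by at most STEP in each coordinate.
    return tuple(a + max(-STEP, min(STEP, b - a)) for a, b in zip(q_near, q_rand))

def nearest(tree, q):
    # Nearest node in the tree under squared Euclidean distance.
    return min(tree, key=lambda n: sum((a - b) ** 2 for a, b in zip(n, q)))

def grow_tree(start, goal, sample, iters=2000, tol=0.6):
    tree = {start: None}  # node -> parent
    for _ in range(iters):
        q_rand = sample()
        q_near = nearest(tree, q_rand)
        q_new = steer(q_near, q_rand)
        tree[q_new] = q_near
        if sum((a - b) ** 2 for a, b in zip(q_new, goal)) < tol ** 2:
            break  # goal region reached
    return tree

random.seed(0)
# Stage 1: the quotient space is the projection onto x.
q_tree = grow_tree((0.0,), (10.0,), lambda: (random.uniform(0.0, 10.0),))
q_nodes = list(q_tree)

# Stage 2: full-space samples reuse x-values found on the quotient space.
def biased_sample():
    return (random.choice(q_nodes)[0], random.uniform(0.0, 10.0))

full_tree = grow_tree((0.0, 0.0), (10.0, 10.0), biased_sample, iters=5000)
print(len(q_tree), len(full_tree))
```

The quotient-space tree is cheap to grow because it lives in a lower-dimensional space; its vertices then concentrate full-space samples on regions already known to be reachable in the projection, which is where the runtime savings in narrow environments come from.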
The ability to estimate task difficulty is critical for many real-world decisions, such as setting appropriate goals for ourselves or appreciating others' accomplishments. Here we give a computational account of how humans judge the difficulty of a range of physical construction tasks (e.g., moving 10 loose blocks from their initial configuration to a target configuration, such as a vertical tower) by quantifying two key factors that influence construction difficulty: physical effort and physical risk. Physical effort captures the minimal work needed to transport all objects to their final positions, and is computed using a hybrid task-and-motion planner. Physical risk corresponds to the stability of the structure, and is computed using noisy physics simulations to capture the cost of the precision (e.g., attention, coordination, fine motor movements) required for success. We show that the full effort-risk model captures human estimates of difficulty and construction time better than either component alone.
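The physical-risk component lends itself to a minimal sketch (this is not the paper's simulator; the stability rule and all parameter values are simplifying assumptions): estimate risk as the fraction of Monte Carlo rollouts, with Gaussian placement noise, in which a block tower fails a crude center-of-mass stability check.

```python
import random

BLOCK_W = 1.0  # block width; all blocks assumed equal mass

def is_stable(xs):
    """xs: block centre x-positions, bottom to top. A level is stable if the
    centre of mass of everything above it lies over that block's footprint."""
    for i in range(len(xs) - 1):
        above = xs[i + 1:]
        com = sum(above) / len(above)
        if abs(com - xs[i]) > BLOCK_W / 2:
            return False
    return True

def risk(xs, noise_sd, n_sims=2000, seed=0):
    """Monte Carlo estimate: fraction of noisy builds that collapse."""
    rng = random.Random(seed)
    falls = 0
    for _ in range(n_sims):
        noisy = [x + rng.gauss(0, noise_sd) for x in xs]
        if not is_stable(noisy):
            falls += 1
    return falls / n_sims

straight = [0.0, 0.0, 0.0, 0.0]   # aligned tower: large stability margin
leaning  = [0.0, 0.3, 0.6, 0.9]   # heavily offset tower: tiny margin
print(risk(straight, 0.05), risk(leaning, 0.05))
```

The same target configuration can thus demand very different precision: the aligned tower tolerates sloppy placements, while the offset tower collapses under almost any noise, matching the intuition that risk, not just effort, drives perceived difficulty.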
In order to solve complex, long-horizon tasks, intelligent robots need to be able to carry out high-level, abstract planning and reasoning in conjunction with motion planning. However, abstract models are typically lossy, and plans or policies computed using them are often unexecutable in practice. These problems are aggravated in more realistic settings with stochastic dynamics, where the robot needs to reason about, and plan for, multiple possible contingencies. We present a new approach for integrated task and motion planning in such settings. In contrast to prior work in this direction, we show that our approach can effectively compute integrated task and motion policies with branching structure, encoding agent behaviors for various possible contingencies. We prove that our algorithm is probabilistically complete and can compute feasible solution policies in an anytime fashion, so that the probability of encountering an unresolved contingency decreases over time. Empirical results on a set of challenging problems show the utility and scope of our methods.
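The anytime property described above can be sketched in a few lines (a hedged toy, not the authors' planner; `plan_branch` and the contingency names are illustrative stand-ins): keep a branching policy as a map from contingency to plan, always refine the most likely unresolved contingency next, and watch the unhandled probability mass shrink monotonically.

```python
def plan_branch(contingency):
    # Stand-in for an integrated task-and-motion planner call that
    # computes a motion-feasible plan for one branch.
    return ["navigate", f"handle-{contingency}", "finish"]

def anytime_policy(contingencies, budget):
    """contingencies: dict outcome -> probability. budget: max branches to plan."""
    policy, unresolved = {}, dict(contingencies)
    trace = []  # unresolved probability mass after each refinement
    for _ in range(min(budget, len(contingencies))):
        best = max(unresolved, key=unresolved.get)  # most likely first
        policy[best] = plan_branch(best)
        del unresolved[best]
        trace.append(sum(unresolved.values()))
    return policy, trace

outcomes = {"door-open": 0.6, "door-locked": 0.3, "door-jammed": 0.1}
policy, trace = anytime_policy(outcomes, budget=3)
print(trace)  # unhandled contingency mass after each planning step
```

Interrupting the loop at any point yields a usable partial policy; planning longer only adds branches for rarer outcomes, which is exactly the anytime guarantee the abstract claims.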
It's been two years since the last time I judged the Automate Startup Competition. More than any other trade show contest, this event has been an oracle of future success. In following up with the last vintage of participants, all of the previous entrants are still operating and many are completing multi-million dollar financing rounds. As an indication of the importance of the venue, and quite possibly the growth of the industry, The Robot Report announced last week that 2017 finalist Kinema Systems was acquired by SoftBank's Boston Dynamics. Traditionally, autonomous machines at the ProMat Show have been relegated to a subsection of the exhibit floor under the Automate brand.
To guarantee safe and efficient motion planning for autonomous driving in dynamic traffic environments, an autonomous vehicle needs a policy that is not only optimal but also efficient over the long term in complex scenarios. The first challenge is that acquiring the optimal planning trajectory typically comes at the cost of planning efficiency. The second is that most search-based planning methods cannot find the desired trajectory in extreme scenarios. In this paper, we propose a data-driven approach to motion planning that addresses both challenges. We formulate the lane-change task as a Mixed-Integer Quadratic Program (MIQP) with logical constraints, allowing the planning module to provide feasible, safe, and comfortable actions in more complex scenarios. Furthermore, we propose a hierarchical learning structure to enable online, fast, and more generalizable motion planning. We demonstrate our approach's performance in simulated lane-change scenarios and compare it with related planning methods.
Robots are well-known for being very good at some very specific things. They're often defined by words like "precision" and "repeatability" and "speed," because if you want a robot to be uniquely useful, it's usually going to have to leverage one or more of those characteristics in a way that makes it better at some specific task than humans are. Robots have been doing this for decades, typically in places like industrial settings, but things are starting to change, and roboticists are beginning to look towards other applications in more unconstrained, dynamic environments, like non-industrial settings. Such environments (our homes, for example) are the kinds of places that we really, really want robots to be useful in. We want them doing our chores so that we don't have to, ideally without causing catastrophic damage or injury at the same time.
Robots in factories today are powerful and precise, but dumb as toast. A new robot arm, developed by a team of researchers from UC Berkeley, is meant to change that by providing a cheap-yet-powerful platform for AI experimentation. The team likens their creation to the Apple II, the personal computer that attracted hobbyists and hackers in the 1970s and '80s, ushering in a technological revolution. Robots and AI have evolved in parallel as areas of research for decades. In recent years, however, AI has advanced rapidly when applied to abstract problems like labeling images or playing video games.
Sampling-based planners are effective in many real-world applications such as robotics manipulation, navigation, and even protein modeling. However, it is often challenging to generate a collision-free path in environments where key areas are hard to sample. In the absence of any prior information, sampling-based planners are forced to explore uniformly or heuristically, which can lead to degraded performance. One way to improve performance is to use prior knowledge of environments to adapt the sampling strategy to the problem at hand. In this work, we decompose the workspace into local primitives, memorize local experiences for these primitives in the form of local samplers, and store them in a database. We then synthesize an efficient global sampler by retrieving the local experiences relevant to the given situation. Our method transfers knowledge effectively between diverse environments that share local primitives, and dramatically speeds up planning. Our results show, in terms of solution time, an improvement of multiple orders of magnitude in two traditionally challenging high-dimensional problems compared to state-of-the-art approaches.
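The retrieve-and-synthesize idea can be sketched as follows (names, the Gaussian local-sampler model, and the database keys are illustrative assumptions, not the paper's code): a database maps a local primitive signature (e.g. "narrow-gap") to a sampler fitted from previously successful configurations, and a global sampler for a new scene mixes the local samplers of whichever primitives it contains.

```python
import random

class LocalSampler:
    """Gaussian sampler fitted to configurations that worked in past problems."""
    def __init__(self, samples):
        n = len(samples)
        self.mean = sum(samples) / n
        var = sum((s - self.mean) ** 2 for s in samples) / n
        self.sd = max(var ** 0.5, 1e-3)  # floor avoids a degenerate sampler

    def sample(self, rng):
        return rng.gauss(self.mean, self.sd)

# Database of local experiences, keyed by primitive signature.
database = {
    "narrow-gap": LocalSampler([4.9, 5.0, 5.1]),  # past gap crossings near x = 5
    "open-room":  LocalSampler([1.0, 5.0, 9.0]),  # broad coverage
}

def global_sampler(primitives, rng):
    """Synthesize a global sampler by mixing retrieved local samplers."""
    local = [database[p] for p in primitives if p in database]
    if not local:  # fall back to uniform sampling when nothing matches
        return rng.uniform(0.0, 10.0)
    return rng.choice(local).sample(rng)

rng = random.Random(0)
draws = [global_sampler(["narrow-gap"], rng) for _ in range(100)]
print(sum(draws) / len(draws))
```

Because the database is keyed by local structure rather than whole environments, a sampler learned in one scene transfers to any other scene containing the same primitive, which is the knowledge-transfer mechanism the abstract describes.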