DKPROMPT: Domain Knowledge Prompting Vision-Language Models for Open-World Planning
Zhang, Xiaohan, Altaweel, Zainab, Hayamizu, Yohei, Ding, Yan, Amiri, Saeid, Yang, Hao, Kaminski, Andy, Esselink, Chad, Zhang, Shiqi
Prompting foundation models such as large language models (LLMs) and vision-language models (VLMs) requires extensive domain knowledge and manual effort, resulting in the so-called "prompt engineering" problem. To improve the performance of foundation models, one can provide examples explicitly [1] or implicitly [2], or encourage intermediate reasoning steps [3, 4]. Despite all these efforts, their performance in long-horizon reasoning tasks is still limited. Classical planning methods, including those defined with the Planning Domain Definition Language (PDDL), are strong in ensuring soundness, completeness, and efficiency in planning tasks [5]. However, those classical planners rely on predefined states and actions, and do not perform well in open-world scenarios. We aim to enjoy the openness of VLMs in scene understanding while retaining the strong long-horizon reasoning capabilities of classical planners. Our key idea is to extract domain knowledge from classical planners to prompt VLMs, enabling classical planners that are visually grounded and responsive to open-world situations. Given the natural connection between planning symbols and human language, this paper investigates how pre-trained VLMs can assist the robot in realizing symbolic plans generated by classical planners, while avoiding the engineering effort of checking the outcomes of each action.
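The core mechanism the abstract describes, turning a planner's action knowledge into verification questions for a VLM, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the action schema, the question template, and the `query_vlm` callable are all hypothetical stand-ins for a real PDDL domain and a real VLM call.

```python
# Sketch of the DKPROMPT idea: extract an action's PDDL-style
# preconditions and ask the VLM one yes/no question per precondition,
# so the planner can detect open-world failures without hand-written
# outcome checks. All names below are illustrative assumptions.

# PDDL-style action schema: name -> (preconditions, effects)
DOMAIN = {
    "pickup(mug)": {
        "preconditions": ["the mug is on the table", "the gripper is empty"],
        "effects": ["the robot is holding the mug"],
    },
}

def verify_action(action, query_vlm):
    """Ask the VLM one yes/no question per precondition; return the unmet ones."""
    schema = DOMAIN[action]
    return [p for p in schema["preconditions"]
            if query_vlm(f"In the current image, is it true that {p}?") != "yes"]

# Stub VLM for demonstration: it believes the gripper is holding something.
fake_vlm = lambda q: "no" if "gripper" in q else "yes"
print(verify_action("pickup(mug)", fake_vlm))
# -> ['the gripper is empty']
```

An empty return value would let the symbolic plan proceed; a non-empty one signals the planner to replan from the observed state.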
A faster way to teach a robot
Researchers from MIT and elsewhere have developed a technique that enables a human to efficiently fine-tune a robot that failed to complete a desired task (like picking up a unique mug) with very little effort on the part of the human. Imagine purchasing a robot to perform household tasks. This robot was built and trained in a factory on a certain set of tasks and has never seen the items in your home. When you ask it to pick up a mug from your kitchen table, it might not recognize your mug (perhaps because this mug is painted with an unusual image, say, of MIT's mascot, Tim the Beaver). "Right now, the way we train these robots, when they fail, we don't really know why. So you would just throw up your hands and say, 'OK, I guess we have to start over.' A critical component that is missing from this system is enabling the robot to demonstrate why it is failing so the user can give it feedback," says Andi Peng, an electrical engineering and computer science (EECS) graduate student at MIT. Peng and her collaborators at MIT, New York University, and the University of California at Berkeley created a framework that enables humans to quickly teach a robot what they want it to do, with a minimal amount of effort.
Will a Robot Take Your Job? Artificial Intelligence's Impact on the Future of Jobs.
Sean Chou thinks robots are stupid. "All you have to do is type in 'YouTube robot fail,'" says Chou, CEO of Chicago-based AI startup Catalytic. Here, we'll make it easier: click to see robots fail. And even though they're getting smarter all the time and serving industry in novel ways, Chou is firm in his belief that "we're pretty far from 'Terminator.'" It's not that the robots aren't coming; it's that they're rising much more slowly than some of the more breathless media coverage might have you believe -- which is great news for most of those who think robots and other AI-powered technology will soon steal their jobs. Still, the consensus among many experts is that a number of professions will be totally automated in the next five to 10 years. A group of senior-level tech executives who comprise the Forbes Technology Council named 13, including insurance underwriting, warehouse and manufacturing jobs, customer service, research and data entry, long-haul trucking, and a somewhat disconcertingly broad category titled "Any Tasks That ...
Robot Fails: These 8 Robots Aren't Going to Destroy Humanity Just Yet
Long gone are the days when talk of the existential threat of robots was confined to science fiction. Killer robots are a real worry today. So much so that thousands of academics, scientists, and engineers have signed a petition as part of the Campaign to Stop Killer Robots. Thankfully, we still have time to make a change by putting laws in place that will prevent governments from developing these technologies unchecked. Robots are a long way from being the Skynet monstrosities we see in the movies.
Worker robots that learn from mistakes
Computer scientists at the University of Leeds are using the artificial intelligence (AI) techniques of automated planning and reinforcement learning to "train" a robot to find an object in a cluttered space, such as a warehouse shelf or in a fridge -- and move it. The aim is to develop robotic autonomy, so the machine can assess the unique circumstances presented in a task and find a solution -- akin to a robot transferring skills and knowledge to a new problem. The Leeds researchers are presenting their findings today (Monday, November 4) at the International Conference on Intelligent Robotics and Systems in Macau, China. The big challenge is that in a confined area, a robotic arm may not be able to grasp an object from above. Instead it has to plan a sequence of moves to reach the target object, perhaps by manipulating other items out of the way.
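The sequencing problem the excerpt describes (moving other items out of the way before grasping the target) can be illustrated with a toy planner over a one-dimensional shelf row. This is an illustrative sketch only, not the Leeds system; the shelf representation and function names are assumptions.

```python
# Toy rearrangement planner for the clutter scenario described above:
# items sitting between the shelf edge and the target must be relocated
# before the arm can grasp the target. Illustrative only.

def plan_retrieval(shelf, target):
    """Return a move sequence for a 1-D shelf row, front of shelf first.

    shelf  -- list of item names, ordered from shelf edge inward
    target -- the item to retrieve
    """
    idx = shelf.index(target)
    blockers = shelf[:idx]  # everything in front of the target
    return [("move_aside", b) for b in blockers] + [("grasp", target)]

print(plan_retrieval(["jam jar", "butter dish", "milk carton"], "milk carton"))
# -> [('move_aside', 'jam jar'), ('move_aside', 'butter dish'), ('grasp', 'milk carton')]
```

A real system would couple such symbolic sequencing with learned low-level control, which is where the reinforcement-learning component the article mentions comes in.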
Trust-Aware Decision Making for Human-Robot Collaboration: Model Learning and Planning
Chen, Min, Nikolaidis, Stefanos, Soh, Harold, Hsu, David, Srinivasa, Siddhartha
Trust in autonomy is essential for effective human-robot collaboration and user adoption of autonomous systems such as robot assistants. This paper introduces a computational model which integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human trust, and (iii) choose actions that maximize team performance over the long term. We validated the model through human subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). In our studies, the robot builds human trust by manipulating low-risk objects first. Interestingly, the robot sometimes fails intentionally in order to modulate human trust and achieve the best team performance. These results show that the trust-POMDP calibrates trust to improve human-robot team performance over the long term. Further, they highlight that maximizing trust alone does not always lead to the best performance.
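The central mechanism, inferring latent human trust from interaction, can be sketched as a Bayes-filter update: the robot maintains a belief over trust levels and updates it from whether the human intervenes. The two-level trust state and the probabilities below are toy assumptions for illustration, not the learned trust-POMDP model from the paper.

```python
# Illustrative belief update over latent trust, in the spirit of the
# trust-POMDP: observe whether the human intervenes on an action and
# apply Bayes' rule. Numbers and state space are toy assumptions.

# P(human intervenes | trust level, action risk) -- toy observation model
P_INTERVENE = {
    ("low", "risky"): 0.9, ("low", "safe"): 0.3,
    ("high", "risky"): 0.2, ("high", "safe"): 0.05,
}

def update_belief(belief, action_risk, intervened):
    """Bayesian update of P(trust) after observing the human's response."""
    posterior = {}
    for trust, prior in belief.items():
        p_obs = P_INTERVENE[(trust, action_risk)]
        posterior[trust] = prior * (p_obs if intervened else 1 - p_obs)
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

belief = {"low": 0.5, "high": 0.5}
# Robot clears a low-risk object first; the human does not intervene.
belief = update_belief(belief, "safe", intervened=False)
print(belief["high"] > belief["low"])  # estimated trust shifts upward
```

This matches the qualitative behavior reported in the abstract: handling low-risk objects first is informative about (and tends to build) the human's trust.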
11 robot fails, flubs, and pratfalls from the past year (ZDNet)
This is a decidedly more somber entry. In March, a self-driving car being tested by Uber struck and killed a pedestrian in Tempe, Arizona. Uber is pausing tests across the U.S. while an investigation into the cause of the death is underway. As companies race to be the first to market with self-driving vehicles on public roads and highways, it's a sobering reminder that the technology is still very much in development. Whether the incident slows down the pace of testing nationwide remains to be seen.
These clumsy robots prove AI is far from perfect
When a person screws up, we call it human nature. So what does it mean when a machine that's trying to imitate our intelligence makes a mistake? According to the doomsayers, it means robots could attack us because of faulty reasoning – and that's scary. But it's hard to fear a machine that can be defeated with tropical fruit. That's why we've gathered some of the best robot fails we could find to remind everyone we're still in charge. For starters, who could forget Boston Dynamics' Atlas robot demonstration?