Teaching robots to map large environments

Robohub

A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain. Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot's onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue robot would need to quickly traverse large areas and process thousands of images to complete its mission. To overcome this problem, MIT researchers drew on ideas from both recent artificial intelligence vision models and classical computer vision to develop a new system that can process an arbitrary number of images. Their system accurately generates 3D maps of complicated scenes like a crowded office corridor in a matter of seconds.
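The article's pairing of learned vision models with classical computer vision suggests a simple picture: each small batch of images yields a local submap, and classical rigid alignment stitches the submaps into one global map. As an illustrative sketch (not the MIT system's actual code), the Kabsch algorithm computes the best-fit rotation and translation between two point sets with known correspondences:

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src -> dst
    (least-squares over known point correspondences)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)         # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Two overlapping "submaps": the second is a rotated, shifted copy of the first.
rng = np.random.default_rng(0)
submap_a = rng.normal(size=(100, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
submap_b = submap_a @ R_true.T + np.array([2.0, -1.0, 0.5])

R, t = kabsch(submap_a, submap_b)
aligned = submap_a @ R.T + t
print(np.allclose(aligned, submap_b, atol=1e-6))  # True: the submaps coincide
```

The sign correction on the last singular vector keeps the solver from returning a reflection instead of a proper rotation, a standard detail in map-alignment pipelines.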


Why teaching robots to blink is hard but important

BBC News

Mr Kennedy says the research remains experimental and isn't yet being applied in Disney's theme parks. "The goal here was to really select a single social cue that we were interested in and push it as far as we could in making lifelike believable motion and behaviour that we felt would provide a platform for engagement with people."


Teaching Robots to Perform Tasks Like Humans - USC Viterbi

#artificialintelligence

Can language models reason in a real-world setting? USC researchers explored this question in a recent paper published at AAAI. Your coffee has gone cold. You pick up your cup, place it in the microwave, and zap it. For a robot, however, the task is not easy – even if it has been "taught" by language models (LMs) where the water, cup and microwave are.


Teaching Robots About Tools With Neural Radiance Fields (NeRF)

#artificialintelligence

New research from the University of Michigan proffers a way for robots to understand the mechanisms of tools and other real-world articulated objects by creating Neural Radiance Field (NeRF) representations that demonstrate how these objects move, potentially allowing a robot to interact with them and use them without tedious dedicated preconfiguration. By utilizing known source references for the internal articulation of tools (or any object with a suitable reference), NARF22 can synthesize a photorealistic approximation of a tool, its range of movement, and its mode of operation. Robots that are required to do more than avoid pedestrians or perform elaborately pre-programmed routines (for which non-reusable datasets have probably been labeled and trained at some expense) need this kind of adaptive capacity if they are to work with the same materials and objects that the rest of us contend with. To date, several obstacles have stood in the way of imbuing robotic systems with this versatility: the paucity of applicable datasets, many of which feature only a handful of objects; the sheer expense of generating the kind of photorealistic, mesh-based 3D models that can help robots learn instrumentality in real-world contexts; and the non-photorealistic quality of the datasets that are otherwise suitable, which makes their objects appear disjointed from what the robot perceives around it and trains it to seek a cartoon-like object that will never appear in reality.
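At the core of any NeRF is a learned function from a 3D point to density and color, which is integrated along camera rays to produce pixels; an articulated variant like NARF22 additionally conditions on a configuration parameter such as a joint angle. The sketch below is a toy illustration of that volume-rendering quadrature, with a hand-written field standing in for a trained network (all names and numbers here are hypothetical, not from the paper):

```python
import numpy as np

def toy_field(points, joint_angle):
    """Stand-in for a trained NeRF MLP: density is high inside a 'tool head'
    sphere whose position depends on an articulation parameter."""
    center = np.array([np.cos(joint_angle), np.sin(joint_angle), 4.0])
    dist = np.linalg.norm(points - center, axis=-1)
    density = np.where(dist < 0.5, 10.0, 0.0)           # opaque sphere
    color = np.tile([0.8, 0.2, 0.2], (len(points), 1))  # uniform reddish color
    return density, color

def render_ray(origin, direction, joint_angle, n_samples=128, far=8.0):
    """Standard NeRF quadrature: alpha-composite samples along the ray."""
    ts = np.linspace(0.0, far, n_samples)
    delta = ts[1] - ts[0]
    pts = origin + ts[:, None] * direction
    sigma, rgb = toy_field(pts, joint_angle)
    alpha = 1.0 - np.exp(-sigma * delta)                 # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)          # composited pixel color

# A ray aimed at the tool head sees red; the same ray misses once the joint moves.
origin = np.zeros(3)
direction = np.array([1.0, 0.0, 4.0])
direction /= np.linalg.norm(direction)
hit = render_ray(origin, direction, joint_angle=0.0)
miss = render_ray(origin, direction, joint_angle=np.pi)
print(hit.sum() > 0.5, miss.sum() < 1e-6)
```

Because the articulation parameter changes what the rendered image looks like, a robot can in principle compare renders against camera observations to estimate a tool's current configuration.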


Teaching robots to be team players with nature

#artificialintelligence

This en masse behavior by individual organisms can provide both individual and collective benefits, such as improving the chances of successful mating and propagation, or providing security. Now, researchers have harnessed the self-organization skills of natural swarms for robotic applications in artificial intelligence, computing, search and rescue, and much more. They published their method on Aug. 3 in Intelligent Computing. "Designing a set of rules that, once executed by a swarm of robots, results in a specific desired behavior is particularly challenging," said corresponding author Marco Dorigo, professor in the artificial intelligence laboratory, named IRIDIA, of the Université Libre de Bruxelles, Belgium. "The behavior of the swarm is not a one-to-one map with simple rules executed by individual robots, but rather results from the complex interactions of many robots executing the same set of rules."
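Dorigo's point, that swarm-level behavior emerges from many robots running the same local rules, can be made concrete with a toy simulation. In this hypothetical sketch (not IRIDIA's method), every robot follows one rule: move a little toward the centroid of the neighbors it can currently sense. The swarm as a whole clusters, even though no individual robot knows the global state:

```python
import numpy as np

def step(positions, radius=2.0, gain=0.1):
    """One tick of a purely local rule: each robot nudges itself toward
    the centroid of the neighbors within its sensing radius."""
    new = positions.copy()
    for i, p in enumerate(positions):
        d = np.linalg.norm(positions - p, axis=1)
        neighbors = positions[(d < radius) & (d > 0)]   # exclude self
        if len(neighbors):
            new[i] = p + gain * (neighbors.mean(axis=0) - p)
    return new

rng = np.random.default_rng(1)
swarm = rng.uniform(-3, 3, size=(50, 2))   # 50 robots scattered in a square
spread_before = swarm.std()
for _ in range(200):
    swarm = step(swarm)
spread_after = swarm.std()
print(spread_after < spread_before)  # True: the swarm has contracted into clusters
```

Note the mismatch Dorigo describes: the rule says nothing about "clustering", yet clustering is what the swarm does; the global behavior emerges from repeated local interactions.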


Why teaching robots to play hide-and-seek could be the key to next-gen A.I.

#artificialintelligence

Artificial general intelligence, the idea of an intelligent A.I. agent that's able to understand and learn any intellectual task that humans can do, has long been a component of science fiction. As A.I. gets smarter and smarter -- especially with breakthroughs in machine learning tools that are able to rewrite their code to learn from new experiences -- it's increasingly a part of real artificial intelligence conversations as well. But how do we measure AGI when it does arrive? Over the years, researchers have laid out a number of possibilities. The most famous remains the Turing Test, in which a human judge interacts, sight unseen, with both humans and a machine, and must try to guess which is which.


Researchers Are Teaching Robots How To Open Doors

#artificialintelligence

A lot of people like to joke about robots ruling the world. I mean, many of us have seen movies (or books) like I, Robot where the entire premise is about robots taking over and making mankind follow their governance.


One of Facebook's first moves as Meta: Teaching robots to touch and feel

#artificialintelligence

Last week, Mark Zuckerberg officially announced that his company was changing its name from Facebook to Meta, with a prominent new focus on creating the metaverse. A defining feature of this metaverse will be creating a feeling of presence in the virtual world. Presence could mean simply interacting with other avatars and feeling like you are immersed in a foreign landscape. Or, it could even involve engineering some sort of haptic feedback for users when they touch or interact with objects in the virtual world. As part of all this, a division of Meta called Meta AI wants to help machines learn how humans touch and feel by using a robot finger sensor called DIGIT, and a robot skin called ReSkin.




Teaching robots through positive reinforcement – TechCrunch

#artificialintelligence

The field, after all, holds the key to unlocking a lot of potential for the industry. One of the things that makes it so remarkable is the myriad different approaches so many researchers are taking to unlock the secrets of helping robots essentially learn from scratch. A new paper from Johns Hopkins University sporting the admittedly delightful name "Good Robot" explores the potential of learning through positive reinforcement. The title derives from an anecdote from author Andrew Hundt about teaching his dog to not chase after squirrels. I won't go into that here, but the core of the idea is to offer the robot some manner of incentive when it gets something correct, rather than a disincentive when it does something wrong.
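That incentive-only idea maps directly onto reinforcement learning with a sparse positive reward. The following is a hypothetical, minimal tabular Q-learning sketch (not the "Good Robot" codebase): the agent is rewarded only when it reaches the goal and is never punished, yet it still learns to head toward it:

```python
import random

# Toy 1-D world: states 0..4, goal at state 4. Actions: 0 = left, 1 = right.
# Only positive feedback: reward 1 on reaching the goal, 0 otherwise.
random.seed(0)
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):                       # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == GOAL else 0.0     # the only incentive in the system
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should head right from every non-goal state.
policy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(N)]
print(policy[:4])  # expect [1, 1, 1, 1]
```

Even though the reward is zero everywhere except the goal, the discount factor propagates value backward through the table, so earlier states also learn which action leads toward success.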