Inside Facebook's robotics lab where it teaches six-legged bots to walk and makes its AI smarter

Daily Mail - Science & tech

Facebook isn't often thought of as a robotics company, but new work from the social media giant's skunkworks AI lab aims to prove otherwise. On Monday the company gave a detailed look at some of the projects underway at its Menlo Park, California headquarters, many of them aimed at making robots smarter. Among the machines in development are spider-like walking hexapods, a robotic arm, and a human-like hand fitted with sensors to give it a sense of touch. A dedicated team of AI researchers at the headquarters is tasked with testing the robots, in the hope that what they learn can be applied to the company's other AI systems and make those systems smarter.


Facebook Research is developing touchy-feely curious robots

#artificialintelligence

"Much of our work in robotics is focused on self-supervised learning, in which systems learn directly from raw data so they can adapt to new tasks and new circumstances," a team of researchers from FAIR (Facebook AI Research) wrote in a blog post. "In robotics, we're advancing techniques such as model-based reinforcement learning (RL) to enable robots to teach themselves through trial and error using direct input from sensors." Specifically, the team has been trying to get a six-legged robot to teach itself to walk without any outside assistance. "Generally speaking, locomotion is a very difficult task in robotics, and this is what makes it very exciting from our perspective," Roberto Calandra, a FAIR researcher, told Engadget. "We have been able to design algorithms for AI and actually test them on a really challenging problem that we otherwise don't know how to solve."
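The model-based RL idea FAIR describes can be illustrated with a toy sketch (this is not Facebook's actual code; the one-dimensional "robot", its dynamics, and all function names here are invented for illustration). The loop is: collect raw transitions by trial and error, fit a dynamics model to that data, then plan actions through the learned model rather than through hand-written control rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "robot": the true dynamics (unknown to the learner)
# move the state by a tenth of the commanded action.
def true_step(s, a):
    return s + 0.1 * a

def collect(n=200):
    """Trial and error: apply random actions, record raw (s, a, s') transitions."""
    data, s = [], 0.0
    for _ in range(n):
        a = rng.uniform(-1, 1)
        s2 = true_step(s, a)
        data.append((s, a, s2))
        s = s2
    return data

def fit_model(data):
    """Fit a linear dynamics model s' ≈ s + w*a by least squares on the raw data."""
    A = np.array([a for _, a, _ in data])
    y = np.array([s2 - s for s, _, s2 in data])
    return float(A @ y / (A @ A))

def plan(s, w, goal, candidates=np.linspace(-1, 1, 201)):
    """Pick the action whose *predicted* next state lands closest to the goal."""
    preds = s + w * candidates
    return float(candidates[np.argmin(np.abs(preds - goal))])

data = collect()
w = fit_model(data)           # learned dynamics coefficient, ≈ 0.1
s, goal = 0.0, 1.0
for _ in range(20):
    s = true_step(s, plan(s, w, goal))
print(round(s, 3))
```

The key property mirrored here is that no controller was hand-designed: the planner only ever consults the model fitted from the robot's own experience, which is what lets the same recipe transfer to tasks the designers don't know how to solve directly.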


Bringing Bots To Life With Artificial Intelligence - TOPBOTS

#artificialintelligence

Any video gamer knows how boring NPCs (non-playable characters) in digital worlds are. Their behavior is simple and predictable and their words entirely scripted by a staff of writers. This makes them uninteresting opponents and unsatisfying companions. We're far more likely to emotionally attach to lifelike characters, like the emo robot sidekicks in the Star Wars franchise, but crafting believable, autonomous entities you can actually interact with is no easy feat. Character models built by artificial intelligence aim to escape the uncanny valley and imbue inanimate objects and digital characters with an aura of realism and life.


How does Boston Dynamics use AI? – Towards Data Science

#artificialintelligence

Because Boston Dynamics is partially funded by DARPA, information about the company is hard to find. Still, let's try to work out how they use technology to build their awesome robots. If someone from Boston Dynamics can correct my mistakes, I'd appreciate it. Eric Jang, Research Engineer at Google Brain, said: "The value of Boston Dynamics is almost entirely in their closed-source control software. Boston Dynamics doesn't publish what techniques they use, but from Marc Raibert's talk at NIPS, it seems like their work is based on the approach proposed in 'Sequential Composition of Dynamically Dexterous Robot Behaviors' by Burridge, Rizzi, and Koditschek in 1999 (https://kodlab.seas.upenn.edu/up...). The robotic policy uses a model-based controller, which is in turn represented as a sequential composition of 'cost funnels' that operate over local regions of state space."
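The "sequential composition of funnels" idea from Burridge, Rizzi, and Koditschek can be sketched in miniature (this is an illustrative toy, not Boston Dynamics' controller; the 1-D state, intervals, and gains are all invented). Each local controller is valid only inside its own region of state space, its "funnel", and is designed so that it drives the state into the domain of a higher-priority funnel, until the goal controller takes over.

```python
from dataclasses import dataclass

@dataclass
class Funnel:
    """A local controller with a region of validity (its 'funnel')."""
    lo: float          # domain of validity: 1-D interval, for illustration
    hi: float
    target: float      # state this local controller drives toward
    gain: float = 0.5

    def contains(self, s):
        return self.lo <= s <= self.hi

    def step(self, s):
        # simple proportional controller pulling the state toward the target
        return s + self.gain * (self.target - s)

def compose(funnels, s, max_steps=100):
    """Always run the highest-priority funnel containing s.
    Assumes the state never leaves the union of the funnel domains."""
    for _ in range(max_steps):
        active = next(f for f in funnels if f.contains(s))
        s = active.step(s)
        if active is funnels[0] and abs(s - active.target) < 1e-3:
            break
    return s

# Funnels ordered by priority; the first funnel's target is the overall goal.
funnels = [
    Funnel(lo=4.0, hi=10.0, target=8.0),   # goal controller
    Funnel(lo=0.0, hi=5.0, target=6.0),    # feeder: drives the state into [4, 10]
]
final = compose(funnels, s=0.5)
print(round(final, 3))
```

Starting at 0.5, only the feeder funnel is valid; it pushes the state upward until it enters the goal funnel's domain, at which point the goal controller takes over and converges on 8.0. Composing many such overlapping local controllers is what lets a single switching policy cover a large region of state space.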