Among the many things we humans like to lord over the rest of the animal kingdom is our complex language. Sure, other creatures talk to one another, but we've got all these wildly complicated written languages with syntax and fun words like defenestrate. This we can also lord over robots, who, in addition to lacking emotion and the ability to not fall on their faces, can't write novels. Researchers at Brown University just got a robot to do something as linguistically improbable as it is beautiful: After training to hand-write Japanese characters, the robot then turned around and started to copy words in a slew of other languages it'd never written before, including Hindi, Greek, and English, just by looking at examples of that handwriting. Not only that, it could do English in print and cursive.
One day in the not-so-distant future, robots could help humans out in the workplace by taking notes or sketching helpful diagrams. That's one of the objectives of a new robot created by researchers from Brown University that can learn to write languages and sketch drawings practically on its own. After learning to write Japanese characters, the robot was able to teach itself how to copy words in 10 different languages, including Hindi, Greek and English, just by studying various examples. It uses an algorithm that helps the robot decide where and how to place each pen stroke that makes up a letter, as well as what order to draw the strokes in to form the correct word. "Just by looking at a target image of a word or sketch, the robot can reproduce each stroke as one continuous action," Atsunobu Kotani, who led the study, said in a statement.
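The stroke-by-stroke idea can be pictured with a toy sketch. This is not the Brown team's actual model — just an illustration, under the assumption that a word is an ordered list of strokes and each stroke is executed as one continuous pen-down motion before the pen lifts:

```python
# Toy illustration of stroke-ordered handwriting reproduction.
# A word is an ordered list of strokes; each stroke is a list of
# (x, y) points drawn as one continuous pen-down action.

def draw_word(strokes):
    """Replay strokes in order, emitting pen events for a plotter-style arm.

    strokes: list of strokes, each a non-empty list of (x, y) points.
    Returns a flat list of pen events.
    """
    events = []
    for stroke in strokes:
        x0, y0 = stroke[0]
        events.append(("pen_up_move", x0, y0))  # travel to the stroke's start
        events.append(("pen_down",))
        for x, y in stroke[1:]:
            events.append(("draw_to", x, y))    # continuous pen-down motion
        events.append(("pen_up",))              # lift before the next stroke
    return events

# Example: a capital "T" as two strokes (top bar, then vertical stem).
t_strokes = [
    [(0, 10), (6, 10)],  # horizontal bar
    [(3, 10), (3, 0)],   # vertical stem
]
print(draw_word(t_strokes))
```

The real system additionally has to infer the strokes and their order from a static image of the finished word, which is the hard part the researchers' learning algorithm addresses.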
Stefanie Tellex, assistant professor of computer science at Brown University, is solving a thorny robotics problem: robotic grasping. She has built a machine learning model that lets robots automatically learn to manipulate objects, and it can produce much-needed sample data that other researchers can use to train robots to pick up objects, she explained at the MIT Technology Review's EmTech conference. "If you go to a robotics lab and put an object in front of a robot that it has not seen before, that robot will almost always not be able to pick up that object." It's a hard problem because a robot has to understand the task and the object from sensor information alone. The robot arm's controls need answers to important questions: what is the object's shape, where is it, how should the robotic arm and gripper move into position, and where is the right place to grip the object to pick it up?
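Those questions can be made concrete with a deliberately naive toy example — not Tellex's system, just an assumed setup where the robot sees an object as a 3D point cloud and picks a top-down grasp at the centroid of the object's highest surface:

```python
# Toy grasp-point selection from a point cloud (illustrative only).
# Real systems must also reason about object shape, gripper geometry,
# friction, and collisions; this sketch answers only "where is the top,
# and where over it should the gripper go?"

def naive_top_grasp(points, band=0.01):
    """Return an (x, y, z) target for a top-down grasp.

    points: iterable of (x, y, z) surface samples of the object.
    band:   how far below the maximum height still counts as "top".
    """
    pts = list(points)
    z_top = max(p[2] for p in pts)
    top = [p for p in pts if p[2] >= z_top - band]  # top-surface points
    n = len(top)
    cx = sum(p[0] for p in top) / n                 # centroid of top surface
    cy = sum(p[1] for p in top) / n
    return (cx, cy, z_top)

# Example: a small box whose top face spans x, y in [0, 0.1] at z = 0.05,
# plus one sample from its base.
cloud = [(0.0, 0.0, 0.05), (0.1, 0.0, 0.05), (0.0, 0.1, 0.05),
         (0.1, 0.1, 0.05), (0.05, 0.05, 0.0)]
print(naive_top_grasp(cloud))  # centroid of the top face: (0.05, 0.05, 0.05)
```

A heuristic like this fails the moment the object is a wine glass or a sock, which is exactly why learned models trained on large grasping datasets are needed.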
Many of the jobs humans would like robots to perform, such as packing items in warehouses, assisting bedridden patients, or aiding soldiers on the front lines, aren't yet possible because robots still can't reliably recognize and handle common objects. People generally have no trouble folding socks or picking up water glasses, because we've gone through "a big data collection process" called childhood, says Stefanie Tellex, a computer science professor at Brown University. For robots to do the same types of routine tasks, they also need access to reams of data on how to grasp and manipulate objects. Where does that data come from? Typically it has come from painstaking programming.
For a serious research robot, Baxter is a charmer. Its face is a flat screen that telegraphs "feelings" like embarrassment (rosy cheeks, upturned eyebrows). If you're so inclined, you can sit in front of it and make it read your mind to fix its mistakes. Or you can point to objects for it to pick up. If it gets confused, it can actually ask you for clarification, a seemingly simple interaction that's in fact a big deal for the budding field of human-robot communication.