Robots in the workplace can perform hazardous or even 'impossible' tasks, such as toxic-waste clean-up and desert or space exploration. AI researchers are also interested in the intelligent processing involved in moving about and manipulating objects in the real world.
Scientists have taught a robot how to use human-like hand gestures while speaking by feeding it footage of people giving presentations. The android learned to use a pointing gesture to portray 'you' or 'me', as well as a crooked-arm action to suggest holding something. Building robots that gesticulate like humans will make interactions with them feel more natural, the Korean team behind the technology said. They built the robot around machine-learning software trained on 52 hours of TED talks, presentations given by expert speakers on various topics. Pictured is a Pepper robot that scientists taught to give human-like hand gestures during speech.
The robots are coming for your jobs, too. China's state news agency has debuted a virtual anchor designed to deliver the news 24 hours a day. Xinhua unveiled its "artificial intelligence news anchor" this week at an internet conference in the eastern city of Wuzhen. "Hello, you are watching English news program. I am AI news anchor in Beijing," the computer-generated host announced in a robotic voice at the start of its English-language broadcast.
Do you ever wish that Amazon's Alexa or Google Assistant were a bit more present for your chats? The answer might be this robotic head from tech company Furhat Robotics. As well as offering you a more human way to talk to a computer, the Furhat robot has the advantage of being able to emote, something Alexa and Assistant struggle with. Built by a Swedish firm, Furhat isn't really intended to replace consumer products like Alexa and Assistant. The robots are currently being used by larger companies that need to give some life to artificial intelligence.
On Sunday and Monday, Twitter was abuzz over a harrowing Russian video, obviously shot with a drone, that showed a mother bear and cub making their way across a steep, snowy ridge. As the drone films, the cub falls down the ridge and laboriously makes its way up to its worried mother, sliding back down onto the rock on multiple occasions. The cub finally makes it to the top of the ridge, and both mother and cub dart off into the brush, the mother glancing over her shoulder in concern as the drone follows them. Many Twitter users found the video to be inspiring, in a way: Look, that baby bear never gave up! Many wildlife biologists, however, saw something rather different.
At CleverTap, we capture 4 billion transactions per day. The trick lies in turning that data into personalization. We are building this technology (and making it available) for digital businesses. We don't segment data manually; instead, the platform ingests data and we leverage AI/ML to do the segmentation. Then we look at how to generate insights into what works and what doesn't.
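The passage doesn't describe CleverTap's actual pipeline, but the idea of replacing hand-made segments with ML-driven ones can be sketched with a toy example: cluster users by a behavioral signal instead of writing manual rules. The user activity numbers and the choice of a tiny 1-D k-means are hypothetical illustrations, not the company's method.

```python
# Toy sketch of ML-based user segmentation (NOT CleverTap's real system):
# instead of manually defined segments, cluster users by behavior.
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Cluster 1-D values into k groups; returns a label per value."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each value to its nearest center
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # move each center to the mean of its members
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

# hypothetical events-per-day for six users
activity = [1, 2, 2, 40, 45, 50]
segments = kmeans_1d(activity, k=2)
# users with similar activity levels end up in the same segment
```

On well-separated data like this, the low-activity users and the high-activity users land in different clusters regardless of how the centers are initialized.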
History has time and again taught us that science fiction is only a fantasy until science makes it a reality. In the 1940s, Isaac Asimov, a prolific science fiction writer, wrote about a future where robots are part of the human world. Similarly, in the sci-fi film RoboCop, made more than 30 years ago, a robot is built to solve an unprecedented crime problem in a dystopian, crime-ridden Detroit. Today that science fiction has become a reality. Police in different parts of the world are using robots for law enforcement, and the first-ever robotic police officers have been deployed in China, Dubai, and Hyderabad in India.
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): Let us know if you have suggestions for next week, and enjoy today's videos. Oregon State's Cassie dressed up as an AT-ST from Star Wars. AT-ST stands for "All Terrain Scout Transport," which is basically accurate for Cassie, too.
When a child is learning to speak, no one bothers explaining the difference between subjects and verbs, or where they fall in a sentence. That is, however, how humans teach computers to understand language: We annotate sentences to describe the structure and meaning of words, and then we use those sentences to train syntactic and semantic parsers. These parsers help voice-recognition systems like Amazon's Alexa understand natural language. This week, researchers from MIT are presenting a paper that describes a new way to train parsers. Mimicking the way a child learns, the system observes captioned videos and associates the words with recorded actions and objects.
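The MIT system itself isn't reproduced here, but the core weak-supervision idea, associating caption words with recorded actions rather than relying on annotated parse trees, can be illustrated with a minimal co-occurrence model. The captions, action labels, and the `associate` function below are invented for illustration only.

```python
# Toy illustration of learning word-action associations from captioned
# clips (NOT the MIT parser): count how often each caption word
# co-occurs with each observed action, then normalize to probabilities.
from collections import Counter, defaultdict

def associate(pairs):
    """pairs: (caption, action) tuples -> {word: {action: P(action|word)}}."""
    counts = defaultdict(Counter)
    for caption, action in pairs:
        for word in caption.lower().split():
            counts[word][action] += 1
    return {w: {a: n / sum(c.values()) for a, n in c.items()}
            for w, c in counts.items()}

clips = [
    ("the woman picks up the cup", "pick_up"),
    ("a man picks up a book", "pick_up"),
    ("the woman opens the door", "open"),
]
model = associate(clips)
# "picks" only ever co-occurs with pick_up, so its estimate is 1.0,
# while function words like "the" spread across both actions
```

A real system grounds whole parse structures, not single words, but the same signal, words paired with observed actions instead of hand annotations, is what makes the training data cheap.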
Boston Dynamics' videos aren't just famous; at this point they are almost a staple of the internet, with typical fare like robots doing backflips and opening doors for their friends. But the machines only became a YouTube phenomenon because someone grabbed the first video from Boston Dynamics' website and uploaded it themselves. "We just had it on our website and someone stole it and posted it," Marc Raibert, founder of Boston Dynamics, told us during a rare on-camera interview at the WIRED25 festival earlier this month. A few weeks later, the video had amassed 3.5 million views. "The light went on--this matters."