Deep neural machine translation (NMT) models learn representations that encode linguistic information, and despite their architectural differences, different models tend to capture similar properties. This phenomenon left researchers wondering whether the learned information is fully distributed across the representation or instead embedded in individual neurons. Recent results suggest both happen to a degree: simple properties such as coordinating conjunctions and determiners can be attributed to individual neurons, while more complex linguistic properties such as syntax and semantics are distributed across multiple neurons. Following up on this, researchers from The Chinese University of Hong Kong, Tencent AI Lab and the University of Macau have proposed a new neuron-interaction-based representation composition for NMT.
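To make the neuron-level analysis concrete, here is a minimal probing sketch in Python. Everything in it is a synthetic stand-in rather than the paper's method or data: the activations are random, the labels are fake, and a "determiner neuron" is planted by hand. The point is only that training a linear probe on one neuron's activation at a time can reveal whether a simple property is localized.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
num_tokens, num_neurons = 4000, 128
# Stand-in activations: rows are tokens, columns are encoder neurons.
activations = rng.normal(size=(num_tokens, num_neurons))
labels = rng.integers(0, 2, size=num_tokens)  # e.g. "token is a determiner"
# Plant a signal in neuron 42 so exactly one neuron encodes the property.
activations[:, 42] += 2.0 * labels

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.25, random_state=0)

# Probe each neuron in isolation: if a linear classifier on a single
# activation predicts the property well, that property is plausibly
# localized in one neuron rather than distributed.
scores = [
    LogisticRegression().fit(X_train[:, [n]], y_train)
                        .score(X_test[:, [n]], y_test)
    for n in range(num_neurons)
]
best = int(np.argmax(scores))
print(f"most predictive neuron: {best} (test accuracy {scores[best]:.2f})")
```

Running this recovers the planted neuron at roughly 84% test accuracy while every other neuron probes at chance; a distributed property, by contrast, would only be decodable from groups of neurons taken together.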
In the fall of 2015, celebrated visual effects whiz Pablo Helman was in Taiwan celebrating Thanksgiving with Martin Scorsese. The 24-year veteran of Industrial Light & Magic, the company founded by George Lucas at the onset of the Star Wars franchise, was midway through production on the director's Jesuit missionary saga, Silence, for which Helman had to digitally re-create the enormity of St. Paul's College of Macau. But over holiday dinner, Scorsese began pitching Helman on a different film entirely. It was another adaptation, this one based on I Heard You Paint Houses, Charles Brandt's biography of mob hit man and supposed Jimmy Hoffa murderer Frank Sheeran. Much like Silence, the story was expansive, though instead of spanning geography (Portugal to Japan), the movie would stretch across years (approximately seven decades).
Practise makes perfect – it is an adage that has helped humans become highly dexterous, and now it is an approach being applied to robots. Computer scientists at the University of Leeds are using the artificial intelligence (AI) techniques of automated planning and reinforcement learning to "train" a robot to find an object in a cluttered space, such as a warehouse shelf or a fridge – and move it. The aim is to develop robotic autonomy, so the machine can assess the unique circumstances presented in a task and find a solution – akin to a robot transferring skills and knowledge to a new problem. The Leeds researchers are presenting their findings today (Monday, November 4) at the International Conference on Intelligent Robots and Systems in Macau, China. The big challenge is that in a confined area, a robotic arm may not be able to grasp an object from above. Instead, it has to plan a sequence of moves to reach the target object, perhaps by manipulating other items out of the way, as the toy sketch below illustrates.
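As a toy illustration (hypothetical shelf contents and a deliberately naive planner, not the Leeds system, which couples automated planning with reinforcement learning):

```python
# Toy shelf model: index 0 is the shelf front; items in front of the
# target block a straight grasp, so the planner clears them first.
def plan_grasp(shelf, target):
    """Return an action sequence that relocates every blocker in front
    of `target` and then grasps it."""
    if target not in shelf:
        raise ValueError(f"{target!r} is not on the shelf")
    actions = [f"move {item} aside" for item in shelf[:shelf.index(target)]]
    actions.append(f"grasp {target}")
    return actions

# Example: a ketchup bottle at the back of a cluttered fridge shelf.
for step in plan_grasp(["milk", "jam", "ketchup"], "ketchup"):
    print(step)
```

The real problem is far harder, of course: the robot must also decide where each blocker can safely go and how to move its arm without collisions, which is where the learning comes in.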
The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (#IROS2019) is being held in Macau this week. The theme this year is "robots connecting people". The conference accepted 1,127 papers for oral presentation, 148 late-breaking news posters, and 41 workshops and tutorials. For those who can't make it in person, or can't possibly see everything, IROS is launching IROS TV, an onsite conference television channel featuring a new episode daily, screened around the conference venue and online. The shows profile the research of scientists, educators, and practitioners in robotics, and provide an opportunity to learn about advances across the field.
Researchers at MIT are helping autonomous cars deliver on the promise of safer roads with a new trick that lets driverless vehicles see around corners to pre-emptively spot other vehicles or moving hazards that human drivers would never see coming. There have been several attempts to build cameras that can see around corners, including one from other MIT researchers, who revealed a system that shines light into a room from the outside, captures the light that bounces back, and then processes the results to calculate a 3D model of objects inside that are otherwise hidden from human observers. That system, however, required a special camera, plus lasers and other hardware that would inevitably increase the cost of an autonomous vehicle, which would, in turn, hurt sales. You didn't think all these carmakers were developing driverless cars for fun, did you? The new approach to spotting oncoming hazards around corners is being presented at the International Conference on Intelligent Robots and Systems in Macau, China, next week, and it builds on and improves an earlier system called ShadowCam, developed a few years prior.
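A heavily simplified sketch of the underlying idea: averaging brightness changes over a patch of floor and amplifying them can surface a shadow signal that sits well below the per-pixel camera noise. The frames, noise model, amplification factor, and threshold below are all invented for illustration; the real ShadowCam pipeline also has to register frames against the vehicle's own motion before amplifying.

```python
import numpy as np

def shadow_motion_scores(frames, amplification=10.0):
    """frames: (T, H, W) grayscale patch of floor near the corner.
    Returns one amplified score per frame transition; a moving shadow
    shows up as a consistent brightness shift across the whole patch."""
    frames = frames.astype(np.float64)
    deltas = np.diff(frames, axis=0).mean(axis=(1, 2))  # mean signed change
    return np.abs(deltas) * amplification

rng = np.random.default_rng(1)
frames = rng.normal(128.0, 1.0, size=(16, 32, 32))  # sensor noise, std 1
frames[8:] -= 0.5  # faint darkening: half the per-pixel noise level
scores = shadow_motion_scores(frames)
# Per pixel the shadow is invisible, but averaged over 1,024 pixels the
# transition score stands far above the noise floor.
print("approaching object:", bool((scores > 2.0).any()))
```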
Here's our daily update in tweets, live from IJCAI (International Joint Conference on Artificial Intelligence) in Macau. Like yesterday, we'll be covering tutorials and workshops.

Now attending the #tutorial "Argumentation and Machine Learning: When the Whole is Greater than the Sum of its Parts" by @CeruttiFederico, & learning about #ML mechanisms that create, annotate, analyze & evaluate arguments expressed in natural language. #AI

Now: "Dialogues with Socially Aware Robot Agents – Knowledge & Reasoning using Natural Language," an invited #IJCAI2019 talk by Prof. Kristiina Jokinen. Her start: "The quality of #intelligence possessed by humans and #AI is fundamentally different." #Bridging2019

On his second slide: #AGI "needs fresh methods with cognitive architectures and philosophy of mind." #AI