Katsu received his PhD in mechanical engineering from the University of Tokyo in 2002. Following postdoctoral work at Carnegie Mellon University from 2002 to 2003, he was a faculty member at the University of Tokyo until he joined Disney Research, Pittsburgh, in October 2008. His main research area is humanoid robot control and motion synthesis, in particular methods involving human motion data and dynamic balancing. He has always been fascinated by the way humans control their bodies, which led him to research on biomechanical human modeling and simulation to understand human sensation and motor control.
Abstract: "Robots today are typically confined to operate in relatively simple, controlled environments. I argue that, in order to handle these variations, robots need to learn to understand how the world changes over time: how the environment can change as a result of the robot's own actions or from the actions of other agents in the environment. I will show how we can apply this idea of understanding changes to a number of robotics problems, such as object segmentation, tracking, and velocity estimation for autonomous driving as well as various object manipulation tasks. By learning how the environment can change over time, we can enable robots to operate in the complex, cluttered environments of our daily lives."
Where this drone takes a step forward is in its folding cage, which allows it to be easily stowed away and transported. By adding such a cage to a multicopter, the team ensures safety for anyone who comes into contact with the drone. When the folding cage is opened to load or unload the drone, a safety mechanism cuts off the engine, so safety is guaranteed even with completely untrained users. The drone can even be caught while it's flying, meaning that it can deliver to people caught in places where landing is hard or even impossible, such as a collapsed building during search and rescue missions, where first aid, medication, water or food may need to be delivered quickly.
The proposed regulations preempt state regulation of vehicle design, and allow companies to apply for high volume exemptions from the standards that exist for human-driven cars. There is a new research area known as "explainable AI" which hopes to bridge this gap and make it possible to document and understand why machine learning systems operate as they do. The most interesting proposal in the prior document was a requirement for public sharing of incident and crash data so that all teams could learn from every problem any team encounters. The new document calls for a standard data format, and makes general motherhood calls for storing data in a crash, something everybody already does.
Mike Salem from Udacity's Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of its free online material. Abdelrahman is a Software Development Engineer on the Core Machine Learning team at Amazon Robotics. His work includes bringing state-of-the-art machine learning techniques to tackle various problems for robots at Amazon's robotic fulfillment centers. You can find all the interviews here.
If you follow the robotics community on the twittersphere, you'll have noticed that Rodney Brooks is publishing a series of essays on the future of robotics and AI that has been gathering wide attention. The series so far includes "The Seven Deadly Sins of Predicting the Future of AI" (September 7, 2017), "Domo Arigato Mr. Roboto" (August 28, 2017), and "Machine Learning Explained" (August 28, 2017).
Abstract: "Teams of robots often have to assign target locations among themselves and then plan collision-free paths to their target locations. Today, hundreds of robots already navigate autonomously in Amazon fulfillment centers to move inventory pods all the way from their storage locations to the packing stations. Path planning for these robots can be NP-hard, yet one must find high-quality collision-free paths for them in real-time. The shorter these paths are, the fewer robots are needed and the cheaper it is to open new fulfillment centers."
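The abstract's setting can be illustrated with a minimal sketch of prioritized multi-robot path planning on a grid: robots are planned one at a time, and each later robot treats the space-time cells of earlier robots' paths as reserved. This is only one of several standard approaches and is not the method from the talk; all function names, the grid encoding, and the horizon values are illustrative assumptions.

```python
from collections import deque

def plan_path(grid, start, goal, reserved, horizon=50):
    """Space-time BFS: shortest path avoiding static obstacles ('#' cells)
    and (cell, timestep) pairs reserved by earlier-planned robots."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        (r, c), t, path = frontier.popleft()
        if (r, c) == goal:
            return path
        if t >= horizon:
            continue
        # Moves: wait in place, or step in one of four directions.
        for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if grid[nr][nc] == '#':
                continue
            # Vertex conflict: target cell is occupied at t+1.
            if ((nr, nc), t + 1) in reserved:
                continue
            # Edge (swap) conflict, approximated via the reservation set:
            # another robot moving from (nr, nc) into (r, c).
            if ((nr, nc), t) in reserved and ((r, c), t + 1) in reserved:
                continue
            state = ((nr, nc), t + 1)
            if state not in seen:
                seen.add(state)
                frontier.append(((nr, nc), t + 1, path + [(nr, nc)]))
    return None  # no collision-free path within the horizon

def plan_all(grid, tasks):
    """Plan robots in priority order, reserving each path as it is found."""
    reserved, paths = set(), []
    for start, goal in tasks:
        path = plan_path(grid, start, goal, reserved)
        if path is None:
            raise RuntimeError(f"no path for {start} -> {goal}")
        for t, cell in enumerate(path):
            reserved.add((cell, t))
        # A robot that has arrived keeps occupying its goal cell.
        for t in range(len(path), 60):
            reserved.add((path[-1], t))
        paths.append(path)
    return paths
```

Prioritized planning like this is fast but incomplete: a poor priority ordering can make a solvable instance fail, which is one reason warehouse-scale path planning remains hard in practice, as the abstract notes.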
An immediately positive thing is the potential ability for private robocars to, once they have taken their owners to safety, drive back into the evacuation zone as temporary fleet cars, and fetch other people, starting with those selected by the car's owner, but also members of the public needing assistance. Cars might ferry people from homes to stations where robotic buses (including those from other cities, and human driven buses) could carry lots of people. The good thing is, if you can imagine it, so can the teams building test systems for robocars. If the data networks are up, they could get information in real time on road problems and disaster situations.
In episode eight of season three we return to the epic (or maybe not so epic) clash between frequentists and Bayesians, take a listener question about the ethical questions creators of machine learning should be asking of themselves (not just of their tools), and hear a conversation with Ernest Mwebaze of Makerere University. See all the latest robotics news on Robohub, or sign up for our weekly newsletter.
In an OpEd piece in the NY Times, and in a TED Talk late last year, Oren Etzioni, PhD, author, and CEO of the Allen Institute for Artificial Intelligence, suggested an update to Isaac Asimov's Three Laws of Robotics for the age of artificial intelligence. In an open letter to the U.N., a group of specialists from 26 nations, led by Elon Musk, called for the United Nations to ban the development and use of autonomous weapons. Another, more political, warning was recently broadcast on VoA: Russian President Vladimir Putin, speaking to a group of Russian students, called artificial intelligence "not only Russia's future but the future of the whole of mankind… The one who becomes the leader in this sphere will be the ruler of the world." What these calls share is a push for oversight that would regulate but not thwart the already growing global AI business.