Talking To Our Computers Is Changing Who We Are

Huffington Post

On Wednesday, Google introduced its new personal assistant, Google Home, which will listen to your voice and provide information on demand, much like the popular Amazon Echo. Apple's Siri and Microsoft's Cortana have been chatting with people for years -- and one expert predicts that voice-driven technology will have startling effects on our social interactions moving forward. "There used to be a disconnect between how we interacted with, say, our desktop computers and our family," Illah Nourbakhsh, a professor of robotics at Carnegie Mellon University, told The Huffington Post. "We interacted with that computer only when we wanted to. Now technology is pervading the home environment."


News: ONR Researchers Create 'Human User Manual' for Robots - Office of Naval Research

#artificialintelligence

ARLINGTON, Va.--With support from the Office of Naval Research (ONR), researchers at the Georgia Institute of Technology have created an artificial intelligence software program named Quixote to teach robots to read stories, learn acceptable behavior and understand successful ways to conduct themselves in diverse social situations. "For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive and trustworthy," said Marc Steinberg, an ONR program manager who oversees the research. "One important question is how to explain complex concepts such as policies, values or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots."


One-Shot Imitation Learning

arXiv.org Artificial Intelligence

Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Specifically, we consider the setting where there is a very large set of tasks, and each task has many instantiations. For example, one task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches the second demonstration as closely as possible. At test time, a demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. The use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks. Videos available at https://bit.ly/nips2017-oneshot.
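
Below is a minimal sketch of the setup the abstract describes: a policy network conditioned on one demonstration via soft attention and trained to reproduce the actions of a second demonstration of the same task instance. It assumes PyTorch; the layer sizes, dot-product attention, and mean-squared-error loss are illustrative assumptions, not the architecture from the paper.

```python
# Minimal sketch of the one-shot imitation idea described above (PyTorch assumed).
# Dimensions, layer sizes, and the simple dot-product attention are illustrative
# assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneShotImitationPolicy(nn.Module):
    def __init__(self, state_dim, action_dim, hidden_dim=128):
        super().__init__()
        # Encode each (state, action) step of the conditioning demonstration.
        self.demo_encoder = nn.Linear(state_dim + action_dim, hidden_dim)
        # Encode the current state into a query for soft attention.
        self.state_encoder = nn.Linear(state_dim, hidden_dim)
        # Map attended demonstration context plus current state to an action.
        self.policy_head = nn.Sequential(
            nn.Linear(hidden_dim + state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, demo_states, demo_actions, current_state):
        # demo_states: (T, state_dim); demo_actions: (T, action_dim); current_state: (state_dim,)
        demo = torch.cat([demo_states, demo_actions], dim=-1)   # (T, state+action)
        keys = self.demo_encoder(demo)                          # (T, hidden)
        query = self.state_encoder(current_state)               # (hidden,)
        weights = torch.softmax(keys @ query, dim=0)            # soft attention over demo steps
        context = (weights.unsqueeze(-1) * keys).sum(dim=0)     # (hidden,)
        return self.policy_head(torch.cat([context, current_state]))

# Training pairs one demonstration (as conditioning input) with a second
# demonstration of the same task instance; predicted actions are regressed
# onto the second demonstration's actions (behavioral cloning).
policy = OneShotImitationPolicy(state_dim=10, action_dim=4)
demo_s, demo_a = torch.randn(20, 10), torch.randn(20, 4)        # conditioning demo
target_s, target_a = torch.randn(15, 10), torch.randn(15, 4)    # second demo of same instance
pred = torch.stack([policy(demo_s, demo_a, s) for s in target_s])
loss = F.mse_loss(pred, target_a)
loss.backward()
```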


NVIDIA's Deep Learning Car Computer Selected by Volvo on Journey Toward a Crash-Free Future

#artificialintelligence

CES--Volvo Cars will use the NVIDIA DRIVE PX 2 deep learning-based computing engine to power a fleet of 100 Volvo XC90 SUVs starting to hit the road next year in the Swedish carmaker's Drive Me autonomous-car pilot program, NVIDIA announced today. Autonomous technology is an important contributor to Volvo's Vision 2020 -- its guiding principles for creating safer vehicles. This work has resulted in world-leading advancements in autonomous and semi-autonomous driving, and a new safety benchmark for the automotive industry. "Our vision is that no one should be killed or seriously injured in a new Volvo by the year 2020," said Marcus Rothoff, director of the Autonomous Driving Program at Volvo Cars. "NVIDIA's high-performance and responsive automotive platform is an important step towards our vision and perfect for our autonomous drive program and the Drive Me project."


From virtual demonstration to real-world manipulation using LSTM and MDN

arXiv.org Artificial Intelligence

Robots assisting the disabled or elderly must perform complex manipulation tasks and must adapt to the home environment and preferences of their user. Learning from demonstration is a promising approach that would allow a non-technical user to teach the robot different tasks. However, collecting demonstrations in the home environment of a disabled user is time consuming, disruptive to the comfort of the user, and presents safety challenges. It would be desirable to perform the demonstrations in a virtual environment. In this paper we describe a solution to the challenging problem of behavior transfer from virtual demonstration to a physical robot. The virtual demonstrations are used to train a deep neural network based controller, which uses a Long Short-Term Memory (LSTM) recurrent neural network to generate trajectories. The training process uses a Mixture Density Network (MDN) to calculate an error signal suited to the multimodal nature of demonstrations. The controller learned in the virtual environment is transferred to a physical robot (a Rethink Robotics Baxter). An off-the-shelf vision component substitutes for the geometric knowledge available in the simulation, and an inverse kinematics module allows the Baxter to enact the trajectory. Our experimental studies validate the three contributions of the paper: (1) the controller learned from virtual demonstrations can be used to successfully perform the manipulation tasks on a physical robot, (2) the LSTM+MDN architectural choice outperforms other choices, such as feedforward networks and mean-squared-error-based training signals, and (3) allowing imperfect demonstrations in the training set also allows the controller to learn how to correct its manipulation mistakes.
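
As a rough illustration of the LSTM+MDN combination described above, the sketch below pairs an LSTM trajectory model with a mixture-density output head and trains it by maximizing the likelihood of demonstrated actions. It assumes PyTorch; the layer sizes, number of mixture components, diagonal Gaussians, and dimensions (e.g. a 7-DOF arm) are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of an LSTM controller trained with a Mixture Density Network
# (MDN) loss, as in the abstract above (PyTorch assumed). Layer sizes, the
# number of mixture components, and diagonal Gaussians are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMMDNController(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden_dim=64, n_mixtures=5):
        super().__init__()
        self.act_dim, self.n_mixtures = act_dim, n_mixtures
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        # For each mixture component: one weight logit, a mean vector, and a std vector.
        self.mdn = nn.Linear(hidden_dim, n_mixtures * (1 + 2 * act_dim))

    def forward(self, obs_seq):
        h, _ = self.lstm(obs_seq)                                   # (B, T, hidden)
        params = self.mdn(h)                                        # (B, T, K*(1+2A))
        B, T, _ = params.shape
        K, A = self.n_mixtures, self.act_dim
        logits = params[..., :K]                                    # mixture weight logits
        mu = params[..., K:K + K * A].reshape(B, T, K, A)           # component means
        sigma = params[..., K + K * A:].reshape(B, T, K, A).exp()   # positive std devs
        return logits, mu, sigma

def mdn_nll(logits, mu, sigma, target):
    # Negative log-likelihood of target actions under the Gaussian mixture;
    # unlike mean-squared error, this tolerates multimodal demonstrations.
    target = target.unsqueeze(2)                                    # (B, T, 1, A)
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(target).sum(-1)                        # (B, T, K)
    log_mix = torch.log_softmax(logits, dim=-1)
    return -torch.logsumexp(log_mix + log_prob, dim=-1).mean()

controller = LSTMMDNController(obs_dim=12, act_dim=7)   # e.g. 7-DOF arm joint targets (assumed)
obs = torch.randn(8, 50, 12)                            # batch of demonstration observations
actions = torch.randn(8, 50, 7)                         # corresponding demonstrated actions
loss = mdn_nll(*controller(obs), actions)
loss.backward()
```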