Glas discusses how ERICA was designed, the uncanny valley, the software architecture of ERICA, and some of the research studies that ERICA has been involved in. Dylan Glas is a Senior Robotics Software Architect at Futurewei Technologies, a research division of Huawei in Silicon Valley. He was previously a senior researcher in social robotics at Hiroshi Ishiguro Laboratories at ATR and a Guest Associate Professor at the Intelligent Robotics Laboratory at Osaka University. He was the chief architect for the ERICA android in the ERATO Ishiguro Symbiotic Human-Robot Interaction Project. His research interests include social human-machine interaction, ubiquitous sensing, network robot systems, teleoperation for social robots, and machine learning.
Last week's breaking news story on The Robot Report was unfortunately the demise of Helen Greiner's company, CyPhy Works (d/b/a Aria Insights). The high-flying startup raised close to $40 million since its creation in 2008, making it the second business founded by an iRobot alum to shutter within five months. While it is not immediately clear why the tethered-drone company went bust, it does raise important questions about the long-term market opportunities for leashed robots. The tether concept is not exclusive to Greiner's company; a handful of drone companies vie for market share, including FotoKite, Elistair, and HoverFly. The primary driver towards tethering an Unmanned Aerial Vehicle (UAV) is bypassing the Federal Aviation Administration's (FAA) ban on beyond-line-of-sight operations.
A child who has never seen a pink elephant can still describe one -- unlike a computer. "The computer learns from data," says Jiajun Wu, a PhD student at MIT. "The ability to generalize and recognize something you've never seen before -- a pink elephant -- is very hard for machines." Deep learning systems interpret the world by picking out statistical patterns in data. This form of machine learning is now everywhere, automatically tagging friends on Facebook, narrating Alexa's latest weather forecast, and delivering fun facts via Google search. But statistical learning has its limits.
"Recent advances in Deep Reinforcement Learning (DRL) algorithms have provided us with the possibility of adding intelligence to robots. Recently, we have been applying a variety of DRL algorithms to tasks that modern control theory may not be able to solve. We observed intriguing creativity from robots when they are constrained in reaching a certain goal. To introduce the topic, I will talk about some of the experiments being done to show the capabilities and limitations of modern Deep Reinforcement Learning approaches, including those with sparse rewards and continuous observation and action spaces. An in-depth explanation will be given of how Hindsight Experience Replay (HER) has been used to obtain dense learning signal from sparse-reward environments when using Deep Deterministic Policy Gradient (DDPG) agents. I will then show how we have modified some of these experiments to gain a deeper understanding of the intelligence we are developing, and of the baseline environmental characteristics that make the robots achieve higher levels of creativity during their problem-solving scenarios."
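The core idea behind HER is simple to sketch: failed episodes are replayed as if the goal had been whatever the agent actually achieved, turning a sparse reward into useful training signal. Below is a minimal, hedged sketch of the "future" relabeling strategy; the tuple layout, tolerance, and function names are illustrative assumptions, not the speakers' implementation.

```python
import random

def sparse_reward(achieved, goal, tol=0.05):
    """Sparse reward: 0 on success, -1 otherwise."""
    return 0.0 if abs(achieved - goal) <= tol else -1.0

def her_relabel(episode, k=4):
    """Relabel transitions with future achieved goals (the 'future' strategy).

    episode: list of (state, action, achieved_goal, desired_goal) tuples.
    Returns (transition, goal, reward) training tuples. The original-goal
    copies are kept; the k relabeled copies substitute goals the agent
    actually reached later, so even a 'failed' episode yields successes.
    """
    out = []
    for t, (s, a, ag, dg) in enumerate(episode):
        # Original transition with its (likely -1) sparse reward.
        out.append(((s, a, ag), dg, sparse_reward(ag, dg)))
        # Relabel with up to k goals sampled from the episode's future.
        future = episode[t:]
        for _ in range(min(k, len(future))):
            _, _, future_ag, _ = random.choice(future)
            out.append(((s, a, ag), future_ag, sparse_reward(ag, future_ag)))
    return out
```

Fed into an off-policy learner such as DDPG, these relabeled tuples densify the reward landscape without changing the environment itself.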
In this episode, Audrow Nash speaks with Ian Bernstein, Founder and Head of Product at Misty Robotics, about a robotics platform designed for developers called Misty II. Ian Bernstein is Founder and Head of Product at Misty Robotics, a spin-off company from Sphero, Inc. focused on building personal robots for the home and office. In this role, Bernstein leads Misty Robotics' product development and design. Prior to Misty Robotics, Bernstein served as Founder and Chief Technology Officer at Sphero, Inc., which has shipped more than 3 million robots to date. Bernstein joined TechStars in 2010 with Sphero co-founder Adam Wilson and created Sphero, the original app-enabled robotic ball.
Abstract: "In this talk I will cover some of the recent work out of the Socially Intelligent Machines Lab at UT Austin (http://sim.ece.utexas.edu/research.html). The vision of our research is to enable robots to function in dynamic human environments by allowing them to flexibly adapt their skill set via learning interactions with end-users. We explore the ways in which Machine Learning agents can exploit principles of human social learning, and break down assumptions about what 'data' will be like when the source of that data is an average human teacher. I will cover our work on interactive reinforcement learning algorithms that model the attention of the teacher; coupling learning from demonstration with simulation to make the best use of valuable interactions with people; and algorithms for re-using previously learned tasks in new contexts with the help of a teacher's hints and corrections. In the latter part of the talk, I will put on my other hat, as co-founder and CEO of Diligent Robotics (http://diligentrobots.com/about), to tell you about how we are translating our research on adapting to human environments into a commercial product. Our first product, Moxi, is a robot assistant that works alongside and supports clinical care teams in hospitals. Moxi was launched into beta trials late last year, and has been deployed in four hospitals across Texas to date."
The tactile MPC algorithm works by training an action-conditioned visual dynamics (video-prediction) model on autonomously collected data. This model learns from raw sensory data, such as image pixels, and can directly predict future images, taking as input hypothetical future actions by the robot and starting tactile images called context frames. No other information, such as the absolute position of the end effector, is specified.
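The planning loop this describes can be sketched as sampling-based MPC on top of the learned predictor: sample candidate action sequences, roll the video-prediction model forward from the context frames, score each predicted final frame against a goal tactile image, and execute the first action of the best sequence. The sketch below assumes random-shooting with a pixel-distance cost and a stand-in `step_fn` for the learned model; all names and parameters are illustrative, not the authors' code.

```python
import numpy as np

def predict_frames(context_frames, actions, step_fn):
    """Roll an action-conditioned dynamics model forward.

    context_frames: starting tactile images (2D arrays); step_fn stands in
    for the learned video-prediction model, mapping (frame, action) -> next
    predicted frame.
    """
    frame = context_frames[-1]
    preds = []
    for a in actions:
        frame = step_fn(frame, a)
        preds.append(frame)
    return preds

def plan(context_frames, goal_frame, step_fn, horizon=5, samples=64, rng=None):
    """Random-shooting MPC: sample action sequences, score predicted final
    frames by squared pixel distance to the goal image, and return the best
    sequence's first action (to be re-planned at every control step)."""
    rng = rng or np.random.default_rng(0)
    best_cost, best_action = np.inf, None
    for _ in range(samples):
        actions = rng.uniform(-1.0, 1.0, size=horizon)
        preds = predict_frames(context_frames, actions, step_fn)
        cost = np.sum((preds[-1] - goal_frame) ** 2)
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action
```

Because only the first action is executed before re-planning, prediction errors deep in the horizon matter less than the ranking of near-term candidates.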
Taking a cue from biological cells, researchers from MIT, Columbia University, and elsewhere have developed computationally simple robots that connect in large groups to move around, transport objects, and complete other tasks. This so-called "particle robotics" system -- based on a project by MIT, Columbia Engineering, Cornell University, and Harvard University researchers -- comprises many individual disc-shaped units, which the researchers call "particles." The particles are loosely connected by magnets around their perimeters, and each unit can only do two things: expand and contract. That motion, when carefully timed, allows the individual particles to push and pull one another in coordinated movement. On-board sensors enable the cluster to gravitate toward light sources.
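Since each particle can only expand and contract, coordinated motion comes entirely from timing. A toy sketch of that idea: give each disc a sinusoidal radius cycle whose phase offset depends on how much light it senses, so the expansion wave sweeps across the cluster from the bright side. The phase rule here is an assumption for illustration, not the published controller.

```python
import math

def particle_phase(light_intensity, period=1.0):
    """Timing rule (illustrative assumption): particles sensing more light
    (intensity in [0, 1]) start their expansion cycle earlier, so the
    expand/contract wave propagates away from the light source and the
    loosely coupled cluster drifts toward it."""
    return (1.0 - light_intensity) * period

def radius(t, phase, r_min=1.0, r_max=2.0, period=1.0):
    """Radius of one particle at time t: a pure expand/contract oscillation,
    the only motion each unit can produce."""
    mid, amp = (r_max + r_min) / 2.0, (r_max - r_min) / 2.0
    return mid + amp * math.sin(2.0 * math.pi * (t - phase) / period)
```

The robustness of the approach follows from this statelessness: any particle can fail or be removed, and the remaining discs still produce the same light-ordered wave.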
A soft robot, attached to a balloon and submerged in a transparent column of water, dives and surfaces, then dives and surfaces again, like a fish chasing flies. Soft robots have performed this kind of trick before. But unlike most soft robots, this one is made and operated with no hard or electronic parts. Inside, a soft, rubber computer tells the balloon when to ascend or descend. For the first time, this robot relies exclusively on soft digital logic.
Abstract: "I will talk about three results that surprised me. First, I will show that the free configuration space of an elastic wire is path-connected, a result that makes easy a manipulation planning problem that was thought to be hard. Second, I will show a linear relationship between stimulation parameters, skin impedance, and sensation intensity in electrotactile stimulation. This result leads to algorithms that keep sensation intensity constant despite large variability in skin impedance, eliminating a longstanding barrier to practical use of electrotactile stimulation for sensory substitution and haptic feedback. Third, I will show you several obvious ways to use fiducial markers – which everybody knows will improve the performance of Structure-from-Motion (SfM) algorithms for vision-based 3D reconstruction – that work poorly. Then, I will show you a simple but less obvious way to use them that seems to work well. I will also talk about my experience teaching two engineering courses – one on robotics, one on control systems – to students incarcerated at Danville Correctional Center, an Illinois state prison. I will tell you why I did it and what I learned."