Dr. Tambe describes his team's use of security games to combat poaching, and his experience deploying his algorithms to inform park ranger schedules internationally. Dr. Milind Tambe is the Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California, and Professor in the Computer Science and Industrial and Systems Engineering Departments. He is a founding co-director of the CAIS Center for AI in Society, where he advises students and conducts research on multiagent teamwork, distributed constraint optimization, and security games. The security games framework developed by Dr. Tambe has been deployed and tested nationally and internationally, and led him to co-found the company Avata Intelligence.
Wearing a sensor-packed glove while handling a variety of objects, MIT researchers have compiled a massive dataset that enables an AI system to recognize objects through touch alone. The information could be leveraged to help robots identify and manipulate objects, and may aid in prosthetics design. The researchers developed a low-cost knitted glove, called "scalable tactile glove" (STAG), equipped with about 550 tiny sensors across nearly the entire hand. Each sensor captures pressure signals as humans interact with objects in various ways. A neural network processes the signals to "learn" a dataset of pressure-signal patterns related to specific objects.
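The classification step can be pictured with a toy sketch: the glove's roughly 550 pressure readings form a feature vector, and a classifier learns to map characteristic pressure signatures to object labels. The sensor count, object set, synthetic data, and the simple softmax model below are illustrative stand-ins; the MIT work uses a convolutional network trained on real glove recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS = 548          # the STAG has ~550 taxels; exact count assumed
N_OBJECTS = 3            # toy stand-in for the real object set

# Synthetic stand-in data: each object produces a characteristic
# pressure "signature" plus noise (real data comes from the glove).
prototypes = rng.random((N_OBJECTS, N_SENSORS))

def sample(label, n):
    """Draw n noisy pressure frames for one object."""
    return prototypes[label] + 0.1 * rng.standard_normal((n, N_SENSORS))

X = np.vstack([sample(k, 50) for k in range(N_OBJECTS)])
y = np.repeat(np.arange(N_OBJECTS), 50)

# Minimal softmax classifier trained by gradient descent
# (a stand-in for the paper's neural network).
W = np.zeros((N_SENSORS, N_OBJECTS))
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0          # softmax cross-entropy gradient
    W -= 0.1 * X.T @ p / len(y)

acc = (np.argmax(X @ W, axis=1) == y).mean()
```

Even this linear model separates the toy signatures easily; the point is only that each object leaves a distinctive pattern across the sensor array.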
Communicating the goal of a task to another person is easy: we can use language, show them an image of the desired outcome, point them to a how-to video, or use some combination of all of these. On the other hand, specifying a task to a robot for reinforcement learning requires substantial effort. Most prior work that has applied deep reinforcement learning to real robots makes use of specialized sensors to obtain rewards or studies tasks where the robot's internal sensors can be used to measure reward. Since such instrumentation needs to be done for any new task that we may wish to learn, it poses a significant bottleneck to widespread adoption of reinforcement learning for robotics, and precludes the use of these methods directly in open-world environments that lack this instrumentation. We have developed an end-to-end method that allows robots to learn from a modest number of images that depict successful completion of a task, without any manual reward engineering.
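One common way to turn success images into a reward signal, sketched below under simplifying assumptions, is to train a binary classifier on the user-provided success images and use its predicted success probability as the reward. The synthetic features, the explicit negative set, and the logistic model here are illustrative stand-ins for real camera frames and a deep network; the actual method is more sophisticated about where its negative examples come from.

```python
import numpy as np

rng = np.random.default_rng(1)
IMG_DIM = 64  # flattened image features; a stand-in for camera frames

# Assumed setup: ~80 user-provided "success" images plus images of
# other states (in practice, negatives can come from the robot's own
# experience rather than being supplied up front).
success = rng.random((80, IMG_DIM)) + 1.0
other = rng.random((80, IMG_DIM))
X = np.vstack([success, other])
y = np.array([1.0] * 80 + [0.0] * 80)

# Logistic-regression success classifier (a deep net in practice).
w = np.zeros(IMG_DIM)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y                               # cross-entropy gradient
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

def reward(image):
    """RL reward = classifier's probability that the image shows success."""
    return 1.0 / (1.0 + np.exp(-(image @ w + b)))
```

The RL agent then maximizes this learned reward instead of a hand-instrumented one, which is what removes the per-task engineering bottleneck.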
Kapitonov discusses the advantages of using blockchain, use cases including a fully autonomous vending machine, and the Robonomics technology stack. Below are two videos showing the Robonomics Platform in action via a fully autonomous robot artist and drones for environmental monitoring. Aleksandr Kapitonov is a "robot economics" academic society progressor at Airalab (the team behind Robonomics Platform), an assistant professor of Control Systems and Robotics at ITMO University, and regional coordinator of the Erasmus IOT-OPEN.EU project for researching and developing IoT education practices. His research focuses on navigation, computer vision, control of mobile robots, and communication for multi-agent systems.
Imagine a robot trying to learn how to stack blocks and push objects using visual inputs from a camera feed. In order to minimize cost and safety concerns, we want our robot to learn these skills with minimal interaction time, but efficient learning from complex sensory inputs such as images is difficult. This work introduces SOLAR, a new model-based reinforcement learning (RL) method that can learn skills – including manipulation tasks on a real Sawyer robot arm – directly from visual inputs with under an hour of interaction. To our knowledge, SOLAR is the most efficient RL method for solving real-world image-based robotics tasks. Our robot learns to stack a Lego block and push a mug onto a coaster with only inputs from a camera pointed at the robot.
With aims of bringing more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments. Human drivers are exceptionally good at navigating roads they haven't driven on before, using observation and simple tools. We simply match what we see around us to what we see on our GPS devices to determine where we are and where we need to go. Today's autonomous vehicles, however, lack that ability. In every new area, the cars must first map and analyze all the new roads, which is very time consuming. The systems also rely on complex maps -- usually generated by 3-D scans -- which are computationally intensive to generate and process on the fly.
The IEEE International Conference on Robotics and Automation (ICRA) is being held this week in Montreal, Canada. It's one of the top venues for roboticists and attracts over 4,000 attendees. Andra, Audrow, Lauren, and Lilly are on the ground, so expect lots of great podcasts, videos with best-paper nominees, and coverage in the weeks and months ahead. For a taste of who is presenting, here is the schedule of keynotes. It also looks like you can navigate the program, read abstracts, and watch spotlight presentations by following these instructions.
Børnich discusses how Eve can be used in research, how Eve's motors have been designed to be safe around humans (including why they use a low gear ratio), how they do direct force control and the benefits of this approach, and how they use machine learning to reduce cogging in their motors. Børnich also discusses the long-term goal of Halodi Robotics and how they plan to support researchers using Eve. Below are two videos of Eve. The first is a video of how Eve can be used as a platform to address several research questions. The second shows Eve moving a box and dancing.
MIT researchers have devised a method for assessing the robustness of machine-learning models known as neural networks across various tasks, by detecting when the models make mistakes they shouldn't. Convolutional neural networks (CNNs) are designed to process and classify images for computer vision and many other tasks. But slight modifications that are imperceptible to the human eye -- say, a few darker pixels within an image -- may cause a CNN to produce a drastically different classification. Such modifications are known as "adversarial examples." Studying the effects of adversarial examples on neural networks can help researchers determine how their models could be vulnerable to unexpected inputs in the real world.
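The MIT evaluation method itself isn't shown here, but the idea of an adversarial example can be sketched with the classic fast-gradient-sign-style attack (Goodfellow et al.): nudge every pixel by a tiny amount in the direction that most reduces the model's confidence. The logistic "classifier," sizes, and epsilon below are toy stand-ins for a real CNN.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in: a logistic-regression "image classifier" in place of a
# CNN, just to make the gradient-based attack concrete.
D = 100                      # number of "pixels"
w = rng.standard_normal(D)   # model weights (assumed already trained)
x = rng.random(D)            # an input image with pixels in [0, 1]

def predict(img):
    """Model's probability that the image belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-img @ w))

# Perturb each pixel by at most eps in the direction (the sign of the
# logit's gradient, which here is just w) that lowers confidence in
# the currently predicted class.
eps = 0.05
p0 = predict(x)
label = 1 if p0 > 0.5 else 0
direction = -np.sign(w) if label == 1 else np.sign(w)
x_adv = np.clip(x + eps * direction, 0.0, 1.0)
p1 = predict(x_adv)
```

No pixel moves by more than 0.05, yet because the per-pixel nudges all push the decision score the same way, the model's confidence shifts sharply -- the essence of why adversarial examples are worrying.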
Hydraulics are sometimes looked at as an alternative to electric motors. Hydraulic systems use an incompressible liquid (as opposed to pneumatics, which use a compressible gas) to transfer force from one place to another. Since a hydraulic system is a closed system (ignore relief valves for now), a force applied to one end of the system is transferred to another part of that system. By manipulating the volume of fluid in different parts of the system, you can change the forces in different parts of the system (remember Pascal's Law from high school?). So here are some of the basic components used (or needed) to develop a hydraulic system.
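Pascal's Law is what makes this useful: pressure is transmitted undiminished through the fluid, so F1/A1 = F2/A2, and a small force on a narrow piston becomes a large force on a wide one (traded against travel distance). A quick worked example, with piston diameters made up for illustration:

```python
import math

def output_force(f_in, d_in, d_out):
    """Force on the output piston given input force and piston diameters.

    Pascal's Law: F_out = F_in * (A_out / A_in), where A = pi * (d/2)^2.
    """
    a_in = math.pi * (d_in / 2) ** 2
    a_out = math.pi * (d_out / 2) ** 2
    return f_in * a_out / a_in

# 100 N on a 2 cm piston driving a 10 cm piston: the area ratio is
# (10/2)^2 = 25, so the output force is about 2500 N.
print(output_force(100.0, 0.02, 0.10))
```

Note the trade-off: to conserve energy, the wide piston moves 25 times less distance than the narrow one.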