Building smart robots using AI and ROS: Part 1


The Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms. ROS can be used to create applications for a robot without depending on the actual machine, saving cost and time; these applications can then be transferred onto the physical robot without modification. The robots' decision-making capabilities can be further aided with AI.

How Machine Learning Lets Robots Teach Themselves


Algorithms play a role in so much of what we see online and in our day-to-day lives, helping out with everything from setting bail to finding recipes. But while the algorithms of the past were painstakingly coded by humans, the algorithms of the future will be built by robots themselves. They'll be better and more efficient, but also nearly impossible for humans to understand.


AAAI Conferences

Mobile robots are increasingly being used in the real world due to the availability of high-fidelity sensors and sophisticated information processing algorithms. A key challenge to the widespread deployment of robots is the ability to accurately sense the environment and collaborate towards a common objective. Probabilistic sequential decision-making methods can be used to address this challenge because they encapsulate the partial observability and non-determinism of robot domains. However, such formulations soon become intractable for domains with complex state spaces that require real-time operation. Our prior work enabled a mobile robot to use hierarchical partially observable Markov decision processes (POMDPs) to automatically tailor visual sensing and information processing to the task at hand. This paper introduces adaptive observation functions and policy re-weighting in a three-layered POMDP hierarchy to enable reliable and efficient visual processing in dynamic domains. In addition, each robot merges its beliefs with those communicated by teammates, enabling a team of robots to collaborate robustly. All algorithms are evaluated in simulated domains and on physical robots tasked with locating target objects in indoor environments.
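The belief maintenance and merging described above can be illustrated with a minimal sketch. This is not the paper's three-layered hierarchy; it shows one common scheme, assumed for illustration: a discrete Bayesian belief update from a local observation, followed by multiplicative fusion of a teammate's communicated belief over the same state space.

```python
import numpy as np

def bayes_update(belief, likelihood):
    """One step of a discrete Bayesian belief update:
    posterior is proportional to prior * P(observation | state)."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

def merge_beliefs(own, teammate):
    """Fuse two belief vectors over the same discrete state space
    (assumed independent) by elementwise product and renormalization."""
    merged = own * teammate
    return merged / merged.sum()

# Three candidate locations for a target object.
own = np.array([0.5, 0.3, 0.2])
obs_likelihood = np.array([0.9, 0.1, 0.1])   # local camera favors location 0
own = bayes_update(own, obs_likelihood)

teammate = np.array([0.6, 0.2, 0.2])         # belief communicated by a teammate
merged = merge_beliefs(own, teammate)
print(merged)  # fused belief concentrates on location 0
```

Multiplicative fusion sharpens the team's belief when robots agree and flattens it when they disagree, which is why communication helps the team locate targets faster than any single robot could.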


AAAI Conferences

Research in learning from demonstration has focused on transferring movements from humans to robots. However, a need is arising for robots that do not just replicate the task on their own, but also interact with humans in a safe and natural way to accomplish tasks cooperatively. Robots with variable impedance capabilities open the door to new challenging applications, where the learning algorithms must be extended to encapsulate force and vision information. In this paper we propose a framework to transfer impedance-based behaviors to a torque-controlled robot by kinesthetic teaching. The proposed model encodes the demonstrations as a task-parameterized statistical dynamical system, where the robot impedance is shaped by estimating virtual stiffness matrices from the set of demonstrations. A collaborative assembly task is used as a testbed. The results show that the model can modify the robot impedance during task execution to facilitate the collaboration, triggering stiff and compliant behaviors online to adapt to the user's actions.
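The core idea of impedance shaping can be sketched with the standard joint-space impedance control law, tau = K (q_des - q) + D (qd_des - qd). The stiffness values below are made-up placeholders, and estimating K from demonstrations (the paper's contribution) is not shown; the sketch only illustrates how swapping K switches the robot between stiff and compliant behavior under the same law.

```python
import numpy as np

def impedance_torque(q, qd, q_des, qd_des, K, D):
    """Joint-space impedance control law:
    tau = K (q_des - q) + D (qd_des - qd),
    where K is a (possibly time-varying) stiffness matrix and
    D a damping matrix."""
    return K @ (q_des - q) + D @ (qd_des - qd)

# Stiff phase (e.g., guiding a part into place) vs. compliant phase
# (yielding to the human partner): same law, different stiffness.
K_stiff = np.diag([300.0, 300.0])   # illustrative values (N*m/rad)
K_soft = np.diag([20.0, 20.0])
D = np.diag([10.0, 10.0])

q = np.array([0.1, -0.05])          # current joint positions (rad)
qd = np.zeros(2)                    # current joint velocities
q_des = np.zeros(2)                 # desired positions
qd_des = np.zeros(2)

tau_stiff = impedance_torque(q, qd, q_des, qd_des, K_stiff, D)
tau_soft = impedance_torque(q, qd, q_des, qd_des, K_soft, D)
# The same tracking error produces a much larger restoring torque
# in the stiff phase than in the compliant one.
```

In the framework described above, K would be replaced by the virtual stiffness matrices estimated from kinesthetic demonstrations and varied online as the task progresses.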

BRETT the Robot learns to put things together on his own


UC Berkeley researchers have developed algorithms that enable robots to learn motor tasks through trial and error, using a process that more closely approximates the way humans learn and marking a major milestone in the field of artificial intelligence. In their experiments, the PR2 robot, nicknamed BRETT (Berkeley Robot for the Elimination of Tedious Tasks), used deep learning techniques to complete various tasks without pre-programmed details about its surroundings.
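BRETT's actual method is deep reinforcement learning, which is far beyond a short snippet; but the trial-and-error principle itself can be shown with a toy random-search loop. Everything here is a made-up stand-in: the `reward` function plays the role of task success, and the learner improves its motion parameters purely by trying perturbations and keeping the ones that score better.

```python
import random

def reward(params):
    """Stand-in task score: higher when the motion parameters are
    closer to the values (unknown to the learner) that complete
    the task."""
    target = [0.7, -0.3]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def trial_and_error(steps=500, noise=0.1, seed=0):
    """Toy trial-and-error loop: perturb the current parameters,
    keep the perturbation whenever the attempt scored better."""
    rng = random.Random(seed)
    params = [0.0, 0.0]
    best = reward(params)
    for _ in range(steps):
        candidate = [p + rng.gauss(0, noise) for p in params]
        r = reward(candidate)
        if r > best:
            params, best = candidate, r
    return params, best

params, best = trial_and_error()
# After a few hundred trials, the learned parameters approach the
# target without the target ever being given to the learner.
```

Deep reinforcement learning replaces both pieces of this sketch: a neural network maps raw camera input to motor commands, and gradient-based updates replace random perturbations, but the feedback signal is still just a reward for better attempts.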