In this paper, we present an algorithm that integrates computer vision with machine learning to enable a humanoid robot to accurately fire at objects classified as targets. The robot must be calibrated to hold the gun and instructed in how to pull the trigger. Two algorithms are proposed and are executed depending on the dynamics of the target. If the target is stationary, a least mean square (LMS) approach is used to compute the error and adjust the gun muzzle accordingly. If the target is found to be dynamic, a modified Q-learning algorithm is used to predict the object's position and velocity and to adjust the relevant parameters as necessary. The image processing utilizes the OpenCV library to detect the target and the point of impact of the bullets. The approach is evaluated on a 53-DOF humanoid robot, iCub. This work is an example of fine motor control, which, through spatial reasoning, forms a basis for much of natural language processing. It is one aspect of a long-term research effort on automatic language acquisition.
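The stationary-target case described above can be sketched as a simple LMS feedback loop: the vision system reports where the bullet landed, and each muzzle angle is nudged in proportion to the error between the impact and the target. This is an illustrative sketch only; the function and variable names (`lms_adjust`, `aim`, `impact`, `mu`) are assumptions, not the paper's actual implementation, and the simulated impact stands in for OpenCV-based detection.

```python
# Hypothetical sketch of the stationary-target case: an LMS update
# moves the muzzle angles toward the target using the observed point
# of impact. All names and constants here are illustrative.

def lms_adjust(aim, impact, target, mu=0.5):
    """One LMS step: shift each muzzle angle against the impact error.

    aim    -- current (pan, tilt) muzzle angles
    impact -- (x, y) point of impact detected in the image
    target -- (x, y) target position detected in the image
    mu     -- step size (learning rate)
    """
    # Error between where the bullet landed and where it should land.
    error = [t - i for t, i in zip(target, impact)]
    # LMS update: adjust the aim proportionally to the error.
    return [a + mu * e for a, e in zip(aim, error)]

aim = [0.0, 0.0]
for _ in range(20):                        # iterate: shoot, observe, adjust
    impact = [aim[0], aim[1]]              # stand-in for the detected impact
    aim = lms_adjust(aim, impact, [3.0, -1.5])
```

With each iteration the residual error shrinks by a factor of `(1 - mu)`, so the aim converges geometrically to the target position.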
Models of the world can take many shapes. In this paper, we will discuss how groups of autonomous robots learn languages that can be used as a means for modeling the environment. The robots have already learned simple languages for communication of task instructions. These languages are adaptable under changing situations; i.e., once the robots learn a language, they are able to learn new concepts and update old concepts. In this prior work, reinforcement learning using a human instructor provides the motivation for communication. In the current work, the world will be the motivation for learning languages. Since the languages are grounded in the world, they can be used to talk about the world; in effect, the language is the means the robots use to model the world. This paper will explore the issues of learning to communicate solely through environmental motivation. Additionally, we will discuss the possible uses of these languages for interacting with the world.
The next big trend in AI looks likely to be computers and robots that teach themselves through trial and error. Elon Musk and Sam Altman (of Y Combinator) caused a stir last December by luring several high-profile researchers to join OpenAI, a billion-dollar nonprofit dedicated to releasing cutting-edge artificial intelligence research for free. Today the nonprofit released the first fruits of its work, and it suggests that kind of learning will be important for the future of AI. The nonprofit has released a tool called OpenAI Gym for developing and comparing different so-called reinforcement learning algorithms, which provide a way for a machine to learn through positive and negative feedback. This week OpenAI also announced two new recruits, including Pieter Abbeel, an associate professor at Berkeley and a leading expert on applying reinforcement learning to robots.
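The "learning through positive and negative feedback" mentioned above can be illustrated with a minimal tabular Q-learning loop, the textbook form of reinforcement learning. This is a standalone sketch, not OpenAI Gym code: the toy environment (a short chain where reaching the right end earns +1 and each step costs a small penalty) and all constants are illustrative assumptions.

```python
import random

# Minimal tabular Q-learning: an agent on a 6-state chain earns +1 for
# reaching the rightmost state and -0.01 per step otherwise. The
# environment and hyperparameters are illustrative only.

N_STATES, ACTIONS = 6, (0, 1)             # action 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1         # step size, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: move along the chain, reward at the right end."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else -0.01), done

random.seed(0)
for _ in range(200):                      # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action.
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        # Q-learning update toward reward plus discounted best next value.
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# Greedy policy after training: move right from every non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)]
```

OpenAI Gym's contribution is the standardized environment interface that lets loops like this one be benchmarked against many tasks without rewriting the agent.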
Machine Learning is used everywhere nowadays, from Netflix's predictive analytics to self-driving cars; we use this advanced technology in our everyday lives without even realizing it. It has been growing in popularity over the last few years, and more and more people are getting interested in Machine Learning and would like to know more about it. If you're one of those people, whether you are familiar with it or not, then this article is for you. But before we get fully into Machine Learning, we'll first clarify the differences between Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), since people often conflate them and the distinction is a little ambiguous. Artificial Intelligence, the technology that is revolutionizing everything, is a branch of computer science dedicated to creating intelligent machines that mimic human behavior.
Being able to learn from mistakes is a powerful ability that humans (being mistake-prone) take advantage of all the time. Even if we screw up something we're trying to do, we probably got parts of it at least a little bit right, and we can build on those parts to do better next time.