If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Video game developers often turn to motion capture when they want realistic character animations. Mocap isn't very flexible, though, as it's hard to adapt a canned animation to different body shapes, unusual terrain or an interruption from another character. Researchers might have a better solution: teach the characters to fend for themselves. They've developed a deep learning engine (DeepMimic) that has characters learning to imitate reference mocap animations or even hand-animated keyframes, effectively training them to become virtual stunt actors. The AI promises realistic motion with the kind of flexibility that's difficult even with methods that blend scripted animations together.
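At the core of this imitation setup is a per-frame reward that is highest when the simulated character's pose matches the reference clip. As a rough, illustrative sketch (the full DeepMimic reward also weighs joint velocities, end-effector positions and the centre of mass; the function name and numbers here are invented):

```python
import math

def pose_imitation_reward(sim_joints, ref_joints, scale=2.0):
    """Reward in (0, 1]: 1.0 when the simulated pose exactly matches
    the reference mocap frame, decaying exponentially with squared error."""
    err = sum((s - r) ** 2 for s, r in zip(sim_joints, ref_joints))
    return math.exp(-scale * err)

# One reference frame from a clip vs. two candidate simulated poses
ref = [0.10, -0.25, 0.40]      # joint angles in radians (made up)
close = [0.12, -0.20, 0.38]    # small tracking error
far = [0.90, 0.50, -0.60]      # badly off the reference

print(round(pose_imitation_reward(close, ref), 3))  # near 1: good tracking
print(round(pose_imitation_reward(far, ref), 3))    # near 0: poor tracking
```

In the full method a reward like this drives a reinforcement-learning policy, so the character discovers physically valid actions that keep the reward high rather than replaying the clip verbatim — which is what gives it the flexibility canned animations lack.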
Artificial Intelligence (AI) and Machine Learning are deeply linked, and many consider them the shining stars of the coming century. The term Artificial Intelligence dates back to the 1950s, and it refers to man-made software or hardware designed to make intelligent choices. For a long time the field fell short of its potential, because coding every algorithm by hand quickly becomes exhausting; this is where Machine Learning (ML) comes in. ML is often a component of an AI system, allowing it to derive new algorithms from data and thus to learn. It is a ground-shaking shift: because ML can outperform hand-crafted approaches on many tasks, it is becoming a crucial part of various fields, such as research and online business. Which ML algorithms are the most efficient? Should they be supervised or not? All these questions are answered in the following article, with no prior knowledge required to understand it.
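To make the supervised-versus-unsupervised distinction concrete, here is a toy sketch (invented data, pure Python): a supervised learner is handed labels and fits one centroid per class, while an unsupervised learner (1-D k-means) must discover the groups on its own:

```python
import statistics

# Toy 1-D data: three small values and three large values (invented)
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
labels = ["low", "low", "low", "high", "high", "high"]

# Supervised: the labels are given, so learn one centroid per class
centroids = {
    lab: statistics.mean(x for x, l in zip(data, labels) if l == lab)
    for lab in set(labels)
}

def classify(x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

# Unsupervised: no labels; 1-D k-means must discover the two groups
c0, c1 = min(data), max(data)               # crude initial guesses
for _ in range(10):
    g0 = [x for x in data if abs(x - c0) <= abs(x - c1)]
    g1 = [x for x in data if abs(x - c0) > abs(x - c1)]
    c0, c1 = statistics.mean(g0), statistics.mean(g1)

print(classify(1.1))                         # labelled learning in action
print(sorted([round(c0, 2), round(c1, 2)]))  # centres found without labels
```

Both learners end up with essentially the same two group centres, but the supervised one needed someone to label every example first — the trade-off the question above is really about.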
Artificial Intelligence, Machine Learning and Deep Learning are all the rage in the press these days, and if you want to be a good Data Scientist you're going to need more than just a passing understanding of what they are and what you can do with them. There are loads of different methodologies, but I would always suggest Artificial Neural Networks (ANNs) as the first AI technique to learn - though I've always had a soft spot for ANNs, since I did my PhD on them. They've been around since the 1950s, and until recently were really only used as research tools in medicine and engineering. Google, Facebook and a few others, though, have realised that there are commercial uses for ANNs, and so everyone is interested in them again. When it comes to the algorithms used in AI, Machine Learning and Deep Learning, there are three types of learning process (also known as 'training'): supervised, unsupervised, and reinforcement learning.
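As a taste of the supervised flavour of training, here is about the smallest ANN one can write - a single sigmoid neuron learning the logical AND function by gradient descent (a toy sketch; the learning rate and epoch count are illustrative):

```python
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Supervised training data for logical AND: inputs -> target output
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# A single neuron: two weights and a bias, trained by gradient descent
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5

for _ in range(2000):
    for x, y in data:
        pred = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = pred - y                  # cross-entropy gradient signal
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

for x, y in data:
    pred = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    print(x, round(pred))               # rounded output matches the target
```

Real ANNs stack many such neurons in layers and learn far harder functions, but the training loop - predict, measure error, nudge the weights - is exactly this.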
The year is coming to an end. I did not write nearly as much as I had planned to. But I'm hoping to change that next year, with more tutorials around Reinforcement Learning, Evolution, and Bayesian Methods coming to WildML! And what better way to start than with a summary of all the amazing things that happened in 2017? Looking back through my Twitter history and the WildML newsletter, the following topics repeatedly came up.
This article reports on an extensive survey and analysis of research work related to machine learning as it applies to automated planning over the past 30 years. Major research contributions are broadly characterized first by learning method and then by descriptive subcategories. The survey results reveal learning techniques that have been extensively applied as well as a number that have received scant attention. We extend the survey analysis to suggest promising avenues for future research in learning, based on both previous experience and current needs in the planning community. Within the AI research community, machine learning is viewed as a potentially powerful means of endowing an agent with greater autonomy and flexibility, often compensating for the designer's incomplete knowledge of the world the agent will face while incurring low overhead in terms of human oversight and control.
In recent years, we have witnessed the success of autonomous agents applying machine-learning techniques across a wide range of applications. However, agents applying the same machine-learning techniques in online applications have not been so successful. Even agent-based hybrid recommender systems, which combine information-filtering techniques with collaborative-filtering techniques, have been applied with considerable success only to simple consumer goods such as movies, books, clothing, and food. Complex, adaptive autonomous agent systems that can handle complex goods such as real estate, vacation plans, insurance, mutual funds, and mortgages have yet to emerge. To a large extent, the reinforcement-learning methods developed to aid agents in learning have been deployed more successfully in offline applications.
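For context, the collaborative-filtering core of such recommenders can be sketched in a few lines: predict a user's rating of an unseen item as a similarity-weighted average of other users' ratings (invented toy data; real hybrid systems add information filtering over item content on top of this):

```python
import math

# Invented user -> {item: rating} data for a user-based collaborative filter
ratings = {
    "ann":  {"movie_a": 5, "movie_b": 4, "movie_c": 1},
    "bob":  {"movie_a": 4, "movie_b": 5, "movie_c": 1, "movie_d": 5},
    "carl": {"movie_a": 1, "movie_b": 1, "movie_c": 5, "movie_d": 1},
}

def cosine(u, v):
    """Cosine similarity over the items both users have rated."""
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    nu = math.sqrt(sum(u[i] ** 2 for i in shared))
    nv = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    pairs = [(cosine(ratings[user], ratings[o]), ratings[o][item])
             for o in ratings if o != user and item in ratings[o]]
    return sum(s * r for s, r in pairs) / sum(s for s, _ in pairs)

# ann has not rated movie_d; her taste resembles bob's, not carl's
print(round(predict("ann", "movie_d"), 2))  # pulled toward bob's rating of 5
```

This works well for movies and books precisely because preferences there are dense and comparable — one reason such techniques have struggled with one-off complex goods like mortgages, where past ratings barely exist.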
As evidenced by the articles in this special issue, transfer learning has come a long way in the past five or so years, partly because of DARPA's Transfer Learning program, which sponsored much of the work reported in this issue. There is a Transfer Learning Toolkit for Matlab available on the web. Transfer learning has developed techniques for classification, regression, and clustering (as summarized in Pan and Yang's 2009 survey) and for complex interactive tasks that are often best addressed by reinforcement-learning techniques. There is, however, a more practical and more feasible goal for transfer learning against which progress is being made: an engineering-oriented goal of artificial intelligence, enabled by transfer learning, is the ability to construct a large number of diverse applications not from scratch, but by taking advantage of knowledge already acquired and formally represented for other purposes.
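The "not from scratch" idea can be illustrated with a toy sketch of the simplest form of transfer - warm-starting a target model with weights learned on a related source task (invented 1-D tasks; real transfer-learning methods are far richer than weight reuse):

```python
import math

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train(data, w=0.0, b=0.0, lr=0.5, epochs=500):
    """1-D logistic regression by stochastic gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            err = sigmoid(w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def loss(data, w, b):
    """Mean cross-entropy of the model on a dataset."""
    eps = 1e-12
    total = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        total -= y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
    return total / len(data)

source = [(0, 0), (1, 0), (3, 1), (4, 1)]   # class boundary near x = 2
target = [(0, 0), (1, 0), (4, 1), (5, 1)]   # related task, boundary near 2.5

w_src, b_src = train(source)

# Transfer: initialise the target model with source weights, not zeros
warm = loss(target, w_src, b_src)
cold = loss(target, 0.0, 0.0)               # untrained model: ~0.693/sample
print(warm < cold)                          # True: a head start from transfer
w_t, b_t = train(target, w_src, b_src, epochs=50)   # brief fine-tuning
```

Because the two tasks share structure, the transferred model starts with far lower loss than a blank one and needs only brief fine-tuning — the same economy the engineering-oriented goal above is after, at application scale.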
A key to the success of multiagent systems (MAS) is efficient and effective multiagent learning (MAL). The agents in such systems may be computer programs, robots, or even humans. The past 25 years have seen great interest in, and tremendous progress on, the field of MAL. This article introduces and overviews the field by presenting its fundamentals, sketching its historical development, and describing some key algorithms for MAL. Moreover, the main challenges the field faces today are identified.
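One of the simplest MAL settings is two independent reinforcement learners repeatedly playing a coordination game; each updates its own action values as if the other agent were just part of the environment (a toy sketch; the learning and exploration rates are illustrative):

```python
import random

random.seed(1)

# Two independent Q-learners repeatedly play a coordination game:
# both receive reward 1 if they pick the same action, else 0.
ACTIONS = (0, 1)
alpha, eps = 0.1, 0.1          # learning rate, exploration rate

q_a = [0.0, 0.0]               # agent A's value estimate per action
q_b = [0.0, 0.0]               # agent B's value estimate per action

def choose(q):
    """Epsilon-greedy action selection (ties favour action 0)."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

for _ in range(5000):
    a, b = choose(q_a), choose(q_b)
    r = 1.0 if a == b else 0.0
    q_a[a] += alpha * (r - q_a[a])   # stateless Q-update for each agent
    q_b[b] += alpha * (r - q_b[b])

# Both agents come to prefer the same action: a learned convention
print(max(ACTIONS, key=lambda a: q_a[a]), max(ACTIONS, key=lambda a: q_b[a]))
```

Even this tiny example shows the field's central difficulty: each agent's environment contains the other learner, so neither faces a stationary problem — which is why MAL needs theory beyond single-agent reinforcement learning.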
One regularity that transfer can exploit is the existence of multiple domains that share the same underlying causal structure for actions. We describe an approach that exploits this shared causal structure to discover a hierarchical task structure in a source domain, which in turn speeds up learning of task-execution knowledge in a new target domain. These domains are complex, and good performance requires selecting long chains of actions to achieve the subgoals needed for ultimate success. Our approach is theoretically justified and compares favorably to manually designed task hierarchies in learning efficiency in the target domain. We demonstrate that causally motivated task hierarchies transfer more robustly than other kinds of detailed knowledge that depend on the idiosyncrasies of the source domain and are hence less transferable.