If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Given the well-known limitations of the Turing Test, there is a need for objective tests to both focus attention on, and measure progress towards, the goals of AI. In this paper we argue that machine performance on standardized tests should be a key component of any new measure of AI, because attaining a high level of performance requires solving significant AI problems involving language understanding and world modeling, which are critical skills for any machine that lays claim to intelligence. In addition, standardized tests have all the basic requirements of a practical test: they are accessible, easily comprehensible, clearly measurable, and offer a graduated progression from simple tasks to those requiring deep understanding of the world.
Thimm, Matthias (Universität Koblenz-Landau) | Villata, Serena (Laboratoire d'Informatique, Signaux et Systèmes de Sophia-Antipolis (I3S)) | Cerutti, Federico (Cardiff University) | Oren, Nir (University of Aberdeen) | Strass, Hannes (Leipzig University) | Vallati, Mauro (University of Huddersfield)
We review the First International Competition on Computational Models of Argumentation (ICCMA'15). The competition evaluated the performance of submitted solvers on four different computational tasks related to solving abstract argumentation frameworks. Each task evaluated solvers in ways that pushed the edge of existing performance by introducing new challenges. Despite being the first competition in this area, the high number of competitors that entered, and the differences in their results, suggest that the competition will help shape the landscape of ongoing developments in argumentation theory solvers.
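The central object in these competitions is Dung's abstract argumentation framework: a set of arguments together with an attack relation between them, and one of the semantics solvers are evaluated on is the grounded semantics. As an illustrative sketch (not actual competition code; the function name is hypothetical), the grounded extension can be computed as the least fixed point of the characteristic function, starting from the empty set and repeatedly collecting every argument all of whose attackers are themselves attacked:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks), where attacks is a set of
    (attacker, target) pairs, via least-fixed-point iteration."""
    # Index the attackers of each argument for quick lookup.
    attackers = {a: set() for a in arguments}
    for attacker, target in attacks:
        attackers[target].add(attacker)

    def defended(a, s):
        # a is defended by s if every attacker of a is attacked
        # by some member of s (vacuously true if a is unattacked).
        return all(any((d, b) in attacks for d in s)
                   for b in attackers[a])

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:          # fixed point reached
            return s
        s = nxt


# Usage: a attacks b, b attacks c; the grounded extension
# accepts the unattacked a and the reinstated c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(grounded_extension(args, atts))  # {'a', 'c'}
```

Because the characteristic function is monotone, this iteration is guaranteed to terminate at the least fixed point; competition-grade solvers use far more efficient encodings (for example, SAT-based ones), but the semantics computed is the same.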
This article presents techniques for recognizing students' activities in ELEs and visualizing these activities to students. It describes a new plan recognition algorithm that takes into account repetition and interleaving of activities. This algorithm was able to outperform state-of-the-art plan recognition algorithms when compared to a gold standard obtained from a domain expert. We also show that visualizing students' plans improves their performance on new problems when compared to an alternative visualization that consists of a step-by-step list of actions.
The article introduces the reader to a large interdisciplinary research project whose goal is to use AI to gain new insight into a complex artistic phenomenon. We study fundamental principles of expressive music performance by measuring performance aspects in large numbers of recordings by highly skilled musicians (concert pianists) and analyzing the data with state-of-the-art methods from areas such as machine learning, data mining, and data visualization. The article first introduces the general research questions that guide the project and then summarizes some of the most important results achieved to date, with an emphasis on the most recent and still rather speculative work. Our current results show that it is possible for machines to make novel and interesting discoveries even in a domain such as music and that even if we might never find the "Horowitz Factor," AI can give us completely new insights into complex artistic behavior.
Kernel methods, a new generation of learning algorithms, utilize techniques from optimization, statistics, and functional analysis to achieve maximal generality, flexibility, and performance. These algorithms differ from earlier techniques used in machine learning in many respects: for example, they are explicitly based on a theoretical model of learning rather than on loose analogies with natural learning systems or other heuristics. Although research in this area is ongoing, kernel methods are already considered the state of the art for several machine learning tasks. Their ease of use, theoretical appeal, and remarkable performance have made them the system of choice for many learning problems.
In this article, we first survey the three major types of computer music systems based on AI techniques: (1) compositional, (2) improvisational, and (3) performance systems. Capturing the interpretation knowledge that human performers apply is a difficult problem, and for this reason previous approaches, based on musical rules that try to make this knowledge explicit, had serious limitations. An alternative approach, much closer to the observation-imitation process observed in humans, is to directly use the interpretation knowledge implicit in examples extracted from recordings of human performers instead of trying to make such knowledge explicit. In the last part of the article, we report on a performance system, SAXEX, based on this alternative approach, that is capable of generating high-quality expressive solo performances of jazz ballads from examples of human performers within a case-based reasoning (CBR) system.
Machine learning, and more particularly learning with neural networks, frequently achieves remarkable performance when networks are trained to perform relatively complex AI tasks. The need for a fuller theoretical analysis and understanding of this performance has been a major research objective for the last decade. Neural Network Learning: Theoretical Foundations reports on important developments that have been made toward this goal within the computational learning theory framework.
Inherent batch-to-batch variability, aging, and contamination are major factors contributing to variability in oil-field cement-slurry performance. Such variability imposes a heavy burden on performance testing and is often a major factor in operational failure. Our approach involves predicting cement compositions, particle-size distributions, and thickening-time curves from the diffuse reflectance infrared Fourier transform spectrum of neat cement powders. Our research shows that many key cement properties are captured within the Fourier transform infrared spectra of cement powders and can be predicted from these spectra using suitable neural network techniques.
In its early stages, the field of AI had as its main goal the invention of computer programs having the general problem-solving abilities of humans. Along the way, a major shift of emphasis developed from general-purpose programs toward performance programs, ones whose competence was highly specialized and limited to particular areas of expertise. In this article, I claim that AI is now at the beginning of another transition, one that will reinvigorate efforts to build programs of general, humanlike competence. These programs will use specialized performance programs as tools, much like humans do.