If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Graphical models offer techniques for capturing the structure of many problems in real-world domains and provide means for representation, interpretation, and inference. The modeling framework provides tools for discovering rules for solving problems by exploring structural relationships. We present the Structural Affinity method, which uses graphical models for first learning and subsequently recognizing the pattern for solving problems on the Raven's Progressive Matrices Test of general human intelligence. Recently there has been considerable work on computational models addressing the Raven's test using various representations, ranging from fractals to symbolic structures. In contrast, our method uses Markov Random Fields parameterized by affinity factors to discover the structure in the geometric analogy problems and induce the rules of Carpenter et al.'s cognitive model of problem solving on the Raven's Progressive Matrices Test. We provide a computational account that first learns the structure of a Raven's problem and then predicts the solution by computing the probability of the correct answer through recognizing patterns corresponding to Carpenter et al.'s rules. We demonstrate that the performance of our model on the Standard Raven Progressive Matrices is comparable with that of existing state-of-the-art models.
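As an illustration of the kind of scoring described above, here is a minimal sketch in which candidate answers to a 3×3 matrix problem are ranked by pairwise affinity factors along the incomplete row, with scores normalized into probabilities. The feature vectors, the `affinity` function, and the toy problem are hypothetical simplifications, not the authors' implementation.

```python
# Illustrative sketch: scoring candidate answers for a 3x3 matrix problem
# with pairwise "affinity" factors, in the spirit of a Markov Random Field.
# Cells are toy feature vectors; affinity() and the problem are hypothetical.

import math

def affinity(a, b):
    """Pairwise factor: higher when two cells are more similar."""
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return math.exp(-dist)

def row_score(row):
    """Score a row as the product of adjacent pairwise affinities."""
    return affinity(row[0], row[1]) * affinity(row[1], row[2])

def best_answer(matrix, candidates):
    """matrix holds two complete rows plus the incomplete third row
    (two cells); pick the candidate that maximizes the third row's score,
    normalized over all candidates to give a probability."""
    scores = [row_score(matrix[2] + [c]) for c in candidates]
    total = sum(scores)
    probs = [s / total for s in scores]
    return max(range(len(candidates)), key=lambda i: probs[i]), probs

# Toy problem: each cell is a (size, shade) feature vector; the hidden
# rule is that cells are constant along each row.
matrix = [
    [[1, 0], [1, 0], [1, 0]],
    [[2, 1], [2, 1], [2, 1]],
    [[3, 2], [3, 2]],          # third cell missing
]
candidates = [[3, 2], [1, 0], [5, 5]]
idx, probs = best_answer(matrix, candidates)
print(idx)  # -> 0: the candidate continuing the constant-row pattern
```

Richer rule sets (e.g. increment or distribution-of-three patterns) would add further factors over rows and columns, but the normalize-and-pick-the-maximum step stays the same.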
Goel, Ashok (Georgia Institute of Technology)
Case-based reasoning addresses new problems by remembering and adapting solutions previously used to solve similar problems. Pulled by an increasing number of applications and pushed by growing interest in memory-intensive techniques, research on case-based reasoning appears to be gaining momentum. In this article, we briefly summarize recent developments in research on case-based reasoning, based partly on the recent Twenty-Fourth International Conference on Case-Based Reasoning.
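The remember-and-adapt cycle summarized above can be sketched minimally as follows; the case base, the feature-overlap similarity measure, and the substitution-based adaptation rule are all invented for illustration.

```python
# Minimal sketch of the case-based reasoning cycle: retrieve the most
# similar past case, then adapt its solution to the new problem.
# The case base, features, and adaptation rule are hypothetical.

def similarity(a, b):
    """Fraction of features the two problem descriptions share."""
    shared = sum(1 for k in a if k in b and a[k] == b[k])
    return shared / max(len(a), len(b))

def retrieve(case_base, problem):
    """Retrieve: return the stored case most similar to the new problem."""
    return max(case_base, key=lambda case: similarity(case["problem"], problem))

def adapt(case, problem):
    """Adapt: reuse the retrieved solution, substituting any feature
    values that differ in the new problem."""
    solution = dict(case["solution"])
    for k, v in problem.items():
        if k in solution:
            solution[k] = v
    return solution

case_base = [
    {"problem": {"cuisine": "italian", "guests": 4},
     "solution": {"dish": "lasagna", "guests": 4}},
    {"problem": {"cuisine": "japanese", "guests": 2},
     "solution": {"dish": "sushi", "guests": 2}},
]
new_problem = {"cuisine": "italian", "guests": 6}
best = retrieve(case_base, new_problem)
print(adapt(best, new_problem))  # -> {'dish': 'lasagna', 'guests': 6}
```

A full CBR system would add the remaining steps of the classic cycle (revise the adapted solution, retain it as a new case), which this sketch omits.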
Wollowski, Michael (Rose-Hulman Institute of Technology) | Selkowitz, Robert (Canisius College) | Brown, Laura E. (Michigan Technological University) | Goel, Ashok (Georgia Institute of Technology) | Luger, George (University of New Mexico) | Marshall, Jim (Sarah Lawrence College) | Neel, Andrew (Discover Cards) | Neller, Todd (Gettysburg College) | Norvig, Peter (Google)
The field of AI has changed significantly in the past couple of years and will likely continue to do so. Driven by a desire to expose our students to relevant and modern materials, we conducted two surveys, one of AI instructors and one of AI practitioners. The surveys were aimed at gathering information about the current state of the art of teaching introductory AI, as well as gathering input from practitioners in the field on techniques used in practice. In this paper, we present and briefly discuss the responses to those two surveys.
Goel, Ashok (Georgia Institute of Technology) | Creeden, Brian (Georgia Institute of Technology) | Kumble, Mithun (Georgia Institute of Technology) | Salunke, Shanu (Georgia Institute of Technology) | Shetty, Abhinaya (Georgia Institute of Technology) | Wiltgen, Bryan (Georgia Institute of Technology)
We describe an experiment in using IBM’s Watson cognitive system to teach about human-computer co-creativity in a Georgia Tech Spring 2015 class on computational creativity. The project-based class used Watson to support biologically inspired design, a design paradigm that uses biological systems as analogues for inventing technological systems. The twenty-four students in the class self-organized into six teams of four students each, and developed semester-long projects that built on Watson to support biologically inspired design. In this paper, we describe this experiment in using Watson to teach about human-computer co-creativity, present one project in detail, and summarize the remaining five projects. We also draw lessons on building on Watson for (i) supporting biologically inspired design, and (ii) enhancing human-computer co-creativity.
We present several epistemic views of ideation in scientific discovery that we have investigated: conceptual classification, abductive explanation, conceptual modeling, analogical reasoning, and visual reasoning. We then describe an experiment in computational ideation through model construction, evaluation and revision. We describe an interactive tool called MILA–S that enables construction of conceptual models of ecological phenomena, agent-based simulations of the conceptual model, and revision of the conceptual model based on the results of the simulation. The key feature of MILA–S is that it automatically generates the simulations from the conceptual model. We report on a pilot study with 50 middle school science students who used MILA–S to discover causal explanations for an ecological phenomenon. Initial results from the study indicate that use of MILA–S had a significant impact both on the process of model construction and the nature of the constructed models. We posit that MILA–S may enable scientists to construct, evaluate and revise conceptual models of ecological phenomena.
We report a novel approach to addressing the Raven's Progressive Matrices (RPM) tests, one based on purely visual representations. Our technique introduces the calculation of confidence in an answer and the automatic adjustment of the level of resolution when that confidence is insufficient. We first describe the nature of the visual analogies found on the RPM. We then present our algorithm and work through a detailed example. Finally, we report the performance of our algorithm on the four major variants of the RPM tests, illustrating the impact of confidence. This is the first account of any computational model tested against the entirety of the Raven's tests.
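The confidence-driven adjustment of resolution described above can be sketched as a coarse-to-fine loop: compare candidates at a coarse resolution first, and re-compare at a finer one only when the margin between the two best scores (the "confidence") is too small. The block-averaging downsample, the pixel-distance score, and the threshold are hypothetical stand-ins, not the authors' algorithm.

```python
# Sketch of confidence-driven refinement over image resolution.
# Images are square lists of pixel values; downsample(), score(), and
# the threshold are illustrative, not the paper's actual method.

def downsample(img, factor):
    """Average non-overlapping factor x factor blocks of a square image."""
    n = len(img)
    return [[sum(img[i + di][j + dj] for di in range(factor) for dj in range(factor))
             / factor ** 2
             for j in range(0, n, factor)]
            for i in range(0, n, factor)]

def score(a, b):
    """Negative pixel-wise distance: higher means more similar."""
    return -sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def answer_with_confidence(target, candidates, factors=(4, 2, 1), threshold=1.0):
    """Pick the candidate most similar to target, refining resolution
    (coarse -> fine) until the winning margin exceeds the threshold."""
    for factor in factors:
        t = downsample(target, factor)
        scores = sorted(((score(downsample(c, factor), t), i)
                         for i, c in enumerate(candidates)), reverse=True)
        confidence = scores[0][0] - scores[1][0]
        if confidence >= threshold:          # margin large enough: stop early
            return scores[0][1], factor
    return scores[0][1], factors[-1]         # finest level reached

# At factor 4 both candidates average to the same single pixel (a tie),
# forcing a refinement to factor 2, where the match is unambiguous.
target = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
candidates = [target,
              [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]]
print(answer_with_confidence(target, candidates))  # -> (0, 2)
```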
The Odd One Out test of intelligence consists of 3×3 matrix reasoning problems organized in 20 levels of difficulty. Addressing problems on this test appears to require the integration of multiple cognitive abilities usually associated with creativity, including visual encoding, similarity assessment, pattern detection, and analogical transfer. We describe a novel fractal strategy for addressing visual analogy problems on the Odd One Out test. In our strategy, the relationship between images is encoded fractally, capturing important aspects of similarity as well as inherent self-similarity. The strategy starts with fractal representations encoded at a high level of resolution but, if that is not sufficient to resolve ambiguity, automatically adjusts itself to the right level of resolution for the given problem. Similarly, the strategy starts by searching for fractally derived similarity between simpler relationships but, if that is not sufficient to resolve ambiguity, automatically shifts to searching for such similarity between higher-order relationships. We present preliminary results and an initial analysis from applying the fractal technique to nearly 3,000 problems from the Odd One Out test.
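A much-simplified sketch of the selection rule underlying an odd-one-out problem: encode each image as features, compute pairwise similarities, and pick the item least similar to the rest. The set-based encoding and Jaccard similarity here are hypothetical stand-ins for the paper's fractal representations.

```python
# Sketch of an odd-one-out selection rule over pairwise similarities.
# The feature encoding and similarity measure are illustrative only.

def similarity(a, b):
    """Overlap between two feature sets (Jaccard index)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def odd_one_out(items):
    """Return the index whose total similarity to the others is lowest."""
    totals = [sum(similarity(items[i], items[j])
                  for j in range(len(items)) if j != i)
              for i in range(len(items))]
    return min(range(len(items)), key=lambda i: totals[i])

# Toy encoding: each image is a set of shape/texture features.
items = [
    {"circle", "striped"},
    {"circle", "striped"},
    {"circle", "striped"},
    {"square", "solid"},    # the odd one
]
print(odd_one_out(items))  # -> 3
```

The resolution- and abstraction-shifting described in the abstract would correspond to re-encoding the items more finely, or comparing relationships between items rather than the items themselves, whenever the totals are too close to call.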
Past research has shown that when tree-structured background knowledge is available, it can be exploited to increase the efficiency of classification learning. When this kind of background knowledge is available, the problem becomes one of compositional classification. Of course, if the background knowledge contains errors, the quality of the learned hypothesis will suffer. In this paper we study the effect of faulty knowledge engineering on compositional classification learning. We present and analyze empirical results that show the impact on the quality of compositional classification learning as the quality of knowledge engineering is degraded.
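The idea of compositional classification over tree-structured background knowledge can be sketched as a recursive classifier: each internal node's label is computed from its children's labels, so the root classification decomposes along the knowledge tree. The tree, attribute names, and combination rules below are hypothetical.

```python
# Sketch of compositional classification: a tree of sub-classifiers where
# each internal node combines its children's labels, and leaves read raw
# observed attributes. The domain and rules are invented for illustration.

def classify(node, observations):
    """Recursively classify a node from its children (or observations)."""
    if "children" not in node:                 # leaf: read the raw attribute
        return observations[node["name"]]
    child_labels = tuple(classify(c, observations) for c in node["children"])
    return node["combine"](child_labels)

# Toy background knowledge: "vehicle_ok" decomposes into engine and brakes.
tree = {
    "name": "vehicle_ok",
    "combine": lambda labels: all(labels),
    "children": [
        {"name": "engine_ok"},
        {"name": "brakes_ok"},
    ],
}
print(classify(tree, {"engine_ok": True, "brakes_ok": False}))  # -> False
```

An error in the knowledge engineering, in this framing, is a wrong tree shape or a wrong combination rule at some node, which is what lets one degrade the tree deliberately and measure the effect on learning, as the abstract describes.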