New Polynomial Classes for Logic-Based Abduction

AAAI Conferences

We address the problem of propositional logic-based abduction, i.e., the problem of searching for a best explanation for a given propositional observation according to a given propositional knowledge base. We give a general algorithm based on the notion of projection; we then study restrictions on the representations of the knowledge base and of the query, and identify new polynomial classes of abduction problems.
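The problem statement can be made concrete with a small sketch: given a propositional knowledge base, a set of hypothesis literals, and an observation, an explanation is a set of hypotheses that is consistent with the knowledge base and entails the observation, and a "best" explanation is typically a minimal one. The Python sketch below is only an illustration of this definition by brute-force enumeration over a made-up rain/sprinkler example; it is not the projection-based algorithm the paper describes.

```python
# Brute-force sketch of propositional abduction (illustrative only; the
# knowledge base, hypotheses, and observation below are made up).
from itertools import combinations, product

def satisfies(assignment, clauses):
    """True if the assignment (dict var -> bool) satisfies every clause.
    A clause is a list of literals; a literal is a (variable, polarity) pair."""
    return all(any(assignment[v] == pol for v, pol in clause) for clause in clauses)

def consistent(clauses, variables):
    """A theory is consistent iff it has at least one model."""
    return any(satisfies(dict(zip(variables, values)), clauses)
               for values in product([True, False], repeat=len(variables)))

def entails(clauses, query_clauses, variables):
    """T |= q iff every model of T is also a model of q (checked by enumeration)."""
    for values in product([True, False], repeat=len(variables)):
        a = dict(zip(variables, values))
        if satisfies(a, clauses) and not satisfies(a, query_clauses):
            return False
    return True

def explanations(kb, hypotheses, query, variables):
    """Return the cardinality-minimal sets E of hypotheses such that
    KB + E is consistent and KB + E entails the query."""
    for size in range(len(hypotheses) + 1):
        found = []
        for combo in combinations(hypotheses, size):
            theory = kb + [[lit] for lit in combo]   # add hypotheses as unit clauses
            if consistent(theory, variables) and entails(theory, query, variables):
                found.append(combo)
        if found:
            return found                             # smallest explanations only
    return []

# Hypothetical example: rain -> wet, sprinkler -> wet; observation: wet.
variables = ["rain", "sprinkler", "wet"]
kb = [[("rain", False), ("wet", True)],              # rain -> wet
      [("sprinkler", False), ("wet", True)]]         # sprinkler -> wet
hypotheses = [("rain", True), ("sprinkler", True)]
query = [[("wet", True)]]
print(explanations(kb, hypotheses, query, variables))
# [(('rain', True),), (('sprinkler', True),)]
```

Enumeration like this is exponential in the number of variables and hypotheses, which is exactly why the paper's identification of restricted, polynomial-time classes matters.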



Computers and Thought

Classics

E.A. Feigenbaum and J. Feldman (Eds.). Computers and Thought. McGraw-Hill, 1963. This collection includes twenty classic papers by such pioneers as A. M. Turing and Marvin Minsky, who were behind the pivotal advances in artificially simulating human thought processes with computers. All Parts are available as downloadable pdf files; most individual chapters are also available separately.

Contents:

COMPUTING MACHINERY AND INTELLIGENCE. A. M. Turing.
CHESS-PLAYING PROGRAMS AND THE PROBLEM OF COMPLEXITY. Allen Newell, J.C. Shaw and H.A. Simon.
SOME STUDIES IN MACHINE LEARNING USING THE GAME OF CHECKERS. A. L. Samuel.
EMPIRICAL EXPLORATIONS WITH THE LOGIC THEORY MACHINE: A CASE STUDY IN HEURISTICS. Allen Newell, J.C. Shaw and H.A. Simon.
REALIZATION OF A GEOMETRY-THEOREM PROVING MACHINE. H. Gelernter.
EMPIRICAL EXPLORATIONS OF THE GEOMETRY-THEOREM PROVING MACHINE. H. Gelernter, J.R. Hansen, and D. W. Loveland.
SUMMARY OF A HEURISTIC LINE BALANCING PROCEDURE. Fred M. Tonge.
A HEURISTIC PROGRAM THAT SOLVES SYMBOLIC INTEGRATION PROBLEMS IN FRESHMAN CALCULUS. James R. Slagle.
BASEBALL: AN AUTOMATIC QUESTION ANSWERER. Bert F. Green Jr., Alice K. Wolf, Carol Chomsky, and Kenneth Laughery.
INFERENTIAL MEMORY AS THE BASIS OF MACHINES WHICH UNDERSTAND NATURAL LANGUAGE. Robert K. Lindsay.
PATTERN RECOGNITION BY MACHINE. Oliver G. Selfridge and Ulric Neisser.
A PATTERN-RECOGNITION PROGRAM THAT GENERATES, EVALUATES, AND ADJUSTS ITS OWN OPERATORS. Leonard Uhr and Charles Vossler.
GPS, A PROGRAM THAT SIMULATES HUMAN THOUGHT. Allen Newell and H.A. Simon.
THE SIMULATION OF VERBAL LEARNING BEHAVIOR. Edward A. Feigenbaum.
PROGRAMMING A MODEL OF HUMAN CONCEPT FORMULATION. Earl B. Hunt and Carl I. Hovland.
SIMULATION OF BEHAVIOR IN THE BINARY CHOICE EXPERIMENT. Julian Feldman.
A MODEL OF THE TRUST INVESTMENT PROCESS. Geoffrey P. E. Clarkson.
A COMPUTER MODEL OF ELEMENTARY SOCIAL BEHAVIOR. John T. Gullahorn and Jeanne E. Gullahorn.
TOWARD INTELLIGENT MACHINES. Paul Armer.
STEPS TOWARD ARTIFICIAL INTELLIGENCE. Marvin Minsky.
A SELECTED DESCRIPTOR-INDEXED BIBLIOGRAPHY TO THE LITERATURE ON ARTIFICIAL INTELLIGENCE. Marvin Minsky.


A 20-Year Community Roadmap for Artificial Intelligence Research in the US

arXiv.org Artificial Intelligence

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.


Notes on a New Philosophy of Empirical Science

arXiv.org Machine Learning

This book presents a methodology and philosophy of empirical science based on large scale lossless data compression. In this view a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation. The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 GB, it is easy to justify a 10 MB model). The complex models and abstractions learned on the basis of the raw data (images, language, etc.) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
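The scoring rule described here, compressed size plus the length of the compressor itself, amounts to a two-part (MDL-style) description length. The Python sketch below only illustrates that bookkeeping on a made-up benchmark using off-the-shelf compressors; the program sizes are invented placeholders, since the book's methodology concerns purpose-built compressors for specific scientific domains.

```python
# Toy illustration of a two-part compression score: total description length
# equals the compressed data plus the size of the compression program itself.
# The benchmark text and the compressor sizes below are invented for the example.
import bz2
import zlib

def two_part_score(compressed: bytes, compressor_size: int) -> int:
    """Total description length: compressed data plus the compressor's own length."""
    return len(compressed) + compressor_size

# Stand-in benchmark: highly regular text that a good "theory" should exploit.
benchmark = b"the cat sat on the mat. " * 2000

candidates = {
    # name: (compressed bytes, assumed size in bytes of the compression program)
    "zlib": (zlib.compress(benchmark, 9), 80_000),
    "bz2":  (bz2.compress(benchmark, 9), 120_000),
    "none": (benchmark, 0),   # the "null theory": store the data verbatim
}

for name, (blob, prog_size) in candidates.items():
    score = two_part_score(blob, prog_size)
    print(f"{name:5s} compressed={len(blob):8d} program={prog_size:7d} total={score:8d}")

# The lowest total wins: a theory is only credited for regularities it actually
# captures, net of its own complexity, which is the Occam penalty the book invokes.
```

This also makes the book's point about model size concrete: a large fixed program cost is easily amortized when the benchmark is orders of magnitude larger than the model, but it dominates the score when the data are small.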