In 1950, Claude Shannon published his seminal work on how to program a computer to play chess. Since then, developing game-playing programs that can compete with (and even exceed) the abilities of the human world champions has been a long-sought-after goal of the AI research community. In Shannon's time, it would have seemed unlikely that only a scant 50 years would be needed to develop programs that play world-class backgammon, checkers, chess, Othello, and Scrabble. These remarkable achievements are the result of a better understanding of the problems being solved, major algorithmic insights, and tremendous advances in hardware technology. Computer games research is one of the important success stories of AI. This article reviews the past successes, current projects, and future research directions for AI using computer games as a research test bed.
The board game Go is older and more complex than chess. While it's been 20 years since IBM's Deep Blue beat world chess champion Garry Kasparov, computers only started beating Go experts a few years ago. An Oct. 18 report in the science journal Nature tells us that this particular man/machine contest is done. A system built by the DeepMind unit of Alphabet (ticker: GOOGL) beat Go's reigning world champ 100 games to none. The deposed champ, you should know, is a prior version of the same artificial intelligence system, which beat one of humankind's international champions in 2016.
This book presents a methodology and philosophy of empirical science based on large-scale lossless data compression. In this view, a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation. The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 GB, it is easy to justify a 10 MB model). The complex models and abstractions learned on the basis of the raw data (images, language, etc.) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
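The scoring rule described above can be sketched in a few lines: a candidate "theory" is judged by the size of the benchmark data after compression plus the size of the compressor itself. The sketch below is illustrative only, assuming off-the-shelf compressors stand in for theories; the corpus and the compressor sizes are hypothetical placeholders, not figures from the book.

```python
import bz2
import zlib

def description_length(compress_fn, compressor_size_bytes, data):
    """Total description length of `data` under a candidate theory:
    compressed size plus the length of the compressor itself."""
    return len(compress_fn(data)) + compressor_size_bytes

# Hypothetical stand-in for the "standard benchmark database":
# a small, highly regular text corpus.
corpus = b"the cat sat on the mat. " * 400

# Two candidate "theories", modeled here by stock compressors.
# The on-disk sizes are illustrative placeholders.
candidates = {
    "zlib": (lambda d: zlib.compress(d, 9), 50_000),
    "bz2": (lambda d: bz2.compress(d, 9), 80_000),
}

for name, (fn, size) in candidates.items():
    print(name, description_length(fn, size, corpus))
```

Because the compressor's own length is part of the score, a larger model is only justified when the data it explains is much larger still, which is the Occam principle the book builds on.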
Albrecht, Stefano V. (University of Edinburgh) | Beck, J. Christopher (University of Toronto) | Buckeridge, David L. (McGill University) | Botea, Adi (IBM Research, Dublin) | Caragea, Cornelia (University of North Texas) | Chi, Chi-hung (Commonwealth Scientific and Industrial Research Organisation) | Damoulas, Theodoros (New York University) | Dilkina, Bistra (Georgia Institute of Technology) | Eaton, Eric (University of Pennsylvania) | Fazli, Pooyan (Carnegie Mellon University) | Ganzfried, Sam (Carnegie Mellon University) | Giles, C. Lee (Pennsylvania State University) | Guillet, Sébastian (Université du Québec) | Holte, Robert (University of Alberta) | Hutter, Frank (University of Freiburg) | Koch, Thorsten (TU Berlin) | Leonetti, Matteo (University of Texas at Austin) | Lindauer, Marius (University of Freiburg) | Machado, Marlos C. (University of Alberta) | Malitsky, Yui (IBM Research) | Marcus, Gary (New York University) | Meijer, Sebastiaan (KTH Royal Institute of Technology) | Rossi, Francesca (University of Padova, Italy) | Shaban-Nejad, Arash (University of California, Berkeley) | Thiebaux, Sylvie (Australian National University) | Veloso, Manuela (Carnegie Mellon University) | Walsh, Toby (NICTA) | Wang, Can (Commonwealth Scientific and Industrial Research Organisation) | Zhang, Jie (Nanyang Technological University) | Zheng, Yu (Microsoft Research)
AAAI's 2015 Workshop Program was held Sunday and Monday, January 25–26, 2015, at the Hyatt Regency Austin Hotel in Austin, Texas, USA. The AAAI-15 workshop program included 15 workshops covering a wide range of topics in artificial intelligence. Most workshops were held on a single day. The titles of the workshops included AI and Ethics; AI for Cities; AI for Transportation: Advice, Interactivity and Actor Modeling; Algorithm Configuration; Artificial Intelligence Applied to Assistive Technologies and Smart Environments; Beyond the Turing Test; Computational Sustainability; Computer Poker and Imperfect Information; Incentive and Trust in E-Communities; Multiagent Interaction without Prior Coordination; Planning, Search, and Optimization; Scholarly Big Data: AI Perspectives, Challenges, and Ideas; Trajectory-Based Behaviour Analytics; World Wide Web and Public Health Intelligence; Knowledge, Skill, and Behavior Transfer in Autonomous Robots; and Learning for General Competency in Video Games.