If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Anderson, Monica (University of Alabama) | Barták, Roman (Charles University) | Brownstein, John S. (Boston Children's Hospital, Harvard University) | Buckeridge, David L. (McGill University) | Eldardiry, Hoda (Palo Alto Research Center) | Geib, Christopher (Drexel University) | Gini, Maria (University of Minnesota) | Isaksen, Aaron (New York University) | Keren, Sarah (Technion University) | Laddaga, Robert (Vanderbilt University) | Lisy, Viliam (Czech Technical University) | Martin, Rodney (NASA Ames Research Center) | Martinez, David R. (MIT Lincoln Laboratory) | Michalowski, Martin (University of Ottawa) | Michael, Loizos (Open University of Cyprus) | Mirsky, Reuth (Ben-Gurion University) | Nguyen, Thanh (University of Michigan) | Paul, Michael J. (University of Colorado Boulder) | Pontelli, Enrico (New Mexico State University) | Sanner, Scott (University of Toronto) | Shaban-Nejad, Arash (University of Tennessee) | Sinha, Arunesh (University of Michigan) | Sohrabi, Shirin (IBM T. J. Watson Research Center) | Sricharan, Kumar (Palo Alto Research Center) | Srivastava, Biplav (IBM T. J. Watson Research Center) | Stefik, Mark (Palo Alto Research Center) | Streilein, William W. (MIT Lincoln Laboratory) | Sturtevant, Nathan (University of Denver) | Talamadupula, Kartik (IBM T. J. Watson Research Center) | Thielscher, Michael (University of New South Wales) | Togelius, Julian (New York University) | Tran, So Cao (New Mexico State University) | Tran-Thanh, Long (University of Southampton) | Wagner, Neal (MIT Lincoln Laboratory) | Wallace, Byron C. (Northeastern University) | Wilk, Szymon (Poznan University of Technology) | Zhu, Jichen (Drexel University)
This survey explores Procedural Content Generation via Machine Learning (PCGML), defined as the generation of game content using machine learning models trained on existing content. As the importance of PCG for game development increases, researchers explore new avenues for generating high-quality content with or without human involvement; this paper addresses the relatively new paradigm of using machine learning (in contrast with search-based, solver-based, and constructive methods). We focus on what is most often considered functional game content such as platformer levels, game maps, interactive fiction stories, and cards in collectible card games, as opposed to cosmetic content such as sprites and sound effects. In addition to using PCG for autonomous generation, co-creativity, mixed-initiative design, and compression, PCGML is suited for repair, critique, and content analysis because of its focus on modeling existing content. We discuss various data sources and representations that affect the resulting generated content. Multiple PCGML methods are covered, including neural networks, long short-term memory (LSTM) networks, autoencoders, and deep convolutional networks; Markov models, n-grams, and multi-dimensional Markov chains; clustering; and matrix factorization. Finally, we discuss open problems in the application of PCGML, including learning from small datasets, lack of training data, multi-layered learning, style transfer, parameter tuning, and PCG as a game mechanic.
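To make one of the surveyed methods concrete, here is a minimal sketch of n-gram (Markov chain) content generation: transitions between tiles are counted from training levels, then new tile sequences are sampled from the learned transitions. The tile alphabet, training level, and parameters below are hypothetical illustrations, not drawn from any real game corpus.

```python
import random

def train_ngram(levels, n=2):
    """Count order-(n-1) tile-sequence transitions over training levels."""
    model = {}
    for level in levels:
        for i in range(len(level) - n + 1):
            context = tuple(level[i:i + n - 1])
            model.setdefault(context, []).append(level[i + n - 1])
    return model

def generate(model, seed, length, rng=None):
    """Sample a new tile sequence by repeatedly extending the context."""
    rng = rng or random.Random(0)
    out = list(seed)
    while len(out) < length:
        context = tuple(out[-len(seed):])
        choices = model.get(context)
        if not choices:  # unseen context: stop rather than guess
            break
        out.append(rng.choice(choices))
    return out

# Hypothetical training level: '-' is empty ground, 'X' is a block.
model = train_ngram([list("--X--X--X--")], n=2)
level = generate(model, ["-"], length=12)
```

Real PCGML systems typically work over two-dimensional tile grids (hence the multi-dimensional Markov chains mentioned above); this one-dimensional bigram version only shows the learn-then-sample loop.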
This paper explores the question of whether it's possible to discover a well-defined property of game systems that corresponds to what game designers and players mean by the term "depth." We propose a measurable property of a game's formal system, which we call "d," that corresponds to the capacity of a game to absorb dedicated problem-solving attention and allow for sustained, long-term learning. To define this property we develop a formal model that measures how susceptible a game is to partial solutions under conditions of steadily increasing computational resources. We then sketch out several directions for using the model to investigate questions about the structural properties of games that produce these effects.
We describe an application of neural networks to predict the placements of resources in StarCraft II maps. Networks are trained on existing maps taken from databases of maps actively used in online competitions and tested on unseen maps with resources (minerals and vespene gas) removed. This method is potentially useful for AI-assisted game design tools, allowing the suggestion of resource and base placements consonant with implicit StarCraft II design principles for fully or partially sketched heightmaps. By varying the thresholds for the placement of resources, more or fewer resources can be placed consistent with the pattern of a single map. We further propose that these networks can be used to help understand the design principles of StarCraft II maps, and by extension other, similar types of game content.
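The thresholding step described above can be sketched as follows: given a per-cell probability map (standing in for the output of a trained network, which is assumed here rather than implemented), a lower threshold places more resources and a higher threshold fewer, on the same map. The probability values are hypothetical.

```python
def place_resources(prob_map, threshold):
    """Return (row, col) cells whose predicted probability clears the threshold."""
    return [(r, c)
            for r, row in enumerate(prob_map)
            for c, p in enumerate(row)
            if p >= threshold]

# Hypothetical network output for a 3x4 heightmap: each entry is the
# predicted probability that the cell should hold a resource.
prob_map = [
    [0.1, 0.8, 0.2, 0.05],
    [0.6, 0.9, 0.3, 0.10],
    [0.2, 0.4, 0.7, 0.15],
]

sparse = place_resources(prob_map, 0.7)  # fewer, high-confidence placements
dense = place_resources(prob_map, 0.3)   # more placements from the same map
```

Because both placements come from one probability map, every high-threshold placement is also a low-threshold placement, which is what lets a designer dial resource density up or down without changing the underlying pattern.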
Humans may one day create superintelligence, artificially intelligent machines that surpass mankind's intellect. Would these artificial intelligences choose to play games with us, and if so, which games? We believe this question is relevant for the ethics of general AI, the current widespread integration of AI systems into daily life, and for game AI research. We present a catalog of scenarios, some good for humanity and some bad, in which various kinds of play might take place between humans and intelligent machines. We assume a superintelligence, because of its greater cognitive ability, would stand in a relation to us similar to that between an adult and a child, an expert and a novice, or a human and an animal. We define friendly games, learning games, observational games, and domination games, and proceed to consider games adults play with children, experts play with novices, and humans play with animals. Reasoning by analogy, we imagine corresponding games that superintelligences might choose to play with us, finding that domination games would pose a significant risk to humanity.