Renz, Jochen
The Difficulty of Novelty Detection in Open-World Physical Domains: An Application to Angry Birds
Pinto, Vimukthini, Xue, Cheng, Gamage, Chathura Nagoda, Renz, Jochen
Detecting and responding to novel situations in open-world environments is a key capability of human cognition. Current artificial intelligence (AI) researchers strive to develop systems that can perform in open-world environments. Novelty detection is an important ability of such AI systems. In an open world, novelties appear in various forms and the difficulty of detecting them varies. Therefore, to accurately evaluate the detection capability of AI systems, it is necessary to investigate how difficult different novelties are to detect. In this paper, we propose a qualitative-physics-based method to quantify the difficulty of novelty detection, focusing on open-world physical domains. We apply our method in a popular physics simulation game, Angry Birds. To validate our method, we conduct an experiment with human players on different novelties in Angry Birds. Results indicate that the calculated difficulty values are in line with the detection difficulty experienced by the human players.
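The abstract does not spell out how detection difficulty is computed, so the following is only a hedged toy sketch, not the paper's qualitative-physics measure: it scores a novelty as harder to detect the smaller the fraction of sampled interactions whose qualitative outcome it changes. The function names, outcome oracles, and example shots are all hypothetical.

```python
# Illustrative sketch only: not the paper's qualitative-physics measure.
# Difficulty is taken as the fraction of sampled interactions whose
# qualitative outcome is unchanged by the novelty, so novelties that rarely
# cause visible deviations score as harder to detect.
from typing import Callable, Iterable

def detection_difficulty(
    interactions: Iterable[object],
    baseline_outcome: Callable[[object], str],   # hypothetical oracle: outcome without novelty
    novel_outcome: Callable[[object], str],      # hypothetical oracle: outcome with novelty
) -> float:
    interactions = list(interactions)
    unchanged = sum(1 for i in interactions
                    if baseline_outcome(i) == novel_outcome(i))
    return unchanged / len(interactions) if interactions else 1.0

if __name__ == "__main__":
    # A novelty (e.g. a heavier bird) that changes the outcome of only 1 of 4
    # sampled shots would get difficulty 0.75 under this toy measure.
    shots = ["low_arc", "high_arc", "direct", "tap_early"]
    base = {"low_arc": "pig_destroyed", "high_arc": "miss",
            "direct": "structure_falls", "tap_early": "miss"}.get
    novel = {"low_arc": "pig_destroyed", "high_arc": "miss",
             "direct": "structure_stands", "tap_early": "miss"}.get
    print(detection_difficulty(shots, base, novel))  # -> 0.75
```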
Deceptive Level Generation for Angry Birds
Gamage, Chathura, Stephenson, Matthew, Pinto, Vimukthini, Renz, Jochen
The Angry Birds AI competition has been held over many years to encourage the development of AI agents that can play Angry Birds game levels better than human players. Many different agents with various approaches have been employed over the competition's lifetime to solve this task. Even though the performance of these agents has increased significantly over the past few years, they still show major drawbacks when playing deceptive levels. This is because most current agents try to identify the best next shot rather than planning an effective sequence of shots. To encourage advancements in such agents, we present an automated methodology for generating deceptive game levels for Angry Birds. Even though there are many existing content generators for Angry Birds, they do not focus on generating deceptive levels. In this paper, we propose a procedure to generate deceptive levels for six deception categories that can fool state-of-the-art Angry Birds-playing AI agents. Our results show that the generated deceptive levels exhibit characteristics similar to those of human-created deceptive levels. Additionally, we define metrics to measure the stability, solvability, and degree of deception of the generated levels.
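The abstract names stability, solvability, and degree of deception as metrics but does not define them. The sketch below is a hedged illustration under assumed definitions (not the paper's): deception as the fraction of reference agents that fail a level, solvability as the existence of at least one known solution sequence, and stability as the structures being at rest before the first shot.

```python
# Hedged sketch with assumed metric definitions; the abstract only names the
# metrics. All class and function names here are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class GeneratedLevel:                      # hypothetical container for a generated level
    structures_at_rest: bool
    known_solutions: List[List[str]] = field(default_factory=list)

def stability(level: GeneratedLevel) -> bool:
    """Level is stable if nothing moves before the first shot."""
    return level.structures_at_rest

def solvability(level: GeneratedLevel) -> bool:
    """Level is solvable if at least one known shot sequence clears it."""
    return len(level.known_solutions) > 0

def degree_of_deception(level: GeneratedLevel,
                        agents: List[Callable[[GeneratedLevel], bool]]) -> float:
    """Fraction of reference agents that fail to solve the level."""
    if not agents:
        return 0.0
    failures = sum(1 for agent in agents if not agent(level))
    return failures / len(agents)

if __name__ == "__main__":
    level = GeneratedLevel(structures_at_rest=True,
                           known_solutions=[["roll_boulder", "hit_tnt"]])
    greedy_agent = lambda lvl: False     # always takes the "best next shot" and fails
    planning_agent = lambda lvl: True    # plans a sequence and succeeds
    print(stability(level), solvability(level),
          degree_of_deception(level, [greedy_agent, planning_agent]))  # True True 0.5
```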
Using Restart Heuristics to Improve Agent Performance in Angry Birds
Liu, Tommy, Renz, Jochen, Zhang, Peng, Stephenson, Matthew
Over the past few years, the Angry Birds AI competition has been held in an attempt to develop intelligent agents that can successfully and efficiently solve levels for the video game Angry Birds. Many different agents and strategies have been developed to solve the complex and challenging physical reasoning problems associated with such a game. However, none of these agents attempts one of the key strategies that humans employ to solve Angry Birds levels: restarting levels. Restarting is important in Angry Birds because sometimes the level is no longer solvable, or a shot that has already been made contributes little to no benefit towards the ultimate goal of the game. This paper proposes a framework and experimental evaluation for deciding when to restart levels in Angry Birds. We demonstrate that restarting is a viable strategy to improve agent performance in many cases.
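As a hedged illustration of the restart decision motivated above (the paper's actual heuristics and thresholds are not reproduced), a simple rule could compare the progress made per bird spent against what is still needed to clear the level. The state fields and criterion below are assumptions for illustration only.

```python
# Minimal sketch of a restart heuristic in the spirit of the abstract:
# restart when the level has become unsolvable, or when the shots made so
# far have produced no progress that could plausibly clear the remaining
# pigs with the birds left. Thresholds and signals are illustrative.
from dataclasses import dataclass

@dataclass
class LevelState:            # hypothetical observation of the live game
    pigs_remaining: int
    pigs_total: int
    birds_remaining: int
    birds_total: int

def should_restart(state: LevelState) -> bool:
    if state.birds_remaining == 0 and state.pigs_remaining > 0:
        return True  # level is no longer solvable
    birds_used = state.birds_total - state.birds_remaining
    pigs_killed = state.pigs_total - state.pigs_remaining
    if birds_used == 0:
        return False  # no shots made yet, nothing to judge
    rate_so_far = pigs_killed / birds_used
    rate_needed = state.pigs_remaining / max(state.birds_remaining, 1)
    # Restart if no progress was made and the required rate is out of reach.
    return pigs_killed == 0 and rate_so_far < rate_needed

if __name__ == "__main__":
    # Two shots made, no pigs destroyed, one bird left for three pigs: restart.
    print(should_restart(LevelState(pigs_remaining=3, pigs_total=3,
                                    birds_remaining=1, birds_total=3)))  # True
```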
Agent-Based Adaptive Level Generation for Dynamic Difficulty Adjustment in Angry Birds
Stephenson, Matthew, Renz, Jochen
Procedural level generation (PLG) is a key area of investigation for video game research (Hendrikx et al. 2013; Togelius et al. 2011). PLG can be extremely useful for increasing a game's length and replayability, as it allows a large number of levels to be created in a relatively short time. It is also possible to tailor the generated levels towards a specific user's playstyle, known as adaptive level generation, which allows for a unique and personalised experience. Section 2 describes the large amount of background and related work, both for Angry Birds and adaptive level generation in general. Section 3 presents our proposed adaptive generation method. Section 4 describes our conducted experiments and results. Section 5 discusses what these results could mean for both human players and agents, and Section 6 concludes this work and outlines future possibilities.
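As a hedged illustration of adaptive level generation for dynamic difficulty adjustment (the paper's own generator, features, and parameters are not reproduced here), a generator could expose a single difficulty knob that is nudged up or down based on the player's recent solve rate. The target rate, step size, and class design below are assumptions.

```python
# Illustrative sketch of dynamic difficulty adjustment for a level generator;
# the target solve rate, step size, and difficulty parameter are assumptions,
# not the generator described in the paper.
from typing import List

class AdaptiveDifficulty:
    def __init__(self, target_solve_rate: float = 0.5, step: float = 0.1):
        self.difficulty = 0.5          # normalised difficulty knob in [0, 1]
        self.target = target_solve_rate
        self.step = step

    def update(self, recent_outcomes: List[bool]) -> float:
        """Adjust the difficulty knob from the player's recent solved/failed outcomes."""
        if not recent_outcomes:
            return self.difficulty
        solve_rate = sum(recent_outcomes) / len(recent_outcomes)
        if solve_rate > self.target:
            self.difficulty = min(1.0, self.difficulty + self.step)
        elif solve_rate < self.target:
            self.difficulty = max(0.0, self.difficulty - self.step)
        return self.difficulty

if __name__ == "__main__":
    dda = AdaptiveDifficulty()
    print(round(dda.update([True, True, True, False]), 2))    # player doing well -> ~0.6
    print(round(dda.update([False, False, True, False]), 2))  # struggling -> back to ~0.5
```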
A Continuous Information Gain Measure to Find the Most Discriminatory Problems for AI Benchmarking
Stephenson, Matthew, Anderson, Damien, Khalifa, Ahmed, Levine, John, Renz, Jochen, Togelius, Julian, Salge, Christoph
This paper introduces an information-theoretic method for selecting a small subset of problems which gives us the most information about a group of problem-solving algorithms. This method was tested on the games in the General Video Game AI (GVGAI) framework, allowing us to identify a smaller set of games that still gives a large amount of information about the game-playing agents. This approach can be used to make agent testing more efficient in the future. We can achieve almost as good discriminatory accuracy when testing on only a handful of games as when testing on more than a hundred games, something which is often computationally infeasible. Furthermore, this method can be extended to study the dimensions of effective variance in game design between these games, allowing us to identify which games differentiate between agents in the most complementary ways. As a side effect of this investigation, we provide an up-to-date comparison on agent performance for all GVGAI games, and an analysis of correlations between scores and win-rates across both games and agents.
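The continuous information gain measure itself is not reproduced here; as a hedged sketch of the underlying idea of picking a small, discriminatory benchmark subset, the code below greedily selects the games that separate the most agent pairs (win rates differing by more than a margin) not yet separated by earlier picks. The greedy coverage criterion and margin are simplifying assumptions, not the paper's method.

```python
# Hedged sketch, not the paper's continuous information gain measure:
# greedily pick the games whose results discriminate between the most agent
# pairs that the already-chosen games do not separate.
from itertools import combinations
from typing import Dict, List, Set, Tuple

def pairs_separated(win_rates: Dict[str, float], margin: float) -> Set[Tuple[str, str]]:
    """Agent pairs whose win rates on one game differ by more than the margin."""
    return {(a, b) for a, b in combinations(sorted(win_rates), 2)
            if abs(win_rates[a] - win_rates[b]) > margin}

def select_games(results: Dict[str, Dict[str, float]], k: int,
                 margin: float = 0.1) -> List[str]:
    """results[game][agent] = win rate; pick up to k games covering most agent pairs."""
    covered: Set[Tuple[str, str]] = set()
    chosen: List[str] = []
    for _ in range(min(k, len(results))):
        best = max(
            results,
            key=lambda g: -1 if g in chosen
            else len(pairs_separated(results[g], margin) - covered),
        )
        if best in chosen:
            break
        chosen.append(best)
        covered |= pairs_separated(results[best], margin)
    return chosen

if __name__ == "__main__":
    results = {
        "gameA": {"agent1": 0.9, "agent2": 0.1, "agent3": 0.9},
        "gameB": {"agent1": 0.5, "agent2": 0.5, "agent3": 0.5},  # uninformative
        "gameC": {"agent1": 0.8, "agent2": 0.8, "agent3": 0.2},
    }
    print(select_games(results, k=2))  # -> ['gameA', 'gameC']
```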
Towards Explainable Inference about Object Motion using Qualitative Reasoning
Ge, Xiaoyu, Renz, Jochen, Hua, Hua
The capability of making explainable inferences regarding physical processes has long been desired. One fundamental physical process is object motion. Inferring what causes the motion of a group of objects can be a challenging task even for experts, e.g., in forensic science. Most of the work in the literature relies on physics simulation to draw such inferences. The simulation requires a precise model of the underlying domain to work well and is essentially a black box from which one can hardly obtain any useful explanation. By contrast, qualitative reasoning methods have the advantage of making transparent inferences with ambiguous information, which makes them suitable for this task. However, no suitable qualitative theory has been proposed for object motion in three-dimensional space. In this paper, we take on this challenge and develop a qualitative theory for the motion of rigid objects. Based on this theory, we develop a reasoning method to solve a very interesting problem: assuming there are several objects that were initially at rest and have now started to move, we want to infer what action caused their movement.
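The qualitative theory itself is not given in the abstract; the toy sketch below only illustrates the general flavour of sign-based qualitative reasoning about motion, not the paper's theory. Objects' velocities are abstracted to per-axis signs, and an axis-aligned push is kept as a candidate cause only if every moving object's qualitative velocity is consistent with it. All representations and names are assumptions.

```python
# Toy sketch of sign-based qualitative reasoning about motion; not the
# qualitative theory developed in the paper, only an illustration of the
# style of inference it describes.
from typing import Dict, List, Tuple

Sign = int  # -1, 0, or +1

def qualitative(v: Tuple[float, float, float], eps: float = 1e-6) -> Tuple[Sign, Sign, Sign]:
    """Abstract a 3D velocity to the sign of each component."""
    return tuple(0 if abs(c) < eps else (1 if c > 0 else -1) for c in v)

def consistent(motion: Tuple[Sign, Sign, Sign], push: Tuple[Sign, Sign, Sign]) -> bool:
    # The push must not oppose the motion on any axis, and must share at
    # least one axis with it (otherwise it cannot account for the movement).
    no_opposition = all(m == 0 or p == 0 or m == p for m, p in zip(motion, push))
    shares_axis = any(m == p != 0 for m, p in zip(motion, push))
    return no_opposition and shares_axis

def candidate_pushes(observations: Dict[str, Tuple[float, float, float]]) -> List[Tuple[Sign, Sign, Sign]]:
    """Axis-aligned pushes consistent with all observed object motions."""
    pushes = [tuple(s if i == axis else 0 for i in range(3))
              for axis in range(3) for s in (-1, 1)]
    motions = [qualitative(v) for v in observations.values()]
    return [p for p in pushes if all(consistent(m, p) for m in motions)]

if __name__ == "__main__":
    # Two objects observed drifting in +x (one also slightly upward): only a
    # push with a +x component remains a candidate cause under this toy model.
    obs = {"crate": (0.4, 0.0, 0.0), "ball": (0.3, 0.0, 0.1)}
    print(candidate_pushes(obs))  # -> [(1, 0, 0)]
```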
Creating a Hyper-Agent for Solving Angry Birds Levels
Stephenson, Matthew (Australian National University) | Renz, Jochen (Australian National University)
Over the past few years, the Angry Birds AI competition has been held in an attempt to develop intelligent agents that can successfully and efficiently solve levels for the video game Angry Birds. Many different agents and strategies have been developed to solve the complex and challenging physical reasoning problems associated with such a game. However, the performance of these various agents is non-transitive and varies significantly across different levels. No single agent dominates all situations presented, indicating that different procedures are better at solving certain levels than others. We therefore propose the construction of a hyper-agent that selects from a portfolio of sub-agents whichever it believes is best at solving any given level. This hyper-agent utilises key features that can be observed about a level to rank the available candidate algorithms based on their expected score. The proposed method exhibits a significant increase in performance over the individual sub-agents, and demonstrates the potential of using such an approach to solve other physics-based games or problems.
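As a hedged sketch of the selection step described above (the paper's feature set and prediction model are not reproduced), each sub-agent could be given a predictor of expected score from observable level features, with the hyper-agent picking the sub-agent whose prediction is highest. The 1-nearest-neighbour predictor, the feature names, and the sub-agent labels below are illustrative assumptions.

```python
# Hedged sketch of the hyper-agent idea: per level, choose the sub-agent with
# the highest predicted score from observable level features. Predictor,
# features, and agent labels are illustrative, not the paper's model.
from typing import Callable, Dict, List, Tuple

Features = Dict[str, float]

def nearest_neighbour_predictor(history: List[Tuple[Features, float]]) -> Callable[[Features], float]:
    """Predict a sub-agent's score on a new level from its most similar past level."""
    def distance(a: Features, b: Features) -> float:
        keys = set(a) | set(b)
        return sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys)

    def predict(features: Features) -> float:
        _, score = min(history, key=lambda item: distance(item[0], features))
        return score
    return predict

def select_sub_agent(predictors: Dict[str, Callable[[Features], float]],
                     features: Features) -> str:
    """Return the sub-agent with the highest expected score on this level."""
    return max(predictors, key=lambda name: predictors[name](features))

if __name__ == "__main__":
    # Illustrative per-agent history of (level features, achieved score).
    history_a = [({"pigs": 3, "tnt": 0}, 45000.0), ({"pigs": 5, "tnt": 2}, 20000.0)]
    history_b = [({"pigs": 3, "tnt": 0}, 30000.0), ({"pigs": 5, "tnt": 2}, 60000.0)]
    predictors = {"agent_a": nearest_neighbour_predictor(history_a),
                  "agent_b": nearest_neighbour_predictor(history_b)}
    new_level = {"pigs": 5, "tnt": 1}
    print(select_sub_agent(predictors, new_level))  # -> 'agent_b'
```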
The Computational Complexity of Angry Birds and Similar Physics-Simulation Games
Stephenson, Matthew (Australian National University) | Renz, Jochen (Australian National University) | Ge, Xiaoyu (Australian National University)
This paper presents several proofs for the computational complexity of the popular physics-based puzzle game Angry Birds. By using a combination of different gadgets within this game's environment, we can demonstrate that the problem of solving Angry Birds levels is NP-hard. Proof of NP-hardness is by reduction from a known NP-complete problem, in this case 3-SAT. In addition, we are able to show that the original version of Angry Birds is within NP and therefore also NP-complete. These proofs can be extended to other physics-based games with similar mechanics.
Angry Birds as a Challenge for Artificial Intelligence
Renz, Jochen (The Australian National University) | Ge, Xiaoyu (Australian National University) | Verma, Rohan (Australian National University) | Zhang, Peng (Australian National University)
The Angry Birds AI Competition (aibirds.org) has been held annually since 2012 in conjunction with some of the major AI conferences, most recently with IJCAI 2015. The goal of the competition is to build AI agents that can play new Angry Birds levels as well as or better than the best human players. Successful agents should be able to quickly analyse new levels and to predict the physical consequences of possible actions in order to select actions that solve a given level with a high score. Agents have no access to the game's internal physics, but only receive screenshots of the live game. In this paper we describe why this problem is a challenge for AI, and why it is an important step towards building AI that can successfully interact with the real world. We also summarise some highlights of past competitions, including a new competition track we introduced recently.
The Angry Birds AI Competition
Renz, Jochen (The Australian National University) | Ge, Xiaoyu (The Australian National University) | Gould, Stephen (The Australian National University) | Zhang, Peng (The Australian National University)
The aim of the Angry Birds AI competition (AIBIRDS) is to build intelligent agents that can play new Angry Birds levels better than the best human players. This is surprisingly difficult for AI, as it requires capabilities similar to those that intelligent systems need for successfully interacting with the physical world, one of the grand challenges of AI. As such, the competition offers a simplified and controlled environment for developing and testing the necessary AI technologies: a seamless integration of computer vision, machine learning, knowledge representation and reasoning, reasoning under uncertainty, planning, and heuristic search, among others. Over the past three years there have been significant improvements, but we are still a long way from reaching the ultimate aim and, thus, there are great opportunities for participants in this competition.