freeciv
Qualitative Event Perception: Leveraging Spatiotemporal Episodic Memory for Learning Combat in a Strategy Game
Hancock, Will, Forbus, Kenneth D.
Event perception refers to people's ability to carve up continuous experience into meaningful discrete events. We speak of finishing our morning coffee, mowing the lawn, leaving work, etc. as singular occurrences that are localized in time and space. In this work, we analyze how spatiotemporal representations can be used to automatically segment continuous experience into structured episodes, and how these descriptions can be used for analogical learning. These representations are based on Hayes' notion of histories and build upon existing work on qualitative episodic memory. Our agent automatically generates event descriptions of military battles in a strategy game and improves its gameplay by learning from this experience. Episodes are segmented based on changing properties in the world, and we show evidence that they facilitate learning because they capture event descriptions at a useful spatiotemporal grain size. This is evaluated through our agent's performance in the game. We also show empirical evidence that the perceived spatial extent of episodes affects both their temporal duration and the number of overall cases generated.
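To make the segmentation idea concrete, here is a minimal sketch (not the authors' implementation; all attribute names are hypothetical) of splitting a stream of unit observations into episodes whenever a tracked qualitative property changes, in the spirit of Hayes-style histories:

```python
# Illustrative sketch: each episode is a maximal run of observations
# that share the same qualitative description of a unit.

def qualitative_state(unit):
    """Map a unit's raw attributes to qualitative properties (assumed schema)."""
    hp = unit["hp"]
    return {
        "health": "high" if hp > 66 else "medium" if hp > 33 else "low",
        "region": unit["region"],
        "in_combat": unit["in_combat"],
    }

def segment_episodes(trace):
    """Split a temporally ordered trace into episodes.

    A new episode begins whenever any qualitative property changes,
    so each episode is uniform in its qualitative description.
    """
    episodes, current, prev = [], [], None
    for obs in trace:
        state = qualitative_state(obs)
        if prev is not None and state != prev:
            episodes.append(current)
            current = []
        current.append(obs)
        prev = state
    if current:
        episodes.append(current)
    return episodes

trace = [
    {"hp": 90, "region": "hills", "in_combat": False},
    {"hp": 85, "region": "hills", "in_combat": False},
    {"hp": 80, "region": "hills", "in_combat": True},    # combat starts
    {"hp": 40, "region": "hills", "in_combat": True},    # health band drops
    {"hp": 35, "region": "plains", "in_combat": False},  # retreat
]
print([len(e) for e in segment_episodes(trace)])  # → [2, 1, 1, 1]
```

Note how the grain size of the resulting episodes is controlled entirely by which properties are tracked and how coarsely they are quantized, which is the lever the abstract's spatial-extent experiment varies.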
Comparing Knowledge-based Reinforcement Learning to Neural Networks in a Strategy Game
Nechepurenko, Liudmyla, Voss, Viktor, Gritsenko, Vyacheslav
We compare a novel Knowledge-based Reinforcement Learning (KB-RL) approach with the traditional Neural Network (NN) method in solving a classical task of the Artificial Intelligence (AI) field. Neural networks became very prominent in recent years and, combined with Reinforcement Learning, proved to be very effective for one of the frontier challenges in AI - playing the game of Go. Our experiment shows that a KB-RL system is able to outperform an NN in a task typical for NNs, such as optimizing a regression problem. Furthermore, KB-RL offers a range of advantages in comparison to traditional Machine Learning methods. In particular, there is no need for a large dataset to start and succeed with this approach, its learning process takes considerably less effort, and its decisions are fully controllable, explicit and predictable.
Custom AI can beat most human foes at FreeCiv -- and that's not even its day job
Beating the AI opponents in a strategy game is the first step any gamer takes before heading online where the real challenge is. Whether they cheat or not, most game AIs are beatable, often with simple, repeatable strategies once you find their weak points. Named HIRO (Human Intelligence Robotically Optimized), Arago's algorithm can beat almost all players -- and that's not even its main job. Arago is an IT automation firm, which develops smart AIs that can streamline businesses and automate many of their functions. HIRO is one such AI, and while it does an excellent job of improving workflows at a number of corporations, it's the way it's trained that is most fascinating. It plays -- very well at that -- the freely available civilization-building game called Freeciv.
Arago teaches an AI to play games, the better to manage IT systems
If an AI could rule a world, would you trust it to manage your IT systems? German software company Arago is hoping you will. The developer of IT automation system Hiro (short for Human Intelligence Robotically Optimized) has been teaching its software how to play Freeciv, an open source computer strategy game inspired by Sid Meier's Civilization series of games, and in the process is learning to make IT management more fun. Hiro is an AI-based automation system that usually sits on top of other IT service management tools. Unlike script-based systems, it learns from its users how best to manage a company's IT systems.
Arago's AI can now beat some human players at complex civ strategy games
Arago's flagship HIRO AI product plays Freeciv, a free civilization-building simulation that's based on the popular and long-lived Sid Meier's Civilization series of games -- and it's getting more skilled. Freeciv is a complex, sprawling game with a huge number of possible strategies that can lead to success, especially when playing against unpredictable human opponents, but HIRO can now best around 80 percent of the human players it faces off against, as Arago announced on stage at TechCrunch Disrupt London 2016. How complicated can it be to succeed at a video game? Well, depending on the options you select, as well as conditions that can shift dramatically over the many, many turns of any given game of Freeciv, the number of possible permutations of an individual game is 10 to the power of 15,000, meaning you need a very plastic AI indeed to successfully "learn" how to negotiate individual twists and turns. Games are a common platform for testing and proving AI prowess; Google's AlphaGo is an example that has received a lot of attention for its successes.
Learning Qualitative Models by Demonstration
Hinrichs, Thomas R. (Northwestern University) | Forbus, Kenneth D. (Northwestern University)
Creating software agents that learn interactively requires the ability to learn from a small number of trials, extracting general, flexible knowledge that can drive behavior from observation and interaction. We claim that qualitative models provide a useful intermediate level of causal representation for dynamic domains, including the formulation of strategies and tactics. We argue that qualitative models are quickly learnable, and enable model-based reasoning techniques to be used to recognize, operationalize, and construct more strategic knowledge. This paper describes an approach to incrementally learning qualitative influences by demonstration in the context of a strategy game. We show how the learned model can help a system play by enabling it to explain which actions could contribute to maximizing a quantitative goal. We also show how reasoning about the model allows it to reformulate a learning problem to address delayed effects and credit assignment, such that it can improve its performance on more strategic tasks such as city placement.
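The incremental learning of qualitative influences described above can be sketched roughly as candidate elimination over signed influences. This is a simplified illustration, not the paper's system; the action and quantity names are hypothetical:

```python
# Illustrative sketch: hypothesize a signed qualitative influence
# (action -> quantity, "+" or "-") for every pair, then prune any
# hypothesis contradicted by an observed change after a demonstration.

def sign(x):
    return (x > 0) - (x < 0)

class InfluenceLearner:
    """Maintain candidate signed influences from actions to quantities."""

    def __init__(self, actions, quantities):
        # Initially both signs are possible for every (action, quantity) pair.
        self.hypotheses = {
            (a, q): {"+", "-"} for a in actions for q in quantities
        }

    def observe(self, action, deltas):
        """After `action`, `deltas` maps each quantity to its observed change."""
        for q, d in deltas.items():
            s = sign(d)
            if s > 0:
                self.hypotheses[(action, q)].discard("-")
            elif s < 0:
                self.hypotheses[(action, q)].discard("+")

    def influences(self):
        """Return the pairs whose influence sign is now unambiguous."""
        return {k: next(iter(v)) for k, v in self.hypotheses.items()
                if len(v) == 1}

learner = InfluenceLearner(["build-farm"], ["food", "gold"])
learner.observe("build-farm", {"food": 2, "gold": -1})
learner.observe("build-farm", {"food": 1, "gold": -1})
print(learner.influences())
# → {('build-farm', 'food'): '+', ('build-farm', 'gold'): '-'}
```

A pair whose observed changes flip sign across demonstrations loses both hypotheses and drops out, which is one naive way such a learner could cope with noisy or indirect effects; handling delayed effects and credit assignment, as the abstract notes, requires reasoning beyond this per-step pruning.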