Agent Architecture Considerations for Real-Time Planning in Games

AAAI Conferences

Planning in real-time offers several benefits over the more typical techniques of implementing Non-Player Character (NPC) behavior with scripts or finite state machines. NPCs that plan their actions dynamically are better equipped to handle unexpected situations. The modular nature of the goals and actions that make up a plan facilitates reuse, sharing, and maintenance of behavioral building blocks. These benefits, however, come at the cost of CPU cycles. To plan simultaneously for several NPCs in real time, while continuing to share the processor with the physics, animation, and rendering systems, careful consideration must be given to the supporting architecture. The architecture must support distributed processing and caching of costly calculations. These considerations stretch beyond the architecture of the planner and affect the agent architecture as a whole. This paper describes lessons learned while implementing real-time planning for NPCs in F.E.A.R., a AAA first-person shooter shipping for PC in 2005.
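
The two architectural points named here, time-slicing planner work across frames and caching costly calculations, can be sketched roughly as follows. This is an illustrative Python sketch with invented names (PlannerManager, request_plan, and so on) and a stand-in cost function; it is not the F.E.A.R. implementation.

    from collections import deque

    class PlannerManager:
        """Time-slices planning across frames and caches costly calculations."""

        def __init__(self, max_plans_per_frame=1):
            self.pending = deque()        # NPCs waiting for a new plan
            self.max_plans_per_frame = max_plans_per_frame
            self.cost_cache = {}          # (start, goal) -> cached path cost

        def request_plan(self, npc):
            # Queue the request instead of planning immediately, so several
            # NPCs can share one frame budget with physics and rendering.
            self.pending.append(npc)

        def path_cost(self, start, goal):
            # Cache an expensive calculation; straight-line distance stands
            # in here for a real navigation-mesh search.
            key = (start, goal)
            if key not in self.cost_cache:
                self.cost_cache[key] = ((start[0] - goal[0]) ** 2 +
                                        (start[1] - goal[1]) ** 2) ** 0.5
            return self.cost_cache[key]

        def update(self):
            # Called once per frame: plan for at most a few queued NPCs.
            for _ in range(min(self.max_plans_per_frame, len(self.pending))):
                npc = self.pending.popleft()
                npc.plan = self.build_plan(npc)

        def build_plan(self, npc):
            # Stand-in for the actual goal-oriented action planner.
            return [("goto", npc.goal)]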



AAAI Conferences

We propose that a planner should be provided with an explicit model of its own planning mechanism. We show that linking the planner's expectations about the performance of its plans to such a model, by means of explicit justification structures, enables it to determine which aspects of its planning are responsible for observed performance failures.
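
A rough sketch of that idea, with invented names (Expectation, Justification, diagnose): each expectation attached to a plan records which part of the planner's own model justified it, so a violated expectation can be traced back to the responsible planning component.

    from dataclasses import dataclass

    @dataclass
    class Justification:
        component: str      # planner component that produced the expectation
        assumption: str     # what that component assumed at planning time

    @dataclass
    class Expectation:
        description: str    # e.g., "reach the waypoint within 10s"
        check: object       # predicate over the observed world state
        justification: Justification

    def diagnose(expectations, world_state):
        # Return the planner components whose expectations were violated.
        return [(e.description, e.justification.component)
                for e in expectations
                if not e.check(world_state)]

    # Usage: a failed timing expectation is blamed on the heuristic
    # that justified it.
    exp = Expectation("arrive by t=10",
                      lambda w: w["time"] <= 10,
                      Justification("travel-time heuristic",
                                    "terrain is flat"))
    print(diagnose([exp], {"time": 14}))
    # [('arrive by t=10', 'travel-time heuristic')]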



A robot hand taught itself to solve a Rubik's Cube after creating its own training regime

#artificialintelligence

Over a year ago, OpenAI, the San Francisco–based for-profit AI research lab, announced that it had trained a robotic hand to manipulate a cube with remarkable dexterity. That might not sound earth-shattering. But in the AI world, it was impressive for two reasons. First, the hand had taught itself how to fidget with the cube using a reinforcement-learning algorithm, a technique modeled on the way animals learn. Second, all the training had been done in simulation, yet the learned dexterity transferred successfully to the real world.
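
The "training regime" the headline alludes to is OpenAI's automatic domain randomization, in which the simulator's parameter ranges widen as the policy improves. A toy sketch of that loop, with illustrative names, values, and thresholds, might look like this:

    import random

    class AutoDomainRandomizer:
        # One randomized simulator parameter, e.g., the cube's friction.
        def __init__(self, low, high, step=0.05):
            self.low, self.high, self.step = low, high, step

        def sample(self):
            # Draw the parameter value used for one simulated episode.
            return random.uniform(self.low, self.high)

        def update(self, success_rate, threshold=0.8):
            # When the policy masters the current range, widen it so the
            # simulation keeps generating harder training conditions.
            if success_rate >= threshold:
                self.low -= self.step
                self.high += self.step

    friction = AutoDomainRandomizer(low=0.9, high=1.1)
    for success_rate in [0.9, 0.85, 0.4]:
        friction.update(success_rate)
    print(round(friction.low, 2), round(friction.high, 2))
    # Range widened twice, then held: 0.8 1.2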


Multimodal Cognitive Architecture: Making Perception More Central to Intelligent Behavior

AAAI Conferences

I propose that the notion of cognitive state be broadened from the current predicate-symbolic, Language-of-Thought framework to a multi-modal one, in which perception and kinesthetic modalities participate in thinking. In contrast to the roles assigned to perception and motor activities as modules external to central cognition in the currently dominant theories in AI and Cognitive Science, in the proposed approach central cognition incorporates parts of the perceptual machinery. I motivate and describe the proposal schematically, and describe the implementation of a bimodal version in which a diagrammatic representation component is added to the cognitive state. The proposal explains our rich multimodal internal experience, and can be a key step in the realization of embodied agents. The proposed multimodal cognitive state can significantly enhance the agent's problem solving. Note: in the current framework, memory, as well as the information retrieved from memory and from perception, is represented in predicate-symbolic form.
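
A minimal sketch, with invented names, of what such a bimodal cognitive state could look like: a predicate-symbolic component (a set of facts) sits alongside a diagrammatic component (a 2-D layout), and spatial queries are answered by inspecting the diagram rather than by symbolic inference.

    class BimodalState:
        def __init__(self):
            self.facts = set()     # predicate-symbolic component
            self.diagram = {}      # diagrammatic component: name -> (x, y)

        def assert_fact(self, predicate, *args):
            # Add a symbolic proposition, e.g., ("on", ("book", "table")).
            self.facts.add((predicate, args))

        def place(self, name, x, y):
            # Position an object in the diagrammatic component.
            self.diagram[name] = (x, y)

        def left_of(self, a, b):
            # Spatial query answered by "perceiving" the diagram,
            # not by inference over the symbolic facts.
            return self.diagram[a][0] < self.diagram[b][0]

    state = BimodalState()
    state.assert_fact("on", "book", "table")   # symbolic
    state.place("book", 1, 0)                  # diagrammatic
    state.place("lamp", 4, 0)
    print(state.left_of("book", "lamp"))       # True, read off the diagram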