Goto

Collaborating Authors

 Kolen, John F.


Horizontal Scaling With a Framework for Providing AI Solutions Within a Game Company

AAAI Conferences

Games have been a major focus of AI since the field formed seventy years ago. Recently, video games have replaced chess and Go as the current "Mt. Everest Problem." This paper looks beyond the video games themselves to the application of AI techniques within the ecosystems that produce them. Electronic Arts (EA) must deal with AI at scale: rather than building a single AI-based flagship application, it develops many AAA games each year across many game studios. EA has adopted a horizontal scaling strategy in response to this challenge and built a platform for delivering AI artifacts anywhere within EA's software universe. By combining a data warehouse for player history, an Agent Store for capturing processes acquired through machine learning, and a recommendation engine as an action layer, EA has delivered a wide range of AI solutions throughout the company over the last two years. These solutions, such as dynamic difficulty adjustment, in-game content and activity recommendations, matchmaking, and game balancing, have had a major impact on engagement, revenue, and development resources within EA.
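To make the three-layer pattern in the abstract concrete, here is a minimal sketch of how a player-history store, an agent registry, and a recommendation layer might fit together. All class and method names below are invented for illustration; EA's actual platform APIs are not public, and this is only one plausible shape for the architecture described.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class PlayerHistory:
    """Stand-in for the player-history data warehouse: a per-player event log."""
    events: Dict[str, List[dict]] = field(default_factory=dict)

    def features(self, player_id: str) -> List[dict]:
        return self.events.get(player_id, [])

@dataclass
class AgentStore:
    """Stand-in for the Agent Store: a registry of learned scoring agents."""
    agents: Dict[str, Callable[[List[dict]], float]] = field(default_factory=dict)

    def register(self, name: str, agent: Callable[[List[dict]], float]) -> None:
        self.agents[name] = agent

    def score(self, name: str, features: List[dict]) -> float:
        return self.agents[name](features)

class Recommender:
    """Stand-in action layer: ranks candidate actions by agent score."""
    def __init__(self, history: PlayerHistory, store: AgentStore):
        self.history, self.store = history, store

    def recommend(self, player_id: str, candidates: List[str]) -> str:
        feats = self.history.features(player_id)
        return max(candidates, key=lambda name: self.store.score(name, feats))

# Toy usage: a dynamic-difficulty decision driven by player history.
history = PlayerHistory({"p1": [{"match": "loss"}] * 3})
store = AgentStore()
store.register("easy_mode", lambda f: sum(e.get("match") == "loss" for e in f))
store.register("hard_mode", lambda f: sum(e.get("match") == "win" for e in f))
print(Recommender(history, store).recommend("p1", ["easy_mode", "hard_mode"]))
# -> "easy_mode", since this player's recent history is all losses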


Fool's Gold: Extracting Finite State Machines from Recurrent Network Dynamics

Neural Information Processing Systems

Several recurrent networks have been proposed as representations for the task of formal language learning. After training a recurrent network to recognize a formal language or predict the next symbol of a sequence, the next logical step is to understand the information processing carried out by the network. Some researchers have begun extracting finite state machines from the internal state trajectories of their recurrent networks. This paper describes how sensitivity to initial conditions and discrete measurements can trick these extraction methods into returning illusory finite state descriptions.
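A minimal sketch of the kind of extraction procedure the paper critiques: quantize the hidden-state trajectory of a recurrent network into grid cells and read off a transition table, with each visited cell treated as a state. The grid sizes and the randomly weighted (untrained) network below are illustrative choices, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)
W, U = rng.normal(size=(4, 4)), rng.normal(size=(4, 2))  # recurrent / input weights

def step(h, sym):
    x = np.eye(2)[sym]             # one-hot encode the input symbol (0 or 1)
    return np.tanh(W @ h + U @ x)  # recurrent state update

def extract_fsm(strings, grid=0.5):
    """Map each visited hidden state to a grid cell; cells become 'states'."""
    transitions = {}
    for s in strings:
        h = np.zeros(4)
        for sym in s:
            src = tuple(np.round(h / grid).astype(int))
            h = step(h, sym)
            dst = tuple(np.round(h / grid).astype(int))
            transitions[(src, sym)] = dst
    states = {q for (q, _s), _r in transitions.items()} | set(transitions.values())
    return len(states), transitions

strings = [rng.integers(0, 2, size=20).tolist() for _ in range(50)]
for grid in (1.0, 0.5, 0.25, 0.1):
    n, _ = extract_fsm(strings, grid)
    print(f"grid={grid}: {n} extracted states")

Shrinking the grid inflates the extracted state count without bound: the "machine" reflects the measurement resolution rather than any finite-state structure in the underlying dynamics, which is the fool's gold of the title.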


Back Propagation is Sensitive to Initial Conditions

Neural Information Processing Systems

This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique.
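A small illustration of the paper's observation, using plain NumPy gradient descent on XOR. The network size, learning rate, seeds, and weight scales are arbitrary choices for the demo, not the paper's experimental settings.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def train(seed, scale, lr=0.5, max_epochs=20000):
    """Train a 2-2-1 network on XOR; return epochs to converge, or None."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-scale, scale, (2, 2)); b1 = np.zeros(2)
    W2 = rng.uniform(-scale, scale, (2, 1)); b2 = np.zeros(1)
    for epoch in range(max_epochs):
        h = np.tanh(X @ W1 + b1)                # hidden layer
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output
        err = out - y
        if np.mean(err ** 2) < 1e-3:
            return epoch                        # converged
        # backpropagate the squared-error gradient
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)
    return None                                 # failed to converge

for seed in range(5):
    for scale in (0.1, 1.0, 5.0):
        print(f"seed={seed} scale={scale}: epochs={train(seed, scale)}")

Nearby starting points can converge quickly, slowly, or not at all, which is the sensitivity to initial weight selection the paper maps out.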

