 Blair, Alan D.


Models of Brains: What Should We Borrow From Biology?

AAAI Conferences

Traditional models of neural networks used in computer science and much artificial intelligence research are typically based on an understanding of the brain from many decades ago. In this paper we present an overview of the major known functional properties of natural neural networks, reviewing the evidence that implicates these properties in learning as well as their interesting computational characteristics. Introducing some of these functional properties into neural networks evolved to perform learning or adaptation tasks has resulted in better solutions and improved evolvability, and a common theme emerging across computational studies of these properties is self-organisation. Self-organising principles are thought to play a critical role in the development and functioning of natural neural networks, so an interesting direction for future research is to explicitly explore the use of self-organising systems, via the functional properties reviewed here, in the development of neural networks for AI systems.
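
To make the notion of self-organisation concrete, here is a minimal sketch of one canonical self-organising learning rule, Oja's rule: a linear neuron whose weights, driven only by local Hebbian activity with a built-in decay term, converge to the first principal component of the inputs with no external teacher. The rule and the toy Gaussian data are illustrative choices for this listing, not taken from the paper itself.

    import numpy as np

    def oja_update(w, x, lr=0.01):
        # Oja's rule: Hebbian growth (lr * y * x) with an implicit decay
        # term (-lr * y^2 * w) that keeps the weight norm bounded.
        y = np.dot(w, x)
        return w + lr * y * (x - y * w)

    # Toy usage: inputs drawn from a correlated 2-D Gaussian.
    rng = np.random.default_rng(0)
    cov = np.array([[3.0, 1.0], [1.0, 1.0]])
    w = rng.normal(size=2)
    for _ in range(5000):
        x = rng.multivariate_normal(np.zeros(2), cov)
        w = oja_update(w, x)
    print(w / np.linalg.norm(w))  # approximates the leading eigenvector of cov

The point of the example is that global structure (the principal component) emerges from purely local update dynamics, the same theme the abstract identifies across the biological functional properties it reviews.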


Why did TD-Gammon Work?

Neural Information Processing Systems

Although TD-Gammon is one of the major successes in machine learning, it has not led to similar impressive breakthroughs in temporal difference learning for other applications or even other games. We were able to replicate some of the success of TD-Gammon, developing a competitive evaluation function on a 4000 parameter feed-forward neural network, without using back-propagation, reinforcement or temporal difference learning methods. Instead we apply simple hill-climbing in a relative fitness environment. These results and further analysis suggest that the surprising success of Tesauro's program had more to do with the co-evolutionary structure of the learning task and the dynamics of the backgammon game itself.

1 INTRODUCTION

It took great chutzpah for Gerald Tesauro to start wasting computer cycles on temporal difference learning in the game of Backgammon (Tesauro, 1992). After all, the dream of computers mastering a domain by self-play or "introspection" had been around since the early days of AI, forming part of Samuel's checker player (Samuel, 1959) and used in Donald Michie's MENACE tic-tac-toe learner (Michie, 1961). However, such self-conditioning systems, with weak or nonexistent internal representations, had generally been limited in terms of scale and abandoned by the field of AI.
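
The "simple hill-climbing in a relative fitness environment" can be summarised in a few lines: perturb the champion's weights, let the mutant play the champion, and keep whichever wins the majority of games. The sketch below is an assumed minimal reading of that description; the play_game interface is a hypothetical stand-in for a full backgammon game between two evaluators, and none of the names or constants come from the paper itself.

    import numpy as np

    rng = np.random.default_rng(0)

    def play_game(champion_w, challenger_w):
        # Hypothetical stand-in for one game of backgammon between two
        # weight vectors used as position evaluators; returns True when
        # the challenger wins. Faked with a coin flip so the sketch runs.
        return rng.random() < 0.5

    def hill_climb(n_params=4000, generations=1000, games=4, sigma=0.05):
        # Relative fitness: a mutant is scored only against the current
        # champion, so the learning task co-evolves with the learner
        # instead of being measured against a fixed objective.
        champion = rng.normal(size=n_params)
        for _ in range(generations):
            challenger = champion + sigma * rng.normal(size=n_params)
            wins = sum(play_game(champion, challenger) for _ in range(games))
            if wins > games / 2:      # challenger takes over on a majority
                champion = challenger
        return champion

    weights = hill_climb()

Outright replacement is used here only to keep the sketch minimal; variants that merely nudge the champion's weights toward a winning challenger damp the luck of the dice, since a single match is a noisy fitness signal.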