Document Type

Conference Proceeding

Publisher

IEEE

Faculty

Faculty of Computing, Health and Science

School

School of Computer and Information Science

RAS ID

5415

Comments

This is an Author's Accepted Manuscript of: Wittkamp, M., Barone, L., & Hingston, P. F. (2008). Using NEAT for Continuous Adaptation and Teamwork Formation in Pacman. Proceedings of the IEEE Symposium on Computational Intelligence and Games (pp. 234-242). Perth, Australia: IEEE.

© 2008 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract

Despite games often being used as a testbed for new computational intelligence techniques, the majority of artificial intelligence in commercial games is scripted. This means that the computer agents are non-adaptive and, as a result, often inherently exploitable. In this paper, we describe a learning system designed for team strategy development in a real-time multi-agent domain. We test our system in the game of Pacman, evolving adaptive strategies for the ghosts in simulated real time against a competent Pacman player. Our agents (the ghosts) are controlled by neural networks, whose weights and structure are incrementally evolved via an implementation of the NEAT (Neuro-Evolution of Augmenting Topologies) algorithm. We demonstrate the design and successful implementation of this system by evolving a number of interesting and complex team strategies that outperform the ghosts' strategies of the original arcade version of the game.
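The evolutionary loop described in the abstract (networks whose weights and topology are evolved by NEAT, scored by how well the ghost team plays against Pacman) follows the standard NEAT pattern. The sketch below is a minimal illustration of that pattern using the open-source neat-python package; it is not the authors' implementation. The sensor vector, the fitness stand-in simulate_ghost_team, and the config file name neat_config.ini are assumptions made purely for illustration.

    import neat  # third-party package: neat-python

    def eval_genomes(genomes, config):
        # Assign a fitness to each candidate ghost controller.
        # NEAT evolves both the weights and the topology of these networks.
        for genome_id, genome in genomes:
            net = neat.nn.FeedForwardNetwork.create(genome, config)
            genome.fitness = simulate_ghost_team(net)

    def simulate_ghost_team(net):
        # Hypothetical placeholder for the game evaluation: feed a ghost's
        # sensor readings (e.g. relative Pacman position, distances to the
        # other ghosts) into the network and score the resulting play.
        sensors = [0.2, -0.5, 0.8, 0.1]        # dummy sensor vector
        move_scores = net.activate(sensors)    # one score per move direction
        return float(max(move_scores))         # dummy fitness value

    # The config file (assumed name) declares population size, the number of
    # network inputs/outputs, mutation rates, and speciation parameters.
    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         "neat_config.ini")
    population = neat.Population(config)
    best_genome = population.run(eval_genomes, 50)   # evolve for 50 generations

In the paper itself, fitness is derived from real-time team play against a competent Pacman player rather than the dummy evaluation shown here.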

DOI

10.1109/CIG.2008.5035645

Access Rights

free_to_read

Link to publisher version (DOI)

https://doi.org/10.1109/CIG.2008.5035645