Temporal Difference Learning and Simulated Annealing for Optimal Control: A Case Study
Document Type
Conference Proceeding
Faculty
Faculty of Computing, Health and Science
School
School of Computer and Security Science
RAS ID
8426
Abstract
The trade-off between exploration and exploitation has an important impact on the performance of temporal difference learning. Several action selection strategies exist; however, it is unclear which strategy performs best, and their impact may depend on the application domain and on human factors. This paper presents a modified Sarsa(λ) control algorithm that samples actions in conjunction with a simulated annealing technique. A simulated game of soccer, which has a large, dynamic and continuous state space, is utilised as the evaluation environment. The empirical results demonstrate that the quality of convergence is significantly improved by the simulated annealing approach.
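To illustrate the kind of scheme the abstract describes, the sketch below (not the authors' code) combines tabular Sarsa(λ) with Boltzmann (softmax) action sampling whose temperature is annealed over episodes, so exploration gradually gives way to exploitation. The toy grid-world environment, the geometric cooling schedule, and all parameter values are illustrative assumptions only; the paper's actual experiments use a soccer simulation.

```python
"""Minimal sketch: Sarsa(lambda) with annealed Boltzmann action selection.

Assumptions (not from the paper): a 5x5 toy grid world, accumulating
eligibility traces, and a geometric cooling schedule for the temperature.
"""
import numpy as np


class ToyGrid:
    """Tiny grid world: start at (0, 0), goal at (4, 4), 4 moves."""

    def __init__(self):
        self.size, self.n_actions = 5, 4
        self.n_states = self.size * self.size

    def reset(self):
        self.pos = (0, 0)
        return 0

    def step(self, a):
        r, c = self.pos
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
        r = min(max(r + dr, 0), self.size - 1)
        c = min(max(c + dc, 0), self.size - 1)
        self.pos = (r, c)
        s = r * self.size + c
        done = (r, c) == (self.size - 1, self.size - 1)
        return s, (1.0 if done else -0.01), done


def boltzmann(q_row, temperature, rng):
    """Sample an action from the softmax (Boltzmann) distribution over Q-values."""
    prefs = (q_row - q_row.max()) / max(temperature, 1e-8)
    probs = np.exp(prefs)
    probs /= probs.sum()
    return rng.choice(len(q_row), p=probs)


def sarsa_lambda(env, episodes=500, alpha=0.1, gamma=0.95, lam=0.9,
                 t0=1.0, cooling=0.99, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((env.n_states, env.n_actions))
    temperature = t0
    for _ in range(episodes):
        e = np.zeros_like(q)                  # eligibility traces
        s = env.reset()
        a = boltzmann(q[s], temperature, rng)
        done = False
        while not done:
            s2, reward, done = env.step(a)
            a2 = boltzmann(q[s2], temperature, rng)
            td_error = reward + (0.0 if done else gamma * q[s2, a2]) - q[s, a]
            e[s, a] += 1.0                    # accumulating trace
            q += alpha * td_error * e
            e *= gamma * lam
            s, a = s2, a2
        temperature *= cooling                # anneal: exploration -> exploitation
    return q


if __name__ == "__main__":
    q = sarsa_lambda(ToyGrid())
    print("Greedy value of start state:", q[0].max())
```

The temperature plays the role of the annealing schedule: early episodes sample actions almost uniformly, while later episodes increasingly follow the greedy policy implied by the learned Q-values.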
Comments
Leng, J., Sathyaraj, B. M., & Jain, L. (2008, March). Temporal difference learning and simulated annealing for optimal control: A case study. In KES International Symposium on Agent and Multi-Agent Systems: Technologies and Applications (pp. 495-504). Springer, Berlin, Heidelberg.