Convergence Analysis on Approximate Reinforcement Learning

Document Type

Conference Proceeding

Faculty

Faculty of Computing, Health and Science

School

School of Computer and Security Science

RAS ID

8425

Comments

Leng, J., Jain, L., & Fyfe, C. (2007, November). Convergence analysis on approximate reinforcement learning. In International Conference on Knowledge Science, Engineering and Management (pp. 85-91). Springer, Berlin, Heidelberg.

Abstract

Temporal difference (TD) learning is a form of approximate reinforcement learning that uses incremental learning updates. For large, stochastic and dynamic systems, however, it remains an open question how to analyse the convergence and sensitivity of TD algorithms, since a suitable methodology is lacking. Moreover, analysing convergence and parameter sensitivity is very expensive, as such analysis metrics can only be obtained by running experiments with different parameter values. In this paper, we use the TD(λ) learning control algorithm with a linear function approximation technique known as tile coding to help a soccer agent learn the optimal control processes. The aim of this paper is to propose a methodology for analysing performance and for adaptively selecting a set of optimal parameter values in the TD(λ) learning algorithm.
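
For readers unfamiliar with the update rule the abstract refers to, the following is a minimal sketch of a TD(λ) step with linear function approximation and accumulating eligibility traces, where the feature vector would typically come from tile coding. All names and parameter values are illustrative assumptions, not code from the paper.

```python
import numpy as np

def td_lambda_update(w, z, phi, phi_next, reward,
                     alpha=0.1, gamma=0.99, lam=0.8):
    """One TD(lambda) update for a linear value function V(s) = w . phi(s).

    w        -- weight vector of the linear approximation
    z        -- eligibility trace vector (same shape as w)
    phi      -- feature vector of the current state (e.g. from tile coding)
    phi_next -- feature vector of the successor state
    reward   -- immediate reward observed on the transition
    alpha, gamma, lam -- step size, discount factor, trace decay (assumed values)
    """
    # TD error: one-step bootstrapped target minus current estimate
    delta = reward + gamma * np.dot(w, phi_next) - np.dot(w, phi)
    # Decay existing traces and accumulate credit for the current features
    z = gamma * lam * z + phi
    # Move the weights along the traces in proportion to the TD error
    w = w + alpha * delta * z
    return w, z
```

The sensitivity analysis discussed in the abstract concerns how performance varies with choices such as alpha, gamma and lam above, which is why selecting them adaptively rather than by exhaustive re-runs is of interest.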
