A New Exploratory Neural Network Training Method

Document Type

Conference Proceeding

Faculty

Faculty of Computing, Health and Science

School

School of Computing, Health and Science

RAS ID

4626

Comments

Ahmed, S., Cross, J., & Bouzerdoum, S. (2006). A new exploratory neural network training method. In Intelligent Engineering Systems through Artificial Neural Networks, Volume 16. ASME Press.

Abstract

A new exploratory, self-adaptive, derivative-free training algorithm is developed. It evaluates only the error function, which is reduced to a set of sub-problems in a constrained search space, and the search directions follow rectilinear moves. To accelerate training, an interpolation search is developed that determines the best learning rates. The constrained interpolation search selects learning rates such that the search direction is not deceived in locating the minimum trajectory of the error function. The proposed algorithm is practical when the error function is ill-conditioned, implying that the Hessian matrix is unstable, or when derivative evaluation is difficult. The benchmark XOR problem (Nitta, 2003) is used to compare the performance of the proposed algorithm with standard back-propagation training methods. The proposed algorithm improves on the standard first-order back-propagation method by a factor of 32 in function evaluations, and the standard deviation of the same metric improves by a ratio of 38:1. This implies that the proposed algorithm encounters fewer intractable instances.
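The abstract describes a derivative-free training method that searches along rectilinear (coordinate-wise) directions and uses an interpolation search to pick the learning rate. A minimal sketch of that general idea, applied to the XOR benchmark, might look as follows; the network size (2-2-1), the probe step `h`, the step bound, and the quadratic-interpolation rule are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def forward(theta, x):
    # Assumed 2-2-1 tanh network; theta packs weights and biases:
    # [w11, w12, b1, w21, w22, b2, v1, v2, c]
    h1 = math.tanh(theta[0] * x[0] + theta[1] * x[1] + theta[2])
    h2 = math.tanh(theta[3] * x[0] + theta[4] * x[1] + theta[5])
    return math.tanh(theta[6] * h1 + theta[7] * h2 + theta[8])

# The XOR benchmark patterns.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def error(theta):
    # Sum-of-squares error; the only quantity the search ever evaluates.
    return sum((forward(theta, x) - t) ** 2 for x, t in XOR)

def train(seed=0, iters=2000, h=0.2, max_step=1.0, tol=1e-3):
    rng = random.Random(seed)
    theta = [rng.uniform(-1, 1) for _ in range(9)]
    evals = 0
    for _ in range(iters):
        e0 = error(theta)
        evals += 1
        if e0 < tol:
            break
        # Rectilinear moves: probe each coordinate direction in turn.
        for i in range(len(theta)):
            orig = theta[i]
            theta[i] = orig + h
            ep = error(theta)
            theta[i] = orig - h
            em = error(theta)
            evals += 2
            # Interpolation search: fit a quadratic through the three
            # samples and step to its minimum when it is convex.
            denom = em - 2 * e0 + ep
            if denom > 1e-12:
                step = h * (em - ep) / (2 * denom)
                # Constrain the step so the search is not deceived.
                step = max(-max_step, min(max_step, step))
            else:
                step = h if ep < em else -h
            theta[i] = orig + step
            e_new = error(theta)
            evals += 1
            if e_new <= e0:
                e0 = e_new          # accept the improving move
            else:
                theta[i] = orig     # reject a move that increases error
    return theta, e0, evals
```

Because each coordinate move is accepted only when it does not increase the error, the training error is non-increasing, and the function-evaluation counter `evals` is the natural cost metric the abstract compares against back-propagation.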
