Date of Award
Doctor of Philosophy
School of Engineering and Mathematics
Faculty of Communications, Health and Science
Associate Professor Jim Cross
Associate Professor A. Bouzerdoum
Three new iterative, dynamically self-adaptive, derivative-free and training-parameter-free artificial neural network (ANN) training algorithms are developed. They are the self-adaptive back propagation, multi-directional and restart ANN training algorithms. The descent direction in self-adaptive back propagation training is determined implicitly by a central difference approximation scheme, which chooses its step size according to the convergence behavior of the error function. This approach trains an ANN when the gradient information of the corresponding error function is not readily available. The self-adaptive variable learning rates per epoch are determined dynamically using a constrained interpolation search; as a result, an appropriate descent of the error function is achieved. The multi-directional training algorithm is self-adaptive and derivative free. It orients an initial search vector in a descent location at the early stage of training. Individual learning rates and momentum terms for all the ANN weights are determined optimally. The search directions are derived from rectilinear and Euclidean paths, which explore stiff ridges and valleys of the error surface to improve training. The restart training algorithm is derivative free. It redefines a degenerate simplex at a rescale phase. This multi-parameter training algorithm updates the ANN weights simultaneously instead of individually. The descent directions are derived from the centroid of a simplex along a reflection point opposite to the worst vertex. The algorithm is robust and improves local search. These ANN training methods are appropriate when the corresponding ANN error function is discontinuous or its Hessian matrix is ill-conditioned or singular. The convergence properties of the algorithms are proved where possible. All the training algorithms successfully train ANNs on exclusive OR (XOR), parity, character-recognition and forecasting problems.
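The central difference approximation mentioned above can be illustrated with a minimal sketch. The function name, the fixed perturbation step `h`, and the element-wise loop are illustrative assumptions, not the thesis's actual self-adaptive implementation, which adjusts the step size from the convergence behavior of the error function:

```python
import numpy as np

def central_difference_gradient(error_fn, weights, h=1e-5):
    """Approximate the gradient of an ANN error function by central
    differences, so no analytic derivative is required.

    error_fn : callable mapping a 1-D weight vector to a scalar error
    weights  : 1-D NumPy array of ANN weights
    h        : perturbation step (fixed here for illustration)
    """
    grad = np.zeros_like(weights)
    for i in range(len(weights)):
        step = np.zeros_like(weights)
        step[i] = h
        # (f(w + h*e_i) - f(w - h*e_i)) / (2h), accurate to O(h^2)
        grad[i] = (error_fn(weights + step) - error_fn(weights - step)) / (2.0 * h)
    return grad
```

For a quadratic error such as `f(w) = sum(w**2)`, the approximation recovers the analytic gradient `2w` to within the truncation error of the scheme.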
The simulation results on the XOR, parity and character-recognition problems suggest that all the training algorithms improve significantly over the standard back propagation algorithm in average number of epochs, function evaluations and terminal function values. The multivariate ANN calibration problem, a regression model with a small data set, is relatively difficult to train. In forecasting problems, an ANN is trained to extrapolate the data in a validation period, and the extrapolation results are compared with the actual data. The trained ANN performs better than the statistical regression method in mean absolute deviation, mean squared error and relative percentage error. The restart training algorithm succeeds in training a problem where the other training algorithms face difficulty. It is shown that a seasonal time series problem possesses a Hessian matrix with a high condition number; convergence difficulties as well as slow training are therefore not atypical. The research exploits the geometry of the error surface to identify self-adaptive optimized learning rates and momentum terms. Consequently, the algorithms converge with a high success rate. These attributes make the training algorithms self-adaptive, automatic, parameter free, efficient and easy to use.
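The three comparison metrics named above have standard textbook definitions; the sketch below uses those standard forms (the thesis may define them with minor variations, e.g. in how the percentage error is averaged):

```python
import numpy as np

def forecast_errors(actual, predicted):
    """Return (MAD, MSE, RPE) for comparing forecasts against actual data.

    MAD : mean absolute deviation of the residuals
    MSE : mean squared error of the residuals
    RPE : mean absolute residual relative to the actual value, in percent
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual = actual - predicted
    mad = float(np.mean(np.abs(residual)))
    mse = float(np.mean(residual ** 2))
    rpe = float(100.0 * np.mean(np.abs(residual) / np.abs(actual)))
    return mad, mse, rpe
```

A trained network is judged better than the regression baseline when it yields smaller values on all three metrics over the validation period.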
Ahmed, S. (2000). Development of self-adaptive back propagation and derivative free training algorithms in artificial neural networks. https://ro.ecu.edu.au/theses/1539