VLSI implementation of a neural network classifier based on the saturating linear activation function

Document Type

Conference Proceeding

Publisher

IEEE

Faculty

Faculty of Computing, Health and Science

School

School of Engineering and Mathematics

RAS ID

19

Comments

Bermak, A., & Bouzerdoum, A. (2002, November). VLSI implementation of a neural network classifier based on the saturating linear activation function. In Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02) (Vol. 2, pp. 981-985). IEEE.

Abstract

This paper presents a digital VLSI implementation of a feedforward neural network classifier based on the saturating linear activation function. The architecture consists of one hidden layer performing the weighted sum followed by a saturating linear activation function. The hardware implementation of such a network offers a significant advantage in circuit complexity over a network based on a sigmoid activation function, without compromising classification performance. Simulation results on two benchmark problems show that feedforward neural networks with the saturating linearity perform as well as networks with the sigmoid activation function. The architecture can also handle variable precision, yielding higher computational throughput at lower precision.
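The hidden-layer computation described above (weighted sum followed by a saturating linearity) can be sketched as follows. This is a minimal illustration, not the paper's VLSI design: the saturation bounds of [-1, 1], the layer sizes, and the random weights are all assumptions for demonstration.

```python
import numpy as np

def satlin(x):
    """Saturating linear activation: identity on [-1, 1], clipped outside.
    (A common definition; the paper's exact bounds are an assumption here.)"""
    return np.clip(x, -1.0, 1.0)

def sigmoid(x):
    """Sigmoid activation, for comparison with the saturating linearity."""
    return 1.0 / (1.0 + np.exp(-x))

# One-hidden-layer feedforward pass with hypothetical weights and input.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)          # input vector (size assumed)
W1 = rng.standard_normal((3, 4))    # hidden-layer weights (size assumed)
W2 = rng.standard_normal((2, 3))    # output-layer weights (size assumed)

hidden = satlin(W1 @ x)             # weighted sum + saturating linearity
output = W2 @ hidden                # classifier outputs
```

The hardware appeal of `satlin` is visible even in software: it needs only a comparison and a clamp per unit, whereas the sigmoid requires evaluating an exponential (or a lookup table in silicon).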

DOI

10.1109/ICONIP.2002.1198207

