VLSI implementation of a neural network classifier based on the saturating linear activation function
Faculty of Computing, Health and Science
School of Engineering and Mathematics
This paper presents a digital VLSI implementation of a feedforward neural network classifier based on the saturating linear activation function. The architecture consists of a single hidden layer that computes a weighted sum followed by a saturating linear activation function. The hardware implementation of such a network offers a significant advantage in circuit complexity compared to a network based on a sigmoid activation function, without compromising classification performance. Simulation results on two benchmark problems show that feedforward neural networks with the saturating linearity perform as well as networks with the sigmoid activation function. The architecture also supports variable precision, yielding higher computational throughput at lower precision.
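For illustration, the forward pass of such a single-hidden-layer network can be sketched in software. This is a minimal sketch, not the paper's hardware design: it assumes saturation bounds of ±1 and floating-point arithmetic, whereas the VLSI implementation operates on fixed-point values at configurable precision.

```python
import numpy as np

def satlin(x):
    # Saturating linear activation: identity on [-1, 1],
    # clipped to the bounds outside that interval.
    # (Assumed bounds; the hardware may saturate at other limits.)
    return np.clip(x, -1.0, 1.0)

def forward(x, W1, b1, W2, b2):
    # Single hidden layer: weighted sum followed by the
    # saturating linearity, then a linear output layer.
    h = satlin(W1 @ x + b1)
    return W2 @ h + b2
```

In hardware, `satlin` reduces to a comparator and a multiplexer per neuron, which is the source of the circuit-complexity advantage over a sigmoid, whose evaluation requires a lookup table or piecewise approximation.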