Convergence of BP Algorithm for Training MLP with Linear Output
Hongmei Shao¹, Wei Wu²*, Wenbin Liu³
¹ College of Mathematics and Computational Science, China University of Petroleum, Dongying, 257061, China
The capability of multilayer perceptrons (MLPs) to approximate continuous functions with arbitrary accuracy has been demonstrated over the past decades. The back propagation (BP) algorithm is the most popular learning algorithm for training MLPs. In this paper, a simple iteration formula is used to select the learning rate for each cycle of the training procedure, and a convergence result is presented for the BP algorithm applied to an MLP with one hidden layer and a linear output unit. The monotonicity of the error function during the training iteration is also guaranteed.
Key words: Multilayer perceptron; BP algorithm; Convergence; Monotonicity.
AMS subject classifications: 92B20, 68T05
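To illustrate the setting the abstract describes, the following is a minimal sketch of batch BP training for an MLP with one sigmoid hidden layer and a single linear output unit. The paper's actual learning-rate iteration formula is not reproduced here; the diminishing rate `eta = 0.5 / sqrt(n)` below is a hypothetical stand-in, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, y, hidden=5, epochs=200, seed=0):
    """Batch BP for a one-hidden-layer MLP with a single linear output unit.

    NOTE: the per-cycle learning rate used here is an assumed stand-in;
    the paper selects it by its own iteration formula, not shown here.
    """
    rng = np.random.default_rng(seed)
    V = rng.normal(scale=0.5, size=(X.shape[1], hidden))  # input->hidden weights
    w = rng.normal(scale=0.5, size=hidden)                # hidden->output weights
    errors = []
    for n in range(1, epochs + 1):
        H = sigmoid(X @ V)            # hidden-layer activations
        out = H @ w                   # linear output unit
        e = out - y
        errors.append(0.5 * np.mean(e ** 2))  # quadratic error function
        # Back propagation: gradients of the error w.r.t. w and V
        grad_w = H.T @ e / len(y)
        dH = (e[:, None] * w[None, :]) * H * (1.0 - H)
        grad_V = X.T @ dH / len(y)
        eta = 0.5 / np.sqrt(n)        # assumed diminishing learning rate
        w -= eta * grad_w
        V -= eta * grad_V
    return w, V, errors
```

Running the sketch on a smooth regression target, the recorded error sequence decreases from its initial value, mirroring the monotonicity property the paper establishes for its learning-rate choice.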