Convergence of gradient method for double parallel feedforward neural network
J. Wang 1, W. Wu 2, Z. Li 1, L. Li 1
1 School of Mathematics and Computational Sciences, Petroleum University of China, Dongying, 257061, P.R. China.
2 School of Mathematical Sciences, Dalian University of Technology, Dalian, 116024, P.R. China.
Received by the editors May 4, 2009 and, in revised form, March 22, 2011.
The deterministic convergence of a gradient method for training a Double Parallel Feedforward Neural Network (DPFNN) is studied. A DPFNN is a parallel connection of a multi-layer feedforward neural network and a single-layer feedforward neural network. The gradient method is used to train the DPFNN on a finite training sample set. The monotonicity of the error function during the training iteration is proved. Some weak and strong convergence results are then obtained, indicating that the gradient of the error function tends to zero and that the weight sequence converges to a fixed point, respectively. Numerical examples are provided that support our theoretical findings and demonstrate that the DPFNN has faster convergence and better generalization capability than the common feedforward neural network.
AMS subject classifications: 65M06, 76D45
Key words: Double parallel feedforward neural network, gradient method, monotonicity, convergence.
Email: email@example.com, firstname.lastname@example.org
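To make the architecture described in the abstract concrete, the following is a minimal Python sketch of a single-output DPFNN and one batch gradient step, assuming the usual formulation in which the output unit sums a sigmoid hidden-layer path and a direct linear path from the inputs. The class name DPFNN, the weight names W, w, v, the sigmoid activation, and the learning rate eta are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

class DPFNN:
    """Illustrative single-output DPFNN: a hidden-layer path in parallel
    with a direct linear path from the inputs to the output."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_hidden, n_in))  # input -> hidden
        self.w = rng.normal(scale=0.1, size=n_hidden)          # hidden -> output
        self.v = rng.normal(scale=0.1, size=n_in)              # direct input -> output

    def forward(self, x):
        h = sigmoid(self.W @ x)         # multi-layer (hidden) path
        return self.w @ h + self.v @ x  # parallel combination at the output

    def train_step(self, xs, ys, eta=0.1):
        # One batch gradient step on E = (1/2) * sum_j (y_hat_j - y_j)^2.
        gW = np.zeros_like(self.W)
        gw = np.zeros_like(self.w)
        gv = np.zeros_like(self.v)
        for x, y in zip(xs, ys):
            z = self.W @ x
            h = sigmoid(z)
            err = (self.w @ h + self.v @ x) - y  # residual for this sample
            gw += err * h                        # dE/dw
            gv += err * x                        # dE/dv
            gW += err * np.outer(self.w * sigmoid_prime(z), x)  # dE/dW
        self.W -= eta * gW
        self.w -= eta * gw
        self.v -= eta * gv
```

With a small constant learning rate, the batch error in such a sketch typically decreases monotonically, which is the behavior the paper's monotonicity result formalizes under its stated conditions.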