Convergence of gradient method for training ridge polynomial neural network

Yu, Xin; Deng, Fei
May 2013
Neural Computing & Applications; May 2013 Supplement, Vol. 22, p333
Academic Journal
The ridge polynomial neural network (RPNN) is one of the most popular higher-order neural networks: it can approximate a wide class of functions while avoiding the combinatorial increase in the number of weights that full higher-order networks require. In this paper, we study the convergence of the gradient method with a batch updating rule for the ridge polynomial neural network, and we prove a monotonicity theorem and two convergence theorems (one weak and one strong). Experimental results demonstrate that the proposed theorems are valid.
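To make the setting concrete, below is a minimal NumPy sketch of a ridge polynomial network trained with a batch gradient method. It assumes the standard Shin-Ghosh pi-sigma formulation, f(x) = g(sum_{i=1}^N prod_{j=1}^i (w_ij . x + b_ij)), together with a quadratic loss; all function names, the tanh output activation, and the learning rate are illustrative choices, not taken from the paper. The weight saving mentioned in the abstract is visible in the parameter count: an order-N RPNN on d inputs uses sum_{i=1}^N i(d+1) = N(N+1)(d+1)/2 weights, versus the O(d^N) terms of a full N-th-order network.

```python
import numpy as np

# Hedged sketch of an RPNN with batch gradient training. Assumes the
# Shin-Ghosh form: a sum of pi-sigma units of increasing degree, where
# unit i multiplies i ridge functions w_ij . x + b_ij. Names are hypothetical.

def init_rpnn(dim, order, rng):
    """Weights w_ij and biases b_ij; unit i uses only entries with j <= i."""
    W = 0.1 * rng.standard_normal((order, order, dim))
    b = 0.1 * rng.standard_normal((order, order))
    return W, b

def rpnn_forward(W, b, X):
    """ridge[i, j] holds w_ij . x + b_ij for every sample; unit i multiplies
    its first i+1 ridge functions, and the unit outputs are summed."""
    ridge = np.einsum('ijd,nd->ijn', W, X) + b[:, :, None]
    s = sum(np.prod(ridge[i, :i + 1], axis=0) for i in range(W.shape[0]))
    return np.tanh(s), ridge

def batch_gradient_step(W, b, X, y, lr=0.05):
    """One batch (offline) update: the gradient is accumulated over the
    whole training set before any weight changes, matching the batch rule."""
    n, order = X.shape[0], W.shape[0]
    pred, ridge = rpnn_forward(W, b, X)
    err = (pred - y) * (1.0 - pred ** 2) / n  # dL/ds for L = mean sq. error / 2
    gW, gb = np.zeros_like(W), np.zeros_like(b)
    for i in range(order):
        for j in range(i + 1):
            # d(product)/d(ridge_ij) = product of the unit's other factors
            others = np.prod(np.delete(ridge[i, :i + 1], j, axis=0), axis=0)
            grad_r = err * others
            gW[i, j] = grad_r @ X
            gb[i, j] = grad_r.sum()
    return W - lr * gW, b - lr * gb

# Toy usage: fit a 1-D target. For a sufficiently small learning rate the
# batch loss should decrease monotonically, which is what a monotonicity
# theorem of this kind asserts.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (64, 1))
y = np.sin(2.0 * X[:, 0])
W, b = init_rpnn(dim=1, order=3, rng=rng)
for _ in range(2000):
    W, b = batch_gradient_step(W, b, X, y)
```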


Related Articles

  • A note on the global convergence theorem of the scaled conjugate gradient algorithms proposed by Andrei. Babaie-Kafaki, Saman // Computational Optimization & Applications;Jun2012, Vol. 52 Issue 2, p409 

In (Andrei, Comput. Optim. Appl. 38:402-416), the efficient scaled conjugate gradient algorithm SCALCG is proposed for solving unconstrained optimization problems. However, due to a wrong inequality used in (Andrei, Comput. Optim. Appl. 38:402-416) to show the sufficient descent property for...

  • Wavelet-based adaptive robust control for a class of MIMO uncertain nonlinear systems. Chen, Chiu-Hsiung // Neural Computing & Applications;Jun2012, Vol. 21 Issue 4, p747 

In this study, a wavelet neural network (WNN)-based adaptive robust control (WARC) strategy is investigated to resolve the tracking control problem of a class of multi-input multi-output (MIMO) uncertain nonlinear systems. The proposed control system comprises an adaptive wavelet controller...

  • Network Traffic Prediction based on Particle Swarm BP Neural Network. Yan Zhu; Guanghua Zhang; Jing Qiu // Journal of Networks;Nov2013, Vol. 8 Issue 11, p2685 

The traditional BP neural network algorithm has shortcomings such as easily falling into local minima and slow convergence. Particle swarm optimization is an evolutionary computation technique based on swarm intelligence, but it cannot guarantee global convergence. Artificial Bee...

  • Numerical Computation and Application for LPNN. Guiting Li; Bingtuan Wang // Journal of Systems Science & Information;Jun2005, Vol. 3 Issue 2, p393 

This paper gives some practical computer simulation cases and numerical computations for the Legendre Polynomial Neural Network. The computer simulations show that the convergence speed of the Legendre Polynomial Neural Network is much faster than that of the BP neural network.

  • PRP-Type Direct Search Methods for Unconstrained Optimization. Qunfeng Liu; Wanyou Cheng // Applied Mathematics;Jun2011, Vol. 2 Issue 6, p725 

Three PRP-type direct search methods for unconstrained optimization are presented. The methods adopt three kinds of recently developed descent conjugate gradient methods and the idea of frame-based direct search methods. Global convergence is shown for continuously differentiable functions. Data...

  • New Scaled Sufficient Descent Conjugate Gradient Algorithm for Solving Unconstraint Optimization Problems. AL-Bayati, Abbas Y.; Muhammad, Rafiq S. // Journal of Computer Science;2010, Vol. 6 Issue 5, p511 

Problem statement: The scaled hybrid Conjugate Gradient (CG) algorithm, usually used for minimizing non-linear functions, was presented and compared with two standard well-known NAG routines, yielding a new, fast, comparable algorithm. Approach: We proposed a new hybrid technique based on...

  • A New CG-Algorithm with Self-Scaling VM-Update for Unconstraint Optimization. Al-Bayati, Abbas Y.; Latif, Ivan S. // Applications & Applied Mathematics;Jun2012, Vol. 7 Issue 1, p226 

In this paper, a new combined extended Conjugate-Gradient (CG) and Variable-Metric (VM) method is proposed for solving unconstrained large-scale numerical optimization problems. The basic idea is to choose a combination of the current gradient and some previous search directions as a new search...

  • Optimized first-order methods for smooth convex minimization. Kim, Donghwan; Fessler, Jeffrey // Mathematical Programming;Sep2016, Vol. 159 Issue 1/2, p81 

We introduce new optimized first-order methods for smooth unconstrained convex minimization. Drori and Teboulle (Math Program 145(1-2):451-482, 2014) recently described a numerical method for computing the N-iteration optimal step coefficients in a class of first-order algorithms that...

  • Two-parameter conjugate gradient projection method and its application. Hong-fang Cui // Advanced Materials Research;7/24/2014, Vol. 989-994, p1802 

On the basis of the CD conjugate gradient method, the article builds a new two-parameter P-NCD projected conjugate gradient method and establishes its convergence under the strong Wolfe line search,...

