TITLE

A New Gradient Method with an Optimal Stepsize Property

AUTHOR(S)
Dai, Y. H.; Yang, X. Q.
PUB. DATE
January 2006
SOURCE
Computational Optimization & Applications;Jan2006, Vol. 33 Issue 1, p73
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
The gradient method for the symmetric positive definite linear system Ax = b takes the form x_{k+1} = x_k - α_k g_k, where g_k = Ax_k - b is the residual of the system at x_k and α_k is the stepsize. The stepsize α_k = 2/(λ_1 + λ_n) is optimal in the sense that it minimizes the modulus ‖I - αA‖_2, where λ_1 and λ_n are the minimal and maximal eigenvalues of A, respectively. Since λ_1 and λ_n are unknown to users, the gradient method with the optimal stepsize is usually mentioned only in theory. In this paper, we propose a new stepsize formula which tends to the optimal stepsize as k → ∞. At the same time, the minimal and maximal eigenvalues, λ_1 and λ_n, of A and their corresponding eigenvectors can be obtained.
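For illustration, here is a minimal sketch of the classical gradient iteration described above with the theoretically optimal fixed stepsize α = 2/(λ_1 + λ_n); this is not the paper's new adaptive formula. It assumes NumPy and a small symmetric positive definite system whose extreme eigenvalues can be computed directly, whereas in practice they are unknown, which is what motivates the paper's stepsize.

```python
import numpy as np

def gradient_method(A, b, x0, num_iters=500, tol=1e-10):
    """Classical gradient iteration x_{k+1} = x_k - alpha * g_k with the
    fixed stepsize alpha = 2 / (lambda_1 + lambda_n) that minimizes
    ||I - alpha*A||_2. A is assumed symmetric positive definite; the
    extreme eigenvalues are computed directly here only for illustration."""
    eigvals = np.linalg.eigvalsh(A)            # eigenvalues of A, ascending
    alpha = 2.0 / (eigvals[0] + eigvals[-1])   # 2 / (lambda_1 + lambda_n)
    x = x0.astype(float).copy()
    for _ in range(num_iters):
        g = A @ x - b                          # residual g_k = A x_k - b
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g                      # x_{k+1} = x_k - alpha * g_k
    return x

# Example on a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gradient_method(A, b, np.zeros(2))
print(x, np.linalg.solve(A, b))                # the two should agree closely
```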
ACCESSION #
20191549

Related Articles

  • A New Gradient Method with an Optimal Stepsize Property. Dai, Y. H.; Yang, X. Q. // Computational Optimization & Applications;Jan2006, Vol. 33 Issue 1, p73 

    The gradient method for the symmetric positive definite linear system Ax = b takes the form x_{k+1} = x_k - α_k g_k, where g_k = Ax_k - b is the residual of the system at x_k and α_k is the stepsize. The stepsize α_k = 2/(λ_1 + λ_n) is optimal in the sense that it minimizes the modulus ‖I - αA‖_2, where λ_1 and λ_n...

  • The Lagrangian Globalization Method for Nonsmooth Constrained Equations. Xiaojiao Tong; Liqun Qi; Yu-Fei Yang // Computational Optimization & Applications;Nov2005, Vol. 32 Issue 3, p89 

    The difficulty with optimization-based algorithms for the solution of nonlinear equations lies in the fact that the traditional methods for solving the optimization problem have been mainly concerned with finding a stationary point or a local minimizer of the underlying optimization problem, which...

  • An augmented Lagrangian dual approach for the H-weighted nearest correlation matrix problem. HOUDUO QI; DEFENG SUN // IMA Journal of Numerical Analysis;Apr2011, Vol. 31 Issue 2, p491 

    Higham (2002, IMA J. Numer. Anal., 22, 329–343) considered two types of nearest correlation matrix problems, namely the W-weighted case and the H-weighted case. While the W-weighted case has since been well studied, yielding several efficient Lagrangian dual-based numerical methods...

  • PRECONDITIONING HIGHER ORDER FINITE ELEMENT SYSTEMS BY ALGEBRAIC MULTIGRID METHOD OF LINEAR ELEMENTS. Yun-qing Huang; Shi Shu; Xi-jun Yu // Journal of Computational Mathematics;Sep2006, Vol. 24 Issue 5, p657 

    We present and analyze a robust preconditioned conjugate gradient method for the higher order Lagrangian finite element systems of a class of elliptic problems. An auxiliary linear element stiffness matrix is chosen to be the preconditioner for higher order finite elements. Then an algebraic...

  • Multi-step nonlinear conjugate gradient methods for unconstrained minimization. Ford, John A.; Narushima, Yasushi; Yabe, Hiroshi // Computational Optimization & Applications;Jun2008, Vol. 40 Issue 2, p191 

    Conjugate gradient methods are appealing for large scale nonlinear optimization problems, because they avoid the storage of matrices. Recently, seeking fast convergence of these methods, Dai and Liao (Appl. Math. Optim. 43:87-101, 2001) proposed a conjugate gradient method based on the secant...

  • A parallel preconditioned method for sparse linear systems. LIU Xiu-min; LV Quan-yi; DU Yan-jun // Basic Sciences Journal of Textile Universities / Fangzhi Gaoxia;Jun2012, Vol. 25 Issue 2, p180 

    A parallel preconditioned conjugate gradient method is proposed in this manuscript to solve linear systems with a sparse, symmetric positive definite coefficient matrix. The preconditioning idea of the iterative method is derived. First, given the preconditioner M, the iterative method is constructed to...

  • Modification of the Wolfe Line Search Rules to Satisfy the Descent Condition in the Polak-Ribière-Polyak Conjugate Gradient Method. Armand, P. // Journal of Optimization Theory & Applications;Feb2007, Vol. 132 Issue 2, p287

    This paper proposes a line search technique to satisfy a relaxed form of the strong Wolfe conditions in order to guarantee the descent condition at each iteration of the Polak-Ribière-Polyak conjugate gradient algorithm. It is proved that this line search algorithm preserves the usual...

  • New Hybrid Globally Convergent CG-Algorithms for Nonlinear Unconstrained Optimization. AL-Bayati, Abbas Y.; AL-Baro, Barah M. // Australian Journal of Basic & Applied Sciences;2010, Vol. 4 Issue 9, p4223 

    Problem Statement: Conjugate gradient methods, which we have investigated in this study, are widely used in optimization, especially for large-scale optimization problems, because they do not require the storage of any matrix. The purpose of this construction is to find new CG-algorithms suitable...

  • Backtracking Adaptive Search: The Distribution of the Number of Iterations to Convergence. Wood, G. R.; Bulger, D. W.; Baritompa, W. P.; Alexander, D. L. J. // Journal of Optimization Theory & Applications;Mar2006, Vol. 128 Issue 3, p547 

    Backtracking adaptive search is a simplified stochastic optimization procedure which permits the acceptance of worsening objective function values. It generalizes hesitant adaptive search, which in turn is a generalization of pure adaptive search. In this paper, we use ideas from the theory of...
