A Modified BFGS Method Without Line Searches for Nonconvex Unconstrained Optimization

Yunhai Xiao; Zengxin Wei; Li Zhang
June 2006
Advances in Theoretical & Applied Mathematics;2006, Vol. 1 Issue 2, p149
Academic Journal
We propose a modified BFGS method with a so-called fixed-steplength strategy. Global and superlinear convergence still hold. Preliminary numerical results show that the proposed method is well suited to problems where evaluating the objective function is expensive, and that it saves substantial computation time.
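To illustrate the idea, here is a minimal sketch of a BFGS-type iteration that uses a fixed steplength in place of a line search, so each iteration needs only one gradient evaluation and no objective-function evaluations. This is a generic sketch under standard assumptions, not the authors' exact method; their modification and steplength rule differ in detail. The function names and the cautious curvature-condition safeguard are illustrative choices.

```python
import numpy as np

def bfgs_fixed_step(grad, x0, alpha=0.1, tol=1e-6, max_iter=5000):
    """Quasi-Newton iteration with a fixed steplength (no line search).

    A generic sketch: the inverse-Hessian approximation H is updated by
    the standard BFGS formula, and the update is skipped when the
    curvature condition fails (a common safeguard for nonconvex f).
    """
    n = x0.size
    H = np.eye(n)                  # inverse-Hessian approximation
    x = x0.astype(float)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                 # quasi-Newton search direction
        x_new = x + alpha * d      # fixed steplength: no f-evaluations
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-10 * np.linalg.norm(s) * np.linalg.norm(y):
            # standard BFGS inverse update
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example: minimize the convex quadratic f(x) = x'Ax/2 - b'x,
# whose minimizer satisfies Ax = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bfgs_fixed_step(lambda x: A @ x - b, np.zeros(2))
```

Because the steplength is fixed, the cost per iteration is a single gradient call; this is the setting the abstract targets, where objective evaluations dominate the run time.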


Related Articles

  • Convergence Properties of the Regularized Newton Method for the Unconstrained Nonconvex Optimization. Ueda, Kenji; Yamashita, Nobuo // Applied Mathematics & Optimization;Aug2010, Vol. 62 Issue 1, p27 

    The regularized Newton method (RNM) is one of the efficient solution methods for the unconstrained convex optimization. It is well-known that the RNM has good convergence properties as compared to the steepest descent method and the pure Newton’s method. For example, Li, Fukushima, Qi and...

  • MODIFIED NEWTON'S METHODS OF CUBIC CONVERGENCE FOR MULTIPLE ROOTS. Siyul Lee; Young-Hee Kim // Journal of Computational Analysis & Applications;Apr2012, Vol. 14 Issue 3, p516 

    From Newton's method of solving nonlinear equations numerically, various modifications concerning accelerated order, or multiple roots had been actively developed. In this paper, we focus on those concerning multiple roots. Based on existing modifications for simple roots, new methods for...

  • Stabilized sequential quadratic programming for optimization and a stabilized Newton-type method for variational problems. Fernández, Damián; Solodov, Mikhail // Mathematical Programming;Sep2010, Vol. 125 Issue 1, p47 

    The stabilized version of the sequential quadratic programming algorithm (sSQP) had been developed in order to achieve fast convergence despite possible degeneracy of constraints of optimization problems, when the Lagrange multipliers associated to a solution are not unique. Superlinear...

  • DERIVATIVE-FREE OPTIMAL ITERATIVE METHODS. Khattri, S. K.; Agarwal, R. P. // Computational Methods in Applied Mathematics;2010, Vol. 10 Issue 4, p368 

    In this study, we develop an optimal family of derivative-free iterative methods. Convergence analysis shows that the methods are fourth order convergent, which is also verified numerically. The methods require three functional evaluations during each iteration. Though the methods are...

  • A New Numerical Solving Method for Equations of One Variable. Eskandari, Hamideh // International Journal of Applied Mathematics & Computer Sciences;2009, Vol. 5 Issue 3, p183 

    In this paper, a nonlinear equation f(x) = 0 is solved by using Taylor's expansion through a new iteration method and it will be shown that this new method can present better approximation than Newton method. Before, for solving a nonlinear algebraic equation f(x) = 0, the hybrid iteration method...

  • Postcritical regimes in the nonlinear problem of vortex motion under the free surface of a weighable fluid. Zhitnikov, V.; Sherykhalina, N.; Sherykhalin, O. // Journal of Applied Mechanics & Technical Physics;Jan2000, Vol. 41 Issue 1, p62 

    An improved Levi-Civita method in which the singularities of the desired function are taken into account by introducing terms containing power singularities is proposed. Results of numerical analysis of the nonlinear problem of a vortex in a bounded flow of an ideal weighable fluid ( Fr>1) are...

  • On convergence of the Gauss-Newton method for convex composite optimization. Li, Chong; Wang, Xinghua // Mathematical Programming;2002, Vol. 91 Issue 2, p349 

    The local quadratic convergence of the Gauss-Newton method for convex composite optimization f = h∘F is established for any convex function h with the minima set C, extending Burke and Ferris' results in the case when C is a set of weak sharp minima for h.

  • On the convergence of the Newton/log-barrier method. Wright, Stephen J. // Mathematical Programming;2001, Vol. 90 Issue 1, p71 

    Abstract. In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates...

  • Superlinearly Convergent Trust-Region Method without the Assumption of Positive-Definite Hessian. Zhang, J. L.; Wang, Y.; Zhang, X. S. // Journal of Optimization Theory & Applications;Apr2006, Vol. 129 Issue 1, p201 

    In this paper, we reinvestigate the trust-region method by reformulating its subproblem: the trust-region radius is guided by gradient information at the current iteration and is self-adaptively adjusted. A trust-region algorithm based on the proposed subproblem is proved to be globally...

