TITLE

Tabu Guided Generalized Hill Climbing Algorithms

AUTHOR(S)
Vaughan, Diane E.; Jacobson, Sheldon H.
PUB. DATE
September 2004
SOURCE
Methodology & Computing in Applied Probability;Sep2004, Vol. 6 Issue 3, p343
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
This paper formulates tabu search strategies that guide generalized hill climbing (GHC) algorithms for addressing NP-hard discrete optimization problems. The resulting framework, termed tabu guided generalized hill climbing (TG²HC) algorithms, uses a tabu release parameter that governs the probability of accepting solutions currently on the tabu list. TG²HC algorithms are modeled as a set of stationary Markov chains, where the tabu list is fixed for each outer loop iteration. This framework provides practitioners with guidelines for developing tabu search strategies to use in conjunction with GHC algorithms that preserve some of the algorithms' known performance properties. In particular, sufficient conditions are obtained that indicate how to design iterations of problem-specific tabu search strategies, where the stationary distributions associated with each of these iterations converge to the distribution with zero weight on all non-optimal solutions.
ACCESSION #
13056907
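The abstract's framework can be illustrated with a minimal Python sketch: an outer loop during which the tabu list is held fixed (so each inner loop is a stationary Markov chain), and an inner GHC loop in which candidates on the tabu list are accepted only with the tabu release probability. The Metropolis-style acceptance rule, the tabu-list update, and all parameter names below are illustrative assumptions, not the authors' exact formulation.

```python
import math
import random

def tg2hc(f, neighbors, x0, outer_iters=20, inner_iters=200,
          tabu_release=0.1, temperature=1.0, seed=0):
    """Hypothetical TG2HC sketch (after Vaughan & Jacobson, 2004).

    f         -- objective to minimize
    neighbors -- function returning a list of neighbor solutions
    x0        -- starting solution
    """
    rng = random.Random(seed)
    x, best = x0, x0
    tabu = set()
    for _ in range(outer_iters):
        # Tabu list fixed for this outer iteration: the inner loop is a
        # stationary Markov chain over the solution space.
        frozen_tabu = frozenset(tabu)
        for _ in range(inner_iters):
            y = rng.choice(neighbors(x))
            delta = f(y) - f(x)
            # GHC acceptance: improvements always, hill climbs with a
            # Metropolis-style probability (one common GHC choice).
            accept = delta <= 0 or rng.random() < math.exp(-delta / temperature)
            # Tabu guidance: tabu solutions pass only with prob. tabu_release.
            if y in frozen_tabu:
                accept = accept and rng.random() < tabu_release
            if accept:
                x = y
                if f(x) < f(best):
                    best = x
        tabu.add(x)          # update tabu list between outer iterations
        temperature *= 0.9   # cool so acceptance of non-improving moves shrinks
    return best
```

For example, minimizing f(x) = (x - 7)² over the integers 0..20 with neighbors x ± 1 drives the chain toward the optimum at x = 7; cooling the hill-climbing probability across outer iterations mimics the paper's condition that the per-iteration stationary distributions concentrate on optimal solutions.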

 

Related Articles

  • SOLVING THE FISHER-WRIGHT AND COALESCENCE PROBLEMS WITH A DISCRETE MARKOV CHAIN ANALYSIS. Buss, Samuel R.; Clote, Peter // Advances in Applied Probability;Dec2004, Vol. 36 Issue 4, p1175 

    We develop a new, self-contained proof that the expected number of generations required for gene allele fixation or extinction in a population of size n is 0(n) under general assumptions. The proof relies on a discrete Markov chain analysis. We further develop an algorithm to compute expected...

  • NONHOMOGENEOUS MARKOV DECISION PROCESSES WITH BOREL STATE SPACE-- THE AVERAGE CRITERION WITH NONUNIFORMLY BOUNDED REWARDS. Xianping Guo; Jianyong Liu; Ke Liu // Mathematics of Operations Research;Nov2000, Vol. 25 Issue 4, p667 

    This paper deals with nonhomogeneous Markov decision processes with Borel state space and nonuniformly bounded rewards under the average criterion. First, under the minorant-type assumption we prove the existence of an appropriate solution to the optimality equations. Second, from the optimality...

  • MDP algorithms for portfolio optimization problems in pure jump markets. Bäuerle, Nicole; Rieder, Ulrich // Finance & Stochastics;2009, Vol. 13 Issue 4, p591 

    We consider the problem of maximizing the expected utility of the terminal wealth of a portfolio in a continuous-time pure jump market with general utility function. This leads to an optimal control problem for piecewise deterministic Markov processes. Using an embedding procedure we solve the...

  • IMPORTANCE SAMPLING ON COALESCENT HISTORIES. II: SUBDIVIDED POPULATION MODELS. De Iorio, Maria; Griffiths, Robert C. // Advances in Applied Probability;Jun2004, Vol. 36 Issue 2, p434 

    De Iorio and Griffiths (2004) developed a new method of constructing sequential importance-sampling proposal distributions on coalescent histories of a sample of genes for computing the likelihood of a type configuration of genes in the sample by simulation. The method is based on approximating...

  • FILTERS AND PARAMETER ESTIMATION FOR A PARTIALLY OBSERVABLE SYSTEM SUBJECT TO RANDOM FAILURE WITH CONTINUOUS-RANGE OBSERVATIONS. Lin, Daming; Makis, Viliam // Advances in Applied Probability;Dec2004, Vol. 36 Issue 4, p1212 

    We consider a failure-prone system operating in continuous time. Condition monitoring is conducted at discrete time epochs. The state of the system is assumed to evolve as a continuous-time Markov process with a finite state space. The observation process with continuous-range values is...

  • Nonstationary Policies and Average Optimality in Multichain Markov Decision Processes with a General Action Space. Golubin, A. Y. // Journal of Mathematical Sciences;Sep2004, Vol. 123 Issue 1, p3733 

    In this paper, we study Markov decision processes (MDPs) with a finite state space, general action space, multichain structure, and average cost as a criterion for optimization. The paper is organized as follows. In Sec. 2, we formally define the MDPs considered in the sequel. We also show by an...

  • Constant-complexity stochastic simulation algorithm with optimal binning. Sanft, Kevin R.; Othmer, Hans G. // Journal of Chemical Physics;2015, Vol. 143 Issue 7, p1 

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems...

  • Constrained continuous-time Markov decision processes with average criteria. Zhang, Lanlan; Guo, Xianping // Mathematical Methods of Operations Research;2008, Vol. 67 Issue 2, p323 

    In this paper, we study constrained continuous-time Markov decision processes with a denumerable state space and unbounded reward/cost and transition rates. The criterion to be maximized is the expected average reward, and a constraint is imposed on an expected average cost. We give suitable...

  • Structural Results for Partially Observable Markov Decision Processes. Albright, S. Christian // Operations Research;Sep/Oct79, Vol. 27 Issue 5, p1041 

    This paper examines monotonicity results for a fairly general class of partially observable Markov decision processes. When there are only two actual states in the system and when the actions taken are primarily intended to improve the system, rather than to inspect it, we give reasonable...
