TITLE

Locating Discretionary Service Facilities Based on Probabilistic Customer Flows

AUTHOR(S)
Berman, Oded; Krass, Dmitry; Xu, Chen Wei
PUB. DATE
August 1995
SOURCE
Transportation Science;Aug95, Vol. 29 Issue 3, p276
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
In this paper, we consider the problem of locating discretionary facilities on a network. In contrast to previous work in the area, we no longer assume that information on customers' flows along all paths of the network is known (in practice such information is rarely available). Assuming that the fraction of customers that travel from any node to any adjacent node in the network is available, the problem of locating the facilities so as to maximize the fraction of customers that pass by a facility before reaching their destination is formulated as a nonlinear Integer Program. It is shown that by employing the theory of constrained Markov Decision Processes this problem can be reformulated as a linear Mixed Integer Program. The paper presents some preliminary computational results for this formulation as well as results for a greedy heuristic algorithm.
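The greedy heuristic mentioned in the abstract can be sketched in a few lines. In the sketch below, the tiny example network, the node-dictionary representation of the transition probabilities, and the fixed-point solver for interception probabilities are all illustrative assumptions, not the authors' formulation: facilities and destinations are treated as absorbing states, and the probability that a customer passes a facility before reaching a destination is found by iterating h[i] = Σ_j P[i][j]·h[j], with h = 1 at facilities and h = 0 at destinations.

```python
def interception_prob(P, facilities, destinations, n_iter=200):
    """Probability, for each node, that a customer starting there passes a
    facility before being absorbed at a destination. P maps each node to a
    dict of {neighbor: transition fraction}. Solved by fixed-point iteration
    (assumes every customer eventually reaches a destination)."""
    h = {i: 0.0 for i in P}
    for _ in range(n_iter):
        for i in P:
            if i in facilities:
                h[i] = 1.0
            elif i in destinations:
                h[i] = 0.0
            else:
                h[i] = sum(p * h[j] for j, p in P[i].items())
    return h

def greedy_locate(P, origin_flows, destinations, m):
    """Greedy heuristic: add m facilities one at a time, each time choosing
    the node that maximizes the total intercepted customer flow."""
    chosen = set()
    candidates = set(P) - set(destinations)
    for _ in range(m):
        def intercepted(extra):
            h = interception_prob(P, chosen | {extra}, destinations)
            return sum(w * h[o] for o, w in origin_flows.items())
        best = max(candidates - chosen, key=intercepted)
        chosen.add(best)
    return chosen
```

For example, on a diamond network where node 0 splits flow evenly toward nodes 1 and 2, both leading to destination 3, a facility at node 1 intercepts half the flow from node 0, and the greedy heuristic with m = 1 places the facility at node 0 itself, intercepting everything. The paper's exact approach instead reformulates the problem as a linear mixed integer program via constrained MDP theory; the greedy sketch trades optimality guarantees for simplicity.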
ACCESSION #
4454918
Related Articles

  • AN ALGORITHM TO IDENTIFY AND COMPUTE AVERAGE OPTIMAL POLICIES IN MULTICHAIN MARKOV DECISION PROCESSES. Leizarowitz, Arie // Mathematics of Operations Research;Aug2003, Vol. 28 Issue 3, p553 

    This paper concerns discrete-time, finite state multichain MDPs with compact action sets. The optimality criterion is long-run average cost. Simple examples illustrate that optimal stationary Markov policies do not always exist. We establish the existence of ϵ-optimal policies that are...

  • CONSTRAINED MARKOV DECISION CHAINS. Derman, Cyrus; Veinott Jr., Arthur F. // Management Science;Dec1972 Part 1, Vol. 19 Issue 4, p389 

    We consider finite state and action discrete time parameter Markov decision chains. The objective is to provide an algorithm for finding a policy that minimizes the long-run expected average cost when there are linear side conditions on the limit points of the expected state-action frequencies....

  • Hierarchical algorithms for discounted and weighted Markov decision processes. Abbad, M.; Daoui, C. // Mathematical Methods of Operations Research;2003, Vol. 58 Issue 2, p237 

    We consider a discrete time finite Markov decision process (MDP) with the discounted and weighted reward optimality criteria. In earlier work, the authors considered some decomposition of limiting average MDPs. In this paper, we use an analogous approach for discounted and weighted MDPs. Then, we construct...

  • Robust Control of Markov Decision Processes with Uncertain Transition Matrices. Nilim, Arnab; El Ghaoui, Laurent // Operations Research;Sep/Oct2005, Vol. 53 Issue 5, p780 

    Optimal solutions to Markov decision problems may be very sensitive with respect to the state transition probabilities. In many practical problems, the estimation of these probabilities is far from accurate. Hence, estimation errors are limiting factors in applying Markov decision processes to...

  • A QUEUEING REWARD SYSTEM WITH SEVERAL CUSTOMER CLASSES. Miller, Bruce L. // Management Science;Nov69, Vol. 16 Issue 3, p234 

    This paper considers an n-server queueing system with m customer classes distinguished by the reward associated with serving customers of that class. Our objective is to accept or reject customers so as to maximize the expected value of the rewards received over an infinite planning horizon. By...

  • An Adaptive Sampling Algorithm for Solving Markov Decision Processes. Hyeong Soo Chang; Fu, Michael C.; Jiaqiao Hu; Marcus, Steven I. // Operations Research;Jan/Feb2005, Vol. 53 Issue 1, p126 

    Based on recent results for multiarmed bandit problems, we propose an adaptive sampling algorithm that approximates the optimal value of a finite-horizon Markov decision process (MDP) with finite state and action spaces. The algorithm adaptively chooses which action to sample as the sampling...

  • Q-LEARNING FOR RISK-SENSITIVE CONTROL. Borkar, V. S. // Mathematics of Operations Research;May2002, Vol. 27 Issue 2, p294 

    We propose for risk-sensitive control of finite Markov chains a counterpart of the popular Q-learning algorithm for classical Markov decision processes. The algorithm is shown to converge with probability one to the desired solution. The proof technique is an adaptation of the o.d.e. approach...

  • Generalization of White's Method of Successive Approximation to Periodic Markovian Decision Processes. Su, Shiaw Y.; Deininger, Rolf A. // Operations Research;Mar/Apr72, Vol. 20 Issue 2, p318 

    The difficulty in solving a Markovian decision problem with a large number of states by Howard's policy-iteration method is that one has to solve a large system of simultaneous linear equations. Procedures developed by D. J. White and J. MacQueen, which avoid this difficulty, have been widely...

  • A WEIGHTED MARKOV DECISION PROCESS. Krass, Dmitry; Filar, Jerzy A.; Sinha, Sagnik S. // Operations Research;Nov/Dec92, Vol. 40 Issue 6, p1180 

    The two most commonly considered reward criteria for Markov decision processes are the discounted reward and the long-term average reward. The first tends to "neglect" the future, concentrating on the short-term rewards, while the second one tends to do the opposite. We consider a new reward...
