TITLE

When to challenge a call in tennis: A Markov decision process approach

AUTHOR(S)
Nadimpalli, Vamsi K.; Hasenbein, John J.
PUB. DATE
September 2013
SOURCE
Journal of Quantitative Analysis in Sports;Sep2013, Vol. 9 Issue 3, p229
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
No abstract available.
ACCESSION #
102722678

 

Related Articles

  • Dynamic Programming Approximations for a Stochastic Inventory Routing Problem. Kleywegt, Anton J.; Nori, Vijay S.; Savelsbergh, Martin W. P. // Transportation Science;Feb2004, Vol. 38 Issue 1, p42 

    This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories and decide when and how much inventory should be replenished at each...

  • Decision Process Petri Nets: Analysis and Optimization. Konigsberg, Zvi Retchkiman; Clempner, Julio // Neural, Parallel & Scientific Computations;Sep/Dec2005, Vol. 13 Issue 3/4, p245 

    The article introduces a modeling paradigm for decision process representation called Decision Process Petri Nets (DPPN). It incorporates the Markov decision process to extend the place-transition Petri net theoretic approach. It demonstrates that the DPPN theoretic notions of equilibrium...

  • Partially Observable Markov Decision Processes (POMDPs) and Wireless Body Area Networks (WBAN): A Survey. Mohammed, Yahaya O.; Baroudi, Uthman A. // KSII Transactions on Internet & Information Systems;May2013, Vol. 7 Issue 5, p1036 

    Wireless body area networks (WBANs) are a promising candidate for future health monitoring systems. Nevertheless, the path to mature solutions still faces many challenges that need to be overcome. Energy-efficient scheduling is one of these challenges, given the scarcity of available energy...

  • Optimal Assignment of Treatments to Health States Using a Markov Decision Model: An Introduction to Basic Concepts. Bala, Mohan V.; Mauskopf, Josephine A. // PharmacoEconomics;2006, Vol. 24 Issue 4, p345 

    Assessing the cost effectiveness of a new health intervention often requires modelling to estimate the impact of the intervention on cost, survival and quality of life over the lifetime of a cohort of patients. Markov modelling is a methodology that is commonly employed to estimate these...

  • On essential information in sequential decision processes. Feinberg, Eugene // Mathematical Methods of Operations Research;2005, Vol. 62 Issue 3, p399 

    This paper provides sufficient conditions under which certain information about the past of a stochastic decision process can be ignored by a controller. We illustrate the results with particular applications to queueing control, control of semi-Markov decision processes with iid sojourn times, and...

  • New prioritized value iteration for Markov decision processes. Garcia-Hernandez, Ma.; Ruiz-Pinales, Jose; Onaindia, Eva; Aviña-Cervantes, J.; Ledesma-Orozco, Sergio; Alvarado-Mendez, Edgar; Reyes-Ballesteros, Alberto // Artificial Intelligence Review;Feb2012, Vol. 37 Issue 2, p157 

    The problem of solving large Markov decision processes accurately and quickly is challenging. Since the computational effort incurred is considerable, current research focuses on finding superior acceleration techniques. For instance, the convergence properties of current solution methods... (A generic value-iteration sketch for finite MDPs appears after this list.)

  • SINGULARLY PERTURBED MARKOV CONTROL PROBLEM: LIMITING AVERAGE COST. Bielecki, Tomasz R.; Filar, Jerzy A. // Annals of Operations Research;1991, Vol. 28 Issue 1-4, p153 

    In this paper we consider a singularly perturbed Markov decision process with the limiting average cost criterion. We assume that the underlying process is composed of n separate irreducible processes, and that the small perturbation is such that it ‘unites’ these processes into a...

  • Ergodic and adaptive control of hidden Markov models. Duncan, T.; Pasik-Duncan, B.; Stettner, L. // Mathematical Methods of Operations Research;2005, Vol. 62 Issue 2, p297 

    A partially observed stochastic system is described by a discrete time pair of Markov processes. The observed state process has a transition probability that is controlled and depends on a hidden Markov process that also can be controlled. The hidden Markov process is completely observed in a...

  • LETTER TO THE EDITOR. White, D. J. // Interfaces;Feb82, Vol. 12 Issue 1, p95 

    Presents a letter to the editor about the theoretical and computational aspects of Markov decision processes.

  • Markov Decision Processes. Lev, Benjamin; Orman, Alexander // Interfaces;Mar/Apr95, Vol. 25 Issue 2, p93 

    Reviews the book "Markov Decision Processes," by Martin L. Puterman.
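
As context for the value-iteration methods mentioned in the Garcia-Hernandez et al. entry above, the following is a minimal, generic sketch of standard value iteration for a finite Markov decision process, written in Python. The 3-state, 2-action transition and reward arrays are illustrative assumptions only and are not taken from the tennis-challenge article or any of the related articles listed here.

    import numpy as np

    # Minimal value-iteration sketch for a generic finite MDP.
    # The toy transition/reward data below are made up for illustration.
    n_states, n_actions = 3, 2
    gamma = 0.95   # discount factor
    theta = 1e-8   # convergence tolerance

    # P[a, s, s2] = probability of moving from state s to s2 under action a
    P = np.array([
        [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1],
         [0.2, 0.3, 0.5]],
        [[0.0, 0.9, 0.1],
         [0.5, 0.4, 0.1],
         [0.3, 0.3, 0.4]],
    ])

    # R[a, s] = expected immediate reward for taking action a in state s
    R = np.array([
        [1.0, 0.0, 5.0],
        [0.5, 2.0, 0.0],
    ])

    V = np.zeros(n_states)
    while True:
        # Bellman backup: Q[a, s] = R[a, s] + gamma * sum_s2 P[a, s, s2] * V[s2]
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < theta:
            break
        V = V_new

    policy = Q.argmax(axis=0)   # greedy policy with respect to the converged values
    print("Optimal values:", V_new)
    print("Greedy policy:", policy)

Prioritized variants of the kind discussed in that entry typically change the order in which states are backed up (for example, by Bellman error) rather than the backup rule itself.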
