TY - JOUR
T1 - Planning under time constraints in stochastic domains
AU - Dean, Thomas
AU - Kaelbling, Leslie Pack
AU - Kirman, Jak
AU - Nicholson, Ann
PY - 1995/1/1
Y1 - 1995/1/1
N2 - We provide a method, based on the theory of Markov decision processes, for efficient planning in stochastic domains. Goals are encoded as reward functions, expressing the desirability of each world state; the planner must find a policy (a mapping from states to actions) that maximizes future rewards. Standard goals of achievement, as well as goals of maintenance and prioritized combinations of goals, can be specified in this way. An optimal policy can be found using existing methods, but these methods require time at best polynomial in the number of states in the domain, where the number of states is exponential in the number of propositions (or state variables). By using information about the starting state, the reward function, and the transition probabilities of the domain, we restrict the planner's attention to a set of world states that are likely to be encountered in satisfying the goal. Using this restricted set of states, the planner can generate more or less complete plans depending on the time it has available. Our approach employs several iterative refinement routines for solving different aspects of the decision-making problem. We describe the meta-level control problem of deliberation scheduling, which allocates computational resources to these routines. We provide different models corresponding to optimization problems that capture the different circumstances and computational strategies for decision making under time constraints. We consider precursor models, in which all decision making is performed prior to execution, and recurrent models, in which decision making is performed in parallel with execution, accounting for the states observed during execution and anticipating future states. We describe experimental results for both the precursor and recurrent problems that demonstrate planning times growing slowly as a function of domain size, and we compare performance to other relevant algorithms.
UR - http://www.scopus.com/inward/record.url?scp=0029332887&partnerID=8YFLogxK
U2 - 10.1016/0004-3702(94)00086-G
DO - 10.1016/0004-3702(94)00086-G
M3 - Article
AN - SCOPUS:0029332887
VL - 76
SP - 35
EP - 74
JO - Artificial Intelligence
JF - Artificial Intelligence
SN - 0004-3702
IS - 1-2
ER -