TY - JOUR
T1 - Approximability and efficient algorithms for constrained fixed-horizon POMDPs with durative actions
AU - Khonji, Majid
N1 - Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2023/10
Y1 - 2023/10
AB - The Partially Observable Markov Decision Process (POMDP) is a fundamental model for probabilistic planning in stochastic domains. More recently, constrained POMDPs and chance-constrained POMDPs have extended the model, allowing constraints to be imposed on aspects of the policy in addition to the objective function. Despite their expressive power, these models assume that all actions take a fixed duration, which limits their ability to model real-world planning problems. In this work, we propose a unified model for durative POMDPs and their constrained extensions. First, we convert these extensions into an Integer Linear Programming (ILP) formulation that can be solved with existing ILP solvers. Second, we provide a heuristic search approach that efficiently prunes the search space, guided by the solutions of successive partial ILP programs. Third, we give a theoretical analysis of the problem: unlike short-horizon POMDPs with constant-depth policies, which can be solved in polynomial time, the constrained extensions are NP-hard even with a planning horizon of two and non-negative rewards. To alleviate this, we propose a Fully Polynomial Time Approximation Scheme (FPTAS) that computes (near-)optimal deterministic policies in polynomial time; in terms of approximation ratio, the FPTAS is among the best achievable in theory. Finally, evaluation results show that our approach empirically outperforms the state-of-the-art fixed-horizon chance-constrained POMDP solver.
KW - Constrained partially observable Markov decision process
KW - Durative actions
KW - Heuristic search
KW - Risk-bounded planning
UR - http://www.scopus.com/inward/record.url?scp=85166473050&partnerID=8YFLogxK
DO - 10.1016/j.artint.2023.103968
M3 - Article
AN - SCOPUS:85166473050
SN - 0004-3702
VL - 323
JO - Artificial Intelligence
JF - Artificial Intelligence
M1 - 103968
ER -