Abstract
Multi-agent planning problems with constraints on global resource consumption occur in several domains. Existing algorithms for solving Multi-agent Markov Decision Processes can compute policies that meet a resource constraint in expectation, but these policies provide no guarantees on the probability that a resource constraint violation will occur. We derive a method to bound constraint violation probabilities using Hoeffding's inequality. This method is applied to two existing approaches for computing policies satisfying constraints: the Constrained MDP framework and a Column Generation approach. We also introduce an algorithm to adaptively relax the bound up to a given maximum violation tolerance. Experiments on a hard toy problem show that the resulting policies outperform static optimal resource allocations to an arbitrary level. By testing the algorithms on more realistic planning domains from the literature, we demonstrate that the adaptive bound is able to efficiently trade off violation probability with expected value, outperforming state-of-the-art planners.
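The paper's derivation is not reproduced in this record, but the core idea of bounding a violation probability with Hoeffding's inequality can be sketched. The function below is a hypothetical illustration (the name, arguments, and usage are assumptions, not the authors' implementation): given the expected total resource consumption under a policy, a resource limit, and bounded per-agent consumption ranges, it returns the Hoeffding upper bound P(S ≥ limit) ≤ exp(−2t² / Σᵢ(bᵢ − aᵢ)²) with t = limit − E[S].

```python
import math

def hoeffding_violation_bound(expected_total, limit, ranges):
    """Upper-bound the probability that total consumption exceeds `limit`.

    `ranges` is a list of (a_i, b_i) pairs bounding each agent's
    (assumed independent) resource consumption. Applies Hoeffding's
    inequality: P(S - E[S] >= t) <= exp(-2 t^2 / sum_i (b_i - a_i)^2).
    """
    t = limit - expected_total
    if t <= 0:
        return 1.0  # bound is vacuous when the limit does not exceed the mean
    denom = sum((b - a) ** 2 for a, b in ranges)
    return math.exp(-2.0 * t ** 2 / denom)

# Illustrative numbers (not from the paper): 10 agents each consuming
# between 0 and 1 unit, expected total consumption 4, resource limit 8.
p = hoeffding_violation_bound(4.0, 8.0, [(0.0, 1.0)] * 10)
# p = exp(-2 * 16 / 10) ≈ 0.041, i.e. violation probability at most ~4.1%
```

An adaptive scheme in the spirit of the paper would relax the limit (or the policy's expected consumption) until this bound reaches the maximum tolerated violation probability.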
Original language | English |
---|---|
Title of host publication | Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17) |
Editors | Satinder Singh, Shaul Markovitch |
Place of Publication | Palo Alto CA USA |
Publisher | Association for the Advancement of Artificial Intelligence (AAAI) |
Pages | 3562-3568 |
Number of pages | 7 |
Publication status | Published - 1 Jan 2017 |
Externally published | Yes |
Event | AAAI Conference on Artificial Intelligence 2017 (31st), Hilton San Francisco Union Square, San Francisco, United States of America. Duration: 4 Feb 2017 → 10 Feb 2017. http://www.aaai.org/Conferences/AAAI/aaai17.php |
Conference
Conference | AAAI Conference on Artificial Intelligence 2017 |
---|---|
Abbreviated title | AAAI 2017 |
Country/Territory | United States of America |
City | San Francisco |
Period | 4/02/17 → 10/02/17 |
Internet address | http://www.aaai.org/Conferences/AAAI/aaai17.php |