Abstract
We consider the online planning problem for a team of agents to discover and track an unknown and time-varying number of moving objects from onboard sensor measurements with uncertain measurement-object origins. Since the onboard sensors have limited fields of view, the usual planning strategy based solely on either tracking detected objects or discovering unseen objects is inadequate. To address this, we formulate a new information-based multi-objective multi-agent control problem, cast as a partially observable Markov decision process (POMDP). The resulting multi-agent planning problem is exponentially complex due to the unknown data association between objects and multi-sensor measurements; hence, computing an optimal control action is intractable. We prove that the proposed multi-objective value function is a monotone submodular set function, which admits low-cost suboptimal solutions via greedy search with a tight optimality bound. The resulting planning algorithm has complexity that is linear in the number of objects and measurements across the sensors and quadratic in the number of agents. We demonstrate the proposed solution via a series of numerical experiments with a real-world dataset.
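To illustrate the greedy search idea mentioned above, the sketch below shows a generic one-pass greedy assignment of one action per agent that maximizes the marginal gain of a monotone submodular value function. This is a minimal illustration under assumed interfaces (`greedy_plan`, `value`, and the toy coverage objective are all hypothetical names), not the paper's algorithm or code.

```python
from typing import Callable, Dict, Hashable, List

Agent = Hashable
Action = Hashable


def greedy_plan(
    agents: List[Agent],
    actions: Dict[Agent, List[Action]],
    value: Callable[[Dict[Agent, Action]], float],
) -> Dict[Agent, Action]:
    """Assign one action per agent by greedily maximizing marginal gain.

    `value` is assumed to behave like a monotone submodular set function
    over (agent, action) pairs, evaluated here on partial assignments.
    """
    selected: Dict[Agent, Action] = {}
    base = value(selected)
    for agent in agents:  # single greedy pass over the agents
        best_action, best_gain = None, float("-inf")
        for action in actions[agent]:
            candidate = {**selected, agent: action}
            gain = value(candidate) - base  # marginal gain of this choice
            if gain > best_gain:
                best_action, best_gain = action, gain
        selected[agent] = best_action
        base = value(selected)
    return selected


if __name__ == "__main__":
    # Toy usage with a coverage-style objective: count of distinct cells covered.
    cover = {"a1": {1, 2}, "a2": {2, 3}, "b1": {3}, "b2": {4, 5}}
    agents = ["A", "B"]
    actions = {"A": ["a1", "a2"], "B": ["b1", "b2"]}

    def value(assignment: Dict[Agent, Action]) -> float:
        covered = set()
        for act in assignment.values():
            covered |= cover[act]
        return float(len(covered))

    print(greedy_plan(agents, actions, value))  # e.g. {'A': 'a1', 'B': 'b2'}
```

The cost of this sketch is one value-function evaluation per candidate action per agent, which is consistent with the low per-step cost the abstract attributes to greedy search; the actual value function and optimality bound are those established in the paper.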
| Original language | English |
| --- | --- |
| Pages (from-to) | 3669-3685 |
| Number of pages | 17 |
| Journal | IEEE Transactions on Signal Processing |
| Volume | 72 |
| DOIs | |
| Publication status | Published - 5 Jul 2024 |
Keywords
- MPOMDP
- multi-agent control
- multi-object tracking
- path planning
- stochastic control