Abstract
This paper presents a novel method for increasing the robustness of visual tracking systems by incorporating information from inertial sensors. We show that more can be achieved than simply combining the sensor data within a statistical filter: besides using inertial data to provide predictions for the visual sensor, this data can be used to dynamically tune the parameters of each feature detector in the visual sensor. This allows the visual sensor to provide useful information even in the presence of substantial motion blur. Finally, the visual sensor can be used to calibrate the parameters of the inertial sensor to eliminate drift.
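The predict/correct fusion loop the abstract alludes to can be sketched as a minimal 1-D Kalman-style filter, where the inertial sensor drives the prediction step and the visual sensor supplies the correction. All function names, the 1-D constant-acceleration state model, and the noise values below are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of inertial-visual fusion: inertial data predicts,
# visual measurements correct. 1-D state (position, velocity) with a
# scalar position uncertainty P; all parameters are hypothetical.

def predict(x, P, accel, dt, q):
    """Propagate the state using an inertial (accelerometer) input."""
    pos, vel = x
    pos += vel * dt + 0.5 * accel * dt * dt
    vel += accel * dt
    return (pos, vel), P + q          # process noise inflates uncertainty

def correct(x, P, z, r):
    """Fuse a visual position measurement z with measurement noise r."""
    pos, vel = x
    K = P / (P + r)                   # Kalman gain for the position term
    pos += K * (z - pos)
    return (pos, vel), (1.0 - K) * P  # measurement shrinks uncertainty

x, P = (0.0, 1.0), 1.0                # start: position 0, velocity 1 m/s
for step in range(5):
    x, P = predict(x, P, accel=0.0, dt=0.1, q=0.01)
    x, P = correct(x, P, z=0.1 * (step + 1), r=0.05)
print(round(x[0], 3))                 # → 0.5
```

In this toy run the visual measurements agree with the inertial prediction, so the estimate tracks the true position exactly; in practice the innovation `z - pos` is what lets the visual sensor calibrate out inertial drift, as the abstract describes.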
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 769-776 |
| Number of pages | 8 |
| Journal | Image and Vision Computing |
| Volume | 22 |
| Issue number | 10 SPEC. ISS. |
| DOIs | |
| Publication status | Published - 1 Sept 2004 |
| Externally published | Yes |
Keywords
- Real-time vision
- Sensor fusion
- Visual tracking