Optimized vision-based robot motion planning from multiple demonstrations

Tiantian Shen, Sina Radmard, Ambrose Chan, Elizabeth A. Croft, Graziano Chesi

Research output: Contribution to journal › Article › Research › peer-review

9 Citations (Scopus)

Abstract

This paper combines workspace models with optimization techniques to simultaneously address whole-arm collision avoidance, joint limits, and camera field of view (FOV) limits for vision-based motion planning of a robot manipulator. A small number of user demonstrations are used to generate a feasible domain over which the whole robot arm can servo without violating joint limits or colliding with obstacles. Our algorithm utilizes these demonstrations to generate new feasible trajectories that keep the target in the camera’s FOV and achieve the desired view of the target (e.g., a pre-grasping location) in new, undemonstrated locations. To fulfill these requirements, a set of control points is selected within the feasible domain. Camera trajectories that traverse these control points are modeled and optimized using either quintic splines (for fast computation) or general polynomials (for better constraint satisfaction). Experiments with a seven-degree-of-freedom articulated arm validate the proposed scheme.
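The abstract names quintic splines as the fast option for modeling camera trajectories through control points. The sketch below is a hypothetical illustration, not the authors' implementation: it shows the standard construction of one quintic segment by solving for the six polynomial coefficients that satisfy assumed position, velocity, and acceleration boundary conditions at the two endpoints.

```python
# Minimal sketch of a single quintic trajectory segment between two
# control points. Boundary values and the segment duration T are
# illustrative assumptions, not values from the paper.
import numpy as np

def quintic_coeffs(p0, v0, a0, p1, v1, a1, T):
    """Coefficients of p(t) = c0 + c1*t + ... + c5*t^5 matching
    position/velocity/acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0,    0,      0,       0,        0],       # p(0)
        [0, 1,    0,      0,       0,        0],       # p'(0)
        [0, 0,    2,      0,       0,        0],       # p''(0)
        [1, T,    T**2,   T**3,    T**4,     T**5],    # p(T)
        [0, 1,    2*T,    3*T**2,  4*T**3,   5*T**4],  # p'(T)
        [0, 0,    2,      6*T,     12*T**2,  20*T**3], # p''(T)
    ])
    b = np.array([p0, v0, a0, p1, v1, a1])
    return np.linalg.solve(A, b)

# Example: move one camera coordinate from 0.0 to 0.5 in 2 s,
# starting and ending at rest (zero velocity and acceleration).
c = quintic_coeffs(0.0, 0.0, 0.0, 0.5, 0.0, 0.0, T=2.0)
t = np.linspace(0.0, 2.0, 5)
positions = np.polyval(c[::-1], t)  # polyval expects highest degree first
```

Because each segment is linear in its coefficients, chaining such segments through the selected control points yields a smooth trajectory at low computational cost, which is consistent with the abstract's framing of quintic splines as the fast alternative to general polynomials.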

Original language: English
Pages (from-to): 1117–1132
Number of pages: 16
Journal: Autonomous Robots
Volume: 42
Issue number: 6
DOIs
Publication status: Published - 1 Aug 2018
Externally published: Yes

Keywords

  • FOV limits
  • Path planning
  • Vision-based motion planning
  • Whole-arm collision avoidance