TR2023-118

EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation


Abstract:

In this paper, we explore dynamic grasping of moving objects through active pose tracking and reinforcement learning for hand-eye coordination systems. Most existing vision-based robotic grasping methods implicitly assume that target objects are stationary or moving predictably. Grasping unpredictably moving objects presents a unique set of challenges: for example, a pre-computed robust grasp can become unreachable or unstable as the target object moves, and motion planning must likewise be adaptive. In this work, we present a new approach, the Eye-on-hAnd Reinforcement Learner (EARL), which enables coupled Eye-on-Hand (EoH) robotic manipulation systems to perform real-time active pose tracking and dynamic grasping of novel objects without explicit motion prediction. EARL addresses several thorny issues in automated hand-eye coordination, including fast tracking of the 6D object pose from vision, learning a control policy that drives the robotic arm to follow a moving object while keeping it in the camera’s field of view, and performing dynamic grasping. We demonstrate the effectiveness of our approach in extensive experiments on multiple commercial robotic arms, in both simulation and complex real-world tasks.
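
The abstract couples 6D pose tracking with a learned control policy that must keep the moving target inside the wrist camera’s field of view while closing in on a grasp. As a rough illustration of that coupling, and not the paper’s actual reward or implementation (the function names, weights, and field-of-view half-angle below are assumptions made for this sketch), a shaped tracking reward for an eye-on-hand policy might combine a field-of-view term with a gripper-to-object distance term:

# Illustrative sketch only (not the authors' code): a shaped reward that
# encourages an eye-on-hand policy to keep the tracked object near the
# wrist camera's optical axis while moving the gripper toward the object.
# All names, weights, and the FOV half-angle are assumptions.
import numpy as np

def fov_penalty(obj_cam_xyz, half_fov_rad=np.deg2rad(30.0)):
    """Penalty in [0, 1] that grows as the object's bearing leaves the
    camera's field-of-view cone (camera frame, z = optical axis)."""
    x, y, z = obj_cam_xyz
    if z <= 0.0:                                 # object behind the camera
        return 1.0
    bearing = np.arctan2(np.hypot(x, y), z)      # angle off the optical axis
    return float(np.clip(bearing / half_fov_rad, 0.0, 1.0))

def tracking_reward(obj_cam_xyz, ee_xyz, obj_world_xyz, w_fov=0.5, w_dist=0.5):
    """Shaped reward: stay centered on the object, approach it with the gripper."""
    fov_term = 1.0 - fov_penalty(obj_cam_xyz)
    dist_term = np.exp(-np.linalg.norm(ee_xyz - obj_world_xyz))
    return w_fov * fov_term + w_dist * dist_term

if __name__ == "__main__":
    # Object slightly off-axis, 0.4 m in front of the wrist camera.
    r = tracking_reward(obj_cam_xyz=np.array([0.05, -0.02, 0.40]),
                        ee_xyz=np.array([0.30, 0.10, 0.25]),
                        obj_world_xyz=np.array([0.35, 0.12, 0.20]))
    print(f"reward = {r:.3f}")

The bounded exponential distance term is a common shaping choice in continuous-control reinforcement learning; the reward actually used by EARL is specified in the paper itself.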

 

  • Related News & Events

  • Related Video

Related Publication:

Huang, B., Yu, J., Jain, S., "EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation", arXiv, October 2023. https://arxiv.org/abs/2310.06751

BibTeX:

@article{Huang2023oct2,
  author  = {Huang, Baichuan and Yu, Jingjin and Jain, Siddarth},
  title   = {EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation},
  journal = {arXiv},
  year    = 2023,
  month   = oct,
  url     = {https://arxiv.org/abs/2310.06751}
}