TR2023-118
EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation
- Huang, B., Yu, J., Jain, S., "EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation", 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2023.
- @inproceedings{Huang2023oct,
- author = {Huang, Baichuan and Yu, Jingjin and Jain, Siddarth},
- title = {EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation},
- booktitle = {2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
- year = 2023,
- month = oct,
- url = {https://www.merl.com/publications/TR2023-118}
- }
Abstract:
In this paper, we explore the dynamic grasping of moving objects through active pose tracking and reinforcement learning for hand-eye coordination systems. Most existing vision-based robotic grasping methods implicitly assume target objects are stationary or moving predictably. Performing grasping of unpredictably moving objects presents a unique set of challenges. For example, a pre-computed robust grasp can become unreachable or unstable as the target object moves, and motion planning must also be adaptive. In this work, we present a new approach, Eye-on-hAnd Reinforcement Learner (EARL), for enabling coupled Eye-on-Hand (EoH) robotic manipulation systems to perform real-time active pose tracking and dynamic grasping of novel objects without explicit motion prediction. EARL readily addresses many thorny issues in automated hand-eye coordination, including fast tracking of the 6D object pose from vision, learning a control policy for a robotic arm to track a moving object while keeping the object in the camera's field of view, and performing dynamic grasping. We demonstrate the effectiveness of our approach in extensive experiments validated on multiple commercial robotic arms in both simulations and complex real-world tasks.
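To make the eye-on-hand tracking idea concrete, the sketch below shows a minimal proportional visual-servoing loop: given the detected object's pixel position and its depth from a wrist-mounted camera, it produces a camera-frame velocity command that recenters the object in the image (keeping it in the field of view) while approaching it. This is an illustrative hand-crafted baseline under assumed conventions, not EARL's learned policy; the function name, gains, and frame conventions are all hypothetical.

```python
def eoh_tracking_command(obj_px, img_size, depth,
                         k_center=0.5, k_approach=0.3):
    """Illustrative eye-on-hand servoing step (not EARL's learned policy).

    obj_px   -- (u, v) pixel location of the tracked object
    img_size -- (width, height) of the camera image
    depth    -- distance to the object along the optical axis (m)

    Returns a camera-frame velocity command (vx, vy, vz):
    lateral terms recenter the object in the image so it stays
    in the field of view; the forward term closes the depth gap.
    """
    cx, cy = img_size[0] / 2.0, img_size[1] / 2.0
    # Normalized pixel error of the object from the image center, in [-1, 1].
    ex = (obj_px[0] - cx) / cx
    ey = (obj_px[1] - cy) / cy
    # Proportional control: recenter laterally, approach along the optical axis.
    vx = k_center * ex
    vy = k_center * ey
    vz = k_approach * depth
    return (vx, vy, vz)
```

In EARL this hand-eye coordination is instead learned by reinforcement learning, which lets the policy trade off recentering against approach speed for unpredictably moving objects rather than following fixed gains.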