TR2025-091

RecoveryChaining: Learning Local Recovery Policies for Robust Manipulation


    •  Vats, S., Jha, D.K., Likhachev, M., Kroemer, O., Romeres, D., "RecoveryChaining: Learning Local Recovery Policies for Robust Manipulation", R3: Reasoning for Robust Robot Manipulation in the Open World Workshop at R:SS 2025, June 2025.
      BibTeX:
      @inproceedings{Vats2025jun,
        author = {Vats, Shivam and Jha, Devesh K. and Likhachev, Maxim and Kroemer, Oliver and Romeres, Diego},
        title = {{RecoveryChaining: Learning Local Recovery Policies for Robust Manipulation}},
        booktitle = {R3: Reasoning for Robust Robot Manipulation in the Open World Workshop at R:SS 2025},
        year = 2025,
        month = jun,
        url = {https://www.merl.com/publications/TR2025-091}
      }
Research Area: Robotics

Abstract:

Model-based planners and controllers are commonly used to solve complex manipulation problems, as they can efficiently optimize diverse objectives and generalize to long-horizon tasks. However, they are limited by the fidelity of their model, which often leads to failures during deployment. To enable a robot to recover from such failures, we propose to use hierarchical reinforcement learning to learn a separate recovery policy. The recovery policy is triggered when a failure is detected from sensory observations and seeks to take the robot to a state from which it can complete the task using the nominal model-based controllers. Our approach, called RecoveryChaining, uses a hybrid action space in which the model-based controllers are provided as additional nominal options; this allows the recovery policy to decide how to recover, when to switch to a nominal controller, and which controller to switch to, even with sparse rewards. We evaluate our approach on three multi-step manipulation tasks with sparse rewards, where it learns significantly more robust recovery policies than the baselines. Finally, we successfully transfer recovery policies learned in simulation to a physical robot, demonstrating the feasibility of sim-to-real transfer with our method.
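
To make the hybrid-action-space idea concrete, the sketch below (not the authors' implementation; names such as RecoveryAgent, NominalController, and env.execute_primitive are illustrative assumptions) shows how a learned recovery policy might choose at each step between executing a low-level recovery primitive and committing to one of the nominal model-based controllers, with the sparse reward granted only when a nominal controller completes the task.

    import random
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class NominalController:
        """Wraps a model-based controller that can finish the task from a suitable state."""
        name: str
        run: Callable[[object], bool]   # runs to termination on the given env; returns task success

    class RecoveryAgent:
        """Recovery policy with a hybrid discrete action space."""
        def __init__(self, num_primitives: int, nominal_controllers: List[NominalController]):
            # Action indices [0, num_primitives) are low-level recovery primitives;
            # indices [num_primitives, num_actions) mean "hand control to nominal controller k".
            self.num_primitives = num_primitives
            self.nominal = nominal_controllers
            self.num_actions = num_primitives + len(nominal_controllers)

        def select_action(self, observation) -> int:
            # Placeholder for the learned policy (e.g., a value or policy network trained
            # with the sparse task reward); uniform sampling is used here purely for illustration.
            return random.randrange(self.num_actions)

        def step(self, env, observation):
            a = self.select_action(observation)
            if a < self.num_primitives:
                # Execute a low-level recovery primitive; control stays with the recovery policy.
                next_obs = env.execute_primitive(a)        # assumed environment interface
                return next_obs, 0.0, False                # (observation, reward, done)
            # Otherwise commit to a nominal model-based controller: it runs to termination,
            # the episode ends, and the sparse reward is 1.0 only if the task was completed.
            controller = self.nominal[a - self.num_primitives]
            success = controller.run(env)
            return observation, (1.0 if success else 0.0), True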