TR2019-118

Unsupervised Joint 3D Object Model Learning and 6D Pose Estimation for Depth-Based Instance Segmentation


    •  Wu, Y., Marks, T.K., Cherian, A., Chen, S., Feng, C., Wang, G., Sullivan, A., "Unsupervised Joint 3D Object Model Learning and 6D Pose Estimation for Depth-Based Instance Segmentation", IEEE ICCV Workshop on Recovering 6D Object Pose, DOI: 10.1109/ICCVW.2019.00339, October 2019, pp. 2777-2786.
      @inproceedings{Wu2019oct,
        author = {Wu, Yuanwei and Marks, Tim K. and Cherian, Anoop and Chen, Siheng and Feng, Chen and Wang, Guanghui and Sullivan, Alan},
        title = {Unsupervised Joint 3D Object Model Learning and 6D Pose Estimation for Depth-Based Instance Segmentation},
        booktitle = {IEEE ICCV Workshop on Recovering 6D Object Pose},
        year = 2019,
        pages = {2777--2786},
        month = oct,
        doi = {10.1109/ICCVW.2019.00339},
        url = {https://www.merl.com/publications/TR2019-118}
      }
  • Research Area: Computer Vision

Abstract:

In this work, we propose a novel unsupervised approach to jointly learn the 3D object model and estimate the 6D poses of multiple instances of the same object, with applications to depth-based instance segmentation. The inputs are depth images, and the learned object model is represented by a 3D point cloud. Traditional 6D pose estimation approaches are not sufficient to address this problem, where neither a CAD model of the object nor the ground-truth 6D poses of its instances are available during training. To solve this problem, we propose to jointly optimize the model learning and pose estimation in an end-to-end deep learning framework. Specifically, our network produces a 3D object model and a list of rigid transformations on this model to generate instances, which, when rendered, must match the observed point cloud by minimizing the Chamfer distance. To render the set of instance point clouds with occlusions, the network automatically removes the occluded points in a given camera view. Extensive experiments evaluate our technique on several object models and varying numbers of instances in 3D point clouds. We demonstrate the application of our method to instance segmentation of depth images of small bins of industrial parts. Compared with popular baselines for instance segmentation, our model not only demonstrates competitive performance, but also learns a 3D object model that is represented as a 3D point cloud.
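
To make the core idea concrete, below is a minimal PyTorch sketch (not the authors' network or implementation) of the underlying joint optimization: a shared object point cloud and one rigid transform per instance are fitted so that the union of the transformed copies matches an observed scene point cloud under a symmetric Chamfer distance. All function and variable names are illustrative, and the occlusion-aware rendering and the deep network that predicts the model and poses from depth images are omitted.

import torch

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)                                    # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def rotation_from_axis_angle(w):
    """Differentiable 3x3 rotation matrix from an axis-angle vector w (3,)."""
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([torch.stack([zero, -w[2],  w[1]]),
                     torch.stack([w[2],  zero, -w[0]]),
                     torch.stack([-w[1], w[0],  zero])])     # skew-symmetric matrix of w
    return torch.linalg.matrix_exp(K)                        # Rodrigues formula via matrix exponential

def fit_model_and_poses(scene, num_instances, num_model_points=256, steps=2000, lr=1e-2):
    """Jointly fit a shared object point cloud and per-instance 6D poses to a scene.

    scene: (S, 3) observed point cloud back-projected from a depth image.
    Returns the learned model (num_model_points, 3) and per-instance rotations/translations.
    """
    model = (0.05 * torch.randn(num_model_points, 3)).requires_grad_(True)
    axis_angles = torch.zeros(num_instances, 3, requires_grad=True)
    # Initialize each instance translation at a randomly chosen scene point.
    translations = scene[torch.randint(len(scene), (num_instances,))].clone().requires_grad_(True)
    optimizer = torch.optim.Adam([model, axis_angles, translations], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        instances = [model @ rotation_from_axis_angle(axis_angles[i]).T + translations[i]
                     for i in range(num_instances)]
        rendered = torch.cat(instances, dim=0)                # union of all transformed copies
        loss = chamfer_distance(rendered, scene)
        loss.backward()
        optimizer.step()

    return model.detach(), axis_angles.detach(), translations.detach()

Given fitted poses, an instance segmentation of the observed depth image can then be read off by assigning each observed point to the nearest transformed copy of the learned model.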