TR2022-055

Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints


    •  Zhu, X., Jain, S., Tomizuka, M., van Baar, J., "Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints", IEEE International Conference on Robotics and Automation (ICRA), DOI: 10.1109/ICRA46639.2022.9812092, May 2022, pp. 4833-4839.
      BibTeX TR2022-055 PDF
      @inproceedings{Zhu2022may2,
        author = {Zhu, Xinghao and Jain, Siddarth and Tomizuka, Masayoshi and van Baar, Jeroen},
        title = {Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints},
        booktitle = {2022 IEEE International Conference on Robotics and Automation (ICRA)},
        year = 2022,
        pages = {4833--4839},
        month = may,
        publisher = {IEEE},
        doi = {10.1109/ICRA46639.2022.9812092},
        isbn = {978-1-7281-9681-7},
        url = {https://www.merl.com/publications/TR2022-055}
      }
  • Research Areas:

    Artificial Intelligence, Computer Vision, Robotics

Abstract:

Vision-based tactile sensors typically utilize a deformable elastomer and a camera mounted above to provide high-resolution image observations of contacts. Obtaining accurate volumetric meshes for the deformed elastomer can provide direct contact information and benefit robotic grasping and manipulation. This paper focuses on learning to synthesize the volumetric mesh of the elastomer based on the image imprints acquired from vision-based tactile sensors. Synthetic image-mesh pairs and real-world images are gathered from 3D finite element methods (FEM) and physical sensors, respectively. A graph neural network (GNN) is introduced to learn the image-to-mesh mappings with supervised learning. A self-supervised adaptation method and image augmentation techniques are proposed to transfer networks from simulation to reality, from primitive contacts to unseen contacts, and from one sensor to another. Using these learned and adapted networks, our proposed method can accurately reconstruct the deformation of the real-world tactile sensor elastomer in various domains, as indicated by the quantitative and qualitative results.
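The abstract's core idea, regressing per-vertex mesh displacements from image features via graph message passing supervised by FEM ground truth, can be illustrated with a minimal sketch. This is a hypothetical toy example in NumPy, not the paper's architecture: the graph, feature dimensions, single message-passing step, and weight matrices are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: per-vertex features (e.g. tactile-image intensities
# sampled at projected mesh vertices) are refined by one graph message-passing
# step, then mapped to 3D vertex displacements. Shapes and names are
# illustrative, not taken from the paper.

rng = np.random.default_rng(0)
n_vertices, feat_dim = 5, 4

# Adjacency of a small mesh graph (symmetric, no self-loops).
adj = np.zeros((n_vertices, n_vertices))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    adj[i, j] = adj[j, i] = 1.0

# Per-vertex input features, standing in for image observations.
x = rng.normal(size=(n_vertices, feat_dim))

# One message-passing step: average neighbor features, then a learned
# linear update followed by a nonlinearity.
deg = adj.sum(axis=1, keepdims=True)
neighbor_mean = adj @ x / deg
w_self = rng.normal(size=(feat_dim, feat_dim)) * 0.1
w_neigh = rng.normal(size=(feat_dim, feat_dim)) * 0.1
h = np.tanh(x @ w_self + neighbor_mean @ w_neigh)

# Readout: map refined features to per-vertex 3D displacements.
w_out = rng.normal(size=(feat_dim, 3)) * 0.1
pred_disp = h @ w_out                      # shape (n_vertices, 3)

# Supervised loss against FEM-provided ground-truth displacements,
# as in the image-mesh pairs described in the abstract.
gt_disp = rng.normal(size=(n_vertices, 3)) * 0.01
mse = float(np.mean((pred_disp - gt_disp) ** 2))
```

In the paper's setting the weights would be trained on many synthetic image-mesh pairs and then adapted to real sensor images; here the single forward pass only shows the data flow from image features to mesh displacements and the MSE objective.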


  • Related Publication

  •  Zhu, X., Jain, S., Tomizuka, M., van Baar, J., "Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints", arXiv, DOI: 10.1109/ICRA46639.2022.9812092, March 2022.
    BibTeX arXiv
    @article{Zhu2022mar,
      author = {Zhu, Xinghao and Jain, Siddarth and Tomizuka, Masayoshi and van Baar, Jeroen},
      title = {Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints},
      journal = {arXiv},
      year = 2022,
      month = mar,
      doi = {10.1109/ICRA46639.2022.9812092},
      url = {https://ieeexplore.ieee.org/document/9812092}
    }