TR2022-083

PointMotionNet: Point-Wise Motion Learning for Large-Scale LiDAR Point Clouds Sequences


    •  Sullivan, A., Wang, J., Li, X., Chen, S., Abbot, L., "PointMotionNet: Point-Wise Motion Learning for Large-Scale LiDAR Point Clouds Sequences", CVPR Workshop on Autonomous Driving, June 2022.
      @inproceedings{Sullivan2022jun,
        author = {Sullivan, Alan and Wang, Jun and Li, Xiaolong and Chen, Siheng and Abbot, Lynn},
        title = {PointMotionNet: Point-Wise Motion Learning for Large-Scale LiDAR Point Clouds Sequences},
        booktitle = {CVPR Workshop on Autonomous Driving},
        year = 2022,
        month = jun,
        url = {https://www.merl.com/publications/TR2022-083}
      }
  • Research Areas: Artificial Intelligence, Computer Vision, Machine Learning

Abstract:

We propose a point-based spatiotemporal pyramid architecture, called PointMotionNet, to learn motion information from a sequence of large-scale 3D LiDAR point clouds. A core component of PointMotionNet is a novel technique for point-based spatiotemporal convolution, which finds point correspondences across time by leveraging a time-invariant spatial neighboring space and extracts spatiotemporal features. To validate PointMotionNet, we consider two motion-related tasks: point-based motion prediction and multisweep semantic segmentation. For each task, we design an end-to-end system in which PointMotionNet is the core module that learns motion information. We conduct extensive experiments and show that i) for point-based motion prediction, PointMotionNet achieves less than 0.5 m mean squared error on the Argoverse dataset, a significant improvement over existing methods; and ii) for multisweep semantic segmentation, PointMotionNet with a pretrained segmentation backbone outperforms the previous state of the art by over 3.3% mIoU on the SemanticKITTI dataset with 25 classes, including 6 moving-object classes.
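To make the core idea concrete, below is a minimal illustrative sketch (NumPy) of a point-based spatiotemporal convolution in the spirit described above: for each point in the latest LiDAR sweep, neighbors are gathered from every sweep in the sequence within the same fixed spatial radius (a time-invariant spatial neighborhood), and a shared linear layer with ReLU plus max-pooling aggregates the resulting spatiotemporal features. All function and variable names are hypothetical; this is not the authors' implementation.

```python
import numpy as np

def spatiotemporal_conv(frames, radius=1.0, weight=None, rng=None):
    """Toy point-based spatiotemporal convolution (illustrative only).

    frames: list of (N_t, 3) point arrays, ordered oldest to newest.
    Returns a (N_last, out_dim) feature array for the most recent frame.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    query = frames[-1]                       # points of the latest sweep
    in_dim, out_dim = 4, 16                  # (dx, dy, dz, dt) -> feature
    if weight is None:
        weight = rng.standard_normal((in_dim, out_dim)) * 0.1

    feats = np.zeros((len(query), out_dim))
    for i, p in enumerate(query):
        pooled = np.full(out_dim, -np.inf)
        # dt = 0 is the current sweep; earlier sweeps get larger dt.
        for dt, frame in enumerate(reversed(frames)):
            diff = frame - p                 # relative spatial offsets
            # Same radius at every time step: the time-invariant neighborhood.
            mask = np.linalg.norm(diff, axis=1) < radius
            if not mask.any():
                continue
            # Stack spatial offset and time offset into a (M, 4) input.
            rel = np.concatenate(
                [diff[mask], np.full((mask.sum(), 1), float(dt))], axis=1)
            # Shared linear layer + ReLU, then max-pool over neighbors.
            pooled = np.maximum(pooled, np.maximum(rel @ weight, 0).max(axis=0))
        feats[i] = np.where(np.isfinite(pooled), pooled, 0.0)
    return feats

# Toy usage: three consecutive sweeps of random points.
rng = np.random.default_rng(1)
sweeps = [rng.uniform(-5, 5, size=(200, 3)) for _ in range(3)]
print(spatiotemporal_conv(sweeps, radius=1.0, rng=rng).shape)  # (200, 16)
```

A real system would replace the brute-force radius search with an efficient spatial index and learn the weights end to end; the sketch only shows how a fixed spatial neighborhood reused across time steps lets per-point features encode motion across the sequence.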