TR2017-146

Direct Multichannel Tracking


    •  Jaramillo, C., Taguchi, Y., Feng, C., "Direct Multichannel Tracking", International Conference on 3D Vision, October 2017.
      @inproceedings{Jaramillo2017oct,
        author = {Jaramillo, Carlos and Taguchi, Yuichi and Feng, Chen},
        title = {Direct Multichannel Tracking},
        booktitle = {International Conference on 3D Vision},
        year = 2017,
        month = oct,
        url = {https://www.merl.com/publications/TR2017-146}
      }
  • Research Area: Computer Vision

Abstract:

We present direct multichannel tracking, an algorithm for tracking the pose of a monocular camera (visual odometry) using high-dimensional features in a direct image alignment framework. Instead of using a single grayscale channel and assuming intensity constancy as in existing approaches, we extract multichannel features at each pixel of each image and assume feature constancy among consecutive images. High-dimensional features are more discriminative and more robust to noise and image variations than raw intensities, enabling more accurate camera tracking. We demonstrate our claim using conventional hand-crafted features such as SIFT as well as more recent features extracted from convolutional neural networks (CNNs), such as Siamese and AlexNet architectures. We evaluate the performance of our algorithm against the baseline case (single-channel tracking) using several public datasets, where the AlexNet features provide the best pose estimation results.
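To illustrate the core idea of the abstract, the following minimal Python sketch (not the authors' code) shows a multichannel feature-constancy residual for direct image alignment: each pixel carries a C-dimensional feature vector, and the alignment cost compares the reference feature map to the current one under a candidate motion. For brevity the motion here is a hypothetical 2D pixel translation rather than a full SE(3) camera pose, and the feature extraction (SIFT or a CNN) is abstracted as a synthetic multichannel map.

    import numpy as np


    def multichannel_residual(f_ref, f_cur, shift):
        """Feature-constancy residuals between f_ref and f_cur under a translation.

        f_ref, f_cur : (H, W, C) feature maps (C = 1 reproduces grayscale tracking)
        shift        : (dy, dx) integer translation standing in for the camera motion
        """
        dy, dx = shift
        h, w, _ = f_ref.shape
        # Crop both maps to the region that overlaps after the translation.
        ref = f_ref[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
        cur = f_cur[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
        # The residual vector stacks all channels at all overlapping pixels.
        return (ref - cur).ravel()


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic 8-channel "feature image" and a copy shifted by (1, 2) pixels.
        f_ref = rng.standard_normal((64, 64, 8))
        true_shift = (1, 2)
        f_cur = np.roll(f_ref, true_shift, axis=(0, 1))

        # Brute-force search over small translations: the feature-constancy cost
        # is minimized at the true motion, mimicking what an iterative direct
        # alignment (e.g. Gauss-Newton over SE(3)) would converge to.
        best = min(
            ((dy, dx) for dy in range(-3, 4) for dx in range(-3, 4)),
            key=lambda s: np.mean(multichannel_residual(f_ref, f_cur, s) ** 2),
        )
        print("estimated shift:", best, "true shift:", true_shift)

In the paper's setting, the squared norm of this residual (summed over pixels and feature channels) replaces the single-channel photometric error, and the camera pose is estimated by minimizing it over SE(3) rather than over a 2D translation.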

