TR2016-022
Unsupervised Network Pretraining via Encoding Human Design
- "Unsupervised Network Pretraining via Encoding Human Design", IEEE Winter Conference on Applications of Computer Vision (WACV), DOI: 10.1109/WACV.2016.7477698, March 2016, pp. 1-9.BibTeX TR2016-022 PDF
@inproceedings{Liu2016mar,
  author    = {Liu, Ming-Yu and Mallya, Arun and Tuzel, C. Oncel and Chen, Xi},
  title     = {Unsupervised Network Pretraining via Encoding Human Design},
  booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year      = 2016,
  pages     = {1--9},
  month     = mar,
  doi       = {10.1109/WACV.2016.7477698},
  url       = {https://www.merl.com/publications/TR2016-022}
}
Abstract:
Over the years, computer vision researchers have spent an immense amount of effort on designing image features for the visual object recognition task. We propose to incorporate this valuable experience to guide the training of deep neural networks. Our idea is to pretrain the network on the task of replicating the process of hand-designed feature extraction. By learning to replicate this process, the neural network integrates previous research knowledge and learns to model visual objects in a way similar to the hand-designed features. In the succeeding finetuning step, it further learns object-specific representations from labeled data, which boosts its classification power. We pretrain two convolutional neural networks: one replicates the process of histogram of oriented gradients (HOG) feature extraction, and the other replicates the process of region covariance feature extraction. After finetuning, we achieve substantially better performance than the baseline methods.
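The pretraining recipe described in the abstract (regress a hand-designed descriptor from unlabeled images, then swap the regression head for a classifier and finetune on labeled data) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the small CNN architecture, optimizer settings, and random placeholder data are assumptions, and scikit-image's hog function stands in for the HOG extraction process the paper replicates.

# Minimal sketch (assumed details, not the paper's code): pretrain a small CNN
# to regress HOG descriptors of unlabeled images, then replace the regression
# head with a classifier and finetune on labeled data. Requires PyTorch and
# scikit-image; the architecture, hyperparameters, and random data below are
# illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import hog

def hog_target(image_32x32):
    # Hand-designed HOG descriptor that the network learns to replicate.
    return hog(image_32x32, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

hog_dim = hog_target(np.zeros((32, 32))).shape[0]

class SmallCNN(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 8 * 8, out_dim)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Pretraining: learn to replicate HOG extraction on unlabeled images.
net = SmallCNN(out_dim=hog_dim)
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
unlabeled = np.random.rand(64, 32, 32).astype(np.float32)   # placeholder images
targets = torch.tensor(np.stack([hog_target(im) for im in unlabeled]),
                       dtype=torch.float32)
images = torch.tensor(unlabeled).unsqueeze(1)                # shape (N, 1, 32, 32)
for _ in range(5):                                           # a few illustrative epochs
    opt.zero_grad()
    nn.functional.mse_loss(net(images), targets).backward()
    opt.step()

# Finetuning: replace the regression head with a classifier and train on labels.
num_classes = 10                                             # placeholder label count
net.head = nn.Linear(64 * 8 * 8, num_classes)
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
labels = torch.randint(0, num_classes, (64,))                # placeholder labels
for _ in range(5):
    opt.zero_grad()
    nn.functional.cross_entropy(net(images), labels).backward()
    opt.step()

In the same spirit, the second network described in the abstract would use a region covariance descriptor as the pretraining target instead of HOG; only the target function changes.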