TR2022-075
An Empirical Analysis of Boosting Deep Networks

Rambhatla, S., Jones, M. J., and Chellappa, R., "An Empirical Analysis of Boosting Deep Networks", International Joint Conference on Neural Networks (IJCNN), DOI: 10.1109/IJCNN55064.2022.9892204, July 2022.
@inproceedings{Rambhatla2022jul,
  author = {Rambhatla, Sai and Jones, Michael J. and Chellappa, Rama},
  title = {An Empirical Analysis of Boosting Deep Networks},
  booktitle = {International Joint Conference on Neural Networks (IJCNN)},
  year = 2022,
  month = jul,
  doi = {10.1109/IJCNN55064.2022.9892204},
  url = {https://www.merl.com/publications/TR2022-075}
}
Abstract:
Boosting is a method for finding a highly accurate classifier by linearly combining many “weak” classifiers, each of which may be only moderately accurate. Thus, boosting is a method for learning an ensemble of classifiers. While boosting has been shown to be very effective for decision trees, its impact on neural networks has not been extensively studied. Using standard object recognition datasets, we verify experimentally the well-known result that a boosted ensemble of decision trees usually generalizes much better on test data than a single decision tree with the same number of parameters. In contrast, using the same datasets and boosting algorithms, our experiments show the opposite to be true when using neural networks (both convolutional neural networks (CNNs) and multilayer perceptrons (MLPs)). We find that a single neural network usually generalizes better than a boosted ensemble of smaller neural networks with the same total number of parameters. While this is an experimental investigation, more theoretical research is warranted to understand the role of boosting in deep learning-based classifiers.
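
As an illustration of the kind of comparison the abstract describes, the following minimal sketch trains a single decision tree and an AdaBoost ensemble of shallower "weak" trees and compares their test accuracy. This is not the authors' code: the scikit-learn digits dataset, the tree depths, and the ensemble size are illustrative assumptions, whereas the paper's experiments use standard object recognition datasets and parameter-matched models.

# Minimal sketch (not the authors' code): compare a single decision tree
# with an AdaBoost ensemble of shallower "weak" trees. The dataset, depths,
# and ensemble size below are illustrative assumptions only.
from sklearn.datasets import load_digits
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# A single, relatively deep decision tree.
single_tree = DecisionTreeClassifier(max_depth=12, random_state=0)
single_tree.fit(X_train, y_train)

# A boosted ensemble of shallower trees, linearly combined by AdaBoost.
# (In scikit-learn versions before 1.2 the keyword is base_estimator.)
boosted_trees = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3, random_state=0),
    n_estimators=50,
    random_state=0)
boosted_trees.fit(X_train, y_train)

print("single tree   test accuracy:", single_tree.score(X_test, y_test))
print("boosted trees test accuracy:", boosted_trees.score(X_test, y_test))

The same comparison for neural networks would replace the weak learners with small MLPs or CNNs whose total parameter count matches a single larger network, which is the setting in which the paper reports the opposite outcome.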