TR2022-098

AutoTransfer: Subject Transfer Learning with Censored Representations on Biosignals Data


Abstract:

We investigate a regularization framework for subject transfer learning in which we train an encoder and classifier to minimize classification loss, subject to a penalty measuring independence between the latent representation and the subject label. We introduce three notions of independence and corresponding penalty terms using mutual information or divergence as a proxy for independence. For each penalty term, we provide several concrete estimation algorithms, using analytic methods as well as neural critic functions. We propose a hands-off strategy for applying this diverse family of regularization schemes to a new dataset, which we call "AutoTransfer". We evaluate the performance of these individual regularization strategies under our AutoTransfer framework on EEG, EMG, and ECoG datasets, showing that these approaches can improve subject transfer learning for challenging real-world datasets.
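The penalized objective described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the encoder, classifier, and data are toy stand-ins, and the `subject_penalty` below uses a simple distance between per-subject latent means as a crude proxy for the mutual-information and divergence penalties the paper actually estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # toy one-layer encoder producing the latent representation
    return np.tanh(x @ W)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, y):
    # standard classification loss on the task labels
    return -np.log(probs[np.arange(len(y)), y] + 1e-12).mean()

def subject_penalty(z, s):
    # crude independence proxy: spread of per-subject latent means.
    # (stands in for the MI / divergence estimators in the paper)
    means = np.stack([z[s == k].mean(axis=0) for k in np.unique(s)])
    return float(((means - means.mean(axis=0)) ** 2).sum())

# toy biosignal-like data: 100 trials, 8 features, 2 classes, 2 subjects
X = rng.normal(size=(100, 8))
y = rng.integers(0, 2, size=100)   # task labels
s = rng.integers(0, 2, size=100)   # subject labels

W_enc = rng.normal(scale=0.1, size=(8, 4))
W_clf = rng.normal(scale=0.1, size=(4, 2))

z = encode(X, W_enc)
probs = softmax(z @ W_clf)
lam = 1.0  # hypothetical penalty weight
total_loss = cross_entropy(probs, y) + lam * subject_penalty(z, s)
print(total_loss)
```

In the full framework, `subject_penalty` would be replaced by one of the analytic or neural-critic estimators, and both terms would be minimized jointly over the encoder and classifier parameters.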