Learning Invariant Representations from EEG via Adversarial Inference

    •  Ozdenizci, O., Wang, Y., Koike-Akino, T., Erdogmus, D., "Learning Invariant Representations from EEG via Adversarial Inference", IEEE Access, DOI: 10.1109/ACCESS.2020.2971600, Vol. 8, pp. 27074-27085, April 2020.
      BibTeX TR2020-049 PDF
      @article{Ozdenizci2020apr,
        author = {Ozdenizci, Ozan and Wang, Ye and Koike-Akino, Toshiaki and Erdogmus, Deniz},
        title = {Learning Invariant Representations from EEG via Adversarial Inference},
        journal = {IEEE Access},
        year = 2020,
        volume = 8,
        pages = {27074--27085},
        month = apr,
        doi = {10.1109/ACCESS.2020.2971600},
        issn = {2169-3536}
      }
  • Research Areas: Artificial Intelligence, Machine Learning


Discovering and exploiting shared, invariant neural activity in electroencephalogram (EEG) based classification tasks is of significant interest for the generalizability of decoding models across subjects or EEG recording sessions. While deep neural networks have recently emerged as generic EEG feature extractors, this transfer learning aspect usually relies on the prior assumption that deep networks naturally behave as subject- (or session-) invariant EEG feature extractors. We take a further step towards enforcing invariance of EEG deep learning frameworks in a systematic way during model training. We introduce an adversarial inference approach to learn representations that are invariant to inter-subject variabilities within a discriminative setting. We perform experimental studies using a publicly available motor imagery EEG dataset and state-of-the-art convolutional neural network based EEG decoding models within the proposed adversarial learning framework. We present our results in cross-subject model transfer scenarios, demonstrate neurophysiological interpretations of the learned networks, and discuss potential insights offered by adversarial inference to the growing field of deep learning for EEG.
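The adversarial training idea described in the abstract can be illustrated with a minimal numerical sketch: a task classifier's loss is minimized while an adversary's subject-identification loss is maximized, pushing subject identity out of the learned features. This is only an illustration of the objective, not the authors' implementation; the class counts, probabilities, and trade-off weight `lam` below are all hypothetical.

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the true label under predicted probabilities."""
    return -np.log(probs[label])

# Hypothetical outputs for one EEG trial (illustrative values only):
task_probs = np.array([0.7, 0.3])                   # classifier over 2 motor-imagery classes
subject_probs = np.array([0.25, 0.25, 0.25, 0.25])  # adversary over 4 subjects

task_loss = cross_entropy(task_probs, 0)   # true class is 0
adv_loss = cross_entropy(subject_probs, 2) # true subject is 2

lam = 0.1  # adversarial trade-off weight (a tunable hyperparameter)

# The encoder/classifier minimize the task loss while *maximizing* the
# adversary's loss, so subject-discriminative information is penalized:
encoder_objective = task_loss - lam * adv_loss
```

A uniform adversary output (as above) means the features carry no usable subject information, which is the fixed point such training aims for; in practice the adversary and encoder are updated alternately with gradient-based optimization.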


  • Related News & Events

    •  AWARD   MERL Ranked 1st Place in Cross-Subject Transfer Learning Task and 4th Place Overall at the NeurIPS 2021 BEETL Competition for EEG Transfer Learning.
      Date: November 11, 2021
      Awarded to: Niklas Smedemark-Margulies, Toshiaki Koike-Akino, Ye Wang, Deniz Erdogmus
      MERL Contacts: Toshiaki Koike-Akino; Ye Wang
      Research Areas: Artificial Intelligence, Signal Processing, Human-Computer Interaction
      • The MERL Signal Processing group achieved first place in the cross-subject transfer learning task and fourth place overall in the NeurIPS 2021 BEETL AI Challenge for EEG Transfer Learning. The team included Niklas Smedemark-Margulies (intern from Northeastern University), Toshiaki Koike-Akino, Ye Wang, and Prof. Deniz Erdogmus (Northeastern University). The challenge addressed two types of transfer learning tasks for EEG biosignals: a homogeneous transfer learning task for cross-subject domain adaptation, and a heterogeneous transfer learning task for cross-dataset domain adaptation. Among 110+ registered teams, MERL ranked 1st in the homogeneous transfer learning task, 7th in the heterogeneous transfer learning task, and 4th in the combined overall score. For the homogeneous transfer learning task, MERL developed a new pre-shot learning framework based on feature disentanglement techniques for robustness against inter-subject variation, enabling calibration-free brain-computer interfaces (BCI). MERL was invited to present the pre-shot learning technique at the NeurIPS 2021 workshop.