Research Software — SOCKET

SOurce-free Cross-modal KnowledgE Transfer: transferring knowledge from neural networks trained on a source sensor modality without access to the task-relevant source data.

SOCKET transfers knowledge from neural networks trained on a source sensor modality (such as RGB), for which large amounts of annotated data may be available, to an unannotated target dataset from a different sensor modality (such as infrared or depth). It uses task-irrelevant paired source-target images to promote feature alignment between the two modalities, and it matches the distribution of target features to the source batch-norm statistics (mean and variance).
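The two objectives described above can be sketched as simple losses. This is a minimal illustrative sketch in NumPy, not the authors' implementation: `bn_matching_loss` assumes the source model's stored batch-norm running mean and variance are available per channel, and `paired_alignment_loss` assumes precomputed feature vectors for task-irrelevant paired source/target images. All function and variable names here are hypothetical.

```python
import numpy as np


def bn_matching_loss(target_feats, source_mean, source_var):
    """Penalize mismatch between the batch statistics of target features
    and the source model's stored batch-norm statistics (hypothetical sketch).

    target_feats: (batch, channels) array of target-domain features.
    source_mean, source_var: (channels,) running statistics from the source model.
    """
    t_mean = target_feats.mean(axis=0)
    t_var = target_feats.var(axis=0)
    return float(np.sum((t_mean - source_mean) ** 2)
                 + np.sum((t_var - source_var) ** 2))


def paired_alignment_loss(source_feats, target_feats):
    """Encourage features of task-irrelevant paired images from the two
    modalities to agree (mean squared distance; hypothetical sketch)."""
    return float(np.mean((source_feats - target_feats) ** 2))


# Toy usage: a target batch whose per-channel mean is [1, 1] and variance
# is [1, 1], compared against matching source statistics.
feats = np.array([[0.0, 2.0], [2.0, 0.0]])
print(bn_matching_loss(feats, np.array([1.0, 1.0]), np.array([1.0, 1.0])))
print(paired_alignment_loss(feats, feats))
```

In the actual method these terms would be computed per batch-norm layer of the source network and combined with a task loss; the sketch only shows the shape of the two matching objectives.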

  •  Ahmed, S.M., Lohit, S., Peng, K.-C., Jones, M.J., Roy Chowdhury, A.K., "Cross-Modal Knowledge Transfer Without Task-Relevant Source Data", European Conference on Computer Vision (ECCV), October 2022.
    @inproceedings{Ahmed2022oct,
      author    = {Ahmed, Sk Miraj and Lohit, Suhas and Peng, Kuan-Chuan and Jones, Michael J. and Roy Chowdhury, Amit K.},
      title     = {Cross-Modal Knowledge Transfer Without Task-Relevant Source Data},
      booktitle = {European Conference on Computer Vision (ECCV)},
      year      = {2022},
      month     = oct,
      url       = {https://www.merl.com/publications/TR2022-135}
    }

The software is available at https://github.com/merlresearch/SOCKET.