NEWS    Anoop Cherian gave an invited talk at the Multi-modal Video Analysis Workshop, ECCV 2020

Date released: September 3, 2020


  • Date: August 23, 2020

  • Where: European Conference on Computer Vision (ECCV), online, 2020

  • Description: MERL Principal Research Scientist Anoop Cherian gave an invited talk titled "Sound2Sight: Audio-Conditioned Visual Imagination" at the Multi-modal Video Analysis workshop, held in conjunction with the European Conference on Computer Vision (ECCV) 2020. The talk was based on a recent ECCV paper that introduces a new multimodal reasoning task, Sound2Sight, together with a generative adversarial machine learning algorithm for producing plausible video sequences conditioned on sound and visual context (a minimal illustrative sketch of such audio-conditioned frame prediction appears after the publication entry below).

  • External Link: https://sites.google.com/view/multimodalvideo-v2/home

  • MERL Contact: Anoop Cherian
  • Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio

  • Related Publication:

    Cherian, A., Chatterjee, M., Ahuja, N., "Sound2Sight: Generating Visual Dynamics from Sound and Context", in European Conference on Computer Vision (ECCV), A. Vedaldi, H. Bischof, Th. Brox, and J.-M. Frahm, Eds., Springer, August 2020. MERL Technical Report TR2020-121.

    @inproceedings{Cherian2020aug,
      author    = {Cherian, Anoop and Chatterjee, Moitreya and Ahuja, Narendra},
      title     = {Sound2Sight: Generating Visual Dynamics from Sound and Context},
      booktitle = {European Conference on Computer Vision (ECCV)},
      year      = 2020,
      editor    = {Vedaldi, A. and Bischof, H. and Brox, Th. and Frahm, J.-M.},
      month     = aug,
      publisher = {Springer},
      url       = {https://www.merl.com/publications/TR2020-121}
    }
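
  • Illustrative Sketch:

    The following is a minimal, hypothetical PyTorch sketch of the general idea described above: a generator predicts the next video frame from past frames plus audio features, and a discriminator scores the plausibility of the result. This is not the Sound2Sight model or MERL's released code; all module names, dimensions, and the toy training step are assumptions made purely for illustration.

    # Hypothetical sketch of audio-conditioned next-frame prediction with an
    # adversarial loss. Not the Sound2Sight implementation; sizes and modules
    # are illustrative assumptions.
    import torch
    import torch.nn as nn

    class FrameGenerator(nn.Module):
        """Predicts the next frame from encodings of past frames and the audio track."""
        def __init__(self, frame_dim=64 * 64 * 3, audio_dim=128, hidden_dim=256):
            super().__init__()
            self.frame_enc = nn.Sequential(nn.Linear(frame_dim, hidden_dim), nn.ReLU())
            self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
            self.rnn = nn.GRU(hidden_dim * 2, hidden_dim, batch_first=True)
            self.decoder = nn.Sequential(nn.Linear(hidden_dim, frame_dim), nn.Sigmoid())

        def forward(self, past_frames, audio_feats):
            # past_frames: (B, T, frame_dim), audio_feats: (B, T, audio_dim)
            h = torch.cat([self.frame_enc(past_frames), self.audio_enc(audio_feats)], dim=-1)
            out, _ = self.rnn(h)
            return self.decoder(out[:, -1])  # predicted next frame, (B, frame_dim)

    class FrameDiscriminator(nn.Module):
        """Scores whether a (context frame, next frame) pair looks plausible."""
        def __init__(self, frame_dim=64 * 64 * 3, hidden_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(frame_dim * 2, hidden_dim), nn.LeakyReLU(0.2),
                nn.Linear(hidden_dim, 1))

        def forward(self, last_context_frame, next_frame):
            return self.net(torch.cat([last_context_frame, next_frame], dim=-1))

    # Toy adversarial training step on random tensors, just to show the data flow.
    B, T, frame_dim, audio_dim = 4, 5, 64 * 64 * 3, 128
    gen, disc = FrameGenerator(frame_dim, audio_dim), FrameDiscriminator(frame_dim)
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    past = torch.rand(B, T, frame_dim)
    audio = torch.rand(B, T, audio_dim)
    real_next = torch.rand(B, frame_dim)

    # Discriminator step: real pairs should score 1, generated pairs 0.
    fake_next = gen(past, audio).detach()
    d_loss = (bce(disc(past[:, -1], real_next), torch.ones(B, 1)) +
              bce(disc(past[:, -1], fake_next), torch.zeros(B, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator while staying close to the real next frame.
    pred_next = gen(past, audio)
    g_loss = (bce(disc(past[:, -1], pred_next), torch.ones(B, 1)) +
              nn.functional.mse_loss(pred_next, real_next))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()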