Software & Data Downloads — Sound2Sight

Sound2Sight generates video frames and their motion dynamics conditioned on audio and a few past frames.

Learning associations across modalities is critical for robust multimodal reasoning, especially when a modality may be missing during inference. In this paper, we study this problem in the context of audio-conditioned visual synthesis -- a task that is important, for example, in occlusion reasoning. Specifically, our goal is to generate video frames and their motion dynamics conditioned on audio and a few past frames. To tackle this problem, we present Sound2Sight, a deep variational framework that is trained to learn a per-frame stochastic prior conditioned on a joint embedding of audio and past frames. This embedding is learned via a multi-head attention-based audio-visual transformer encoder. The learned prior is then sampled to further condition a video forecasting module to generate future frames. The stochastic prior allows the model to sample multiple plausible futures that are consistent with the provided audio and the past context. Moreover, to improve the quality and coherence of the generated frames, we propose a multimodal discriminator that differentiates between a synthesized and a real audio-visual clip. In this software, we provide a PyTorch implementation of our algorithm. We also provide the code to generate our synthetic MNIST dataset.
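To illustrate the stochastic-prior idea described above, the following is a minimal sketch (not taken from the released code; all names and shapes are hypothetical) of reparameterized sampling from a per-frame Gaussian prior. In Sound2Sight the prior's mean and variance come from the audio-visual transformer encoder; here they are stand-in arrays. Drawing several latent samples is what lets the forecasting module produce multiple plausible futures from the same audio and past-frame context.

```python
import numpy as np

def sample_prior(mu, log_var, n_samples, rng):
    """Draw n_samples latents z ~ N(mu, diag(exp(log_var))) using the
    reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal((n_samples,) + mu.shape)
    return mu + sigma * eps

# Hypothetical per-frame prior parameters, as would be produced by the
# audio-visual encoder for one future frame (latent dimension 8).
rng = np.random.default_rng(0)
mu = np.zeros(8)            # latent mean
log_var = np.full(8, -1.0)  # latent log-variance

# Each sample would condition the forecasting module on a different
# latent, yielding a different but audio-consistent future.
z = sample_prior(mu, log_var, n_samples=5, rng=rng)
print(z.shape)  # (5, 8)
```

In the actual model the same trick is applied per frame with learned, context-dependent parameters, and gradients flow through `mu` and `log_var` because the noise is factored out into `eps`.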

  •  Cherian, A., Chatterjee, M., Ahuja, N., "Sound2Sight: Generating Visual Dynamics from Sound and Context", European Conference on Computer Vision (ECCV), Vedaldi, A. and Bischof, H. and Brox, Th. and Frahm, J.-M., Eds., August 2020.
    @inproceedings{Cherian2020aug,
      author = {Cherian, Anoop and Chatterjee, Moitreya and Ahuja, Narendra},
      title = {Sound2Sight: Generating Visual Dynamics from Sound and Context},
      booktitle = {European Conference on Computer Vision (ECCV)},
      year = 2020,
      editor = {Vedaldi, A. and Bischof, H. and Brox, Th. and Frahm, J.-M.},
      month = aug,
      publisher = {Springer},
      url = {https://www.merl.com/publications/TR2020-121}
    }

Access software at https://github.com/merlresearch/Sound2Sight.