TR2022-140

Learning Audio-Visual Dynamics Using Scene Graphs for Audio Source Separation


    •  Chatterjee, M., Ahuja, N., Cherian, A., "Learning Audio-Visual Dynamics Using Scene Graphs for Audio Source Separation", Advances in Neural Information Processing Systems (NeurIPS), November 2022.
      BibTeX:
      @inproceedings{Chatterjee2022nov,
        author    = {Chatterjee, Moitreya and Ahuja, Narendra and Cherian, Anoop},
        title     = {Learning Audio-Visual Dynamics Using Scene Graphs for Audio Source Separation},
        booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
        year      = 2022,
        month     = nov,
        url       = {https://www.merl.com/publications/TR2022-140}
      }
  • Research Areas: Computer Vision, Machine Learning, Speech & Audio

Abstract:

There exists an unequivocal distinction between the sound produced by a static source and that produced by a moving one, especially when the source moves towards or away from the microphone. In this paper, we propose to use this connection between audio and visual dynamics for solving two challenging tasks simultaneously, namely: (i) separating audio sources from a mixture using visual cues, and (ii) predicting the 3D visual motion of a sounding source using its separated audio. Towards this end, we present Audio Separator and Motion Predictor (ASMP) – a deep learning framework that leverages the 3D structure of the scene and the motion of sound sources for better audio source separation. At the heart of ASMP is a 2.5D scene graph capturing various objects in the video and their pseudo-3D spatial proximities. This graph is constructed by registering together 2.5D monocular depth predictions from the 2D video frames and associating the 2.5D scene regions with the outputs of an object detector applied on those frames. The ASMP task is then mathematically modeled as the joint problem of: (i) recursively segmenting the 2.5D scene graph into several sub-graphs, each associated with a constituent sound in the input audio mixture (which is then separated) and (ii) predicting the 3D motions of the corresponding sound sources from the separated audio. To empirically evaluate ASMP, we present experiments on two challenging audio-visual datasets, viz. Audio Separation in the Wild (ASIW) and Audio Visual Event (AVE). Our results demonstrate that ASMP achieves a clear improvement in source separation quality, outperforming prior works on both datasets, while also estimating the direction of motion of the sound sources better than other methods.
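The 2.5D scene-graph construction described in the abstract (object detections in a frame, each augmented with a monocular depth estimate, linked by pseudo-3D spatial proximity) can be sketched minimally as below. All names, coordinates, and the distance threshold are hypothetical illustrations; the actual ASMP pipeline uses learned depth and detection models, whereas here the per-object depths and box centres are hard-coded stand-ins for their outputs.

```python
from dataclasses import dataclass
from itertools import combinations
import math

@dataclass
class SceneNode:
    """One graph node: a detected object plus its pseudo-3D position."""
    label: str    # class label from the object detector (assumed given)
    cx: float     # bounding-box centre x, normalised image coordinates
    cy: float     # bounding-box centre y, normalised image coordinates
    depth: float  # monocular depth estimate for the region (assumed given)

def build_scene_graph(nodes, max_dist=0.5):
    """Connect pairs of nodes whose pseudo-3D centres lie within max_dist.

    Returns a list of (i, j, distance) edges over node indices.
    """
    edges = []
    for i, j in combinations(range(len(nodes)), 2):
        a, b = nodes[i], nodes[j]
        d = math.dist((a.cx, a.cy, a.depth), (b.cx, b.cy, b.depth))
        if d <= max_dist:
            edges.append((i, j, d))
    return edges

# Toy frame: a person playing a guitar near the camera, a car far away.
detections = [
    SceneNode("guitar", 0.30, 0.60, 0.40),
    SceneNode("person", 0.35, 0.50, 0.45),
    SceneNode("car",    0.90, 0.70, 0.95),
]

# Only guitar–person are close enough in pseudo-3D to share an edge.
print(build_scene_graph(detections))
```

Recursively peeling sub-graphs off such a graph (one per constituent sound, as the paper describes) would then operate on these edges; that learned segmentation step is not reproduced here.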


  • Related Publications

  •  Chatterjee, M., Ahuja, N., Cherian, A., "Learning Audio-Visual Dynamics Using Scene Graphs for Audio Source Separation", arXiv, October 2022.
    BibTeX:
    @article{Chatterjee2022oct,
      author  = {Chatterjee, Moitreya and Ahuja, Narendra and Cherian, Anoop},
      title   = {Learning Audio-Visual Dynamics Using Scene Graphs for Audio Source Separation},
      journal = {arXiv},
      year    = 2022,
      month   = oct,
      url     = {http://arxiv.org/abs/2210.16472}
    }