TR2024-126

TF-Locoformer: Transformer with Local Modeling by Convolution for Speech Separation and Enhancement


Abstract:

Time-frequency (TF) domain dual-path models achieve high-fidelity speech separation. While some previous state-of-the-art (SoTA) models rely on RNNs, this reliance means they lack the parallelizability, scalability, and versatility of Transformer blocks. Given the wide-ranging success of pure Transformer-based architectures in other fields, in this work we focus on removing the RNN from TF-domain dual-path models while maintaining SoTA performance. This work presents TF-Locoformer, a Transformer-based model with LOcal-modeling by COnvolution. The model uses feed-forward networks (FFNs) with convolution layers, instead of linear layers, to capture local information, letting the self-attention focus on capturing global patterns. We place two such FFNs, one before and one after self-attention, to enhance the local-modeling capability. We also introduce a novel normalization for TF-domain dual-path models. Experiments on separation and enhancement datasets show that the proposed model meets or exceeds SoTA performance on multiple benchmarks with an RNN-free architecture.
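
The abstract describes the core block structure: a convolutional feed-forward module before self-attention, self-attention for global modeling, and a second convolutional feed-forward module after it. The PyTorch sketch below illustrates that arrangement only; the kernel size, activation, hidden width, module names, and the use of standard LayerNorm (in place of the paper's proposed normalization) are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class ConvFFN(nn.Module):
        """Feed-forward module built from 1-D convolutions (instead of linear
        layers) so that each frame mixes information with its neighbors."""
        def __init__(self, dim, hidden_dim, kernel_size=3):
            super().__init__()
            # LayerNorm is a stand-in; the paper introduces its own normalization.
            self.norm = nn.LayerNorm(dim)
            self.conv = nn.Sequential(
                nn.Conv1d(dim, hidden_dim, kernel_size, padding=kernel_size // 2),
                nn.SiLU(),
                nn.Conv1d(hidden_dim, dim, kernel_size, padding=kernel_size // 2),
            )

        def forward(self, x):                      # x: (batch, seq_len, dim)
            y = self.norm(x).transpose(1, 2)       # Conv1d expects (batch, dim, seq_len)
            return x + self.conv(y).transpose(1, 2)

    class LocoformerBlockSketch(nn.Module):
        """Macaron-style block: ConvFFN -> self-attention -> ConvFFN,
        each sub-module wrapped in a residual connection."""
        def __init__(self, dim=128, heads=4, hidden_dim=256):
            super().__init__()
            self.ffn_pre = ConvFFN(dim, hidden_dim)
            self.attn_norm = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.ffn_post = ConvFFN(dim, hidden_dim)

        def forward(self, x):                      # x: (batch, seq_len, dim)
            x = self.ffn_pre(x)                    # local modeling before attention
            y = self.attn_norm(x)
            x = x + self.attn(y, y, y, need_weights=False)[0]  # global modeling
            return self.ffn_post(x)                # local modeling after attention

    # Example: one block applied over 200 frames with 128 features per frame.
    block = LocoformerBlockSketch()
    out = block(torch.randn(1, 200, 128))          # out.shape == (1, 200, 128)

In a TF-domain dual-path model, a block of this kind would typically be applied alternately along the frequency and time axes of the time-frequency representation.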


  • Software & Data Downloads

  • Related Publication

  •  Saijo, K., Wichern, G., Germain, F.G., Pan, Z., Le Roux, J., "TF-Locoformer: Transformer with Local Modeling by Convolution for Speech Separation and Enhancement", arXiv, August 2024.
    BibTeX:
    @article{Saijo2024aug2,
      author = {Saijo, Kohei and Wichern, Gordon and Germain, François G. and Pan, Zexu and Le Roux, Jonathan},
      title = {TF-Locoformer: Transformer with Local Modeling by Convolution for Speech Separation and Enhancement},
      journal = {arXiv},
      year = {2024},
      month = aug,
      url = {https://www.arxiv.org/abs/2408.03440}
    }