- Date: February 10, 2014
MERL Contacts: Jonathan Le Roux; Daniel N. Nikovski; Anthony Vetro
Brief - Mitsubishi Electric Corporation demonstrated an ultra-simple HMI for in-car device operation, using algorithms developed by MERL to predict user actions and destinations.
-
- Date & Time: Thursday, October 24, 2013; 8:45 AM - 5:00 PM
Location: Columbia University
MERL Contact: Jonathan Le Roux
Research Area: Speech & Audio
Brief - SANE 2013, a one-day event gathering researchers and students in speech and audio from the Northeast of the American continent, will be held on Thursday October 24, 2013 at Columbia University, in New York City.
A follow-up to SANE 2012 held in October 2012 at MERL in Cambridge, MA, this year's SANE will be held in conjunction with the WASPAA workshop, held October 20-23 in upstate New York. WASPAA attendees are welcome and encouraged to attend SANE.
SANE 2013 will feature invited speakers from the Northeast, as well as from the international community. It will also feature a lively poster session during lunch time, open to both students and researchers.
SANE 2013 is organized by Prof. Dan Ellis (Columbia University), Jonathan Le Roux (MERL) and John R. Hershey (MERL).
-
- Date & Time: Thursday, October 17, 2013; 12:00 PM
Speaker: Prof. Laurent Daudet, Paris Diderot University, France
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
Abstract - In acoustics, one may wish to acquire a wavefield over a whole spatial domain, while only point measurements (i.e., with microphones) are available. Even with few sources, this remains a difficult problem because of reverberation, which can be hard to characterize. This can be seen as a sampling / interpolation problem, and it raises a number of interesting questions: how many sample points are needed, where to place the sampling points, etc. In this presentation, we will review some case studies, in 2D (vibrating plates) and 3D (room acoustics), with numerical and experimental data, where we have developed sparse models, possibly with additional 'structures', based on a physical modeling of the acoustic field. These types of models are well suited to reconstruction techniques known as compressed sensing. These principles can also be used for sub-Nyquist optical imaging: we will show preliminary experimental results of a new compressive imager, remarkably simple in its principle, using a multiply scattering medium.
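The core idea of the abstract, recovering a sparse signal from few point measurements, can be sketched in a few lines. This is a generic illustration (not the speaker's actual models or data): all sizes, the random sensing matrix, and the ISTA solver are our own illustrative choices.

```python
import numpy as np

# Hypothetical sketch: recover a k-sparse signal from m << n random point
# measurements via ISTA (iterative soft-thresholding), a basic compressed
# sensing reconstruction algorithm.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                    # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # the "point measurements"

# ISTA: gradient step on the data term, then soft-thresholding (sparsity prior)
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    z = x + step * A.T @ (y - A @ x)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With far fewer measurements than signal dimensions, the sparsity prior is what makes the inverse problem well-posed; the structured models in the talk play the same role for physical acoustic fields.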
-
- Date: September 26, 2013
Awarded to: Jonathan Le Roux
Awarded for: "A new non-negative dynamical system for speech and audio modeling"
Awarded by: Acoustical Society of Japan (ASJ)
MERL Contact: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date: June 1, 2013
Where: International Workshop on Machine Listening in Multisource Environments (CHiME)
MERL Contact: Jonathan Le Roux
Research Area: Speech & Audio
Brief - The paper "Discriminative Methods for Noise Robust Speech Recognition: A CHiME Challenge Benchmark" by Tachioka, Y., Watanabe, S., Le Roux, J. and Hershey, J.R. was presented at the International Workshop on Machine Listening in Multisource Environments (CHiME).
-
- Date: June 1, 2013
Awarded to: Yuuki Tachioka, Shinji Watanabe, Jonathan Le Roux and John R. Hershey
Awarded for: "Discriminative Methods for Noise Robust Speech Recognition: A CHiME Challenge Benchmark"
Awarded by: International Workshop on Machine Listening in Multisource Environments (CHiME)
MERL Contact: Jonathan Le Roux
Research Area: Speech & Audio
Brief - The results of the 2nd CHiME Speech Separation and Recognition Challenge are out! The team formed by MELCO researcher Yuuki Tachioka and MERL Speech & Audio team researchers Shinji Watanabe, Jonathan Le Roux and John Hershey obtained the best results in the continuous speech recognition task (Track 2). This very challenging task consisted of recognizing speech corrupted by highly non-stationary noises recorded in a real living room. Our proposal, which also included a simple yet extremely efficient denoising front-end, focused on investigating and developing state-of-the-art automatic speech recognition back-end techniques: feature transformation methods, as well as discriminative training methods for acoustic and language modeling. Our system significantly outperformed other participants. Our code has since been released as an improved baseline for the community to use.
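To give a flavor of what a denoising front-end does ahead of an ASR back-end, here is a purely illustrative sketch of spectral subtraction, a classic technique; it is not the team's actual front-end, and the signal, STFT parameters, and oracle noise estimate are all our own assumptions.

```python
import numpy as np

# Illustrative spectral subtraction: estimate the average noise magnitude
# spectrum and subtract it from the noisy spectrogram, keeping the noisy phase.
rng = np.random.default_rng(2)
sr, n_fft, hop = 16000, 512, 128
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440 * t)          # stand-in for a speech signal
noise = 0.3 * rng.standard_normal(sr)
noisy = clean + noise

def stft(x):
    # Hann-windowed frames, one-sided FFT
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    return np.fft.rfft(frames * np.hanning(n_fft), axis=1)

X = stft(noisy)
N_mag = np.abs(stft(noise)).mean(axis=0)     # oracle noise estimate (assumption)
# Subtract the noise magnitude, flooring at 5% of the noisy magnitude
mag = np.maximum(np.abs(X) - N_mag, 0.05 * np.abs(X))
X_hat = mag * np.exp(1j * np.angle(X))       # denoised spectrogram
```

A real challenge entry combines a front-end like this (or a far better one) with the back-end techniques named above: feature transformation and discriminative acoustic/language model training.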
-
- Date & Time: Saturday, June 1, 2013; 9:00 AM - 6:00 PM
Location: Vancouver, Canada
MERL Contact: Jonathan Le Roux
Research Area: Speech & Audio
Brief - MERL researchers Shinji Watanabe and Jonathan Le Roux are members of the organizing committee of CHiME 2013, the 2nd International Workshop on Machine Listening in Multisource Environments, with Jonathan acting as Program Co-Chair. MERL is also a sponsor for the event.
CHiME 2013 is a one-day workshop to be held in conjunction with ICASSP 2013 that will consider the challenge of developing machine listening applications for operation in multisource environments, i.e. real-world conditions with acoustic clutter, where the number and nature of the sound sources is unknown and changing over time. CHiME brings together researchers from a broad range of disciplines (computational hearing, blind source separation, speech recognition, machine learning) to discuss novel and established approaches to this problem. The cross-fertilisation of ideas will foster fresh approaches that efficiently combine the complementary strengths of each research field.
-
- Date & Time: Thursday, May 30, 2013; 12:30 PM - 2:30 PM
Location: Vancouver, Canada
MERL Contacts: Anthony Vetro; Petros T. Boufounos; Jonathan Le Roux
Research Area: Speech & Audio
Brief - MERL is a sponsor for the first ICASSP Student Career Luncheon that will take place at ICASSP 2013. MERL members will take part in the event to introduce MERL and talk with students interested in positions or internships.
-
- Date: May 26, 2013
Where: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
MERL Contacts: Dehong Liu; Jianlin Guo; Anthony Vetro; Petros T. Boufounos; Jonathan Le Roux
Brief - The papers "Stereo-based Feature Enhancement Using Dictionary Learning" by Watanabe, S. and Hershey, J.R., "Effectiveness of Discriminative Training and Feature Transformation for Reverberated and Noisy Speech" by Tachioka, Y., Watanabe, S. and Hershey, J.R., "Non-negative Dynamical System with Application to Speech and Audio" by Fevotte, C., Le Roux, J. and Hershey, J.R., "Source Localization in Reverberant Environments using Sparse Optimization" by Le Roux, J., Boufounos, P.T., Kang, K. and Hershey, J.R., "A Keypoint Descriptor for Alignment-Free Fingerprint Matching" by Garg, R. and Rane, S., "Transient Disturbance Detection for Power Systems with a General Likelihood Ratio Test" by Song, JX., Sahinoglu, Z. and Guo, J., "Disparity Estimation of Misaligned Images in a Scanline Optimization Framework" by Rzeszutek, R., Tian, D. and Vetro, A., "Screen Content Coding for HEVC Using Edge Modes" by Hu, S., Cohen, R.A., Vetro, A. and Kuo, C.C.J. and "Random Steerable Arrays for Synthetic Aperture Imaging" by Liu, D. and Boufounos, P.T. were presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
-
- Date: May 2, 2013
Where: International Conference on Learning Representations (ICLR)
MERL Contact: Jonathan Le Roux
Research Area: Speech & Audio
Brief - The paper "Block Coordinate Descent for Sparse NMF" by Potluru, V.K., Plis, S.M., Le Roux, J., Pearlmutter, B.A., Calhoun, V.D. and Hayes, T.P. was presented at the International Conference on Learning Representations (ICLR).
-
- Date: March 1, 2013
Where: IEEE Signal Processing Letters
MERL Contact: Jonathan Le Roux
Research Area: Speech & Audio
Brief - The article "Consistent Wiener Filtering for Audio Source Separation" by Le Roux, J. and Vincent, E. was published in IEEE Signal Processing Letters.
-
- Date & Time: Tuesday, February 26, 2013; 12:00 PM
Speaker: Prof. Taylan Cemgil, Bogazici University, Istanbul, Turkey
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
Abstract - Algorithms for decompositions of matrices are of central importance in machine learning, signal processing and information retrieval, with SVD and NMF (Nonnegative Matrix Factorisation) being the most widely used examples. Probabilistic interpretations of matrix factorisation models are also well known and are useful in many applications (Salakhutdinov and Mnih 2008; Cemgil 2009; Fevotte et al. 2009). In recent years, decompositions of multiway arrays, known as tensor factorisations, have gained significant popularity for the analysis of large data sets with more than two entities (Kolda and Bader, 2009; Cichocki et al. 2008). We will discuss a subset of these models from a statistical modelling perspective, building upon probabilistic Bayesian generative models and generalised linear models (McCullagh and Nelder). In both views, the factorisation is implicit in a well-defined hierarchical statistical model and factorisations can be computed via maximum likelihood.
We express a tensor factorisation model using a factor graph and the factor tensors are optimised iteratively. In each iteration, the update equation can be implemented by a message passing algorithm, reminiscent of variable elimination in a discrete graphical model. This setting provides a structured and efficient approach that enables very easy development of application-specific custom models, as well as algorithms for the so-called coupled (collective) factorisations where an arbitrary set of tensors are factorised simultaneously with shared factors. Extensions to full Bayesian inference for model selection, via variational approximations or MCMC, are also feasible. Well-known models of multiway analysis such as Nonnegative Matrix Factorisation (NMF), Parafac, Tucker, and audio processing models (Convolutive NMF, NMF2D, SF-SSNTF) appear as special cases and new extensions can easily be developed. We will illustrate the approach with applications in link prediction and audio and music processing.
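The simplest special case mentioned above, NMF, can be sketched concretely. This is our own minimal illustration (not Prof. Cemgil's code): multiplicative updates minimizing the generalized KL divergence between a nonnegative matrix V and its factorisation W @ H, with all sizes chosen arbitrarily.

```python
import numpy as np

# Minimal KL-NMF sketch: V (F x T) is approximated by W (F x K) @ H (K x T),
# all entries nonnegative, via the classic multiplicative update rules.
rng = np.random.default_rng(1)
F, T, K = 20, 30, 4
V = rng.random((F, K)) @ rng.random((K, T))   # synthetic exact-rank data

W = rng.random((F, K))
H = rng.random((K, T))
eps = 1e-12

def gen_kl(V, WH):
    # Generalized KL divergence d(V || WH) >= 0, zero iff V == WH
    return np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH)

kl0 = gen_kl(V, W @ H)
for _ in range(200):
    WH = W @ H + eps
    H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
    WH = W @ H + eps
    W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)
kl = gen_kl(V, W @ H)
```

In the framework of the talk, such updates fall out of message passing on a factor graph, which is what makes it easy to swap in other observation models or couple several factorisations with shared factors.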
-
- Date: November 28, 2012
Where: Techniques for Noise Robustness in Automatic Speech Recognition
MERL Contact: Jonathan Le Roux
Research Area: Speech & Audio
Brief - The article "Factorial Models for Noise Robust Speech Recognition" by Hershey, J.R., Rennie, S.J. and Le Roux, J. was published in the book Techniques for Noise Robustness in Automatic Speech Recognition.
-
- Date & Time: Wednesday, October 24, 2012; 8:30 AM - 5:00 PM
Location: MERL
MERL Contact: Jonathan Le Roux
Research Area: Speech & Audio
Brief - SANE 2012, a one-day event gathering researchers and students in speech and audio from the northeast of the American continent, will be held on Wednesday October 24, 2012 at Mitsubishi Electric Research Laboratories (MERL) in Cambridge, MA.
-
- Date & Time: Wednesday, October 24, 2012; 11:00 AM
Speaker: Prof. Dan Ellis, Columbia University
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date & Time: Wednesday, October 24, 2012; 9:10 AM
Speaker: Prof. Jim Glass and Chia-ying Lee, MIT CSAIL
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date & Time: Wednesday, October 24, 2012; 4:05 PM
Speaker: Dr. John R. Hershey, MERL
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date & Time: Wednesday, October 24, 2012; 3:20 PM
Speaker: Dr. Steven J. Rennie, IBM Research
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date & Time: Wednesday, October 24, 2012; 1:30 PM
Speaker: Dr. Timothy J. Hazen and David Harwath, MIT Lincoln Labs / MIT CSAIL
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date & Time: Wednesday, October 24, 2012; 2:15 PM
Speaker: Dr. Herb Gish, BBN - Raytheon
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date & Time: Wednesday, October 24, 2012; 11:45 AM
Speaker: Josh McDermott, MIT, BCS
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date & Time: Wednesday, October 24, 2012; 9:55 AM
Speaker: Dr. Tara Sainath, IBM Research
MERL Host: Jonathan Le Roux
Research Area: Speech & Audio
-
- Date: March 31, 2012
Where: International Workshop on Statistical Machine Learning for Speech Processing (IWSML)
MERL Contact: Jonathan Le Roux
Research Area: Speech & Audio
Brief - The paper "Latent Dirichlet Reallocation for Term Swapping" by Heaukulani, C., Le Roux, J. and Hershey, J.R. was presented at the International Workshop on Statistical Machine Learning for Speech Processing (IWSML).
-
- Date: March 25, 2012
Where: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
MERL Contacts: Dehong Liu; Jonathan Le Roux; Petros T. Boufounos
Brief - The papers "Dictionary Learning Based Pan-Sharpening" by Liu, D. and Boufounos, P.T., "Multiple Dictionary Learning for Blocking Artifacts Reduction" by Wang, Y. and Porikli, F., "A Compressive Phase-Locked Loop" by Schnelle, S.R., Slavinsky, J.P., Boufounos, P.T., Davenport, M.A. and Baraniuk, R.G., "Indirect Model-based Speech Enhancement" by Le Roux, J. and Hershey, J.R., "A Clustering Approach to Optimize Online Dictionary Learning" by Rao, N. and Porikli, F., "Parametric Multichannel Adaptive Signal Detection: Exploiting Persymmetric Structure" by Wang, P., Sahinoglu, Z., Pun, M.-O. and Li, H., "Additive Noise Removal by Sparse Reconstruction on Image Affinity Nets" by Sundaresan, R. and Porikli, F. and "Depth Sensing Using Active Coherent Illumination" by Boufounos, P.T. were presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
-