News & Events



  •  NEWS    MERL researchers presented two papers, including one invited paper, and a workshop panel talk at ECOC 2017
    Date: September 17, 2017 - September 21, 2017
    Where: 2017 European Conference on Optical Communication (ECOC), Sweden
    MERL Contacts: Toshiaki Koike-Akino; Kieran Parsons; Ye Wang
    Research Areas: Communications, Electronic and Photonic Devices, Signal Processing
    Brief
    • Two papers from the Optical Communications team were presented at the 2017 European Conference on Optical Communication (ECOC), held in Gothenburg, Sweden in September 2017. The papers concern enhanced error correction coding for coherent optical links and advanced precoding for optical data center networks. The invited paper investigated irregular polar coding to simultaneously reduce computational complexity, decoding latency, and bit error rate. In addition to the two papers, a team member gave an invited talk on constellation shaping as a workshop panelist.
  •  
  •  NEWS    MERL researchers presented 11 papers at ACC 2017 (American Control Conference)
    Date: May 24, 2017 - May 26, 2017
    MERL Contacts: Stefano Di Cairano; Abraham Goldsmith; Daniel N. Nikovski; Arvind Raghunathan; Yebin Wang
    Research Areas: Control, Dynamical Systems, Machine Learning
    Brief
    • Talks were presented by members of several groups at MERL and covered a wide range of topics:
      - Similarity-Based Vehicle-Motion Prediction
      - Transfer Operator Based Approach for Optimal Stabilization of Stochastic Systems
      - Extended command governors for constraint enforcement in dual stage processing machines
      - Cooperative Optimal Output Regulation of Multi-Agent Systems Using Adaptive Dynamic Programming
      - Deep Reinforcement Learning for Partial Differential Equation Control
      - Indirect Adaptive MPC for Output Tracking of Uncertain Linear Polytopic Systems
      - Constraint Satisfaction for Switched Linear Systems with Restricted Dwell-Time
      - Path Planning and Integrated Collision Avoidance for Autonomous Vehicles
      - Least Squares Dynamics in Newton-Krylov Model Predictive Control
      - A Neuro-Adaptive Architecture for Extremum Seeking Control Using Hybrid Learning Dynamics
      - Robust POD Model Stabilization for the 3D Boussinesq Equations Based on Lyapunov Theory and Extremum Seeking.
  •  
  •  NEWS    MERL organizes Workshop on Advanced Digital Transmitters at 2017 International Microwave Symposium
    Date: June 5, 2017
    Where: Honolulu, HI
    MERL Contact: Philip V. Orlik
    Research Areas: Communications, Electronic and Photonic Devices, Signal Processing
    Brief
    • MERL researcher Dr. Rui Ma is organizing a workshop on advanced digital transmitters in collaboration with Dr. SungWon Chung of the University of Southern California (USC). The workshop surveys recent advances in digital-intensive wireless transmitter R&D for both base stations and mobile devices, focusing on digital signal processing techniques and the related digital-intensive transmitter circuits and architectures for advanced modulation, linearization, spur cancellation, high-efficiency encoding, and parallel processing. The workshop takes place on Monday, June 5, 2017 at International Microwave Week in Honolulu, HI, and features 8 technical presentations from world-leading research groups.

      Dr. Ma will present a talk titled, "Advanced Power Encoding and Non-Contiguous Multi-Band Digital Transmitter Architectures".
  •  
  •  EVENT    Tim Marks to give lunch talk at Face and Gesture 2017 conference
    Date: Thursday, June 1, 2017
    Location: IEEE Conference on Automatic Face and Gesture Recognition (FG 2017), Washington, DC
    Speaker: Tim K. Marks
    MERL Contact: Tim K. Marks
    Research Area: Machine Learning
    Brief
    • MERL Senior Principal Research Scientist Tim K. Marks will give the invited lunch talk on Thursday, June 1, at the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017). The talk is entitled "Robust Real-Time 3D Head Pose and 2D Face Alignment."
  •  
  •  NEWS    MERL Researcher Tim Marks presents an invited talk at MIT Lincoln Laboratory
    Date: April 27, 2017
    Where: Lincoln Laboratory, Massachusetts Institute of Technology
    MERL Contact: Tim K. Marks
    Research Area: Machine Learning
    Brief
    • MERL researcher Tim K. Marks presented an invited talk as part of the MIT Lincoln Laboratory CORE Seminar Series on Biometrics. The talk was entitled "Robust Real-Time 2D Face Alignment and 3D Head Pose Estimation."

      Abstract: Head pose estimation and facial landmark localization are key technologies, with widespread application areas including biometrics and human-computer interfaces. This talk describes two different robust real-time face-processing methods, each using a different modality of input image. The first part of the talk describes our system for 3D head pose estimation and facial landmark localization using a commodity depth sensor. The method is based on a novel 3D Triangular Surface Patch (TSP) descriptor, which is viewpoint-invariant as well as robust to noise and to variations in the data resolution. This descriptor, combined with fast nearest-neighbor lookup and a joint voting scheme, enables our system to handle arbitrary head pose and significant occlusions. The second part of the talk describes our method for face alignment, which is the localization of a set of facial landmark points in a 2D image or video of a face. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a Mixture of Invariant eXperts (MIX), where each expert learns a regression model that is specialized to a different subset of the joint space of pose and expressions. We also present a method to include deformation constraints within the discriminative alignment framework, which makes the algorithm more robust. Both our 3D head pose and 2D face alignment methods outperform previous results on standard datasets. If permitted, I plan to end the talk with a live demonstration.
  •  
  •  NEWS    MERL researcher Tim Marks presents invited talk at University of Utah
    Date: April 10, 2017
    Where: University of Utah School of Computing
    MERL Contact: Tim K. Marks
    Research Area: Machine Learning
    Brief
    • MERL researcher Tim K. Marks presented an invited talk at the University of Utah School of Computing, entitled "Action Detection from Video and Robust Real-Time 2D Face Alignment."

      Abstract: The first part of the talk describes our multi-stream bi-directional recurrent neural network for action detection from video. In addition to a two-stream convolutional neural network (CNN) on full-frame appearance (images) and motion (optical flow), our system trains two additional streams on appearance and motion that have been cropped to a bounding box from a person tracker. To model long-term temporal dynamics within and between actions, the multi-stream CNN is followed by a bi-directional Long Short-Term Memory (LSTM) layer. Our method outperforms the previous state of the art on two action detection datasets: the MPII Cooking 2 Dataset, and a new MERL Shopping Dataset that we have made available to the community. The second part of the talk describes our method for face alignment, which is the localization of a set of facial landmark points in a 2D image or video of a face. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a Mixture of Invariant eXperts (MIX), where each expert learns a regression model that is specialized to a different subset of the joint space of pose and expressions. We also present a method to include deformation constraints within the discriminative alignment framework, which makes the algorithm more robust. Our face alignment system outperforms the previous results on standard datasets. The talk will end with a live demo of our face alignment system.
  •  
  •  TALK    Generative Model-Based Text-to-Speech Synthesis
    Date & Time: Wednesday, February 1, 2017; 12:00-13:00
    Speaker: Dr. Heiga ZEN, Google
    MERL Host: Chiori Hori
    Research Area: Speech & Audio
    Abstract
    • Recent progress in generative modeling has improved the naturalness of synthesized speech significantly. In this talk I will summarize these generative model-based approaches for speech synthesis such as WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems.
      See https://deepmind.com/blog/wavenet-generative-model-raw-audio/ for further details.
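
      The core operation behind WaveNet is the causal dilated 1-D convolution: each output sample depends only on current and past inputs, and the dilation widens the receptive field exponentially across layers. The sketch below is a minimal single-filter NumPy illustration of that operation only (the function name and weights are illustrative, not DeepMind's implementation):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal convolution with the given dilation.

    Each output sample depends only on current and past inputs,
    which is what allows autoregressive sample-by-sample generation.
    """
    k = len(w)
    pad = (k - 1) * dilation                  # left-pad so output stays causal
    xp = np.concatenate([np.zeros(pad), x])
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        # taps at t, t - dilation, t - 2*dilation, ... (in padded coordinates)
        taps = xp[t + pad - dilation * np.arange(k)[::-1]]
        y[t] = np.dot(w, taps)
    return y

x = np.arange(8, dtype=float)
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
# With this filter, y[t] = x[t] + x[t-2] (zeros before the signal starts).
```

      Stacking such layers with dilations 1, 2, 4, 8, ... is what gives WaveNet its long effective context at raw-audio sample rates.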
  •  
  •  TALK    High-Dimensional Analysis of Stochastic Optimization Algorithms for Estimation and Learning
    Date & Time: Tuesday, December 13, 2016; Noon
    Speaker: Yue M. Lu, John A. Paulson School of Engineering and Applied Sciences, Harvard University
    MERL Host: Petros T. Boufounos
    Research Areas: Computational Sensing, Machine Learning
    Abstract
    • In this talk, we will present a framework for analyzing, in the high-dimensional limit, the exact dynamics of several stochastic optimization algorithms that arise in signal and information processing. For concreteness, we consider two prototypical problems: sparse principal component analysis and regularized linear regression (e.g. LASSO). For each case, we show that the time-varying estimates given by the algorithms will converge weakly to a deterministic "limiting process" in the high-dimensional limit. Moreover, this limiting process can be characterized as the unique solution of a nonlinear PDE, and it provides exact information regarding the asymptotic performance of the algorithms. For example, performance metrics such as the MSE, the cosine similarity and the misclassification rate in sparse support recovery can all be obtained by examining the deterministic limiting process. A steady-state analysis of the nonlinear PDE also reveals interesting phase transition phenomena related to the performance of the algorithms. Although our analysis is asymptotic in nature, numerical simulations show that the theoretical predictions are accurate for moderate signal dimensions.
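
      As a concrete instance of the regularized linear regression (LASSO) setting mentioned above, the sketch below runs iterative soft-thresholding (ISTA), a simple deterministic cousin of the algorithms the talk analyzes. The function name, dimensions, and parameter values are illustrative only, not the speaker's analysis framework:

```python
import numpy as np

def ista(A, y, lam, iters=1000):
    """Iterative soft-thresholding (ISTA) for the LASSO:
    minimize 0.5 * ||y - A x||^2 + lam * ||x||_1
    """
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2    # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))        # gradient step (smooth part)
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # l1 prox
    return x

# Sparse recovery demo: a 5-sparse signal from 60 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[:5] = 3.0
x_hat = ista(A, A @ x_true, lam=0.1)
```

      Performance metrics such as the MSE or sparse support recovery of `x_hat`, tracked over iterations and signal dimensions, are exactly the quantities the limiting-process analysis characterizes.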
  •  
  •  TALK    Reduced basis methods and their application in data science and uncertainty quantification
    Date & Time: Monday, December 12, 2016; 12:00 PM
    Speaker: Yanlai Chen, Department of Mathematics at the University of Massachusetts Dartmouth
    Research Areas: Control, Dynamical Systems
    Abstract
    • Models of reduced computational complexity are indispensable in scenarios where a large number of numerical solutions to a parametrized problem are desired in a fast/real-time fashion. These include simulation-based design, parameter optimization, optimal control, multi-model/scale analysis, and uncertainty quantification. Thanks to an offline-online procedure and the recognition that the parameter-induced solution manifolds can be well approximated by finite-dimensional spaces, the reduced basis method (RBM) and reduced collocation method (RCM) can improve efficiency by several orders of magnitude. The accuracy of the RBM solution is maintained through a rigorous a posteriori error estimator whose efficient development is critical and involves fast eigensolves.

      In this talk, I will give a brief introduction of the RBM/RCM, and explain how they can be used for data compression, face recognition, and significantly delaying the curse of dimensionality for uncertainty quantification.
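
      The offline-online split can be illustrated with the simplest reduced-basis construction: proper orthogonal decomposition (POD) via an SVD of a snapshot matrix. The toy sketch below uses synthetic snapshots and illustrative names; it omits the rigorous a posteriori error estimators that distinguish the certified RBM discussed in the talk:

```python
import numpy as np

# Synthetic parametrized "solutions": snapshots that all lie on a
# low-dimensional manifold (here, combinations of two fixed modes).
rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 200)
modes = np.stack([np.sin(np.pi * grid), np.sin(2 * np.pi * grid)])
coeffs = rng.standard_normal((50, 2))        # 50 parameter samples
snapshots = coeffs @ modes                   # each row is one "solution"

# Offline stage: POD basis from the SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots.T, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1  # rank capturing 99.99% energy
basis = U[:, :r]                              # reduced basis (200 x r)

# Online stage: a new solution is approximated in the r-dimensional basis.
new_sol = 0.3 * modes[0] - 1.2 * modes[1]
approx = basis @ (basis.T @ new_sol)
err = np.linalg.norm(new_sol - approx)
```

      Because the snapshots here are exactly rank two, the basis compresses a 200-dimensional problem to r = 2 with negligible error; the same compression idea underlies the data-compression and face-recognition applications mentioned above.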
  •  
  •  TALK    Collaborative dictionary learning from big, distributed data
    Date & Time: Friday, December 2, 2016; 11:00 AM
    Speaker: Prof. Waheed Bajwa, Rutgers University
    MERL Host: Petros T. Boufounos
    Research Area: Computational Sensing
    Abstract
    • While distributed information processing has a rich history, relatively less attention has been paid to the problem of collaborative learning of nonlinear geometric structures underlying data distributed across sites that are connected to each other in an arbitrary topology. In this talk, we discuss this problem in the context of collaborative dictionary learning from big, distributed data. It is assumed that a number of geographically-distributed, interconnected sites have massive local data and they are interested in collaboratively learning a low-dimensional geometric structure underlying these data. In contrast to some of the previous works on subspace-based data representations, we focus on the geometric structure of a union of subspaces (UoS). In this regard, we propose a distributed algorithm, termed cloud K-SVD, for collaborative learning of a UoS structure underlying distributed data of interest. The goal of cloud K-SVD is to learn an overcomplete dictionary at each individual site such that every sample in the distributed data can be represented through a small number of atoms of the learned dictionary. Cloud K-SVD accomplishes this goal without requiring communication of individual data samples between different sites. In this talk, we also theoretically characterize deviations of the dictionaries learned at individual sites by cloud K-SVD from a centralized solution. Finally, we numerically illustrate the efficacy of cloud K-SVD in the context of supervised training of nonlinear classifiers from distributed, labeled training data.
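
      To make the dictionary-learning objective concrete, here is a deliberately simplified single-site K-SVD sketch (1-sparse coding plus rank-1 SVD atom updates); it is not the distributed cloud K-SVD protocol, and all names, sizes, and the 1-sparse restriction are illustrative assumptions:

```python
import numpy as np

def ksvd(Y, D, iters):
    """Simplified K-SVD: 1-sparse coding, then rank-1 SVD atom updates."""
    n = Y.shape[1]
    for _ in range(iters):
        corr = D.T @ Y
        idx = np.argmax(np.abs(corr), axis=0)        # best atom per sample
        X = np.zeros_like(corr)
        X[idx, np.arange(n)] = corr[idx, np.arange(n)]
        for j in range(D.shape[1]):
            users = np.where(idx == j)[0]
            if users.size == 0:
                continue
            # With 1-sparse codes, the residual for atom j is just Y[:, users].
            U, s, Vt = np.linalg.svd(Y[:, users], full_matrices=False)
            D[:, j] = U[:, 0]                        # refit the atom
            X[j, users] = s[0] * Vt[0]               # refit its coefficients
    return D, X

# Demo: samples are scaled copies of 3 hidden orthonormal atoms.
rng = np.random.default_rng(2)
atoms = np.linalg.qr(rng.standard_normal((8, 3)))[0]
Y = atoms[:, rng.integers(0, 3, 60)] * rng.uniform(1.0, 3.0, 60)
D0 = rng.standard_normal((8, 4))
D0 /= np.linalg.norm(D0, axis=0)

D1, X1 = ksvd(Y, D0.copy(), iters=1)
D2, X2 = ksvd(Y, D0.copy(), iters=15)
err_1 = np.linalg.norm(Y - D1 @ X1)
err_15 = np.linalg.norm(Y - D2 @ X2)   # non-increasing across iterations
```

      Cloud K-SVD distributes exactly this alternation: the sparse-coding step stays local at each site, while the atom updates are approximated collaboratively without exchanging raw samples.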
  •  
  •  NEWS    Rui Ma gave invited IEEE course on Modern Topics in Power Amplifiers
    Date: October 11, 2016
    Where: MIT Lincoln Laboratory
    Research Areas: Communications, Electronic and Photonic Devices, Signal Processing
    Brief
    • Dr. Rui Ma was invited to give a talk on Modern Topics in Power Amplifiers, an IEEE Chapter course organized by the IEEE Boston Section.

      This five-week lecture series gave a tutorial overview of the latest developments in power amplifier technology. It began with a review of RF power amplifier concepts and then covered the modern MMIC design flow process. Efficiency and linearization techniques were discussed in the following weeks, and the course concluded with a hands-on demonstration and exercise.

      Dr. Ma addressed advances in digital transmitters as an enabling technology for next-generation wireless communications.
  •  
  •  EVENT    SANE 2016 - Speech and Audio in the Northeast
    Date: Friday, October 21, 2016
    Location: MIT, McGovern Institute for Brain Research, Cambridge, MA
    MERL Contact: Jonathan Le Roux
    Research Area: Speech & Audio
    Brief
    • SANE 2016, a one-day event gathering researchers and students in speech and audio from the Northeast of the American continent, will be held on Friday October 21, 2016 at MIT's Brain and Cognitive Sciences Department, at the McGovern Institute for Brain Research, in Cambridge, MA.

      It is a follow-up to SANE 2012 (Mitsubishi Electric Research Labs - MERL), SANE 2013 (Columbia University), SANE 2014 (MIT CSAIL), and SANE 2015 (Google NY). Since the first edition, the audience has steadily grown, gathering 140 researchers and students in 2015.

      SANE 2016 will feature invited talks by leading researchers: Juan P. Bello (NYU), William T. Freeman (MIT/Google), Nima Mesgarani (Columbia University), DAn Ellis (Google), Shinji Watanabe (MERL), Josh McDermott (MIT), and Jesse Engel (Google). It will also feature a lively poster session during lunch time, open to both students and researchers.

      SANE 2016 is organized by Jonathan Le Roux (MERL), Josh McDermott (MIT), Jim Glass (MIT), and John R. Hershey (MERL).
  •  
  •  TALK    Atomic-level modelling of materials with applications to semiconductors
    Date & Time: Wednesday, August 17, 2016; 1 PM
    Speaker: Gilles Zerah, Centre Francais en Calcul Atomique et Moleculaire-Ile-de-France (CFCAM-IdF)
    Research Areas: Applied Physics, Electronic and Photonic Devices
    Abstract
    • The first part of the talk is a high-level review of modern technologies for atomic-level modelling of materials. The second part discusses band gap calculations and MERL results for semiconductors.
  •  
  •  TALK    Controlling the Grid Edge: Emerging Grid Operation Paradigms
    Date & Time: Thursday, July 7, 2016; 2:00 PM
    Speaker: Dr. Sonja Glavaski, Program Director, ARPA-E
    MERL Host: Arvind Raghunathan
    Research Area: Electric Systems
    Abstract
    • The evolution of the grid faces significant challenges if it is to integrate and accept more energy from renewable generation and other Distributed Energy Resources (DERs). To maintain the grid's reliability and turn intermittent power sources into major contributors to the U.S. energy mix, we have to think about the grid differently and design it to be smarter and more flexible.

      ARPA-E is interested in disruptive technologies that enable increased integration of DERs by real-time adaptation while maintaining grid reliability and reducing cost for customers with smart technologies. The potential impact is significant, with projected annual energy savings of more than 3 quadrillion BTU and annual CO2 emissions reductions of more than 250 million metric tons.

      This talk will identify opportunities in developing next-generation control technologies and grid operation paradigms that address these challenges and enable secure, stable, and reliable transmission and distribution of electrical power. A summary of the newly announced ARPA-E NODES (Network Optimized Distributed Energy Systems) program, which funds development of these technologies, will also be presented.
  •  
  •  EVENT    MERL celebrates 25 years of innovation
    Date: Thursday, June 2, 2016
    Location: Norton's Woods Conference Center at American Academy of Arts & Sciences, Cambridge, MA
    MERL Contacts: Elizabeth Phillips; Anthony Vetro
    Brief
    • MERL celebrated 25 years of innovation on Thursday, June 2 at the Norton's Woods Conference Center at the American Academy of Arts & Sciences in Cambridge, MA. The event was a great success, with inspiring keynote talks, insightful panel sessions, and an exciting research showcase of MERL's latest breakthroughs.

      Please visit the event page to view photos of each session, video presentations, as well as a commemorative booklet that highlights past and current research.
  •  
  •  TALK    Speech structure and its application to speech processing -- Relational, holistic and abstract representation of speech
    Date & Time: Friday, June 3, 2016; 1:30PM - 3:00PM
    Speaker: Nobuaki Minematsu and Daisuke Saito, The University of Tokyo
    Research Area: Speech & Audio
    Abstract
    • Speech signals convey various kinds of information, which are grouped into two kinds, linguistic and extra-linguistic information. Many speech applications, however, focus on only a single aspect of speech. For example, speech recognizers try to extract only word identity from speech and speaker recognizers extract only speaker identity. Here, irrelevant features are often treated as hidden or latent by applying probability theory to a large number of samples, or the irrelevant features are normalized to have quasi-standard values. In speech analysis, however, phases are usually removed, not hidden or normalized, and pitch harmonics are also removed, not hidden or normalized. The resulting speech spectrum still contains both linguistic information and extra-linguistic information. Is there any good method to remove extra-linguistic information from the spectrum? In this talk, we introduce our answer to that question, called speech structure. Extra-linguistic variation can be modeled as feature space transformation, and our speech structure is based on the transform-invariance of f-divergence. This proposal was inspired by findings in classical studies of structural phonology and recent studies of developmental psychology. Speech structure has been applied to accent clustering, speech recognition, and language identification. These applications are also explained in the talk.
  •  
  •  TALK    On computer simulation of multiscale processes in porous electrodes of Li-ion batteries
    Date & Time: Friday, May 13, 2016; 12:00 PM
    Speaker: Oleg Iliev, Fraunhofer Institute for Industrial Mathematics, ITWM
    Research Area: Dynamical Systems
    Abstract
    • Li-ion batteries are widely used in the automotive industry, in electronic devices, etc. In this talk we will discuss challenges related to the multiscale nature of batteries, mainly the understanding of processes in the porous electrodes at the pore scale and at the macroscale. A software tool for simulation of isothermal and non-isothermal electrochemical processes in porous electrodes will be presented. The pore-scale simulations are done on 3D images of porous electrodes, or on computer-generated 3D microstructures which have the same characteristics as real porous electrodes. Finite Volume and Finite Element algorithms for the highly nonlinear problems describing processes at the pore level will be briefly presented. Model order reduction (MOR) and empirical interpolation method (EIM-MOR) algorithms for acceleration of the computations will be discussed, as well as the reduced basis method for studying parameter-dependent problems. Next, homogenization of the equations describing the electrochemical processes at the pore scale will be presented, and the results will be compared to the engineering approach based on Newman's 1D+1D model. Simulations at the battery cell level will also be addressed. Finally, the challenges in modeling and simulation of degradation processes in the battery will be discussed and our first simulation results in this area will be presented.

      This is joint work with A.Latz (DLR), M.Taralov, V.Taralova, J.Zausch, S.Zhang from Fraunhofer ITWM, Y.Maday from LJLL, Paris 6 and Y.Efendiev from Texas A&M.
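
      The conservative update pattern at the heart of such finite-volume transport solvers can be sketched on a toy 1D diffusion problem (illustrative grid and coefficients only, not the ITWM software):

```python
import numpy as np

# Explicit 1D finite-volume step for diffusion du/dt = d/dx(D du/dx)
# with zero-flux boundaries: fluxes are computed at cell faces, and each
# cell is updated by the difference of its face fluxes, so total mass
# is conserved by construction.
n = 50
dx = 1.0 / n
D = 1e-3
dt = 0.1                       # stable: D * dt / dx**2 = 0.25 <= 0.5

u = np.zeros(n)
u[20:30] = 1.0                 # initial concentration blob
mass0 = u.sum() * dx

for _ in range(200):
    flux = -D * np.diff(u) / dx                    # fluxes at interior faces
    flux = np.concatenate([[0.0], flux, [0.0]])    # zero-flux boundaries
    u = u - dt / dx * np.diff(flux)                # conservative cell update

mass = u.sum() * dx            # unchanged up to round-off
```

      The same face-flux bookkeeping generalizes to 3D image-based pore geometries, where conservation of each species is essential for credible electrochemistry.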
  •  
  •  TALK    Advanced Recurrent Neural Networks for Automatic Speech Recognition
    Date & Time: Friday, April 29, 2016; 12:00 PM - 1:00 PM
    Speaker: Yu Zhang, MIT
    Research Area: Speech & Audio
    Abstract
    • A recurrent neural network (RNN) is a class of neural network models where connections between neurons form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Recently, RNN-based acoustic models have greatly improved automatic speech recognition (ASR) accuracy on many tasks, particularly an advanced version of the RNN that exploits a structure called long short-term memory (LSTM). However, ASR performance with distant microphones, low resources, noisy, reverberant conditions, and on multi-talker speech is still far from satisfactory as compared to humans. To address these issues, we develop new structures of RNNs inspired by two principles: (1) the structure follows the intuition of human speech recognition; (2) the structure is easy to optimize. The talk will go beyond basic RNNs, introducing prediction-adaptation-correction RNNs (PAC-RNNs) and highway LSTMs (HLSTMs). It studies both uni-directional and bi-directional RNNs, with discriminative training also applied on top of the RNNs. For efficient training of such RNNs, the talk will describe two algorithms for learning their parameters in some detail: (1) latency-controlled bi-directional model training; and (2) two-pass forward computation for sequence training. Finally, this talk will analyze the advantages and disadvantages of different variants and propose future directions.
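
      For readers unfamiliar with the building blocks named above, here is a from-scratch sketch of the LSTM recurrence and a bi-directional wrapper. All names, shapes, and weights are illustrative; this is the generic mechanism, not the PAC-RNN or HLSTM models from the talk:

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One LSTM step: gates computed from [x, h], then cell/hidden update."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)                  # input, forget, output, cand.
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c = sig(f) * c + sig(i) * np.tanh(g)         # cell state (long-term memory)
    h = sig(o) * np.tanh(c)                      # hidden state (output)
    return h, c

def bilstm(xs, Wf, bf, Wb, bb, hidden):
    """Bi-directional pass: run forward and backward, concatenate per step."""
    hf = cf = hb = cb = np.zeros(hidden)
    fwd, bwd = [], []
    for x in xs:
        hf, cf = lstm_step(x, hf, cf, Wf, bf)
        fwd.append(hf)
    for x in reversed(xs):
        hb, cb = lstm_step(x, hb, cb, Wb, bb)
        bwd.append(hb)
    return [np.concatenate([f, b]) for f, b in zip(fwd, reversed(bwd))]

rng = np.random.default_rng(0)
din, hid, T = 3, 4, 5
Wf = rng.standard_normal((4 * hid, din + hid)) * 0.1
Wb = rng.standard_normal((4 * hid, din + hid)) * 0.1
bf = np.zeros(4 * hid)
bb = np.zeros(4 * hid)
xs = [rng.standard_normal(din) for _ in range(T)]
outs = bilstm(xs, Wf, bf, Wb, bb, hid)           # T vectors of size 2 * hid
```

      Because the backward pass needs the full utterance, latency-controlled training of bi-directional models (one of the two algorithms above) restricts the backward context to a fixed-size chunk.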
  •  
  •  EVENT    MERL to celebrate 25 years of innovation
    Date: Thursday, June 2, 2016
    Location: Norton's Woods Conference Center at American Academy of Arts & Sciences, Cambridge, MA
    MERL Contacts: Elizabeth Phillips; Anthony Vetro
    Brief
    • A celebration event to mark MERL's 25th anniversary will be held on Thursday, June 2 at the Norton's Woods Conference Center at the American Academy of Arts & Sciences in Cambridge, MA. This event will feature keynote talks, panel sessions, and a research showcase. The event itself is invitation-only, but videos and other highlights will be made available online. Further details about the program can be obtained at the link below.
  •  
  •  NEWS    Toshiaki Koike-Akino gave invited talk at MIT Lincoln Laboratory, hosted by the IEEE Boston Photonics Society Chapter
    Date: January 14, 2016
    Where: MIT Lincoln Laboratory
    MERL Contact: Toshiaki Koike-Akino
    Research Area: Communications
    Brief
    • Toshiaki Koike-Akino gave an invited talk on recent advances in LDPC codes for high-speed optical communications at the IEEE Boston Photonics Workshop.
  •  
  •  TALK    A data-centric approach to driving behavior research: How can signal processing methods contribute to the development of autonomous driving?
    Date & Time: Tuesday, March 15, 2016; 12:00 PM - 12:45 PM
    Speaker: Prof. Kazuya Takeda, Nagoya University
    Research Area: Speech & Audio
    Abstract
    • Thanks to advanced "internet of things" (IoT) technologies, situation-specific human behavior has become an area of development for practical applications involving signal processing. One important area of development of such practical applications is driving behavior research. Since 1999, I have been collecting driving behavior data in a wide range of signal modalities, including speech/sound, video, physical/physiological sensors, CAN bus, LIDAR and GNSS. The objective of this data collection is to evaluate how well signal models can represent human behavior while driving. In this talk, I would like to summarize our 10 years of study of driving behavior signal processing, which has been based on these signal corpora. In particular, statistical signal models of interactions between traffic contexts and driving behavior, i.e., stochastic driver modeling, will be discussed, in the context of risky lane change detection. I greatly look forward to discussing the scalability of such corpus-based approaches, which could be applied to almost any traffic situation.
  •  
  •  TALK    Driver's mental workload estimation based on the reflex eye movement
    Date & Time: Tuesday, March 15, 2016; 12:45 PM - 1:30 PM
    Speaker: Prof. Hirofumi Aoki, Nagoya University
    Research Area: Speech & Audio
    Abstract
    • Driving requires a complex skill involving the vehicle itself (e.g., speed control and instrument operation), other road users (e.g., other vehicles, pedestrians), the surrounding environment, and so on. During driving, visual cues are the main source of information supplied to the brain. In order to stabilize visual information while you are moving, the eyes move in the opposite direction based on input to the vestibular system. This involuntary eye movement is called the vestibulo-ocular reflex (VOR), and its physiological models have been studied extensively. Obinata et al. found that the VOR can be used to estimate mental workload. Since then, our research group has been developing methods to quantitatively estimate mental workload during driving by means of reflex eye movement. In this talk, I will explain the basic mechanism of the reflex eye movement and how to apply it to mental workload estimation. I will also introduce our latest work combining the VOR and OKR (optokinetic reflex) models for naturalistic driving environments.
  •  
  •  TALK    Emotion Detection for Health Related Issues
    Date & Time: Tuesday, February 16, 2016; 12:00 PM - 1:00 PM
    Speaker: Dr. Najim Dehak, MIT
    Research Area: Speech & Audio
    Abstract
    • Recently, there has been a great increase of interest in the field of emotion recognition based on different human modalities, such as speech, heart rate, etc. Emotion recognition systems can be very useful in several areas, such as medicine and telecommunications. In the medical field, identifying emotions can be an important tool for detecting and monitoring patients with mental health disorders. In addition, identification of the emotional state from voice provides opportunities for the development of automated dialogue systems capable of producing reports to the physician based on frequent phone communication between the system and the patients. In this talk, we will describe a health-related application that uses an emotion recognition system based on human voices to detect and monitor the emotional state of people.
  •  
  •  NEWS    John Hershey gives invited talk at Johns Hopkins University on MERL's "Deep Clustering" breakthrough
    Date: March 4, 2016
    Where: Johns Hopkins Center for Language and Speech Processing
    MERL Contact: Jonathan Le Roux
    Research Area: Speech & Audio
    Brief
    • MERL researcher and speech team leader, John Hershey, was invited by the Center for Language and Speech Processing at Johns Hopkins University to give a talk on MERL's breakthrough audio separation work, known as "Deep Clustering". The talk was entitled "Speech Separation by Deep Clustering: Towards Intelligent Audio Analysis and Understanding," and was given on March 4, 2016.

      This is work conducted by MERL researchers John Hershey, Jonathan Le Roux, and Shinji Watanabe, and MERL interns, Zhuo Chen of Columbia University, and Yusef Isik of Sabanci University.
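
      At its core, deep clustering trains a network so that embeddings of time-frequency bins from the same source are close, by matching the embedding affinity matrix to an ideal one; separation is then performed by clustering the embeddings. The sketch below shows only that objective and the clustering step, on toy embeddings with illustrative names; it is not MERL's trained network:

```python
import numpy as np

def dc_loss(V, Y):
    """Deep clustering objective ||V V^T - Y Y^T||_F^2, expanded so the
    (bins x bins) affinity matrices are never formed explicitly."""
    return (np.linalg.norm(V.T @ V) ** 2
            - 2.0 * np.linalg.norm(V.T @ Y) ** 2
            + np.linalg.norm(Y.T @ Y) ** 2)

def kmeans(V, k, init, iters=20):
    """Plain k-means on embedding rows; `init` gives starting row indices."""
    centers = V[init].astype(float).copy()
    for _ in range(iters):
        d = ((V[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = V[labels == j].mean(axis=0)
    return labels

# Ideal embeddings (one-hot source memberships) achieve zero loss ...
Y = np.repeat(np.eye(2), 10, axis=0)             # 20 bins, 2 sources
loss_ideal = dc_loss(Y, Y)                       # exactly 0

# ... and at test time, bins are grouped by k-means on the embeddings.
rng = np.random.default_rng(3)
V = np.concatenate([rng.normal(0.0, 0.1, (10, 2)),
                    rng.normal(5.0, 0.1, (10, 2))])
labels = kmeans(V, k=2, init=[0, 19])
```

      The expanded form of the loss is what makes training tractable: it avoids materializing the quadratic-size affinity matrix over all time-frequency bins.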
  •  
  •  NEWS    MERL researcher invited to speak at the Institute for Mathematics and its Applications (IMA)
    Date: March 14, 2016 - March 18, 2016
    Where: Institute for Mathematics and its Applications
    Research Area: Dynamical Systems
    Brief
    • Mouhacine Benosman will give an invited talk on reduced-order model stabilization at the upcoming IMA workshop 'Computational Methods for Control of Infinite-dimensional Systems'.
  •