News & Events

  •  AWARD    University of Padua and MERL team wins the AI Olympics with RealAIGym competition at IROS24
    Date: October 17, 2024
    Awarded to: Niccolò Turcato, Alberto Dalla Libera, Giulio Giacomuzzo, Ruggero Carli, Diego Romeres
    MERL Contact: Diego Romeres
    Research Areas: Artificial Intelligence, Dynamical Systems, Machine Learning, Robotics
    Brief
    • The team composed of the control group at the University of Padua and MERL's Optimization and Robotics team ranked 1st among the 4 finalist teams at the 2nd AI Olympics with RealAIGym competition at IROS 24, which focused on the control of under-actuated robots. The team consisted of Niccolò Turcato, Alberto Dalla Libera, Giulio Giacomuzzo, Ruggero Carli, and Diego Romeres. The competition was organized by the German Research Center for Artificial Intelligence (DFKI), the Technical University of Darmstadt, and Chalmers University of Technology.

      The competition and award ceremony were hosted by the IEEE International Conference on Intelligent Robots and Systems (IROS) on October 17, 2024 in Abu Dhabi, UAE. Diego Romeres presented the team's method, which is based on a model-based reinforcement learning algorithm called MC-PILCO.
  •  AWARD    MERL team wins the Listener Acoustic Personalisation (LAP) 2024 Challenge
    Date: August 29, 2024
    Awarded to: Yoshiki Masuyama, Gordon Wichern, Francois G. Germain, Christopher Ick, and Jonathan Le Roux
    MERL Contacts: François Germain; Jonathan Le Roux; Gordon Wichern; Yoshiki Masuyama
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Brief
    • MERL's Speech & Audio team ranked 1st out of 7 teams in Task 2 of the 1st SONICOM Listener Acoustic Personalisation (LAP) Challenge, which focused on "Spatial upsampling for obtaining a high-spatial-resolution HRTF from a very low number of directions". The team was led by Yoshiki Masuyama, and also included Gordon Wichern, Francois Germain, MERL intern Christopher Ick, and Jonathan Le Roux.

      The LAP Challenge workshop and award ceremony was hosted by the 32nd European Signal Processing Conference (EUSIPCO 24) on August 29, 2024 in Lyon, France. Yoshiki Masuyama presented the team's method, "Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization", and received the award from Prof. Michele Geronazzo (University of Padova, IT, and Imperial College London, UK), Chair of the Challenge's Organizing Committee.

      The LAP challenge aims to explore challenges in the field of personalized spatial audio, with the first edition focusing on the spatial upsampling and interpolation of head-related transfer functions (HRTFs). HRTFs with dense spatial grids are required for immersive audio experiences, but their recording is time-consuming. Although HRTF spatial upsampling has recently shown remarkable progress with approaches involving neural fields, HRTF estimation accuracy remains limited when upsampling from only a few measured directions, e.g., 3 or 5 measurements. The MERL team tackled this problem by proposing a retrieval-augmented neural field (RANF). RANF retrieves, from a library of subjects, a subject whose HRTFs at the measured directions are close to those of the target subject. The HRTF of the retrieved subject at the target direction is then fed into the neural field in addition to the desired sound source direction. The team also developed a neural network architecture that can handle an arbitrary number of retrieved subjects, inspired by a multi-channel processing technique called transform-average-concatenate.
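      As an illustrative sketch only (not MERL's actual implementation), RANF's retrieval step can be thought of as a nearest-neighbor search over a library of subjects, comparing HRTFs only at the few measured directions; all names and array shapes below are hypothetical:

```python
import numpy as np

def retrieve_subjects(target_hrtf, library, n_retrieved=1):
    """Sketch of a RANF-style retrieval step (names and shapes hypothetical).

    target_hrtf: (M, F) array of HRTF magnitudes at the M measured directions.
    library:     (S, M, F) array of S library subjects' HRTFs at the same directions.
    Returns the indices of the n_retrieved subjects closest to the target.
    """
    # Frobenius distance between the target and each library subject,
    # computed only over the sparse measured directions
    dists = np.linalg.norm(library - target_hrtf[None], axis=(1, 2))
    return np.argsort(dists)[:n_retrieved]

# Toy example: 10 library subjects, 3 measured directions, 64 frequency bins
rng = np.random.default_rng(0)
library = rng.standard_normal((10, 3, 64))
target = library[7] + 0.01 * rng.standard_normal((3, 64))  # near subject 7
print(retrieve_subjects(target, library))  # → [7]
```

      The retrieved subject's dense HRTF would then be passed to the neural field as side information alongside the query direction.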
  •  AWARD    Jonathan Le Roux elevated to IEEE Fellow
    Date: January 1, 2024
    Awarded to: Jonathan Le Roux
    MERL Contact: Jonathan Le Roux
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Brief
    • MERL Distinguished Scientist and Speech & Audio Senior Team Leader Jonathan Le Roux has been elevated to IEEE Fellow, effective January 2024, "for contributions to multi-source speech and audio processing."

      Mitsubishi Electric celebrated Dr. Le Roux's elevation and that of another researcher from the company, Dr. Shumpei Kameyama, with a worldwide news release on February 15.

      Dr. Jonathan Le Roux has made fundamental contributions to the field of multi-speaker speech processing, especially to the areas of speech separation and multi-speaker end-to-end automatic speech recognition (ASR). His contributions constituted a major advance in realizing a practically usable solution to the cocktail party problem, enabling machines to replicate humans’ ability to concentrate on a specific sound source, such as a certain speaker within a complex acoustic scene—a long-standing challenge in the speech signal processing community. Additionally, he has made key contributions to the measures used for training and evaluating audio source separation methods, developing several new objective functions to improve the training of deep neural networks for speech enhancement, and analyzing the impact of metrics used to evaluate the signal reconstruction quality. Dr. Le Roux’s technical contributions have been crucial in promoting the widespread adoption of multi-speaker separation and end-to-end ASR technologies across various applications, including smart speakers, teleconferencing systems, hearables, and mobile devices.

      IEEE Fellow is the highest grade of membership of the IEEE. It honors members with an outstanding record of technical achievements, contributing importantly to the advancement or application of engineering, science and technology, and bringing significant value to society. Each year, following a rigorous evaluation procedure, the IEEE Fellow Committee recommends a select group of recipients for elevation to IEEE Fellow. Less than 0.1% of voting members are selected annually for this member grade elevation.
  •  AWARD    Honorable Mention Award at NeurIPS 23 Instruction Workshop
    Date: December 15, 2023
    Awarded to: Lingfeng Sun, Devesh K. Jha, Chiori Hori, Siddharth Jain, Radu Corcodel, Xinghao Zhu, Masayoshi Tomizuka and Diego Romeres
    MERL Contacts: Radu Corcodel; Chiori Hori; Siddarth Jain; Devesh K. Jha; Diego Romeres
    Research Areas: Artificial Intelligence, Machine Learning, Robotics
    Brief
    • MERL researchers received an Honorable Mention Award at the Workshop on Instruction Tuning and Instruction Following at the NeurIPS 2023 conference in New Orleans. The workshop focused on instruction tuning and instruction following for Large Language Models (LLMs). MERL researchers presented their work on interactive planning using LLMs for partially observable robotic tasks during the workshop's oral presentation session.
  •  AWARD    MERL team wins the Audio-Visual Speech Enhancement (AVSE) 2023 Challenge
    Date: December 16, 2023
    Awarded to: Zexu Pan, Gordon Wichern, Yoshiki Masuyama, Francois Germain, Sameer Khurana, Chiori Hori, and Jonathan Le Roux
    MERL Contacts: François Germain; Chiori Hori; Sameer Khurana; Jonathan Le Roux; Gordon Wichern; Yoshiki Masuyama
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Brief
    • MERL's Speech & Audio team ranked 1st out of 12 teams in the 2nd COG-MHEAR Audio-Visual Speech Enhancement Challenge (AVSE). The team was led by Zexu Pan, and also included Gordon Wichern, Yoshiki Masuyama, Francois Germain, Sameer Khurana, Chiori Hori, and Jonathan Le Roux.

      The AVSE challenge aims to design better speech enhancement systems by harnessing the visual aspects of speech (such as lip movements and gestures) in a manner similar to the brain's multi-modal integration strategies. MERL's system was a scenario-aware audio-visual TF-GridNet, which incorporates the face recording of a target speaker as a conditioning factor and also recognizes whether the predominant interference signal is speech or background noise. In addition to outperforming all competing systems in terms of objective metrics by a wide margin, in a listening test MERL's model achieved the best overall word intelligibility score of 84.54%, compared to 57.56% for the baseline and 80.41% for the next best team. The Fisher's least significant difference (LSD) was 2.14%, indicating that MERL's model offered statistically significant speech intelligibility improvements over all other systems.
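      Using only the numbers reported above, the significance claim can be checked directly: two systems differ significantly when their intelligibility gap exceeds the LSD. A minimal sketch:

```python
# Checking the reported AVSE listening-test results against Fisher's LSD:
# a difference is statistically significant if it exceeds the LSD threshold.
scores = {"MERL": 84.54, "next_best": 80.41, "baseline": 57.56}  # word intelligibility (%)
lsd = 2.14  # Fisher's least significant difference (%)

def significantly_better(a, b):
    """True if system a's score exceeds system b's by more than the LSD."""
    return scores[a] - scores[b] > lsd

print(significantly_better("MERL", "next_best"))  # → True (gap of 4.13 > 2.14)
print(significantly_better("MERL", "baseline"))   # → True (gap of 26.98 > 2.14)
```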
  •  AWARD    Joint University of Padua-MERL team wins Challenge 'AI Olympics With RealAIGym'
    Date: August 25, 2023
    Awarded to: Alberto Dalla Libera, Niccolo' Turcato, Giulio Giacomuzzo, Ruggero Carli, Diego Romeres
    MERL Contact: Diego Romeres
    Research Areas: Artificial Intelligence, Machine Learning, Robotics
    Brief
    • A joint team consisting of members of the University of Padua and MERL ranked 1st in the IJCAI2023 Challenge "AI Olympics With RealAIGym: Is AI Ready for Athletic Intelligence in the Real World?". The team was composed of MERL researcher Diego Romeres and a University of Padua (UniPD) team consisting of Alberto Dalla Libera, Ph.D., Ph.D. candidates Niccolò Turcato and Giulio Giacomuzzo, and Prof. Ruggero Carli.

      The International Joint Conference on Artificial Intelligence (IJCAI) is a premier gathering for AI researchers and organizes several competitions. This year, the competition CC7 "AI Olympics With RealAIGym: Is AI Ready for Athletic Intelligence in the Real World?" consisted of two stages: simulation and real-robot experiments on two under-actuated robotic systems. The two robotic systems were treated as separate tracks, and one winner was selected for each track based on specific performance criteria in the control tasks.

      The UniPD-MERL team competed in and won both tracks. The team's system made strong use of a model-based reinforcement learning algorithm called MC-PILCO, recently published by the team in the journal IEEE Transactions on Robotics.
  •  AWARD    MERL Intern and Researchers Win ICASSP 2023 Best Student Paper Award
    Date: June 9, 2023
    Awarded to: Darius Petermann, Gordon Wichern, Aswin Subramanian, Jonathan Le Roux
    MERL Contacts: Jonathan Le Roux; Gordon Wichern
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Brief
    • Former MERL intern Darius Petermann (Ph.D. candidate at Indiana University) has received a Best Student Paper Award at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023) for the paper "Hyperbolic Audio Source Separation", co-authored with MERL researchers Gordon Wichern and Jonathan Le Roux, and former MERL researcher Aswin Subramanian. The paper presents work performed during Darius's internship at MERL in the summer of 2022. It introduces a framework for audio source separation using embeddings on a hyperbolic manifold that compactly represent the hierarchical relationship between sound sources and time-frequency features. The code associated with the paper is publicly available at https://github.com/merlresearch/hyper-unmix.

      ICASSP is the flagship conference of the IEEE Signal Processing Society (SPS). ICASSP 2023 was held on the Greek island of Rhodes from June 4 to June 10, 2023, and was the largest ICASSP in history, with more than 4000 participants, 6128 submitted papers, and 2709 accepted papers. Darius's paper was first recognized as one of the top 3% of all papers accepted at the conference, before receiving one of only 5 Best Student Paper Awards during the closing ceremony.
  •  AWARD    MERL’s Paper on Wi-Fi Sensing Earns Top 3% Paper Recognition at ICASSP 2023, Selected as a Best Student Paper Award Finalist
    Date: June 9, 2023
    Awarded to: Cristian J. Vaca-Rubio, Pu Wang, Toshiaki Koike-Akino, Ye Wang, Petros Boufounos and Petar Popovski
    MERL Contacts: Petros T. Boufounos; Toshiaki Koike-Akino; Pu (Perry) Wang; Ye Wang
    Research Areas: Artificial Intelligence, Communications, Computational Sensing, Dynamical Systems, Machine Learning, Signal Processing
    Brief
    • A MERL paper on Wi-Fi sensing was recognized as a Top 3% Paper among all 2709 accepted papers at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023). Co-authored by Cristian Vaca-Rubio and Petar Popovski from Aalborg University, Denmark, and MERL researchers Pu Wang, Toshiaki Koike-Akino, Ye Wang, and Petros Boufounos, the paper "MmWave Wi-Fi Trajectory Estimation with Continuous-Time Neural Dynamic Learning" was also a Best Student Paper Award finalist.

      Performed during Cristian's stay at MERL, first as a visiting Marie Skłodowska-Curie Fellow and then as a full-time intern in 2022, this work capitalizes on standards-compliant Wi-Fi signals to perform indoor localization and sensing. The paper uses a neural dynamic learning framework to address technical issues such as low sampling rates and irregular sampling intervals.

      ICASSP, a flagship conference of the IEEE Signal Processing Society (SPS), was hosted on the Greek island of Rhodes from June 4 to June 10, 2023. ICASSP 2023 was the largest ICASSP in history, with over 4000 participants and 6128 submitted papers, of which 2709 were accepted.
  •  AWARD    Joint CMU-MERL team wins DCASE2023 Challenge on Automated Audio Captioning
    Date: June 1, 2023
    Awarded to: Shih-Lun Wu, Xuankai Chang, Gordon Wichern, Jee-weon Jung, Francois Germain, Jonathan Le Roux, Shinji Watanabe
    MERL Contacts: François Germain; Jonathan Le Roux; Gordon Wichern
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Brief
    • A joint team consisting of members of CMU Professor and MERL alumnus Shinji Watanabe's WavLab and members of MERL's Speech & Audio team ranked 1st out of 11 teams in the DCASE2023 Challenge's Task 6A, "Automated Audio Captioning". The team was led by student Shih-Lun Wu and also featured Ph.D. candidate Xuankai Chang, postdoctoral research associate Jee-weon Jung, Prof. Shinji Watanabe, and MERL researchers Gordon Wichern, Francois Germain, and Jonathan Le Roux.

      The IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE Challenge), started in 2013, has been organized yearly since 2016, and gathers challenges on multiple tasks related to the detection, analysis, and generation of sound events. This year, the DCASE2023 Challenge received over 428 submissions from 123 teams across seven tasks.

      The CMU-MERL team competed in the Task 6A track, Automated Audio Captioning, which aims at generating informative descriptions for various sounds from nature and/or human activities. The team's system made strong use of large pretrained models, namely a BEATs transformer as part of the audio encoder stack, an Instructor Transformer encoding ground-truth captions to derive an audio-text contrastive loss on the audio encoder, and ChatGPT to produce caption mix-ups (i.e., grammatical and compact combinations of two captions) which, together with the corresponding audio mixtures, increase not only the amount but also the complexity and diversity of the training data. The team's best submission obtained a SPIDEr-FL score of 0.327 on the hidden test set, largely outperforming the 2nd best team's 0.315.
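      The audio-text contrastive loss mentioned above is, in spirit, an InfoNCE-style objective that pulls matched audio/caption embeddings together and pushes mismatched pairs apart. The following is a minimal NumPy sketch under that assumption; the shapes, names, and temperature value are illustrative, not the team's actual code:

```python
import numpy as np

def audio_text_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """InfoNCE-style loss over a batch: row i of audio_emb is the audio
    clip matching row i of text_emb (its caption); both are (B, D) arrays."""
    # L2-normalize so the dot products below are cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (a @ t.T) / temperature  # (B, B) similarity matrix
    # Cross-entropy with the matched pairs on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Matched embeddings yield a lower loss than shuffled (mismatched) ones
emb = np.eye(4)
assert audio_text_contrastive_loss(emb, emb) < audio_text_contrastive_loss(emb, np.roll(emb, 1, axis=0))
```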
  •  AWARD    MERL Researchers Win Best Workshop Poster Award at the 2023 IEEE International Conference on Robotics and Automation (ICRA)
    Date: June 2, 2023
    Awarded to: Yuki Shirai, Devesh Jha, Arvind Raghunathan and Dennis Hong
    MERL Contacts: Devesh K. Jha; Arvind Raghunathan; Yuki Shirai
    Research Areas: Artificial Intelligence, Optimization, Robotics
    Brief
    • MERL's paper titled "Closed-Loop Tactile Controller for Tool Manipulation" won the Best Poster Award at the workshop "Embracing contacts: Making robots physically interact with our world". First author and MERL intern Yuki Shirai was presented with the award at a ceremony held at ICRA in London. MERL researchers Devesh Jha (Principal Research Scientist) and Arvind Raghunathan (Senior Principal Research Scientist and Senior Team Leader), as well as Prof. Dennis Hong of the University of California, Los Angeles, are also coauthors.

      The paper presents a technique to manipulate an object using a tool in a closed-loop fashion using vision-based tactile sensors. More information about the workshop and the various speakers can be found at https://sites.google.com/view/icra2023embracingcontacts/home.
  •  AWARD    ACM/IEEE Design Automation Conference 2022 Best Paper Award nominee
    Date: July 14, 2022
    Awarded to: Weidong Cao, Mouhacine Benosman, Xuan Zhang, and Rui Ma
    Research Area: Artificial Intelligence
    Brief
    • The conference committee of the 59th Design Automation Conference has chosen MERL's paper entitled 'Domain Knowledge-Infused Deep Learning for Automated Analog/RF Circuit Parameter Optimization' as a DAC Best Paper Award nominee. The committee evaluated both the manuscript and the submitted presentation recording, and chose MERL's paper as one of six nominees for this prestigious award. Decisions were based on the submissions' innovation, impact, and exposition.
  •  AWARD    International Conference on Artificial Intelligence Circuits and Systems (AICAS) 2022 Openedges Award
    Date: June 15, 2022
    Awarded to: Yuxiang Sun, Mouhacine Benosman, Rui Ma
    Research Area: Artificial Intelligence
    Brief
    • The committee of the International Conference on Artificial Intelligence Circuits and Systems (AICAS) 2022 has selected MERL's paper entitled 'GaN Distributed RF Power Amplifier Automation Design with Deep Reinforcement Learning' as a winner of the AICAS 2022 Openedges Award.

      In this paper, MERL researchers propose a novel design automation methodology based on deep reinforcement learning (RL) for wide-band non-uniform distributed RF power amplifiers, which are known for their high-dimensional design challenges.
  •  AWARD    MERL Ranked 1st Place in Cross-Subject Transfer Learning Task and 4th Place Overall at the NeurIPS2021 BEETL Competition for EEG Transfer Learning.
    Date: November 11, 2021
    Awarded to: Niklas Smedemark-Margulies, Toshiaki Koike-Akino, Ye Wang, Deniz Erdogmus
    MERL Contacts: Toshiaki Koike-Akino; Ye Wang
    Research Areas: Artificial Intelligence, Signal Processing, Human-Computer Interaction
    Brief
    • The MERL Signal Processing group achieved first place in the cross-subject transfer learning task and fourth place overall in the NeurIPS 2021 BEETL AI Challenge for EEG Transfer Learning. The team included Niklas Smedemark-Margulies (intern from Northeastern University), Toshiaki Koike-Akino, Ye Wang, and Prof. Deniz Erdogmus (Northeastern University). The challenge addressed two types of transfer learning tasks for EEG biosignals: a homogeneous transfer learning task for cross-subject domain adaptation, and a heterogeneous transfer learning task for cross-data domain adaptation. Among the 110+ registered teams, MERL ranked 1st in the homogeneous transfer learning task, 7th in the heterogeneous transfer learning task, and 4th in the combined overall score. For the homogeneous transfer learning task, MERL developed a new pre-shot learning framework based on feature disentanglement techniques, which provides robustness against inter-subject variation and enables calibration-free brain-computer interfaces (BCI). MERL was invited to present this pre-shot learning technique at the NeurIPS 2021 workshop.
  •  AWARD    Daniel Nikovski receives Outstanding Reviewer Award at NeurIPS'21
    Date: October 18, 2021
    Awarded to: Daniel Nikovski
    MERL Contact: Daniel N. Nikovski
    Research Areas: Artificial Intelligence, Machine Learning
    Brief
    • Daniel Nikovski, Group Manager of MERL's Data Analytics group, has received an Outstanding Reviewer Award from the 2021 conference on Neural Information Processing Systems (NeurIPS'21). NeurIPS is the world's premier conference on neural networks and related technologies.
  •  AWARD    Best Poster Award and Best Video Award at the International Society for Music Information Retrieval Conference (ISMIR) 2020
    Date: October 15, 2020
    Awarded to: Ethan Manilow, Gordon Wichern, Jonathan Le Roux
    MERL Contacts: Jonathan Le Roux; Gordon Wichern
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Brief
    • Former MERL intern Ethan Manilow and MERL researchers Gordon Wichern and Jonathan Le Roux won the Best Poster Award and the Best Video Award at the 2020 International Society for Music Information Retrieval Conference (ISMIR 2020) for the paper "Hierarchical Musical Source Separation". The conference was held October 11-14 in a virtual format. Both awards were determined by popular vote among the conference attendees.

      The paper proposes a new method for isolating individual sounds in an audio mixture that accounts for the hierarchical relationship between sound sources. Many sounds we are interested in analyzing are hierarchical in nature, e.g., during a music performance, a hi-hat note is one of many such hi-hat notes, which is one of several parts of a drumkit, itself one of many instruments in a band, which might be playing in a bar with other sounds occurring. Inspired by this, the paper re-frames the audio source separation problem as hierarchical, combining similar sounds together at certain levels while separating them at other levels, and shows on a musical instrument separation task that a hierarchical approach outperforms non-hierarchical models while also requiring less training data. The paper, poster, and video can be seen on the paper page on the ISMIR website.
  •  AWARD    Best Paper Award at the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) 2019
    Date: December 18, 2019
    Awarded to: Xuankai Chang, Wangyou Zhang, Yanmin Qian, Jonathan Le Roux, Shinji Watanabe
    MERL Contact: Jonathan Le Roux
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Brief
    • MERL researcher Jonathan Le Roux and co-authors Xuankai Chang, Shinji Watanabe (Johns Hopkins University), Wangyou Zhang, and Yanmin Qian (Shanghai Jiao Tong University) won the Best Paper Award at the 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2019), for the paper "MIMO-Speech: End-to-End Multi-Channel Multi-Speaker Speech Recognition". MIMO-Speech is a fully neural end-to-end framework that can transcribe the text of multiple speakers speaking simultaneously from multi-channel input. The system is comprised of a monaural masking network, a multi-source neural beamformer, and a multi-output speech recognition model, which are jointly optimized only via an automatic speech recognition (ASR) criterion. The award was received by lead author Xuankai Chang during the conference, which was held in Sentosa, Singapore from December 14-18, 2019.
  •  AWARD    MERL Researchers win Best Paper Award at ICCV 2019 Workshop on Statistical Deep Learning in Computer Vision
    Date: October 27, 2019
    Awarded to: Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Chen Feng, Xiaoming Liu
    MERL Contact: Tim K. Marks
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    Brief
    • MERL researcher Tim Marks, former MERL interns Abhinav Kumar and Wenxuan Mou, and MERL consultants Professor Chen Feng (NYU) and Professor Xiaoming Liu (MSU) received the Best Oral Paper Award at the IEEE/CVF International Conference on Computer Vision (ICCV) 2019 Workshop on Statistical Deep Learning in Computer Vision (SDL-CV) held in Seoul, Korea. Their paper, entitled "UGLLI Face Alignment: Estimating Uncertainty with Gaussian Log-Likelihood Loss," describes a method which, given an image of a face, estimates not only the locations of facial landmarks but also the uncertainty of each landmark location estimate.
  •  AWARD    MERL Researcher Devesh Jha Wins the Rudolf Kalman Best Paper Award 2019
    Date: October 10, 2019
    Awarded to: Devesh Jha, Nurali Virani, Zhenyuan Yuan, Ishana Shekhawat and Asok Ray
    MERL Contact: Devesh K. Jha
    Research Areas: Artificial Intelligence, Control, Data Analytics, Machine Learning, Robotics
    Brief
    • MERL researcher Devesh Jha has won the Rudolf Kalman Best Paper Award 2019 for the paper entitled "Imitation of Demonstrations Using Bayesian Filtering With Nonparametric Data-Driven Models". This paper, published in a Special Commemorative Issue for Rudolf E. Kalman in the ASME JDSMC in March 2018, uses Bayesian filtering for imitation learning in Hidden Mode Hybrid Systems. This award is given annually by the Dynamic Systems and Control Division of ASME to the authors of the best paper published in the ASME Journal of Dynamic Systems Measurement and Control during the preceding year.
  •  AWARD    MERL Researchers Won IEEE ICC Best Paper Award
    Date: May 22, 2019
    Awarded to: Sriramya Bhamidipati, Kyeong Jin Kim, Hongbo Sun, Philip Orlik
    MERL Contacts: Hongbo Sun; Philip V. Orlik
    Research Areas: Artificial Intelligence, Communications, Machine Learning, Signal Processing, Information Security
    Brief
    • MERL researchers Kyeong Jin Kim, Hongbo Sun, and Philip Orlik, along with lead author and former MERL intern Sriramya Bhamidipati, were awarded the Smart Grid Symposium Best Paper Award at this year's International Conference on Communications (ICC) held in Shanghai, China. Their paper, titled "GPS Spoofing Detection and Mitigation in PMUs Using Distributed Multiple Directional Antennas," describes a technique to rapidly detect and mitigate GPS timing attacks/errors via hardware (antennas) and signal processing (Kalman filtering).
  •  AWARD    MERL researcher wins Best Visualization Note Award at PacificVis2019 Conference
    Date: April 23, 2019
    Awarded to: Teng-yok Lee
    Research Areas: Artificial Intelligence, Computer Vision, Data Analytics, Machine Learning
    Brief
    • MERL researcher Teng-yok Lee has won the Best Visualization Note Award at the PacificVis 2019 conference held in Bangkok, Thailand, from April 23-26, 2019. The paper entitled "Space-Time Slicing: Visualizing Object Detector Performance in Driving Video Sequences" presents a visualization method, called Space-Time Slicing, that assists a human developer in developing object detectors for driving applications without requiring labeled data. Space-Time Slicing reveals patterns in the detection data that can suggest the presence of false positives and false negatives.
  •  AWARD    R&D100 award for Deep Learning-based Water Detector
    Date: November 16, 2018
    Awarded to: Ziming Zhang, Alan Sullivan, Hideaki Maehara, Kenji Taira, Kazuo Sugimoto
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    Brief
    • Researchers and developers from MERL, Mitsubishi Electric, and Mitsubishi Electric Engineering (MEE) have been recognized with an R&D100 award for the development of a deep learning-based water detector. Automatic detection of water levels in rivers and streams is critical for early warning of flash flooding. Existing systems require a height gauge to be placed in the river or stream, which is costly and sometimes impossible. The new deep learning-based water detector uses only images from a video camera, along with 3D measurements of the river valley, to determine water levels and warn of potential flooding. The system is robust to lighting and weather conditions, working well at night as well as in fog or rain. Deep learning uses neural networks trained on real data to perform human-level recognition tasks. This work is powered by Mitsubishi Electric's Maisart AI technology.
  •  AWARD    CHiME 2012 Speech Separation and Recognition Challenge Best Performance
    Date: June 1, 2013
    Awarded to: Yuuki Tachioka, Shinji Watanabe, Jonathan Le Roux and John R. Hershey
    Awarded for: "Discriminative Methods for Noise Robust Speech Recognition: A CHiME Challenge Benchmark"
    Awarded by: International Workshop on Machine Listening in Multisource Environments (CHiME)
    MERL Contact: Jonathan Le Roux
    Research Area: Speech & Audio
    Brief
    • The results of the 2nd 'CHiME' Speech Separation and Recognition Challenge are out! The team formed by MELCO researcher Yuuki Tachioka and MERL Speech & Audio team researchers Shinji Watanabe, Jonathan Le Roux, and John Hershey obtained the best results in the continuous speech recognition task (Track 2). This very challenging task consisted of recognizing speech corrupted by highly non-stationary noises recorded in a real living room. The team's proposal, which also included a simple yet extremely efficient denoising front-end, focused on investigating and developing state-of-the-art automatic speech recognition back-end techniques: feature transformation methods, as well as discriminative training methods for acoustic and language modeling. The system significantly outperformed those of the other participants, and the code has since been released as an improved baseline for the community to use.
  •  AWARD    AVSS 2011 Best Paper Award
    Date: September 2, 2011
    Awarded to: Fatih Porikli and Huseyin Ozkan.
    Awarded for: "Data Driven Frequency Mapping for Computationally Scalable Object Detection"
    Awarded by: IEEE Advanced Video and Signal Based Surveillance (AVSS)
    Research Area: Machine Learning
  •  AWARD    CVPR 2011 Longuet-Higgins Prize
    Date: June 25, 2011
    Awarded to: Paul A. Viola and Michael J. Jones
    Awarded for: "Rapid Object Detection using a Boosted Cascade of Simple Features"
    Awarded by: Conference on Computer Vision and Pattern Recognition (CVPR)
    MERL Contact: Michael J. Jones
    Research Area: Machine Learning
    Brief
    • The Longuet-Higgins Prize recognizes the paper from 10 years ago with the largest impact on the field: "Rapid Object Detection using a Boosted Cascade of Simple Features", originally published at the Conference on Computer Vision and Pattern Recognition (CVPR 2001).
  •  AWARD    OTCBVS 2010 Best Paper Award
    Date: June 1, 2010
    Awarded to: Vijay Venkataraman and Fatih Porikli
    Awarded for: "RelCom: Relational Combinatorics Features for Rapid Object Detection"
    Awarded by: IEEE Workshop on Object Tracking and Classification Beyond and in the Visible Spectrum (OTCBVS)
    Research Area: Machine Learning