Artificial Intelligence
Making machines smarter for improved safety, efficiency and comfort.
Our AI research encompasses advances in computer vision, speech and audio processing, and data analytics. Key research themes include improved perception based on machine learning techniques, learning control policies through model-based reinforcement learning, and cognition and reasoning based on learned semantic representations. We apply our work to a broad range of automotive and robotics applications, as well as building and home systems.
Researchers
Jonathan Le Roux; Toshiaki Koike-Akino; Ye Wang; Gordon Wichern; Anoop Cherian; Tim K. Marks; Chiori Hori; Michael J. Jones; Kieran Parsons; François Germain; Daniel N. Nikovski; Devesh K. Jha; Jing Liu; Suhas Lohit; Matthew Brand; Philip V. Orlik; Diego Romeres; Pu (Perry) Wang; Petros T. Boufounos; Siddarth Jain; Moitreya Chatterjee; Hassan Mansour; Kuan-Chuan Peng; William S. Yerazunis; Radu Corcodel; Yoshiki Masuyama; Arvind Raghunathan; Pedro Miraldo; Hongbo Sun; Yebin Wang; Ankush Chakrabarty; Jianlin Guo; Chungwei Lin; Yanting Ma; Bingnan Wang; Ryo Aihara; Stefano Di Cairano; Saviz Mowlavi; Anthony Vetro; Jinyun Zhang; Vedang M. Deshpande; Christopher R. Laughman; Dehong Liu; Alexander Schperberg; Wataru Tsujita; Abraham P. Vinod; Kenji Inomata; Na Li
-
Awards
-
AWARD: MERL Wins Awards at NeurIPS LLM Privacy Challenge
Date: December 15, 2024
Awarded to: Jing Liu, Ye Wang, Toshiaki Koike-Akino, Tsunato Nakai, Kento Oonishi, Takuya Higashi
MERL Contacts: Toshiaki Koike-Akino; Jing Liu; Ye Wang
Research Areas: Artificial Intelligence, Machine Learning, Information Security
Brief: The Mitsubishi Electric Privacy Enhancing Technologies (MEL-PETs) team, a collaboration of MERL and Mitsubishi Electric researchers, won awards at the NeurIPS 2024 Large Language Model (LLM) Privacy Challenge. In the Blue Team track of the challenge, we won the 3rd Place Award, and in the Red Team track, we won the Special Award for Practical Attack.
-
AWARD: University of Padua and MERL team wins the AI Olympics with RealAIGym competition at IROS 2024
Date: October 17, 2024
Awarded to: Niccolò Turcato, Alberto Dalla Libera, Giulio Giacomuzzo, Ruggero Carli, Diego Romeres
MERL Contact: Diego Romeres
Research Areas: Artificial Intelligence, Dynamical Systems, Machine Learning, Robotics
Brief: A team composed of the control group at the University of Padua and MERL's Optimization and Robotics team ranked 1st out of the 4 finalist teams that reached the 2nd AI Olympics with RealAIGym competition at IROS 2024, which focused on control of under-actuated robots. The team was composed of Niccolò Turcato, Alberto Dalla Libera, Giulio Giacomuzzo, Ruggero Carli, and Diego Romeres. The competition was organized by the German Research Center for Artificial Intelligence (DFKI), Technical University of Darmstadt, and Chalmers University of Technology.
The competition and award ceremony were hosted by the IEEE International Conference on Intelligent Robots and Systems (IROS) on October 17, 2024 in Abu Dhabi, UAE. Diego Romeres presented the team's method, based on a model-based reinforcement learning algorithm called MC-PILCO.
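For context, MC-PILCO belongs to the family of Monte Carlo, model-based policy-search methods. The following is a minimal, illustrative sketch of that general pattern in PyTorch (not the team's competition code); the policy, dynamics model, and cost function are assumed to be differentiable modules, and all names are placeholders.

import torch

def improve_policy(policy, dynamics_model, cost_fn, init_states,
                   horizon=100, iters=200, lr=1e-3):
    # Illustrative model-based policy search: roll Monte Carlo particles through a
    # learned, differentiable dynamics model and update the policy by backpropagating
    # the accumulated cost of the simulated rollouts.
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(iters):
        states = init_states.clone()                  # (num_particles, state_dim)
        total_cost = torch.zeros(())
        for _ in range(horizon):
            actions = policy(states)                  # (num_particles, action_dim)
            states = dynamics_model(states, actions)  # stochastic one-step predictions
            total_cost = total_cost + cost_fn(states).mean()
        opt.zero_grad()
        total_cost.backward()
        opt.step()
    return policy

# In an MC-PILCO-style loop, this policy update alternates with refitting the
# dynamics model (e.g., a Gaussian process) on data collected from the real system.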
-
AWARD: MERL team wins the Listener Acoustic Personalisation (LAP) 2024 Challenge
Date: August 29, 2024
Awarded to: Yoshiki Masuyama, Gordon Wichern, Francois G. Germain, Christopher Ick, and Jonathan Le Roux
MERL Contacts: François Germain; Jonathan Le Roux; Gordon Wichern; Yoshiki Masuyama
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief: MERL's Speech & Audio team ranked 1st out of 7 teams in Task 2 of the 1st SONICOM Listener Acoustic Personalisation (LAP) Challenge, which focused on "Spatial upsampling for obtaining a high-spatial-resolution HRTF from a very low number of directions". The team was led by Yoshiki Masuyama and also included Gordon Wichern, François Germain, MERL intern Christopher Ick, and Jonathan Le Roux.
The LAP Challenge workshop and award ceremony was hosted by the 32nd European Signal Processing Conference (EUSIPCO 24) on August 29, 2024 in Lyon, France. Yoshiki Masuyama presented the team's method, "Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization", and received the award from Prof. Michele Geronazzo (University of Padova, IT, and Imperial College London, UK), Chair of the Challenge's Organizing Committee.
The LAP challenge aims to explore challenges in the field of personalized spatial audio, with the first edition focusing on the spatial upsampling and interpolation of head-related transfer functions (HRTFs). HRTFs with dense spatial grids are required for immersive audio experiences, but their recording is time-consuming. Although HRTF spatial upsampling has recently shown remarkable progress with approaches involving neural fields, HRTF estimation accuracy remains limited when upsampling from only a few measured directions, e.g., 3 or 5 measurements. The MERL team tackled this problem by proposing a retrieval-augmented neural field (RANF). RANF retrieves a subject whose HRTFs are close to those of the target subject at the measured directions from a library of subjects. The HRTF of the retrieved subject at the target direction is fed into the neural field in addition to the desired sound source direction. The team also developed a neural network architecture that can handle an arbitrary number of retrieved subjects, inspired by a multi-channel processing technique called transform-average-concatenate.
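As a rough illustration of this retrieve-then-condition pattern, the sketch below shows one way it could be wired up in PyTorch. It is a simplified, hypothetical example under assumed shapes and layer sizes, not the released RANF implementation (available under Software & Data Downloads); all names are placeholders.

import torch
import torch.nn as nn

def retrieve_subjects(target_hrtfs, library, k=3):
    # Pick the k library subjects whose HRTFs at the measured directions are
    # closest (mean squared error) to the target subject's sparse measurements.
    # target_hrtfs: (num_measured_dirs, freq_bins); library: (num_subjects, num_measured_dirs, freq_bins)
    dists = ((library - target_hrtfs.unsqueeze(0)) ** 2).mean(dim=(1, 2))
    return torch.topk(dists, k, largest=False).indices

class TACBlock(nn.Module):
    # Transform-average-concatenate: a shared transform per retrieved subject,
    # an average across subjects, and the average concatenated back to each,
    # so the block accepts an arbitrary number of retrieved subjects.
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.merge = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, x):                      # x: (k, dim)
        h = self.transform(x)
        avg = h.mean(dim=0, keepdim=True).expand_as(h)
        return self.merge(torch.cat([h, avg], dim=-1))

class RANFSketch(nn.Module):
    # Neural field over source direction, conditioned on the retrieved subjects'
    # HRTFs at the queried direction.
    def __init__(self, freq_bins, dim=128):
        super().__init__()
        self.embed_dir = nn.Linear(3, dim)     # source direction as a 3-D unit vector
        self.embed_hrtf = nn.Linear(freq_bins, dim)
        self.tac = TACBlock(dim)
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, freq_bins))

    def forward(self, direction, retrieved_hrtfs):
        # direction: (3,); retrieved_hrtfs: (k, freq_bins) at that direction
        d = self.embed_dir(direction)
        r = self.tac(self.embed_hrtf(retrieved_hrtfs)).mean(dim=0)
        return self.head(torch.cat([d, r], dim=-1))   # predicted HRTF at `direction`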
See All Awards for Artificial Intelligence
-
News & Events
-
EVENT: MERL Contributes to ICASSP 2025
Date: Sunday, April 6, 2025 - Friday, April 11, 2025
Location: Hyderabad, India
MERL Contacts: Wael H. Ali; Petros T. Boufounos; Radu Corcodel; François Germain; Chiori Hori; Siddarth Jain; Devesh K. Jha; Toshiaki Koike-Akino; Jonathan Le Roux; Yanting Ma; Hassan Mansour; Yoshiki Masuyama; Joshua Rapp; Diego Romeres; Anthony Vetro; Pu (Perry) Wang; Gordon Wichern
Research Areas: Artificial Intelligence, Communications, Computational Sensing, Electronic and Photonic Devices, Machine Learning, Robotics, Signal Processing, Speech & Audio
Brief: MERL has made numerous contributions to both the organization and technical program of ICASSP 2025, which is being held in Hyderabad, India from April 6-11, 2025.
Sponsorship
MERL is proud to be a Silver Patron of the conference and will participate in the student job fair on Thursday, April 10. Please join this session to learn more about employment opportunities at MERL, including openings for research scientists, post-docs, and interns.
MERL is pleased to be the sponsor of two IEEE Awards that will be presented at the conference. We congratulate Prof. Björn Erik Ottersten, the recipient of the 2025 IEEE Fourier Award for Signal Processing, and Prof. Shrikanth Narayanan, the recipient of the 2025 IEEE James L. Flanagan Speech and Audio Processing Award. Both awards will be presented in-person at ICASSP by Anthony Vetro, MERL President & CEO.
Technical Program
MERL is presenting 15 papers in the main conference on a wide range of topics including source separation, sound event detection, sound anomaly detection, speaker diarization, music generation, robot action generation from video, indoor airflow imaging, WiFi sensing, Doppler single-photon Lidar, optical coherence tomography, and radar imaging. Another paper on spatial audio will be presented at the Generative Data Augmentation for Real-World Signal Processing Applications (GenDA) Satellite Workshop.
MERL Researchers Petros Boufounos and Hassan Mansour will present a Tutorial on “Computational Methods in Radar Imaging” in the afternoon of Monday, April 7.
Petros Boufounos will also give an industry talk on Thursday, April 10 at 12:00 pm, on “A Physics-Informed Approach to Sensing”.
About ICASSP
ICASSP is the flagship conference of the IEEE Signal Processing Society, and the world's largest and most comprehensive technical conference focused on research advances and the latest technological developments in signal and information processing. The event attracts more than 4,000 participants each year.
-
TALK: [MERL Seminar Series 2025] Andy Zou presents a talk titled "Red Teaming AI Agents in-the-wild: Revealing Deployment Vulnerabilities"
Date & Time: Wednesday, March 26, 2025; 1:00 PM
Speaker: Andy Zou, CMU & Gray Swan AI
MERL Host: Ye Wang
Research Areas: Artificial Intelligence, Machine Learning, Information Security
Abstract: This presentation demonstrates how red teaming uncovers critical vulnerabilities in AI agents that challenge assumptions about safe deployment. The talk discusses the risks of integrating AI into real-world applications and recommends practical safeguards to enhance resilience and ensure dependable deployment in high-risk settings.
See All News & Events for Artificial Intelligence
-
Research Highlights
-
- PS-NeuS: A Probability-guided Sampler for Neural Implicit Surface Rendering
- Quantum AI Technology
- TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-Aware Spatio-Temporal Sampling
- Steered Diffusion
- Sustainable AI
- Robust Machine Learning
- mmWave Beam-SNR Fingerprinting (mmBSF)
- Video Anomaly Detection
- Biosignal Processing for Human-Machine Interaction
- Task-aware Unified Source Separation - Audio Examples
-
Internships
-
OR0127: Internship - Deep Learning for Robotic Manipulation
MERL is looking for a highly motivated and qualified intern to work on deep learning methods for detection and pose estimation of objects using vision and tactile sensing in manufacturing and assembly environments. This role involves developing, fine-tuning, and deploying models on existing hardware. The methods will be applied to robotic manipulation, where accurate knowledge of the position and orientation of objects in the scene allows the robot to interact with them. The ideal candidate is a Ph.D. student familiar with state-of-the-art methods for pose estimation and tracking of objects. The successful candidate will work closely with MERL researchers to develop and implement novel algorithms, conduct experiments, and publish research findings at a top-tier conference. The start date and expected duration of the internship are flexible. Interested candidates are encouraged to apply with an updated CV and a list of relevant publications.
Required Specific Experience
- Prior experience in Computer Vision and Robotic Manipulation.
- Experience with ROS and deep learning frameworks such as PyTorch is essential.
- Strong programming skills in Python.
- Experience with simulation tools, such as PyBullet, Isaac Lab, or MuJoCo.
-
CI0082: Internship - Quantum AI
MERL is excited to announce an internship opportunity in the field of Quantum Machine Learning (QML) and Quantum AI (QAI). We are seeking a highly motivated and talented individual to join our research team. This is an exciting opportunity to make a real impact in the field of quantum computing and AI, with the aim of publishing at leading research venues.
Responsibilities:
- Conduct cutting-edge research in quantum machine learning.
- Collaborate with a team of experts in quantum computing, deep learning, and signal processing.
- Develop and implement algorithms using PyTorch and PennyLane (a minimal illustrative example is sketched after this posting).
- Publish research results at leading research venues.
Qualifications:
- Currently pursuing a Ph.D., or working as a post-graduate researcher, in a relevant field.
- Strong background and solid publication records in quantum computing, deep learning, and signal processing.
- Proficient programming skills in PyTorch and PennyLane are highly desirable.
What We Offer:
- An opportunity to work on groundbreaking research in a leading research lab.
- Collaboration with a team of experienced researchers.
- A stimulating and supportive work environment.
If you are passionate about quantum machine learning and meet the above qualifications, we encourage you to apply. Please submit your resume and a brief cover letter detailing your research experience and interests. Join us at MERL and contribute to the future of quantum machine learning!
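For candidates unfamiliar with the toolchain, the snippet below is a minimal, hypothetical example of wrapping a variational quantum circuit as a PyTorch layer with PennyLane; the circuit structure, sizes, and names are illustrative assumptions, not a description of the internship project.

import pennylane as qml
import torch

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    # Encode classical features as rotation angles, apply entangling layers,
    # and read out Pauli-Z expectation values.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# Trainable circuit parameters: shape (n_layers, n_qubits) for BasicEntanglerLayers.
qlayer = qml.qnn.TorchLayer(circuit, weight_shapes={"weights": (3, n_qubits)})

# A hybrid quantum-classical model trained end-to-end with standard PyTorch optimizers.
model = torch.nn.Sequential(
    torch.nn.Linear(8, n_qubits),
    qlayer,
    torch.nn.Linear(n_qubits, 2),
)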
-
CI0080: Internship - Efficient AI
We are on the lookout for passionate and skilled interns to join our cutting-edge research team focused on developing efficient machine learning techniques for sustainability. This is an exciting opportunity to make a real impact in the field of AI and environmental conservation, with the aim of publishing at leading AI research venues.
What We're Looking For:
- Advanced research experience in generative models and computationally efficient models
- Hands-on skills with large language models (LLMs), vision-language models (VLMs), large multi-modal models (LMMs), and foundation models (FoMo)
- Deep understanding of state-of-the-art machine learning methods
- Proficiency in Python and PyTorch
- Familiarity with various deep learning frameworks
- Ph.D. candidates who have completed at least half of their program
Internship Details:
- Duration: approximately 3 months
- Flexible start dates available
- Objective: publish research results at leading AI research venues
If you are a highly motivated individual with a passion for applying AI to sustainability challenges, we want to hear from you! This internship offers a unique chance to work on meaningful projects at the intersection of machine learning and environmental sustainability.
See All Internships for Artificial Intelligence
-
Openings
See All Openings at MERL
Recent Publications
- "Data Augmentation Using Neural Acoustic Fields With Retrieval-Augmented Pre-training", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) Satellite Workshop on Generative Data Augmentation for Real-World Signal Processing Applications (GenDA), April 2025.BibTeX TR2025-045 PDF
- @inproceedings{Ick2025apr,
- author = {Ick, Christopher and Wichern, Gordon and Masuyama, Yoshiki and Germain, François G and {Le Roux}, Jonathan},
- title = {{Data Augmentation Using Neural Acoustic Fields With Retrieval-Augmented Pre-training}},
- booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) Satellite Workshop on Generative Data Augmentation for Real-World Signal Processing Applications (GenDA)},
- year = 2025,
- month = apr,
- url = {https://www.merl.com/publications/TR2025-045}
- }
, - "Improving Subject Transfer in EEG Classification with Divergence Estimation", Journal of Neural Engineering, DOI: 10.1088/1741-2552/ad9777, Vol. 21, No. 6, April 2025.BibTeX TR2025-044 PDF Software
- @article{Smedemark-Margulies2025apr,
- author = {Smedemark-Margulies, Niklas and Wang, Ye and Koike-Akino, Toshiaki and Liu, Jing and Parsons, Kieran and Bicer, Yunus and Erdogmus, Deniz},
- title = {{Improving Subject Transfer in EEG Classification with Divergence Estimation}},
- journal = {Journal of Neural Engineering},
- year = 2025,
- volume = 21,
- number = 6,
- month = apr,
- doi = {10.1088/1741-2552/ad9777},
- url = {https://www.merl.com/publications/TR2025-044}
- }
, - "SurfR: Surface Reconstruction with Multi-scale Attention", International Conference on 3D Vision (3DV), March 2025.BibTeX TR2025-039 PDF
- @inproceedings{Ranade2025mar,
- author = {Ranade, Siddhant and Pais, Goncalo and Whitaker, Ross and Nascimento, Jacinto and Miraldo, Pedro and Ramalingam, Srikumar},
- title = {{SurfR: Surface Reconstruction with Multi-scale Attention}},
- booktitle = {International Conference on 3D Vision (3DV)},
- year = 2025,
- month = mar,
- url = {https://www.merl.com/publications/TR2025-039}
- }
, - "30+ Years of Source Separation Research: Achievements and Future Challenges", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2025.BibTeX TR2025-036 PDF
- @inproceedings{Araki2025mar,
- author = {Araki, Shoko and Ito, Nobutaka and Haeb-Umbach, Reinhold and Wichern, Gordon and Wang, Zhong-Qiu and Mitsufuji, Yuki},
- title = {{30+ Years of Source Separation Research: Achievements and Future Challenges}},
- booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
- year = 2025,
- month = mar,
- url = {https://www.merl.com/publications/TR2025-036}
- }
, - "No Class Left Behind: A Closer Look at Class Balancing for Audio Tagging", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2025.BibTeX TR2025-037 PDF
- @inproceedings{Ebbers2025mar,
- author = {Ebbers, Janek and Germain, François G and Wilkinghoff, Kevin and Wichern, Gordon and {Le Roux}, Jonathan},
- title = {{No Class Left Behind: A Closer Look at Class Balancing for Audio Tagging}},
- booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
- year = 2025,
- month = mar,
- url = {https://www.merl.com/publications/TR2025-037}
- }
, - "O-EENC-SD: Efficient Online End-to-End Neural Clustering for Speaker Diarization", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2025.BibTeX TR2025-031 PDF
- @inproceedings{Gruttadauria2025mar,
- author = {Gruttadauria, Elio and Fontaine, Mathieu and {Le Roux}, Jonathan and Essid, Slim},
- title = {{O-EENC-SD: Efficient Online End-to-End Neural Clustering for Speaker Diarization}},
- booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
- year = 2025,
- month = mar,
- url = {https://www.merl.com/publications/TR2025-031}
- }
, - "Interactive Robot Action Replanning using Multimodal LLM Trained from Human Demonstration Videos", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2025.BibTeX TR2025-034 PDF
- @inproceedings{Hori2025mar,
- author = {Hori, Chiori and Kambara, Motonari and Sugiura, Komei and Ota, Kei and Khurana, Sameer and Jain, Siddarth and Corcodel, Radu and Jha, Devesh K. and Romeres, Diego and {Le Roux}, Jonathan},
- title = {{Interactive Robot Action Replanning using Multimodal LLM Trained from Human Demonstration Videos}},
- booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
- year = 2025,
- month = mar,
- url = {https://www.merl.com/publications/TR2025-034}
- }
, - "Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2025.BibTeX TR2025-029 PDF Software
- @inproceedings{Masuyama2025mar,
- author = {Masuyama, Yoshiki and Wichern, Gordon and Germain, François G and Ick, Christopher and {Le Roux}, Jonathan},
- title = {{Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization}},
- booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
- year = 2025,
- month = mar,
- url = {https://www.merl.com/publications/TR2025-029}
- }
,
- "Data Augmentation Using Neural Acoustic Fields With Retrieval-Augmented Pre-training", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) Satellite Workshop on Generative Data Augmentation for Real-World Signal Processing Applications (GenDA), April 2025.
-
Software & Data Downloads
-
- MEL-PETs Joint-Context Attack for LLM Privacy Challenge
- MEL-PETs Defense for LLM Privacy Challenge
- Learned Born Operator for Reflection Tomographic Imaging
- Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization
- Self-Monitored Inference-Time INtervention for Generative Music Transformers
- Transformer-based model with LOcal-modeling by COnvolution
- Sound Event Bounding Boxes
- Enhanced Reverberation as Supervision
- Gear Extensions of Neural Radiance Fields
- Long-Tailed Anomaly Detection Dataset
- Neural IIR Filter Field for HRTF Upsampling and Personalization
- Target-Speaker SEParation
- Pixel-Grounded Prototypical Part Networks
- Steered Diffusion
- Hyperbolic Audio Source Separation
- Simple Multimodal Algorithmic Reasoning Task Dataset
- Partial Group Convolutional Neural Networks
- SOurce-free Cross-modal KnowledgE Transfer
- Audio-Visual-Language Embodied Navigation in 3D Environments
- Nonparametric Score Estimators
- 3D MOrphable STyleGAN
- Instance Segmentation GAN
- Audio Visual Scene-Graph Segmentor
- Generalized One-class Discriminative Subspaces
- Goal directed RL with Safety Constraints
- Hierarchical Musical Instrument Separation
- Generating Visual Dynamics from Sound and Context
- Adversarially-Contrastive Optimal Transport
- Online Feature Extractor Network
- MotionNet
- FoldingNet++
- Quasi-Newton Trust Region Policy Optimization
- Landmarks’ Location, Uncertainty, and Visibility Likelihood
- Robust Iterative Data Estimation
- Gradient-based Nikaido-Isoda
- Discriminative Subspace Pooling