News & Events

Learn about the MERL Seminar Series.

  •  TALK    [MERL Seminar Series 2024] Chuchu Fan presents talk titled Neural Certificates and LLMs in Large-Scale Autonomy Design
    Date & Time: Wednesday, May 29, 2024; 12:00 PM
    Speaker: Chuchu Fan, MIT
    MERL Host: Abraham P. Vinod
    Research Areas: Artificial Intelligence, Control, Machine Learning
    • Learning-enabled control systems have demonstrated impressive empirical performance on challenging control problems in robotics. However, this performance often arrives with the trade-off of diminished transparency and the absence of guarantees regarding the safety and stability of the learned controllers. In recent years, new techniques have emerged to provide these guarantees by learning certificates alongside control policies — these certificates provide concise, data-driven proofs that guarantee the safety and stability of the learned control system. These methods not only allow the user to verify the safety of a learned controller but also provide supervision during training, allowing safety and stability requirements to influence the training process itself. In this talk, we present two exciting updates on neural certificates. In the first work, we explore the use of graph neural networks to learn collision-avoidance certificates that can generalize to unseen and very crowded environments. The second work presents a novel reinforcement learning approach that can produce certificate functions alongside the policies while addressing the instability issues in the optimization process. Finally, if time permits, I will also talk about my group's recent work using LLMs and domain-specific task and motion planners to allow natural language as input for robot planning.
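
      For context, certificates of this kind are typically neural networks trained to satisfy textbook Lyapunov or barrier conditions; the following is the standard general form (a sketch, not the specific conditions from the speaker's papers).

```latex
% Lyapunov (stability) conditions for closed-loop dynamics \dot{x} = f(x, \pi(x)):
V(x^\ast) = 0, \qquad V(x) > 0 \quad \forall x \neq x^\ast, \qquad
\nabla V(x)^\top f(x, \pi(x)) < 0 \quad \forall x \neq x^\ast
% Barrier (safety) condition, with safe set \{x : h(x) \ge 0\}:
\dot{h}(x) = \nabla h(x)^\top f(x, \pi(x)) \ge -\alpha(h(x))
% for some extended class-K function \alpha; training penalizes violations
% of these inequalities on sampled states.
```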
  •  TALK    [MERL Seminar Series 2024] Na Li presents talk titled Close the Loop: From Data to Actions in Complex Systems
    Date & Time: Wednesday, April 10, 2024; 12:00 PM
    Speaker: Na Li, Harvard University
    MERL Host: Yebin Wang
    Research Areas: Control, Dynamical Systems, Machine Learning
    • The explosive growth of machine learning and data-driven methodologies has revolutionized numerous fields. Yet, translating these successes to the domain of dynamical, physical systems remains a significant challenge, hindered by the complex and often unpredictable nature of such environments. Closing the loop from data to actions in these systems faces many difficulties, stemming from the need for sample efficiency and computational feasibility amidst intricate dynamics, along with many other requirements such as verifiability, robustness, and safety. In this talk, we bridge this gap by introducing innovative approaches that harness representation-based methods, domain knowledge, and the physical structures of systems. We present a comprehensive framework that integrates these components to develop reinforcement learning and control strategies that are not only tailored for the complexities of physical systems but also achieve efficiency, safety, and robustness with provable performance.
  •  TALK    [MERL Seminar Series 2024] Sanmi Koyejo presents talk titled Are Emergent Abilities of Large Language Models a Mirage?
    Date & Time: Wednesday, March 20, 2024; 1:00 PM
    Speaker: Sanmi Koyejo, Stanford University
    MERL Host: Jing Liu
    Research Areas: Artificial Intelligence, Machine Learning
    • Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales. Here, we present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher's choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance. We present our alternative explanation in a simple mathematical model. Via the presented analyses, we provide evidence that alleged emergent abilities evaporate with different metrics or with better statistics, and may not be a fundamental property of scaling AI models.
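
      The metric-choice argument can be made concrete with a toy example (our illustration, not from the talk): suppose per-token accuracy improves smoothly with model scale; an exact-match metric over a 10-token answer turns that smooth curve into an apparently abrupt "emergence."

```python
# Toy illustration: a smooth per-token accuracy curve (indexed by model
# scale) looks "emergent" under a nonlinear exact-match metric that
# requires all 10 answer tokens to be correct.
per_token_acc = [0.50, 0.70, 0.85, 0.93, 0.97, 0.99]  # smooth growth with scale
seq_len = 10

# Nonlinear metric: probability that an entire 10-token answer is correct.
exact_match = [p ** seq_len for p in per_token_acc]

for p, em in zip(per_token_acc, exact_match):
    print(f"per-token {p:.2f} -> exact-match {em:.4f}")
```

      Under the linear per-token metric, performance grows gradually; under exact match, it sits near zero and then appears to jump, even though the underlying model improves smoothly.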
  •  EVENT    MERL Contributes to ICASSP 2024
    Date: Sunday, April 14, 2024 - Friday, April 19, 2024
    Location: Seoul, South Korea
    MERL Contacts: Petros T. Boufounos; François Germain; Chiori Hori; Sameer Khurana; Toshiaki Koike-Akino; Jonathan Le Roux; Hassan Mansour; Kieran Parsons; Joshua Rapp; Anthony Vetro; Pu (Perry) Wang; Gordon Wichern
    Research Areas: Artificial Intelligence, Computational Sensing, Machine Learning, Robotics, Signal Processing, Speech & Audio
    • MERL has made numerous contributions to both the organization and technical program of ICASSP 2024, which is being held in Seoul, Korea from April 14-19, 2024.

      Sponsorship and Awards

      MERL is proud to be a Bronze Patron of the conference and will participate in the student job fair on Thursday, April 18. Please join this session to learn more about employment opportunities at MERL, including openings for research scientists, post-docs, and interns.

      MERL is pleased to be the sponsor of two IEEE Awards that will be presented at the conference. We congratulate Prof. Stéphane G. Mallat, the recipient of the 2024 IEEE Fourier Award for Signal Processing, and Prof. Keiichi Tokuda, the recipient of the 2024 IEEE James L. Flanagan Speech and Audio Processing Award.

      Jonathan Le Roux, MERL Speech and Audio Senior Team Leader, will also be recognized during the Awards Ceremony for his recent elevation to IEEE Fellow.

      Technical Program

      MERL will present 13 papers in the main conference on a wide range of topics including automated audio captioning, speech separation, audio generative models, speech and sound synthesis, spatial audio reproduction, multimodal indoor monitoring, radar imaging, depth estimation, physics-informed machine learning, and integrated sensing and communications (ISAC). Three workshop papers have also been accepted for presentation on audio-visual speaker diarization, music source separation, and music generative models.

      Perry Wang is the co-organizer of the Workshop on Signal Processing and Machine Learning Advances in Automotive Radars (SPLAR), held on Sunday, April 14. It features keynote talks from leaders in both academia and industry, peer-reviewed workshop papers, and lightning talks from ICASSP regular tracks on signal processing and machine learning for automotive radar and, more generally, radar perception.

      Gordon Wichern will present an invited keynote talk on analyzing and interpreting audio deep learning models at the Workshop on Explainable Machine Learning for Speech and Audio (XAI-SA), held on Monday, April 15. He will also appear in a panel discussion on interpretable audio AI at the workshop.

      Perry Wang also co-organizes a two-part special session on Next-Generation Wi-Fi Sensing (SS-L9 and SS-L13) which will be held on Thursday afternoon, April 18. The special session includes papers on PHY-layer oriented signal processing and data-driven deep learning advances, and supports upcoming 802.11bf WLAN Sensing Standardization activities.

      Petros Boufounos is participating as a mentor in ICASSP’s Micro-Mentoring Experience Program (MiME).

      About ICASSP

      ICASSP is the flagship conference of the IEEE Signal Processing Society, and the world's largest and most comprehensive technical conference focused on the research advances and latest technological development in signal and information processing. The event attracts more than 3000 participants.
  •  TALK    [MERL Seminar Series 2024] Stefanos Nikolaidis presents talk titled Enhancing the Efficiency and Robustness of Human-Robot Interactions
    Date & Time: Friday, March 8, 2024; 1:00 PM
    Speaker: Stefanos Nikolaidis, University of Southern California
    MERL Host: Siddarth Jain
    Research Areas: Machine Learning, Robotics, Human-Computer Interaction
    • While robots have been successfully deployed in factory floors and warehouses, there has been limited progress in having them perform physical tasks with people at home and in the workplace. I aim to bridge the gap between their current performance in human environments and what robots are capable of doing, by making human-robot interactions efficient and robust.

      In the first part of my talk, I discuss enhancing the efficiency of human-robot interactions by enabling robot manipulators to infer the preference of a human teammate and proactively assist them in a collaborative task. I show how we can leverage similarities between different users and tasks to learn compact representations of user preferences and use these representations as priors for efficient inference.

      In the second part, I talk about enhancing the robustness of human-robot interactions by algorithmically generating diverse and realistic scenarios in simulation that reveal system failures. I propose formulating the problem of algorithmic scenario generation as a quality diversity problem and show how standard quality diversity algorithms can discover surprising and unexpected failure cases. I then discuss the development of a new class of quality diversity algorithms that significantly improve the search of the scenario space and the integration of these algorithms with generative models, which enables the generation of complex and realistic scenarios.

      Finally, I conclude the talk with applications in mining operations, collaborative manufacturing and assistive care.
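
      The quality-diversity formulation of scenario generation can be sketched with a minimal MAP-Elites loop (a generic QD algorithm; the 2-D scenario parameterization, behavior descriptors, and synthetic "failure score" below are stand-ins, not the speaker's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
bins = 10
archive = {}  # behavior-descriptor cell -> (quality, scenario)

def evaluate(scenario):
    # Stand-in for running the human-robot simulation and scoring failure;
    # the scenario parameters themselves serve as behavior descriptors.
    quality = -np.sum((scenario - 0.5) ** 2)
    return quality, (scenario[0], scenario[1])

for _ in range(2000):
    if archive and rng.random() < 0.8:
        # Mutate a random elite from the archive.
        parent = archive[list(archive)[rng.integers(len(archive))]][1]
        scenario = np.clip(parent + 0.1 * rng.standard_normal(2), 0.0, 1.0)
    else:
        scenario = rng.random(2)  # occasional random restart
    quality, desc = evaluate(scenario)
    cell = tuple(min(int(d * bins), bins - 1) for d in desc)
    # Keep only the best (most failure-inducing) scenario per cell.
    if cell not in archive or quality > archive[cell][0]:
        archive[cell] = (quality, scenario)

print(len(archive), "distinct scenario types discovered")
```

      Unlike a pure optimizer, which would collapse onto a single worst case, the archive retains the best scenario per descriptor cell, yielding a diverse set of qualitatively different failure modes.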
  •  TALK    [MERL Seminar Series 2024] Melanie Mitchell presents talk titled "The Debate Over 'Understanding' in AI's Large Language Models"
    Date & Time: Tuesday, February 13, 2024; 1:00 PM
    Speaker: Melanie Mitchell, Santa Fe Institute
    MERL Host: Suhas Lohit
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Human-Computer Interaction
    • I will survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to "understand" language -- and the physical and social situations language encodes -- in any important sense. I will describe arguments that have been made for and against such understanding, and, more generally, will discuss what methods can be used to fairly evaluate understanding and intelligence in AI systems. I will conclude with key questions for the broader sciences of intelligence that have arisen in light of these discussions.
  •  TALK    [MERL Seminar Series 2024] Greta Tuckute presents talk titled Computational models of human auditory and language processing
    Date & Time: Wednesday, January 31, 2024; 12:00 PM
    Speaker: Greta Tuckute, MIT
    MERL Host: Sameer Khurana
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    • Advances in machine learning have led to powerful models for audio and language, proficient in tasks like speech recognition and fluent language generation. Beyond their immense utility in engineering applications, these models offer valuable tools for cognitive science and neuroscience. In this talk, I will demonstrate how these artificial neural network models can be used to understand how the human brain processes language. The first part of the talk will cover how audio neural networks serve as computational accounts for brain activity in the auditory cortex. The second part will focus on the use of large language models, such as those in the GPT family, to non-invasively control brain activity in the human language system.
  •  TALK    [MERL Seminar Series 2023] Dr. Kristina Monakhova presents talk titled Robust and Physics-informed machine learning for low light imaging
    Date & Time: Tuesday, November 28, 2023; 12:00 PM
    Speaker: Kristina Monakhova, MIT and Cornell
    MERL Host: Joshua Rapp
    Research Areas: Computational Sensing, Computer Vision, Machine Learning, Signal Processing
    • Imaging in low light settings is extremely challenging due to low photon counts, both in photography and in microscopy. In photography, imaging under low light, high gain settings often results in highly structured, non-Gaussian sensor noise that’s hard to characterize or denoise. In this talk, we address this by developing a GAN-tuned physics-based noise model to more accurately represent camera noise at the lowest light, and highest gain settings. Using this noise model, we train a video denoiser using synthetic data and demonstrate photorealistic videography at starlight (submillilux levels of illumination) for the first time.

      For multiphoton microscopy, which is a form of scanning microscopy, there’s a trade-off between field of view, phototoxicity, acquisition time, and image quality, often resulting in noisy measurements. While deep learning-based methods have shown compelling denoising performance, can we trust these methods enough for critical scientific and medical applications? In the second part of this talk, I’ll introduce a learned, distribution-free uncertainty quantification technique that can both denoise and predict pixel-wise uncertainty to gauge how much we can trust our denoiser’s performance. Furthermore, we propose to leverage this learned, pixel-wise uncertainty to drive an adaptive acquisition technique that rescans only the most uncertain regions of a sample. With our sample- and algorithm-informed adaptive acquisition, we demonstrate a 120X improvement in total scanning time and total light dose for multiphoton microscopy, while successfully recovering fine structures within the sample.
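
      The uncertainty-driven acquisition step can be sketched in a few lines (our illustration with a synthetic uncertainty map; the actual method selects whole scan regions of a real sample):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the denoiser's predicted pixel-wise uncertainty over a sample.
uncertainty = rng.random((64, 64))

# Rescan only the most uncertain 10% of pixels (per-pixel variant; the
# method described above operates on scan regions).
budget = 0.10
thresh = np.quantile(uncertainty, 1.0 - budget)
rescan_mask = uncertainty >= thresh

print(f"rescanning {rescan_mask.mean():.1%} of the sample")
```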
  •  TALK    [MERL Seminar Series 2023] Prof. Flavio Calmon presents talk titled Multiplicity in Machine Learning
    Date & Time: Tuesday, November 7, 2023; 12:00 PM
    Speaker: Flavio Calmon, Harvard University
    MERL Host: Ye Wang
    Research Areas: Artificial Intelligence, Machine Learning
    • This talk reviews the concept of predictive multiplicity in machine learning. Predictive multiplicity arises when different classifiers achieve similar average performance for a specific learning task yet produce conflicting predictions for individual samples. We discuss a metric called “Rashomon Capacity” for quantifying predictive multiplicity in multi-class classification. We also present recent findings on the multiplicity cost of differentially private training methods and group fairness interventions in machine learning.

      This talk is based on work published at ICML'20, NeurIPS'22, ACM FAccT'23, and NeurIPS'23.
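
      A simple quantity related to predictive multiplicity is easy to compute: the fraction of samples on which two near-optimal models disagree, often called ambiguity (Rashomon Capacity itself is a more refined, information-theoretic measure). A toy sketch with synthetic labels:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, size=n)  # synthetic ground-truth labels

# Two hypothetical classifiers with identical accuracy (0.90) that err on
# different samples.
pred_a, pred_b = y.copy(), y.copy()
err_a = rng.choice(n, size=100, replace=False)
err_b = rng.choice(n, size=100, replace=False)
pred_a[err_a] = 1 - pred_a[err_a]
pred_b[err_b] = 1 - pred_b[err_b]

acc_a = (pred_a == y).mean()
acc_b = (pred_b == y).mean()
ambiguity = (pred_a != pred_b).mean()  # fraction of conflicting predictions
print(f"accuracy A={acc_a:.2f}, B={acc_b:.2f}, ambiguity={ambiguity:.2f}")
```

      Both models report the same average performance, yet they issue conflicting predictions on a nontrivial fraction of individual samples, which is exactly the phenomenon the talk examines.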
  •  TALK    [MERL Seminar Series 2023] Dr. Tanmay Gupta presents talk titled Visual Programming - A compositional approach to building General Purpose Vision Systems
    Date & Time: Tuesday, October 31, 2023; 2:00 PM
    Speaker: Tanmay Gupta, Allen Institute for Artificial Intelligence
    MERL Host: Moitreya Chatterjee
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    • Building General Purpose Vision Systems (GPVs) that can perform a huge variety of tasks has been a long-standing goal for the computer vision community. However, end-to-end training of these systems to handle different modalities and tasks has proven to be extremely challenging. In this talk, I will describe a compelling neuro-symbolic alternative to the common end-to-end learning paradigm called Visual Programming. Visual Programming is a general framework that leverages the code-generation abilities of LLMs, existing neural models, and non-differentiable programs to enable powerful applications. Some of these applications continue to remain elusive for the current generation of end-to-end trained GPVs.
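
      The core mechanic can be sketched in miniature (our illustration, not the speaker's actual API): an LLM emits a short program over named modules, and a simple interpreter executes it step by step, threading intermediate results through an environment.

```python
# Stand-in modules: in a real system FILTER would wrap a neural detector's
# output and COUNT a non-differentiable program.
def filter_by(objects, color):
    return [o for o in objects if o["color"] == color]

def count(objects):
    return len(objects)

MODULES = {"FILTER": filter_by, "COUNT": count}

# A program such as an LLM might generate for "how many red objects?":
# each step names its output, a module, and arguments (names or literals).
program = [
    ("objs_red", "FILTER", ("objs", "red")),
    ("answer", "COUNT", ("objs_red",)),
]

def run(program, env):
    for out_name, module, args in program:
        resolved = [env.get(a, a) for a in args]  # look up names, pass literals
        env[out_name] = MODULES[module](*resolved)
    return env

env = run(program, {"objs": [{"color": "red"}, {"color": "blue"}, {"color": "red"}]})
print(env["answer"])  # 2
```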
  •  TALK    [MERL Seminar Series 2023] Prof. Komei Sugiura presents talk titled The Confluence of Vision, Language, and Robotics
    Date & Time: Thursday, September 28, 2023; 12:00 PM
    Speaker: Komei Sugiura, Keio University
    MERL Host: Chiori Hori
    Research Areas: Artificial Intelligence, Machine Learning, Robotics, Speech & Audio
    • Recent advances in multimodal models that fuse vision and language are revolutionizing robotics. In this lecture, I will begin by introducing recent multimodal foundation models and their applications in robotics. The second topic of this talk will address our recent work on multimodal language processing in robotics. The shortage of home care workers has become a pressing societal issue, and the use of domestic service robots (DSRs) to assist individuals with disabilities is seen as a possible solution. I will present our work on DSRs that are capable of open-vocabulary mobile manipulation, referring expression comprehension and segmentation models for everyday objects, and future captioning methods for cooking videos and DSRs.
  •  TALK    [MERL Seminar Series 2023] Prof. Faruque Hasan presents talk titled A Process Systems Engineering Perspective on Carbon Capture: Key Challenges and Opportunities
    Date & Time: Tuesday, September 19, 2023; 1:00 PM
    Speaker: Faruque Hasan, Texas A&M University
    MERL Host: Scott A. Bortoff
    Research Areas: Applied Physics, Machine Learning, Multi-Physical Modeling, Optimization
    • Carbon capture, utilization, and storage (CCUS) is a promising pathway to decarbonize fossil-based power and industrial sectors and is a bridging technology for a sustainable transition to a net-zero emission energy future. This talk aims to provide an overview of design and optimization of CCUS systems. I will also attempt to give a brief perspective on emerging interests in process systems engineering research (e.g., systems integration, multiscale modeling, strategic planning, and optimization under uncertainty). The purpose is not to cover all aspects of PSE research for CCUS but rather to foster discussion by presenting some plausible future directions and ideas.
  •  EVENT    MERL Contributes to ICASSP 2023
    Date: Sunday, June 4, 2023 - Saturday, June 10, 2023
    Location: Rhodes Island, Greece
    MERL Contacts: Petros T. Boufounos; François Germain; Toshiaki Koike-Akino; Jonathan Le Roux; Dehong Liu; Suhas Lohit; Yanting Ma; Hassan Mansour; Joshua Rapp; Anthony Vetro; Pu (Perry) Wang; Gordon Wichern
    Research Areas: Artificial Intelligence, Computational Sensing, Machine Learning, Signal Processing, Speech & Audio
    • MERL has made numerous contributions to both the organization and technical program of ICASSP 2023, which is being held in Rhodes Island, Greece from June 4-10, 2023.


      Petros Boufounos is serving as General Co-Chair of the conference this year, where he has been involved in all aspects of conference planning and execution.

      Perry Wang is the organizer of a special session on Radar-Assisted Perception (RAP), which will be held on Wednesday, June 7. The session will feature talks on signal processing and deep learning for radar perception, pose estimation, and mutual interference mitigation with speakers from both academia (Carnegie Mellon University, Virginia Tech, University of Illinois Urbana-Champaign) and industry (Mitsubishi Electric, Bosch, Waveye).

      Anthony Vetro is the co-organizer of the Workshop on Signal Processing for Autonomous Systems (SPAS), which will be held on Monday, June 5, and feature invited talks from leaders in both academia and industry on timely topics related to autonomous systems.


      MERL is proud to be a Silver Patron of the conference and will participate in the student job fair on Thursday, June 8. Please join this session to learn more about employment opportunities at MERL, including openings for research scientists, post-docs, and interns.

      MERL is pleased to be the sponsor of two IEEE Awards that will be presented at the conference. We congratulate Prof. Rabab Ward, the recipient of the 2023 IEEE Fourier Award for Signal Processing, and Prof. Alexander Waibel, the recipient of the 2023 IEEE James L. Flanagan Speech and Audio Processing Award.

      Technical Program

      MERL is presenting 13 papers in the main conference on a wide range of topics including source separation and speech enhancement, radar imaging, depth estimation, motor fault detection, time series recovery, and point clouds. One workshop paper has also been accepted for presentation on self-supervised music source separation.

      Perry Wang has been invited to give a keynote talk on Wi-Fi sensing and related standards activities at the Workshop on Integrated Sensing and Communications (ISAC), which will be held on Sunday, June 4.

      Additionally, Anthony Vetro will present a Perspective Talk on Physics-Grounded Machine Learning, which is scheduled for Thursday, June 8.

      About ICASSP

      ICASSP is the flagship conference of the IEEE Signal Processing Society, and the world's largest and most comprehensive technical conference focused on the research advances and latest technological development in signal and information processing. The event attracts more than 2000 participants each year.
  •  TALK    [MERL Seminar Series 2023] Prof. Dan Stowell presents talk titled Fine-grained wildlife sound recognition: Towards the accuracy of a naturalist
    Date & Time: Tuesday, April 25, 2023; 11:00 AM
    Speaker: Dan Stowell, Tilburg University / Naturalis Biodiversity Centre
    MERL Host: Gordon Wichern
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    • Machine learning can be used to identify animals from their sound. This could be a valuable tool for biodiversity monitoring, and for understanding animal behaviour and communication. But to get there, we need very high accuracy at fine-grained acoustic distinctions across hundreds of categories in diverse conditions. In our group we are studying how to achieve this at continental scale. I will describe aspects of bioacoustic data that challenge even the latest deep learning workflows, and our work to address this. Methods covered include adaptive feature representations, deep embeddings and few-shot learning.
  •  TALK    [MERL Seminar Series 2023] Dr. Michael Muehlebach presents talk titled Learning and Dynamical Systems
    Date & Time: Tuesday, April 11, 2023; 11:00 AM
    Speaker: Michael Muehlebach, Max Planck Institute for Intelligent Systems
    Research Areas: Control, Dynamical Systems, Machine Learning, Optimization, Robotics
    • The talk will be divided into two parts. The first part of the talk introduces a class of first-order methods for constrained optimization that are based on an analogy to non-smooth dynamical systems. The key underlying idea is to express constraints in terms of velocities instead of positions, which has the algorithmic consequence that optimizations over feasible sets at each iteration are replaced with optimizations over local, sparse convex approximations. This results in a simplified suite of algorithms and an expanded range of possible applications in machine learning. In the second part of my talk, I will present a robot learning algorithm for trajectory tracking. The method incorporates prior knowledge about the system dynamics and, by optimizing over feedforward actions, mitigates the risk of instability during deployment. The algorithm will be evaluated on a ping-pong playing robot that is actuated by soft pneumatic muscles.
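
      One way to write down the velocity-based idea (our paraphrase of the general construction, not the exact formulation from the talk): for a feasible set defined by inequality constraints, the update direction rather than the iterate is constrained.

```latex
% Feasible set C = \{x : g_i(x) \le 0,\ i = 1,\dots,m\}. Instead of
% projecting iterates onto C, constrain the search direction v_k at x_k to
V_\alpha(x_k) = \{\, v : \nabla g_i(x_k)^\top v \le -\alpha\, g_i(x_k),
                   \quad i \in I(x_k) \,\}
% where I(x_k) indexes the (nearly) active constraints, and update
x_{k+1} = x_k + \gamma\, v_k, \qquad
v_k = \arg\min_{v \in V_\alpha(x_k)} \lVert v + \nabla f(x_k) \rVert^2 .
% Each step solves a small convex program over a sparse local approximation
% of C rather than computing a projection onto C itself.
```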
  •  TALK    [MERL Seminar Series 2023] Prof. Zoltan Nagy presents talk titled Investigating Multi-Agent Reinforcement Learning for Grid-Interactive Smart Communities using CityLearn
    Date & Time: Wednesday, March 29, 2023; 1:00 PM
    Speaker: Zoltan Nagy, The University of Texas at Austin
    MERL Host: Ankush Chakrabarty
    Research Areas: Control, Machine Learning, Multi-Physical Modeling
    • The decarbonization of buildings presents new challenges for the reliability of the electrical grid because of the intermittency of renewable energy sources and increase in grid load brought about by end-use electrification. To restore reliability, grid-interactive efficient buildings can provide flexibility services to the grid through demand response. Residential demand response programs are hindered by the need for manual intervention by customers. To maximize the energy flexibility potential of residential buildings, an advanced control architecture is needed. Reinforcement learning is well-suited for the control of flexible resources as it can adapt to unique building characteristics compared to expert systems. Yet, factors hindering the adoption of RL in real-world applications include its large data requirements for training, control security and generalizability. This talk will cover some of our recent work addressing these challenges. We proposed the MERLIN framework and developed a digital twin of a real-world 17-building grid-interactive residential community in CityLearn. We show that 1) independent RL-controllers for batteries improve building and district level KPIs compared to a reference RBC by tailoring their policies to individual buildings, 2) despite unique occupant behaviors, transferring the RL policy of any one of the buildings to other buildings provides comparable performance while reducing the cost of training, 3) training RL-controllers on limited temporal data that does not capture full seasonality in occupant behavior has little effect on performance. Although the zero-net-energy (ZNE) condition of the buildings could be maintained or worsened by the controlled batteries, KPIs that are typically improved by the ZNE condition (electricity price and carbon emissions) are further improved when the batteries are managed by an advanced controller.
  •  TALK    [MERL Seminar Series 2023] Dr. Suraj Srinivas presents talk titled Pitfalls and Opportunities in Interpretable Machine Learning
    Date & Time: Tuesday, March 14, 2023; 1:00 PM
    Speaker: Suraj Srinivas, Harvard University
    MERL Host: Suhas Lohit
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    • In this talk, I will discuss our recent research on understanding post-hoc interpretability. I will begin by introducing a characterization of post-hoc interpretability methods as local function approximators, and the implications of this viewpoint, including a no-free-lunch theorem for explanations. Next, we shall challenge the assumption that post-hoc explanations provide information about a model's discriminative capabilities p(y|x) and instead demonstrate that many common methods instead rely on a conditional generative model p(x|y). This observation underscores the importance of being cautious when using such methods in practice. Finally, I will propose to resolve this via regularization of model structure, specifically by training low curvature neural networks, resulting in improved model robustness and stable gradients.
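
      The "local function approximator" view can be made concrete with a LIME-style sketch (a generic illustration, not code from the speaker's work): perturb the input, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as feature attributions.

```python
import numpy as np

def black_box(x):
    # Stand-in for a trained model's score for one class.
    return np.tanh(2.0 * x[..., 0] - x[..., 1] ** 2)

def local_linear_explanation(f, x0, n_samples=500, sigma=0.1, seed=0):
    """Fit a weighted least-squares linear surrogate to f near x0; the
    coefficients act as per-feature attributions."""
    rng = np.random.default_rng(seed)
    X = x0 + sigma * rng.standard_normal((n_samples, x0.size))
    y = f(X)
    # Proximity weights: perturbations closer to x0 count more.
    w = np.exp(-((X - x0) ** 2).sum(axis=1) / (2 * sigma ** 2))
    A = np.hstack([X - x0, np.ones((n_samples, 1))])  # linear terms + intercept
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # drop the intercept

attr = local_linear_explanation(black_box, np.array([0.5, 0.5]))
print(attr)  # roughly the local gradient of the model at x0
```

      Viewed this way, many post-hoc methods differ mainly in their choice of perturbation distribution, proximity weighting, and surrogate family, which is what enables the unified analysis (and the no-free-lunch result) mentioned above.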
  •  TALK    [MERL Seminar Series 2023] Prof. Shaowu Pan presents talk titled Neural Implicit Flow
    Date & Time: Wednesday, March 1, 2023; 1:00 PM
    Speaker: Shaowu Pan, Rensselaer Polytechnic Institute
    MERL Host: Saviz Mowlavi
    Research Areas: Computational Sensing, Data Analytics, Machine Learning
    • High-dimensional spatio-temporal dynamics can often be encoded in a low-dimensional subspace. Engineering applications for modeling, characterization, design, and control of such large-scale systems often rely on dimensionality reduction to make solutions computationally tractable in real-time. Common existing paradigms for dimensionality reduction include linear methods, such as the singular value decomposition (SVD), and nonlinear methods, such as variants of convolutional autoencoders (CAE). However, these encoding techniques lack the ability to efficiently represent the complexity associated with spatio-temporal data, which often requires variable geometry, non-uniform grid resolution, adaptive meshing, and/or parametric dependencies. To resolve these practical engineering challenges, we propose a general framework called Neural Implicit Flow (NIF) that enables a mesh-agnostic, low-rank representation of large-scale, parametric, spatio-temporal data. NIF consists of two modified multilayer perceptrons (MLPs): (i) ShapeNet, which isolates and represents the spatial complexity, and (ii) ParameterNet, which accounts for any other input complexity, including parametric dependencies, time, and sensor measurements. We demonstrate the utility of NIF for parametric surrogate modeling, enabling the interpretable representation and compression of complex spatio-temporal dynamics, efficient many-spatial-query tasks, and improved generalization performance for sparse reconstruction.
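
      The ShapeNet/ParameterNet coupling is hypernetwork-style: one MLP emits the weights of another. A minimal numpy sketch of that coupling (our illustration with tiny untrained networks; the paper's actual architectures and sizes differ):

```python
import numpy as np

def mlp(params, x):
    """Tiny MLP forward pass; params is a list of (W, b) pairs."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

rng = np.random.default_rng(0)

# ShapeNet maps a spatial coordinate x -> field value u(x); its weights are
# NOT free parameters but are produced by ParameterNet.
shape_sizes = [(1, 16), (16, 1)]
n_shape_params = sum(i * o + o for i, o in shape_sizes)

# ParameterNet maps (time, parameter mu) -> all ShapeNet weights.
param_net = [(0.1 * rng.standard_normal((2, 32)), np.zeros(32)),
             (0.1 * rng.standard_normal((32, n_shape_params)),
              np.zeros(n_shape_params))]

def nif_forward(x, t_mu):
    theta = mlp(param_net, t_mu)  # ShapeNet weights for this (t, mu)
    params, i = [], 0
    for fan_in, fan_out in shape_sizes:
        W = theta[i:i + fan_in * fan_out].reshape(fan_in, fan_out)
        i += fan_in * fan_out
        params.append((W, theta[i:i + fan_out]))
        i += fan_out
    return mlp(params, x)  # u(x; t, mu), evaluable at ANY x (mesh-agnostic)

u = nif_forward(np.array([[0.3]]), np.array([0.0, 1.0]))
```

      Because ShapeNet takes raw coordinates as input, the representation is decoupled from any particular grid, which is what makes the framework mesh-agnostic.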
  •  TALK    Prof. Kevin Lynch presents talk titled Autonomous and Human-Collaborative Robotic Manipulation
    Date & Time: Tuesday, February 28, 2023; 12:00 PM
    Speaker: Prof. Kevin Lynch, Northwestern University
    MERL Host: Diego Romeres
    Research Areas: Machine Learning, Robotics
    • Research at the Center for Robotics and Biosystems at Northwestern University includes bio-inspiration, neuromechanics, human-machine systems, and swarm robotics, among other topics. In this talk I will focus on our work on manipulation, including autonomous in-hand robotic manipulation and safe, intuitive human-collaborative manipulation among one or more humans and a team of mobile manipulators.
  •  EVENT    MERL's Virtual Open House 2022
    Date & Time: Monday, December 12, 2022; 1:00pm-5:30pm ET
    Location: Mitsubishi Electric Research Laboratories (MERL)/Virtual
    Research Areas: Applied Physics, Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Electric Systems, Electronic and Photonic Devices, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio, Digital Video
    • Join MERL's virtual open house on December 12th, 2022! Featuring a keynote, live sessions, research area booths, and opportunities to interact with our research team. Discover who we are and what we do, and learn about internship and employment opportunities.
  •  TALK    [MERL Seminar Series 2022] Prof. Jiajun Wu presents talk titled Understanding the Visual World Through Naturally Supervised Code
    Date & Time: Tuesday, November 1, 2022; 1:00 PM
    Speaker: Jiajun Wu, Stanford University
    MERL Host: Anoop Cherian
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    • The visual world has its inherent structure: scenes are made of multiple identical objects; different objects may have the same color or material, with a regular layout; each object can be symmetric and have repetitive parts. How can we infer, represent, and use such structure from raw data, without hampering the expressiveness of neural networks? In this talk, I will demonstrate that such structure, or code, can be learned from natural supervision. Here, natural supervision can be from pixels, where neuro-symbolic methods automatically discover repetitive parts and objects for scene synthesis. It can also be from objects, where humans during fabrication introduce priors that can be leveraged by machines to infer regular intrinsics such as texture and material. When solving these problems, structured representations and neural nets play complementary roles: it is more data-efficient to learn with structured representations, and they generalize better to new scenarios with robustly captured high-level information; neural nets effectively extract complex, low-level features from cluttered and noisy visual data.
  •  TALK    A Tunable Control/Learning Framework for Autonomous Systems
    Date & Time: Thursday, October 13, 2022; 1:30 PM - 2:30 PM
    Speaker: Prof. Shaoshuai Mou, Purdue University
    MERL Host: Yebin Wang
    Research Areas: Control, Machine Learning, Optimization
    • Modern society relies increasingly on engineering advances in autonomous systems, ranging from individual systems (such as a robotic arm for manufacturing, a self-driving car, or an autonomous vehicle for planetary exploration) to cooperative systems (such as a human-robot team or a swarm of drones). In this talk we will present our most recent progress in developing a fundamental framework for learning and control in autonomous systems. The framework is based on differentiating Pontryagin’s Maximum Principle and provides a unified solution to three classes of learning/control tasks: adaptive autonomy, inverse optimization, and system identification. We will also present applications of this framework to human-autonomy teaming, especially in enabling an autonomous system to take guidance from human operators, which is usually sparse and vague.
  •  EVENT    SANE 2022 - Speech and Audio in the Northeast
    Date: Thursday, October 6, 2022
    Location: Kendall Square, Cambridge, MA
    MERL Contacts: Anoop Cherian; Jonathan Le Roux
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
    • SANE 2022, a one-day event gathering researchers and students in speech and audio from the Northeast of the American continent, was held on Thursday October 6, 2022 in Kendall Square, Cambridge, MA.

      It was the 9th edition in the SANE series of workshops, which started in 2012 and was held every year, alternating between Boston and New York, until 2019. Since the first edition, the audience had grown to a record 200 participants and 45 posters in 2019. After a two-year hiatus due to the pandemic, SANE returned with an in-person gathering of 140 students and researchers.

      SANE 2022 featured invited talks by seven leading researchers from the Northeast: Rupal Patel (Northeastern/VocaliD), Wei-Ning Hsu (Meta FAIR), Scott Wisdom (Google), Tara Sainath (Google), Shinji Watanabe (CMU), Anoop Cherian (MERL), and Chuang Gan (UMass Amherst/MIT-IBM Watson AI Lab). It also featured a lively poster session with 29 posters.

      SANE 2022 was co-organized by Jonathan Le Roux (MERL), Arnab Ghoshal (Apple), John Hershey (Google), and Shinji Watanabe (CMU). SANE remained a free event thanks to generous sponsorship by Bose, Google, MERL, and Microsoft.

      Slides and videos of the talks will be released on the SANE workshop website.
  •  TALK    [MERL Seminar Series 2022] Prof. Chuang Gan presents talk titled Learning to Perceive Physical Scenes from Multi-Sensory Data
    Date & Time: Tuesday, September 6, 2022; 12:00 PM EDT
    Speaker: Chuang Gan, UMass Amherst & MIT-IBM Watson AI Lab
    MERL Host: Jonathan Le Roux
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
    • Human sensory perception of the physical world is rich and multimodal and can flexibly integrate input from all five sensory modalities -- vision, touch, smell, hearing, and taste. In AI, however, attention has primarily focused on visual perception. In this talk, I will introduce my efforts in connecting vision with sound, which will allow machine perception systems to see objects and infer physics from multi-sensory data. In the first part of my talk, I will introduce a self-supervised approach that can learn to parse images and separate sound sources by watching and listening to unlabeled videos, without requiring additional manual supervision. In the second part, I will show how we may further infer the underlying causal structure in 3D environments through visual and auditory observations. This enables agents to seek the source of a repeating environmental sound (e.g., an alarm) or to identify what object has fallen, and where, from an intermittent impact sound.
  •  TALK    [MERL Seminar Series 2022] Prof. Vincent Sitzmann presents talk titled Self-Supervised Scene Representation Learning
    Date & Time: Wednesday, March 30, 2022; 11:00 AM EDT
    Speaker: Vincent Sitzmann, MIT
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    • Given only a single picture, people are capable of inferring a mental representation that encodes rich information about the underlying 3D scene. We acquire this skill not through massive labeled datasets of 3D scenes, but through self-supervised observation and interaction. Building machines that can infer similarly rich neural scene representations is critical if they are to one day parallel people’s ability to understand, navigate, and interact with their surroundings. This poses a unique set of challenges that sets neural scene representations apart from conventional representations of 3D scenes: Rendering and processing operations need to be differentiable, and the type of information they encode is unknown a priori, requiring them to be extraordinarily flexible. At the same time, training them without ground-truth 3D supervision is an underdetermined problem, highlighting the need for structure and inductive biases without which models converge to spurious explanations.

      I will demonstrate how we can equip neural networks with inductive biases that enable them to learn 3D geometry, appearance, and even semantic information, self-supervised only from posed images. I will show how this approach unlocks the learning of priors, enabling 3D reconstruction from only a single posed 2D image, and how we may extend these representations to other modalities such as sound. I will then discuss recent work on learning the neural rendering operator to make rendering and training fast, and how this speed-up enables us to learn object-centric neural scene representations, learning to decompose 3D scenes into objects, given only images. Finally, I will talk about a recent application of self-supervised scene representation learning in robotic manipulation, where it enables us to learn to manipulate classes of objects in unseen poses from only a handful of human demonstrations.