News & Events



  •  EVENT    SANE 2022 - Speech and Audio in the Northeast
    Date: Thursday, October 6, 2022
    Location: Kendall Square, Cambridge, MA
    MERL Contacts: Anoop Cherian; Jonathan Le Roux
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
    Brief
    • SANE 2022, a one-day event gathering researchers and students in speech and audio from the Northeast of the American continent, was held on Thursday, October 6, 2022, in Kendall Square, Cambridge, MA.

      It was the 9th edition in the SANE series of workshops, which started in 2012 and was held every year, alternating between Boston and New York, until 2019. Attendance has grown steadily since the first edition, reaching a record 200 participants and 45 posters in 2019. After a two-year hiatus due to the pandemic, SANE returned with an in-person gathering of 140 students and researchers.

      SANE 2022 featured invited talks by seven leading researchers from the Northeast: Rupal Patel (Northeastern/VocaliD), Wei-Ning Hsu (Meta FAIR), Scott Wisdom (Google), Tara Sainath (Google), Shinji Watanabe (CMU), Anoop Cherian (MERL), and Chuang Gan (UMass Amherst/MIT-IBM Watson AI Lab). It also featured a lively poster session with 29 posters.

      SANE 2022 was co-organized by Jonathan Le Roux (MERL), Arnab Ghoshal (Apple), John Hershey (Google), and Shinji Watanabe (CMU). SANE remained a free event thanks to generous sponsorship by Bose, Google, MERL, and Microsoft.

      Slides and videos of the talks will be released on the SANE workshop website.
  •  NEWS    MERL launches Postdoctoral Research Fellow program
    Date: September 21, 2022
    MERL Contacts: Philip V. Orlik; Anthony Vetro
    Research Areas: Applied Physics, Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Electric Systems, Electronic and Photonic Devices, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio
    Brief
    • Mitsubishi Electric Research Laboratories (MERL) invites qualified postdoctoral candidates to apply for the position of Postdoctoral Research Fellow. This position provides early career scientists the opportunity to work at a unique, academically-oriented industrial research laboratory. Successful candidates will be expected to define and pursue their own original research agenda, explore connections to established laboratory initiatives, and publish high impact articles in leading venues. Please refer to our web page for further details.
  •  TALK    [MERL Seminar Series 2022] Prof. Chuang Gan presents talk titled Learning to Perceive Physical Scenes from Multi-Sensory Data
    Date & Time: Tuesday, September 6, 2022; 12:00 PM EDT
    Speaker: Chuang Gan, UMass Amherst & MIT-IBM Watson AI Lab
    MERL Host: Jonathan Le Roux
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
    Abstract
    • Human sensory perception of the physical world is rich and multimodal and can flexibly integrate input from all five sensory modalities -- vision, touch, smell, hearing, and taste. However, in AI, attention has primarily focused on visual perception. In this talk, I will introduce my efforts in connecting vision with sound, which will allow machine perception systems to see objects and infer physics from multi-sensory data. In the first part of my talk, I will introduce a self-supervised approach that can learn to parse images and separate the sound sources by watching and listening to unlabeled videos, without requiring additional manual supervision. In the second part of my talk, I will show how we may further infer the underlying causal structure in 3D environments through visual and auditory observations. This enables agents to seek the source of a repeating environmental sound (e.g., an alarm) or identify what object has fallen, and where, from an intermittent impact sound.
  •  NEWS    MERL congratulates Prof. Alex Waibel on receiving 2023 IEEE James L. Flanagan Speech and Audio Processing Award
    Date: August 22, 2022
    MERL Contacts: Chiori Hori; Jonathan Le Roux; Anthony Vetro
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Brief
    • IEEE has announced that the recipient of the 2023 IEEE James L. Flanagan Speech and Audio Processing Award will be Prof. Alex Waibel (CMU/Karlsruhe Institute of Technology), “For pioneering contributions to spoken language translation and supporting technologies.” Mitsubishi Electric Research Laboratories (MERL), which became the new sponsor of this prestigious award in 2022, extends its warmest congratulations to Prof. Waibel.

      MERL Senior Principal Research Scientist Dr. Chiori Hori, who worked with Dr. Waibel at Carnegie Mellon University and collaborated with him as part of national projects on speech summarization and translation, comments on his invaluable contributions to the field: “He has contributed not only to the invention of groundbreaking technology in speech and spoken language processing but also to the promotion of an abundance of research projects through international research consortiums by linking American, European, and Asian research communities. Many of his former laboratory members and collaborators are now leading R&D in the AI field.”

      The IEEE Board of Directors established the IEEE James L. Flanagan Speech and Audio Processing Award in 2002 for outstanding contributions to the advancement of speech and/or audio signal processing. This award has recognized the contributions of some of the most renowned pioneers and leaders in their respective fields. MERL is proud to support the recognition of outstanding contributions to the field of speech and audio processing through its sponsorship of this award.
  •  AWARD    ACM/IEEE Design Automation Conference 2022 Best Paper Award nominee
    Date: July 14, 2022
    Awarded to: Weidong Cao, Mouhacine Benosman, Xuan Zhang, and Rui Ma
    Research Area: Artificial Intelligence
    Brief
    • The conference committee of the 59th Design Automation Conference has chosen MERL's paper entitled 'Domain Knowledge-Infused Deep Learning for Automated Analog/RF Circuit Parameter Optimization' as a DAC Best Paper Award nominee. The committee evaluated both the manuscript and the submitted presentation recording, and chose MERL's paper as one of six nominees for this prestigious award. Decisions were based on the submissions’ innovation, impact, and exposition.
  •  AWARD    International Conference on Artificial Intelligence Circuits and Systems (AICAS) 2022 Openedges Award
    Date: June 15, 2022
    Awarded to: Yuxiang Sun, Mouhacine Benosman, Rui Ma
    Research Area: Artificial Intelligence
    Brief
    • The committee of the International Conference on Artificial Intelligence Circuits and Systems (AICAS) 2022 has selected MERL's paper entitled 'GaN Distributed RF Power Amplifier Automation Design with Deep Reinforcement Learning' as a winner of the AICAS 2022 Openedges Award.

      In this paper, MERL researchers propose a novel design automation methodology based on deep reinforcement learning (RL) for wide-band non-uniform distributed RF power amplifiers, which are known for their high-dimensional design challenges.
  •  NEWS    MERL researchers presented 5 papers and an invited workshop talk at ICRA 2022
    Date: May 23, 2022 - May 27, 2022
    Where: International Conference on Robotics and Automation (ICRA)
    MERL Contacts: Ankush Chakrabarty; Stefano Di Cairano; Siddarth Jain; Devesh K. Jha; Pedro Miraldo; Daniel N. Nikovski; Arvind Raghunathan; Diego Romeres; Abraham P. Vinod; Yebin Wang
    Research Areas: Artificial Intelligence, Machine Learning, Robotics
    Brief
    • MERL researchers presented 5 papers at the IEEE International Conference on Robotics and Automation (ICRA), held in Philadelphia from May 23-27, 2022. The papers covered a broad range of topics, including manipulation, tactile sensing, planning, and multi-agent control. The invited talk was presented in the "Workshop on Collaborative Robots and Work of the Future" and covered some of the work done by MERL researchers on collaborative robotic assembly. The workshop was co-organized by MERL, Mitsubishi Electric Automation's North America Development Center (NADC), and MIT.
  •  NEWS    MERL presenting 8 papers at ICASSP 2022
    Date: May 22, 2022 - May 27, 2022
    Where: Singapore
    MERL Contacts: Anoop Cherian; Chiori Hori; Toshiaki Koike-Akino; Jonathan Le Roux; Tim K. Marks; Philip V. Orlik; Kuan-Chuan Peng; Pu (Perry) Wang; Gordon Wichern
    Research Areas: Artificial Intelligence, Computer Vision, Signal Processing, Speech & Audio
    Brief
    • MERL researchers are presenting 8 papers at the IEEE International Conference on Acoustics, Speech & Signal Processing (ICASSP), which is being held in Singapore from May 22-27, 2022. A week of virtual presentations also took place earlier this month.

      Topics to be presented include recent advances in speech recognition, audio processing, scene understanding, computational sensing, and classification.

      ICASSP is the flagship conference of the IEEE Signal Processing Society, and the world's largest and most comprehensive technical conference focused on research advances and the latest technological developments in signal and information processing. The event attracts more than 2000 participants each year.
  •  NEWS    MERL Scientists Presenting 5 Papers at IEEE International Conference on Communications (ICC) 2022
    Date: May 16, 2022 - May 20, 2022
    Where: Seoul, Korea
    MERL Contacts: Jianlin Guo; Toshiaki Koike-Akino; Philip V. Orlik; Kieran Parsons; Pu (Perry) Wang; Ye Wang
    Research Areas: Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Machine Learning, Signal Processing
    Brief
    • MERL Connectivity & Information Processing Team scientists remotely presented 5 papers at the IEEE International Conference on Communications (ICC) 2022, held in Seoul, Korea, on May 16-20, 2022. Topics presented include recent advancements in communications technologies, deep learning methods, and quantum machine learning (QML). Presentation videos can also be found on our YouTube channel. In addition, K. J. Kim organized the "Industrial Private 5G-and-beyond Wireless Networks" workshop at the conference.

      IEEE ICC is one of the IEEE Communications Society’s two flagship conferences (ICC and Globecom). Each year, close to 2,000 attendees from over 70 countries attend IEEE ICC to take advantage of a program consisting of exciting keynote sessions, robust technical paper sessions, innovative tutorials and workshops, and engaging industry sessions. This 5-day event is known for bringing together audiences from both industry and academia to learn about the latest research and innovations in communications and networking technology, share ideas and best practices, and collaborate on future projects.
  •  NEWS    Arvind Raghunathan's publication is a Featured Article in the current issue of the INFORMS Journal on Computing
    Date: April 1, 2022
    Where: INFORMS Journal on Computing (https://pubsonline.informs.org/journal/ijoc)
    MERL Contact: Arvind Raghunathan
    Research Areas: Artificial Intelligence, Machine Learning, Optimization
    Brief
    • Arvind Raghunathan co-authored a publication titled "JANOS: An Integrated Predictive and Prescriptive Modeling Framework", which has been chosen as a Featured Article in the current issue of the INFORMS Journal on Computing. The article was co-authored with Prof. David Bergman, a MERL collaborator, and Teng Huang, a former MERL intern, among others.

      The paper describes a new software tool, JANOS, that integrates predictive modeling and discrete optimization to assist decision making. Specifically, the proposed solver takes as input user-specified pretrained predictive models and formulates optimization models directly over those predictive models, embedding them within an optimization model through linear transformations; a generic sketch of this embedding idea is shown below.
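
      As a rough illustration only (this is not the JANOS API; the PuLP modeling library, the coefficients, and the scholarship-style scenario below are hypothetical choices of ours), the sketch embeds a pretrained linear model in a small optimization problem by tying auxiliary prediction variables to the decision variables through the model's linear transformation.

        # Generic sketch: embed a pretrained linear predictive model inside an
        # optimization model as linear constraints (illustration only, not JANOS).
        import pulp

        # Hypothetical pretrained model: prob_i = a * merit_i + b * offer_i + c
        a, b, c = 0.02, 0.003, 0.1           # made-up learned coefficients
        merit = [3.2, 3.8, 2.9, 3.5]         # fixed features of 4 applicants
        budget = 20.0                        # total scholarship budget (in $1000s)

        model = pulp.LpProblem("embed_predictive_model", pulp.LpMaximize)
        offer = [pulp.LpVariable(f"offer_{i}", lowBound=0, upBound=10) for i in range(4)]
        pred = [pulp.LpVariable(f"pred_{i}", lowBound=0) for i in range(4)]

        # Embed the model: each prediction variable equals the model's linear
        # transformation of fixed features and decision variables.
        for i in range(4):
            model += pred[i] == a * merit[i] + b * offer[i] + c

        model += pulp.lpSum(offer) <= budget   # prescriptive (budget) constraint
        model += pulp.lpSum(pred)              # objective: maximize predicted outcomes

        model.solve(pulp.PULP_CBC_CMD(msg=False))
        print([v.value() for v in offer], pulp.value(model.objective))

      Once the model's outputs are expressed as constrained variables, the solver can reason over predictions and decisions jointly, which is the essence of an integrated predictive-and-prescriptive formulation.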
  •  NEWS    Toshiaki Koike-Akino gave an invited lecture to USPTO on advanced photonics
    Date: May 4, 2022
    MERL Contact: Toshiaki Koike-Akino
    Research Areas: Artificial Intelligence, Communications, Electronic and Photonic Devices, Machine Learning, Optimization, Signal Processing
    Brief
    • Toshiaki Koike-Akino gave an invited lecture on advanced photonic devices at the United States Patent and Trademark Office (USPTO) Technology Fair on May 4, 2022. Topics of the lecture included recent progress in applied artificial intelligence (AI) technologies for optical systems, nano-photonic devices, and quantum technology. During the 2-hour interactive online presentation, he lectured to more than 200 patent examiners.

      The USPTO Tech Fair organizer commented:
      "Thank you very much for representing Advanced Photonic Devices at this year’s Technology Center 2800 Virtual Tech Fair held May 4th, 2022. Tech Fair is an important part of the United States Patent and Trademark Office’s Patent Examiner Technical Training Program (PETTP). Having a scientifically well-trained examiner workforce and ensuring the quality, consistency, and reliability of issued patents are top priorities at the USPTO. The PETTP is designed to achieve those priorities by giving examiners direct access to technical experts who are willing to share their knowledge about prior art and industry standards for both emerging and established technologies. Experts like yourself help to maintain our high quality of patent examination by keeping examiners updated on technologies and innovations pertinent to their field of examination.
      We very much appreciate your efforts, time, and contributions."
  •  TALK    [MERL Seminar Series 2022] Prof. Vincent Sitzmann presents talk titled Self-Supervised Scene Representation Learning
    Date & Time: Wednesday, March 30, 2022; 11:00 AM EDT
    Speaker: Vincent Sitzmann, MIT
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    Abstract
    • Given only a single picture, people are capable of inferring a mental representation that encodes rich information about the underlying 3D scene. We acquire this skill not through massive labeled datasets of 3D scenes, but through self-supervised observation and interaction. Building machines that can infer similarly rich neural scene representations is critical if they are to one day parallel people’s ability to understand, navigate, and interact with their surroundings. This poses a unique set of challenges that sets neural scene representations apart from conventional representations of 3D scenes: Rendering and processing operations need to be differentiable, and the type of information they encode is unknown a priori, requiring them to be extraordinarily flexible. At the same time, training them without ground-truth 3D supervision is an underdetermined problem, highlighting the need for structure and inductive biases without which models converge to spurious explanations.

      I will demonstrate how we can equip neural networks with inductive biases that enable them to learn 3D geometry, appearance, and even semantic information, self-supervised only from posed images. I will show how this approach unlocks the learning of priors, enabling 3D reconstruction from only a single posed 2D image, and how we may extend these representations to other modalities such as sound. I will then discuss recent work on learning the neural rendering operator to make rendering and training fast, and how this speed-up enables us to learn object-centric neural scene representations, learning to decompose 3D scenes into objects, given only images. Finally, I will talk about a recent application of self-supervised scene representation learning in robotic manipulation, where it enables us to learn to manipulate classes of objects in unseen poses from only a handful of human demonstrations.
  •  NEWS    Devesh Jha delivers invited talk at Mechanical and Aerospace Engineering Department, NYU
    Date: March 1, 2022
    Where: Online/Zoom
    MERL Contact: Devesh K. Jha
    Research Areas: Artificial Intelligence, Machine Learning, Robotics
    Brief
    • Devesh Jha, a Principal Research Scientist in MERL's Data Analytics group, gave an invited talk at the Mechanical and Aerospace Engineering Department, NYU. The title of the talk was "Robotic Manipulation in the Wild: Planning, Learning and Control through Contacts". The talk presented some of the recent work done at MERL for robotic manipulation in unstructured environments in the presence of significant uncertainty.
  •  NEWS    MERL work on scene-aware interaction featured in IEEE Spectrum
    Date: March 1, 2022
    MERL Contacts: Anoop Cherian; Chiori Hori; Jonathan Le Roux; Tim K. Marks; Anthony Vetro
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
    Brief
    • MERL's research on scene-aware interaction was recently featured in an IEEE Spectrum article. The article, titled "At Last, A Self-Driving Car That Can Explain Itself" and authored by MERL Senior Principal Research Scientist Chiori Hori and MERL Director Anthony Vetro, gives an overview of MERL's efforts towards developing a system that can analyze multimodal sensing information for highly natural and intuitive interaction with humans through context-dependent generation of natural language. The technology recognizes contextual objects and events based on multimodal sensing information, such as images and video captured with cameras, audio information recorded with microphones, and localization information measured with LiDAR.

      Scene-Aware Interaction for car navigation, one target application that the article focuses on, will provide drivers with intuitive route guidance. Scene-Aware Interaction technology is expected to have wide applicability, including human-machine interfaces for in-vehicle infotainment, interaction with service robots in building and factory automation systems, systems that monitor the health and well-being of people, surveillance systems that interpret complex scenes for humans and encourage social distancing, support for touchless operation of equipment in public areas, and much more. MERL's Scene-Aware Interaction Technology had previously been featured in a Mitsubishi Electric Corporation Press Release.

      IEEE Spectrum is the flagship magazine and website of the IEEE, the world’s largest professional organization devoted to engineering and the applied sciences. IEEE Spectrum has a circulation of over 400,000 engineers worldwide, making it one of the leading science and engineering magazines.
  •  TALK    [MERL Seminar Series 2022] Learning Speech Representations with Multimodal Self-Supervision
    Date & Time: Tuesday, March 1, 2022; 1:00 PM EST
    Speaker: David Harwath, The University of Texas at Austin
    MERL Host: Chiori Hori
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Abstract
    • Humans learn spoken language and visual perception at an early age by being immersed in the world around them. Why can't computers do the same? In this talk, I will describe our ongoing work to develop methodologies for grounding continuous speech signals at the raw waveform level to natural image scenes. I will first present self-supervised models capable of discovering discrete, hierarchical structure (words and sub-word units) in the speech signal. Instead of conventional annotations, these models learn from correspondences between speech sounds and visual patterns such as objects and textures. Next, I will demonstrate how these discrete units can be used as a drop-in replacement for text transcriptions in an image captioning system, enabling us to directly synthesize spoken descriptions of images without the need for text as an intermediate representation. Finally, I will describe our latest work on Transformer-based models of visually-grounded speech. These models significantly outperform the prior state of the art on semantic speech-to-image retrieval tasks, and also learn representations that are useful for a multitude of other speech processing tasks.
  •  NEWS    Jonathan Le Roux discusses MERL's audio source separation work on popular machine learning podcast
    Date: January 24, 2022
    Where: The TWIML AI Podcast
    MERL Contact: Jonathan Le Roux
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Brief
    • MERL Speech & Audio Senior Team Leader Jonathan Le Roux was featured in an extended interview on the popular TWIML AI Podcast, presenting MERL's work towards solving the "cocktail party problem". Humans have the extraordinary ability to focus on particular sounds of interest within a complex acoustic scene, such as a cocktail party. MERL's Speech & Audio Team has been at the forefront of the field's effort to develop algorithms giving machines similar abilities. Jonathan talked with host Sam Charrington about the group's decade-long journey on this topic, from early pioneering work using deep learning for speech enhancement and speech separation, to recent works on weakly-supervised separation, hierarchical sound separation, as well as the separation of real-world soundtracks into speech, music, and sound effects (aka the "cocktail fork problem").

      The TWIML AI podcast, formerly known as This Week in Machine Learning & AI, was created in 2016 and is followed by more than 10,000 subscribers on YouTube and Twitter. Jonathan's interview marks the 555th episode of the podcast.
  •  EVENT    Prof. Melanie Zeilinger of ETH to give keynote at MERL's Virtual Open House
    Date & Time: Thursday, December 9, 2021; 1:00pm - 5:30pm EST
    Location: Virtual Event
    Speaker: Prof. Melanie Zeilinger, ETH
    Research Areas: Applied Physics, Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Electric Systems, Electronic and Photonic Devices, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio, Digital Video, Human-Computer Interaction, Information Security
    Brief
    • MERL is excited to announce the second keynote speaker for our Virtual Open House 2021:
      Prof. Melanie Zeilinger from ETH.

      Our virtual open house will take place on December 9, 2021, 1:00pm - 5:30pm (EST).

      Join us to learn more about who we are, what we do, and discuss our internship and employment opportunities. Prof. Zeilinger's talk is scheduled for 3:15pm - 3:45pm (EST).

      Registration: https://mailchi.mp/merl/merlvoh2021

      Keynote Title: Control Meets Learning - On Performance, Safety and User Interaction

      Abstract: With increasing sensing and communication capabilities, physical systems today are becoming one of the largest generators of data, making learning a central component of autonomous control systems. While this paradigm shift offers tremendous opportunities to address new levels of system complexity, variability and user interaction, it also raises fundamental questions of learning in a closed-loop dynamical control system. In this talk, I will present some of our recent results showing how even safety-critical systems can leverage the potential of data. I will first briefly present concepts for using learning for automatic controller design and for a new safety framework that can equip any learning-based controller with safety guarantees. The second part will then discuss how expert and user information can be utilized to optimize system performance, where I will particularly highlight an approach developed together with MERL for personalizing the motion planning in autonomous driving to the individual driving style of a passenger.
  •  EVENT    Prof. Ashok Veeraraghavan of Rice University to give keynote at MERL's Virtual Open House
    Date & Time: Thursday, December 9, 2021; 1:00pm - 5:30pm EST
    Location: Virtual Event
    Speaker: Prof. Ashok Veeraraghavan, Rice University
    Research Areas: Applied Physics, Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Electric Systems, Electronic and Photonic Devices, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio, Digital Video, Human-Computer Interaction, Information Security
    Brief
    • MERL is excited to announce the first keynote speaker for our Virtual Open House 2021:
      Prof. Ashok Veeraraghavan from Rice University.

      Our virtual open house will take place on December 9, 2021, 1:00pm - 5:30pm (EST).

      Join us to learn more about who we are, what we do, and discuss our internship and employment opportunities. Prof. Veeraraghavan's talk is scheduled for 1:15pm - 1:45pm (EST).

      Registration: https://mailchi.mp/merl/merlvoh2021

      Keynote Title: Computational Imaging: Beyond the limits imposed by lenses.

      Abstract: The lens has long been a central element of cameras, since its early use in the mid-nineteenth century by Niepce, Talbot, and Daguerre. The role of the lens, from the Daguerrotype to modern digital cameras, is to refract light to achieve a one-to-one mapping between a point in the scene and a point on the sensor. This effect enables the sensor to compute a particular two-dimensional (2D) integral of the incident 4D light-field. We propose a radical departure from this practice and the many limitations it imposes. In the talk we focus on two inter-related research projects that attempt to go beyond lens-based imaging.

      First, we discuss our lab’s recent efforts to build flat, extremely thin imaging devices by replacing the lens in a conventional camera with an amplitude mask and computational reconstruction algorithms. These lensless cameras, called FlatCams, can be less than a millimeter in thickness and enable applications where size, weight, thickness, or cost are the driving factors. Second, we discuss high-resolution, long-distance imaging using Fourier Ptychography, where the need for a large-aperture, aberration-corrected lens is replaced by a camera array and associated phase retrieval algorithms, again resulting in order-of-magnitude reductions in size, weight, and cost. Finally, I will spend a few minutes discussing how this holistic computational imaging approach can be used to create ultra-high-resolution wavefront sensors.
  •  AWARD    MERL Ranked 1st Place in Cross-Subject Transfer Learning Task and 4th Place Overall at the NeurIPS 2021 BEETL Competition for EEG Transfer Learning
    Date: November 11, 2021
    Awarded to: Niklas Smedemark-Margulies, Toshiaki Koike-Akino, Ye Wang, Deniz Erdogmus
    MERL Contacts: Toshiaki Koike-Akino; Ye Wang
    Research Areas: Artificial Intelligence, Signal Processing, Human-Computer Interaction
    Brief
    • The MERL Signal Processing group achieved first place in the cross-subject transfer learning task and fourth place overall in the NeurIPS 2021 BEETL AI Challenge for EEG Transfer Learning. The team included Niklas Smedemark-Margulies (intern from Northeastern University), Toshiaki Koike-Akino, Ye Wang, and Prof. Deniz Erdogmus (Northeastern University). The challenge addressed two types of transfer learning tasks for EEG biosignals: a homogeneous transfer learning task for cross-subject domain adaptation, and a heterogeneous transfer learning task for cross-data domain adaptation. Among more than 110 registered teams, MERL ranked 1st in the homogeneous transfer learning task, 7th in the heterogeneous transfer learning task, and 4th in the combined overall score. For the homogeneous transfer learning task, MERL developed a new pre-shot learning framework based on feature disentanglement techniques, providing robustness against inter-subject variation to enable calibration-free brain-computer interfaces (BCI); a generic sketch of the disentanglement idea appears below. MERL has been invited to present this pre-shot learning technique at the NeurIPS 2021 workshop.
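
      For context, the snippet below is a generic, hypothetical sketch of one common feature-disentanglement recipe (adversarial subject-invariance via gradient reversal); it is not MERL's pre-shot framework, and the layer sizes, class counts, and data in it are made up. It only illustrates how an encoder can be trained to keep task-relevant information while suppressing subject identity, i.e., the kind of robustness to inter-subject variation mentioned above.

        # Generic sketch of adversarial feature disentanglement (illustration only):
        # the encoder keeps task information while a gradient-reversed subject
        # classifier pushes it to discard subject identity.
        import torch
        import torch.nn as nn

        class GradReverse(torch.autograd.Function):
            @staticmethod
            def forward(ctx, x):
                return x.view_as(x)
            @staticmethod
            def backward(ctx, g):
                return -g  # reverse gradients flowing back into the encoder

        encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())   # EEG features -> latent
        task_head = nn.Linear(32, 4)                            # e.g., 4 task classes
        subject_head = nn.Linear(32, 10)                        # e.g., 10 training subjects
        opt = torch.optim.Adam([*encoder.parameters(),
                                *task_head.parameters(),
                                *subject_head.parameters()], lr=1e-3)
        ce = nn.CrossEntropyLoss()

        x = torch.randn(16, 64)                 # toy batch of EEG feature vectors
        y_task = torch.randint(0, 4, (16,))     # task labels
        y_subj = torch.randint(0, 10, (16,))    # subject labels

        z = encoder(x)
        loss = ce(task_head(z), y_task) + ce(subject_head(GradReverse.apply(z)), y_subj)
        opt.zero_grad(); loss.backward(); opt.step()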
  •  EVENT    MERL Virtual Open House 2021
    Date & Time: Thursday, December 9, 2021; 1:00pm - 5:30pm (EST)
    Location: Virtual Event
    Research Areas: Applied Physics, Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Electric Systems, Electronic and Photonic Devices, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio, Digital Video, Human-Computer Interaction, Information Security
    Brief
    • Mitsubishi Electric Research Laboratories cordially invites you to join our Virtual Open House, on December 9, 2021, 1:00pm - 5:30pm (EST).

      The event will feature keynotes, live sessions, research area booths, and time for open interactions with our researchers. Join us to learn more about who we are, what we do, and discuss our internship and employment opportunities.

      Registration: https://mailchi.mp/merl/merlvoh2021
  •  TALK    [MERL Seminar Series 2021] Dr. Hsiao-Yu (Fish) Tung presents talk at MERL entitled Learning to See by Moving: Self-supervising 3D scene representations for perception, control, and visual reasoning
    Date & Time: Tuesday, November 2, 2021; 1:00 PM EST
    Speaker: Dr. Hsiao-Yu (Fish) Tung, MIT BCS
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Robotics
    Abstract
    • Current state-of-the-art CNNs can localize and name objects in internet photos, yet they miss the basic knowledge that a two-year-old toddler possesses: objects persist over time despite changes in the observer’s viewpoint or during cross-object occlusions; objects have 3D extent; solid objects do not pass through each other. In this talk, I will introduce neural architectures that learn to parse video streams of a static scene into world-centric 3D feature maps by disentangling camera motion from scene appearance. I will show that the proposed architectures learn object permanence, can imagine RGB views from novel viewpoints in truly novel scenes, can conduct basic spatial reasoning and planning, can infer affordability in sentences, and can learn geometry-aware 3D concepts that allow pose-aware object recognition to happen with weak/sparse labels. Our experiments suggest that the proposed architectures are essential for generalizing across objects and locations, and that they overcome many limitations of 2D CNNs. I will show how we can use the proposed 3D representations to build machine perception and physical understanding that are closer to those of humans.
  •  NEWS    Ankush Chakrabarty gave an invited talk at CRAN: Centre de Recherche en Automatique de Nancy, France
    Date: October 21, 2021
    Where: Université de Lorraine, France
    MERL Contact: Ankush Chakrabarty
    Research Areas: Artificial Intelligence, Control, Machine Learning, Multi-Physical Modeling, Optimization
    Brief
    • Ankush Chakrabarty (Research Scientist, Multiphysical Systems Team) gave an invited talk on "Bayesian-Optimized Estimation and Control for Buildings and HVAC" at the Research Center for Automatic Control (CRAN) at the Université de Lorraine, France. The talk presented recent MERL research on probabilistic machine learning for set-point optimization and calibration of digital twins for building energy systems.
  •  AWARD    Daniel Nikovski receives Outstanding Reviewer Award at NeurIPS'21
    Date: October 18, 2021
    Awarded to: Daniel Nikovski
    MERL Contact: Daniel N. Nikovski
    Research Areas: Artificial Intelligence, Machine Learning
    Brief
    • Daniel Nikovski, Group Manager of MERL's Data Analytics group, has received an Outstanding Reviewer Award from the 2021 conference on Neural Information Processing Systems (NeurIPS'21). NeurIPS is the world's premier conference on neural networks and related technologies.
  •  NEWS    Diego Romeres appointed as Associate Editor at ICRA 2022.
    Date: September 17, 2021 - October 31, 2021
    MERL Contact: Diego Romeres
    Research Areas: Artificial Intelligence, Control, Data Analytics, Dynamical Systems, Optimization, Robotics
    Brief
    • Diego Romeres, a Principal Research Scientist in MERL's Data Analytics group, is serving as an Associate Editor (AE) for the IEEE International Conference on Robotics and Automation (ICRA) 2022.
  •  NEWS    Anoop Cherian gave an invited talk at the Department of Computer Science at the University of Bristol, UK
    Date: September 7, 2021
    MERL Contact: Anoop Cherian
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    Brief
    • Anoop Cherian, a Principal Research Scientist in MERL's Computer Vision group, gave an invited virtual talk on "InSeGAN: An Unsupervised Approach to Identical Instance Segmentation" at the Visual Information Laboratory of the University of Bristol, UK. The talk described a new approach to segmenting multiple instances of nearly identical 3D objects in depth images. More details can be found in the following paper: https://arxiv.org/abs/2108.13865, which will be presented at the International Conference on Computer Vision (ICCV'21).