MERL’s Virtual Open House 2022

December 12, 2022

Join us for MERL's virtual open house on December 12. Live sessions will be held from 1:00 to 5:30 PM EST and will include an overview of recent activities by our research groups, a featured guest speaker, and live interaction with our research staff through the Gather platform. Registered attendees will be able to browse our virtual booths at their convenience and connect with our research staff about engagement opportunities, including internship and post-doc openings as well as visiting faculty positions.


Details

  • Date: Monday, December 12, 2022
  • Time: 1:00 - 5:30 PM EST

Live Session Schedule

  • 1:00 - 1:25 PM EST
    Auditorium A: Welcome / Opening Remarks
  • 1:30 - 1:50 PM EST
    Auditorium A: Speech & Audio (Jonathan Le Roux)
    Auditorium B: Computational Sensing (Petros T. Boufounos)
  • 1:50 - 2:10 PM EST
    Auditorium A: Computer Vision (Tim K. Marks)
    Auditorium B: Connectivity and Information Processing (Kieran Parsons)
  • 2:10 - 3:10 PM EST
    Open Interaction on Gather Platform
  • 3:15 - 3:45 PM EST
    Auditorium A: Keynote, "Dragging Audio Processing Past the 1970s (and the 2010s!)", Paris Smaragdis, University of Illinois at Urbana-Champaign
  • 3:50 - 4:10 PM EST
    Auditorium A: Controls for Autonomy (Stefano Di Cairano)
    Auditorium B: Electric Machines & Devices (Yebin Wang)
  • 4:10 - 4:30 PM EST
    Auditorium A: Data Analytics (Daniel N. Nikovski)
    Auditorium B: Multi-Physical Systems (Chris R. Laughman)
  • 4:30 - 5:30 PM EST
    Open Interaction on Gather Platform

Virtual Exhibit Booths

Attendees are invited to visit our virtual booths at their convenience to learn more about MERL's research activities and internship opportunities. These virtual spaces will provide:

  • Material that provides a more in-depth view of our latest research results
  • Links to relevant internship and post-doc opportunities
  • An opportunity to interact live with researchers on the Gather Platform

The event will feature more than a dozen virtual booths in key research areas.

  • End-to-end Speech and Audio Processing
  • Multimodal AI
  • Visual Analysis and Synthesis using Machine Learning
  • Robotic Perception
  • Machine Learning and Optimization for Robot Control and Human Robot Interaction
  • Power Systems Analytics
  • IoT Communications
  • Robust and Distributed Machine Learning
  • Computational Sensing
  • Autonomous Vehicles and Mobile Robots
  • Dynamical Systems and Control
  • Advanced Motor Technologies
  • Multiphysical Systems Modeling, Control, and Learning
  • Optics and Metasurfaces
  • Visual Localization & Mapping


Featured Guest Speaker

Prof. Paris Smaragdis, University of Illinois at Urbana-Champaign

Dragging Audio Processing Past the 1970s (and the 2010s!)

Abstract: Audio processing has not changed appreciably in the last 50 years. However, novel tasks, new computational demands, attention to human-centered evaluation, and a strong influence from machine learning all point towards new ways of thinking about sound. In this talk I will go over multiple examples of how one can modernize standard audio processing in order to serve ambitious project goals. I will specifically talk about the use of meta learning for adaptive filtering, and how we can outperform humans in the game of optimizer design; I will show new ways to represent and process time series based on graph networks that result in highly desirable scaling properties for audio and speech recognition; and I will also talk about how we can move towards unsupervised learning from real-world data in a way that (almost) matches curated-data performance and allows highly distributed learning from audio devices in the wild.

Prof. Paris Smaragdis

Paris Smaragdis is a Professor and an Associate Department Head in the Computer Science Department at the University of Illinois at Urbana-Champaign. He completed his graduate studies and postdoc at MIT in 2001. He has been a research scientist at Mitsubishi Electric Research Labs in Cambridge, MA, a senior research scientist at Adobe Research, and an Amazon Scholar with AWS. His research lies at the intersection of signal processing and machine learning, where he has contributed multiple widely used methods for source separation and audio analysis across his 150+ publications and 60+ US and international patents. His research has been productized many times worldwide, has been widely used in personal computers and commercial systems, and has been used in award-winning movies and music releases. He was recognized by the MIT Technology Review as one of the "world's top innovators under 35 years old" in 2006 (TR35 award) and he has received the IEEE Signal Processing Society (SPS) Best Paper Award twice (2017, 2020). He was elected an IEEE Fellow (class of 2015) and selected as an IEEE SPS Distinguished Lecturer (2016-2017). Within IEEE SPS he has served as chair of the Machine Learning for Signal Processing Technical Committee, the Audio and Acoustic Signal Processing Technical Committee, and the Data Science Initiative. He has been elected to and served on the IEEE Signal Processing Society Board of Governors, and is currently the Editor-in-Chief of the ACM/IEEE Transactions on Audio, Speech, and Language Processing.


Contact Us

If you are experiencing any issues with registration or accessing the event site, or would like further information about this event, please contact us at voh22[at]merl[dot]com.