- Date: May 22, 2022 - May 27, 2022
Where: Singapore
MERL Contacts: Anoop Cherian; Chiori Hori; Toshiaki Koike-Akino; Jonathan Le Roux; Tim K. Marks; Philip V. Orlik; Kuan-Chuan Peng; Pu (Perry) Wang; Gordon Wichern
Research Areas: Artificial Intelligence, Computer Vision, Signal Processing, Speech & Audio
Brief - MERL researchers are presenting 8 papers at the IEEE International Conference on Acoustics, Speech & Signal Processing (ICASSP), which is being held in Singapore from May 22-27, 2022. A week of virtual presentations also took place earlier this month.
Topics to be presented include recent advances in speech recognition, audio processing, scene understanding, computational sensing, and classification.
ICASSP is the flagship conference of the IEEE Signal Processing Society, and the world's largest and most comprehensive technical conference focused on research advances and the latest technological developments in signal and information processing. The event attracts more than 2000 participants each year.
-
- Date: March 1, 2022
MERL Contacts: Anoop Cherian; Chiori Hori; Jonathan Le Roux; Tim K. Marks; Anthony Vetro
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
Brief - MERL's research on scene-aware interaction was recently featured in an IEEE Spectrum article. The article, titled "At Last, A Self-Driving Car That Can Explain Itself" and authored by MERL Senior Principal Research Scientist Chiori Hori and MERL Director Anthony Vetro, gives an overview of MERL's efforts towards developing a system that can analyze multimodal sensing information for highly natural and intuitive interaction with humans through context-dependent generation of natural language. The technology recognizes contextual objects and events based on multimodal sensing information, such as images and video captured with cameras, audio information recorded with microphones, and localization information measured with LiDAR.
Scene-Aware Interaction for car navigation, one target application that the article focuses on, will provide drivers with intuitive route guidance. Scene-Aware Interaction technology is expected to have wide applicability, including human-machine interfaces for in-vehicle infotainment, interaction with service robots in building and factory automation systems, systems that monitor the health and well-being of people, surveillance systems that interpret complex scenes for humans and encourage social distancing, support for touchless operation of equipment in public areas, and much more. MERL's Scene-Aware Interaction Technology had previously been featured in a Mitsubishi Electric Corporation Press Release.
IEEE Spectrum is the flagship magazine and website of the IEEE, the world’s largest professional organization devoted to engineering and the applied sciences. IEEE Spectrum has a circulation of over 400,000 engineers worldwide, making it one of the leading science and engineering magazines.
-
- Date: July 22, 2020
Where: Tokyo, Japan
MERL Contacts: Anoop Cherian; Chiori Hori; Jonathan Le Roux; Tim K. Marks; Anthony Vetro
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
Brief - Mitsubishi Electric Corporation announced that the company has developed what it believes to be the world’s first technology capable of highly natural and intuitive interaction with humans based on a scene-aware capability to translate multimodal sensing information into natural language.
The novel technology, Scene-Aware Interaction, incorporates Mitsubishi Electric’s proprietary Maisart® compact AI technology to analyze multimodal sensing information for highly natural and intuitive interaction with humans through context-dependent generation of natural language. The technology recognizes contextual objects and events based on multimodal sensing information, such as images and video captured with cameras, audio information recorded with microphones, and localization information measured with LiDAR.
Scene-Aware Interaction for car navigation, one target application, will provide drivers with intuitive route guidance. The technology is also expected to have applicability to human-machine interfaces for in-vehicle infotainment, interaction with service robots in building and factory automation systems, systems that monitor the health and well-being of people, surveillance systems that interpret complex scenes for humans and encourage social distancing, support for touchless operation of equipment in public areas, and much more. The technology is based on recent research by MERL's Speech & Audio and Computer Vision groups.
-
- Date: June 14, 2020 - June 19, 2020
MERL Contacts: Anoop Cherian; Michael J. Jones; Toshiaki Koike-Akino; Tim K. Marks; Kuan-Chuan Peng; Ye Wang
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Brief - MERL researchers are presenting four papers (two oral papers and two posters) and organizing two workshops at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020).
CVPR 2020 Orals with MERL authors:
1. "Dynamic Multiscale Graph Neural Networks for 3D Skeleton Based Human Motion Prediction," by Maosen Li, Siheng Chen, Yangheng Zhao, Ya Zhang, Yanfeng Wang, Qi Tian
2. "Collaborative Motion Prediction via Neural Motion Message Passing," by Yue Hu, Siheng Chen, Ya Zhang, Xiao Gu
CVPR 2020 Posters with MERL authors:
3. "LUVLi Face Alignment: Estimating Landmarks’ Location, Uncertainty, and Visibility Likelihood," by Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Ye Wang, Michael Jones, Anoop Cherian, Toshiaki Koike-Akino, Xiaoming Liu, Chen Feng
4. "MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird’s Eye View Maps," by Pengxiang Wu, Siheng Chen, Dimitris N. Metaxas
CVPR 2020 Workshops co-organized by MERL researchers:
1. Fair, Data-Efficient and Trusted Computer Vision
2. Deep Declarative Networks
-
- Date: October 27, 2019
Awarded to: Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Chen Feng, Xiaoming Liu
MERL Contact: Tim K. Marks
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Brief - MERL researcher Tim Marks, former MERL interns Abhinav Kumar and Wenxuan Mou, and MERL consultants Professor Chen Feng (NYU) and Professor Xiaoming Liu (MSU) received the Best Oral Paper Award at the IEEE/CVF International Conference on Computer Vision (ICCV) 2019 Workshop on Statistical Deep Learning in Computer Vision (SDL-CV) held in Seoul, Korea. Their paper, entitled "UGLLI Face Alignment: Estimating Uncertainty with Gaussian Log-Likelihood Loss," describes a method which, given an image of a face, estimates not only the locations of facial landmarks but also the uncertainty of each landmark location estimate.
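The core idea behind estimating both a landmark's location and its uncertainty can be illustrated with the Gaussian log-likelihood loss named in the paper's title. The NumPy sketch below is a generic 2D Gaussian negative log-likelihood, not the paper's network or training code; the function name, dimensions, and the toy example are illustrative assumptions.

```python
import numpy as np

def gaussian_nll(y, mu, Sigma):
    """Negative Gaussian log-likelihood for one 2D landmark (illustrative).

    y     : (2,) ground-truth landmark location
    mu    : (2,) predicted mean location
    Sigma : (2, 2) predicted covariance (the uncertainty estimate)
    """
    d = y - mu
    # The constant log(2*pi) term is omitted; it does not affect training.
    return 0.5 * (np.log(np.linalg.det(Sigma)) + d @ np.linalg.solve(Sigma, d))

# A confident (small-covariance) near-correct prediction scores a lower
# loss than an equally accurate but uncertain one.
y = np.array([10.0, 20.0])
tight = gaussian_nll(y, y + 0.1, 0.5 * np.eye(2))
loose = gaussian_nll(y, y + 0.1, 5.0 * np.eye(2))
assert tight < loose
```

Minimizing such a loss jointly rewards accurate landmark means and well-calibrated covariances: the network cannot shrink the covariance without paying for location errors through the quadratic term.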
-
- Date: May 12, 2019 - May 17, 2019
Where: Brighton, UK
MERL Contacts: Petros T. Boufounos; Anoop Cherian; Chiori Hori; Toshiaki Koike-Akino; Jonathan Le Roux; Dehong Liu; Hassan Mansour; Tim K. Marks; Philip V. Orlik; Anthony Vetro; Pu (Perry) Wang; Gordon Wichern
Research Areas: Computational Sensing, Computer Vision, Machine Learning, Signal Processing, Speech & Audio
Brief - MERL researchers will be presenting 16 papers at the IEEE International Conference on Acoustics, Speech & Signal Processing (ICASSP), which is being held in Brighton, UK from May 12-17, 2019. Topics to be presented include recent advances in speech recognition, audio processing, scene understanding, computational sensing, and parameter estimation. MERL is also a sponsor of the conference and will be participating in the student career luncheon; please join us at the lunch to learn about our internship program and career opportunities.
-
- Date: October 28, 2017
Where: Venice, Italy
MERL Contact: Tim K. Marks
Research Area: Machine Learning
Brief - MERL Senior Principal Research Scientist Tim K. Marks will give an invited keynote talk at the 2017 IEEE Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2017). The workshop will take place on October 28, 2017, at the International Conference on Computer Vision (ICCV 2017) in Venice, Italy.
-
- Date: Thursday, June 1, 2017
Location: IEEE Conference on Automatic Face and Gesture Recognition (FG 2017), Washington, DC
Speaker: Tim K. Marks
MERL Contact: Tim K. Marks
Research Area: Machine Learning
Brief - MERL Senior Principal Research Scientist Tim K. Marks will give the invited lunch talk on Thursday, June 1, at the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017). The talk is entitled "Robust Real-Time 3D Head Pose and 2D Face Alignment."
-
- Date: April 27, 2017
Where: Lincoln Laboratory, Massachusetts Institute of Technology
MERL Contact: Tim K. Marks
Research Area: Machine Learning
Brief - MERL researcher Tim K. Marks presented an invited talk as part of the MIT Lincoln Laboratory CORE Seminar Series on Biometrics. The talk was entitled "Robust Real-Time 2D Face Alignment and 3D Head Pose Estimation."
Abstract: Head pose estimation and facial landmark localization are key technologies, with widespread application areas including biometrics and human-computer interfaces. This talk describes two different robust real-time face-processing methods, each using a different modality of input image. The first part of the talk describes our system for 3D head pose estimation and facial landmark localization using a commodity depth sensor. The method is based on a novel 3D Triangular Surface Patch (TSP) descriptor, which is viewpoint-invariant as well as robust to noise and to variations in the data resolution. This descriptor, combined with fast nearest-neighbor lookup and a joint voting scheme, enables our system to handle arbitrary head pose and significant occlusions. The second part of the talk describes our method for face alignment, which is the localization of a set of facial landmark points in a 2D image or video of a face. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a Mixture of Invariant eXperts (MIX), where each expert learns a regression model that is specialized to a different subset of the joint space of pose and expressions. We also present a method to include deformation constraints within the discriminative alignment framework, which makes the algorithm more robust. Both our 3D head pose and 2D face alignment methods outperform previous state-of-the-art results on standard datasets. If permitted, I plan to end the talk with a live demonstration.
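As a rough illustration of the cascade-of-experts idea from the second part of the talk, the NumPy sketch below runs a few cascade stages in which a gating rule selects one linear expert per stage. The gating rule (nearest anchor to the mean landmark position), the random features, and the expert parameterization are all illustrative assumptions, not the actual MIX model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_stage(features, landmarks, experts):
    """One cascade stage: a mixture-of-experts update (illustrative sketch).

    Each expert is a linear regressor (W, b) meant to specialize in one
    region of pose/expression space; a gate picks the expert whose anchor
    is closest to a crude pose proxy, here the mean landmark position.
    """
    anchors = np.stack([e["anchor"] for e in experts])
    pose_proxy = landmarks.mean(axis=0)
    k = np.argmin(np.linalg.norm(anchors - pose_proxy, axis=1))
    W, b = experts[k]["W"], experts[k]["b"]
    # The chosen expert regresses an update to every landmark coordinate.
    return landmarks + (features @ W + b).reshape(landmarks.shape)

n_landmarks, n_feat = 5, 8
experts = [{"anchor": rng.normal(size=2),
            "W": 0.01 * rng.normal(size=(n_feat, 2 * n_landmarks)),
            "b": np.zeros(2 * n_landmarks)} for _ in range(3)]

landmarks = rng.normal(size=(n_landmarks, 2))   # initial landmark estimate
for _ in range(4):                              # four cascade stages
    features = rng.normal(size=n_feat)          # stand-in for image features
    landmarks = mix_stage(features, landmarks, experts)
print(landmarks.shape)  # (5, 2)
```

The point of the cascade structure is that each stage only needs to be accurate within the neighborhood its gated expert covers, which is what makes the overall regressor robust across large pose and expression variations.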
-
- Date: April 10, 2017
Where: University of Utah School of Computing
MERL Contact: Tim K. Marks
Research Area: Machine Learning
Brief - MERL researcher Tim K. Marks presented an invited talk at the University of Utah School of Computing, entitled "Action Detection from Video and Robust Real-Time 2D Face Alignment."
Abstract: The first part of the talk describes our multi-stream bi-directional recurrent neural network for action detection from video. In addition to a two-stream convolutional neural network (CNN) on full-frame appearance (images) and motion (optical flow), our system trains two additional streams on appearance and motion that have been cropped to a bounding box from a person tracker. To model long-term temporal dynamics within and between actions, the multi-stream CNN is followed by a bi-directional Long Short-Term Memory (LSTM) layer. Our method outperforms the previous state of the art on two action detection datasets: the MPII Cooking 2 Dataset, and a new MERL Shopping Dataset that we have made available to the community. The second part of the talk describes our method for face alignment, which is the localization of a set of facial landmark points in a 2D image or video of a face. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a Mixture of Invariant eXperts (MIX), where each expert learns a regression model that is specialized to a different subset of the joint space of pose and expressions. We also present a method to include deformation constraints within the discriminative alignment framework, which makes the algorithm more robust. Our face alignment system outperforms previously published results on standard datasets. The talk will end with a live demo of our face alignment system.
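The multi-stream-plus-bidirectional-recurrence structure described in the first part of the abstract can be sketched in a few lines of NumPy. Here random per-frame vectors stand in for the four CNN stream features, and a vanilla tanh RNN stands in for the LSTM; the dimensions, weights, and linear scoring head are illustrative assumptions rather than the talk's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(x, Wx, Wh):
    """Simple tanh RNN over a sequence x of shape (T, d); returns (T, h)."""
    h = np.zeros(Wh.shape[0])
    out = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ Wx + h @ Wh)
        out.append(h)
    return np.stack(out)

T, d, hdim = 6, 4, 3
# Four streams: {full-frame, person-cropped} x {appearance, motion},
# each represented here by random per-frame feature vectors.
streams = [rng.normal(size=(T, d)) for _ in range(4)]
fused = np.concatenate(streams, axis=1)          # (T, 4*d) per-frame fusion

Wx_f, Wh_f = rng.normal(size=(4 * d, hdim)), rng.normal(size=(hdim, hdim))
Wx_b, Wh_b = rng.normal(size=(4 * d, hdim)), rng.normal(size=(hdim, hdim))
fwd = rnn_pass(fused, Wx_f, Wh_f)                # forward in time
bwd = rnn_pass(fused[::-1], Wx_b, Wh_b)[::-1]    # backward in time
bidir = np.concatenate([fwd, bwd], axis=1)       # (T, 2*hdim)

# Per-frame action scores from the bidirectional states (illustrative head).
n_actions = 5
scores = bidir @ rng.normal(size=(2 * hdim, n_actions))
print(scores.shape)  # (6, 5)
```

The bidirectional pass is what lets each frame's action score depend on both past and future context, which matters for detecting where one action ends and the next begins.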
-
- Date: June 27, 2016 - June 30, 2016
Where: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV
MERL Contacts: Michael J. Jones; Tim K. Marks
Research Area: Machine Learning
Brief - MERL researchers in the Computer Vision group presented three papers at the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), which had a paper acceptance rate of 29.9%.
-
- Date: May 8, 2012
Where: The International Journal of Robotics Research
MERL Contact: Tim K. Marks
Research Area: Computer Vision
Brief - The article "Fast Object Localization and Pose Estimation in Heavy Clutter for Robotic Bin Picking" by Liu, M.-Y., Tuzel, O., Veeraraghavan, A., Taguchi, Y., Marks, T.K. and Chellappa, R. was published in The International Journal of Robotics Research.
-
- Date: November 6, 2011
Where: IEEE International Conference on Computer Vision (ICCV)
MERL Contacts: Tim K. Marks; Michael J. Jones
Brief - The paper "Fully Automatic Pose-Invariant Face Recognition via 3D Pose Normalization" by Asthana, A., Marks, T.K., Jones, M.J., Tieu, K.H. and Rohith, M. was presented at the IEEE International Conference on Computer Vision (ICCV).
-
- Date: September 25, 2011
Where: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
MERL Contact: Tim K. Marks
Brief - The paper "Entropy-Based Motion Selection for Touch-Based Registration Using Rao-Blackwellized Particle Filtering" by Taguchi, Y., Marks, T.K. and Hershey, J.R. was presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
-
- Date: August 29, 2011
Where: British Machine Vision Conference (BMVC)
MERL Contacts: Michael J. Jones; Tim K. Marks
Brief - The paper "Pose Normalization via Learned 2D Warping for Fully Automatic Face Recognition" by Asthana, A., Jones, M.J., Marks, T.K., Tieu, K.H. and Goecke, R. was presented at the British Machine Vision Conference (BMVC).
-
- Date: September 5, 2010
Where: European Conference on Computer Vision (ECCV)
MERL Contact: Tim K. Marks
Research Area: Computer Vision
Brief - The following papers were presented at the European Conference on Computer Vision (ECCV):
1. "Image Invariants for Smooth Reflective Surfaces," by Sankaranarayanan, A.C., Veeraraghavan, A., Tuzel, O. and Agrawal, A.
2. "Analytical Forward Projection for Axial Non-Central Dioptric & Catadioptric Cameras," by Agrawal, A., Taguchi, Y. and Ramalingam, S.
3. "P2Pi: A Minimal Solution for Registration of 3D Points to 3D Planes," by Ramalingam, S., Taguchi, Y., Marks, T.K. and Tuzel, O.
4. "Fast Approximate Nearest Neighbor Methods for Non-Euclidean Manifolds with Applications to Human Activity Analysis in Videos," by Chaudhry, R. and Ivanov, Y.
5. "Flexible Voxels for Motion-Aware Videography," by Gupta, M., Agrawal, A., Veeraraghavan, A. and Narasimhan, S.G.
-
- Date: June 13, 2010
Where: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
MERL Contacts: Michael J. Jones; Tim K. Marks
Brief - The following papers were presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR):
1. "Optimal Coded Sampling for Temporal Super-Resolution," by Agrawal, A.K., Gupta, M., Veeraraghavan, A.N. and Narasimhan, S.G.
2. "Breaking the Interactive Bottleneck in Multi-class Classification with Active Selection and Binary Feedback," by Joshi, A.J., Porikli, F.M. and Papanikolopoulos, N.
3. "Axial Light Field for Curved Mirrors: Reflect Your Perspective, Widen Your View," by Taguchi, Y., Agrawal, A.K., Ramalingam, S. and Veeraraghavan, A.N.
4. "Morphable Reflectance Fields for Enhancing Face Recognition," by Kumar, R., Jones, M.J. and Marks, T.K.
5. "Increasing Depth Resolution of Electron Microscopy of Neural Circuits using Sparse Tomographic Reconstruction," by Veeraraghavan, A., Genkin, A.V., Vitaladevuni, S., Scheffer, L., Xu, S., Hess, H., Fetter, R., Cantoni, M., Knott, G. and Chklovskii, D.
6. "Specular Surface Reconstruction from Sparse Reflection Correspondences," by Sankaranarayanan, A., Veeraraghavan, A.N., Tuzel, C.O. and Agrawal, A.K.
7. "Fast Directional Chamfer Matching," by Liu, M.-Y., Tuzel, C.O., Veeraraghavan, A.N. and Chellappa, R.
8. "Robust RVM regression using sparse outlier model," by Mitra, K., Veeraraghavan, A. and Chellappa, R.
-
- Date: May 3, 2010
Where: IEEE International Conference on Robotics and Automation (ICRA)
MERL Contact: Tim K. Marks
Research Area: Computer Vision
Brief - The following papers were presented at the IEEE International Conference on Robotics and Automation (ICRA):
1. "Pose Estimation in Heavy Clutter Using a Multi-Flash Camera," by Liu, M.-Y., Tuzel, C.O., Veeraraghavan, A.N., Chellappa, R., Agrawal, A.K. and Okuda, H.
2. "Rao-Blackwellized Particle Filtering for Probing-based 6-DOF Localization in Robotic Assembly," by Taguchi, Y., Marks, T.K. and Okuda, H.
3. "Multi-Class Batch-mode Active Learning for Image Classification," by Joshi, A.J., Porikli, F. and Papanikolopoulos, N.
-
- Date: February 1, 2010
Where: IEEE Transactions on Pattern Analysis and Machine Intelligence
MERL Contact: Tim K. Marks
Research Area: Computer Vision
Brief - The article "Tracking Motion, Deformation and Texture Using Conditionally Gaussian Processes" by Marks, T.K., Hershey, J.R. and Movellan, J.R. was published in IEEE Transactions on Pattern Analysis and Machine Intelligence.
-