Initiated in 2021, the MERL Seminar Series aims to increase exposure to outstanding emerging researchers and new research topics. We invite carefully chosen external speakers to share their work, focusing on technical topics of broad interest. The talks help promote interdisciplinary collaboration within and outside MERL.
Seminars in the series take place approximately every two weeks during the fall and spring.
I will survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to "understand" language -- and the physical and social situations language encodes -- in any important sense. I will describe arguments that have been made for and against such understanding, and, more generally, will discuss what methods can be used to fairly evaluate understanding and intelligence in AI systems. I will conclude with key questions for the broader sciences of intelligence that have arisen in light of these discussions.
Advances in machine learning have led to powerful models for audio and language, proficient in tasks like speech recognition and fluent language generation. Beyond their immense utility in engineering applications, these models offer valuable tools for cognitive science and neuroscience. In this talk, I will demonstrate how these artificial neural network models can be used to understand how the human brain processes language. The first part of the talk will cover how audio neural networks serve as computational accounts for brain activity in the auditory cortex. The second part will focus on the use of large language models, such as those in the GPT family, to non-invasively control brain activity in the human language system.
Imaging in low light settings is extremely challenging due to low photon counts, both in photography and in microscopy. In photography, imaging under low light, high gain settings often results in highly structured, non-Gaussian sensor noise that’s hard to characterize or denoise. In this talk, we address this by developing a GAN-tuned physics-based noise model to more accurately represent camera noise at the lowest light, and highest gain settings. Using this noise model, we train a video denoiser using synthetic data and demonstrate photorealistic videography at starlight (submillilux levels of illumination) for the first time.
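To make the flavor of such a model concrete, here is a minimal sketch of a generic physics-based sensor noise model of the kind that GAN tuning would start from; the parameter values and noise components are illustrative assumptions, not the model from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sensor(photon_flux, gain=16.0, read_std=2.0, bits=12):
    """Toy physics-based noise model: Poisson shot noise + Gaussian read
    noise + ADC quantization. A GAN-tuned model would add learned
    components (e.g. banding/row noise) on top of terms like these."""
    electrons = rng.poisson(photon_flux)                    # shot noise
    volts = gain * electrons + rng.normal(0.0, read_std, photon_flux.shape)
    return np.clip(np.round(volts), 0, 2**bits - 1)         # quantization

clean = np.full((8, 8), 0.5)   # ~0.5 photons/pixel: a sub-millilux regime
noisy = simulate_sensor(clean)
```

At such low photon counts the Poisson term dominates, which is why the residual structured noise that this kind of model misses becomes visible and motivates the GAN tuning step.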
For multiphoton microscopy, which is a form of scanning microscopy, there’s a trade-off between field of view, phototoxicity, acquisition time, and image quality, often resulting in noisy measurements. While deep learning-based methods have shown compelling denoising performance, can we trust these methods enough for critical scientific and medical applications? In the second part of this talk, I’ll introduce a learned, distribution-free uncertainty quantification technique that can both denoise and predict pixel-wise uncertainty to gauge how much we can trust our denoiser’s performance. Furthermore, we propose to leverage this learned, pixel-wise uncertainty to drive an adaptive acquisition technique that rescans only the most uncertain regions of a sample. With our sample- and algorithm-informed adaptive acquisition, we demonstrate a 120X improvement in total scanning time and total light dose for multiphoton microscopy, while successfully recovering fine structures within the sample.
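The adaptive-acquisition idea can be sketched in a few lines, assuming a pixel-wise uncertainty map is already available (here it is random; in the actual method it comes from the learned, distribution-free uncertainty estimator):

```python
import numpy as np

def rescan_mask(uncertainty, budget=0.1):
    """Select the most uncertain pixels for rescanning, up to a fraction
    `budget` of the image. In the real system the uncertainty map is
    produced by a learned head alongside the denoiser."""
    k = int(budget * uncertainty.size)
    thresh = np.partition(uncertainty.ravel(), -k)[-k]  # k-th largest value
    return uncertainty >= thresh

unc = np.random.default_rng(1).random((64, 64))   # placeholder uncertainty map
mask = rescan_mask(unc, budget=0.1)
# Only ~10% of pixels are revisited, cutting scan time and light dose.
```

The savings reported in the talk come from exactly this kind of selective rescan: light dose scales with the number of revisited pixels rather than with the full field of view.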
When designing complex systems, we need to consider multiple trade-offs at various abstraction levels and scales, and choices of single components need to be studied jointly. For instance, the design of future mobility solutions (e.g., autonomous vehicles, micromobility) and the design of the mobility systems they enable are closely coupled. Indeed, knowledge about the intended service of novel mobility solutions would impact their design and deployment process, whilst insights about their technological development could significantly affect transportation management policies. Optimally co-designing sociotechnical systems is a complex task for at least two reasons. On one hand, the co-design of interconnected systems (e.g., large networks of cyber-physical systems) involves the simultaneous choice of components arising from heterogeneous natures (e.g., hardware vs. software parts) and fields, while satisfying systemic constraints and accounting for multiple objectives. On the other hand, components are connected via collaborative and conflicting interactions between different stakeholders (e.g., within an intermodal mobility system). In this talk, I will present a framework to co-design complex systems, leveraging a monotone theory of co-design and tools from game theory. The framework will be instantiated in the task of designing future mobility systems, all the way from the policies that a city can design to the autonomy of vehicles that are part of an autonomous mobility-on-demand service. Through various case studies, I will show how the proposed approaches allow one to efficiently answer heterogeneous questions, unifying different modeling techniques and promoting interdisciplinarity, modularity, and compositionality. I will then discuss open challenges for compositional systems design optimization, and present my agenda to tackle them.
This talk reviews the concept of predictive multiplicity in machine learning. Predictive multiplicity arises when different classifiers achieve similar average performance for a specific learning task yet produce conflicting predictions for individual samples. We discuss a metric called “Rashomon Capacity” for quantifying predictive multiplicity in multi-class classification. We also present recent findings on the multiplicity cost of differentially private training methods and group fairness interventions in machine learning.
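As a toy illustration of predictive multiplicity (synthetic data and hand-picked linear classifiers; the simple disagreement rate below is a much cruder notion than the Rashomon Capacity metric discussed in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Two linear classifiers with comparable average accuracy but
# different decision boundaries.
w1, w2 = np.array([1.0, 1.0]), np.array([1.2, 0.8])
p1 = (X @ w1 > 0).astype(int)
p2 = (X @ w2 > 0).astype(int)

acc1, acc2 = (p1 == y).mean(), (p2 == y).mean()
disagreement = (p1 != p2).mean()   # conflicting individual predictions
```

Even though both models look similar on average, the nonzero disagreement rate shows that some individual samples receive conflicting predictions depending on which "equally good" model happened to be deployed.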
This talk is based on work published at ICML'20, NeurIPS'22, ACM FAccT'23, and NeurIPS'23.
Building General Purpose Vision Systems (GPVs) that can perform a huge variety of tasks has been a long-standing goal for the computer vision community. However, end-to-end training of these systems to handle different modalities and tasks has proven to be extremely challenging. In this talk, I will describe a compelling neuro-symbolic alternative to the common end-to-end learning paradigm called Visual Programming. Visual Programming is a general framework that leverages the code-generation abilities of LLMs, existing neural models, and non-differentiable programs to enable powerful applications. Some of these applications remain elusive for the current generation of end-to-end trained GPVs.
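The mechanism can be caricatured in a few lines; the module names, program format, and toy data below are invented for illustration, whereas a real visual-programming system composes LLM-generated code with neural vision modules:

```python
# Toy illustration of the visual-programming idea: an LLM would emit a
# short program over named modules; here the modules are trivial stubs.
MODULES = {
    "FIND": lambda image, query: [r for r in image if r["label"] == query],
    "COUNT": lambda regions: len(regions),
}

def run_program(program, image):
    """Execute a generated program step by step, threading the state."""
    state = image
    for op, arg in program:
        state = MODULES[op](state, arg) if arg is not None else MODULES[op](state)
    return state

image = [{"label": "dog"}, {"label": "cat"}, {"label": "dog"}]
# e.g. an LLM might generate this for "How many dogs are in the image?"
program = [("FIND", "dog"), ("COUNT", None)]
answer = run_program(program, image)   # → 2
```

Because the program itself is ordinary, inspectable code, the framework stays interpretable and extensible without any end-to-end retraining.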
Inverse Optimal Control (IOC) aims to recover the objective function underlying a certain task from an expert robot driven by optimal control, and has become a powerful tool in many robotics applications. We will present our recent solutions to IOC based on incomplete observations of system trajectories, which enable an autonomous system to “sense-and-adapt”, i.e., to incrementally improve its learned objective function as new data arrives. This also leads to a distributed algorithm for solving IOC in multi-agent systems, in which each agent can only access part of the overall trajectory of an optimal control system and cannot solve the IOC problem by itself; this is perhaps the first distributed method for IOC. Applications of IOC to human prediction will also be given.
Recent advances in multimodal models that fuse vision and language are revolutionizing robotics. In this lecture, I will begin by introducing recent multimodal foundational models and their applications in robotics. The second topic of this talk will address our recent work on multimodal language processing in robotics. The shortage of home care workers has become a pressing societal issue, and the use of domestic service robots (DSRs) to assist individuals with disabilities is seen as a possible solution. I will present our work on DSRs that are capable of open-vocabulary mobile manipulation, referring expression comprehension and segmentation models for everyday objects, and future captioning methods for cooking videos and DSRs.
Contact interactions are pervasive in key real-world robotic tasks like manipulation and walking. However, the non-smooth dynamics associated with impacts and friction remain challenging to model, and motion planning and control algorithms that can fluently and efficiently reason about contact remain elusive. In this talk, I will share recent work from my research group that takes an “optimization-first” approach to these challenges: collision detection, physics, motion planning, and control are all posed as constrained optimization problems. We then build a set of algorithmic and numerical tools that allow us to flexibly compose these optimization sub-problems to solve complex robotics problems involving discontinuous, unplanned, and uncertain contact mechanics.
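As a toy instance of the optimization-first framing, collision detection between two convex shapes can be posed as a minimum-distance optimization and solved by alternating projections (axis-aligned boxes here; this is a simplification for illustration, not the speaker's actual tooling):

```python
import numpy as np

def proj_box(p, lo, hi):
    """Euclidean projection of a point onto an axis-aligned box."""
    return np.clip(p, lo, hi)

def box_distance(lo1, hi1, lo2, hi2, iters=50):
    """Collision detection posed as optimization: min ||a - b|| with
    a in box 1 and b in box 2, solved by alternating projections."""
    a = (np.asarray(lo1) + hi1) / 2
    for _ in range(iters):
        b = proj_box(a, lo2, hi2)   # closest point in box 2 to a
        a = proj_box(b, lo1, hi1)   # closest point in box 1 to b
    return np.linalg.norm(a - b), a, b

d, a, b = box_distance([0, 0], [1, 1], [2, 0], [3, 1])
# The boxes are separated by 1 unit along x, so d is 1.0.
```

Posing the sub-problem this way is what makes it composable: the same constrained-optimization machinery (and its derivatives) can then be reused inside physics, planning, and control layers.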
Carbon capture, utilization, and storage (CCUS) is a promising pathway to decarbonize fossil-based power and industrial sectors and is a bridging technology for a sustainable transition to a net-zero emission energy future. This talk aims to provide an overview of design and optimization of CCUS systems. I will also attempt to give a brief perspective on emerging interests in process systems engineering research (e.g., systems integration, multiscale modeling, strategic planning, and optimization under uncertainty). The purpose is not to cover all aspects of PSE research for CCUS but rather to foster discussion by presenting some plausible future directions and ideas.
Quantum technology holds potential for revolutionizing how information is processed, transmitted, and acquired. While quantum computation and quantum communication have been among the well-known examples of quantum technology, it is increasingly recognized that quantum sensing is the application with the most potential for immediate widespread practical use. In this talk, I will provide an overview of the field of quantum sensing, with nitrogen vacancy (NV) centers in diamond as a specific example. I will introduce the physical system of the NV center and describe some basic quantum sensing protocols. Then, I will present the state of the art, with examples where quantum sensors such as NV centers can accomplish what traditional sensors cannot. Lastly, I will discuss potential future directions in the area of NV quantum sensing.
Machine learning can be used to identify animals from their sound. This could be a valuable tool for biodiversity monitoring, and for understanding animal behaviour and communication. But to get there, we need very high accuracy at fine-grained acoustic distinctions across hundreds of categories in diverse conditions. In our group we are studying how to achieve this at continental scale. I will describe aspects of bioacoustic data that challenge even the latest deep learning workflows, and our work to address this. Methods covered include adaptive feature representations, deep embeddings and few-shot learning.
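A minimal sketch of few-shot classification in the style of prototypical networks, one of the methods mentioned above, using made-up two-dimensional embeddings in place of learned deep audio embeddings:

```python
import numpy as np

def few_shot_classify(query, support, labels):
    """Prototype-based few-shot classification: average the embeddings of
    the few labelled examples per class, then assign the query to the
    nearest class prototype (cf. prototypical networks)."""
    protos = {c: support[labels == c].mean(axis=0) for c in np.unique(labels)}
    return min(protos, key=lambda c: np.linalg.norm(query - protos[c]))

# Toy 2-D "embeddings" of bird-call clips; a real system would use a
# deep embedding trained on audio spectrograms.
support = np.array([[0.0, 1.0], [0.2, 0.9], [1.0, 0.0], [0.9, 0.2]])
labels = np.array(["wren", "wren", "owl", "owl"])
species = few_shot_classify(np.array([0.1, 0.8]), support, labels)  # → "wren"
```

The appeal for bioacoustics is that a new species or call type can be added from a handful of labelled clips, without retraining the embedding across hundreds of categories.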
The talk will be divided into two parts. The first part introduces a class of first-order methods for constrained optimization that are based on an analogy to non-smooth dynamical systems. The key underlying idea is to express constraints in terms of velocities instead of positions, which has the algorithmic consequence that optimizations over feasible sets at each iteration are replaced with optimizations over local, sparse convex approximations. This results in a simplified suite of algorithms and an expanded range of possible applications in machine learning. In the second part of my talk, I will present a robot learning algorithm for trajectory tracking. The method incorporates prior knowledge about the system dynamics, and by optimizing over feedforward actions it mitigates the risk of instability during deployment. The algorithm will be evaluated on a ping-pong-playing robot that is actuated by soft pneumatic muscles.
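A minimal sketch of the velocity-constraint idea for a single linear inequality (the problem instance, step size, and constraint-softening parameter are illustrative assumptions): the velocity is projected onto a local halfspace, instead of the position being projected onto the feasible set.

```python
import numpy as np

def velocity_step(x, grad_f, c, b, alpha=0.1, beta=1.0):
    """One step of a velocity-based method for min f(x) s.t. c @ x <= b.
    The constraint is imposed on the velocity v via the local halfspace
    c @ v <= -beta * g(x), so each iteration needs only a cheap analytic
    projection rather than a projection onto the feasible set."""
    v = -grad_f(x)
    g = c @ x - b                       # constraint value g(x) <= 0
    viol = c @ v + beta * g
    if viol > 0:                        # project v onto the halfspace
        v = v - viol * c / (c @ c)
    return x + alpha * v

target = np.array([2.0, 2.0])
c, b = np.array([1.0, 0.0]), 1.0       # constraint: x[0] <= 1
x = np.zeros(2)
for _ in range(200):
    x = velocity_step(x, lambda z: 2 * (z - target), c, b)
# x approaches the constrained optimum [1, 2] of min ||x - target||^2.
```

The halfspace projection here is closed-form; with many sparse constraints the per-iteration sub-problem stays a small, local convex program, which is the source of the claimed simplification.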
The decarbonization of buildings presents new challenges for the reliability of the electrical grid because of the intermittency of renewable energy sources and the increase in grid load brought about by end-use electrification. To restore reliability, grid-interactive efficient buildings can provide flexibility services to the grid through demand response. Residential demand response programs, however, are hindered by the need for manual intervention by customers. To maximize the energy flexibility potential of residential buildings, an advanced control architecture is needed. Reinforcement learning (RL) is well-suited for the control of flexible resources, as it can adapt to unique building characteristics in a way that expert systems cannot. Yet factors hindering the adoption of RL in real-world applications include its large data requirements for training, control security, and generalizability. This talk will cover some of our recent work addressing these challenges. We proposed the MERLIN framework and developed a digital twin of a real-world 17-building grid-interactive residential community in CityLearn. We show that 1) independent RL controllers for batteries improve building- and district-level KPIs compared to a reference rule-based controller (RBC) by tailoring their policies to individual buildings, 2) despite unique occupant behaviors, transferring the RL policy of any one building to the other buildings provides comparable performance while reducing the cost of training, and 3) training RL controllers on limited temporal data that does not capture the full seasonality of occupant behavior has little effect on performance. Although battery control could maintain or worsen the zero-net-energy (ZNE) condition of the buildings, KPIs that are typically improved by the ZNE condition (electricity price and carbon emissions) are further improved when the batteries are managed by an advanced controller.
In this talk, I will discuss our recent research on understanding post-hoc interpretability. I will begin by introducing a characterization of post-hoc interpretability methods as local function approximators, and the implications of this viewpoint, including a no-free-lunch theorem for explanations. Next, we shall challenge the assumption that post-hoc explanations provide information about a model's discriminative capabilities p(y|x) and instead demonstrate that many common methods instead rely on a conditional generative model p(x|y). This observation underscores the importance of being cautious when using such methods in practice. Finally, I will propose to resolve this via regularization of model structure, specifically by training low curvature neural networks, resulting in improved model robustness and stable gradients.
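As a deliberately simple instance of the local-function-approximator view, a gradient-style explanation is just the slope of the best local linear fit to the model around the input; the model and input below are illustrative stand-ins, not methods from the talk.

```python
import numpy as np

def local_linear_explanation(f, x, eps=1e-4):
    """Central finite-difference estimate of the local linear fit
    f(x + d) ~= f(x) + w @ d; the weights w play the role of a
    saliency/attribution map for the input x."""
    w = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        w[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return w

model = lambda z: np.tanh(3 * z[0] - 2 * z[1])   # stand-in classifier score
x = np.array([0.2, -0.1])
w = local_linear_explanation(model, x)
# w approximates the model's gradient at x: here feature 0 pushes the
# score up and feature 1 pushes it down.
```

Viewing popular methods as variants of this local approximation is what makes results like the no-free-lunch theorem for explanations possible: the quality of any such explanation is bounded by how well a simple function can fit the model locally.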
High-dimensional spatio-temporal dynamics can often be encoded in a low-dimensional subspace. Engineering applications for modeling, characterization, design, and control of such large-scale systems often rely on dimensionality reduction to make solutions computationally tractable in real-time. Common existing paradigms for dimensionality reduction include linear methods, such as the singular value decomposition (SVD), and nonlinear methods, such as variants of convolutional autoencoders (CAE). However, these encoding techniques lack the ability to efficiently represent the complexity associated with spatio-temporal data, which often requires variable geometry, non-uniform grid resolution, adaptive meshing, and/or parametric dependencies. To resolve these practical engineering challenges, we propose a general framework called Neural Implicit Flow (NIF) that enables a mesh-agnostic, low-rank representation of large-scale, parametric, spatio-temporal data. NIF consists of two modified multilayer perceptrons (MLPs): (i) ShapeNet, which isolates and represents the spatial complexity, and (ii) ParameterNet, which accounts for any other input complexity, including parametric dependencies, time, and sensor measurements. We demonstrate the utility of NIF for parametric surrogate modeling, enabling the interpretable representation and compression of complex spatio-temporal dynamics, efficient many-spatial-query tasks, and improved generalization performance for sparse reconstruction.
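The two-network structure can be sketched as an (untrained) hypernetwork; the layer sizes, activations, and single-linear-layer ParameterNet below are arbitrary simplifications for illustration. ParameterNet maps time/parameters to the weights of ShapeNet, and ShapeNet maps a spatial coordinate to the field value, so the representation is independent of any mesh.

```python
import numpy as np

rng = np.random.default_rng(0)

H = 8                                    # ShapeNet hidden width
n_shape = (1 * H + H) + (H * 1 + 1)      # weights + biases of a 1-H-1 MLP

# ParameterNet: maps (t, mu) -> flattened ShapeNet weights (untrained here;
# in NIF this is itself an MLP trained jointly with everything else).
W_p = rng.normal(scale=0.1, size=(2, n_shape))

def shapenet(x, theta):
    """Evaluate the spatial MLP u(x) given its flattened weights theta."""
    W1, b1 = theta[:H].reshape(1, H), theta[H:2 * H]
    W2, b2 = theta[2 * H:3 * H].reshape(H, 1), theta[3 * H]
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).ravel()

def nif(x, t_mu):
    theta = t_mu @ W_p                   # ParameterNet forward pass
    return shapenet(x, theta)

x = np.linspace(0, 1, 50).reshape(-1, 1)   # arbitrary query points: mesh-agnostic
u = nif(x, np.array([0.3, 1.5]))           # field at time t=0.3, parameter mu=1.5
```

Because spatial locations enter only as query inputs to ShapeNet, the same trained model can be evaluated on any geometry or resolution, which is what sidesteps the fixed-grid assumption of SVD- and CAE-based reductions.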
Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. Existing approaches use action-based representations that do not capture the goal-based meaning of a language expression and do not generalize to partially observed environments. The aim of my research program is to create autonomous robots that can understand complex goal-based commands and execute those commands in partially observed, dynamic environments. I will describe demonstrations of object-search in a POMDP setting with information about object locations provided by language, and mapping between English and Linear Temporal Logic, enabling a robot to understand complex natural language commands in city-scale environments. These advances represent steps towards robots that interpret complex natural language commands in partially observed environments using a decision theoretic framework.
Rapidly decarbonising the global energy system is critical for addressing climate change, but concerns about costs have been a barrier to implementation. Historically, most energy-economy models have overestimated the future costs of renewable energy technologies and underestimated their deployment, thereby overestimating total energy transition costs. These issues have driven calls for alternative approaches and more reliable technology forecasting methods. We use an approach based on probabilistic cost forecasting methods to estimate future energy system costs in a variety of scenarios. Our findings suggest that, compared to continuing with a fossil fuel-based system, a rapid green energy transition will likely result in net savings of many trillions of dollars - even without accounting for climate damages or co-benefits of climate policy.
Sustainability today encompasses three interconnected imperatives that all businesses must face and help to address: the increasing impact of climate change, the degradation of natural systems, and the growth of inequality. Business leaders increasingly understand, particularly through the engagement of capital markets, that investors, consumers, and other business stakeholders are setting expectations for how companies respond to these challenges and prepare for their business impact. More and more companies have shifted from treating sustainability as a single function in the company to one that is integrated across the firm. This translates directly into how companies are rethinking their product design and innovation efforts for sustainability and the technologies they will require. Some product categories, like heating and air conditioning systems for buildings, are part of the problem while also potentially offering real solutions.
A seminar based upon the author’s bestselling book, CLIMATE CHANGE and the road to NET-ZERO. The session will explore how humanity has broken free from the shackles of poverty, suffering, and war and, for the first time in human history, grown both population and prosperity. It will also delve into how a single species has reconfigured the natural world, repurposed the Earth’s resources, and begun to re-engineer the climate.
Using these conflicting narratives, the talk will explore the science, economics, technology, and politics of climate change. It will construct an argument demonstrating that, under many energy transition pathways, solving global warming requires no trade-off between the economy and the environment, present and future generations, or rich and poor, ultimately concluding that a twenty-year transition to a zero-carbon system provides a win-win solution for all on planet Earth.
The visual world has its inherent structure: scenes are made of multiple identical objects; different objects may have the same color or material, with a regular layout; each object can be symmetric and have repetitive parts. How can we infer, represent, and use such structure from raw data, without hampering the expressiveness of neural networks? In this talk, I will demonstrate that such structure, or code, can be learned from natural supervision. Here, natural supervision can be from pixels, where neuro-symbolic methods automatically discover repetitive parts and objects for scene synthesis. It can also be from objects, where humans during fabrication introduce priors that can be leveraged by machines to infer regular intrinsics such as texture and material. When solving these problems, structured representations and neural nets play complementary roles: it is more data-efficient to learn with structured representations, and they generalize better to new scenarios with robustly captured high-level information; neural nets effectively extract complex, low-level features from cluttered and noisy visual data.
Autonomous systems are emerging as a driving technology for countless applications. Numerous disciplines tackle the challenges of making these systems trustworthy, adaptable, user-friendly, and economical. At the same time, the existing disciplinary boundaries delay and possibly even obstruct progress. I argue that the nonconventional problems that arise in designing and verifying autonomous systems require hybrid solutions at the intersection of learning, formal methods, and controls. I will present examples of such hybrid solutions in the context of learning in sequential decision-making processes. These results offer novel means for effectively integrating physics-based, contextual, or structural prior knowledge into data-driven learning algorithms. They improve data efficiency by several orders of magnitude, as well as generalizability to environments and tasks that the system has not experienced previously.
This seminar presents a comprehensive design and simulation procedure for Permanent Magnet Synchronous Machines (PMSMs) for traction applications. The design of heavily saturated traction PMSMs is a multidisciplinary engineering challenge that CAD software suites struggle to grasp, whereas design equations alone are far too approximate for the purpose. This tutorial will present the design toolchain of SyR-e, where magnetic and structural design equations are fast-FEA-corrected for an insightful initial design, later FEA-calibrated with free or commercial FEA tools. One e-motor will be designed from scratch, referring to the specs and size of the Tesla Model 3 rear-axle e-motor. The circuital model of one motor with inverter and discrete-time control will be automatically generated, in Simulink and PLECS, with accessible torque-control source code, for simulation of healthy and faulty conditions, ready for real-time implementation (e.g., HiL).
Human sensory perception of the physical world is rich and multimodal and can flexibly integrate input from all five sensory modalities -- vision, touch, smell, hearing, and taste. In AI, however, attention has primarily focused on visual perception. In this talk, I will introduce my efforts in connecting vision with sound, which will allow machine perception systems to see objects and infer physics from multi-sensory data. In the first part of my talk, I will introduce a self-supervised approach that learns to parse images and separate sound sources by watching and listening to unlabeled videos, without requiring additional manual supervision. In the second part of my talk, I will show how we may further infer the underlying causal structure in 3D environments through visual and auditory observations. This enables agents to seek the source of a repeating environmental sound (e.g., an alarm) or to identify what object has fallen, and where, from an intermittent impact sound.
Machine learning has shown incredible promise in robotics, with some notable recent demonstrations in manipulation and sim2real transfer. These results, however, require either an accurate a priori model (for simulation) or a large amount of data. In contrast, my lab is focused on enabling robots to enter novel environments and then, with minimal time to gather information, accomplish complex tasks. In this talk, I will argue that the hybrid or contact-driven nature of real-world robotics, where a robot must safely and quickly interact with objects, drives this high data requirement. In particular, the inductive biases inherent in standard learning methods fundamentally clash with the non-differentiable physics of contact-rich robotics. Focusing on model learning, or system identification, I will show both empirical and theoretical results which demonstrate that contact stiffness leads to poor training and generalization, leading to some healthy skepticism of simulation experiments trained on artificially soft environments. Fortunately, implicit learning formulations, which embed convex optimization problems, can dramatically reshape the optimization landscape for these stiff problems. By carefully reasoning about the roles of stiffness and discontinuity, and integrating non-smooth structures, we demonstrate dramatically improved learning performance. Within this family of approaches, ContactNets accurately identifies the geometry and dynamics of a six-sided cube bouncing, sliding, and rolling across a surface from only a handful of sample trajectories. Similarly, a piecewise-affine hybrid system with thousands of modes can be identified purely from state transitions. Time permitting, I'll discuss how these learned models can be deployed for control via recent results in real-time, multi-contact MPC.