TR2024-115

MPC of Uncertain Nonlinear Systems with Meta-Learning for Fast Adaptation of Neural Predictive Models


    •  Yan, J., Chakrabarty, A., Rupenyan, A., Lygeros, J., "MPC of Uncertain Nonlinear Systems with Meta-Learning for Fast Adaptation of Neural Predictive Models", International Conference on Automation Science and Engineering (CASE), August 2024.
      BibTeX:

      @inproceedings{Yan2024aug,
        author = {Yan, Jiaqi and Chakrabarty, Ankush and Rupenyan, Alisa and Lygeros, John},
        title = {MPC of Uncertain Nonlinear Systems with Meta-Learning for Fast Adaptation of Neural Predictive Models},
        booktitle = {International Conference on Automation Science and Engineering (CASE)},
        year = 2024,
        month = aug,
        url = {https://www.merl.com/publications/TR2024-115}
      }
  • Research Areas: Control, Machine Learning, Optimization

Abstract:

In this paper, we consider the problem of reference tracking in uncertain nonlinear systems. A neural State-Space Model (NSSM) is used to approximate the nonlinear system, where a deep encoder network learns the nonlinearity from data and a state-space component captures the temporal relationship. This transforms the nonlinear system into a linear system in a latent space, enabling the application of model predictive control (MPC) to determine effective control actions. Our objective is to design the optimal controller using limited data from the target system (the system of interest). To this end, we employ an implicit model-agnostic meta-learning (iMAML) framework that leverages information from source systems (systems that share similarities with the target system) to expedite training on the target system and enhance its control performance. The framework consists of two phases: the (offline) meta-training phase learns an aggregated NSSM using data from source systems, and the (online) meta-inference phase quickly adapts this aggregated model to the target system using only a few data points and a few online training iterations, based on local loss-function gradients. The iMAML algorithm exploits the implicit function theorem to compute the gradient exactly during training, without relying on the entire optimization path. By focusing solely on the optimal solution rather than the path, we can meta-train with less storage complexity and fewer approximations than other contemporary meta-learning algorithms. We demonstrate through numerical examples that our proposed method yields accurate predictive models through adaptation, resulting in a downstream MPC that outperforms several baselines.
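
To make the pipeline concrete, the following is a minimal sketch (not the authors' implementation) of two ingredients the abstract describes: a neural state-space model whose deep encoder lifts the nonlinear system into a latent space with linear dynamics, and the few-step, gradient-based adaptation performed at meta-inference time. All names (NSSM, adapt), the layer sizes, and the proximal weight lam are illustrative assumptions; the meta-training phase itself, in which iMAML computes exact meta-gradients via the implicit function theorem, is omitted.

    # Minimal sketch, assuming a PyTorch-style NSSM; names and sizes are illustrative.
    import torch
    import torch.nn as nn

    class NSSM(nn.Module):
        """Neural state-space model: deep encoder + linear latent dynamics (A, B)."""

        def __init__(self, x_dim: int, u_dim: int, z_dim: int):
            super().__init__()
            # Deep encoder: learns the nonlinearity, mapping state x to latent z.
            self.encoder = nn.Sequential(
                nn.Linear(x_dim, 64), nn.Tanh(), nn.Linear(64, z_dim)
            )
            # Linear dynamics in the latent space: z_next = A z + B u.
            self.A = nn.Parameter(0.1 * torch.randn(z_dim, z_dim))
            self.B = nn.Parameter(0.1 * torch.randn(z_dim, u_dim))
            # Decoder back to the original state space for the prediction loss.
            self.decoder = nn.Linear(z_dim, x_dim)

        def forward(self, x, u):
            z = self.encoder(x)
            z_next = z @ self.A.T + u @ self.B.T
            return self.decoder(z_next)

    def adapt(model, x, u, x_next, steps=5, lr=1e-2, lam=1.0):
        """Few-step adaptation of a meta-trained model to target-system data.

        The proximal term 0.5 * lam * ||theta - theta_meta||^2 mirrors the
        regularized inner problem used by iMAML; at meta-inference time the
        adaptation itself is just a handful of gradient steps on a few samples.
        """
        theta_meta = [p.detach().clone() for p in model.parameters()]
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            pred_loss = nn.functional.mse_loss(model(x, u), x_next)
            prox = sum(((p - p0) ** 2).sum()
                       for p, p0 in zip(model.parameters(), theta_meta))
            (pred_loss + 0.5 * lam * prox).backward()
            opt.step()
        return model

    if __name__ == "__main__":
        # Toy target-system transitions (x, u, x_next); stand-in dynamics only.
        torch.manual_seed(0)
        x, u = torch.randn(32, 3), torch.randn(32, 1)
        x_next = 0.9 * x + 0.1 * torch.tanh(x) + 0.2 * u
        model = adapt(NSSM(x_dim=3, u_dim=1, z_dim=8), x, u, x_next)

After adaptation, the latent pair (A, B) defines the linear prediction model over which a standard MPC can optimize control actions, as described in the abstract.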