Jing Liu

- Phone: 617-621-7584
Position:
Visiting Research Scientist, Research / Technical Staff
Education:
Ph.D., University of California, San Diego, 2019
Biography
Before joining MERL, Jing was an Illinois Future Faculty Fellow in the Computer Science Department of the University of Illinois Urbana-Champaign (UIUC). Prior to that, he was a Postdoctoral Research Associate at the Coordinated Science Lab of UIUC. His research interests include trustworthy AI, distributed learning and inference, robust and efficient Internet-of-Things (IoT), and green AI.
Recent News & Events
NEWS: MERL researchers present workshop papers at NeurIPS 2022
Date: December 2, 2022 - December 8, 2022
MERL Contacts: Matthew Brand; Toshiaki Koike-Akino; Jing Liu; Saviz Mowlavi; Kieran Parsons; Ye Wang
Research Areas: Artificial Intelligence, Control, Dynamical Systems, Machine Learning, Signal Processing
Brief: In addition to 5 papers in recent news (https://www.merl.com/news/news-20221129-1450), MERL researchers presented 2 papers at the NeurIPS Conference Workshop, which was held Dec. 2-8. NeurIPS is one of the most prestigious and competitive international conferences in machine learning.
- “Optimal control of PDEs using physics-informed neural networks” by Saviz Mowlavi and Saleh Nabi
Physics-informed neural networks (PINNs) have recently become a popular method for solving forward and inverse problems governed by partial differential equations (PDEs). By incorporating the residual of the PDE into the loss function of a neural network-based surrogate model for the unknown state, PINNs can seamlessly blend measurement data with physical constraints. Here, we extend this framework to PDE-constrained optimal control problems, for which the governing PDE is fully known and the goal is to find a control variable that minimizes a desired cost objective. We validate the performance of the PINN framework by comparing it to state-of-the-art adjoint-based optimization, which performs gradient descent on the discretized control variable while satisfying the discretized PDE.
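The composite loss described above (data misfit plus PDE residual) can be sketched in a toy setting. This is an illustration only, not the authors' code: it assumes a 1D Poisson problem u''(x) = f(x) and a quadratic surrogate u(x) = a + bx + cx², whose second derivative is available in closed form; a real PINN would use a neural network and automatic differentiation.

```python
import numpy as np

# Toy sketch of a PINN-style composite loss (illustrative, not the paper's code).
# Problem: u''(x) = f(x); surrogate: u(x) = a + b*x + c*x**2, so u'' = 2c.

def pinn_loss(theta, x_data, u_data, x_colloc, f, pde_weight=1.0):
    a, b, c = theta
    u_pred = a + b * x_data + c * x_data**2
    data_loss = np.mean((u_pred - u_data) ** 2)   # fit the measurements
    residual = 2.0 * c - f(x_colloc)              # PDE residual u'' - f at collocation points
    pde_loss = np.mean(residual ** 2)             # penalize violation of the PDE
    return data_loss + pde_weight * pde_loss

# f(x) = 2 has the particular solution u(x) = x**2, i.e. theta = (0, 0, 1).
f = lambda x: 2.0 * np.ones_like(x)
x = np.linspace(0.0, 1.0, 5)
print(pinn_loss(np.array([0.0, 0.0, 1.0]), x, x**2, x, f))  # ~0 at the true solution
```

In the optimal-control extension mentioned above, the control variable would additionally enter f (or the boundary terms), and the desired cost objective would be added to this loss.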
- “Learning with noisy labels using low-dimensional model trajectory” by Vasu Singla, Shuchin Aeron, Toshiaki Koike-Akino, Matthew E. Brand, Kieran Parsons, Ye Wang
Noisy annotations in real-world datasets pose a challenge for training deep neural networks (DNNs), detrimentally impacting generalization performance as incorrect labels may be memorized. In this work, we build on the observations that early stopping and low-dimensional subspace learning can help address this issue. First, we show that a prior method is sensitive to the early stopping hyper-parameter. Second, we investigate the effectiveness of PCA for approximating the optimization trajectory under noisy label information. We propose to estimate the low-rank subspace through robust and structured variants of PCA, namely Robust PCA and Sparse PCA. We find that the subspace estimated through these variants can be less sensitive to early stopping, and can outperform PCA to achieve better test error when trained on noisy labels.
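The Robust PCA idea referenced above, splitting observations into a low-rank part plus a sparse outlier part, can be sketched with a simple alternating scheme: hard-thresholding for the sparse part, truncated SVD for the low-rank part. This is a minimal illustration under assumed parameters, not the algorithm proposed in the paper:

```python
import numpy as np

# Minimal sketch of Robust PCA via alternating projections (illustrative only,
# not the paper's method): decompose M ~ L + S with L low-rank and S sparse.

def robust_pca(M, rank, sparsity_thresh, n_iter=100):
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Sparse step: keep only large residual entries as outliers.
        R = M - L
        S = np.where(np.abs(R) > sparsity_thresh, R, 0.0)
        # Low-rank step: best rank-r approximation of M - S via truncated SVD.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return L, S

# Rank-1 matrix with one gross outlier at (0, 0).
M = np.array([[4.0, 4.0], [4.0, 4.0]])
M[0, 0] += 100.0
L, S = robust_pca(M, rank=1, sparsity_thresh=10.0)
```

The plain-PCA baseline in the abstract corresponds to the low-rank step alone; the sparse step is what confers robustness to gross corruptions. Sparse PCA, also mentioned above, would instead constrain the principal directions themselves to be sparse.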
- In addition, new MERL researcher Jing Liu presented a paper entitled “CoPur: Certifiably Robust Collaborative Inference via Feature Purification”, based on his work prior to joining MERL. The paper was selected as a spotlight, highlighted in the lightning talks and the featured-paper panel.
Internships with Jing
CI2109: Trustworthy Generative AI
MERL is seeking a highly motivated and qualified intern to work on methods for trustworthy generative AI. The ideal candidate would have significant research experience in trustworthy AI methods for large language models, such as for preventing hallucinations, handling data memorization issues, generation provenance tracking, and/or grounding with world modeling. A mature understanding of modern machine learning methods, proficiency with Python, and familiarity with deep learning frameworks are expected. Candidates at or beyond the middle of their Ph.D. program, possessing a background in Machine Learning, especially in the context of Natural Language Processing, are strongly encouraged to apply. The expected duration is 3 months with flexible start dates. Join us at MERL and be part of a transformative journey in Generative AI research!
MERL Publications
- "Stabilizing Subject Transfer in EEG Classification with Divergence Estimation", arXiv, October 2023.
  @article{Smedemark-Margulies2023oct,
    author = {Smedemark-Margulies, Niklas and Wang, Ye and Koike-Akino, Toshiaki and Liu, Jing and Parsons, Kieran and Bicer, Yunus and Erdogmus, Deniz},
    title = {Stabilizing Subject Transfer in EEG Classification with Divergence Estimation},
    journal = {arXiv},
    year = 2023,
    month = oct,
    url = {https://arxiv.org/abs/2310.08762}
  }
Other Publications
- "Robust mean estimation in high dimensions: An outlier fraction agnostic and efficient algorithm", 2022 IEEE International Symposium on Information Theory (ISIT), 2022, pp. 1115-1120.
  @inproceedings{deshmukh2022robust,
    author = {Deshmukh, Aditya and Liu, Jing and Veeravalli, Venugopal V},
    title = {Robust mean estimation in high dimensions: An outlier fraction agnostic and efficient algorithm},
    booktitle = {2022 IEEE International Symposium on Information Theory (ISIT)},
    year = 2022,
    pages = {1115--1120},
    organization = {IEEE}
  }
- "CoPur: Certifiably Robust Collaborative Inference via Feature Purification", Advances in Neural Information Processing Systems, 2022.
  @inproceedings{liu2022copur,
    author = {Liu, Jing and Xie, Chulin and Koyejo, Oluwasanmi O and Li, Bo},
    title = {CoPur: Certifiably Robust Collaborative Inference via Feature Purification},
    booktitle = {Advances in Neural Information Processing Systems},
    year = 2022
  }
- "RVFR: Robust vertical federated learning via feature subspace recovery", NeurIPS Workshop New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership, 2021.
  @inproceedings{liu2021rvfr,
    author = {Liu, Jing and Xie, Chulin and Kenthapadi, Krishnaram and Koyejo, Sanmi and Li, Bo},
    title = {RVFR: Robust vertical federated learning via feature subspace recovery},
    booktitle = {NeurIPS Workshop New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership},
    year = 2021
  }
- "Information flow optimization in inference networks", ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 8289-8293.
  @inproceedings{deshmukh2020information,
    author = {Deshmukh, Aditya and Liu, Jing and Veeravalli, Venugopal V and Verma, Gunjan},
    title = {Information flow optimization in inference networks},
    booktitle = {ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
    year = 2020,
    pages = {8289--8293},
    organization = {IEEE}
  }
- "Sparse Bayesian learning for robust PCA: Algorithms and analyses", IEEE Transactions on Signal Processing, Vol. 67, No. 22, pp. 5837-5849, 2019.
  @article{liu2019sparse,
    author = {Liu, Jing and Rao, Bhaskar D},
    title = {Sparse Bayesian learning for robust PCA: Algorithms and analyses},
    journal = {IEEE Transactions on Signal Processing},
    year = 2019,
    volume = 67,
    number = 22,
    pages = {5837--5849},
    publisher = {IEEE}
  }
- "Robust PCA via ℓ0-ℓ1 Regularization", IEEE Transactions on Signal Processing, Vol. 67, No. 2, pp. 535-549, 2018.
  @article{liu2018robust,
    author = {Liu, Jing and Rao, Bhaskar D},
    title = {Robust PCA via ℓ0-ℓ1 Regularization},
    journal = {IEEE Transactions on Signal Processing},
    year = 2018,
    volume = 67,
    number = 2,
    pages = {535--549},
    publisher = {IEEE}
  }
- "Robust Linear Regression via ℓ0 Regularization", IEEE Transactions on Signal Processing, Vol. 66, No. 3, pp. 698-713, 2017.
  @article{liu2017robust,
    author = {Liu, Jing and Cosman, Pamela C and Rao, Bhaskar D},
    title = {Robust Linear Regression via ℓ0 Regularization},
    journal = {IEEE Transactions on Signal Processing},
    year = 2017,
    volume = 66,
    number = 3,
    pages = {698--713},
    publisher = {IEEE}
  }