Jing Liu

- Phone: 617-621-7584
- Email:
- Position: Visiting Research Scientist, Research / Technical Staff
- Education: Ph.D., University of California, San Diego, 2019
- Research Areas:
Biography
Before joining MERL, Jing was an Illinois Future Faculty Fellow in the Computer Science Department of the University of Illinois Urbana-Champaign (UIUC). Prior to that, he was a Postdoctoral Research Associate at the Coordinated Science Laboratory of UIUC. His research interests include trustworthy AI, distributed learning and inference, robust and efficient Internet-of-Things (IoT), and green AI.
Recent News & Events
NEWS: MERL Papers and Workshops at AAAI 2025
Date: February 25, 2025 - March 4, 2025
Where: The Association for the Advancement of Artificial Intelligence (AAAI)
MERL Contacts: Ankush Chakrabarty; Toshiaki Koike-Akino; Jing Liu; Kuan-Chuan Peng; Diego Romeres; Ye Wang
Research Areas: Artificial Intelligence, Machine Learning, Optimization
Brief: MERL researchers presented 2 conference papers, 2 workshop papers, and co-organized 1 workshop at the AAAI 2025 conference, which was held in Philadelphia from Feb. 25 to Mar. 4, 2025. AAAI is one of the most prestigious and competitive international conferences in artificial intelligence (AI). Details of MERL contributions are provided below.
- AAAI Papers in Main Tracks:
1. "Forget to Flourish: Leveraging Machine-Unlearning on Pretrained Language Models for Privacy Leakage" by M.R.U. Rashid, J. Liu, T. Koike-Akino, Y. Wang, and S. Mehnaz. [Oral Presentation]
This work proposes a novel unlearning-based model poisoning method that amplifies privacy breaches during fine-tuning. Extensive empirical studies show the proposed method's efficacy on both membership inference and data extraction attacks. The attack is stealthy enough to bypass detection-based defenses, and differential privacy cannot effectively defend against it without significantly impacting model utility.
Paper: https://www.merl.com/publications/TR2025-017
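As background on the attack surface this paper targets, the sketch below shows a generic loss-thresholding membership-inference test, one of the standard attack families evaluated in this setting; the toy losses and threshold are illustrative assumptions, and this is not the paper's unlearning-based method.

    import numpy as np

    def loss_threshold_membership_inference(per_example_losses, threshold):
        """Generic baseline: flag records whose loss under the released model is
        unusually low as likely training members (low loss suggests memorization)."""
        return np.asarray(per_example_losses) < threshold

    # Hypothetical cross-entropy losses for four candidate records.
    candidate_losses = [0.12, 2.31, 0.05, 1.87]
    print(loss_threshold_membership_inference(candidate_losses, threshold=0.5))
    # -> [ True False  True False]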
2. "User-Preference Meets Pareto-Optimality: Multi-Objective Bayesian Optimization with Local Gradient Search" by J.H.S. Ip, A. Chakrabarty, A. Mesbah, and D. Romeres. [Poster Presentation]
This paper introduces a sample-efficient multi-objective Bayesian optimization method that integrates user preferences with gradient-based search to find near-Pareto optimal solutions. The proposed method achieves high utility and reduces distance to Pareto-front solutions across both synthetic and real-world problems, underscoring the importance of minimizing gradient uncertainty during gradient-based optimization. Additionally, the study introduces a novel utility function that respects Pareto dominance and effectively captures diverse user preferences.
Paper: https://www.merl.com/publications/TR2025-018
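To make the notions of Pareto dominance and preference-weighted utility concrete, the sketch below implements a standard dominance check and a weighted-Chebyshev scalarization, a classical construction that never ranks a dominated point above a point that dominates it; the weights, reference point, and objective values are made-up assumptions, and this is not the utility function proposed in the paper.

    import numpy as np

    def dominates(f_a, f_b):
        """True if objective vector f_a Pareto-dominates f_b (minimization)."""
        f_a, f_b = np.asarray(f_a), np.asarray(f_b)
        return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

    def chebyshev_utility(f, weights, ideal_point):
        """Negated weighted-Chebyshev scalarization: larger is better, and if
        f_a dominates f_b then f_a's utility is at least as large as f_b's."""
        gap = np.asarray(weights) * (np.asarray(f) - np.asarray(ideal_point))
        return -float(np.max(gap))

    # Hypothetical two-objective example; the user weights objective 1 twice as much.
    weights, ideal = [2.0, 1.0], [0.0, 0.0]
    f_a, f_b = [0.2, 0.5], [0.3, 0.5]  # f_a dominates f_b
    assert dominates(f_a, f_b)
    assert chebyshev_utility(f_a, weights, ideal) >= chebyshev_utility(f_b, weights, ideal)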
- AAAI Workshop Papers:
1. "Quantum Diffusion Models for Few-Shot Learning" by R. Wang, Y. Wang, J. Liu, and T. Koike-Akino.
This work presents the quantum diffusion model (QDM) as an approach to overcoming the challenges of quantum few-shot learning (QFSL). It introduces three novel algorithms, developed from complementary data-driven and algorithmic perspectives, to enhance the performance of QFSL tasks. Extensive experiments demonstrate that these algorithms achieve significant performance gains over traditional baselines, underscoring the potential of QDM to advance QFSL by effectively leveraging quantum noise modeling and label guidance.
Paper: https://www.merl.com/publications/TR2025-025
2. "Quantum Implicit Neural Compression", by T. Fujihashi and T., Koike-Akino.
This work introduces a quantum counterpart of implicit neural representation (quINR), which leverages the exponentially rich expressivity of quantum neural networks to improve on classical INR-based signal compression methods. Evaluations on benchmark datasets show that the proposed quINR-based compression can improve rate-distortion performance in image compression compared with traditional codecs and classical INR-based coding methods.
Paper: https://www.merl.com/publications/TR2025-024
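For context, the classical building block behind quINR is an implicit neural representation: a small network trained to map coordinates to signal values, so that the trained weights themselves serve as the compressed code. The sketch below fits such a network to a synthetic 64x64 image in PyTorch; the architecture, target image, and hyperparameters are illustrative assumptions, and the quantum component is not shown.

    import torch
    import torch.nn as nn

    # Synthetic 64x64 target image and its (x, y) coordinate grid in [-1, 1]^2.
    H = W = 64
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = (0.5 + 0.5 * torch.sin(3 * torch.pi * xs)
              * torch.cos(3 * torch.pi * ys)).reshape(-1, 1)

    # Classical INR: a small MLP mapping coordinates to pixel intensities.
    inr = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                        nn.Linear(128, 128), nn.ReLU(),
                        nn.Linear(128, 1))
    opt = torch.optim.Adam(inr.parameters(), lr=1e-3)

    for step in range(2000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(inr(coords), target)
        loss.backward()
        opt.step()

    # Decoding = evaluating the trained network on the grid; the weights of `inr`
    # are what an INR-based codec would store or transmit.
    reconstruction = inr(coords).detach().reshape(H, W)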
- AAAI Workshops Contributed by MERL:
1. "Scalable and Efficient Artificial Intelligence Systems (SEAS)"
K.-C. Peng co-organized this workshop, which offers a timely forum for experts to share their perspectives on designing and developing robust computer vision (CV), machine learning (ML), and artificial intelligence (AI) algorithms and translating them into real-world solutions.
Workshop link: https://seasworkshop.github.io/aaai25/index.html
2. "Quantum Computing and Artificial Intelligence"
T. Koike-Akino served as a session chair for the Quantum Neural Network session of this workshop, which seeks contributions encompassing theoretical and applied advances in quantum AI, quantum computing (QC) to enhance classical AI, and classical AI to tackle various aspects of QC.
Workshop link: https://sites.google.com/view/qcai2025/
NEWS: MERL Researchers to Present 2 Conference and 11 Workshop Papers at NeurIPS 2024
Date: December 10, 2024 - December 15, 2024
Where: Advances in Neural Information Processing Systems (NeurIPS)
MERL Contacts: Petros T. Boufounos; Matthew Brand; Ankush Chakrabarty; Anoop Cherian; François Germain; Toshiaki Koike-Akino; Christopher R. Laughman; Jonathan Le Roux; Jing Liu; Suhas Lohit; Tim K. Marks; Yoshiki Masuyama; Kieran Parsons; Kuan-Chuan Peng; Diego Romeres; Pu (Perry) Wang; Ye Wang; Gordon Wichern
Research Areas: Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio, Human-Computer Interaction, Information Security
Brief: MERL researchers will attend and present the following papers at the 2024 Advances in Neural Information Processing Systems (NeurIPS) Conference and Workshops.
1. "RETR: Multi-View Radar Detection Transformer for Indoor Perception" by Ryoma Yataka (Mitsubishi Electric), Adriano Cardace (Bologna University), Perry Wang (Mitsubishi Electric Research Laboratories), Petros Boufounos (Mitsubishi Electric Research Laboratories), Ryuhei Takahashi (Mitsubishi Electric). Main Conference. https://neurips.cc/virtual/2024/poster/95530
2. "Evaluating Large Vision-and-Language Models on Children's Mathematical Olympiads" by Anoop Cherian (Mitsubishi Electric Research Laboratories), Kuan-Chuan Peng (Mitsubishi Electric Research Laboratories), Suhas Lohit (Mitsubishi Electric Research Laboratories), Joanna Matthiesen (Math Kangaroo USA), Kevin Smith (Massachusetts Institute of Technology), Josh Tenenbaum (Massachusetts Institute of Technology). Main Conference, Datasets and Benchmarks track. https://neurips.cc/virtual/2024/poster/97639
3. "Probabilistic Forecasting for Building Energy Systems: Are Time-Series Foundation Models The Answer?" by Young-Jin Park (Massachusetts Institute of Technology), Jing Liu (Mitsubishi Electric Research Laboratories), François G Germain (Mitsubishi Electric Research Laboratories), Ye Wang (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Gordon Wichern (Mitsubishi Electric Research Laboratories), Navid Azizan (Massachusetts Institute of Technology), Christopher R. Laughman (Mitsubishi Electric Research Laboratories), Ankush Chakrabarty (Mitsubishi Electric Research Laboratories). Time Series in the Age of Large Models Workshop.
4. "Forget to Flourish: Leveraging Model-Unlearning on Pretrained Language Models for Privacy Leakage" by Md Rafi Ur Rashid (Penn State University), Jing Liu (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Shagufta Mehnaz (Penn State University), Ye Wang (Mitsubishi Electric Research Laboratories). Workshop on Red Teaming GenAI: What Can We Learn from Adversaries?
5. "Spatially-Aware Losses for Enhanced Neural Acoustic Fields" by Christopher Ick (New York University), Gordon Wichern (Mitsubishi Electric Research Laboratories), Yoshiki Masuyama (Mitsubishi Electric Research Laboratories), François G Germain (Mitsubishi Electric Research Laboratories), Jonathan Le Roux (Mitsubishi Electric Research Laboratories). Audio Imagination Workshop.
6. "FV-NeRV: Neural Compression for Free Viewpoint Videos" by Sorachi Kato (Osaka University), Takuya Fujihashi (Osaka University), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Takashi Watanabe (Osaka University). Machine Learning and Compression Workshop.
7. "GPT Sonography: Hand Gesture Decoding from Forearm Ultrasound Images via VLM" by Keshav Bimbraw (Worcester Polytechnic Institute), Ye Wang (Mitsubishi Electric Research Laboratories), Jing Liu (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories). AIM-FM: Advancements In Medical Foundation Models: Explainability, Robustness, Security, and Beyond Workshop.
8. "Smoothed Embeddings for Robust Language Models" by Hase Ryo (Mitsubishi Electric), Md Rafi Ur Rashid (Penn State University), Ashley Lewis (Ohio State University), Jing Liu (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Kieran Parsons (Mitsubishi Electric Research Laboratories), Ye Wang (Mitsubishi Electric Research Laboratories). Safe Generative AI Workshop.
9. "Slaying the HyDRA: Parameter-Efficient Hyper Networks with Low-Displacement Rank Adaptation" by Xiangyu Chen (University of Kansas), Ye Wang (Mitsubishi Electric Research Laboratories), Matthew Brand (Mitsubishi Electric Research Laboratories), Pu Wang (Mitsubishi Electric Research Laboratories), Jing Liu (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories). Workshop on Adaptive Foundation Models.
10. "Preference-based Multi-Objective Bayesian Optimization with Gradients" by Joshua Hang Sai Ip (University of California Berkeley), Ankush Chakrabarty (Mitsubishi Electric Research Laboratories), Ali Mesbah (University of California Berkeley), Diego Romeres (Mitsubishi Electric Research Laboratories). Workshop on Bayesian Decision-Making and Uncertainty. Lightning talk spotlight.
11. "TR-BEACON: Shedding Light on Efficient Behavior Discovery in High-Dimensions with Trust-Region-based Bayesian Novelty Search" by Wei-Ting Tang (Ohio State University), Ankush Chakrabarty (Mitsubishi Electric Research Laboratories), Joel A. Paulson (Ohio State University). Workshop on Bayesian Decision-Making and Uncertainty.
12. "MEL-PETs Joint-Context Attack for the NeurIPS 2024 LLM Privacy Challenge Red Team Track" by Ye Wang (Mitsubishi Electric Research Laboratories), Tsunato Nakai (Mitsubishi Electric), Jing Liu (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Kento Oonishi (Mitsubishi Electric), Takuya Higashi (Mitsubishi Electric). LLM Privacy Challenge. Special Award for Practical Attack.
13. "MEL-PETs Defense for the NeurIPS 2024 LLM Privacy Challenge Blue Team Track" by Jing Liu (Mitsubishi Electric Research Laboratories), Ye Wang (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Tsunato Nakai (Mitsubishi Electric), Kento Oonishi (Mitsubishi Electric), Takuya Higashi (Mitsubishi Electric). LLM Privacy Challenge. Won 3rd Place Award.
MERL members also contributed to the organization of the Multimodal Algorithmic Reasoning (MAR) Workshop (https://marworkshop.github.io/neurips24/). Organizers: Anoop Cherian (Mitsubishi Electric Research Laboratories), Kuan-Chuan Peng (Mitsubishi Electric Research Laboratories), Suhas Lohit (Mitsubishi Electric Research Laboratories), Honglu Zhou (Salesforce Research), Kevin Smith (Massachusetts Institute of Technology), Tim K. Marks (Mitsubishi Electric Research Laboratories), Juan Carlos Niebles (Salesforce AI Research), Petar Veličković (Google DeepMind).
Awards
AWARD: MERL Wins Awards at NeurIPS LLM Privacy Challenge
Date: December 15, 2024
Awarded to: Jing Liu, Ye Wang, Toshiaki Koike-Akino, Tsunato Nakai, Kento Oonishi, Takuya Higashi
MERL Contacts: Toshiaki Koike-Akino; Jing Liu; Ye Wang
Research Areas: Artificial Intelligence, Machine Learning, Information Security
Brief: The Mitsubishi Electric Privacy Enhancing Technologies (MEL-PETs) team, a collaboration of MERL and Mitsubishi Electric researchers, won awards at the NeurIPS 2024 Large Language Model (LLM) Privacy Challenge: the 3rd Place Award in the Blue Team track and the Special Award for Practical Attack in the Red Team track.
Research Highlights
MERL Publications
- "Quantum Diffusion Models for Few-Shot Learning", AAAI Conference on Artificial Intelligence, March 2025.BibTeX TR2025-025 PDF
- @inproceedings{Wang2025mar,
- author = {Wang, Ruhan and Wang, Ye and Liu, Jing and Koike-Akino, Toshiaki},
- title = {{Quantum Diffusion Models for Few-Shot Learning}},
- booktitle = {AAAI Conference on Artificial Intelligence},
- year = 2025,
- month = mar,
- url = {https://www.merl.com/publications/TR2025-025}
- }
, - "Forget to Flourish: Leveraging Machine-Unlearning on Pretrained Language Models for Privacy Leakage", AAAI Conference on Artificial Intelligence, February 2025.BibTeX TR2025-017 PDF
- @inproceedings{Rashid2025feb,
- author = {Rashid, Md Rafi Ur and Liu, Jing and Koike-Akino, Toshiaki and Wang, Ye and Mehnaz, Shagufta},
- title = {{Forget to Flourish: Leveraging Machine-Unlearning on Pretrained Language Models for Privacy Leakage}},
- booktitle = {AAAI Conference on Artificial Intelligence},
- year = 2025,
- month = feb,
- url = {https://www.merl.com/publications/TR2025-017}
- }
, - "Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents", arXiv, February 2025.BibTeX arXiv
- @article{Lewis2025feb,
- author = {Lewis, Ashley and White, Michael and Liu, Jing and Koike-Akino, Toshiaki and Parsons, Kieran and Wang, Ye},
- title = {{Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents}},
- journal = {arXiv},
- year = 2025,
- month = feb,
- url = {https://www.arxiv.org/abs/2502.19545}
- }
, - "GPT Sonograpy: Hand Gesture Decoding from Forearm Ultrasound Images via VLM", Advances in Neural Information Processing Systems (NeurIPS), December 2024.BibTeX TR2024-175 PDF Presentation
- @inproceedings{Bimbraw2024dec,
- author = {Bimbraw, Keshav and Wang, Ye and Liu, Jing and Koike-Akino, Toshiaki},
- title = {{GPT Sonograpy: Hand Gesture Decoding from Forearm Ultrasound Images via VLM}},
- booktitle = {Advancements In Medical Foundation Models: Explainability, Robustness, Security, and Beyond Workshop at Neural Information Processing Systems (NeurIPS)},
- year = 2024,
- month = dec,
- url = {https://www.merl.com/publications/TR2024-175}
- }
, - "Slaying the HyDRA: Parameter-Efficient Hyper Networks with Low-Displacement Rank Adaptation", Advances in Neural Information Processing Systems (NeurIPS), December 2024.BibTeX TR2024-157 PDF Presentation
- @inproceedings{Chen2024dec,
- author = {Chen, Xiangyu and Wang, Ye and Brand, Matthew and Wang, Pu and Liu, Jing and Koike-Akino, Toshiaki},
- title = {{Slaying the HyDRA: Parameter-Efficient Hyper Networks with Low-Displacement Rank Adaptation}},
- booktitle = {Workshop on Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning at Neural Information Processing Systems (NeurIPS)},
- year = 2024,
- month = dec,
- url = {https://www.merl.com/publications/TR2024-157}
- }
Other Publications
- "Robust mean estimation in high dimensions: An outlier fraction agnostic and efficient algorithm", 2022 IEEE International Symposium on Information Theory (ISIT), 2022, pp. 1115-1120.BibTeX
- @Inproceedings{deshmukh2022robust,
- author = {Deshmukh, Aditya and Liu, Jing and Veeravalli, Venugopal V},
- title = {Robust mean estimation in high dimensions: An outlier fraction agnostic and efficient algorithm},
- booktitle = {2022 IEEE International Symposium on Information Theory (ISIT)},
- year = 2022,
- pages = {1115--1120},
- organization = {IEEE}
- }
, - "CoPur: Certifiably Robust Collaborative Inference via Feature Purification", Advances in Neural Information Processing Systems, 2022.BibTeX
- @Inproceedings{liu2022copur,
- author = {Liu, Jing and Xie, Chulin and Koyejo, Oluwasanmi O and Li, Bo},
- title = {CoPur: Certifiably Robust Collaborative Inference via Feature Purification},
- booktitle = {Advances in Neural Information Processing Systems},
- year = 2022
- }
, - "Rvfr: Robust vertical federated learning via feature subspace recovery", NeurIPS Workshop New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership, 2021.BibTeX
- @Inproceedings{liu2021rvfr,
- author = {Liu, Jing and Xie, Chulin and Kenthapadi, Krishnaram and Koyejo, Sanmi and Li, Bo},
- title = {Rvfr: Robust vertical federated learning via feature subspace recovery},
- booktitle = {NeurIPS Workshop New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership},
- year = 2021
- }
, - "Information flow optimization in inference networks", ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 8289-8293.BibTeX
- @Inproceedings{deshmukh2020information,
- author = {Deshmukh, Aditya and Liu, Jing and Veeravalli, Venugopal V and Verma, Gunjan},
- title = {Information flow optimization in inference networks},
- booktitle = {ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
- year = 2020,
- pages = {8289--8293},
- organization = {IEEE}
- }
, - "Sparse Bayesian learning for robust PCA: Algorithms and analyses", IEEE Transactions on Signal Processing, Vol. 67, No. 22, pp. 5837-5849, 2019.BibTeX
- @Article{liu2019sparse,
- author = {Liu, Jing and Rao, Bhaskar D},
- title = {Sparse Bayesian learning for robust PCA: Algorithms and analyses},
- journal = {IEEE Transactions on Signal Processing},
- year = 2019,
- volume = 67,
- number = 22,
- pages = {5837--5849},
- publisher = {IEEE}
- }
, - "Robust PCA via ℓ0-ℓ1 Regularization", IEEE Transactions on Signal Processing, Vol. 67, No. 2, pp. 535-549, 2018.BibTeX
- @Article{liu2018robust,
- author = {Liu, Jing and Rao, Bhaskar D},
- title = {Robust PCA via $$\backslash$ell \_ $\{$0$\}$ $-$$\backslash$ell \_ $\{$1$\}$ $ Regularization},
- journal = {IEEE Transactions on Signal Processing},
- year = 2018,
- volume = 67,
- number = 2,
- pages = {535--549},
- publisher = {IEEE}
- }
, - "Robust Linear Regression via ℓ0 Regularization", IEEE Transactions on Signal Processing, Vol. 66, No. 3, pp. 698-713, 2017.BibTeX
- @Article{liu2017robust,
- author = {Liu, Jing and Cosman, Pamela C and Rao, Bhaskar D},
- title = {Robust Linear Regression via $$\backslash$ell\_0 $ Regularization},
- journal = {IEEE Transactions on Signal Processing},
- year = 2017,
- volume = 66,
- number = 3,
- pages = {698--713},
- publisher = {IEEE}
- }
Software & Data Downloads
Videos