TR2024-156

SuperLoRA: Parameter-Efficient Unified Adaptation of Large Foundation Models


Abstract:

Low-rank adaptation (LoRA) and its variants are widely employed in fine-tuning large models, including large language models for natural language processing and diffusion models for computer vision. This paper proposes a generalized framework called SuperLoRA that unifies and extends different LoRA variants, each of which can be realized under a particular hyper-parameter setting. By introducing new options with grouping, folding, shuffling, projection, and tensor decomposition, SuperLoRA offers high flexibility and demonstrates superior performance, achieving up to a 10-fold gain in parameter efficiency for transfer learning tasks.
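As context for the framework, the sketch below illustrates the basic LoRA update that SuperLoRA generalizes: the frozen weight matrix is adapted by adding a low-rank product of two small trainable factors. The dimensions, rank, and scaling factor here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal LoRA-style low-rank update: instead of fine-tuning the full
# weight matrix W (d_out x d_in), learn two small factors A and B whose
# product forms the update delta_W = alpha * (B @ A).
# All dimensions and the scaling factor below are illustrative assumptions.

d_out, d_in, rank, alpha = 512, 512, 8, 1.0

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection (zero init)

delta_W = alpha * (B @ A)                     # low-rank weight update
W_adapted = W + delta_W                       # adapted weight used at inference

# Parameter-count comparison: full fine-tuning vs. the two low-rank factors.
full_params = d_out * d_in
lora_params = rank * (d_out + d_in)
print(f"full: {full_params}, LoRA: {lora_params}, "
      f"ratio: {full_params / lora_params:.1f}x")
```

SuperLoRA's options (grouping, folding, shuffling, projection, and tensor decomposition) generalize how this low-rank update is structured and shared across weight matrices, which is where the additional parameter-efficiency gains reported in the abstract come from.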

 
