arXiv:2506.02724v2 Announce Type: replace
Abstract: The widespread use of language models in modern applications is inconceivable without parameter-efficient fine-tuning techniques such as low-rank adaptation ($\texttt{LoRA}$), which adds trainable adapters to selected layers. Although $\texttt{LoRA}$ can find accurate solutions, training large models with it still requires significant memory, and deciding which layers should receive adapters relies on intuition. In this paper, we propose a novel method, $\texttt{WeightLoRA}$, which overcomes these issues by adaptively selecting the most critical $\texttt{LoRA}$ heads throughout the optimization process. As a result, we can significantly reduce the number of trainable parameters while maintaining comparable or even superior metric values. We conduct experiments on a series of competitive benchmarks with DeBERTa, BART, and Llama models, comparing our method against other adaptive approaches. The experimental results demonstrate the efficacy of $\texttt{WeightLoRA}$ and the superior performance of $\texttt{WeightLoRA+}$ in almost all cases.
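To make the idea concrete, here is a minimal NumPy sketch (not the paper's implementation) of a linear layer with a LoRA adapter whose contribution is scaled by a per-head importance weight; the name `head_weight` and the gating form are assumptions for illustration. In the spirit of the abstract, a head whose weight is driven to zero is effectively pruned, which is how adaptive head selection can reduce the number of trainable parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and LoRA rank (not from the paper)
d_in, d_out, r = 8, 8, 2

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # B starts at zero, as in standard LoRA

def forward(x, head_weight):
    # Effective weight: W + head_weight * (B @ A).
    # head_weight = 0 disables the adapter head entirely (pruned head).
    return (W + head_weight * (B @ A)) @ x

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter is a no-op regardless of head_weight.
assert np.allclose(forward(x, 1.0), W @ x)
```

During fine-tuning, one would update `A`, `B`, and the head weights while keeping `W` frozen; selecting only the heads with the largest weights is the kind of adaptive choice the abstract describes.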
