Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA
arXiv:2502.01755v4 Announce Type: replace

Abstract: Parameter-Efficient Fine-Tuning (PEFT) methods such as Low-Rank Adaptation (LoRA) reduce the computational and communication costs of federated training. We propose RoLoRA, a federated framework that uses alternating optimization to fine-tune LoRA adapters. Our approach emphasizes the importance…
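
The abstract is cut off, but the mechanism it names, alternating optimization of the two LoRA factors under federated averaging, can be sketched. The PyTorch sketch below is illustrative only and is not the paper's implementation: `LoRALinear`, `local_step`, and `run_rounds` are hypothetical names, and the even/odd scheduling of which factor trains in a given round is an assumption about how the alternation might be realized.

```python
# Hypothetical sketch: federated LoRA fine-tuning with alternating factor updates.
import copy
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (B @ A)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

def local_step(model, batch, loss_fn, train_A, lr=1e-3):
    """One client update that trains exactly one LoRA factor this round."""
    for m in model.modules():
        if isinstance(m, LoRALinear):
            m.A.requires_grad = train_A
            m.B.requires_grad = not train_A
    opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=lr)
    x, y = batch
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    return {n: p.detach().clone()
            for n, p in model.named_parameters() if p.requires_grad}

def run_rounds(global_model, client_batches, loss_fn, num_rounds=4):
    """Alternate the trained factor across rounds: even rounds A, odd rounds B."""
    for r in range(num_rounds):
        train_A = (r % 2 == 0)
        updates = []
        for batch in client_batches:
            local = copy.deepcopy(global_model)  # broadcast global state to client
            updates.append(local_step(local, batch, loss_fn, train_A))
        # FedAvg over the single trained factor; the frozen factor is shared,
        # so averaging introduces no cross-term bias between clients.
        averaged = {k: torch.stack([u[k] for u in updates]).mean(dim=0)
                    for k in updates[0]}
        state = global_model.state_dict()
        state.update(averaged)
        global_model.load_state_dict(state)
    return global_model

# Example: a tiny regression model with one LoRA-adapted layer across 3 clients.
model = LoRALinear(nn.Linear(16, 4), rank=2)
clients = [(torch.randn(8, 16), torch.randn(8, 4)) for _ in range(3)]
run_rounds(model, clients, nn.MSELoss(), num_rounds=4)
```

A common rationale for alternation in this setting is that naively averaging full client products B_k A_k is not the same as the product of the averaged factors; freezing one shared factor per round makes the FedAvg update exact for the factor currently being trained. Whether RoLoRA uses exactly this round schedule is not stated in the truncated abstract.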
