Globally optimized SVD compression of LLMs via Fermi-function-based rank selection and gauge fixing

2025-12-03 20:00 GMT

arXiv:2512.03062v1 Announce Type: new
Abstract: Large Language Models (LLMs) place heavy demands on computational resources. Low-rank decomposition of LLM weights, e.g. via Singular Value Decomposition (SVD), is a promising approach to LLM compression, but it presents several practical hurdles, such as selecting appropriate layer-wise ranks and eliminating the parameter redundancy of the low-rank factors. In this work, we present two physics-inspired improvements to SVD-based LLM compression: (1) FermiGrad, a gradient-descent algorithm that determines globally optimal layer-wise ranks by relaxing the discrete singular-value truncation into a continuous optimization using the Fermi function; (2) PivGa, an additional lossless compression of the low-rank factors that exploits the intrinsic gauge freedom in their parametrization.
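The two ideas in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the sign convention of the Fermi mask, and the choice of an orthogonal gauge transformation are all assumptions made for illustration. The first part replaces the hard top-k singular-value cutoff with a smooth Fermi-function gate (so the effective rank becomes a differentiable quantity); the second part checks the gauge freedom that any invertible matrix inserted between the two low-rank factors leaves their product unchanged.

```python
import numpy as np

def fermi_mask(s, mu, T):
    # Smooth gate on singular values: ~1 for s well above the
    # threshold mu, ~0 well below it; T sets the transition width.
    # (Sign flipped vs. the physics Fermi-Dirac function so that
    # *large* singular values are the "occupied", i.e. kept, ones.)
    return 1.0 / (1.0 + np.exp(-(s - mu) / T))

def soft_truncated_reconstruction(W, mu, T):
    # Differentiable surrogate for rank-truncated SVD: instead of a
    # hard cutoff after the top-k singular values, each value is
    # weighted by the Fermi mask, so the threshold mu (and hence the
    # effective layer rank) can in principle be tuned by gradient
    # descent across all layers jointly.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * (s * fermi_mask(s, mu, T))) @ Vt

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))
W_hat = soft_truncated_reconstruction(W, mu=1.0, T=0.05)

# Gauge freedom of a low-rank factorization W ~ A @ B: for any
# invertible G, (A @ G) @ (G^{-1} @ B) is the same product. Here an
# orthogonal G (so G^{-1} = G.T) is used purely as an illustration of
# the redundancy that a lossless scheme like PivGa can exploit; the
# paper's actual parametrization is not specified in the abstract.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U * np.sqrt(s)                  # left low-rank factor
B = np.sqrt(s)[:, None] * Vt        # right low-rank factor
G, _ = np.linalg.qr(rng.standard_normal((A.shape[1], A.shape[1])))
assert np.allclose(A @ B, (A @ G) @ (G.T @ B))
```

As T shrinks toward zero the Fermi mask approaches a hard step at mu, recovering ordinary truncated SVD; larger T smooths the objective for optimization.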