A Theoretical Framework for Grokking: Interpolation followed by Riemannian Norm Minimisation


arXiv:2505.20172v2
Abstract: We study the dynamics of gradient flow with small weight decay on general training losses $F: \mathbb{R}^d \to \mathbb{R}$. Under mild regularity assumptions and assuming convergence of the unregularised gradient flow, we show that the trajectory with weight decay $\lambda$ exhibits a two-phase behaviour as $\lambda \to 0$. During the initial fast phase, the trajectory follows the unregularised gradient flow and converges to a manifold of critical points of $F$. Then, at time of order $1/\lambda$, the trajectory enters a slow drift phase and follows a Riemannian gradient flow minimising the $\ell_2$-norm of the parameters. This purely optimisation-based phenomenon offers a natural explanation for the \textit{grokking} effect observed in deep learning, where the training loss rapidly reaches zero while the test loss plateaus for an extended period before suddenly improving. We argue that this generalisation jump can be attributed to the slow norm reduction induced by weight decay, as explained by our analysis. We validate this mechanism empirically on several synthetic regression tasks.
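
The two-phase mechanism can be illustrated on a toy problem. Below is a minimal sketch (not the paper's code or experimental setup) of discretised gradient flow with weight decay, $\dot{\theta}_t = -\nabla F(\theta_t) - \lambda \theta_t$, on an over-parameterised linear regression task, where the zero-loss interpolants form an affine manifold of critical points. The dimensions, step size, and $\lambda$ are illustrative assumptions: the iterate first reaches near-zero training loss quickly, then drifts towards the minimum-$\ell_2$-norm interpolant on a time scale of order $1/\lambda$, during which the test loss improves.

```python
# Minimal sketch (illustrative, not the paper's experiments): gradient descent
# with a small weight decay lam on over-parameterised linear regression, where
# the set of zero-training-loss interpolants is an affine manifold of critical
# points of the unregularised loss F.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                              # fewer samples than parameters
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star                              # noiseless targets: exact interpolation exists
X_test = rng.normal(size=(500, d))
y_test = X_test @ w_star

lam = 1e-3                                  # small weight decay; slow phase lives at t ~ 1/lam
eta = 1e-2                                  # step size of the discretised gradient flow
w = rng.normal(size=d)
min_norm = np.linalg.norm(np.linalg.pinv(X) @ y)   # norm of the minimum-l2 interpolant

for step in range(1, 600_001):
    grad = X.T @ (X @ w - y) / n            # gradient of the unregularised loss F
    w -= eta * (grad + lam * w)             # gradient step with weight decay
    if step == 1_000 or step % 60_000 == 0:
        train = 0.5 * np.mean((X @ w - y) ** 2)
        test = 0.5 * np.mean((X_test @ w - y_test) ** 2)
        print(f"step {step:>7d}  train {train:.2e}  test {test:.2e}  "
              f"||w|| {np.linalg.norm(w):.2f}  (min-norm {min_norm:.2f})")
```

In this linear toy the fast phase ends with the training loss near zero but the parameter norm still large; the subsequent norm reduction, and with it the test improvement, unfolds gradually on the $1/\lambda$ time scale rather than as a sharp jump. The sharper grokking-like transition is what the abstract reports on the paper's synthetic regression tasks.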