Convergence of continuous-time stochastic gradient descent with applications to deep neural networks


arXiv:2409.07401v2 Announce Type: replace
Abstract: We study a continuous-time approximation of the stochastic gradient descent process for minimizing the expected population loss in learning problems. The main results establish general sufficient conditions for convergence, extending the results of Chatterjee (2022), which were established for (non-stochastic) gradient descent. We show how the main result can be applied to the training of overparametrized neural networks.
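
The paper's precise continuous-time construction is given in the full text. As a rough illustration only, a common continuous-time surrogate for SGD is a diffusion of the form dθ_t = −∇F(θ_t) dt + σ dW_t, which can be simulated by Euler–Maruyama discretization. The sketch below does exactly that on a toy quadratic loss; the loss, step size, and noise level are hypothetical choices for illustration, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_F(theta):
    # Gradient of the toy population loss F(theta) = 0.5 * ||theta||^2,
    # whose unique minimizer is theta = 0. Purely illustrative.
    return theta

def euler_maruyama(theta0, eta=0.01, noise=0.1, n_steps=10_000):
    """Euler-Maruyama simulation of d(theta) = -grad F(theta) dt + noise dW.

    eta plays the role of the discretization step; noise scales the
    Brownian increment. Both are assumed values, not the paper's.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        dW = rng.standard_normal(theta.shape) * np.sqrt(eta)
        theta = theta - eta * grad_F(theta) + noise * dW
    return theta

theta_final = euler_maruyama(theta0=[2.0, -1.0])
print(theta_final)  # hovers near the minimizer 0 when the noise is small
```

With small noise the iterates settle in a neighborhood of the minimizer, which is the kind of convergence behavior the paper's sufficient conditions are meant to guarantee in far greater generality.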