arXiv:2512.10825v1 Announce Type: cross
Abstract: We consider the design of smoothings of the (coordinate-wise) max function in $\mathbb{R}^d$ in the infinity norm. The LogSumExp function $f(x)=\ln(\sum_{i=1}^d \exp(x_i))$ provides a classical smoothing, differing from the max function in value by at most $\ln(d)$. We provide an elementary construction of a lower bound, establishing that every overestimating smoothing of the max function must differ by at least $\sim 0.8145\ln(d)$. Hence, LogSumExp is optimal up to constant factors. However, in small dimensions, we provide stronger, exactly optimal smoothings attaining our lower bound, showing that the entropy-based LogSumExp approach to smoothing is not exactly optimal.
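The $\ln(d)$ bound cited in the abstract is easy to verify numerically: LogSumExp always overestimates the max, and the gap is largest (exactly $\ln(d)$) when all coordinates are equal. A minimal sketch, using a standard numerically stable implementation (the vector `x` is an arbitrary illustrative example, not from the paper):

```python
import math

def logsumexp(x):
    """Numerically stable LogSumExp: ln(sum_i exp(x_i)).

    Subtracting the max before exponentiating avoids overflow;
    the result is unchanged because ln(e^m * sum e^{x_i - m}) = m + ln(sum e^{x_i - m}).
    """
    m = max(x)
    return m + math.log(sum(math.exp(xi - m) for xi in x))

# Arbitrary example vector: LogSumExp overestimates max by at most ln(d).
x = [0.3, -1.2, 2.5, 0.0]
d = len(x)
assert max(x) <= logsumexp(x) <= max(x) + math.log(d)

# Worst case: all coordinates equal, where the gap is exactly ln(d).
y = [1.0] * d
assert abs(logsumexp(y) - (max(y) + math.log(d))) < 1e-12
```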
