On efficiently computable functions, deep networks and sparse compositionality

2025-10-14 19:00 GMT

arXiv:2510.11942v1 Announce Type: new
Abstract: We show that \emph{efficient Turing computability} at any fixed input/output precision implies the existence of \emph{compositionally sparse} (bounded-fan-in, polynomial-size) DAG representations and of corresponding neural approximants achieving the target precision. Concretely: if $f:[0,1]^d \to \mathbb{R}^m$ is computable in time polynomial in the bit-depths, then for every pair of precisions $(n, m_{\mathrm{out}})$ there exists a bounded-fan-in Boolean circuit of size and depth $\mathrm{poly}(n+m_{\mathrm{out}})$ computing the discretized map; replacing each gate by a constant-size neural emulator yields a deep network of size/depth $\mathrm{poly}(n+m_{\mathrm{out}})$ that achieves accuracy $\varepsilon = 2^{-m_{\mathrm{out}}}$. We also relate these constructions to compositional approximation rates \cite{MhaskarPoggio2016b,poggio_deep_shallow_2017,Poggio2017,Poggio2023HowDS} and to optimization viewed as hierarchical search over sparse structures.
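
The following is a minimal sketch, not the paper's construction, of the gate-to-neuron idea behind the second step: each bounded-fan-in Boolean gate in a compositionally sparse DAG is replaced by a constant-size ReLU emulator, so a polynomial-size circuit becomes a network whose size and depth grow by at most a constant factor. The gate set, the emulator weights, and the example circuit are illustrative assumptions, not details from the abstract.

```python
def relu(x):
    return max(0.0, x)

# Constant-size ReLU emulators for a bounded-fan-in gate set.
# On inputs in {0, 1}, each emulator reproduces the gate's output exactly.
def neural_and(a, b):
    return relu(a + b - 1.0)                  # 1 iff a = b = 1

def neural_or(a, b):
    return relu(a + b) - relu(a + b - 1.0)    # clamps a + b to {0, 1}

def neural_not(a):
    return relu(1.0 - a)                      # 1 - a on {0, 1}

EMULATORS = {"AND": neural_and, "OR": neural_or, "NOT": neural_not}

def emulate_circuit(gates, inputs):
    """Evaluate a sparse DAG gate-by-gate via neural emulators.

    gates: list of (name, gate_type, input_wires) in topological order.
    inputs: dict mapping input wire names to 0.0/1.0 values.
    The resulting network's size/depth track the circuit's size/depth
    up to a constant factor per gate.
    """
    values = dict(inputs)
    for name, gate_type, in_wires in gates:
        values[name] = EMULATORS[gate_type](*(values[w] for w in in_wires))
    return values

# Example: XOR of two bits built from fan-in <= 2 gates (hypothetical circuit).
circuit = [
    ("u", "OR",  ("x0", "x1")),
    ("v", "AND", ("x0", "x1")),
    ("w", "NOT", ("v",)),
    ("y", "AND", ("u", "w")),
]
for x0 in (0, 1):
    for x1 in (0, 1):
        out = emulate_circuit(circuit, {"x0": float(x0), "x1": float(x1)})
        print(x0, x1, "->", out["y"])
```

Because each emulator has constant size, a circuit of size and depth $\mathrm{poly}(n+m_{\mathrm{out}})$ yields a network of the same asymptotic size and depth, which is the composition step the abstract invokes.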