On efficiently computable functions, deep networks and sparse compositionality
arXiv:2510.11942v1 Announce Type: new Abstract: We show that \emph{efficient Turing computability} at any fixed input/output precision implies the existence of \emph{compositionally sparse} (bounded-fan-in, polynomial-size) DAG representations and of corresponding neural approximants achieving the target precision. Concretely: if $f:[0,1]^d \to \mathbb{R}^m$ is computable…
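
To make the notion of compositional sparsity concrete, here is a minimal Python sketch, not the paper's construction: a DAG in which every node has fan-in at most 2, so a function of d inputs decomposes into a polynomial number of simple constituent functions. The node operations and the example target function are hypothetical choices for illustration; the abstract's claim is that each such bounded-fan-in constituent can in turn be approximated by a small network at the target precision.

```python
import math

# Each DAG node is a pair (operation, list of parent node indices).
# Nodes 0..d-1 are the raw inputs; later nodes compute on earlier ones.
def eval_dag(nodes, x):
    """Evaluate a bounded-fan-in DAG on the input vector x."""
    values = list(x)  # values[i] holds the output of node i
    for op, parents in nodes:
        # Compositional sparsity: every constituent has fan-in <= 2.
        assert len(parents) <= 2, "fan-in bound violated"
        values.append(op(*[values[p] for p in parents]))
    return values[-1]

# Hypothetical example target: f(x0, x1, x2, x3) = sin(x0*x1) + sin(x2*x3),
# expressed as a DAG of five unary/binary constituents.
mul = lambda a, b: a * b
add = lambda a, b: a + b
dag = [
    (mul, [0, 1]),    # node 4 = x0 * x1
    (mul, [2, 3]),    # node 5 = x2 * x3
    (math.sin, [4]),  # node 6 = sin(node 4)
    (math.sin, [5]),  # node 7 = sin(node 5)
    (add, [6, 7]),    # node 8 = node 6 + node 7 (the output)
]

x = [0.1, 0.2, 0.3, 0.4]
print(eval_dag(dag, x))  # equals sin(0.02) + sin(0.12)
```

In this sketch, replacing each node's operation with a small trained network while keeping the same DAG wiring would yield the kind of neural approximant the abstract describes.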
