Archives AI News

On efficiently computable functions, deep networks and sparse compositionality

arXiv:2510.11942v1 Announce Type: new Abstract: We show that \emph{efficient Turing computability} at any fixed input/output precision implies the existence of \emph{compositionally sparse} (bounded-fan-in, polynomial-size) DAG representations and of corresponding neural approximants achieving the target precision. Concretely: if $f:[0,1]^d \to \mathbb{R}^m$ is computable…
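To make the abstract's object concrete: a bounded-fan-in, polynomial-size DAG of simple neural nodes might look like the sketch below, a fan-in-2 binary tree of small MLP blocks. The structure, sizes, and class names are illustrative assumptions, not the paper's construction.

```python
# Hypothetical sketch of a compositionally sparse approximant: a fan-in-2
# binary tree of small MLP blocks (assumed structure, not the paper's).
import torch
import torch.nn as nn

class Node(nn.Module):
    """A bounded-fan-in node: maps two scalar inputs to one scalar."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=-1))

class SparseTree(nn.Module):
    """Composes d inputs through ~d fan-in-2 nodes: polynomial size,
    bounded fan-in, depth O(log d)."""
    def __init__(self, d):
        super().__init__()
        assert d & (d - 1) == 0, "sketch assumes d is a power of two"
        layers, width = [], d
        while width > 1:
            layers.append(nn.ModuleList(Node() for _ in range(width // 2)))
            width //= 2
        self.layers = nn.ModuleList(layers)
    def forward(self, x):  # x: (batch, d)
        vals = [x[:, i:i + 1] for i in range(x.shape[1])]
        for layer in self.layers:
            vals = [n(vals[2 * i], vals[2 * i + 1]) for i, n in enumerate(layer)]
        return vals[0]

f_hat = SparseTree(d=8)
print(f_hat(torch.rand(4, 8)).shape)  # torch.Size([4, 1])
```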

Inverse Design in Nanophotonics via Representation Learning

arXiv:2507.00546v2 Announce Type: replace-cross Abstract: Inverse design in nanophotonics, the computational discovery of structures achieving targeted electromagnetic (EM) responses, has become a key tool for recent optical advances. Traditional intuition-driven or iterative optimization methods struggle with the inherently high-dimensional, non-convex…
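The general pattern behind "inverse design via representation learning" is to optimize in a learned latent space rather than over raw geometry. A minimal sketch of that loop, with stand-in networks in place of a trained decoder and EM surrogate (all names and shapes here are assumptions):

```python
# Hypothetical latent-space inverse design loop: gradient-optimize a latent
# code through a (pretrained; here, stand-in) decoder and a differentiable
# surrogate of the EM response. Not the paper's specific pipeline.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 256))    # z -> structure
surrogate = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 32))  # structure -> spectrum

target = torch.rand(32)                  # desired EM response (placeholder)
z = torch.zeros(16, requires_grad=True)  # latent design variable
opt = torch.optim.Adam([z], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    spectrum = surrogate(decoder(z))
    loss = ((spectrum - target) ** 2).mean()  # match the target response
    loss.backward()
    opt.step()
```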

Sculpting Latent Spaces With MMD: Disentanglement With Programmable Priors

arXiv:2510.11953v1 Announce Type: new Abstract: Learning disentangled representations, where distinct factors of variation are captured by independent latent variables, is a central goal in machine learning. The dominant approach has been the Variational Autoencoder (VAE) framework, which uses a Kullback-Leibler…
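The abstract contrasts the VAE's Kullback-Leibler term with an MMD objective, whose appeal is that it only requires samples from the prior, so the prior is "programmable". A rough sketch of a kernel MMD penalty (an assumed standard form, not the paper's exact estimator):

```python
# Sketch of an RBF-kernel MMD penalty: compare encoded latents against
# samples from any prior you can draw from (assumed form, not the paper's).
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased MMD^2 estimate between two sample sets with an RBF kernel."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

z = torch.randn(128, 8)             # encoder outputs (stand-in)
prior = torch.rand(128, 8) * 2 - 1  # e.g. a uniform "programmed" prior
penalty = mmd_rbf(z, prior)         # added to the reconstruction loss
```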

Y-shaped Generative Flows

arXiv:2510.11955v1 Announce Type: new Abstract: Modern continuous-time generative models often induce V-shaped transport: each sample travels independently along nearly straight trajectories from prior to data, overlooking shared structure. We introduce Y-shaped generative flows, which move probability mass together along shared…
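For contrast with the proposed Y-shaped flows, here is a minimal sketch of the "V-shaped" baseline the abstract describes: flow matching where each sample independently follows its own straight line from prior to data (my reading of the setup; all details assumed):

```python
# Minimal flow-matching sketch of the V-shaped baseline: independent
# straight noise-to-data paths, no shared transport (assumed setup).
import torch
import torch.nn as nn

v = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))  # (x, t) -> velocity
opt = torch.optim.Adam(v.parameters(), lr=1e-3)

for step in range(100):
    x1 = torch.randn(256, 2) + 3.0  # stand-in data batch
    x0 = torch.randn(256, 2)        # prior samples, paired independently
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1      # straight interpolation path
    target = x1 - x0                # constant velocity along each line
    loss = ((v(torch.cat([xt, t], dim=-1)) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```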

LayerSync: Self-aligning Intermediate Layers

arXiv:2510.12581v1 Announce Type: cross Abstract: We propose LayerSync, a domain-agnostic approach for improving the generation quality and the training efficiency of diffusion models. Prior studies have highlighted the connection between the quality of generation and the representations learned by diffusion…
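One plausible reading of "self-aligning intermediate layers" is an auxiliary regularizer that pulls one layer's representations toward another's. The sketch below is speculative (my guess at the general shape of such a term, not LayerSync's actual objective):

```python
# Speculative layer-alignment regularizer: align an earlier layer's
# features with a detached later layer's features. Illustrative only.
import torch
import torch.nn.functional as F

def align_loss(feat_early, feat_late):
    """Negative cosine similarity, with stop-gradient on the target."""
    a = F.normalize(feat_early.flatten(1), dim=-1)
    b = F.normalize(feat_late.detach().flatten(1), dim=-1)
    return -(a * b).sum(-1).mean()

# total_loss = diffusion_loss + lambda_align * align_loss(h_early, h_late)
```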

QLENS: Towards A Quantum Perspective of Language Transformers

arXiv:2510.11963v1 Announce Type: new Abstract: In natural language processing, current methods for understanding Transformers are successful at identifying intermediate predictions during a model’s inference. However, these approaches function as limited diagnostic checkpoints, lacking a mathematical framework for mechanistically modeling how…
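The "diagnostic checkpoints" the abstract alludes to are probes that decode a layer's hidden state into a token prediction, in the style of the logit lens. A toy sketch with stand-in tensors (shapes and the tied output head are assumptions):

```python
# Sketch of a logit-lens-style probe: decode each layer's hidden state
# with the model's output head to read off intermediate predictions.
import torch

vocab, d_model, n_layers = 1000, 64, 6
unembed = torch.randn(d_model, vocab)                     # stand-in output head
hidden = [torch.randn(d_model) for _ in range(n_layers)]  # per-layer states

for i, h in enumerate(hidden):
    probs = torch.softmax(h @ unembed, dim=-1)  # intermediate prediction
    print(f"layer {i}: top token {probs.argmax().item()}")
```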

WW-FL: Secure and Private Large-Scale Federated Learning

arXiv:2302.09904v4 Announce Type: replace Abstract: Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices. However, recent research has uncovered vulnerabilities in FL, impacting both security and…
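For background, the baseline whose weaknesses motivate WW-FL is plain federated averaging: clients train locally and a server averages their weights, so the server sees every update. A generic FedAvg sketch (this is standard FL, not WW-FL's protected protocol):

```python
# Generic FedAvg round: local client training, then server-side weight
# averaging. Background sketch only; WW-FL's protections are not shown.
import copy
import torch
import torch.nn as nn

def fedavg_round(global_model, client_loaders, lr=0.1):
    """One round: each client trains locally, the server averages weights."""
    states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for x, y in loader:
            opt.zero_grad()
            nn.functional.mse_loss(local(x), y).backward()
            opt.step()
        states.append(local.state_dict())
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
```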