Archives AI News

On the Quantization Robustness of Diffusion Language Models in Coding Benchmarks

arXiv:2604.20079v1 Announce Type: new Abstract: Auto-regressive Large Language Models (LLMs) achieve strong performance on coding tasks, but incur high memory and inference costs. Diffusion-based language models (d-LLMs) offer bounded inference cost via iterative denoising, but their behavior under post-training quantization…
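The abstract above concerns how coding-capable models behave under post-training quantization. As a rough illustration of what post-training weight quantization does (not the paper's method or models), here is a minimal symmetric per-tensor int8 sketch; all names are illustrative.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Quantize a float weight tensor to int8 with a single symmetric scale."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to approximate float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# Round-to-nearest bounds the per-weight error by half the scale.
err = np.abs(w - w_hat).max()
print(f"max abs reconstruction error: {err:.5f} (scale: {s:.5f})")
```

The single shared scale is the simplest scheme; real post-training quantization pipelines typically use per-channel scales and calibration data, which is one axis along which robustness studies like the one above vary.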

Concept Graph Convolutions: Message Passing in the Concept Space

arXiv:2604.20082v1 Announce Type: new Abstract: Trust in the predictions of Graph Neural Networks (GNNs) is limited by their opaque reasoning process. Prior methods have tried to explain graph networks via concept-based explanations extracted from the latent representations obtained after message…
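The abstract refers to representations obtained after message passing. For readers unfamiliar with the mechanism, here is a minimal sketch of one message-passing round on a tiny graph (this is generic GNN aggregation, not the paper's concept-space variant; names are illustrative).

```python
import numpy as np

def message_passing_step(adj: np.ndarray, h: np.ndarray) -> np.ndarray:
    """One round: mean-aggregate neighbor features, mix with self features."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero on isolated nodes
    neighbor_mean = (adj @ h) / deg
    return 0.5 * (h + neighbor_mean)  # equal-weight self/neighbor combination

# Tiny 3-node path graph: 0 - 1 - 2, with one-hot node features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=np.float32)
h = np.eye(3, dtype=np.float32)

h1 = message_passing_step(adj, h)
print(h1)  # each node's feature now blends in its neighbors' features
```

Concept-based explanation methods of the kind the abstract mentions typically inspect representations like `h1` after such rounds, which is what makes the aggregation step the natural place to intervene.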

Gauge-covariant stochastic neural fields: Stability and finite-width effects

arXiv:2508.18948v2 Announce Type: replace-cross Abstract: We develop a gauge-covariant stochastic effective field theory for stability and finite-width effects in deep neural systems. The model uses classical commuting fields: a complex matter field, a real Abelian connection field, and a fictitious…