Archives AI News

Diffusion Posterior Sampling is Computationally Intractable

arXiv:2402.12727v2 Announce Type: replace Abstract: Diffusion models are a remarkably effective way of learning and sampling from a distribution $p(x)$. In posterior sampling, one is also given a measurement model $p(y \mid x)$ and a measurement $y$, and would like…
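To make the posterior-sampling setup concrete, here is a minimal sketch with a hypothetical conjugate Gaussian pair (prior $p(x)$ and measurement model $p(y \mid x)$ both Gaussian) — a toy example for illustration, not the paper's construction or its hardness result:

```python
import numpy as np

# Toy posterior sampling (hypothetical setup): prior p(x) = N(0, 1),
# measurement model p(y | x) = N(x, sigma2). Gaussian conjugacy gives
# the posterior p(x | y) = N(y / (1 + sigma2), sigma2 / (1 + sigma2)).
def gaussian_posterior(y, sigma2):
    post_mean = y / (1.0 + sigma2)
    post_var = sigma2 / (1.0 + sigma2)
    return post_mean, post_var

def sample_posterior(y, sigma2, n, rng):
    # Draw n exact samples from the closed-form posterior.
    mean, var = gaussian_posterior(y, sigma2)
    return rng.normal(mean, np.sqrt(var), size=n)

rng = np.random.default_rng(0)
samples = sample_posterior(y=2.0, sigma2=1.0, n=50_000, rng=rng)
print(round(samples.mean(), 2))  # close to the posterior mean 1.0
```

In this conjugate toy case the posterior is available in closed form; the hardness question the paper studies arises precisely when $p(x)$ is only accessible through a learned diffusion model.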

Data-driven jet fuel demand forecasting: A case study of Copenhagen Airport

arXiv:2511.05569v1 Announce Type: new Abstract: Accurate forecasting of jet fuel demand is crucial for optimizing supply chain operations in the aviation market. Fuel distributors specifically require precise estimates to avoid inventory shortages or excesses. However, there is a lack of…

Adaptive Testing for Segmenting Watermarked Texts From Language Models

arXiv:2511.06645v1 Announce Type: cross Abstract: The rapid adoption of large language models (LLMs), such as GPT-4 and Claude 3.5, underscores the need to distinguish LLM-generated text from human-written content to mitigate the spread of misinformation and misuse in education. One…

Lookahead Unmasking Elicits Accurate Decoding in Diffusion Language Models

arXiv:2511.05563v1 Announce Type: new Abstract: Masked Diffusion Models (MDMs), used as language models, generate text by iteratively unmasking tokens, yet their performance depends crucially on the inference-time order of unmasking. Prevailing heuristics, such as confidence-based sampling, are myopic: they optimize…
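The confidence-based decoding heuristic the abstract calls myopic can be sketched as follows — a toy loop with a stand-in random "model" (all names and the toy distribution are hypothetical, not the paper's method):

```python
import numpy as np

MASK = -1

def toy_probs(seq, vocab_size, rng):
    # Stand-in for an MDM's per-position token distribution
    # (hypothetical: random draws, not a trained model).
    p = rng.random((len(seq), vocab_size))
    return p / p.sum(axis=1, keepdims=True)

def confidence_unmask(length, vocab_size, seed=0):
    rng = np.random.default_rng(seed)
    seq = [MASK] * length
    while MASK in seq:
        probs = toy_probs(seq, vocab_size, rng)
        # Myopic, greedy rule: among still-masked positions, commit the
        # one whose top token currently has the highest probability.
        masked = [i for i, t in enumerate(seq) if t == MASK]
        best = max(masked, key=lambda i: probs[i].max())
        seq[best] = int(probs[best].argmax())
    return seq

out = confidence_unmask(length=6, vocab_size=10)
print(out)  # a fully unmasked token sequence
```

Each step commits the single most confident position without considering how that choice constrains later unmasking — the myopia the lookahead approach above is designed to fix.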

Adaptive Sample-Level Framework Motivated by Distributionally Robust Optimization with Variance-Based Radius Assignment for Enhanced Neural Network Generalization Under Distribution Shift

arXiv:2511.05568v1 Announce Type: new Abstract: Distribution shifts and minority subpopulations frequently undermine the reliability of deep neural networks trained using Empirical Risk Minimization (ERM). Distributionally Robust Optimization (DRO) addresses this by optimizing for the worst-case risk within a neighborhood of…
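The contrast between ERM and a DRO-style worst-case objective can be shown in a few lines — here with a simplified group-level worst-case risk as a stand-in for the paper's sample-level, variance-based scheme (the groups and losses are hypothetical):

```python
import numpy as np

def erm_risk(groups):
    # ERM: average loss over all samples, pooled across groups.
    return float(np.mean(np.concatenate(groups)))

def worst_group_risk(groups):
    # DRO-style objective (simplified): the worst group-average loss.
    return float(max(np.mean(g) for g in groups))

# Hypothetical losses: a well-fit majority group and a poorly-fit
# minority group that ERM's average largely hides.
majority = np.array([0.1, 0.2, 0.15, 0.1])
minority = np.array([0.9, 1.1])
groups = [majority, minority]

print(erm_risk(groups), worst_group_risk(groups))
```

ERM reports a low pooled average even though the minority group's risk is high; optimizing the worst-case risk forces the model to account for that subpopulation.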

Effective Test-Time Scaling of Discrete Diffusion through Iterative Refinement

arXiv:2511.05562v1 Announce Type: new Abstract: Test-time scaling through reward-guided generation remains largely unexplored for discrete diffusion models despite its potential as a promising alternative. In this work, we introduce Iterative Reward-Guided Refinement (IterRef), a novel test-time scaling method tailored to…
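The general shape of reward-guided test-time refinement can be sketched as an accept-if-better loop over candidate states — a hypothetical interface and toy reward, not the IterRef algorithm itself:

```python
import numpy as np

def iterative_refine(x0, reward, propose, steps, rng):
    # Sketch of reward-guided refinement at test time: repeatedly
    # propose a perturbed candidate and keep it only if it improves
    # the reward, so the reward is non-decreasing across iterations.
    x, r = x0, reward(x0)
    for _ in range(steps):
        cand = propose(x, rng)
        rc = reward(cand)
        if rc > r:
            x, r = cand, rc
    return x, r

# Toy discrete state: a binary vector; the reward counts ones.
reward = lambda x: int(x.sum())

def propose(x, rng):
    y = x.copy()
    i = rng.integers(len(y))
    y[i] = 1 - y[i]  # flip one randomly chosen bit
    return y

rng = np.random.default_rng(0)
x0 = np.zeros(8, dtype=int)
x, r = iterative_refine(x0, reward, propose, steps=200, rng=rng)
print(r)  # reward never decreases from its starting value
```

In a discrete diffusion setting the proposal step would be a partial re-noising and re-denoising of the sequence rather than a bit flip, but the accept-if-better skeleton is the same.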

Diversified Flow Matching with Translation Identifiability

arXiv:2511.05558v1 Announce Type: new Abstract: Diversified distribution matching (DDM) finds a unified translation function mapping a diverse collection of conditional source distributions to their target counterparts. DDM was proposed to resolve content misalignment issues in unpaired domain translation, achieving translation…

Revisiting Stochastic Approximation and Stochastic Gradient Descent

arXiv:2505.11343v3 Announce Type: replace-cross Abstract: In this paper, we introduce a new approach to proving the convergence of the Stochastic Approximation (SA) and the Stochastic Gradient Descent (SGD) algorithms. The new approach is based on a concept called GSLLN (Generalized…
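For reference, the SGD iteration whose convergence the paper analyzes is the familiar update $x_{t+1} = x_t - \eta\, g_t$ with $g_t$ an unbiased noisy gradient — a minimal sketch on a hypothetical quadratic objective, unrelated to the paper's proof technique:

```python
import numpy as np

def sgd(grad_sample, x0, lr, steps, rng):
    # Plain SGD: x_{t+1} = x_t - lr * g_t, where g_t is an unbiased
    # stochastic estimate of the gradient at x_t.
    x = x0
    for _ in range(steps):
        x = x - lr * grad_sample(x, rng)
    return x

# Toy objective f(x) = 0.5 * (x - 3)^2 with noisy gradient
# g(x) = (x - 3) + noise (hypothetical example).
grad_sample = lambda x, rng: (x - 3.0) + rng.normal(0.0, 0.1)

rng = np.random.default_rng(0)
x_final = sgd(grad_sample, x0=0.0, lr=0.1, steps=2000, rng=rng)
print(round(x_final, 1))  # settles near the minimizer x* = 3
```

With a constant step size the iterate hovers in a small noise-driven neighborhood of the minimizer; convergence proofs such as the one announced above typically require diminishing step sizes and conditions on the gradient noise.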