Archives AI News

Bayesian Network Structural Consensus via Greedy Min-Cut Analysis

arXiv:2504.00467v2 Announce Type: replace Abstract: This paper presents the Min-Cut Bayesian Network Consensus (MCBNC) algorithm, a greedy method for structural consensus of Bayesian Networks (BNs), with applications in federated learning and model aggregation. MCBNC prunes weak edges from an initial…
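The abstract mentions pruning weak edges from an initial structure. As a rough illustration of that general idea only (this is not the MCBNC algorithm, and the edge weights, threshold, and graph below are invented), weak-edge pruning on a weighted directed graph can look like:

```python
# Illustrative sketch: drop edges of a directed graph whose weight falls
# below a threshold, in the spirit of "pruning weak edges". All names and
# values here are hypothetical, not taken from the paper.

def prune_weak_edges(edges, threshold):
    """Keep only edges whose weight meets the threshold.

    edges: dict mapping (parent, child) -> weight
    """
    return {e: w for e, w in edges.items() if w >= threshold}

edges = {("A", "B"): 0.9, ("A", "C"): 0.1, ("B", "C"): 0.7}
kept = prune_weak_edges(edges, threshold=0.5)
# The low-weight ("A", "C") edge is removed; the other two survive.
```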

Continual Learning with Synthetic Boundary Experience Blending

arXiv:2507.23534v2 Announce Type: replace Abstract: Continual learning (CL) seeks to mitigate catastrophic forgetting when models are trained with sequential tasks. A common approach, experience replay (ER), stores past exemplars but only sparsely approximates the data distribution, yielding fragile and oversimplified…
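The experience-replay baseline the abstract contrasts against stores a small set of past exemplars. A minimal generic sketch of such a buffer, using reservoir sampling to keep a bounded uniform sample of the stream (capacity and sampling scheme are illustrative choices, not the paper's method):

```python
import random

# Generic experience-replay buffer with reservoir sampling: keeps at most
# `capacity` exemplars, each past item retained with equal probability.
class ReplayBuffer:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, exemplar):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(exemplar)
        else:
            # Reservoir sampling: replace a random slot with probability
            # capacity / seen, so the buffer stays a uniform sample.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = exemplar

    def sample(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

buf = ReplayBuffer(capacity=3)
for x in range(10):
    buf.add(x)
# buf.buffer now holds 3 of the 10 seen exemplars.
```

The sparsity the abstract criticizes is visible here: a fixed-capacity buffer can only coarsely approximate the full data distribution.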

Trading Vector Data in Vector Databases

arXiv:2511.07139v1 Announce Type: cross Abstract: Vector data trading is essential for cross-domain learning with vector databases, yet it remains largely unexplored. We study this problem under online learning, where sellers face uncertain retrieval costs and buyers provide stochastic feedback to…
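The abstract frames the trading problem as online learning with uncertain costs and stochastic buyer feedback. A standard baseline for that setting, shown purely as background (this is UCB1, not the paper's mechanism; the candidate prices and reward model are invented):

```python
import math
import random

# Generic UCB1 sketch: a seller repeatedly picks one of a few candidate
# offers and observes a noisy reward, balancing exploration and
# exploitation. Everything below is an illustrative stand-in.

def ucb1(reward_fns, rounds, seed=0):
    rng = random.Random(seed)
    n = len(reward_fns)
    counts = [0] * n
    sums = [0.0] * n
    for t in range(1, rounds + 1):
        if t <= n:
            arm = t - 1  # play each arm once first
        else:
            # Pick the arm with the highest upper confidence bound.
            arm = max(range(n), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        sums[arm] += reward_fns[arm](rng)
    return counts

# Arm 1 has the highest mean reward, so it is played most often.
arms = [lambda rng: rng.random() * 0.4,
        lambda rng: 0.5 + rng.random() * 0.5,
        lambda rng: rng.random() * 0.6]
counts = ucb1(arms, rounds=2000)
```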

Diffusion Posterior Sampling is Computationally Intractable

arXiv:2402.12727v2 Announce Type: replace Abstract: Diffusion models are a remarkably effective way of learning and sampling from a distribution $p(x)$. In posterior sampling, one is also given a measurement model $p(y \mid x)$ and a measurement $y$, and would like…
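For orientation, the posterior-sampling target is $p(x \mid y) \propto p(y \mid x)\,p(x)$. The enumerable toy below shows that target in a discrete setting; in diffusion posterior sampling $x$ is high-dimensional and $p(x)$ is a learned diffusion prior, which is exactly where the paper argues the problem becomes intractable. The prior and likelihood tables are made up:

```python
import random

# Toy discrete posterior sampling: draw x from p(x|y) ∝ p(y|x) p(x)
# by enumerating unnormalized weights. Illustrative only.

def posterior_sample(prior, likelihood, y, rng):
    weights = {x: likelihood[x][y] * p for x, p in prior.items()}
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for x, w in weights.items():
        acc += w
        if r <= acc:
            return x
    return x  # numerical fallback

prior = {"x0": 0.5, "x1": 0.5}
likelihood = {"x0": {"y": 0.9}, "x1": {"y": 0.1}}
rng = random.Random(0)
draws = [posterior_sample(prior, likelihood, "y", rng) for _ in range(1000)]
# "x0" has posterior probability 0.9, so most draws are "x0".
```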

Data-driven jet fuel demand forecasting: A case study of Copenhagen Airport

arXiv:2511.05569v1 Announce Type: new Abstract: Accurate forecasting of jet fuel demand is crucial for optimizing supply chain operations in the aviation market. Fuel distributors specifically require precise estimates to avoid inventory shortages or excesses. However, there is a lack of…

Adaptive Testing for Segmenting Watermarked Texts From Language Models

arXiv:2511.06645v1 Announce Type: cross Abstract: The rapid adoption of large language models (LLMs), such as GPT-4 and Claude 3.5, underscores the need to distinguish LLM-generated text from human-written content to mitigate the spread of misinformation and misuse in education. One…

Lookahead Unmasking Elicits Accurate Decoding in Diffusion Language Models

arXiv:2511.05563v1 Announce Type: new Abstract: Masked Diffusion Models (MDMs) as language models generate by iteratively unmasking tokens, yet their performance crucially depends on the inference-time order of unmasking. Prevailing heuristics, such as confidence-based sampling, are myopic: they optimize…
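The myopic confidence-based heuristic the abstract critiques can be sketched as follows: at each step, unmask the position whose top predicted token has the highest probability. The `predict` function here is a hypothetical stand-in for an MDM's per-position distribution; the vocabulary and scores are invented:

```python
# Sketch of greedy confidence-based unmasking for a masked diffusion
# language model. `predict(tokens, i)` returns the top token and its
# confidence at masked position i; it is a toy stand-in for a real model.

MASK = "<mask>"

def confidence_unmask(tokens, predict):
    tokens = list(tokens)
    while MASK in tokens:
        best = None  # (confidence, position, token)
        for i, t in enumerate(tokens):
            if t == MASK:
                token, conf = predict(tokens, i)
                if best is None or conf > best[0]:
                    best = (conf, i, token)
        _, i, token = best
        tokens[i] = token  # greedily fill the most confident slot
    return tokens

def predict(tokens, i):
    # Toy predictor with fixed per-position confidences.
    table = {0: ("the", 0.6), 1: ("cat", 0.9), 2: ("sat", 0.7)}
    return table[i]

out = confidence_unmask([MASK, MASK, MASK], predict)
# Positions are filled in confidence order (1, then 2, then 0),
# yielding ["the", "cat", "sat"].
```

The myopia is structural: each step maximizes only the current token's confidence, with no lookahead to how that choice constrains later unmasking decisions.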

Adaptive Sample-Level Framework Motivated by Distributionally Robust Optimization with Variance-Based Radius Assignment for Enhanced Neural Network Generalization Under Distribution Shift

arXiv:2511.05568v1 Announce Type: new Abstract: Distribution shifts and minority subpopulations frequently undermine the reliability of deep neural networks trained using Empirical Risk Minimization (ERM). Distributionally Robust Optimization (DRO) addresses this by optimizing for the worst-case risk within a neighborhood of…
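The worst-case-risk idea behind DRO, as contrasted with ERM in the abstract, can be shown in a few lines. This is the generic group-worst-case objective only; the paper's sample-level framework and variance-based radius assignment are not reproduced, and the groups and losses below are invented:

```python
# ERM averages losses over all samples; a group-DRO style objective
# instead takes the worst mean loss over groups, so a small minority
# group with high loss dominates the objective. Illustrative values.

def erm_risk(losses):
    return sum(losses) / len(losses)

def group_dro_risk(grouped_losses):
    # Worst-case over groups: minority subpopulations cannot be ignored.
    return max(sum(g) / len(g) for g in grouped_losses.values())

grouped = {"majority": [0.1, 0.2, 0.1, 0.2], "minority": [0.8, 0.9]}
flat = [l for g in grouped.values() for l in g]
# ERM averages to about 0.38, while the worst-group risk is 0.85.
```

Minimizing the worst-group risk rather than the average is what gives DRO its robustness to distribution shift toward the minority group.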