Archives AI News

Top-H Decoding: Adapting the Creativity and Coherence with Bounded Entropy in Text Generation

arXiv:2509.02510v1 Announce Type: cross Abstract: Large language models (LLMs), despite their impressive performance across a wide range of tasks, often struggle to balance two competing objectives in open-ended text generation: fostering diversity and creativity while preserving logical coherence. Existing truncated sampling techniques, including temperature scaling, top-$p$ (nucleus) sampling, and min-$p$ sampling, aim to manage this trade-off, but they fall short of effectively incorporating the model's confidence into the sampling strategy. For example, min-$p$ sampling relies on a single top token as a heuristic for confidence, thereby underutilizing the information in the probability distribution. To incorporate the model's confidence more effectively, we present **top-H** decoding. We first establish the theoretical foundation of the interplay between creativity and coherence in truncated sampling by formulating an **entropy-constrained minimum divergence** problem. We then prove this minimization problem to be equivalent to an **entropy-constrained mass maximization** (ECMM) problem, which is NP-hard. Finally, we present top-H decoding, a computationally efficient greedy algorithm to solve the ECMM problem. Extensive empirical evaluations demonstrate that top-H outperforms the state-of-the-art (SoTA) alternative of min-$p$ sampling by up to **25.63%** on creative writing benchmarks, while maintaining robustness on question-answering datasets such as GPQA, GSM8K, and MT-Bench. Additionally, an *LLM-as-judge* evaluation confirms that top-H indeed produces coherent outputs even at higher temperatures, where creativity is especially critical. In summary, top-H advances the SoTA in open-ended text generation and can be *easily integrated* into creative writing applications. The code is available at https://github.com/ErfanBaghaei/Top-H-Decoding.
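To make the mechanism concrete, here is a minimal sketch of a greedy entropy-constrained truncation rule in the spirit of top-H: grow the candidate set in descending-probability order and stop before the renormalized entropy would exceed a cap. The entropy cap `h_max` and the exact stopping rule are illustrative assumptions, not the paper's specification.

```python
# Hypothetical sketch of greedy entropy-constrained truncation in the spirit
# of top-H decoding. The cap `h_max` and stopping rule are assumptions here.
import numpy as np

def top_h_sample(probs: np.ndarray, h_max: float, rng=None) -> int:
    """Greedily grow the candidate set in descending-probability order,
    stopping before the renormalized entropy would exceed h_max."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]          # tokens by descending probability
    kept = [order[0]]                        # always keep the top token
    for tok in order[1:]:
        cand = kept + [tok]
        p = probs[cand] / probs[cand].sum()  # renormalize over candidate set
        if -(p * np.log(p)).sum() > h_max:   # entropy budget exhausted: stop
            break
        kept.append(tok)
    p = probs[kept] / probs[kept].sum()
    return int(rng.choice(kept, p=p))

# Example: a peaked distribution with a small entropy budget keeps few tokens.
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(top_h_sample(probs, h_max=1.0))
```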

Context-Action Embedding Learning for Off-Policy Evaluation in Contextual Bandits

arXiv:2509.00648v1 Announce Type: cross Abstract: We consider off-policy evaluation (OPE) in contextual bandits with a finite action space. Inverse Propensity Score (IPS) weighting is a widely used method for OPE due to its unbiasedness, but it suffers from significant variance when the action space is large or when parts of the context-action space are underexplored. Recently introduced Marginalized IPS (MIPS) estimators mitigate this issue by leveraging action embeddings. However, these embeddings do not minimize the mean squared error (MSE) of the estimators and do not take context information into account. To address these limitations, we introduce Context-Action Embedding Learning for MIPS, or CAEL-MIPS, which learns context-action embeddings from offline data to minimize the MSE of the MIPS estimator. Building on a theoretical analysis of the bias and variance of MIPS, we present an MSE-minimizing objective for CAEL-MIPS. In empirical studies on a synthetic dataset and a real-world dataset, we demonstrate that our estimator outperforms baselines in terms of MSE.
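As background, here is a hedged sketch of vanilla IPS next to a MIPS-style estimator on logged bandit data. The discrete embedding map `phi` is an illustrative assumption; CAEL-MIPS learns embeddings from data to minimize MSE, which this sketch does not attempt.

```python
# Hedged sketch of IPS and a MIPS-style estimator on logged bandit data.
# `phi` maps each action to a discrete embedding id; it is illustrative only.
import numpy as np

def ips(rewards, actions, pi_e, pi_b):
    """Vanilla IPS: weight logged rewards by pi_e(a|x) / pi_b(a|x)."""
    idx = np.arange(len(actions))
    w = pi_e[idx, actions] / pi_b[idx, actions]
    return np.mean(w * rewards)

def mips(rewards, actions, pi_e, pi_b, phi):
    """MIPS-style: weight by marginal probabilities of action embeddings,
    which lowers variance when many actions share an embedding."""
    n = len(actions)
    e = phi[actions]                                   # embedding of logged action
    p_e_pi_e = np.array([pi_e[i, phi == e[i]].sum() for i in range(n)])
    p_e_pi_b = np.array([pi_b[i, phi == e[i]].sum() for i in range(n)])
    return np.mean(p_e_pi_e / p_e_pi_b * rewards)
```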

Calibration through the Lens of Indistinguishability

arXiv:2509.02279v1 Announce Type: cross Abstract: Calibration is a classical notion from the forecasting literature which aims to address the question: how should predicted probabilities be interpreted? In a world where we only get to observe (discrete) outcomes, how should we evaluate a predictor that hypothesizes (continuous) probabilities over possible outcomes? The study of calibration has seen a surge of recent interest, given the ubiquity of probabilistic predictions in machine learning. This survey describes recent work on the foundational questions of how to define and measure calibration error, and what these measures mean for downstream decision makers who wish to use the predictions to make decisions. A unifying viewpoint that emerges is that of calibration as a form of indistinguishability, between the world hypothesized by the predictor and the real world (governed by nature or the Bayes optimal predictor). In this view, various calibration measures quantify the extent to which the two worlds can be told apart by certain classes of distinguishers or statistical measures.
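For a concrete instance of a calibration measure of the kind the survey scrutinizes, here is the widely used (and much-debated) binned expected calibration error for binary outcomes; it is an illustrative example, not the survey's recommendation.

```python
# One concrete calibration measure: binned expected calibration error (ECE)
# for binary outcomes. The survey examines such measures and their pitfalls.
import numpy as np

def binned_ece(probs, labels, n_bins=10):
    """Average |empirical frequency - mean confidence| over equal-width
    confidence bins, weighted by the fraction of predictions per bin."""
    probs, labels = np.asarray(probs, float), np.asarray(labels, float)
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(labels[mask].mean() - probs[mask].mean())
    return ece

# Predicted probabilities of a binary outcome vs. observed 0/1 outcomes.
print(binned_ece([0.9, 0.8, 0.3, 0.7], [1, 1, 0, 0]))
```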

Feature Augmentations for High-Dimensional Learning

arXiv:2509.00232v1 Announce Type: cross Abstract: High-dimensional measurements are often correlated, which motivates their approximation by factor models. This also holds true when features are engineered via low-dimensional interactions or kernel tricks. Such engineering often results in overparametrization and calls for fast dimensionality reduction. We propose a simple technique to enhance the performance of supervised learning algorithms by augmenting features with factors extracted from design matrices and their transformations. This is implemented by using the factors and idiosyncratic residuals, which significantly weakens the correlations between input variables and hence increases both the interpretability of learning algorithms and their numerical stability. Extensive experiments are carried out on various algorithms and real-world data from diverse fields, with special emphasis on the stock return prediction problem with Chinese financial news data, given the increasing interest in NLP problems in financial studies. We verify the capability of the proposed feature augmentation approach to boost overall prediction performance with the same algorithm. The approach bridges a gap overlooked in previous studies, which focus either on collecting additional data or on constructing more powerful algorithms, whereas our method lies between these two directions, using a simple PCA augmentation.
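A minimal sketch of the idea: extract common factors from the design matrix by PCA and use factors plus idiosyncratic residuals as the augmented inputs. The number of factors `k` and the use of scikit-learn are assumptions for illustration.

```python
# Minimal sketch of PCA-based feature augmentation: common factors plus
# idiosyncratic residuals. `k` and the scikit-learn PCA are assumptions.
import numpy as np
from sklearn.decomposition import PCA

def factor_augment(X: np.ndarray, k: int) -> np.ndarray:
    """Return [factors, residuals], whose columns are far less correlated
    than the raw columns of X."""
    pca = PCA(n_components=k)
    F = pca.fit_transform(X)             # estimated common factors
    U = X - pca.inverse_transform(F)     # idiosyncratic residuals
    return np.hstack([F, U])             # augmented design matrix

X = np.random.default_rng(0).normal(size=(200, 50))
X_aug = factor_augment(X, k=5)           # shape (200, 55)
```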

Probabilities of Causation and Root Cause Analysis with Quasi-Markovian Models

arXiv:2509.02535v1 Announce Type: new Abstract: Probabilities of causation provide principled ways to assess causal relationships but face computational challenges due to partial identifiability and latent confounding. This paper introduces both algorithmic simplifications, significantly reducing the computational complexity of calculating tighter bounds for these probabilities, and a novel methodological framework for Root Cause Analysis that systematically employs these causal metrics to rank entire causal paths.
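As background for the bounds being tightened, the classical Tian-Pearl bounds on the probability of necessity and sufficiency (PNS) from interventional quantities are easy to state; the paper's contribution is computing tighter bounds efficiently under quasi-Markovian models, which this sketch does not do.

```python
# Background: classical Tian-Pearl bounds on the probability of necessity
# and sufficiency (PNS) from interventional quantities. The paper tightens
# such bounds under quasi-Markovian models; this shows only the baseline.
def pns_bounds(p_y_do_x: float, p_y_do_not_x: float):
    """Bounds on PNS given P(y|do(x)) and P(y|do(x'))."""
    lower = max(0.0, p_y_do_x - p_y_do_not_x)
    upper = min(p_y_do_x, 1.0 - p_y_do_not_x)
    return lower, upper

# Example: treatment raises the outcome rate from 0.2 to 0.7.
print(pns_bounds(0.7, 0.2))   # (0.5, 0.7)
```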

Wild Refitting for Model-Free Excess Risk Evaluation of Opaque ML/AI Models under Bregman Loss

arXiv:2509.02476v1 Announce Type: new Abstract: We study the problem of evaluating the excess risk of classical penalized empirical risk minimization (ERM) with Bregman losses. We show that by leveraging the recently proposed wild refitting procedure (Wainwright, 2025), one can efficiently upper bound the excess risk through the so-called "wild optimism," without relying on the global structure of the underlying function class. This property makes our approach inherently model-free. Unlike conventional analyses, our framework operates with just one dataset and black-box access to the training procedure. The method involves randomized vector-valued symmetrization of suitably scaled prediction residuals to construct artificially modified outcomes, on which we retrain a second predictor for excess risk estimation. We establish high-probability performance guarantees under both the fixed design setting and the random design setting, demonstrating that wild refitting under Bregman losses, with an appropriately chosen wild noise scale, yields a valid upper bound on the excess risk. This work is thus promising for theoretically evaluating modern opaque ML and AI models such as deep neural networks and large language models, where the model class is too complex for classical learning theory and empirical process techniques to apply.
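To convey the shape of the recipe, here is a loose sketch of wild refitting under squared loss (one Bregman loss): symmetrize scaled residuals with random signs, build modified outcomes, refit, and measure how much the refit chases the injected noise. The noise scale `rho`, the Rademacher signs, and the exact optimism formula are assumptions; the paper's definitions differ in detail.

```python
# Loose sketch of wild refitting under squared loss. `rho`, the Rademacher
# signs, and the optimism formula below are illustrative assumptions.
import numpy as np

def wild_optimism(X, y, fit, rho: float, rng=None):
    """fit: black-box training routine mapping (X, y) to a predictor."""
    rng = rng or np.random.default_rng()
    f_hat = fit(X, y)
    resid = y - f_hat(X)                            # prediction residuals
    eps = rng.choice([-1.0, 1.0], size=len(y))      # random symmetrization
    y_wild = f_hat(X) + rho * eps * np.abs(resid)   # modified outcomes
    f_wild = fit(X, y_wild)                         # refit on wild outcomes
    # How much the refit predictor chases the injected noise.
    return np.mean(rho * eps * np.abs(resid) * (f_wild(X) - f_hat(X)))
```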

A Gap Between the Gaussian RKHS and Neural Networks: An Infinite-Center Asymptotic Analysis

arXiv:2502.16331v2 Announce Type: replace-cross Abstract: Recent works have characterized the function-space inductive bias of infinite-width bounded-norm single-hidden-layer neural networks as a kind of bounded-variation-type space. This novel neural network Banach space encompasses many classical multivariate function spaces, including certain Sobolev spaces and the spectral Barron spaces. Notably, this Banach space also includes functions that exhibit less classical regularity, such as those that only vary in a few directions. On bounded domains, it is well-established that the Gaussian reproducing kernel Hilbert space (RKHS) strictly embeds into this Banach space, demonstrating a clear gap between the Gaussian RKHS and the neural network Banach space. It turns out that when investigating these spaces on unbounded domains, e.g., all of $\mathbb{R}^d$, the story is fundamentally different. We establish the following fundamental result: Certain functions that lie in the Gaussian RKHS have infinite norm in the neural network Banach space. This provides a nontrivial gap between kernel methods and neural networks by exhibiting functions that kernel methods easily represent, whereas neural networks cannot.

Distribution estimation via Flow Matching with Lipschitz guarantees

arXiv:2509.02337v1 Announce Type: new Abstract: Flow Matching, a promising approach in generative modeling, has recently gained popularity. Relying on ordinary differential equations, it offers a simple and flexible alternative to diffusion models, which are currently the state-of-the-art. Despite its empirical success, the mathematical understanding of its statistical power is so far very limited. This is largely due to the sensitivity of theoretical bounds to the Lipschitz constant of the vector field that drives the ODE. In this work, we study the assumptions that lead to controlling this dependency. Based on these results, we derive a convergence rate for the Wasserstein $1$ distance between the estimated distribution and the target distribution which improves on previous results in the high-dimensional setting. This rate applies to certain classes of unbounded distributions and, in particular, does not require $\log$-concavity.
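For orientation, here is a standard conditional flow-matching training step (the generic recipe, not this paper's estimator): regress a velocity field onto the velocity of a straight-line path from noise to data. The network and the linear probability path are illustrative choices.

```python
# Standard conditional flow-matching step (generic recipe, not this paper's
# estimator). Network architecture and the linear path are assumptions.
import torch
import torch.nn as nn

dim = 2
vec_field = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(vec_field.parameters(), lr=1e-3)

def fm_step(x1: torch.Tensor) -> float:
    """One training step: the target velocity of the linear path is x1 - x0."""
    x0 = torch.randn_like(x1)                    # source noise
    t = torch.rand(len(x1), 1)                   # uniform time
    xt = (1 - t) * x0 + t * x1                   # point on the path
    v = vec_field(torch.cat([xt, t], dim=1))     # predicted velocity
    loss = ((v - (x1 - x0)) ** 2).mean()         # match the path velocity
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example: fit to a toy shifted-Gaussian batch.
batch = torch.randn(128, dim) + torch.tensor([3.0, 0.0])
print(fm_step(batch))
```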

FIT-GNN: Faster Inference Time for GNNs that ‘FIT’ in Memory Using Coarsening

arXiv:2410.15001v3 Announce Type: replace-cross Abstract: Scalability of Graph Neural Networks (GNNs) remains a significant challenge. To tackle this, methods like coarsening, condensation, and computation trees are used to train on a smaller graph, resulting in faster computation. Nonetheless, prior research has not adequately addressed the computational costs during the inference phase. This paper presents a novel approach to improve the scalability of GNNs by reducing computational burden during the inference phase using graph coarsening. We demonstrate two different methods -- Extra Nodes and Cluster Nodes. Our study extends the application of graph coarsening for graph-level tasks, including graph classification and graph regression. We conduct extensive experiments on multiple benchmark datasets to evaluate the performance of our approach. Our results show that the proposed method achieves orders of magnitude improvements in single-node inference time compared to traditional approaches. Furthermore, it significantly reduces memory consumption for node and graph classification and regression tasks, enabling efficient training and inference on low-resource devices where conventional methods are impractical. Notably, these computational advantages are achieved while maintaining competitive performance relative to baseline models.
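An illustrative sketch of the basic coarsening algebra: a partition matrix aggregates nodes into clusters, yielding a smaller graph on which a GNN can run. The paper's Extra Nodes and Cluster Nodes constructions are more involved; this shows only the generic reduction.

```python
# Illustrative graph coarsening: a partition matrix P aggregates nodes into
# clusters. The paper's Extra Nodes / Cluster Nodes methods go beyond this.
import numpy as np

def coarsen(A: np.ndarray, X: np.ndarray, assign: np.ndarray):
    """assign[i] = cluster of node i; returns coarse adjacency and features."""
    k = assign.max() + 1
    P = np.eye(k)[assign]                    # n x k one-hot partition matrix
    size = P.sum(axis=0, keepdims=True)      # nodes per cluster
    A_c = P.T @ A @ P                        # coarse (weighted) adjacency
    X_c = (P / size).T @ X                   # mean-pooled cluster features
    return A_c, X_c

A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
X = np.arange(8.0).reshape(4, 2)
A_c, X_c = coarsen(A, X, np.array([0, 0, 1, 1]))   # 4 nodes -> 2 clusters
```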

Variational Uncertainty Decomposition for In-Context Learning

arXiv:2509.02327v1 Announce Type: new Abstract: As large language models (LLMs) gain popularity for in-context prediction tasks, understanding the sources of uncertainty in in-context learning becomes essential to ensuring reliability. The recent hypothesis that in-context learning performs predictive Bayesian inference opens the avenue for Bayesian uncertainty estimation, particularly for decomposing uncertainty into epistemic uncertainty due to lack of in-context data and aleatoric uncertainty inherent in the in-context prediction task. However, this decomposition idea remains under-explored due to the intractability of the latent parameter posterior of the underlying Bayesian model. In this work, we introduce a variational uncertainty decomposition framework for in-context learning that avoids explicitly sampling from the latent parameter posterior: we optimise auxiliary queries as probes to obtain an upper bound on the aleatoric uncertainty of an LLM's in-context learning procedure, which also induces a lower bound on the epistemic uncertainty. Through experiments on synthetic and real-world tasks, we show quantitatively and qualitatively that the decomposed uncertainties obtained from our method exhibit desirable properties of epistemic and aleatoric uncertainty.
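For contrast with the paper's variational bounds, here is the standard entropy-based decomposition of predictive uncertainty, which assumes access to posterior samples; the paper's point is precisely to avoid such sampling.

```python
# Standard entropy-based uncertainty decomposition: total = aleatoric +
# epistemic (mutual information). Unlike the paper's variational bounds,
# this assumes posterior samples of the predictive distribution exist.
import numpy as np

def entropy(p: np.ndarray) -> np.ndarray:
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def decompose(p_samples: np.ndarray):
    """p_samples: (S, C) predictive distributions under S posterior samples."""
    total = entropy(p_samples.mean(axis=0))      # entropy of mean prediction
    aleatoric = entropy(p_samples).mean()        # mean per-sample entropy
    return total, aleatoric, total - aleatoric   # epistemic = mutual info

p = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])  # disagreeing samples
print(decompose(p))                                  # large epistemic part
```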