AI News Archives

Provable Benefits of In-Tool Learning for Large Language Models

arXiv:2508.20755v1 Announce Type: cross Abstract: Tool-augmented language models, equipped with retrieval, memory, or external APIs, are reshaping AI, yet their theoretical advantages remain underexplored. In this paper, we address this gap by demonstrating the benefits of in-tool learning (external retrieval) over in-weight learning (memorization) for factual recall. We show that the number of facts a model can memorize solely in its weights is fundamentally limited by its parameter count. In contrast, we prove that tool-use enables unbounded factual recall via a simple and efficient circuit construction. These results are validated in controlled experiments, where tool-using models consistently outperform memorizing ones. We further show that for pretrained large language models, teaching tool-use and general rules is more effective than finetuning facts into memory. Our work provides both a theoretical and empirical foundation, establishing why tool-augmented workflows are not just practical, but provably more scalable.
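As a rough illustration of the contrast the abstract draws, the sketch below compares a fixed-capacity in-weight store with an in-tool lookup whose recall is bounded only by the external database. The class names and the hard capacity cap are illustrative assumptions, not the paper's circuit construction.

```python
# Toy contrast between in-weight memorization (fixed capacity) and
# in-tool retrieval (capacity bounded only by the external store).
# The classes and the hard capacity cap are illustrative assumptions,
# not the paper's construction.

class InWeightModel:
    """Facts must fit inside a fixed parameter budget."""
    def __init__(self, capacity: int):
        self.capacity = capacity          # stands in for the parameter count
        self.memory: dict[str, str] = {}

    def learn(self, subject: str, fact: str) -> bool:
        if len(self.memory) >= self.capacity:
            return False                  # recall fails once capacity is exceeded
        self.memory[subject] = fact
        return True

    def recall(self, subject: str):
        return self.memory.get(subject)


class InToolModel:
    """The model only learns *how* to query; facts live in an external store."""
    def __init__(self, external_db: dict):
        self.db = external_db             # retrieval tool, unbounded by model size

    def recall(self, subject: str):
        return self.db.get(subject)       # a single lookup circuit suffices


if __name__ == "__main__":
    db = {f"entity_{i}": f"fact_{i}" for i in range(10_000)}
    weights = InWeightModel(capacity=100)
    stored = sum(weights.learn(k, v) for k, v in db.items())
    tool = InToolModel(db)
    print(f"in-weight stored {stored}/{len(db)} facts")
    print("in-tool recall:", tool.recall("entity_9999"))
```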

Deep Reinforcement Learning for Optimal Asset Allocation Using DDPG with TiDE

arXiv:2508.20103v1 Announce Type: cross Abstract: The optimal asset allocation between risky and risk-free assets is a persistent challenge due to the inherent volatility in financial markets. Conventional methods rely on strict distributional assumptions or non-additive reward ratios, which limit their robustness and applicability to investment goals. To overcome these constraints, this study formulates the optimal two-asset allocation problem as a sequential decision-making task within a Markov Decision Process (MDP). This framework enables the application of reinforcement learning (RL) mechanisms to develop dynamic policies based on simulated financial scenarios, without relying on such prerequisites. We use the Kelly criterion to balance immediate reward signals against long-term investment objectives, and we take the novel step of integrating the Time-series Dense Encoder (TiDE) into the Deep Deterministic Policy Gradient (DDPG) RL framework for continuous decision-making. We compare DDPG-TiDE with a simple discrete-action Q-learning RL framework and a passive buy-and-hold investment strategy. Empirical results show that DDPG-TiDE outperforms Q-learning and generates higher risk-adjusted returns than buy-and-hold. These findings suggest that tackling the optimal asset allocation problem by integrating TiDE within a DDPG reinforcement learning framework is a fruitful avenue for further exploration.
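A minimal sketch of the two-asset MDP step with an additive, Kelly-style log-wealth reward is shown below. The return model and all parameter values are illustrative assumptions, not the authors' simulator; a DDPG actor would supply the continuous allocation action.

```python
import numpy as np

# Minimal sketch of a two-asset allocation MDP step with a log-wealth
# (Kelly-style) reward. The return model and all parameter values are
# illustrative assumptions, not the paper's setup.

def step(wealth: float, action: float, rng: np.random.Generator,
         mu: float = 0.07, sigma: float = 0.2, rf: float = 0.02,
         dt: float = 1 / 252):
    """Apply a continuous allocation `action` in [0, 1] to the risky asset."""
    risky_return = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    portfolio_return = action * risky_return + (1 - action) * rf * dt
    new_wealth = wealth * (1 + portfolio_return)
    reward = np.log(new_wealth / wealth)     # additive Kelly-style reward signal
    return new_wealth, reward

rng = np.random.default_rng(0)
w, total = 1.0, 0.0
for _ in range(252):
    w, r = step(w, action=0.6, rng=rng)      # a DDPG actor would output the action
    total += r
print(f"terminal wealth {w:.3f}, cumulative log-return {total:.3f}")
```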

The Joys of Categorical Conformal Prediction

arXiv:2507.04441v3 Announce Type: replace Abstract: Conformal prediction (CP) is an Uncertainty Representation technique that delivers finite-sample calibrated prediction regions for any underlying Machine Learning model. Its status as an Uncertainty Quantification (UQ) tool, though, has remained conceptually opaque: while Conformal Prediction Regions (CPRs) give an ordinal representation of uncertainty (larger regions typically indicate higher uncertainty), they lack the capability to quantify it cardinally (a region twice as large does not imply twice the uncertainty). We adopt a category-theoretic approach to CP -- framing it as a morphism, embedded in a commuting diagram, of two newly-defined categories -- that brings us three joys. First, we show that -- under minimal assumptions -- CP is intrinsically a UQ mechanism, that is, its cardinal UQ capabilities are a structural feature of the method. Second, we demonstrate that CP bridges the Bayesian, frequentist, and imprecise probabilistic approaches to predictive statistical reasoning. Finally, we show that a CPR is the image of a covariant functor. This observation is relevant to AI privacy: It implies that privacy noise added locally does not break the global coverage guarantee.
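The categorical framing aside, the basic split-conformal mechanism the abstract builds on can be sketched in a few lines: absolute-residual scores on a calibration split yield a finite-sample-calibrated prediction region. This is standard split CP, not the paper's construction.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Plain split conformal prediction with absolute-residual scores: the basic
# CP mechanism the abstract builds on (none of the categorical machinery).

def split_conformal_interval(model, X_cal, y_cal, x_new, alpha=0.1):
    scores = np.abs(y_cal - model.predict(X_cal))       # nonconformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))             # finite-sample quantile index
    q = np.sort(scores)[min(k, n) - 1]                   # (capped at the max score here)
    pred = model.predict(x_new.reshape(1, -1))[0]
    return pred - q, pred + q                            # marginal 1 - alpha coverage

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=400)
model = LinearRegression().fit(X[:200], y[:200])         # proper training split
lo, hi = split_conformal_interval(model, X[200:], y[200:], X[0], alpha=0.1)
print(f"90% prediction region: [{lo:.2f}, {hi:.2f}]")
```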

Canonical Bayesian Linear System Identification

arXiv:2507.11535v2 Announce Type: replace Abstract: Standard Bayesian approaches for linear time-invariant (LTI) system identification are hindered by parameter non-identifiability; the resulting complex, multi-modal posteriors make inference inefficient and impractical. We solve this problem by embedding canonical forms of LTI systems within the Bayesian framework. We rigorously establish that inference in these minimal parameterizations fully captures all invariant system dynamics (e.g., transfer functions, eigenvalues, predictive distributions of system outputs) while resolving identifiability. This approach unlocks the use of meaningful, structure-aware priors (e.g., enforcing stability via eigenvalues) and ensures conditions for a Bernstein--von Mises theorem -- a link between Bayesian and frequentist large-sample asymptotics that is broken in standard forms. Extensive simulations with modern MCMC methods highlight advantages over standard parameterizations: canonical forms achieve higher computational efficiency, generate interpretable and well-behaved posteriors, and provide robust uncertainty estimates, particularly from limited data.
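To make "canonical form" concrete: a SISO LTI system can be parameterized directly by its transfer-function coefficients via a companion-matrix state-space realization, so each parameter vector corresponds to a unique set of invariant quantities (poles, transfer function). The sketch below is a generic controller-canonical parameterization, not the paper's specific construction.

```python
import numpy as np

# Sketch of a SISO discrete-time LTI system in companion (controller canonical)
# form: the free parameters are the transfer-function coefficients themselves,
# so invariants such as the poles are identified. Generic construction, not the
# paper's specific parameterization.

def canonical_ss(a: np.ndarray, b: np.ndarray):
    """a: denominator coefficients (a_1..a_n), b: numerator coefficients (b_1..b_n)."""
    n = len(a)
    A = np.zeros((n, n))
    A[0, :] = -a                          # top-row companion structure
    A[1:, :-1] = np.eye(n - 1)
    B = np.zeros((n, 1)); B[0, 0] = 1.0
    C = b.reshape(1, n)
    return A, B, C

def simulate(A, B, C, u):
    x = np.zeros((A.shape[0], 1)); ys = []
    for uk in u:
        ys.append((C @ x).item())
        x = A @ x + B * uk
    return np.array(ys)

A, B, C = canonical_ss(a=np.array([-1.5, 0.7]), b=np.array([1.0, 0.5]))
print("poles:", np.linalg.eigvals(A))     # a stability prior could act directly on these
y = simulate(A, B, C, u=np.ones(5))
```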

Inferring processes within dynamic forest models using hybrid modeling

arXiv:2508.01228v2 Announce Type: replace-cross Abstract: Modeling forest dynamics under novel climatic conditions requires a careful balance between process-based understanding and empirical flexibility. Dynamic Vegetation Models (DVMs) represent ecological processes mechanistically, but their performance is prone to misspecified assumptions about functional forms. Inferring the structure of these processes and their functional forms correctly from data remains a major challenge because current approaches, such as plug-in estimators, have proven ineffective. We introduce Forest Informed Neural Networks (FINN), a hybrid modeling approach that combines a forest gap model with deep neural networks (DNNs). FINN replaces selected processes with DNNs, which are then calibrated alongside the other mechanistic components in one unified step. In a case study on the Barro Colorado Island 50-ha plot, we demonstrate that replacing the growth process with a DNN improves predictive performance and succession trajectories compared to a mechanistic version of FINN. Furthermore, we discovered that the DNN learned an ecologically plausible, improved functional form of the growth process, which we extracted from the DNN using explainable AI. In conclusion, our new hybrid modeling approach offers a versatile opportunity to infer forest dynamics from data and to improve forecasts of ecosystem trajectories under unprecedented environmental change.
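The hybrid idea can be sketched generically: a small neural network stands in for the growth sub-process inside an otherwise mechanistic update, and because everything is differentiable, the network and the mechanistic parameters are calibrated in one step. The state variables, the mortality rule, and all constants below are hypothetical, not FINN's actual equations.

```python
import torch
import torch.nn as nn

# Sketch of the hybrid-modeling idea: a neural network replaces the growth
# process inside an otherwise mechanistic cohort update, so both are calibrated
# jointly by gradient descent. State variables, the mortality rule, and all
# constants are hypothetical, not FINN's actual equations.

growth_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

def step(dbh, light, params):
    """One annual update for a cohort: learned growth + mechanistic mortality."""
    features = torch.stack([dbh, light], dim=-1)
    growth = torch.nn.functional.softplus(growth_net(features)).squeeze(-1)  # DNN sub-process
    dbh_next = dbh + growth
    mortality = torch.sigmoid(params["m0"] - params["m1"] * growth)          # mechanistic part
    return dbh_next, mortality

params = {"m0": torch.tensor(0.0, requires_grad=True),
          "m1": torch.tensor(1.0, requires_grad=True)}
optim = torch.optim.Adam(list(growth_net.parameters()) + list(params.values()), lr=1e-2)

dbh = torch.rand(64) * 50            # fake cohort diameters (cm)
light = torch.rand(64)               # fake light availability
dbh_obs = dbh + 0.5                  # fake observations: ~0.5 cm growth per year

dbh_pred, _ = step(dbh, light, params)
loss = ((dbh_pred - dbh_obs) ** 2).mean()   # one unified calibration step
loss.backward()
optim.step()
```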

Towards Trustworthy Amortized Bayesian Model Comparison

arXiv:2508.20614v1 Announce Type: new Abstract: Amortized Bayesian model comparison (BMC) enables fast probabilistic ranking of models via simulation-based training of neural surrogates. However, the reliability of neural surrogates deteriorates when simulation models are misspecified - the very case where model comparison is most needed. Thus, we supplement simulation-based training with a self-consistency (SC) loss on unlabeled real data to improve BMC estimates under empirical distribution shifts. Using a numerical experiment and two case studies with real data, we compare amortized evidence estimates with and without SC against analytic or bridge sampling benchmarks. SC improves calibration under model misspecification when analytic likelihoods are available. However, it offers limited gains with neural surrogate likelihoods, making it most practical for trustworthy BMC when likelihoods are exact.
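Self-consistency losses in amortized Bayesian inference typically exploit the identity log p(y) = log p(y|theta) + log p(theta) - log p(theta|y), which must take the same value for every theta; penalizing its variance over posterior draws on unlabeled real data enforces this. Whether the paper's SC loss takes exactly this form is an assumption, and the objects below are placeholders.

```python
import torch

# Sketch of a self-consistency (SC) loss of the kind used in amortized Bayesian
# inference: log p(y) = log p(y|theta) + log p(theta) - log p(theta|y) is
# constant in theta, so its variance over posterior draws is penalized on
# unlabeled real data. Whether this matches the paper's exact SC loss is an
# assumption; posterior/prior/likelihood are user-supplied placeholders.

def self_consistency_loss(y, posterior, prior, likelihood, n_draws=8):
    thetas = posterior.sample(y, n_draws)                    # draws from the neural posterior
    log_marginal = (likelihood.log_prob(y, thetas)
                    + prior.log_prob(thetas)
                    - posterior.log_prob(thetas, y))         # should be constant in theta
    return log_marginal.var()                                 # zero iff self-consistent
```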

A Metropolis-Adjusted Langevin Algorithm for Sampling Jeffreys Prior

arXiv:2504.06372v3 Announce Type: replace-cross Abstract: Inference and estimation are fundamental in statistics, system identification, and machine learning. When prior knowledge about the system is available, Bayesian analysis provides a natural framework for encoding it through a prior distribution. In practice, such knowledge is often too vague to specify a full prior distribution, motivating the use of default 'uninformative' priors that minimize subjective bias. Jeffreys prior is an appealing uninformative prior because: (i) it is invariant under any re-parameterization of the model, and (ii) it encodes the intrinsic geometric structure of the parameter space through the Fisher information matrix, which in turn enhances the diversity of parameter samples. Despite these benefits, drawing samples from Jeffreys prior is challenging. In this paper, we develop a general sampling scheme using the Metropolis-Adjusted Langevin Algorithm that enables sampling of parameter values from Jeffreys prior; the method extends naturally to nonlinear state-space models. The resulting samples can be directly used in sampling-based system identification methods and Bayesian experiment design, providing an objective, information-geometric description of parameter uncertainty. Several numerical examples demonstrate the efficiency and accuracy of the proposed scheme.
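A generic MALA update targeting an unnormalized log-density is sketched below; for the Jeffreys prior, `log_target` would return 0.5 * logdet of the Fisher information at theta, a model-specific routine left as a user-supplied placeholder here.

```python
import numpy as np

# Generic Metropolis-Adjusted Langevin step targeting an unnormalized
# log-density. For the Jeffreys prior, log_target(theta) would return
# 0.5 * logdet(Fisher information at theta); that routine is model-specific
# and left as a placeholder.

def mala_step(theta, log_target, grad_log_target, eps, rng):
    noise = rng.standard_normal(theta.shape)
    prop = theta + 0.5 * eps**2 * grad_log_target(theta) + eps * noise

    def log_q(to, frm):
        # log of the (asymmetric) Gaussian Langevin proposal density, up to a constant
        mean = frm + 0.5 * eps**2 * grad_log_target(frm)
        return -np.sum((to - mean) ** 2) / (2 * eps**2)

    log_alpha = (log_target(prop) + log_q(theta, prop)
                 - log_target(theta) - log_q(prop, theta))
    if np.log(rng.uniform()) < log_alpha:
        return prop, True       # accepted
    return theta, False         # rejected
```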

Transformers Meet In-Context Learning: A Universal Approximation Theory

arXiv:2506.05200v2 Announce Type: replace-cross Abstract: Large language models are capable of in-context learning, the ability to perform new tasks at test time using a handful of input-output examples, without parameter updates. We develop a universal approximation theory to elucidate how transformers enable in-context learning. For a general class of functions (each representing a distinct task), we demonstrate how to construct a transformer that, without any further weight updates, can predict based on a few noisy in-context examples with vanishingly small risk. Unlike prior work that frames transformers as approximators of optimization algorithms (e.g., gradient descent) for statistical learning tasks, we integrate Barron's universal function approximation theory with the algorithm approximator viewpoint. Our approach yields approximation guarantees that are not constrained by the effectiveness of the optimization algorithms being mimicked, extending far beyond convex problems like linear regression. The key is to show that (i) any target function can be nearly linearly represented, with small $\ell_1$-norm, over a set of universal features, and (ii) a transformer can be constructed to find the linear representation -- akin to solving Lasso -- at test time.
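The "solving Lasso at test time" ingredient can be illustrated outside any transformer: given a few noisy in-context examples expressed in a fixed feature basis, an l1-penalized fit recovers a sparse linear representation of the task. The random Fourier features below are an illustrative stand-in for the paper's universal features, not its actual construction.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustration of the abstract's key ingredient, outside any transformer: a
# sparse (small l1-norm) linear representation of a task over a fixed feature
# basis, recovered from a handful of noisy in-context examples via Lasso.
# The random Fourier basis is an illustrative stand-in for the paper's
# universal features.

rng = np.random.default_rng(0)
omegas = rng.normal(size=64)                             # fixed "universal" feature frequencies
phases = rng.uniform(0, 2 * np.pi, size=64)
features = lambda x: np.cos(x * omegas + phases)         # shared basis across tasks

target = lambda x: np.sin(3 * x) + 0.5 * np.cos(x)       # one task from the function class
x_ctx = rng.uniform(-2, 2, size=(20, 1))                 # a handful of in-context examples
y_ctx = target(x_ctx).ravel() + 0.05 * rng.normal(size=20)

fit = Lasso(alpha=1e-3).fit(features(x_ctx), y_ctx)      # the Lasso the transformer mimics
x_query = np.array([[0.7]])
print("prediction:", fit.predict(features(x_query))[0],
      "truth:", target(x_query).item())
```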

Dimension Agnostic Testing of Survey Data Credibility through the Lens of Regression

arXiv:2508.20616v1 Announce Type: cross Abstract: Assessing whether a sample survey credibly represents the population is a critical question for ensuring the validity of downstream research. Generally, this problem reduces to estimating the distance between two high-dimensional distributions, which typically requires a number of samples that grows exponentially with the dimension. However, depending on the model used for data analysis, the conclusions drawn from the data may remain consistent across different underlying distributions. In this context, we propose a task-based approach to assess the credibility of sampled surveys. Specifically, we introduce a model-specific distance metric to quantify this notion of credibility. We also design an algorithm to verify the credibility of survey data in the context of regression models. Notably, the sample complexity of our algorithm is independent of the data dimension. This efficiency stems from the fact that the algorithm focuses on verifying the credibility of the survey data rather than reconstructing the underlying regression model. Furthermore, we show that if one attempts to verify credibility by reconstructing the regression model, the sample complexity scales linearly with the dimensionality of the data. We prove the theoretical correctness of our algorithm and numerically demonstrate our algorithm's performance.
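A hedged illustration of the "task-based" viewpoint: rather than estimating a distance between high-dimensional distributions, one can compare how the downstream regression task behaves when fit on the survey versus on a reference sample. The loss-gap discrepancy below is an illustrative stand-in, not the paper's model-specific distance or verification algorithm.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hedged illustration of "task-based" credibility: compare how the downstream
# regression task behaves when fit on the survey versus on a reference sample,
# instead of comparing the full high-dimensional distributions. The loss-gap
# discrepancy is an illustrative stand-in, not the paper's metric or algorithm.

def task_discrepancy(X_survey, y_survey, X_ref, y_ref):
    m_survey = LinearRegression().fit(X_survey, y_survey)
    m_ref = LinearRegression().fit(X_ref, y_ref)
    mse = lambda m: np.mean((m.predict(X_ref) - y_ref) ** 2)
    return mse(m_survey) - mse(m_ref)        # near zero when the survey supports the same task

rng = np.random.default_rng(0)
d = 200                                       # high-dimensional covariates
beta = rng.normal(size=d)

def draw(n):
    X = rng.normal(size=(n, d))
    return X, X @ beta + rng.normal(size=n)

X_ref, y_ref = draw(1000)
X_survey, y_survey = draw(500)                # credible survey: drawn from the same population
print("discrepancy:", task_discrepancy(X_survey, y_survey, X_ref, y_ref))
```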

Bounds in Wasserstein Distance for Locally Stationary Processes

arXiv:2412.03414v2 Announce Type: replace-cross Abstract: Locally stationary processes (LSPs) constitute an essential modeling paradigm for capturing the nuanced dynamics inherent in time series data whose statistical characteristics, including mean and variance, evolve smoothly across time. In this paper, we introduce a novel conditional probability distribution estimator specifically tailored for LSPs, employing the Nadaraya-Watson (NW) kernel smoothing methodology. The NW estimator, a prominent local averaging technique, leverages kernel smoothing to approximate the conditional distribution of a response variable given its covariates. We rigorously establish convergence rates for the NW-based conditional probability estimator in the univariate setting under the Wasserstein metric, providing explicit bounds and conditions that guarantee optimal performance. Extending this theoretical framework, we subsequently generalize our analysis to the multivariate scenario using the sliced Wasserstein distance, an approach particularly advantageous in circumventing the computational and analytical challenges typically associated with high-dimensional settings. To corroborate our theoretical contributions, we conduct extensive numerical simulations on synthetic datasets and provide empirical validations using real-world data, highlighting the estimator's effectiveness in capturing intricate temporal dependencies and its relevance for analyzing complex nonstationary phenomena.
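The estimator at the core of the analysis admits a short sketch: kernel weights over the covariate define a weighted empirical distribution for the response, which can then be compared to a reference sample in the 1-Wasserstein metric. The Gaussian kernel, bandwidth, and data-generating process below are illustrative choices, not the paper's setup.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Sketch of the Nadaraya-Watson conditional distribution estimator: kernel
# weights over the covariate define a weighted empirical law for the response,
# compared here to a reference sample in the 1-Wasserstein metric. Kernel,
# bandwidth, and data-generating process are illustrative choices.

def nw_conditional_weights(x_obs, x0, bandwidth):
    k = np.exp(-0.5 * ((x_obs - x0) / bandwidth) ** 2)    # Gaussian kernel
    return k / k.sum()                                     # NW local-averaging weights

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=2000)
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=2000)   # smoothly varying conditional law

x0 = 0.3
w = nw_conditional_weights(x, x0, bandwidth=0.05)
reference = np.sin(2 * np.pi * x0) + 0.2 * rng.normal(size=5000)  # draws from the true conditional
print("W1 error:", wasserstein_distance(y, reference, u_weights=w))
```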