Archives AI News

Canonical Bayesian Linear System Identification

arXiv:2507.11535v2 Announce Type: replace Abstract: Standard Bayesian approaches for linear time-invariant (LTI) system identification are hindered by parameter non-identifiability; the resulting complex, multi-modal posteriors make inference inefficient and impractical. We solve this problem by embedding canonical forms of LTI systems within the Bayesian framework. We rigorously establish that inference in these minimal parameterizations fully captures all invariant system dynamics (e.g., transfer functions, eigenvalues, predictive distributions of system outputs) while resolving identifiability. This approach unlocks the use of meaningful, structure-aware priors (e.g., enforcing stability via eigenvalues) and ensures conditions for a Bernstein-von Mises theorem, a link between Bayesian and frequentist large-sample asymptotics that is broken in standard forms. Extensive simulations with modern MCMC methods highlight advantages over standard parameterizations: canonical forms achieve higher computational efficiency, generate interpretable and well-behaved posteriors, and provide robust uncertainty estimates, particularly from limited data.
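
As a concrete illustration of why canonical coordinates help, here is a minimal NumPy sketch (ours, not the authors' code) of a companion-form, i.e. controllable canonical, parameterization together with a stability-enforcing log-prior; `companion_form` and `log_prior_stable` are illustrative names, and a real implementation would run MCMC over exactly these coordinates.

```python
import numpy as np

def companion_form(a):
    """State matrix in controllable canonical (companion) form for an
    order-n system with denominator coefficients a = [a_1, ..., a_n]."""
    n = len(a)
    A = np.zeros((n, n))
    A[0, :] = -np.asarray(a)
    A[1:, :n - 1] = np.eye(n - 1)
    return A

def log_prior_stable(a, scale=1.0):
    """Structure-aware log-prior (illustrative): Gaussian on the canonical
    coefficients, with zero mass on unstable systems, i.e. any eigenvalue
    on or outside the unit circle."""
    eig = np.linalg.eigvals(companion_form(a))
    if np.any(np.abs(eig) >= 1.0):
        return -np.inf
    return -0.5 * np.sum((np.asarray(a) / scale) ** 2)

# Every similarity-equivalent dense realization (A, B, C) maps to the same
# canonical coefficients, so the posterior over `a` has no redundant
# directions left over from the similarity-transform ambiguity.
a = [0.5, -0.3]
print(companion_form(a))
print(log_prior_stable(a))
```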

Inferring processes within dynamic forest models using hybrid modeling

arXiv:2508.01228v2 Announce Type: replace-cross Abstract: Modeling forest dynamics under novel climatic conditions requires a careful balance between process-based understanding and empirical flexibility. Dynamic Vegetation Models (DVMs) represent ecological processes mechanistically, but their performance is prone to misspecified assumptions about functional forms. Inferring the structure of these processes and their functional forms correctly from data remains a major challenge because current approaches, such as plug-in estimators, have proven ineffective. We introduce Forest Informed Neural Networks (FINN), a hybrid modeling approach that combines a forest gap model with deep neural networks (DNNs). FINN replaces processes with DNNs, which are then calibrated alongside the other mechanistic components in one unified step. In a case study on the Barro Colorado Island 50-ha plot, we demonstrate that replacing the growth process with a DNN improves predictive performance and succession trajectories compared to a mechanistic version of FINN. Furthermore, we discovered that the DNN learned an ecologically plausible, improved functional form of the growth process, which we extracted from the DNN using explainable AI. In conclusion, our new hybrid modeling approach offers a versatile opportunity to infer forest dynamics from data and to improve forecasts of ecosystem trajectories under unprecedented environmental change.
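
To make the hybrid idea concrete, the sketch below (a toy of our own; the growth network, one-line mortality term, and dimensions are stand-ins, not FINN's actual structure) shows a differentiable simulation loop in which a small neural network replaces only the growth process, so it can be calibrated jointly with the remaining mechanistic parameters by backpropagating through the simulator.

```python
import torch
import torch.nn as nn

class HybridGapModel(nn.Module):
    """Toy FINN-style hybrid: a neural network replaces the growth process
    inside an otherwise mechanistic state update."""
    def __init__(self, n_env):
        super().__init__()
        self.growth_net = nn.Sequential(   # learned growth process
            nn.Linear(n_env + 1, 16), nn.Tanh(),
            nn.Linear(16, 1), nn.Softplus()  # Softplus keeps increments >= 0
        )
        self.mortality = nn.Parameter(torch.tensor(0.02))  # mechanistic part

    def forward(self, dbh, env, steps=10):
        for _ in range(steps):
            inc = self.growth_net(torch.cat([dbh, env], dim=-1))
            dbh = dbh + inc                    # growth (learned)
            dbh = dbh * (1 - self.mortality)   # mortality (mechanistic)
        return dbh

model = HybridGapModel(n_env=3)
dbh = torch.rand(8, 1)        # stem diameters for 8 trees
env = torch.rand(8, 3)        # environmental covariates
print(model(dbh, env).shape)  # gradients flow through the whole simulation
```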

Towards Trustworthy Amortized Bayesian Model Comparison

arXiv:2508.20614v1 Announce Type: new Abstract: Amortized Bayesian model comparison (BMC) enables fast probabilistic ranking of models via simulation-based training of neural surrogates. However, the reliability of neural surrogates deteriorates when simulation models are misspecified, the very case where model comparison is most needed. Thus, we supplement simulation-based training with a self-consistency (SC) loss on unlabeled real data to improve BMC estimates under empirical distribution shifts. Using a numerical experiment and two case studies with real data, we compare amortized evidence estimates with and without SC against analytic or bridge sampling benchmarks. SC improves calibration under model misspecification when analytic likelihoods are available. However, it offers limited gains with neural surrogate likelihoods, making it most practical for trustworthy BMC when likelihoods are exact.
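
The self-consistency idea can be written down compactly: by Bayes' rule, log p(y) = log p(y|theta) + log p(theta) - log p(theta|y) for every theta, so on real data the right-hand side should not vary with theta. Below is a minimal PyTorch rendering of such a penalty; the three callables are an assumed interface of ours, not a specific library's API.

```python
import torch

def self_consistency_loss(log_lik, log_prior, log_post, y, theta_samples):
    """Penalize the variance, across theta draws, of the Bayes-rule evidence
    estimate log p(y|theta) + log p(theta) - log p(theta|y); it is zero
    exactly when the three components are mutually self-consistent."""
    log_evidence = torch.stack([
        log_lik(y, th) + log_prior(th) - log_post(th, y)
        for th in theta_samples
    ])
    return log_evidence.var()

# Toy check with a conjugate-Gaussian model (prior N(0,1), likelihood
# N(theta,1), posterior N(y/2, 1/2)); additive constants cancel in the
# variance, so the SC loss is ~0 here.
log_lik   = lambda y, th: -0.5 * (y - th) ** 2
log_prior = lambda th: -0.5 * th ** 2
log_post  = lambda th, y: -((th - y / 2) ** 2)
y = torch.tensor(1.3)
thetas = [torch.tensor(t) for t in (-1.0, 0.0, 2.0)]
print(self_consistency_loss(log_lik, log_prior, log_post, y, thetas))
```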

A Metropolis-Adjusted Langevin Algorithm for Sampling Jeffreys Prior

arXiv:2504.06372v3 Announce Type: replace-cross Abstract: Inference and estimation are fundamental in statistics, system identification, and machine learning. When prior knowledge about the system is available, Bayesian analysis provides a natural framework for encoding it through a prior distribution. In practice, such knowledge is often too vague to specify a full prior distribution, motivating the use of default 'uninformative' priors that minimize subjective bias. Jeffreys prior is an appealing uninformative prior because (i) it is invariant under any re-parameterization of the model, and (ii) it encodes the intrinsic geometric structure of the parameter space through the Fisher information matrix, which in turn enhances the diversity of parameter samples. Despite these benefits, drawing samples from Jeffreys prior is challenging. In this paper, we develop a general sampling scheme using the Metropolis-Adjusted Langevin Algorithm that enables sampling of parameter values from Jeffreys prior; the method extends naturally to nonlinear state-space models. The resulting samples can be directly used in sampling-based system identification methods and Bayesian experiment design, providing an objective, information-geometric description of parameter uncertainty. Several numerical examples demonstrate the efficiency and accuracy of the proposed scheme.
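
For intuition, here is a self-contained toy MALA sampler targeting the Jeffreys prior of the scalar model N(0, sigma^2), whose Fisher information is I(sigma) = 2/sigma^2, so the prior density is proportional to 1/sigma; we truncate to sigma in (0.1, 10) so the target is proper. This is our illustration of the mechanics, not the paper's general state-space scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_jeffreys(sigma):
    """log sqrt(det I(sigma)) for the N(0, sigma^2) model: the Fisher
    information is I(sigma) = 2 / sigma^2, so the density is ~ 1/sigma."""
    return -np.log(sigma)

def grad_log_jeffreys(sigma):
    return -1.0 / sigma

def mala(x0, step, n_steps, lo=0.1, hi=10.0):
    """Metropolis-adjusted Langevin over sigma, truncated to (lo, hi) so the
    1/sigma target is normalizable; out-of-range proposals are rejected."""
    x, out = x0, []
    for _ in range(n_steps):
        mean_fwd = x + 0.5 * step * grad_log_jeffreys(x)
        prop = mean_fwd + np.sqrt(step) * rng.standard_normal()
        if lo < prop < hi:
            mean_bwd = prop + 0.5 * step * grad_log_jeffreys(prop)
            log_alpha = (log_jeffreys(prop) - log_jeffreys(x)
                         - (x - mean_bwd) ** 2 / (2 * step)
                         + (prop - mean_fwd) ** 2 / (2 * step))
            if np.log(rng.uniform()) < log_alpha:
                x = prop
        out.append(x)
    return np.array(out)

samples = mala(x0=1.0, step=0.05, n_steps=20000)
print(samples.mean())  # ~ (10 - 0.1) / log(10 / 0.1) = 2.15 for density 1/sigma
```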

Transformers Meet In-Context Learning: A Universal Approximation Theory

arXiv:2506.05200v2 Announce Type: replace-cross Abstract: Large language models are capable of in-context learning, the ability to perform new tasks at test time using a handful of input-output examples, without parameter updates. We develop a universal approximation theory to elucidate how transformers enable in-context learning. For a general class of functions (each representing a distinct task), we demonstrate how to construct a transformer that, without any further weight updates, can predict based on a few noisy in-context examples with vanishingly small risk. Unlike prior work that frames transformers as approximators of optimization algorithms (e.g., gradient descent) for statistical learning tasks, we integrate Barron's universal function approximation theory with the algorithm approximator viewpoint. Our approach yields approximation guarantees that are not constrained by the effectiveness of the optimization algorithms being mimicked, extending far beyond convex problems like linear regression. The key is to show that (i) any target function can be nearly linearly represented, with small $\ell_1$-norm, over a set of universal features, and (ii) a transformer can be constructed to find the linear representation -- akin to solving Lasso -- at test time.
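
A toy numerical rendering of the key step: fix a dictionary of random "universal" features, generate a few noisy in-context examples of one task, and recover a sparse (small $\ell_1$-norm) linear readout by solving a Lasso problem, the computation the constructed transformer is shown to emulate at test time. The dictionary size, target task, and regularization below are illustrative choices of ours.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Fixed dictionary of "universal" random ReLU features, shared across tasks.
d, n_feat, n_ctx = 2, 512, 40
W = rng.standard_normal((n_feat, d))
b = rng.uniform(-1, 1, n_feat)
feats = lambda X: np.maximum(X @ W.T + b, 0.0)

# One "task": a smooth target observed through a few noisy in-context pairs.
target = lambda X: np.sin(X[:, 0]) + 0.5 * np.cos(2 * X[:, 1])
X_ctx = rng.uniform(-1, 1, (n_ctx, d))
y_ctx = target(X_ctx) + 0.05 * rng.standard_normal(n_ctx)

# Sparse linear readout over the universal features, found by Lasso; the
# paper's construction shows a transformer can perform this step in-context.
lasso = Lasso(alpha=1e-2).fit(feats(X_ctx), y_ctx)
X_q = rng.uniform(-1, 1, (5, d))
print(np.c_[lasso.predict(feats(X_q)), target(X_q)])  # approximate match
```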

Dimension Agnostic Testing of Survey Data Credibility through the Lens of Regression

arXiv:2508.20616v1 Announce Type: cross Abstract: Assessing whether a sample survey credibly represents the population is a critical question for ensuring the validity of downstream research. Generally, this problem reduces to estimating the distance between two high-dimensional distributions, which typically requires a number of samples that grows exponentially with the dimension. However, depending on the model used for data analysis, the conclusions drawn from the data may remain consistent across different underlying distributions. In this context, we propose a task-based approach to assess the credibility of sampled surveys. Specifically, we introduce a model-specific distance metric to quantify this notion of credibility. We also design an algorithm to verify the credibility of survey data in the context of regression models. Notably, the sample complexity of our algorithm is independent of the data dimension. This efficiency stems from the fact that the algorithm focuses on verifying the credibility of the survey data rather than reconstructing the underlying regression model. Furthermore, we show that if one attempts to verify credibility by reconstructing the regression model, the sample complexity scales linearly with the dimensionality of the data. We prove the theoretical correctness of our algorithm and numerically demonstrate our algorithm's performance.
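
To convey the flavor of a model-specific distance, the sketch below compares how differently a fixed regression model performs under the survey sample versus a reference sample: only scalar losses are compared, which is why the required sample size need not grow with the dimension. This is our reading of the abstract's idea, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def task_distance(loss, model, X_s, y_s, X_r, y_r):
    """Gap between the model's average loss under the survey sample (X_s, y_s)
    and under the reference sample (X_r, y_r); a scalar comparison, so the
    estimate does not deteriorate with the covariate dimension."""
    return abs(loss(model, X_s, y_s) - loss(model, X_r, y_r))

sq_loss = lambda w, X, y: np.mean((X @ w - y) ** 2)

d = 200                                       # high-dimensional covariates
w = rng.standard_normal(d) / np.sqrt(d)       # candidate regression model
X_ref, X_srv = rng.standard_normal((500, d)), rng.standard_normal((500, d))
y_ref = X_ref @ w + 0.1 * rng.standard_normal(500)
y_srv = X_srv @ w + 0.5 + 0.1 * rng.standard_normal(500)  # biased reporting

print(task_distance(sq_loss, w, X_srv, y_srv, X_ref, y_ref))  # ~0.25
```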

Bounds in Wasserstein Distance for Locally Stationary Processes

arXiv:2412.03414v2 Announce Type: replace-cross Abstract: Locally stationary processes (LSPs) constitute an essential modeling paradigm for capturing the nuanced dynamics inherent in time series data whose statistical characteristics, including mean and variance, evolve smoothly across time. In this paper, we introduce a novel conditional probability distribution estimator specifically tailored for LSPs, employing the Nadaraya-Watson (NW) kernel smoothing methodology. The NW estimator, a prominent local averaging technique, leverages kernel smoothing to approximate the conditional distribution of a response variable given its covariates. We rigorously establish convergence rates for the NW-based conditional probability estimator in the univariate setting under the Wasserstein metric, providing explicit bounds and conditions that guarantee optimal performance. Extending this theoretical framework, we subsequently generalize our analysis to the multivariate scenario using the sliced Wasserstein distance, an approach particularly advantageous in circumventing the computational and analytical challenges typically associated with high-dimensional settings. To corroborate our theoretical contributions, we conduct extensive numerical simulations on synthetic datasets and provide empirical validations using real-world data, highlighting the estimator's effectiveness in capturing intricate temporal dependencies and its relevance for analyzing complex nonstationary phenomena.
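
In the univariate case the estimator itself is short: the conditional law of Y given X = x is the empirical distribution of the observed responses reweighted by a kernel in the covariate, and its Wasserstein error can be checked directly. A toy sketch under assumptions of ours (Gaussian kernel, i.i.d. rather than locally stationary data):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(3)

n, h = 2000, 0.1
X = rng.uniform(0, 1, n)
Y = np.sin(2 * np.pi * X) + 0.2 * rng.standard_normal(n)

def nw_conditional_weights(x, X, h):
    """Nadaraya-Watson weights: the conditional distribution of Y | X = x is
    the empirical law of the Y_i weighted by a Gaussian kernel in X."""
    w = np.exp(-0.5 * ((X - x) / h) ** 2)
    return w / w.sum()

x0 = 0.5
w = nw_conditional_weights(x0, X, h)
ref = np.sin(2 * np.pi * x0) + 0.2 * rng.standard_normal(5000)  # true law
print(wasserstein_distance(Y, ref, u_weights=w))  # small => good local fit
```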

Latent Factor Point Processes for Patient Representation in Electronic Health Records

arXiv:2508.20327v1 Announce Type: cross Abstract: Electronic health records (EHR) contain valuable longitudinal patient-level information, yet most statistical methods reduce the irregular timing of EHR codes into simple counts, thereby discarding rich temporal structure. Existing temporal models often impose restrictive parametric assumptions or are tailored to code-level rather than patient-level tasks. We propose the latent factor point process model, which represents code occurrences as a high-dimensional point process whose conditional intensity is driven by a low-dimensional latent Poisson process. This low-rank structure reflects the clinical reality that thousands of codes are governed by a small number of underlying disease processes, while enabling statistically efficient estimation in high dimensions. Building on this model, we introduce the Fourier-Eigen embedding, a patient representation constructed from the spectral density matrix of the observed process. We establish theoretical guarantees showing that these embeddings efficiently capture subgroup-specific temporal patterns for downstream classification and clustering. Simulations and an application to an Alzheimer's disease EHR cohort demonstrate the practical advantages of our approach in uncovering clinically meaningful heterogeneity.
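
As a rough sketch of what a spectral patient-level embedding could look like (our guess at the flavor of the construction: the binning, the number of frequencies, and the rank-1 periodogram are stand-ins for the paper's estimator of the spectral density matrix):

```python
import numpy as np

rng = np.random.default_rng(4)

def fourier_eigen_embedding(counts, n_freq=20):
    """Illustrative patient embedding from binned code counts (T bins x p
    codes): estimate the spectral density matrix at the first n_freq Fourier
    frequencies via the periodogram and stack its scaled leading eigenvectors."""
    T, p = counts.shape
    Z = np.fft.rfft(counts - counts.mean(0), axis=0)  # (freq, p) coefficients
    emb = []
    for k in range(1, n_freq + 1):
        S_k = np.outer(Z[k], Z[k].conj()) / T         # rank-1 periodogram
        eigval, eigvec = np.linalg.eigh(S_k)
        emb.append(np.abs(eigvec[:, -1]) * np.sqrt(eigval[-1]))
    return np.concatenate(emb)

counts = rng.poisson(1.0, (256, 30))          # one synthetic patient, 30 codes
print(fourier_eigen_embedding(counts).shape)  # (20 * 30,)
```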

Unbiased Stochastic Optimization for Gaussian Processes on Finite Dimensional RKHS

arXiv:2508.20588v1 Announce Type: cross Abstract: Current methods for stochastic hyperparameter learning in Gaussian Processes (GPs) rely on approximations, such as computing biased stochastic gradients or using inducing points in stochastic variational inference. However, such methods are not guaranteed to converge to a stationary point of the true marginal likelihood. In this work, we propose algorithms for exact stochastic inference of GPs with kernels that induce a Reproducing Kernel Hilbert Space (RKHS) of moderate finite dimension. Our approach can also be extended to infinite-dimensional RKHSs at the cost of forgoing exactness. For both finite- and infinite-dimensional RKHSs, our method achieves better experimental results than existing methods when memory resources limit the feasible batch size and the possible number of inducing points.
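
The finite-dimensional case admits a short demonstration of why exactness is possible: when k(x, x') = phi(x)^T phi(x') with an m-dimensional feature map, the log marginal likelihood depends on the data only through Phi^T Phi, Phi^T y, and y^T y, which can be accumulated over minibatches without any approximation. The trigonometric feature map and hyperparameters below are illustrative; this shows the reduction, not the paper's optimization algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

def features(X, freqs):  # fixed trigonometric feature map (illustrative)
    return np.c_[np.cos(X @ freqs), np.sin(X @ freqs)]

def exact_log_marginal(X, y, freqs, noise=0.1, batch=128):
    """GP log marginal likelihood for a finite-dimensional kernel, computed
    from sufficient statistics accumulated minibatch by minibatch; `noise`
    is the observation noise variance and the weight prior is N(0, I)."""
    m = 2 * freqs.shape[1]
    A, b, yy, n = np.zeros((m, m)), np.zeros(m), 0.0, len(y)
    for i in range(0, n, batch):                 # one pass, any batch size
        Phi = features(X[i:i + batch], freqs)
        A += Phi.T @ Phi
        b += Phi.T @ y[i:i + batch]
        yy += y[i:i + batch] @ y[i:i + batch]
    K = A / noise + np.eye(m)                    # m x m system, never n x n
    quad = (yy - b @ np.linalg.solve(K, b) / noise) / noise
    logdet = np.linalg.slogdet(K)[1] + n * np.log(noise)
    return -0.5 * (quad + logdet + n * np.log(2 * np.pi))

X = rng.uniform(-3, 3, (1000, 1))
y = np.sin(X[:, 0]) + np.sqrt(0.1) * rng.standard_normal(1000)
freqs = rng.standard_normal((1, 8))
print(exact_log_marginal(X, y, freqs))
```

The identity behind the m x m system is the Woodbury/matrix-determinant pair applied to Phi Phi^T + noise * I, which is why the result is identical for every batch size.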

Transfer Learning for Classification under Decision Rule Drift with Application to Optimal Individualized Treatment Rule Estimation

arXiv:2508.20942v1 Announce Type: new Abstract: In this paper, we extend the transfer learning classification framework from regression function-based methods to decision rules. We propose a novel methodology for modeling posterior drift through Bayes decision rules. By exploiting the geometric transformation of the Bayes decision boundary, our method reformulates the problem as a low-dimensional empirical risk minimization problem. Under mild regularity conditions, we establish the consistency of our estimators and derive risk bounds. Moreover, we illustrate the broad applicability of our method by adapting it to the estimation of optimal individualized treatment rules. Extensive simulation studies and analyses of real-world data further demonstrate the superior performance and robustness of our approach.
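
The geometric idea can be illustrated in two dimensions: keep the source decision score fixed and search only over a low-dimensional transformation of its boundary, chosen by empirical risk minimization on a small labeled target sample. The two-parameter family and grid search below are stand-ins of ours for the paper's formulation.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)

f_source = lambda X: X[:, 0] - 0.5 * X[:, 1]   # source decision score

X_t = rng.standard_normal((100, 2))
y_t = (X_t[:, 0] - 0.2 * X_t[:, 1] > 0.4).astype(int)  # drifted target rule

def risk(shift, tilt):
    """0-1 empirical risk of the source rule after a two-parameter geometric
    transformation of its boundary (a shift and a tilt)."""
    score = f_source(X_t) + tilt * X_t[:, 1] - shift
    return np.mean((score > 0).astype(int) != y_t)

grid = np.linspace(-1, 1, 41)
shift, tilt = min(product(grid, grid), key=lambda p: risk(*p))
print(shift, tilt, risk(shift, tilt))  # recovers roughly (0.4, 0.3)
```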