Archives AI News

Probabilistic Geometric Principal Component Analysis with application to neural data

arXiv:2509.18469v1 Announce Type: cross Abstract: Dimensionality reduction is critical across various domains of science including neuroscience. Probabilistic Principal Component Analysis (PPCA) is a prominent dimensionality reduction method that provides a probabilistic approach unlike the deterministic approach of PCA and serves…
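The abstract is truncated here, but the classical PPCA model the paper builds on admits a closed-form maximum-likelihood fit (Tipping & Bishop): the loadings come from the top eigenvectors of the sample covariance, and the noise variance is the mean of the discarded eigenvalues. A minimal numpy sketch of that baseline (the name `ppca_ml` is ours, not the paper's):

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood PPCA (Tipping & Bishop).

    X: (n, d) data matrix; q: latent dimension.
    Returns the loading matrix W (d, q) and the noise variance sigma2.
    """
    n, d = X.shape
    Xc = X - X.mean(axis=0)                      # center the data
    S = Xc.T @ Xc / n                            # sample covariance (d, d)
    evals, evecs = np.linalg.eigh(S)             # eigh returns ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]   # sort descending
    sigma2 = evals[q:].mean()                    # ML noise variance: mean of discarded eigenvalues
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2

# Usage: data near a 1-D subspace of R^3 plus isotropic noise (std 0.1)
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = z @ np.array([[2.0, 1.0, 0.0]]) + 0.1 * rng.normal(size=(500, 3))
W, sigma2 = ppca_ml(X, q=1)
```

The recovered `sigma2` should sit near the true noise variance 0.01, and `W` should align with the generating direction.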

Enhanced Survival Trees

arXiv:2509.18494v1 Announce Type: cross Abstract: We introduce a new survival tree method for censored failure time data that incorporates three key advancements over traditional approaches. First, we develop a more computationally efficient splitting procedure that effectively mitigates the end-cut preference…
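As context for the splitting step the abstract refers to: classical survival trees (e.g. LeBlanc–Crowley) grow the tree by choosing, at each node, the split that maximizes a two-sample log-rank statistic between the candidate children. A self-contained numpy sketch of that statistic for right-censored data (this is the traditional criterion, not the paper's new, more efficient procedure):

```python
import numpy as np

def logrank_stat(time, event, group):
    """Two-sample log-rank statistic for right-censored data.

    time: event/censoring times; event: 1 = observed, 0 = censored;
    group: 0/1 candidate split assignment. Returns (O1 - E1)^2 / V,
    approximately chi^2(1) under the null of equal hazards.
    """
    times = np.unique(time[event == 1])          # distinct observed event times
    O1 = E1 = V = 0.0
    for t in times:
        at_risk = time >= t
        n = at_risk.sum()                        # total at risk just before t
        n1 = (at_risk & (group == 1)).sum()      # at risk in group 1
        d = ((time == t) & (event == 1)).sum()   # deaths at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        O1 += d1                                 # observed deaths in group 1
        E1 += d * n1 / n                         # expected under equal hazards
        if n > 1:                                # hypergeometric variance term
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (O1 - E1) ** 2 / V

# Usage: group 1 has systematically longer survival (no censoring here)
rng = np.random.default_rng(3)
time = np.concatenate([rng.exponential(1.0, 100), rng.exponential(3.0, 100)])
event = np.ones(200, dtype=int)
group = np.repeat([0, 1], 100)
stat = logrank_stat(time, event, group)
```

A good split drives this statistic far above the chi-square(1) critical value; a tree builder would scan candidate splits and keep the maximizer.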

Sum-of-norms regularized Nonnegative Matrix Factorization

arXiv:2407.00706v2 Announce Type: replace-cross Abstract: When applying nonnegative matrix factorization (NMF), the rank parameter is generally unknown. This rank, called the nonnegative rank, is usually estimated heuristically since computing its exact value is NP-hard. In this work, we propose an…
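As background for the rank question the abstract raises: in plain NMF the rank r must be supplied up front, and the classic Lee–Seung multiplicative updates then fit the factors for that fixed r. A minimal sketch of that baseline, which the proposed sum-of-norms regularization aims to free from rank guessing (`nmf_mu` is an illustrative name, not from the paper):

```python
import numpy as np

def nmf_mu(X, r, iters=500, eps=1e-9, seed=0):
    """Plain NMF X ~= W @ H via Lee-Seung multiplicative updates.

    r is the rank parameter the abstract says is generally unknown;
    this baseline simply takes it as given.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # multiplicative update keeps H >= 0
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # and likewise W >= 0
    return W, H

# Usage: factor a nonnegative matrix of true nonnegative rank 2
rng = np.random.default_rng(1)
X = rng.random((20, 2)) @ rng.random((2, 15))
W, H = nmf_mu(X, r=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

When r matches the true nonnegative rank, the relative error drops close to zero; misspecifying r is exactly the failure mode rank-estimation methods target.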

Hyperbolic Coarse-to-Fine Few-Shot Class-Incremental Learning

arXiv:2509.18504v1 Announce Type: cross Abstract: In the field of machine learning, hyperbolic space demonstrates superior representation capabilities for hierarchical data compared to conventional Euclidean space. This work focuses on the Coarse-To-Fine Few-Shot Class-Incremental Learning (C2FSCIL) task. Our study follows the…
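The representational advantage claimed for hyperbolic space can be seen directly from the Poincaré-ball distance: points near the boundary are very far apart geodesically even when Euclidean-close, which mirrors the exponential branching of hierarchies. A small numpy illustration of that distance (background only, not the paper's C2FSCIL method):

```python
import numpy as np

def poincare_dist(x, y):
    """Geodesic distance in the Poincare ball model of hyperbolic space:
    d(x, y) = arcosh(1 + 2|x-y|^2 / ((1-|x|^2)(1-|y|^2)))."""
    sq = np.sum((x - y) ** 2)
    denom = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2))
    return np.arccosh(1 + 2 * sq / denom)

# Two "sibling leaves" near the boundary of the 2-D ball: close in
# Euclidean terms, far apart hyperbolically.
a = np.array([0.95, 0.0])
b = np.array([0.95 * np.cos(0.2), 0.95 * np.sin(0.2)])
d_hyp = poincare_dist(a, b)
d_euc = np.linalg.norm(a - b)
```

Here `d_hyp` exceeds `d_euc` several-fold; embeddings exploit this by placing a hierarchy's root near the origin and its leaves near the boundary.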

Diagonal Linear Networks and the Lasso Regularization Path

arXiv:2509.18766v1 Announce Type: cross Abstract: Diagonal linear networks are neural networks with linear activation and diagonal weight matrices. Their theoretical interest is that their implicit regularization can be rigorously analyzed: from a small initialization, the training of diagonal linear networks…
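The setup the abstract describes is easy to reproduce: parameterize the regression vector as beta = u*u - v*v (a diagonal linear network), run plain gradient descent from a small initialization, and the recovered interpolator is biased toward sparse, lasso-like solutions. A hedged sketch of that experiment (initialization, step size, and problem sizes are our illustrative choices):

```python
import numpy as np

def train_diag_net(X, y, alpha=1e-3, lr=0.01, steps=20000):
    """Diagonal linear network with effective weights beta = u*u - v*v,
    trained by gradient descent from the small initialization u = v = alpha.
    Small alpha is what produces the sparsity-inducing implicit bias."""
    n, d = X.shape
    u = np.full(d, alpha)
    v = np.full(d, alpha)
    for _ in range(steps):
        g = X.T @ (X @ (u * u - v * v) - y) / n    # gradient wrt beta
        u, v = u - lr * 2 * g * u, v + lr * 2 * g * v  # chain rule through beta
    return u * u - v * v

# Usage: sparse ground truth in an overparameterized, noiseless problem
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 100))                     # n = 40 samples, d = 100
beta_true = np.zeros(100)
beta_true[:3] = [1.0, -2.0, 1.5]
y = X @ beta_true
beta = train_diag_net(X, y)
```

Despite there being infinitely many interpolators, the one gradient descent finds from small initialization concentrates on the three true coordinates, consistent with the lasso connection the paper analyzes.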

Manifold learning in metric spaces

arXiv:2503.16187v3 Announce Type: replace-cross Abstract: Laplacian-based methods are popular for the dimensionality reduction of data lying in $\mathbb{R}^N$. Several theoretical results for these algorithms depend on the fact that the Euclidean distance locally approximates the geodesic distance on the underlying…
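A Laplacian-based embedding only ever touches pairwise distances, which is what lets it extend beyond Euclidean data to general metric spaces. A minimal Laplacian-eigenmaps sketch driven entirely by a distance matrix, using geodesic (arc-length) distance on a circle as a toy metric space (an illustration of the method family, not this paper's analysis):

```python
import numpy as np

def laplacian_eigenmaps(D, k=10, dim=2):
    """Laplacian eigenmaps from a pairwise distance matrix D.

    Only distances are used, so any metric can be plugged in.
    """
    n = D.shape[0]
    W = np.zeros((n, n))
    nn = np.argsort(D, axis=1)[:, 1:k + 1]    # k nearest neighbours (skip self)
    for i in range(n):
        W[i, nn[i]] = 1.0
    W = np.maximum(W, W.T)                    # symmetrize the kNN graph
    L = np.diag(W.sum(axis=1)) - W            # unnormalized graph Laplacian
    evals, evecs = np.linalg.eigh(L)
    return evecs[:, 1:dim + 1]                # skip the constant eigenvector

# Usage: 200 points on a circle with the intrinsic arc-length metric
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
D = np.abs(t[:, None] - t[None, :])
D = np.minimum(D, 2 * np.pi - D)              # geodesic distance on the circle
Y = laplacian_eigenmaps(D, k=6, dim=2)
```

The two nontrivial eigenvectors of the ring graph are discrete sines and cosines, so the embedding reproduces the circle up to rotation and scale.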

Central Limit Theorems for Asynchronous Averaged Q-Learning

arXiv:2509.18964v1 Announce Type: cross Abstract: This paper establishes central limit theorems for Polyak-Ruppert averaged Q-learning under asynchronous updates. We present a non-asymptotic central limit theorem, where the convergence rate in Wasserstein distance explicitly reflects the dependence on the number of…
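The object of study is straightforward to simulate: asynchronous tabular Q-learning updates one state-action pair per step along a trajectory, and Polyak-Ruppert averaging returns the running mean of the iterates rather than the last one. A toy-MDP sketch using tail averaging over the second half of the run (the MDP, behaviour policy, and step-size schedule are our illustrative choices, not the paper's):

```python
import numpy as np

def averaged_q_learning(P, R, gamma=0.6, steps=50000, seed=0):
    """Asynchronous Q-learning with Polyak-Ruppert (tail) averaging.

    P: (S, A, S) transition kernel, R: (S, A) rewards.
    One (s, a) pair is updated per step along the sampled trajectory.
    Returns the last iterate Q and the tail-averaged iterate Q_bar.
    """
    rng = np.random.default_rng(seed)
    S, A = R.shape
    Q = np.zeros((S, A))
    Q_bar = np.zeros((S, A))
    s, count = 0, 0
    for t in range(1, steps + 1):
        a = rng.integers(A)                      # uniform behaviour policy
        s2 = rng.choice(S, p=P[s, a])            # sample next state
        target = R[s, a] + gamma * Q[s2].max()
        Q[s, a] += (target - Q[s, a]) / t ** 0.7 # slowly decaying step size
        if t > steps // 2:                       # average the tail iterates
            count += 1
            Q_bar += (Q - Q_bar) / count
        s = s2
    return Q, Q_bar

# Usage: 2-state, 2-action MDP; ground truth Q* by value iteration
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.5, 0.5], [0.8, 0.2]]])        # P[s, a] = next-state dist.
R = np.array([[1.0, 0.0], [0.0, 1.0]])
Q, Q_bar = averaged_q_learning(P, R)
Q_star = np.zeros((2, 2))
for _ in range(200):
    Q_star = R + 0.6 * (P @ Q_star.max(axis=1))  # Bellman optimality update
```

The averaged iterate `Q_bar` settles near `Q_star`; the paper's contribution is quantifying the fluctuation of such averages via central limit theorems.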

Representative Action Selection for Large Action Space Meta-Bandits

arXiv:2505.18269v3 Announce Type: replace-cross Abstract: We study the problem of selecting a subset from a large action space shared by a family of bandits, with the goal of achieving performance nearly matching that of using the full action space. We…